The following is most certainly wrong; wrong, but hopefully useful. It is a strong opinion, weakly-held… albeit stronger by the day.
Change on the Horizon
Future of Work pundits have waxed a-plenty on how AI accelerates automation’s effects on jobs.
However, beyond the usual “flatter org charts!”, “gig economy!”, and “up-/re-skilled employees!” talking points, few address the grand implications of this impending tsunami:
Company topologies will change, change hard, and change fast.
While Artificial Intelligence (“AI”) presents sea change to the human condition, I’m going to limit my discussion to Generative AI (“GenAI”) as it has reached sufficiently wide distribution (100MM+ people). Meanwhile, artificial general intelligence (“AGI”) is either here a week from now or never, depending on who you ask. As for plain old AI, it has been here since the 1950s, but not in forms that laymen would classify as AI.
Before getting into why organizations need to change, four points on where we are today.
1. Soaring Headcounts, Swelling Egos
Prior to the industrial revolution, companies were relatively small, often consisting of the owner(s), their family members, and perhaps a few apprentices or journeymen (m/f/d). Even “large” corporations had far fewer people working in them than they do today.
However, large-scale industrialization and mass production demanded increasingly large teams with new people-management technologies (e.g., org structures and job titles) to create improved corporate efficiencies and exceptional synergies.
On a human-history scale, this is a new thing. Enter: corporate culture.
When confronted with these pseudo-social structures, our multi-millennia-old brain hardware created pseudo-social work personas to cope. Dunbar’s number suggests that people can only form so many meaningful relationships; with modern corporations routinely exceeding 100-250 individuals, each rocking eons-old grey matter that doesn’t meaningfully scale, should this behavior be surprising?
And, more importantly, is it preferable?
I think not: given the option, most people prefer smaller teams over larger ones, and like to work out roles and responsibilities amongst themselves rather than have fixed job titles set by People Operations or Management.
2. Declining Facility ROI
In the wake of the COVID-19 pandemic, distributed and hybrid work challenged centuries-old operating models, especially in companies that perform knowledge work.
While return-to-the-office is unavoidable for some (e.g., baseball teams), organizations that deal less in atoms and more in information and information services are finding it incrementally more costly to physically consolidate employees compared to their peers, to say nothing of the costs borne by employees in commutes and in compensating for rigid scheduling.
Necessary for baseball teams—maybe not for support desks.
3. Emphases on Cognitive Overhead
The folks behind Team Topologies have a great way of articulating how teams should be organized against the total available remit: organize work by cognitive load.
If form follows function (and it does if you paid any attention in Biology 101), quantifying the cognitive load required to own a function should determine the size, structure, composition, and relationships of its teams and what its groups of teams ought to be. As the human brain has limited reserves of time, attention, and willpower, there are hard limits to what an individual can do. Companies should embrace this reality; they ignore it at their peril.
A note on active cognitive ability by way of analogy: “Use it or lose it.”
Polyglots who speak more than three languages often find that lesser-used languages and dialects recede into the mind unless actively used. Those languages can be reactivated quickly (within a week or a few), but often at the cost of proficiency in another non-native language.
I find this limitation holds for most skills: you can only keep so many in active, working memory, and only so many in peak condition at any one time.
Organizing knowledge work around the brain’s limitations, capabilities, and interests seems like a sane, winning plan.
4. I’m The Captain Now
High-performers generally seek agency in and ownership of their work. Management’s challenge here is more about preventing burnout and disengagement than about serial underperformance. Tuning ownership and responsibilities for such talent must be choreographed against all their neighboring, also-high-performing colleagues, who often take on additional responsibility whether they were asked to or not.
This is complicated and delicate.
Management responsible for handling All The Things not only needs to divvy out the Cartesian expansion of People × Work, but also needs to keep everyone abreast of changes about and between it all while declaring who-owns-what and the commensurate leadership structures.
- The more delineations, the greater the chance of turf wars.
- The larger the teams, the more expensive the communications, and the more guardrails needed around zones of responsibility.
- Both of these complexities (politics and communications) compete with mindshare (and hours) required for work.
Ceteris paribus, a collection of smaller teams will outperform a single larger one.
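The arithmetic behind that claim is old hat: pairwise communication channels grow quadratically with headcount. A back-of-the-envelope sketch (my own illustration in Python; the n(n−1)/2 channel count is standard Brooks-style accounting, and the team sizes are invented for the example):

```python
def comm_paths(n: int) -> int:
    """Number of pairwise communication channels in a group of n people."""
    return n * (n - 1) // 2

# One team of 12 people...
big_team = comm_paths(12)  # 66 channels

# ...versus three teams of 4, plus one liaison link between each pair of teams.
small_teams = 3 * comm_paths(4) + comm_paths(3)  # 18 + 3 = 21 channels

print(big_team, small_teams)  # 66 21
```

Same twelve people, roughly a third of the coordination surface. The exact numbers matter less than the shape of the curve.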
Enter Generative AI
Machines will always (eventually) outperform our natural abilities. Just ask John Henry.
What makes GenAI transformative is how it expands both the range and depth of an individual’s ability to perform work—especially information work. With regular use, it enables inorganic cognitive expansion within those who can wield it.
…and not everybody will.
A quick note before moving on: this post would be tagged #futureofwork if I bothered to “tag” content on this Web 1.0 site. But GenAI (and AI, generally) will have profound knock-on effects… eventually. We’ll put aside The Singularity, Universal Basic Income (UBI), Nanotechnology, Our Evil Robotic Overlords, and other neighboring topics for this discussion, for me (or my digital eidolon) to revisit at a later date.
The Rise of the Übergeneralist
GenAI’s magic trick is two-fold:
- It can make hard things easier to understand, and
- It can create things that were previously hard to do without specialization and/or time.
This one-two punch empowers GenAI-enabled design thinkers to achieve a “good enough” outcome through extremely fast feedback loops. The ideate ➔ execute ➔ critique ➔ repeat cycle accelerates significantly as the process now requires less specialization, and therefore less collaboration.
Through consistent practice and learning to confidently discern whether generative work is accurate and precise (or merely an exceptionally convincing hallucination), frequent GenAI users become übergeneralists.
They will know what good looks like because of their breadth of experience. Their value will be in their ability to edit and curate.
And they become super valuable.
Specialists with Generalist Tendencies
Consider a Public Relations Principal whose job includes:
- Developing* and implementing* PR strategies.
- Cultivating media relationships.
- Creating**** compelling written content.
- Managing** crisis situations effectively.
- Engaging* with key stakeholders.
- Maintaining** brand consistency.
- Planning** and coordinating* PR events.
- Measuring*** and reporting*** on PR metrics.
- Leading and mentoring PR team.
See the ChatGPT 3.5 prompts that generated the above list.
- "create a job description for a PR principle" (ed: note that I misspelled principal)
- "please take the responsibilities and make them into short, bullet points. each starting with a verb"
- "now make these gerunds"
The asterisks (*) indicate which parts of their job (and just how much) cheaply available, consumer-grade GenAI can amplify today, in place of leaning on a larger team for execution.
Is the list perfect? No… but it is “good enough”.
Though I have worked with PR teams in the past and know what they do, there’s non-trivial cognitive strain for me to write it on-the-fly. Should I rack my brain for 10 minutes for this non-critical activity? Should I hire a ghostwriter? Or, should I use a chatbot to whip something serviceable up in half-a-minute and use my reclaimed nine-and-a-half minutes to make this paragraph more compelling?
Obviously I chose the latter—it would be difficult to convince me otherwise.
The Fall of Mediocrity
I’m reminded of a scene from Big (1988) in which Tom Hanks is a bit more suited to data entry than his colleague Jon Lovitz. Witness his immediate reaction.
“Listen, what are you trying to do, get us all fired? … Gotta slow down. Pace yourself. Slowly. Slowly.”
Übergeneralists may face difficulties when collaborating with individuals who cannot match their versatility and speed. The reverse is equally true. Unless those individuals possess exceptional talent or specialization beyond what GenAI can provide, their contributions will soon be absorbed into an automated system, GenAI or otherwise, rendering their continued employment redundant.
If we step back into the shoes of our Public Relations Principal, would she opt to maintain the larger team size she needed before the advent of GenAI? Would she attempt to mediate harmony between her übergeneralists and those who resist its adoption?
Would she consider what her work life might look like if she went solo?
Do her clients need her in the same way they once did?
Thinking back to team topologies: with GenAI, if each team member has an order of magnitude additional cognitive capacity, and if cognitive limitations determine team capabilities and therefore structure, are today’s organizational structures best suited for maximal return on investment (ROI)?
Crudely, managers are evaluated by:
- the net output of their team(s), and
- the net output of teams adjacent to their team(s).
Salary load is often the largest expense in producing knowledge work. Good managers will almost always opt for fewer people when optimizing for net output once communications overhead is factored in.
With GenAI, fewer people can create equivalent—if not more—output.
Recently I was speaking with some technology leaders under the Chatham House Rule about how our hiring practices have shifted for PDE (product, design, engineering) teams.
Many of us noted that as recently as a year ago, it was prudent to over-index on engineering recruitment because without engineers, it’s difficult to test, build, and iterate on a product. Today, with GenAI, design and product contributors can do so much more on their own that over-indexing on engineer hiring is no longer necessary.
Not to say that engineers are unnecessary: GenAI has already shown it can greatly expand an engineer’s total output while also making their work more legible to the rest of the team.
While managers should always be tinkering with their hiring playbooks and methods, GenAI is already shaking things up.
Executive leadership has always had a wide aperture on the business, often delegating execution of strategy to their lieutenants. But as the aperture widens downstream, there will be increasing demand for holistic, systems-level thinkers.
These übergeneralists might have a specialty or few that gives them an edge within their functions (and is why they were selected for their roles), but their ability to flex into lesser-used skills, and to use GenAI to manage the less-critical parts of their functions into “good enough” states, will enable greater speed at lower cost, making them excellent contributors to new, GenAI-enabled corporate topologies.
They won’t need large teams. They won’t want large teams.
Organizations, like they always do, will reorganize themselves around that which creates outsized value.
For companies whose primary value drivers include data, information, and knowledge work, the changes will be the most profound.
For information systems and commodities, it’s forever a race to zero. Those who can do it more cheaply and more quickly while maintaining their USP (unique selling proposition) will out-earn their competitors.
Over the last twenty years, the companies who embarked on digital transformation tended to win. Likewise, tomorrow’s winners will be the ones who transform again when they embrace the tools that allow for outsized cognitive expansion within their employee base. This almost certainly means that winning companies will be the ones with a smaller headcount.
Or an unencumbered upstart, armed with AI, will eat their lunch.
Back to Being Human
Merriam-Webster dubbed ‘authenticity’ the word of the year for 2023. I see it as reflective of us seeking genuine human connection in a digitally saturated age.
In our working lives, smaller teams mirror the intimate gatherings preferred by our millennia-old hardware. Companies that can bring back the campfire and ditch the corporate-speak will find their high-performers thriving in a Dunbar-satisfying local maximum.
As consumers, commodities will be cheaper, which means marginally more ethical commodities will be more affordable. Our purchasing behaviors will value novelty, uniqueness, and specific items because we’ll have both the mental bandwidth to delve deeper into them and greater disposable capital to support our unique interests.
Mass customization and creator economies aren’t going away. If anything, they will expand. Companies will need to meet us where we are.
Final Thoughts (as of late 2023)
While scientists have been working on Artificial Intelligence for decades, we’re now seeing the beginnings of the technology crossing the chasm between the mavens and the mainstream. The availability and engagement of übergeneralists will reshape corporate structures, headcounts, and what’s possible for companies to do. And, as a society, we’ll get more for less.
But it’s still early days.
The GenAI models available today represent a step-function leap, but my hunch is that we’ll be here for a few years enjoying incremental improvements before the next major breakthrough in AI research appears. (Ultimately, all of these models are fancy applied statistics and expansive machine-learning techniques that create a representation of human intelligence, but not one that a layman would consider “actual” intelligence after a little interrogation.) I think we have a few more step-functions to go before (and if) a true AGI shows up (consciousness optional) that would fool the layman every time.
Until then, currently available GenAI is sufficiently powerful to disrupt the last century’s worth of corporate structures in favor of something (ironically) smaller and more human. Corporations that once prioritized growth above all else will find existence more profitable and defensible at new local maxima.
Company topologies will change, change hard, and change fast.
Message in a bottle…
I’m optimistic about what this technology enables and what that means about humanity’s future.
However, the above discussion does not touch on the ethics and risks associated with widely available GenAI, AI, AGI, and the rest of it all… and I’m not going to even attempt to do it justice in a footnote.
What gives me comfort is that despite the challenges—and they will be significant—there is a temporal gap between today and when we’ll be confronting real, existential challenges associated with AI. We have ample time to improve our laws, governance, and collective understanding.
While we won’t be entirely ready, we’ll be ready-enough. In many ways, we humans are far more adaptable than we give ourselves credit for… but usually only after we’re pushed.
In the meantime, I’m far more worried about climate change.