Intelligence Amplified
This piece first appeared on The Hyder Ground Substack. Join us there!
What happens when a billion people gain superpowers?
We’ve been asking the wrong question about AI.
For two years, we’ve been obsessed with artificial intelligence: what it can do, what it will become, whether it will replace us.
We’ve treated AI as an autonomous force, something separate from ourselves, watching it with a mixture of awe and dread. But this framing misses something fundamental.
What if we flipped the acronym? What if instead of AI (artificial intelligence) we started thinking about IA: intelligence amplified?
The shift matters more than it seems.
When we say “artificial intelligence,” we’re centering the technology. We’re making it the protagonist of the story. The question becomes: what will AI do to us? That framing casts us in a passive role. We become recipients, observers, potential victims of a force beyond our control.
But “intelligence amplified” centers us. It makes humans the protagonist again. The question shifts: what will we do with amplified intelligence? How will we use this tool to think better, create more, solve harder problems?
This isn’t semantic gymnastics. Language shapes reality. The words we use determine the fears we feel and the futures we can imagine.
The evidence is already here.
In a landmark study at Boston Consulting Group, 758 consultants (some of the world’s most elite knowledge workers) used GPT-4 on realistic consulting tasks. The results were staggering: they completed 12.2% more tasks, finished work 25.1% faster, and produced work rated 40% higher in quality.
But here’s what’s remarkable about those numbers: they’re not about AI getting smarter. They’re about humans getting more capable.
The consultants who scored worst at baseline saw the biggest gains: a 43% improvement in performance. The top performers still improved, just less dramatically. The technology didn’t replace expertise. It amplified what was already there, lifting everyone’s ceiling.
This is intelligence amplification in action. The same human, with the same knowledge and judgment, suddenly able to do more. Think faster. Explore wider. Create better.
And it’s not just consultants anymore.
When a billion people suddenly have access to tools that amplify their intelligence, everything changes. Every student writing an essay. Every lawyer researching case law. Every designer brainstorming concepts. Every entrepreneur building a business plan.
We’ve moved from intelligence as a scarce resource to intelligence as abundant capacity. From a world where cognitive capability was limited by individual brainpower to one where it’s amplified and distributed at scale.
The institutions we built, the hierarchies we created, the gatekeepers we installed…they were all designed for a world of scarce intelligence. Now we’re entering an era where intelligence amplification is cheap, accessible, and powerful enough to reshape how work gets done.
The question isn’t whether this transformation is happening. It’s whether we approach it with fear or agency.
Consider what actually happened in that study.
The consultants weren’t automated away. They were augmented. They still decided strategy. They still applied judgment. They still used their expertise to evaluate options and make decisions. But they could do it faster, with more iterations, considering more possibilities.
The researchers noticed two patterns emerging: “Centaurs” who clearly divided tasks between human and AI, and “Cyborgs” who blended the two in constant interaction. Both approaches worked. Both kept the human firmly in control, using AI as an amplifier for different aspects of their thinking.
The intelligence was always human. The amplification just made it bigger.
This is the template for what’s coming. Not replacement. Amplification. Not humans versus machines. Humans with machines, figuring out new ways to think and create and solve problems.
This reframing dissolves the fear.
When we talk about artificial intelligence, we conjure images of machines that think, reason, and make decisions independently. The narrative arc points toward replacement: first they help us, then they don’t need us.
But intelligence amplified tells a different story. It’s a tool story. And humans have always been tool makers and users.
We didn’t fear calculators once we understood they amplified our math ability rather than replaced our mathematical thinking. We don’t fear search engines now that we know they amplify our research capability without replacing our curiosity or judgment.
Large language models amplify our ability to write, brainstorm, analyze, and create. They handle the grunt work of pattern matching across vast datasets while we apply taste, strategy, and purpose.
The value isn’t in the AI. It’s in the amplification of human intent.
But amplification at scale creates new challenges.
In the BCG study, when consultants were given a task specifically designed to be outside AI’s capabilities, those using AI were 19 percentage points less likely to reach the correct answer. They trusted the amplification in situations where the amplifier was broken.
Now multiply that across a billion users. Across every classroom and courtroom and boardroom. The opportunities are enormous. So are the risks.
When anyone can amplify their intelligence, how do we preserve what’s valuable about expertise? When anyone can generate convincing content, how do we maintain trust? When intelligence is no longer scarce, what becomes valuable instead?
These aren’t questions about the technology. They’re questions about us. About the society we want to build when the constraints that shaped our institutions suddenly lift.
And they’re urgent questions, because the transformation is already underway. We’re trying to figure out the rules while the game is being played.
The human questions return to the center.
With the IA frame, the conversation shifts from “what will AI do?” to “what do we wish to amplify?”
Do we want to amplify creativity or slop? Critical thinking or pattern matching? Individual judgment or collective capability?
These are human questions with human answers. They assume agency. They require us to make choices about what matters, what’s worth amplifying, what we want to preserve as purely human.
And they reveal that the technology isn’t the hard part. The hard part is deciding who we want to become when our intelligence is amplified, when the constraints we’ve always worked within suddenly lift, when a billion people gain capabilities that were once available to only a few.
This is why “cybernetic teammate” resonates.
As Ethan Mollick notes in his research, AI has evolved from tool to co-intelligence to something more integrated: a cybernetic teammate. Not replacing the human, but working alongside, amplifying capabilities in real time.
A teammate doesn’t threaten your position. A teammate makes you better. Together, you accomplish what neither could alone.
That’s intelligence amplification. That’s IA.
And when you give a billion people access to a cybernetic teammate, you’ve fundamentally changed the landscape of human capability.
The transformation is already happening.
Whether we call it AI or IA, the technology is here. But the frame we choose determines how we engage with it.
If we treat it as artificial intelligence (something foreign, autonomous, potentially threatening) we approach it defensively. We focus on protecting ourselves from it. We worry about what it might take from us. We see a billion people with AI access as a threat to be managed.
If we treat it as intelligence amplified (an extension of our own capabilities) we approach it as builders and users. We focus on what we can create with it. We think about what becomes possible when limitations lift. We see a billion people with amplified intelligence as an opportunity to unlock human potential at unprecedented scale.
The technology doesn’t change. But everything else does.
We’ve always been cyborgs, actually.
The smartphone in your pocket already amplifies your memory, your navigation, your connection to others. Glasses amplify vision. Hearing aids amplify sound. We’ve been using tools to transcend biological limits for millennia.
Language itself is an intelligence amplifier: a technology that lets us think more complex thoughts than any individual brain could hold, by encoding ideas in symbols we can share and build upon.
Every technology that extends human capability is, in some sense, intelligence amplified. LLMs are just the latest iteration: the first tools that amplify the messy, creative, linguistic kind of thinking we previously thought only humans could do.
And now, for the first time, this kind of amplification is cheap enough to be everywhere, easy enough to be accessible, and powerful enough to matter.
The future looks different through this lens.
With AI framing, the future is about machines getting smarter until they don’t need us. It’s a zero-sum story where human value declines as machine capability rises. The question is how long until we’re obsolete.
With IA framing, the future is about humans getting more capable, solving harder problems, creating things we couldn’t before. It’s a positive-sum story where amplification raises the ceiling for everyone. The question is what we’ll build with our expanded capacity.
Which future would you rather work toward?
The technology will continue improving either way. The challenges will arrive regardless. Schools will wrestle with students using AI. Courts will grapple with AI-generated evidence. Companies will navigate the chaos of every employee suddenly working with amplified intelligence.
But the story we tell ourselves about it (whether we’re spectators to AI’s rise or active users of amplified intelligence) determines whether we approach these challenges with fear or agency. Whether we see problems to avoid or opportunities to seize.
We get to choose.
The acronym is trivial. The shift in perspective is everything.
Stop thinking about artificial intelligence as something separate from us. Start thinking about intelligence amplified: yours, mine, ours. A billion people figuring out what they can do when their cognitive capabilities suddenly expand.
The tools don’t replace taste, discernment, or wisdom. They amplify whatever we bring to them. And that means the most important work isn’t building better AI or managing its risks (though both matter).
The most important work is becoming better humans worth amplifying. Developing the judgment to know when to use these tools and when to set them aside. Building the institutions that can harness amplified intelligence while preserving what makes us human. Figuring out what becomes valuable when intelligence itself is no longer scarce.
The technology isn’t the story. We are. A billion of us, suddenly equipped with capabilities we’re still learning to use, trying to figure out what we can build together.
We always have been the story.
We just needed the right frame to see it.
P.S. - If you’d like me to come deliver this message to your organization, I do have a few slots left for 2026. You’d be in solid company with Yale, Microsoft, Adobe, Chase, Disney, and more. You can check availability at www.ShamaHyder.com.
P.P.S. - Thanks for being a reader. It means so much to me. Join us on Substack!