By Parmy Olson

In Supremacy: AI, ChatGPT, and the Race That Will Change the World, journalist Parmy Olson tells the story of a technological arms race unlike any in history — not fought over territory or weapons, but over intelligence itself. It’s the race to build Artificial General Intelligence (AGI), machines that can think, reason, and create at or beyond the capacity of the human mind.

Olson frames this contest as one of immense consequence: the rivalry between the world’s leading AI labs — OpenAI and DeepMind — and the billion-dollar corporations that fund them. The story is a blend of corporate ambition, scientific idealism, and existential risk. What began as a quest to improve humanity is fast becoming a competition to control it.

The book explores three intertwined questions:

No. 1 — Who will achieve AI supremacy first?

No. 2 — What will that power mean for society?

No. 3 — Can humanity remain in control once machines begin thinking for themselves?

Chapter No. 1 — The Dreamers and the Disruptors

The roots of this race stretch back to two visionaries with radically different backgrounds and philosophies.

Demis Hassabis, founder of DeepMind, was a child prodigy in chess and neuroscience. His mission was noble and scientific: to “solve intelligence” and, by doing so, solve everything else — from disease to climate change. He imagined AI not as a commercial tool, but as an intellectual and moral quest.

Sam Altman, on the other hand, was a Silicon Valley entrepreneur and investor, shaped by the culture of speed, iteration, and scale. As the CEO of OpenAI, Altman’s vision was ambitious but pragmatic: to make sure AI didn’t just belong to one company or country. “Artificial intelligence should benefit all of humanity,” he declared.

Both men saw AI as destiny — but their roads diverged sharply.

DeepMind leaned toward caution, secrecy, and scientific rigor. OpenAI, under Altman, leaned toward openness, speed, and commercial application. One moved quietly in the lab; the other launched ChatGPT and changed the world overnight.

Chapter No. 2 — From Idealism to Capitalism

When DeepMind was founded in 2010, its stated mission was to use artificial intelligence for good. In 2014, Google bought the company for more than $500 million, promising it autonomy and an “ethics board” to protect against misuse. The partnership gave DeepMind access to the massive computing power and data resources it needed to push the limits of machine learning.

Meanwhile, in 2015, OpenAI was founded as a nonprofit research lab by Sam Altman, Elon Musk, and several other tech luminaries. Its stated goal was “to ensure artificial general intelligence benefits all of humanity.” It positioned itself as the moral counterweight to Google’s power.

But noble intentions collided with financial reality. Training large-scale AI models costs staggering amounts of money — hundreds of millions of dollars in computing power alone. As these labs raced to build bigger and smarter systems, they needed funding. OpenAI drifted from its nonprofit origins, eventually forming a "capped-profit" subsidiary and taking investments from Microsoft that would grow to some $13 billion.

DeepMind, too, found itself increasingly under corporate pressure from Google. As one former employee told Olson, “Science took a back seat to scale.” The shift from research to product became inevitable.

The irony, Olson observes, is that both companies were founded to keep AI development safe — and both were forced to compromise those values to keep up with each other.

Chapter No. 3 — The Birth of ChatGPT and the Spark Heard Round the World

In late 2022, OpenAI released ChatGPT, a conversational AI model built on GPT-3.5 and, later, GPT-4. It was a watershed moment.

The product was deceptively simple — a chat box that could answer questions, write essays, generate code, and simulate conversation with uncanny fluency. Within two months, it had attracted an estimated 100 million users, making it the fastest-growing consumer app in history at the time.

The public was stunned. For the first time, ordinary people could feel the power of AI. It wasn’t hidden in research papers or obscure systems; it was right in front of them — talking, reasoning, creating.

But behind the scenes, the launch created chaos. Google and DeepMind were blindsided. They had held back similar technology for fear of ethical backlash. Suddenly, OpenAI had set the agenda for the entire industry. The world’s largest tech companies were forced into reactive mode. Within months, Google rushed out Bard (later Gemini), while Meta, Anthropic, and others scrambled to release their own chatbots.

The “AI gold rush” had begun.

Chapter No. 4 — Inside the Rivalry: DeepMind vs. OpenAI

Olson paints the competition between DeepMind and OpenAI as a clash of cultures as much as of code.

At DeepMind, engineers operated more like scientists, chasing theoretical breakthroughs. The company’s crowning achievement was AlphaGo, the AI that defeated world champion Go player Lee Sedol in 2016 — a symbolic moment in AI history. DeepMind’s focus was on building systems that could learn — generalizable intelligence, not one-off tricks.

At OpenAI, the culture was pure Silicon Valley hustle. Altman encouraged risk-taking, rapid iteration, and public release. While DeepMind prioritized academic publications, OpenAI prioritized public impact. Its breakthrough came not from a lab paper but from deployment: ChatGPT was messy and controversial, but revolutionary.

Olson notes that this rivalry accelerated progress but also deepened ethical tension. In the rush to stay ahead, companies began pushing boundaries — releasing increasingly powerful models with limited safety testing. The race for “AI supremacy” started to look less like a marathon and more like a drag race without brakes.

Chapter No. 5 — The Moral Dilemma of Acceleration

The question haunting every chapter of Olson’s book is this: Are we moving too fast?

AI’s rapid evolution has outpaced regulation, ethics, and even comprehension. Systems like ChatGPT, Claude, and Gemini are astonishingly capable — but also prone to misinformation, bias, and hallucination. More troublingly, no one fully understands how these large language models think.

The book chronicles the internal debates at OpenAI and DeepMind over safety, transparency, and control. Many researchers worry that without guardrails, the pursuit of AGI could unleash systems that act in unpredictable — even dangerous — ways.

But corporate incentives push in the opposite direction. Once AI became profitable, slowing down was no longer an option. “Safety” became a marketing term, not a mandate.

Olson calls this the paradox of progress: the more powerful AI becomes, the harder it is to pause its development. Everyone fears being left behind — so everyone keeps accelerating.

Chapter No. 6 — The Global Chessboard

AI supremacy isn’t just a corporate rivalry — it’s geopolitical.

Olson explores how the race for AI dominance has become a matter of national strategy. The United States, China, and Europe are locked in a technological cold war. China’s government has invested heavily in state-backed AI programs, aiming to lead the world by 2030.

Meanwhile, Western nations debate privacy, ethics, and regulation — often slowing themselves down. The question becomes whether democracy can compete with authoritarian speed.

The book argues that the next decade of AI development will determine global power structures for generations. Control over AGI means control over economies, militaries, and information systems — in essence, control over the future.

But Olson also warns that framing AI as an arms race can become a self-fulfilling prophecy. When every actor believes “if we don’t build it, someone else will,” cooperation collapses, and collective safety disappears.

Chapter No. 7 — The Human Toll

For all its talk of progress, Supremacy never loses sight of the human dimension.

Inside these companies, employees wrestle with burnout, ethical conflict, and the moral weight of their work. Some engineers leave, fearing that the tools they’re building will be misused. Others stay, convinced that if “good people” don’t lead the charge, worse actors will.

Outside, society begins to grapple with disruption on an unprecedented scale. AI threatens millions of jobs in fields like customer service, education, media, and law. Deepfakes blur reality. Algorithms amplify misinformation. Students cheat. Workers panic.

And yet, the technology also dazzles. Doctors use AI to detect cancer earlier. Scientists model climate patterns. Artists create new forms of beauty. The tension between awe and anxiety becomes the defining mood of the AI era.

Chapter No. 8 — The Fall of Idealism

As OpenAI and DeepMind evolve, both begin to resemble the very tech giants they once opposed.

At OpenAI, internal conflict emerges between the company’s research arm (focused on safety and ethics) and its leadership (focused on scale and deployment). Sam Altman’s ouster and reinstatement as CEO in 2023 — one of the book’s climactic episodes — becomes a metaphor for the larger struggle between ambition and accountability.

At DeepMind, Google’s corporate structure slowly absorbs the company, rebranding its AI efforts under Google DeepMind. The “ethics board” promised at its acquisition? It quietly fades into irrelevance.

By the end, Olson makes clear that the original mission — to democratize and safeguard AI — has been compromised by profit motives and political pressure. The labs that set out to save humanity are now fighting to dominate it.

Chapter No. 9 — The Ethics of Supremacy

The concept of “supremacy” runs through the book as both theme and warning.

Technological supremacy promises immense power — but it also tempts hubris. The idea that one company, nation, or individual could control intelligence itself raises profound moral questions.

Olson examines the growing divide between AI optimists (who believe intelligent machines will elevate humanity) and AI pessimists (who warn of existential danger). Both camps agree on one thing: the decisions made now will echo for centuries.

She argues that humanity faces a “second nuclear moment.” Just as physicists in the 1940s realized their discoveries could destroy the world, AI scientists today are realizing that their creations could surpass their control.

The difference? This time, there may be no pause button.

Chapter No. 10 — The Road Ahead

In her closing chapters, Olson reflects on what the AI race reveals about human nature itself.

We are driven by curiosity, competition, and fear — the same forces that propelled space exploration and the arms race. But intelligence, she notes, is different. It’s not just a tool; it’s the substrate of power, consciousness, and identity. Whoever “wins” AI supremacy won’t just change the world — they’ll redefine what it means to be human.

Olson doesn’t pretend to have the answers. Instead, she urges humility, transparency, and cooperation. She argues that AI should not belong to a handful of corporations or governments, but to the collective stewardship of humanity.

Her message is clear: the future will be defined not by how fast we build AI, but by how wisely we wield it.

Themes and Takeaways

The Corruption of Idealism

Every technological revolution begins with noble intentions. But as AI’s potential for power and profit becomes clear, ethics bend under pressure.

Acceleration Without Accountability

Once progress reaches a certain velocity, slowing down feels impossible. Every player fears falling behind, even when moving forward feels dangerous.

The Fragility of Ethics in Corporate Systems

Even well-meaning organizations are shaped by their investors, shareholders, and market pressures. The bigger the mission, the greater the temptation to compromise.

The Geopolitics of Intelligence

The AI race isn’t just about innovation; it’s about global power, sovereignty, and survival.

The Human Element

The book reminds us that behind every algorithm are people: engineers, dreamers, and whistleblowers wrestling with impossible questions about purpose and responsibility.

Conclusion: The Currency of Power

Supremacy closes with an image that lingers — the idea that intelligence itself has become the new currency of global supremacy. The race to control it could lead to salvation or self-destruction.

Olson’s narrative reads like a warning disguised as a thriller: if humanity treats AI as a weapon to win rather than a tool to understand, the victory will be pyrrhic.

The challenge, she suggests, is not just building smarter machines — it’s becoming wiser humans. The measure of progress will not be how intelligent our algorithms become, but how conscious we remain of what they reflect: our ambition, our insecurity, and our unrelenting desire to play god.

In the end, the real supremacy at stake isn’t artificial. It’s human.
