I Was Impressed. Then Dependent. Now, I’m Worried.


When powerful AI tools started going mainstream, it felt like a breakthrough. In 2023, we watched in awe as models wrote, reasoned, and solved problems with near-human fluency. By 2024, they weren’t just novelties — they became part of our workflows, our conversations, even our decision-making.

And now, in 2025, the mood is shifting.

What once felt like magic now raises difficult questions. Not just about jobs or misinformation — but about the long-term trajectory of a society that’s rapidly weaving intelligent systems into its core. We’re not just using AI anymore. We’re starting to lean on it.

The Risks

These risks aren’t science fiction anymore. Experts often group the dangers of advanced AI into three broad categories — each posing a unique threat to society’s stability, autonomy, or safety.

Sam Altman, CEO of OpenAI, outlined a set of these risks in a recent interview:

Watch the full interview on YouTube here.

1. The Bad Actor Problem

This is the classic misuse scenario. A hostile actor — state, terrorist group, or lone individual — gets access to superintelligence before the rest of the world can defend itself. AI, used deliberately for harm.

Altman imagines a future where an adversary uses AI to design new bioweapons, cripple the power grid, or loot the global financial system. These aren’t just theoretical risks. Today’s most powerful models are already proving capable in biology, chemistry, and cybersecurity — areas that, in the wrong hands, can be weaponized.

Despite repeated warnings from AI companies, Altman says, “the world is not taking us seriously.” And that’s what makes this category so urgent. It’s not just about what AI can do — it’s about who gets to do it first.

2. Loss of Control

This is the sci-fi scenario: AI that refuses to be shut down, that optimizes for something humans didn’t intend, or that manipulates its environment to preserve itself. Think HAL 9000 or Ex Machina — not out of malice, but misalignment.

The field of alignment — making sure AI does what we want it to do — is still in its infancy. And as systems grow more powerful, it becomes harder to predict their behavior or interpret their reasoning.

OpenAI and others are pouring resources into solving alignment, but no one knows if it can truly be “solved” at scale. The fear here isn’t just malfunction — it’s misunderstanding. Systems that think in ways we don’t — and act accordingly.

3. The Quiet Takeover

The most subtle — and perhaps most unsettling — threat Altman describes is the scenario where AI takes over not through force or hacking, but through dependency.

Imagine a world where AI doesn’t need to say, “I’m in charge now.” It just quietly becomes the smartest actor in every room. Not malicious, not sentient — just better than us at almost everything. So we defer.

First it’s personal: people rely on ChatGPT to help with decisions, manage emotions, and offer guidance. Then it becomes institutional: governments, CEOs, and other leaders begin to offload critical decisions to AI because it performs better.

At some point, even without intending it, we might find ourselves living in a world where human judgment is no longer the final word. And not because we were forced to hand over the reins — but because it simply made sense to.

A New Kind of Arms Race

If the last century taught us anything, it’s that transformative technology tends to bring global tension, not harmony. The Cold War wasn’t just a standoff between ideologies — it was a race for technological supremacy, where the stakes were existential and the pace of development was driven by fear. Today, we’re seeing similar dynamics with AI.

Nations aren’t just competing to build smarter systems — they’re scrambling to ensure they don’t fall behind. The most powerful AI models are increasingly viewed not just as tools, but as strategic assets. Whoever builds the most capable, most aligned, or most tightly controlled system may hold an advantage in cybersecurity, biotech, surveillance, and even diplomacy.

But unlike nuclear weapons, AI isn’t locked away in silos. It’s diffused, digitized, and in some cases, open-sourced. The barrier to entry is far lower — and the potential for misuse, much harder to contain.

We’re entering an era where power isn’t just about military strength or GDP. It’s about who controls the intelligence layer — the systems shaping decisions, predicting behavior, and automating influence.

And the rest of us? We’re caught in the middle, watching this unfold in real time, with no clear consensus on where it ends.

A Quick Note on “AI 2027”

AI 2027 is a scenario piece (published April 3, 2025) by a small forecasting group (Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean). It’s not a prophecy, but a grounded “best guess” built from trend extrapolations, wargames, expert feedback, and insider experience. To keep things concrete (and legally safer), it uses fictional stand-ins like “OpenBrain” and “DeepCent” to show how an AI race could unfold between 2025 and 2027.

Prefer watching over reading? Here’s the narrated version on YouTube: AI 2027 Scenario.

I’m using it here because it crisply captures something we’re already seeing hints of: AI development as an arms race—measured in compute, chips, and model weights—not just clever demos.

The Compute Cold War

“We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.” — AI 2027

We’re not just competing on features anymore—we’re competing on compute, chips, and model weights. Call it what it is: an arms race. U.S. labs race to spin up giga-datacenters; China centralizes its talent and silicon in hardened “development zones.” One side leaks model weights, the other tightens export controls. It’s OpenBrain vs. DeepCent now, but the logos don’t matter—the dynamics do.

“The mood in the government silo is as grim as during the worst part of the Cold War.” — AI 2027

Unlike nukes, AI isn’t siloed in deserts. It’s everywhere: open weights, API endpoints, laptops. Enforcement is messy, treaties are vague, and the barrier to entry keeps dropping. That makes classic arms-control logic harder—and the timeline faster. The real fear isn’t just who wins the race, but what corners get cut to stay ahead.

So the question isn’t “Will there be an arms race?” It’s: How do we survive one that’s already started—without sleepwalking into the future our models optimize for rather than the one we chose?

Where This Leaves Us

None of this looks like a single “AI uprising.” It’s incentives, geopolitics, and convenience—pushing us, step by step, to hand off judgment and control.

We’re watching two races at once: one against bad actors, and another between nation-states and labs sprinting to out-build each other in compute and capability. In that scramble, corners get cut, specs get ignored, and “temporary” patches become permanent policy.

The danger isn’t just that AI might turn on us—it’s that we reorganize society around systems we don’t really understand, can’t truly audit, and feel pressured to trust anyway.

So the work now isn’t just technical. It’s drawing hard boundaries, building real oversight, and deciding—explicitly—what we refuse to outsource, no matter how smart the model gets. Otherwise the future won’t be chosen; it’ll be optimized for us.