The AI Risks That Should Really Worry You
This week's dire warnings from the Anthropic CEO are frightening only if you assume the world evolves in predetermined scenarios. But the looming job losses look potentially disastrous.
“If you see ten troubles coming down the road,” Calvin Coolidge is said to have said, “you can be sure that nine will run into the ditch before they reach you.”
Dario Amodei’s relentless 20,000-word warning about the dangers ahead from Artificial Intelligence doesn’t give odds for the manifold “troubles” he sees coming. Even if only a tenth of the scenarios he outlines in this week’s essay come true, we are … doomed. But while the Anthropic co-founder knows far more about AI than I do, he fails to envision the human reactions that will surely give this disruptive technology shape and guardrails. To be fair, a central purpose of his e-pamphlet is to provoke such a reaction and more sensible policies, but it’s hard to take the risks seriously when they sound like a series of predetermined sci-fi disasters that the human race will sit around and watch unfold.
The first group of risks can be roughly assigned to the category of “AI run amok.” In one scenario, the models themselves collude behind their designers’ backs to seize control of human destiny. As powerful as these computer systems may become, however, they remain computer systems. They may cause all sorts of havoc in their development, but it’s hard to see how they somehow enslave 8 billion historically ungovernable earthlings.
The notion that bad governments could use AI to tighten authoritarian controls or even expand their influence in other countries strikes me as easier to envision, but again, not plausible without triggering a powerful reaction from other countries with their own AI models. The good guys fighting to expand liberal democracy may not prevail automatically, but not even AI tyrannies will last forever. Humans may soon be much less “smart” than these algorithms, but they are inherently much less predictable and harder to subdue.
Another group of problems that Amodei explores might be dubbed the dangers of success. If AI leads to improvements in healthcare and biology that extend human lifespans and boost human reasoning power, what next? He riffs: “As an example, could powerful AIs invent some new religion and convert millions of people to it? Could most people end up “addicted” in some way to AI interactions? Could people end up being “puppeted” by AI systems, where an AI essentially watches their every move and tells them exactly what to do and say at all times, leading to a “good” life but one that lacks freedom or any pride of accomplishment?”
Hmm. Maaaayyyybeee…. But don’t we have a lot of other things to worry about first?
Job creation is the challenge that worries me most, as someone who knows a little more about labor markets than about Large Language Models. AI is coming after white-collar jobs just as combine harvesters once replaced farm labor and automation has been chipping away at manufacturing jobs for the last century. And Amodei correctly stresses that the changes will come fast.
With this technological revolution, you don’t need to finance expensive machinery or build elaborate factories; an app and an internet connection can take over the work of any firm’s legal, financial or administrative teams. This is the economic problem with immediate social and political consequences that we need to start preparing for. This is where the blessing really becomes a curse. AI may well deliver all the productivity gains it promises, but if it eliminates jobs faster than it creates alternatives, there will be a widening cavity at the center of the economy.
“Who will buy the cars if they don’t have jobs or money to pay for them?” as Henry Ford is said to have said.

