Dario Amodei, the CEO of AI firm Anthropic, is sounding the alarm on the potential risks posed by artificial intelligence while also highlighting the technology’s significant benefits. Speaking on the “Hard Fork” podcast, Amodei expressed concerns about the dangers of AI misuse and the need for immediate action.
Amodei predicts a looming “shock” within the next two years as people begin to realize the extent of both AI’s promise and its potential for harm. He stated, “I think people will wake up to both the risks and the benefits.” His intent is to “forewarn people” in the hopes of eliciting a “sane and rational response.”
While acknowledging the potential of AI to break down barriers in fields like niche “knowledge work” and possibly help solve major global issues, Amodei stressed that the corresponding risks are substantial. He noted that Anthropic’s “responsible scaling policy” focuses on preventing the misuse of AI in areas like autonomy and chemical, biological, radiological, and nuclear (CBRN) threats. “It is about hardcore misuse in AI autonomy that could be threats to the lives of millions of people. That is what Anthropic is mostly worried about,” he added.
Amodei believes that the risk of misuse by malicious actors could become a reality as early as 2025 or 2026, though he admitted uncertainty about when “real risk” might present itself. He clarified that his concerns go beyond information readily available through an internet search. Instead, he worries more about the “kind of esoteric, high, uncommon knowledge that, say, only a virology Ph.D. or something has.”
Amodei also foresees major implications for military technology and national security. He’s concerned that AI could become “an engine of autocracy,” particularly in countries like Russia and China. “If you think about repressive governments, the limits to how repressive they can be are generally set by what they can get their enforcers, their human enforcers to do,” Amodei explained. “But if their enforcers are no longer human, that starts painting some very dark possibilities.”
He emphasized the importance of the U.S. matching China’s advancements in AI development, hoping to equip “liberal democracies” with enough technological advantage to counter abuses of power and protect national security.
So what’s the solution? Amodei emphasized that it’s possible to mitigate risks without sacrificing AI’s benefits. He believes it can be done with “subtlety” and “a complex conversation.” Amodei recognized that AI models are “somewhat difficult to control” at present. However, he remains hopeful. “We know how to make these,” he said. “We have kind of a plan for how to make them safe, but it’s not a plan that’s going to reliably work yet. Hopefully, we can do better in the future.”