Navigating the A.I. Landscape with Anthropic’s CEO, Dario Amodei
In a recent episode of the “Hard Fork” podcast, hosts Kevin Roose and Casey Newton welcomed Dario Amodei, the CEO of Anthropic, for a comprehensive discussion on the latest developments and challenges in the field of artificial intelligence. The interview covered a range of topics, from Anthropic’s newest language model, Claude 3.7 Sonnet, to the escalating A.I. competition with China and Amodei’s hopes and concerns for the future.
Roose and Newton began by discussing the hectic pace of A.I. announcements, with companies frequently making news even on weekends. Amodei then introduced Claude 3.7 Sonnet, highlighting its improvements on real-world tasks, particularly coding. He explained, “We trained Claude 3.7 more to focus on these real-world tasks.”
Amodei also noted the model’s distinctive “extended thinking” mode, which allows longer processing times for complex queries and differentiates it from other available reasoning models. The feature lets the model decide how long it needs to consider a user’s question, in a way that mimics human thought processes.
When asked about the dangers associated with Claude 3.7 Sonnet, Amodei drew a crucial distinction between present and future dangers. Rather than current threats, he expressed greater worry about risks tied to the increasing power of the models. He warned that models are approaching a stage at which they could enable serious misuse, including helping to produce biological or chemical weapons, and could pose A.I. autonomy risks. Although Amodei felt that the current Sonnet release did not show a “meaningful increase in the threat,” he said there was “a substantial probability” that the next model would cross into a riskier category.
Amodei discussed the rapid pace of A.I. innovation and how quickly advances are copied among competitors. Even so, he felt that the various models actually differ meaningfully from one another, creating a differentiation that can make it difficult for consumers to choose among them.
Turning to competition, Amodei said that, from the perspective of national security and autonomy, he feels greater concern about DeepSeek, because A.I. could become an engine of autocracy and be used to entrench authoritarian regimes.
In discussing the recent A.I. Action Summit in Paris, Amodei expressed disappointment, describing it as a trade show lacking the spirit of the original Bletchley Park summit, which prioritized risk discussion. He agreed that the industry is not taking enough time to consider all the possible outcomes.
In the second half of the interview, Amodei addressed questions about the future. He put the probability of highly advanced A.I. systems emerging by the end of the decade at 70 to 80 percent, possibly as early as 2026 or 2027, and noted the growing number of people who “grok the future.”
Amodei highlighted the need for nuanced conversations about managing risks while maximizing benefits, advocating for critical thinking in an environment with increasingly sophisticated content.
Roose raised the question of career decisions in the face of rapidly evolving A.I. capabilities, especially the possibility of A.I. replacing junior roles. Amodei stated that, in the short run, these technologies would likely augment human coders and increase their productivity, but that in the long term some jobs may be replaced, particularly at the more junior levels.
Ultimately, Amodei expressed his hope that society will learn to coexist with powerful intelligences, acknowledging both the potential for profound positive change and a degree of emotional upheaval as A.I. technology matures and integrates further into daily life.
