Anthropic, a prominent player in the artificial intelligence field, has introduced its latest AI model, Claude 3.7 Sonnet. The new model is engineered to deliver quicker responses and sharper step-by-step reasoning, representing a significant advancement over the company's previous offerings.

This release comes at a time of intense competition in the AI sector, with major U.S. technology companies and global firms like DeepSeek and Alibaba striving to innovate and capture market share. Anthropic describes Claude 3.7 Sonnet as a hybrid: within a single model it can return near-instant answers or work through complex problems with extended, step-by-step reasoning. Backed by Amazon and Google, the startup has positioned it as its most advanced model to date and made it available across all Claude plans, including Free, Pro, Team, and Enterprise.
The new model features an "extended thinking mode," available exclusively to paid subscribers. In this mode, the model "self-reflects" before formulating an answer, which improves its performance across a wide range of tasks, including mathematics, physics, code generation, and instruction following. Anthropic, based in San Francisco, has tailored the model toward practical, "real-world" applications, reflecting how businesses currently use large language models.
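For developers, extended thinking is exposed through the API as well as the chat interface. The sketch below shows how it might be enabled via Anthropic's Python SDK; the exact parameter names (thinking, budget_tokens) and the model identifier are assumptions based on Anthropic's public documentation at launch, so check the current docs before relying on them.

```python
# Minimal sketch: enabling extended thinking via the Anthropic Python SDK.
# Parameter names and the model ID below are assumptions from the launch-era docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=16000,              # must exceed the thinking budget
    thinking={
        "type": "enabled",
        "budget_tokens": 8000,     # cap on tokens spent "self-reflecting"
    },
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

# The reply interleaves "thinking" blocks (the model's reasoning) with the
# final "text" answer; here we print only the visible answer.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

Raising or lowering budget_tokens is how an API user trades response time and cost against the depth of reasoning.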
In addition to the model launch, Anthropic is presenting a limited preview of Claude Code, an agentic coding tool that lets developers delegate significant engineering tasks to the AI directly from their terminal. The company has announced that pricing remains unchanged from its previous models, while users can adjust how much time and computation the model devotes to answering a question.
Compared to OpenAI’s o1 model, Anthropic’s new model presents a more cost-effective option, with pricing set at $3 per million input tokens and $15 per million output tokens, in contrast to OpenAI’s $15 and $60, respectively.
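To make the price gap concrete, here is a back-of-the-envelope comparison using the per-million-token rates quoted above; the workload figures (2 million input tokens, 0.5 million output tokens) are illustrative only.

```python
# Cost comparison at the published per-million-token prices ($3/$15 vs. $15/$60).
PRICES = {
    "claude-3.7-sonnet": {"input": 3.00, "output": 15.00},   # USD per million tokens
    "openai-o1":         {"input": 15.00, "output": 60.00},
}

def workload_cost(model: str, input_millions: float, output_millions: float) -> float:
    """Total cost in USD for a given number of millions of input/output tokens."""
    p = PRICES[model]
    return input_millions * p["input"] + output_millions * p["output"]

for model in PRICES:
    print(f"{model}: ${workload_cost(model, 2.0, 0.5):.2f}")
# claude-3.7-sonnet: $13.50
# openai-o1:         $60.00
```

On this illustrative workload, the same traffic costs roughly a quarter as much on Claude 3.7 Sonnet as on o1, simply because both the input and output rates are lower by the same factor.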