Anthropic Launches ‘AI for Social Good’ Initiative with Former Biden Official at the Helm
Anthropic has appointed Elizabeth Kelly, a former Biden administration official, to lead its new Beneficial Deployments team. This initiative aims to extend the benefits of Anthropic’s AI technology to organizations focused on social good, particularly in areas such as health research and education, which often lack market-driven incentives.
Kelly previously led the U.S. AI Safety Institute, housed within the National Institute of Standards and Technology (NIST), beginning in 2024. During her tenure, she helped establish agreements under which OpenAI and Anthropic allowed NIST to safety-test the companies’ new AI models before deployment. Kelly left government service in early February and joined Anthropic in mid-March.
“Our mission is to support the development and deployment of AI in ways that are beneficial for society but may not be driven by market forces,” Kelly explained to Fast Company. The new team aligns with Anthropic’s mission as a public benefit corporation, which aims to distribute the advantages of its AI technology equitably beyond just deep-pocketed corporations.
Key Focus Areas of the Beneficial Deployments Team
The Beneficial Deployments team will operate within Anthropic’s go-to-market organization, which is responsible for ensuring the company’s AI software and services are designed with customer needs in mind. Kelly emphasized that her team will collaborate across departments, including Anthropic’s Applied AI group and its science and social-impact specialists, to help mission-aligned customers build successful products and services powered by Anthropic’s AI models.
“We need to treat nonprofits, ed-tech companies, and health-tech organizations developing transformative solutions with the same level of support as our largest enterprise customers,” Kelly stated. In practice, smaller organizations may receive even more support than Anthropic’s larger customers, since they often lack the budget and in-house AI expertise of big enterprises.
‘AI for Science’ Program and Other Initiatives
The Beneficial Deployments team will provide partner organizations with free access to Anthropic’s AI models. One of its first initiatives is the ‘AI for Science’ program, which will offer qualifying scientific researchers up to $20,000 in API credits over six months, with the possibility of renewal. Anthropic plans to start with at least 25 researchers working with its large language model Claude, then expand the program to additional industry verticals.
This initiative aims to democratize access to cutting-edge AI tools for researchers working on topics with significant scientific impact, particularly in biology and life sciences applications, as publicly funded support for scientific research faces increasing challenges.
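For a sense of what the program provides in practice, here is a minimal sketch of how a participating researcher might spend those API credits using Anthropic’s Python SDK. The model name and prompt are illustrative assumptions, not details from the announcement.

    # Minimal sketch: calling Claude via Anthropic's Python SDK (pip install anthropic).
    # The model name and prompt below are illustrative, not from the announcement.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # hypothetical model choice
        max_tokens=1024,
        messages=[
            {"role": "user",
             "content": "Summarize recent approaches to protein structure prediction."},
        ],
    )
    print(message.content[0].text)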
Collaborations and Future Plans
Anthropic has already piloted the Beneficial Deployments concept by providing API credits and consulting services to several ed-tech organizations. Amira Learning, for instance, uses Anthropic’s AI to teach reading comprehension to millions of students, drawing on Claude to generate personalized dialogues and create custom instructional content.
Other partners include FutureHouse, an Eric Schmidt-backed nonprofit automating scientific research with AI, and Benchling, a cloud-based data management platform for life sciences researchers. FutureHouse uses Anthropic’s Claude models to develop scientific agents for research and drug discovery, while Benchling embeds AI agents into scientific workflows using Anthropic’s AI within Amazon’s Bedrock cloud environment.
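To illustrate the Bedrock integration mentioned above, the following is a minimal sketch of invoking a Claude model through Amazon Bedrock’s runtime API with the AWS boto3 SDK. The model ID, region, and prompt are assumptions for illustration, not details of Benchling’s actual setup.

    # Minimal sketch: invoking a Claude model via Amazon Bedrock (pip install boto3).
    # Model ID, region, and prompt are illustrative assumptions.
    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example Claude model ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",  # required for Claude on Bedrock
            "max_tokens": 512,
            "messages": [
                {"role": "user", "content": "Extract assay conditions from this protocol."},
            ],
        }),
    )
    result = json.loads(response["body"].read())
    print(result["content"][0]["text"])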
With the Beneficial Deployments team now established, Anthropic will formalize and expand its earlier engagements with these organizations and offer similar opportunities to more qualifying academic and nonprofit groups. The team has already posted several open roles, including specialists in public health and economic mobility.
“I’m excited about the potential of these efforts to support organizations that are sometimes overlooked and need to be part of the AI transformation,” Kelly concluded.