Superintelligence Startup Reflection AI Secures $130M in Funding
Reflection AI Inc., a recently established startup led by former Google DeepMind researchers, launched today with $130 million in early-stage funding. The capital was raised across two rounds: a $25 million seed round led by Sequoia Capital and CRV, followed by a $105 million Series A that CRV co-led with Lightspeed Venture Partners.
Reflection AI’s funding also attracted a number of high-profile backers, reportedly including Nvidia Corp.’s venture capital arm, LinkedIn co-founder Reid Hoffman, and Scale AI Inc. Chief Executive Officer Alexandr Wang. The funding values the company at $555 million.

Misha Laskin (right) and Ioannis Antonoglou (left), co-founders of Reflection AI.
At the helm of Reflection AI are co-founders Misha Laskin, who serves as CEO, and Ioannis Antonoglou. Laskin previously contributed to the development of the training workflow for Google LLC’s Gemini large language model series, while Antonoglou worked on Gemini’s post-training systems. Post-training is the process of refining a large language model after its initial training to improve the quality of its output.
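For readers unfamiliar with the term, the toy sketch below illustrates the idea under an assumed, simplified setup: a pretrained model is further trained on curated prompt/response pairs so its answers improve. The model, data, and hyperparameters are placeholders, not details of Gemini’s actual post-training pipeline.

```python
# Toy post-training sketch (assumed setup; not Gemini's actual pipeline).
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, dim = 100, 32

# Stand-in for a pretrained language model: an embedding table plus a linear head.
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Curated (prompt token, preferred next token) pairs stand in for the
# instruction-style data used to refine a model after its initial training.
pairs = [(torch.tensor([3]), torch.tensor([7])),
         (torch.tensor([5]), torch.tensor([2]))]

for epoch in range(3):
    for prompt, target in pairs:
        logits = model(prompt)          # predicted next-token scores
        loss = loss_fn(logits, target)  # nudge the model toward the curated answer
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```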
Reflection AI has set its sights on developing what it terms “superintelligence,” which it defines as an artificial intelligence system capable of handling a wide variety of computer-based tasks. The company’s initial focus is on building an autonomous programming tool. Reflection AI believes the foundational technologies needed to create such a tool can be adapted to realize its broader superintelligence vision.
“The breakthroughs needed to build a fully autonomous coding system — like advanced reasoning and iterative self-improvement — extend naturally to broader categories of computer work,” wrote Reflection AI staff members in a blog post. The company initially plans to concentrate on developing specialized AI agents. These agents will automate particular programming tasks, such as identifying code vulnerabilities, optimizing the use of memory in applications, and assessing their reliability.
Reflection AI also plans to automate related tasks. The company says its technology can generate documentation that explains how a given piece of code works, and its software is designed to help manage the infrastructure that supports customer applications.
According to a job posting on Reflection AI’s website, the company intends to use large language models and reinforcement learning to power its software. Historically, developers trained AI models using datasets that included explanations for each data point. Reinforcement learning eliminates the need for these explanations, thereby simplifying the creation of training datasets. The job posting also reveals that Reflection AI is considering “novel architectures” for its AI systems, suggesting a possible move beyond the Transformer neural network architecture used by most large language models.

Another job posting, seeking an AI infrastructure expert, indicates that Reflection AI plans to train its models on tens of thousands of graphics cards. The company also intends to work on “vLLM-like platforms for non-LLM models.” Developers use vLLM, a popular open-source AI tool, to reduce the memory usage of language models during inference.
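For context, the snippet below is a minimal sketch of how vLLM is commonly used to run a language model; the model name, prompt, and sampling settings are illustrative placeholders rather than anything from Reflection AI.

```python
# Minimal vLLM usage sketch (model, prompt, and settings are placeholders).
from vllm import LLM, SamplingParams

# vLLM loads the model once and manages the GPU key/value cache efficiently,
# which keeps the memory footprint of serving the model low.
llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.7, max_tokens=128)

# Ask for a short, documentation-style completion for a single prompt.
prompts = ["Write a one-sentence docstring for a function that reverses a list."]
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```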
“As the team advances model intelligence to increase its scope of capabilities, Reflection’s agents take on more responsibilities,” wrote Sequoia Capital investors Stephanie Zhan and Charlie Curnin in a blog post. “Imagine autonomous coding agents working tirelessly in the background, handling workloads that slow teams down.”