Chinese Startup Aims to Challenge Nvidia’s AI Chip Dominance
A new framework developed by a team affiliated with China’s Tsinghua University aims to reduce the country’s dependence on Nvidia chips for artificial intelligence model inference. The initiative is the latest move in China’s push for technological self-sufficiency in the face of US export controls.

A Chinese team says its new AI framework can run on domestic chips. Photo: AFP
The framework, named Chitu, is designed for high-performance inference of large language models (LLMs). Developed jointly by the start-up Qingcheng.AI and a team led by Tsinghua University computer science professor Zhai Jidong, Chitu is built to run on domestically produced chips, challenging the dominance of Nvidia’s Hopper-series GPUs in serving AI models such as DeepSeek-R1, according to a joint statement.
AI frameworks are foundational tools for building sophisticated AI models, providing the libraries and utilities developers need to design, train and validate them efficiently. Inference frameworks such as Chitu focus on the serving stage: running already-trained models as quickly, and on as little hardware, as possible.
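As a rough illustration of the role such a framework plays, the sketch below runs a single inference pass through the open-source Hugging Face Transformers library. It is a generic example under the assumption of a standard Python setup with a GPU available; it does not show Chitu's own API, whose interface is not described in the statement, and the model checkpoint named is only an example.

```python
# Illustrative sketch only: one LLM inference call via the open-source
# Hugging Face Transformers library, to show what an inference framework
# does (load weights, tokenize input, run the forward pass on available
# hardware, decode output). This is NOT Chitu's API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example checkpoint; any supported model works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # place layers on available GPUs/CPU

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("What does an LLM inference framework do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Frameworks such as Chitu target this serving step, aiming to make it faster and to run it on hardware other than Nvidia's GPUs.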
The Chitu framework was open-sourced on Friday and supports mainstream models, including those from DeepSeek and Meta Platforms’ Llama series. In tests running the full-strength version of DeepSeek-R1 on Nvidia’s A800 GPUs, the company reported a 315% increase in model inference speed while cutting GPU usage by 50% compared with existing foreign open-source frameworks.
The development is part of a broader push by Chinese AI companies to reduce their reliance on Nvidia, whose high-performance GPUs are subject to US export restrictions. The US government has barred Nvidia from selling its advanced H100 and H800 chips, both part of the Hopper series, to clients based in China.