Making AI-generated Code More Accurate
Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash. Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.
A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate its effort toward the outputs most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.

Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics. In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.
“This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.
The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends. They accomplish this using a technique called sequential Monte Carlo, which enables multiple parallel generations from an LLM to compete with one another. The model dynamically allocates resources to different threads of parallel computation based on how promising their output appears.
Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest. In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal.
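As a rough illustration of this idea, the sketch below shows what sequential Monte Carlo-style resampling can look like in code: partial outputs ("particles") are extended one token at a time, weighted by a scoring function, and resampled so that computation concentrates on the highest-weight candidates. The function names, the number of particles, and the weighting scheme here are illustrative assumptions for a minimal sketch, not the authors' implementation.

```python
import random

# Minimal sketch of sequential Monte Carlo for constrained generation (illustrative,
# not the paper's implementation). Each "particle" is a partial output; at every step
# it is extended, weighted by how likely it is to remain valid and accurate, and
# low-weight particles are dropped in favor of copies of high-weight ones.

def smc_generate(propose_next, weight_fn, is_complete, num_particles=8, max_steps=50):
    particles = [[] for _ in range(num_particles)]   # partial outputs (token lists)
    weights = [1.0] * num_particles

    for _ in range(max_steps):
        # Extend each unfinished particle by one token and update its weight.
        for i, particle in enumerate(particles):
            if is_complete(particle):
                continue
            particle.append(propose_next(particle))   # e.g., sample a token from the LLM
            weights[i] *= weight_fn(particle)          # structural/semantic score

        # Resample: focus computation on the most promising partial outputs.
        total = sum(weights)
        if total == 0:
            break
        probs = [w / total for w in weights]
        chosen = random.choices(range(num_particles), weights=probs, k=num_particles)
        particles = [list(particles[i]) for i in chosen]
        weights = [1.0] * num_particles                # reset weights after resampling

        if all(is_complete(p) for p in particles):
            break

    # Return the highest-scoring completed (or longest partial) output.
    return max(particles, key=weight_fn)
```

In a setting like SQL generation, for example, the weighting function could assign zero weight to a prefix that can no longer be completed into a syntactically valid query, which is what allows unpromising outputs to be discarded early rather than generated to completion.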
Boosting Small Models and Future Directions
To test their approach, the researchers applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow. When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation. In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.
“We are very excited that we can allow these small models to punch way above their weight,” Loula says. Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.
The research has been funded and supported in part by the Canada CIFAR AI Chairs Program, the MIT Quest for Intelligence, and Convergent Research. The paper, titled “Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo,” will be presented at the International Conference on Learning Representations.