Advancing Trust in AI Model Training
Artificial intelligence (AI) models are increasingly being used to predict material properties and to design new medicines and industrial chemicals. However, trusting the answers these models provide remains a significant challenge. Now, researchers at Pacific Northwest National Laboratory (PNNL) have developed a method to measure the uncertainty of a class of AI models called neural network potentials, marking an important step toward confidence in AI model training.

The research team, led by data scientists Jenna Bilbrey Pope and Sutanay Choudhury, has created an uncertainty quantification method that helps determine how well a neural network potential has been trained and flags predictions that fall outside its training boundaries. This development is crucial because millions of dollars in investment can depend on the reliability of AI model predictions.
“We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high,” said Bilbrey Pope. “Our method mitigates this overconfidence, providing a more accurate assessment of prediction uncertainty.” The team has made their method publicly available on GitHub as part of the Scalable Neural network Atomic Potentials (SNAP) repository.
The researchers benchmarked their uncertainty method using the MACE foundation model for atomistic materials chemistry. By quantifying how well the model is trained to predict the energies of specific families of materials, they demonstrated how the approach can build trust in AI model training. That trust is essential for integrating AI workflows into everyday laboratory work and for creating autonomous laboratories in which AI becomes a trusted assistant.
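As a rough illustration of what such a benchmark can look like (not the team's exact protocol), a simple calibration check compares each predicted uncertainty against the actual error on held-out reference data; for well-calibrated Gaussian uncertainties, roughly 68% of errors should fall within one predicted standard deviation. The function and numbers below are invented for illustration.

```python
import numpy as np

def coverage_at_one_sigma(y_true, y_pred, sigma):
    """Fraction of reference energies falling within one predicted
    standard deviation of the model's prediction. Well-calibrated
    Gaussian uncertainties give roughly 68%; a much lower number
    signals overconfidence."""
    y_true, y_pred, sigma = map(np.asarray, (y_true, y_pred, sigma))
    return float(np.mean(np.abs(y_true - y_pred) <= sigma))

# Invented numbers: reference energies (eV), model predictions, and
# the model's predicted uncertainties for a held-out material family.
y_true = [-3.21, -3.05, -2.98, -3.40]
y_pred = [-3.18, -3.10, -2.90, -3.33]
sigma  = [ 0.05,  0.04,  0.10,  0.06]
print(f"1-sigma coverage: {coverage_at_one_sigma(y_true, y_pred, sigma):.0%}")
```

Here only half the errors fall inside the predicted one-sigma band, the kind of overconfidence the team's method is designed to mitigate.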
The new method addresses the tradeoff between the speed of AI predictions and their accuracy. While AI models can make predictions much faster than traditional, computationally intensive methods, their ‘black box’ nature has been a barrier to adoption. The PNNL data science team’s uncertainty measurement provides a way to understand how much to trust an AI prediction, enabling statements such as “This prediction provides 85% confidence that catalyst A is better than catalyst B.”
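A statement like that follows directly from two predictions and their uncertainties. A minimal sketch, assuming independent Gaussian predictive distributions for the two candidates (all numbers are invented for illustration):

```python
import math

def prob_a_better(mu_a, sigma_a, mu_b, sigma_b):
    """Probability that A's predicted figure of merit exceeds B's,
    assuming independent Gaussian predictive distributions."""
    # A - B is Gaussian with mean mu_a - mu_b and variance sigma_a^2 + sigma_b^2.
    z = (mu_a - mu_b) / math.sqrt(sigma_a**2 + sigma_b**2)
    # Standard normal CDF written in terms of the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Invented predictions for two candidate catalysts (arbitrary units).
p = prob_a_better(mu_a=1.32, sigma_a=0.10, mu_b=1.173, sigma_b=0.10)
print(f"Confidence that catalyst A is better than catalyst B: {p:.0%}")
```

This prints 85%: the smaller and better calibrated the uncertainties, the sharper the comparison a researcher can make.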
By making it possible to ‘wrap’ any neural network potential for chemistry in their framework, the researchers have given AI models the power to be uncertainty-aware. This advancement brings AI one step closer to being a reliable tool in materials science and chemistry research.
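The SNAP repository defines its own interfaces; purely as a sketch of the wrapping idea, using ensemble disagreement as one common uncertainty signal (the paper's actual method may differ, and every name below is hypothetical rather than the repository's API), such a wrapper might look like this:

```python
import numpy as np

class UncertaintyAwarePotential:
    """Hypothetical wrapper (not the actual SNAP API): adds an
    ensemble-based uncertainty estimate to any potential object
    exposing a predict_energy(structure) method."""

    def __init__(self, members):
        self.members = members  # independently trained ensemble members

    def predict(self, structure):
        """Return (mean energy, uncertainty) for a structure."""
        energies = np.array([m.predict_energy(structure) for m in self.members])
        # Ensemble mean is the prediction; member disagreement is the
        # uncertainty signal.
        return energies.mean(), energies.std(ddof=1)

    def in_domain(self, structure, threshold):
        """Flag structures whose ensemble disagreement exceeds a
        calibrated threshold as outside the training domain."""
        _, sigma = self.predict(structure)
        return sigma <= threshold
```

Because the wrapper only assumes a generic energy-prediction method, the same pattern applies to any neural network potential, which is what makes the uncertainty-aware framing broadly useful.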