AWS for Industries: Building a Generative AI-Powered Reservoir Simulation Assistant with Stone Ridge Technology
In reservoir simulation, building accurate models is essential for understanding and predicting the complex behavior of subsurface fluid flow through geological formations. This process is challenging, even for seasoned professionals, because of the inherent complexities of model creation, implementation, and optimization. Artificial intelligence (AI) and large language models (LLMs) now offer a transformative way to streamline and enhance the reservoir simulation workflow.
This article details the development of an intelligent simulation assistant that leverages Amazon Bedrock, Anthropic’s Claude models, and Amazon Titan embeddings. The goal is to transform how reservoir engineers approach simulation tasks.
Background on Reservoir Simulation Models
Reservoir simulation models belong to the broader class of earth simulation models, which cover domains such as groundwater flow, geothermal energy, hydrocarbon extraction, and CO2 sequestration. In hydrocarbon extraction, for example, a simulation of the Norne reservoir model [1] captures the evolution of pressures and saturations for oil (heavy hydrocarbons), gas (lighter hydrocarbons), and water; understanding this evolution is key to identifying well locations and extraction strategies.
From a modeling perspective, there are three primary workflows:
- Getting the model “right”: This involves ensuring the model accurately represents physical reality. It is achieved through “History Matching,” in which model parameters are adjusted to fit observed data by minimizing a misfit error (a common form of this objective is sketched after this list).
- Implementing the model accurately and efficiently: Once validated, focus shifts to correct and efficient implementation on a digital computer.
- Optimizing model parameters: Engineers can explore different scenarios to optimize hydrocarbon recovery or CO2 storage by manipulating boundary conditions, well patterns, and injection/production schedules.
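For readers unfamiliar with history matching, a common simplified form of the misfit objective is the weighted least-squares expression below; the notation here is generic rather than taken from any specific simulator. With model parameters $m$, observed data $d_i^{\mathrm{obs}}$, simulated responses $d_i^{\mathrm{sim}}(m)$, and measurement uncertainties $\sigma_i$:

$$ J(m) = \sum_{i=1}^{N} \left( \frac{d_i^{\mathrm{obs}} - d_i^{\mathrm{sim}}(m)}{\sigma_i} \right)^{2} $$

History matching then searches for the parameters $m$ that minimize $J(m)$; in practice, regularization terms and prior information are usually added.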
While the first and third workflows include advanced techniques, this article concentrates on the second workflow: ensuring the correct and efficient implementation of the reservoir simulation model.

Integration of AI and Domain Knowledge
The intelligent simulation assistant is a powerful tool designed to address these complexities. It helps prepare simulation input data, evaluates existing models, and selects optimal simulation options based on observed simulation logs.
By using Amazon Bedrock, Amazon Titan embeddings, and Anthropic’s LLMs, the assistant seamlessly integrates AI with domain knowledge to empower efficient and accurate reservoir simulation workflows.
The assistant draws upon a diverse range of data sources, including:
- Simulation Model: Physical inputs, parameters, and solution techniques
- Product Knowledge: Documents, reports, technical manuals, webpages, and other resources
- Processed Product Knowledge: Vector stores, SQLite databases, YAML files, and structured data (an illustrative sketch follows this list)
- Simulation Log and Results Files: Output data from simulation runs
- Processed Simulation Runtime/Execution Knowledge: Filtered information from simulation logs and results
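To make the “Processed Product Knowledge” category concrete, here is a purely illustrative sketch of a SQLite keyword-reference store that such an assistant could query for grounded keyword documentation. The schema, table name, and the Eclipse-style WELSPECS entry are assumptions for illustration, not the actual Stone Ridge Technology data model.

```python
import sqlite3

# Hypothetical schema for a keyword-reference store; the real
# processed-knowledge layout is not described in this post.
conn = sqlite3.connect("product_knowledge.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS keywords (
           name        TEXT PRIMARY KEY,   -- simulator input keyword
           description TEXT NOT NULL,      -- what the keyword controls
           example     TEXT                -- minimal usage snippet
       )"""
)
conn.execute(
    "INSERT OR REPLACE INTO keywords VALUES (?, ?, ?)",
    ("WELSPECS",
     "Declares a well: name, group, grid location, and reference depth.",
     "WELSPECS\n 'PROD1' 'G1' 10 10 2512.5 'OIL' /\n/"),
)
conn.commit()

def lookup_keyword(name: str):
    """Return the stored reference entry for a simulator keyword, or None."""
    return conn.execute(
        "SELECT description, example FROM keywords WHERE name = ?", (name,)
    ).fetchone()

print(lookup_keyword("WELSPECS"))
```

Structured lookups like this give the assistant exact keyword documentation to ground its answers, complementing the semantic search provided by the vector store.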
By integrating these data sources, the assistant enables effective human-machine collaboration, giving users access to a broad knowledge base and responses tailored to their queries.

Key Capabilities
The intelligent simulation assistant offers comprehensive capabilities to streamline the reservoir simulation workflow. These include:
- General Inquiries and Keyword Explanations: Accurate answers to general questions about the simulation software, detailed information on keyword usage, and examples of implementation.
- Model Analysis and Issue Identification: Identification of potential issues or inconsistencies in the simulation model input data, alerting users to problems before the simulation runs (a minimal example follows this list).
- Simulation Run Analysis and Optimization: Analysis of log and results files during and after simulation runs, identifying potential issues or bottlenecks, and offering recommendations for optimization.
- Interactive Model Manipulation and Re-running: Users can directly modify simulation models to address identified issues or explore scenarios, and the assistant can initiate and monitor new simulations based on the updated models.
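As a minimal sketch of the model-analysis idea, the check below scans an Eclipse-style input deck for missing sections and an inconsistent well definition. The required-section list, the keyword pair checked, and the file name are illustrative assumptions; the assistant’s real rule set is far richer.

```python
# Illustrative pre-run sanity check for an Eclipse-style input deck.
# The rules below are examples only, not the assistant's actual checks.
REQUIRED_SECTIONS = ["RUNSPEC", "GRID", "PROPS", "SOLUTION", "SCHEDULE"]

def check_deck(path: str) -> list[str]:
    """Return human-readable warnings for a simulation input deck."""
    text = open(path).read()
    warnings = []
    for section in REQUIRED_SECTIONS:
        if section not in text:
            warnings.append(f"missing required section: {section}")
    # Wells declared without completions usually indicate an incomplete model.
    if "WELSPECS" in text and "COMPDAT" not in text:
        warnings.append("wells declared (WELSPECS) but no completions (COMPDAT)")
    return warnings

for warning in check_deck("MODEL.DATA"):  # MODEL.DATA is a placeholder path
    print("WARNING:", warning)
```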
The assistant uses precomputed knowledge bases, model-specific data stores, and callback agents to provide contextual information, thereby ensuring accurate and relevant responses tailored to the user’s queries.
One important consideration in building generative AI tools is data security. The simulation data and results analysis are converted into embeddings and stored in a secure database within a private cloud environment, and they are never exposed to the underlying LLM through training or fine-tuning; as a result, data security and confidentiality are preserved.
Using Amazon Bedrock and Anthropic’s LLM Models
The integration of Amazon Bedrock and Anthropic’s LLMs provides the foundation for the intelligent simulation assistant. Amazon Bedrock is a fully managed service that makes foundation models available through a single API, and it provides a robust environment for hosting and serving the assistant’s AI components. Through Amazon Bedrock, the assistant can use Anthropic’s LLMs for advanced natural language processing and generation.
Anthropic’s LLMs, such as Claude, are at the forefront of AI technology, providing state-of-the-art language understanding and generation. These models are trained on vast datasets, allowing them to grasp complex concepts and generate human-like responses. With these LLMs integrated, users can engage the assistant in natural conversation, enabling an intuitive and efficient user experience.
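As a minimal sketch of how the assistant can call Claude through the Bedrock Runtime API in boto3 (the region, model ID, and question are placeholder choices, and error handling is omitted for brevity):

```python
import json

import boto3

# Bedrock Runtime client; the region is an illustrative choice.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_claude(question: str) -> str:
    """Send a single-turn question to Claude on Amazon Bedrock."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": question}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(ask_claude("When is an adaptive implicit method preferred in reservoir simulation?"))
```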

The AWS architecture includes these steps:
1. The simulation engineer accesses a reservoir simulation knowledge base tailored with domain-specific expert knowledge.
2. This knowledge is chunked, vectorized with Amazon Titan G1 embeddings, and ingested into a vector index in Amazon OpenSearch Service. Claude on Amazon Bedrock exposes it as a chatbot application (a condensed sketch follows these steps).
3. The simulation engineer builds a model with guidance from the generative AI tool, producing a physically consistent simulation input file.
4. The input file is stored on shared data storage and passed to the execution environment where the simulation is run: AWS ParallelCluster.
5. The API server, web server, and data management Amazon EKS nodes are launched within a private subnet. Simulations are launched through the CLI, using the input model created in Step 3.
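A condensed sketch of Step 2 follows, combining Titan embeddings, OpenSearch k-NN retrieval, and Claude into a simple retrieval-augmented answer flow. The index name, field names, and endpoint are assumptions; the boto3 and opensearch-py calls are the standard interfaces for these services.

```python
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder endpoint for the OpenSearch domain that holds the vector index.
opensearch = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}],
                        use_ssl=True)

def embed(text: str) -> list[float]:
    """Vectorize text with Amazon Titan Embeddings G1 - Text."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def retrieve(question: str, k: int = 3) -> list[str]:
    """k-NN search over a hypothetical 'sim-knowledge' vector index."""
    query = {"size": k,
             "query": {"knn": {"embedding": {"vector": embed(question), "k": k}}}}
    hits = opensearch.search(index="sim-knowledge", body=query)["hits"]["hits"]
    return [hit["_source"]["text"] for hit in hits]

def answer(question: str) -> str:
    """Ground Claude's answer in the retrieved simulator documentation."""
    context = "\n\n".join(retrieve(question))
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content":
                      f"Answer using only this simulator documentation:\n\n"
                      f"{context}\n\nQuestion: {question}"}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```

In production, the ingestion side (chunking documents, embedding them, and indexing them with a knn_vector mapping) would run as a separate pipeline; this sketch shows only the query path.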
Although not covered in this architecture, two key elements enhance this workflow significantly and are the topic of future exploration: 1) simulation execution using natural language by orchestration through a generative AI agent, and 2) multimodal generative AI (vision and text) analysis and interpretation of reservoir simulation results such as well production logs and 3D depth slices for pressure and saturation evolution.
Conclusion
As AI technology continues to advance, this intelligent assistant represents a significant step forward for reservoir simulation workflows. By continuously expanding its knowledge base, the assistant gives users a powerful tool that evolves and adapts with new insights and data.
The intelligent simulation assistant offers a seamless blend of cutting-edge AI and in-depth domain knowledge. This empowers engineers to address complex simulation challenges with increased efficiency and enhanced accuracy, leading to new insights and accelerating innovation in the energy industry.
Users interested in discussing the business case or technical details can reach out to the AWS team or Stone Ridge Technology and refer to the accompanying Stone Ridge Technology post.
References:
[1] Rwechungura, R., Suwartadi, E., Dadashpour, M., Kleppe, J., and B. Foss. “The Norne Field Case—A Unique Comparative Case Study.” Paper presented at the SPE Intelligent Energy Conference and Exhibition, Utrecht, The Netherlands, March 2010. doi: https://doi.org/10.2118/127538-MS.