Secure AI Algorithm Execution on AWS: A Case Study from Radboudumc
This post delves into the architecture and security principles employed by Radboud University Medical Center Nijmegen (Radboudumc) to establish a hardened artificial intelligence (AI) runtime environment on Amazon Web Services (AWS).

For business leaders grappling with sensitive or regulated data, this post offers invaluable insights. It showcases a proven approach for harnessing the power of AI while upholding strict data privacy and security standards.

The demand for healthcare services is escalating globally. A World Health Organization (WHO) report projected the need for an additional 14 million healthcare workers by 2030. To improve healthcare outcomes, organizations are increasingly leveraging AI to automate services and assist healthcare professionals with smarter tools.
Insights derived from the growing volume of available data are crucial for delivering a high level of care. AI is poised to significantly impact efficiency across the healthcare value chain, from automating administrative tasks to improving diagnostic accuracy. However, before AI algorithms can be integrated into clinical practice, it is essential to validate their performance through benchmarking.
In the scientific community, this benchmarking process is often facilitated by challenges. These competitions enable comparison and foster development of cutting-edge algorithms. Challenge organizers provide test data, and participants submit their AI algorithms for evaluation. To ensure fairness and unbiased results, the test data remains hidden from participants.
Consequently, a secure environment is needed in which third-party AI solutions can operate on sensitive medical data. Radboudumc, a leading university medical center in the Netherlands, has built such a platform on AWS: grand-challenge.org.
This platform is a scalable, low-latency, and secure machine learning platform designed for the end-to-end development and benchmarking of AI algorithms in biomedical imaging. What started as a small, in-house system for algorithm benchmarking has evolved into a premier research platform for AI challenges in biomedical imaging. The 2021 migration to the AWS Cloud was pivotal in this expansion, enabling Radboudumc to scale operations and fulfill the increasing demand for secure and compliant AI solutions.
Validating AI for Healthcare
Organizations handling sensitive data, such as in healthcare, finance, and automotive industries, face the challenge of maintaining data privacy and security while encouraging collaboration and innovation. University medical centers, in particular, are at the convergence of clinical application and medical research, requiring the ability to securely analyze data in partnership with collaborators.
Moreover, these institutions frequently need to test new algorithms against their own datasets in a secure setting without exposing the data publicly. Today, over 100,000 researchers, developers, and clinicians globally use grand-challenge.org to validate their algorithms on private datasets. Organizers can upload sensitive data securely, and participants submit their AI models without exposing their intellectual property. Both parties retain control over their data and algorithms, promoting trust and confidentiality.
Grand-challenge.org utilizes an array of AWS services to support its platform’s safety, integrity, and scalability. The full potential of ML algorithms depends on access to high-quality data, both in terms of diversity and volume. While public datasets offer a good starting point, proprietary or regulated data is often key to developing impactful AI solutions. AWS Data Exchange simplifies the process of finding and using third-party data in the cloud, including datasets like those made available by Radboudumc.
Examples include histology data from the TIGER challenge and computed tomography (CT) images from the STOIC project.
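As an illustration of that workflow, the following sketch shows how a subscriber could list the data sets they are entitled to and export a revision's assets to their own S3 bucket using the AWS Data Exchange API via boto3. This is not code from grand-challenge.org; the data set, revision, asset identifiers, bucket, and key are hypothetical placeholders.

```python
import boto3

dx = boto3.client("dataexchange")

# List the data sets this account is entitled to through AWS Data Exchange.
entitled = dx.list_data_sets(Origin="ENTITLED")
for data_set in entitled["DataSets"]:
    print(data_set["Name"], data_set["Id"])

# Export the assets of one revision to an S3 bucket owned by the subscriber.
# All identifiers below are placeholders.
job = dx.create_job(
    Type="EXPORT_ASSETS_TO_S3",
    Details={
        "ExportAssetsToS3": {
            "DataSetId": "<data-set-id>",
            "RevisionId": "<revision-id>",
            "AssetDestinations": [{
                "AssetId": "<asset-id>",
                "Bucket": "my-research-data-bucket",
                "Key": "challenge-data/image-001.tif",
            }],
        }
    },
)
dx.start_job(JobId=job["Id"])
```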
Securing Data and Isolating Algorithms
Testing third-party developer code on sequestered data necessitates a robust security infrastructure to mitigate risks like data exfiltration or misuse of resources. On grand-challenge.org, this is achieved using Amazon SageMaker, which simplifies the deployment of algorithms. Network communication within this environment is limited to data transfer in and out of Amazon Simple Storage Service (Amazon S3), and data is encrypted both at rest and in transit. This reduces the attack surface and mitigates the risk of unauthorized access to the data on which the algorithms run.

Figure 1. Running SageMaker within its own virtual private cloud (VPC) in a private subnet with only an Amazon S3 gateway endpoint and access to two S3 buckets for algorithm inputs and outputs.
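The sketch below illustrates this isolation pattern with the SageMaker API via boto3. It is a minimal example rather than the platform's actual code: the job name, container image, IAM role, subnet, security group, bucket names, and KMS key are hypothetical placeholders, and grand-challenge.org may use a different SageMaker job type or parameters.

```python
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_training_job(
    TrainingJobName="algorithm-inference-job-0001",
    AlgorithmSpecification={
        # Container image holding the participant's submitted algorithm.
        "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/submitted-algorithm:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::<account>:role/AlgorithmExecutionRole",
    # Block all outbound network access from the algorithm container;
    # SageMaker stages the S3 inputs and outputs on the container's behalf.
    EnableNetworkIsolation=True,
    # Run in a private subnet whose route table only contains an S3
    # gateway endpoint, mirroring the architecture in Figure 1.
    VpcConfig={
        "Subnets": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    InputDataConfig=[{
        "ChannelName": "inputs",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://algorithm-inputs-bucket/job-0001/",
        }},
    }],
    OutputDataConfig={
        "S3OutputPath": "s3://algorithm-outputs-bucket/job-0001/",
        # Encrypt outputs at rest with a customer-managed KMS key.
        "KmsKeyId": "arn:aws:kms:<region>:<account>:key/<key-id>",
    },
    ResourceConfig={
        "InstanceType": "ml.g4dn.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 30,
        # Encrypt the attached storage volume as well.
        "VolumeKmsKeyId": "arn:aws:kms:<region>:<account>:key/<key-id>",
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```

With network isolation enabled, the submitted container cannot make outbound calls at all, while data transfer in and out of S3 over TLS is handled by SageMaker itself.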
Seamless Integration with Event-Driven Architecture
Challenges have submission periods throughout the year. To efficiently handle the variable traffic, grand-challenge.org employs cloud elasticity, dynamically scheduling and scaling SageMaker jobs based on demand. This helps reduce costs and minimizes operational complexity. The core of this serverless infrastructure is a combination of Amazon EventBridge, Amazon Simple Queue Service (Amazon SQS), and Amazon Elastic Container Service (Amazon ECS).
EventBridge monitors the state changes of SageMaker jobs. An SQS queue buffers these events, and ECS workers process them, scaling on demand. This allows the platform to efficiently manage a high volume of submissions without constant manual oversight.
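A minimal sketch of this wiring, with hypothetical resource names, is shown below: an EventBridge rule matches SageMaker training job state-change events and forwards them to an SQS queue for downstream workers to consume. The queue's resource policy must also allow events.amazonaws.com to send messages, which is omitted here.

```python
import json
import boto3

events = boto3.client("events")

# Match SageMaker training job state-change events for terminal states.
events.put_rule(
    Name="sagemaker-job-state-change",
    EventPattern=json.dumps({
        "source": ["aws.sagemaker"],
        "detail-type": ["SageMaker Training Job State Change"],
        "detail": {
            "TrainingJobStatus": ["Completed", "Failed", "Stopped"],
        },
    }),
    State="ENABLED",
)

# Forward matched events to the SQS queue polled by the ECS workers.
events.put_targets(
    Rule="sagemaker-job-state-change",
    Targets=[{
        "Id": "job-status-queue",
        "Arn": "arn:aws:sqs:<region>:<account>:job-status-queue",
    }],
)
```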
Results and Future Roadmap
Grand-challenge.org has assisted over 27,000 global participants in evaluating their algorithms against 45.6 terabytes of protected medical data. The platform has hosted over 130 public challenges organized by 122 different teams, with more than 10,000 algorithm submissions.

Figure 2. Bar graph showing an increase in inference jobs in relation to the challenge deadline. This increase is handled seamlessly by the scalability of the compute resources.
The platform’s architecture scales compute resources up and down on demand to run submissions in parallel, doing away with the need for dedicated infrastructure. Participants have used over 62,000 hours of compute time for inference alone, the equivalent of over seven years of continuous compute on a single machine (excluding compute time for algorithm training).

Figure 3. Map showing that challenge organizers and participants are distributed globally.
Grand-challenge.org is a secure and scalable solution that has benefited 2,010 academic organizations worldwide. AWS provides access to over 130 Health Insurance Portability and Accountability Act (HIPAA) eligible services and holds numerous certifications for industry-relevant global IT and compliance standards. For example, the Fachklinikum Mainschleife, a German hospital, migrated its IT infrastructure to the AWS Cloud, meeting strict data privacy requirements and adhering to the General Data Protection Regulation (GDPR).
This case study provides a compelling example of how institutions can securely leverage the power of AI. This is particularly relevant for those dealing with sensitive data such as in healthcare.