The Imperative of Meaningful Stakeholder Engagement in AI
One of the most resonant messages from the 2025 Paris AI Action Summit was the call for increased public and private-sector investments to foster greater technological innovation. However, despite a shared commitment to more inclusive and sustainable AI systems, the path to achieving this in partnership with those whose lives are directly impacted by AI remains unclear.

We must address the ambiguity surrounding “stakeholder engagement.” Commercial entities, especially “Big Tech” companies, wield significant control over how AI is developed and who participates in that process. While regulations and civil society efforts work to define “public interest AI,” the private sector lacks consistent incentives for transparency and accountability.
Stakeholder groups, including civil society organizations, marginalized communities, labor groups, and consumers, are compelled to engage with corporate entities to mitigate the potential risks and harms of AI technology. Yet clear pathways for this engagement are often absent. Regardless of the domain, technology companies need guidance on how to engage stakeholders effectively.
Corporate AI developers recognize the importance of external stakeholders in creating successful, safe technology. However, their incentives and risks differ from those of public-sector organizations: private-sector companies aim to build and market products quickly, and recommendations for stakeholder engagement must account for these different incentives.
Inclusive and equitable stakeholder engagement is challenging. Regardless of the domain, sector, or audience, achieving it requires consistent, sustained effort, and mistakes will be made along the way. The interaction between those seeking input and those providing it can itself be harmful. History shows that some forms of stakeholder participation harm marginalized communities by exploiting their intellectual labor, and participants may experience their time and effort as “wasted” when their input is not integrated into the final output.
The diversity of stakeholders also makes it challenging to translate insights into action. People arrive with intersecting identities and experiences. Their demands, concerns, and insights may converge, but they are just as likely to conflict with what other stakeholders are saying and prioritizing. Parsing seemingly conflicting input and adjudicating between stakeholders is one of the most difficult aspects of working with external stakeholders and the public, especially because, ultimately, certain voices or insights may have to be prioritized over others.
Ultimately, stakeholder engagement is not a binary. The degree to which participation is harmful or empowering depends on how stakeholders are treated, how power relations are managed, and what the engagement produces. A broader infrastructure is needed, built not on unearned trust but on transparently addressing existing mistrust. This means straightforward communication across various languages and mediums, so that literacy, language, ability, and physical accessibility are not barriers to participation.
Stakeholder participation is not inherently ‘good.’ It cannot rectify harmful systems, such as AI designed for military use. It is, however, essential for involving socially excluded communities in the decisions that affect them. Without advocates for social equity, engagement can become a form of ethics-washing, giving technologies a veneer of social good while they cause physical and social harm.
Dr. Tina M. Park is the Head of Inclusive Research & Design at the Partnership on AI. She focuses on working with impacted communities on equity-driven research frameworks and methodologies to support the responsible development of AI and machine learning technologies.