The Challenge of Accountability in General-Purpose AI
The development and deployment of general-purpose AI (GPAI) systems have raised significant concerns about accountability. Unlike static product offerings, GPAI systems evolve with use, making it difficult to assign accountability for how they are developed. A recent survey conducted by MIT Sloan Management Review and Boston Consulting Group (BCG) found that 73% of experts agree or strongly agree that GPAI producers can be held accountable for how their products are developed.

Experts emphasize that holding GPAI producers accountable requires a multifaceted approach. Jeff Easley, general manager of the Responsible AI Institute, notes that “just as pharmaceutical companies are accountable for drug safety or automotive manufacturers for vehicle standards, AI companies can reasonably be expected to implement rigorous testing, safety measures, and ethical guidelines during development.” Yan Chow, global health care lead at Automation Anywhere, agrees, stating that “general-purpose AI producers can and should be held accountable for how their products are developed, just as companies selling any other goods and services are held accountable for labor and manufacturing practices, product design, safety risks, marketing promises, and waste recycling.”
The Need for Regulation
Despite the consensus on the need for accountability, some experts are skeptical that meaningful accountability for GPAI producers is achievable without regulation. Damini Satija from Amnesty International argues that “the global discourse on AI governance is heavily weighted toward these producers self-governing through nonbinding principles on ethical or responsible use of AI that can be rolled back with zero accountability.” Philip Dawson, head of AI policy at Armilla AI, agrees that “most obligations on foundation model developers exist within high-level, voluntary frameworks.”

Regulatory instruments such as the EU AI Act and the EU’s General Data Protection Regulation, along with the European Commission’s voluntary General-Purpose AI Code of Practice, are seen as crucial to ensuring transparency, accountability, and trust. Rainer Hoffmann, chief data officer at EnBW, highlights the EU AI Act’s requirement that GPAI producers supply technical information about their models to the European AI Office and downstream providers, along with a summary of the training data used.
A Multifaceted Approach to Accountability
Experts propose a combination of industry-led ethical guidelines and standards, regulatory frameworks, audits and assessments, and transparency and explainability of AI systems. Jai Ganesh, chief product officer at Harman Digital Transformation Solutions, suggests that “industry-led ethical guidelines and standards, regulatory frameworks to establish clear guidelines and regulations for AI development, audits and assessments to help identify potential biases and errors in AI systems, and frameworks for transparency and explainability of AI systems” are necessary.

Rebecca Finlay, CEO of Partnership on AI, emphasizes that “safety is a team sport.” GPAI producers must proactively mitigate risks, companies that use AI products must establish their own governance frameworks, and users need education on AI risks and risk mitigation.
Recommendations for Promoting Accountability
- GPAI producers should hold themselves accountable by adhering to established ethical AI guidelines and industry standards, conducting regular audits and impact assessments, and prioritizing transparency.
- Businesses should boost their engagement with policymakers to shape regulations and help hold all relevant actors in the AI ecosystem accountable.
- Businesses should better collaborate with each other to define transparency requirements for GPAI producers.
- Lawmakers and policymakers should ensure that meaningful regulation serves as a baseline for GPAI accountability, and should establish independent, third-party watchdog organizations with specialized technical knowledge and adequate resources for oversight.
By implementing these recommendations, we can promote accountability for GPAI producers and ensure that the development and deployment of GPAI systems align with responsible AI practices.