Setting the Stage: Guidelines for Businesses Adopting Third-Party AI
Organizations aiming to leverage artificial intelligence tools, yet lacking the infrastructure to develop their own programs, face a critical challenge: establishing a clear framework for their relationships with AI providers. This requires defining the program’s purpose, outlining how it will address the business’s needs, incorporating specific contractual safeguards, and establishing a robust compliance program. Without these elements, businesses risk falling behind competitors in today’s fast-evolving market.
Given the current lack of detailed statutory or regulatory guidelines governing business applications of AI, the relationship between AI providers and their customers is primarily shaped by service agreements, licenses, and data-sharing stipulations. This ‘private governance’ through tailored contracts is the key to defining the obligations and responsibilities of both the provider and the customer. Time-worn contract templates will not suffice in the rapidly evolving AI landscape; both legal counsel and contracting parties must be prepared to ‘think different,’ as Steve Jobs might put it, if these tools are to be successfully leveraged.
Accessing and deploying AI tools may involve a complex web of agreements, including development contracts, data-sharing arrangements, and collaboration agreements tailored to each party’s requirements and contributions – particularly if the business customer can insist on negotiated terms rather than simply accepting online terms and conditions, which are generally non-negotiable.
Contractual Safeguards for AI Implementation
Several contract elements are vital for establishing practical guardrails and controls. They include:
- Statement of work or work order: A comprehensive description of the tool’s functionality, any training provided, and details regarding periodic updates and support.
- Notice and acceptance: A mechanism for the client to test the tool before any payments are due, along with a process to verify that the tool performs as expected.
- Data usage and protection: The origin of the data used to train the algorithm, data privacy and security measures, and an incident response plan.
- Indemnification: Specific coverage for unique AI-related risks, including intellectual property disputes, data breaches, security incidents, and output errors.
- Dispute resolution: This section should incorporate a period for amicable resolution before resorting to litigation or arbitration.
Implementing this contractual framework can provide benefits to both vendors and customers, allowing them to accelerate deal completion and revenue generation.
Compliance and Risk Management
Integrating an AI tool into business processes requires a comprehensive risk assessment and an AI compliance program. The risk assessment should start with stakeholder interviews to determine the type of AI tool envisioned and to confirm how it will meet the business’s needs. With the tool’s purpose in mind, the organization should collect documents and data on current AI usage and on the strategy for managing its risks. Key players – including IT, the employees who will use the tool, executive management, and the board of directors – should be interviewed, and the assessment should extend to third parties.
Once the decision has been made to go ahead with an AI tool, but before deployment, the business must establish an AI compliance program. Compliance controls address jurisdiction-specific requirements – for example, whether the tool will be trained on data from regions, countries, and states with specific laws governing the use of AI.
Further, these controls should address contractual obligations, such as:
- Cybersecurity controls.
- Access controls.
- Monitoring and quality assurance for outputs – especially the output of generative AI tools.
- Internal audits to assess how well the company’s compliance program controls are performing.
Essential Components of an Effective AI Compliance Program
The company’s compliance function will need the structure, resources, and expertise to put the following components in place:
- Standards, policies, procedures, and internal controls specific to AI use.
- A designated compliance officer, with a detailed accountability chart and job description.
- Background check processes for employees, board directors, and third parties, including checklists designed to identify background issues unique to AI.
- Training programs for daily users of the AI tool, executive management, directors, and third parties who will have access to the tool.
- Auditing, monitoring, reporting, and accountability protocols, which include detection, investigation, tracking, and corrective actions for unusual behavior or the underperformance of the tool, along with a reporting hotline or other means of communication.
- Incentive and disciplinary policies, including bonus provisions, bonus clawbacks, and access-termination policies.
- Updates and patch management processes.
The Path Forward
While the acquisition of AI tools offers organizations the potential to enhance their efficiency and effectiveness, several key questions must be addressed first. Initial inquiries should focus on the tool’s function and its contribution to the company’s growth and long-term success. With those answers in hand, the organization is prepared to build the risk assessment and compliance programs essential to successful deployment.
Disclaimer: This article is intended for informational purposes only and does not necessarily reflect the opinions of the publisher, Bloomberg Industry Group, Inc., or its owners.