Building Trust and Ensuring Compliance with AI in IT
Artificial intelligence (AI) is rapidly transforming the IT landscape. Businesses are seizing unprecedented opportunities for growth and efficiency, especially within IT service and operations (ServiceOps). AI agents are proving invaluable, providing in-context insights, accelerating incident response, predicting change risks, and enhancing vulnerability management. However, the diverse and high-velocity data used by AI, including large language models (LLMs), presents significant challenges for data security and regulatory compliance.
Many AI models function as “black boxes,” making it difficult for users to understand how their data is processed, stored, and maintained in compliance with established policies. These technologies often incorporate multiple components and data sources, which further complicates data residency considerations. Without robust data governance, transparency, and security measures, sensitive corporate data, intellectual property, and customer information risk exposure through unintended data leakage.
Questions CIOs and CISOs Must Address
CIOs and CISOs are crucial in maximizing the benefits of generative and agentic AI while maintaining the security of applications, data, and overall operations. Staying informed about the latest developments and best practices in data security and compliance is necessary to leverage AI’s potential while mitigating risk. Selecting the appropriate AI platform requires careful consideration of organizational needs. Seven critical questions help guide this decision-making process:
- How are access controls implemented? Prioritize solutions that implement role-based access controls to ensure sensitive information is only accessible to authorized users. This includes varying permission levels, adherence to the least-privilege principle, and robust safeguards to prevent unauthorized access and data breaches.
- How is data encrypted? Select solutions that encrypt data during transmission over the internet and utilize allowlists to restrict unauthorized IP addresses.
- What are the data residency considerations? Ensure data storage within contracted regions complies with existing agreements, commercial regulations, and relevant federal regulations. This alignment with regional and sector-specific requirements simplifies regulatory adherence.
- What type of data is used to train AI models? Understand what types of data train AI models for specific use cases and ensure strict adherence to data privacy and compliance regulations.
- Do I retain ownership of my data? Ensure full ownership of your data and understand the LLM provider’s data logging, retention policies, and configuration options.
- Do the AI models expose my data to third-party AI vendors? Verify that chosen LLM providers align with the organization’s data compliance requirements.
- How are AI models audited? Facilitate a data compliance assessment with your chosen LLM or AI infrastructure provider.
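To make the first question concrete, the sketch below shows a minimal role-based access check following the least-privilege principle: a role is granted only the permissions it explicitly holds, and everything else is denied by default. The role names, resources, and permission strings are hypothetical illustrations, not part of any specific AI platform.

```python
# Minimal sketch of a role-based, least-privilege access check.
# All role and permission names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "it_support_agent": {"it_tickets:read", "it_tickets:write"},
    "hr_support_agent": {"hr_tickets:read", "hr_tickets:write"},
    "admin": {
        "it_tickets:read", "it_tickets:write",
        "hr_tickets:read", "hr_tickets:write",
        "audit_log:read",
    },
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly holds the permission.

    Unknown roles get an empty permission set, so access is denied by
    default (deny-by-default is the essence of least privilege).
    """
    return permission in ROLE_PERMISSIONS.get(role, set())
```

With this shape, adding a new role means granting only the permissions that role genuinely needs; nothing is inherited implicitly.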
BMC Helix: Addressing Key Security Concerns
BMC Helix customers retain complete ownership of their data; all incident tickets, knowledge articles, and files remain within their BMC Helix or third-party applications. With BMC’s open-first approach, organizations can leverage existing security and compliance mechanisms, removing concerns about data copying, retention, or misuse by the LLM. This fosters trust and clarity within AI operations. Data sources, including tickets, incidents, observability data, knowledge articles, and configuration data across BMC Helix applications, are governed by roles with permissions that set boundaries for GenAI responses. For instance, an IT support agent lacks access to HR support tickets, and support agents and administrators receive different answers to the same question based on their access credentials.
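The pattern described above, where two users receive different answers to the same question, can be sketched as role-scoped retrieval: documents are filtered by the requester’s role before they ever reach the generative model. The document fields and role names below are illustrative assumptions, not an actual BMC Helix schema.

```python
# Hedged sketch: filter retrieved documents by the requester's role
# before they are passed to a generative model, so answers are bounded
# by access credentials. Fields and roles are illustrative only.

documents = [
    {"id": 1, "text": "VPN outage resolution steps",
     "allowed_roles": {"it_support_agent", "admin"}},
    {"id": 2, "text": "HR onboarding ticket details",
     "allowed_roles": {"hr_support_agent", "admin"}},
]

def retrieve_for_role(role: str, docs: list) -> list:
    """Return only the documents the requesting role may see."""
    return [d for d in docs if role in d["allowed_roles"]]

# The same question yields different context (and thus different
# answers) for an IT support agent versus an administrator.
it_context = retrieve_for_role("it_support_agent", documents)
admin_context = retrieve_for_role("admin", documents)
```

Because filtering happens before generation, the model simply never sees content outside the requester’s permission boundary.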
Additionally, BMC Helix customers can configure which internal knowledge articles are used for GenAI responses. Content found within the customer’s third-party applications is indexed using an admin profile, and this content is available to end users interacting with BMC’s proprietary GPT model, HelixGPT.
Other benefits and factors include:
- BMC Helix utilizes robust data encryption, both in transit and at rest.
- Data within BMC Helix AI applications resides within the customer’s contracted regions.
- Organizations must directly contact their LLM provider for information on data residency policies outside of BMC Helix.
- BMC HelixGPT does not copy or store customer data in AI models; the data is used solely for training purposes, in compliance with strict data privacy and compliance regulations under BMC policies. Furthermore, the data is isolated and logically segregated from other customers’ access or use.
- In service management use cases, BMC HelixGPT leverages a stateless AI model to process each ITSM, employee navigation, service collaboration, or other request independently.
- In IT operations management (with AIOps) use cases, BMC HelixGPT is trained using the customer’s incident data, resolution worklogs, and other data to assist the AI with categorizing incidents, identifying root causes, summarizing impacts, and intelligently assessing risks.
- BMC HelixGPT does not expose customer data to third-party AI vendors. IT organizations nonetheless remain responsible for ensuring that their chosen LLM or AI infrastructure providers satisfy their data processing and retention requirements, as well as the commercial and federal compliance requirements specific to their BMC HelixGPT use cases.
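The stateless processing mentioned in the list above can be sketched as follows: each request carries all the context the model needs, and nothing is retained between calls, so identical requests behave identically. The function and its stub response are hypothetical illustrations, not an actual HelixGPT API.

```python
# Sketch of stateless request handling: every request is self-contained,
# and no conversation history or customer data survives the call.
# The function name and stub response are illustrative assumptions.

def handle_request(prompt: str, context: list) -> str:
    """Process one request independently; no state persists afterward."""
    # A real deployment would invoke the model on the combined input;
    # here we return a deterministic stub to show the stateless shape.
    combined = "\n".join(context + [prompt])
    return f"response to: {prompt} (input length: {len(combined)})"

# Because no history is kept, repeating a request gives the same result.
r1 = handle_request("summarize incident INC-1", ["incident details..."])
r2 = handle_request("summarize incident INC-1", ["incident details..."])
```

The design choice is that isolation falls out of the structure: with no shared state, one request can never leak data into another.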
The Bottom Line
As AI continues to transform IT, building trust through proactive data management, transparency, and robust security practices is essential. By prioritizing these elements, organizations can fully capitalize on AI’s transformative power while effectively managing related risks. Addressing the security and compliance challenges identified here lets organizations craft a future in which AI enhances work and drastically increases human productivity.
Contact BMC if you would like to discuss this further.