
AI-Powered Software: Avoiding Information Security Pitfalls

September 30, 2024

Artificial intelligence (AI) is revolutionizing how companies solve problems: it takes over repetitive tasks, improves operational efficiency, and reduces human error. Technology companies in particular are modifying their products and services to incorporate AI and using it to improve business operations – from improved customer support, personalized user experiences, and product recommendations to predictive analytics, business intelligence, and automated workflow and task management.

Enhancing products and services to include AI is a strategic business decision that often goes through the following process:

  1. Identifying the business use case
  2. Gathering and preparing data
  3. Choosing the best AI tools and technologies
  4. Building and training machine learning models
  5. Integrating the AI model into the software platform
  6. AI model deployment
  7. Continuous improvement and retraining
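A minimal sketch of steps 4 through 6 – building and training a model, integrating it into a platform, and deploying it – might look like the following. The library (scikit-learn), the synthetic dataset, and the pickle artifact are illustrative assumptions, not a recommended architecture:

```python
# Illustrative sketch of lifecycle steps 4-6: train, persist, deploy.
# scikit-learn, the synthetic dataset, and the file name are assumptions.
import pickle

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Step 4: build and train a machine learning model on prepared data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 5: persist the trained model so the software platform can load it.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Step 6: at deployment time, reload the artifact and serve predictions.
with open("model.pkl", "rb") as f:
    deployed = pickle.load(f)
print(deployed.predict(X_test[:1]))
```

Even this toy example carries a security pitfall of the kind the article describes: unpickling an untrusted file can execute arbitrary code, so model artifacts should be integrity-checked and sourced only from trusted storage – one instance of the review each lifecycle step deserves.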

Each step in this process, however, introduces new security risks, so AI governance and compliance should be evaluated at every turn. Constant vigilance throughout the development lifecycle can help minimize risk.

Sample information security risks posed by the integration of AI include:

  • AI-powered attacks – Cybercriminals commonly use AI to develop more sophisticated attacks such as automated phishing campaigns, AI-driven malware, deepfakes, adversarial attacks, brute force attacks, data poisoning, and denial of service (DoS) attacks.
  • AI system vulnerabilities – Like other third-party software products used by technology companies, AI platforms may have security vulnerabilities that should be evaluated through an organization’s third-party risk management process prior to integration.
  • Bias in AI models – Also referred to as machine learning bias, AI systems can produce biased results when the data used to train the AI, or the algorithm itself, is biased. This can produce discriminatory outcomes and, in turn, unfair or incorrect decisions.
  • Data privacy concerns – Because AI systems rely on enormous amounts of personal data for training, operation, and continuous improvement, they raise an extensive list of privacy concerns, including data collection and consent, data sharing and usage, data security, transparency and accountability, and regulatory change. The accelerated development and adoption of AI within companies has outpaced privacy regulations such as GDPR.
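The machine learning bias noted above can be checked with simple fairness metrics. As a hedged sketch – the group labels, predictions, and the 80% threshold below are illustrative assumptions, not part of any cited framework – the following computes a disparate impact ratio comparing positive-outcome rates between two groups:

```python
# Sketch of a basic fairness check: disparate impact ratio.
# Group labels, predictions, and the 0.8 threshold are illustrative.

def disparate_impact(predictions, groups, privileged):
    """Ratio of the unprivileged group's positive rate to the privileged group's."""
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical model outputs (1 = approved) for two demographic groups.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(preds, groups, privileged="A")
# A common rule of thumb flags ratios below 0.8 for further review.
print(f"disparate impact: {ratio:.2f}", "FLAG" if ratio < 0.8 else "OK")
```

A check like this is only a starting point; flagged results warrant deeper review of the training data and the algorithm itself.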

Given these risks, companies are beginning to adopt industry-accepted AI governance frameworks to manage and evaluate their operations. Standard-setting bodies such as the International Organization for Standardization (ISO), the National Institute of Standards and Technology (NIST), and HITRUST, among others, have created resources to help companies manage the risks of AI.

  • ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring the responsible development and use of AI systems.
  • ISO/IEC 23894 is an international standard that provides guidance on how organizations that develop, produce, deploy, or use products, systems, and services utilizing AI can manage AI-specific risk. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions.
  • NIST AI Risk Management Framework (AI RMF) is a voluntary framework designed to help organizations govern, map, measure, and manage the risks associated with AI systems.
  • HITRUST’s AI risk management assessment is built on 51 relevant and practical risk management controls harmonized with ISO/IEC 23894 and the NIST AI RMF.

As companies determine how to incorporate AI within their security programs, it is wise to discuss with a security advisor how these activities can be evaluated and reported to clients and stakeholders. For assistance, contact our IT Risk Advisory professionals.

Ben Phillips, Director, IT Risk Advisory Services
