Structuring an AI Risk Assessment: A Guide for Businesses

Updated: Aug 18


by Basia Walczak, Manager, Privacy and Michael Pascu, Senior Manager, Artificial Intelligence


As artificial intelligence (“AI”) becomes a core part of business operations, understanding the associated risks is crucial. According to a recent survey, 65% of organizations have prioritized the evaluation of AI risk using their existing internal processes, and 55% are relying on current and pending laws to assess AI-related risks.¹ As AI adoption grows, it is more important than ever to identify potential issues and establish safeguards to ensure AI is used responsibly and effectively.


This article outlines the main steps organizations should take to identify and mitigate AI-related risks through an AI risk assessment. We highlight the key risks to consider and suggest a process for undertaking the assessment.


What is an AI risk assessment?

An AI risk assessment involves evaluating the potential adverse outcomes of deploying AI systems, including ethical, legal, operational, and technical risks. The goal of the AI risk assessment is to proactively identify and mitigate these risks while maximizing the benefits AI can bring to your business. Without appropriate care, AI could lead to privacy issues, reputational damage, or discriminatory outcomes.²


AI risk assessments help businesses identify and address potential risks, ensure regulatory compliance, protect against data privacy breaches, and mitigate biases that could result in unfair outcomes. By proactively managing these risks, businesses can build public trust, safeguard their reputation, and avoid legal liabilities.


What risks should be considered?

There are various AI-related risks that organizations should consider when evaluating their AI systems.³ These include:

  1. Validity and reliability: AI systems should fulfill their intended purpose, produce consistent and accurate results, respond appropriately to unexpected situations, and not fail or break unexpectedly.

  2. Accountability and transparency: Clearly define roles and responsibilities for everyone involved, from the design of the AI system through its deployment. Whenever possible, share information about how the AI system operates and the data it was developed with, in ways that impacted audiences can understand.

  3. Explainability: Provide clear explanations and reasoning behind the AI system’s decisions, predictions, or recommendations.

  4. Security and resiliency: Ensure AI systems are resistant to adversarial attacks, such as data poisoning and data or intellectual property theft. They should also be able to return to normal functionality after an adverse event.

  5. Safety: AI systems should avoid negatively impacting humans, property, or the environment. They should allow for human intervention and be capable of being shut down as needed.

  6. Bias and non-discrimination: AI systems should be accessible to all individuals. Assess the AI system’s performance and implement measures to manage and mitigate harmful discrimination, preventing the exacerbation of existing social biases.

  7. Privacy: Uphold human autonomy, identity, and dignity. Ensure the AI system’s measures for protecting personal and data privacy comply with applicable laws.
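
To make these dimensions concrete, the minimal Python sketch below shows one way a team might record identified risks in a lightweight register and rank them by a classic likelihood-times-impact score. The `Risk` fields, the 1-to-5 scales, and the sample entries are illustrative assumptions, not part of any prescribed framework.

```python
from dataclasses import dataclass

# Illustrative 1-5 scales; calibrate these to your own risk appetite.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

@dataclass
class Risk:
    dimension: str    # e.g., "Privacy", "Bias and non-discrimination"
    description: str  # what could go wrong, in plain language
    likelihood: str   # key into LIKELIHOOD
    impact: str       # key into IMPACT
    mitigation: str   # planned safeguard or control

    @property
    def score(self) -> int:
        # Classic risk-matrix product: likelihood x impact.
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]

# Hypothetical entries for illustration only.
register = [
    Risk("Privacy", "Training data contains unmasked personal information",
         "likely", "major", "Pseudonymize records before model training"),
    Risk("Security and resiliency", "Public model endpoint vulnerable to data poisoning",
         "possible", "severe", "Validate and version all inbound training data"),
]

# Surface the highest-rated risks first for stakeholder review.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.dimension}: {risk.description} -> {risk.mitigation}")
```

A register like this maps directly onto the assessment process described below: each row captures a risk, its analysis, and its planned mitigation in one reviewable place.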


When and how should you conduct an AI risk assessment?

An AI risk assessment should be conducted during the early stages of design and development. This allows for risks to be mitigated during the system’s design and validation, before deployment. Below are the key steps involved in each stage of an AI system’s lifecycle, with a particular emphasis on conducting an AI risk assessment.


1. Ideation and Feasibility

  • Identify a pain point or opportunity that would benefit from the use of artificial intelligence.

  • Develop a business case to determine whether the initiative is worth pursuing.

  • Ensure that this initiative aligns with your organizational risk appetite and strategic direction.

  • Receive a go/no-go decision on whether to continue.

2. Solution Design

  • After a successful business case, begin designing the AI solution in collaboration with your information technology, data governance, and AI/analytics teams.

  • Decide on the required data, success criteria, and technical requirements to operate the AI system.

2A. AI Risk Assessment

  • Conduct an initial risk assessment to determine whether the AI system exhibits signs of high-risk behaviour.

  • Engage with stakeholders from different departments to gather diverse perspectives, including ethical, legal, operational, and technical personnel.

  • Analyze the identified risks in terms of their likelihood and potential impact.

  • Develop strategies to mitigate the identified risks. These might include technical measures (e.g., data encryption), process changes (e.g., user training), or policy updates to ensure regulatory compliance.

3. Solution Development, Testing, and Validation

  • Develop the AI solution with the findings from the AI Risk Assessment in mind, implementing necessary safeguards where required.

  • Ensure that the AI system meets or exceeds its performance thresholds and criteria, and that it operates free from bias or discrimination.

  • Test and validate the AI system’s robustness, including against adversarial attacks or data leaks.

4. Deployment and Monitoring

  • Establish a monitoring framework to continuously track both the AI system’s performance and changes in AI laws that may impact its operation (a minimal performance-check sketch follows this walkthrough).

  • Ensure organizational visibility and address concerns through a well-documented risk assessment process. Maintain regular reporting and record-keeping as new risks are identified and mitigated.

  • Consider updates to existing policies where required, especially as the law evolves and regulatory bodies issue new guidance.
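
As one illustration of the monitoring activity in stage 4, the short Python sketch below compares a recent window of model outcomes against a validated baseline and flags degradation for review. The metric, baseline, window size, and alert threshold are assumptions to be adapted to your own system.

```python
from statistics import mean

# Hypothetical values; set these from your validated baseline
# and your organizational risk appetite.
BASELINE_ACCURACY = 0.92
ALERT_DROP = 0.05  # flag if recent accuracy falls 5+ points below baseline

def check_performance(recent_outcomes: list) -> None:
    """recent_outcomes: True where the AI system's output was correct."""
    recent_accuracy = mean(recent_outcomes)
    if recent_accuracy < BASELINE_ACCURACY - ALERT_DROP:
        # In practice this would notify the accountable owner and open
        # an entry in the risk register for investigation.
        print(f"ALERT: accuracy {recent_accuracy:.2f} is below baseline "
              f"{BASELINE_ACCURACY:.2f}; trigger review and possible rollback")
    else:
        print(f"OK: accuracy {recent_accuracy:.2f} within tolerance")

# Example: 50 recent predictions, 41 correct (0.82) -> triggers an alert.
check_performance([True] * 41 + [False] * 9)
```

The same pattern extends to other monitored risks: define a measurable signal, a baseline, and a threshold, and route alerts into the documented risk-assessment records described above.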


Conclusion

AI offers immense potential for business innovation and efficiency. However, the associated risks require careful management. By structuring a thorough AI risk assessment, businesses can navigate the complexities of AI deployment responsibly, ensuring that their AI initiatives are both beneficial and secure. Prioritize transparency, stakeholder involvement, and continuous monitoring to build a robust AI risk management framework.



How can we help?

INQ’s portfolio of AI services is customized to fit your specific needs and get you AI-ready. To learn more, visit our website at www.inq.consulting or contact us at ai@inq.consulting. To keep up with the latest in AI news, subscribe to the Think INQ newsletter.



² Wharton, University of Pennsylvania, “Artificial Intelligence Risk & Governance,” Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS), online: <https://ai.wharton.upenn.edu/white-paper/artificial-intelligence-risk-governance/>.

³ These risks have been adapted from those discussed in the NIST AI Risk Management Framework.
