Mastering AI Risks and AI Governance – A Guide

A guide for future-oriented companies

The implementation of artificial intelligence (AI) in companies is an important step toward innovation and efficiency. But with opportunities come risks. In this blog article, we take a look at AI risks and the importance of robust AI governance.

At a time when Artificial Intelligence (AI) is rapidly advancing and becoming more integrated into business operations, the complexity and potential for unforeseen challenges are also increasing. Recent developments in AI have changed the landscape and opened up new opportunities. But with these innovations come risks. Robust AI governance has therefore never been more important to ensure that organizations can reap the benefits of AI without risking undesirable consequences. Below, we look at AI risks and why careful governance and monitoring of AI systems is critical.

Identifying and classifying AI risks

AI systems are complex and can present unexpected challenges. Some of the main categories of AI risks are:

  1. Failure risk: What happens if an AI system fails? Is there a disaster recovery plan in place?
  2. Information risk: Are the outputs of AI systems accurate? Does the model need to be adjusted to reflect reality?
  3. Financial risk: Are the costs of developing and deploying AI systems justified?
  4. Liability risk: Who is liable for decisions in which AI systems were involved?
  5. Reputational risk: Do all uses of AI systems follow ethical standards?
  6. Data risk: Is data processed in a compliant manner? Where does the data come from, and what copyrights must be respected?
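One way to make these categories operational is a simple risk register that records each identified risk per system. The sketch below is illustrative only; the `RiskEntry` fields, the example systems, and the 1–5 severity scale are assumptions, not part of any standard.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """The six risk categories listed above."""
    FAILURE = "failure"
    INFORMATION = "information"
    FINANCIAL = "financial"
    LIABILITY = "liability"
    REPUTATIONAL = "reputational"
    DATA = "data"

@dataclass
class RiskEntry:
    system: str
    category: RiskCategory
    description: str
    severity: int      # 1 (low) to 5 (critical); illustrative scale
    mitigation: str

# Hypothetical entries for a fictional CV-screening model.
register = [
    RiskEntry("cv-screening-model", RiskCategory.INFORMATION,
              "Model may reproduce historical hiring bias", 4,
              "Quarterly fairness audit on protected attributes"),
    RiskEntry("cv-screening-model", RiskCategory.DATA,
              "Training data contains applicant PII", 3,
              "Pseudonymize records before training"),
]

# Filter the register for risks that need escalation.
critical = [e for e in register if e.severity >= 4]
```

Even a lightweight register like this gives governance reviews a consistent structure for prioritizing and tracking mitigations.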

An example of information risk is an AI recruiting system that discriminated against women. In this case, the AI was trained based on applications from the last ten years, with most applications coming from men. The algorithm learned that the gender characteristic “man” would be a good hiring criterion. Such malfunctions can have serious consequences and underscore the need for careful monitoring and oversight of AI systems.
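Monitoring can surface bias of the kind seen in the recruiting case, for example by comparing selection rates across groups (the demographic-parity difference). The numbers below are invented purely for illustration.

```python
# Hypothetical past hiring decisions as (group, hired) pairs.
# All counts are invented for illustration only.
decisions = ([("m", True)] * 70 + [("m", False)] * 30
             + [("f", True)] * 20 + [("f", False)] * 30)

def selection_rate(records, group):
    """Share of applicants in `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_m = selection_rate(decisions, "m")   # 0.70
rate_f = selection_rate(decisions, "f")   # 0.40
# A large demographic-parity gap flags the model for review.
parity_gap = rate_m - rate_f              # ≈ 0.30
```

A check like this does not prove discrimination on its own, but a large gap is exactly the kind of signal that should trigger the closer oversight described above.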

Developments on the regulatory side

Regulators have recognized the risks and are responding with legislation and guidance. Key developments include:

  • EU AI Act: a cross-EU approach to regulating AI applications that classifies AI systems according to their level of risk and establishes corresponding obligations for manufacturers, providers, and users. After a transition period, the regulation is expected to come fully into force in 2026. Violations can result in fines of up to 30 million euros or up to six percent of global annual turnover.
  • IDW EPS 861: a German standard for the audit of artificial intelligence that helps companies implement AI systems under current law. It provides a framework for assessing AI applications, including their development, implementation, and use.
  • Global Partnership on Artificial Intelligence (GPAI): an international approach that aims to develop AI in accordance with human rights and democratic values. Through collaboration between governments, industry, and civil society, GPAI provides a platform for sharing best practices and developing common standards to maximize the positive impact of AI worldwide.
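The EU AI Act's risk-based approach sorts systems into four tiers: unacceptable, high, limited, and minimal risk. The sketch below is a heavily simplified illustration of that idea; the use-case names and mappings are assumptions, and real classification requires legal review of the regulation's annexes.

```python
# Simplified, hypothetical mapping of use cases to the EU AI Act's
# four risk tiers. Not legal advice; the Act's annexes govern.
PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"recruiting", "credit_scoring", "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> str:
    """Return the risk tier and the obligation attached to it."""
    if use_case in PROHIBITED:
        return "unacceptable risk: banned"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment required"
    if use_case in LIMITED_RISK:
        return "limited risk: transparency obligations"
    return "minimal risk: voluntary codes of conduct"
```

Note that the recruiting system from the earlier example would land in the high-risk tier, which is precisely where the Act concentrates its documentation and assessment duties.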

Evaluating an AI system under the EU AI Act

According to a study by IBM, external regulatory and compliance obligations are the most important aspect of explainable AI for 50% of companies. Compliance with new regulations requires careful evaluation and classification of AI systems. Moreover, AI governance requires a holistic approach that considers people, processes, and technology to ensure responsible, transparent, and explainable AI:

  • People: Implementing AI requires a strong, interdisciplinary team. It is important to align stakeholders, generate the right level of interest, and encourage them to participate in ideation. Establishing business goals and KPIs in line with business controls and regulations is also critical.
  • Processes: The process of AI governance includes tracking and documenting data provenance, associated models, metadata, and overall data pipelines for audits. Documentation should include techniques, hyperparameters, and test metrics to increase transparency and visibility for stakeholders. Establishing a repeatable, end-to-end workflow with integrated stakeholder approvals can reduce risk and increase scale.
  • Technology: Establishing well-planned, well-executed, and well-controlled AI requires specific technology building blocks. The ideal solution should manage the entire AI lifecycle and provide the following capabilities: integration of data from multiple types and sources; openness and flexibility with existing tools; “self-service” access with privacy controls; automation of model development, deployment, scaling, training, and monitoring; networking of multiple stakeholders through customizable workflows; and support for building custom workflows for different people using governance metadata.
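The documentation and approval steps described under "Processes" can be sketched as a machine-readable audit record. The field names, example values, and the deployment gate below are assumptions for illustration, not an established schema.

```python
import json
from datetime import date

# Illustrative audit record; field names are assumptions, not a standard.
model_record = {
    "model": "churn-predictor",
    "version": "1.3.0",
    "trained_on": str(date(2024, 1, 15)),
    "data_provenance": ["crm_export_2023Q4", "web_events_2023Q4"],
    "hyperparameters": {"max_depth": 6, "learning_rate": 0.1},
    "test_metrics": {"auc": 0.87, "accuracy": 0.81},
    "approvals": [
        {"role": "data_owner", "approved": True},
        {"role": "compliance", "approved": True},
    ],
}

def ready_for_deployment(record: dict) -> bool:
    """Deployment gate: every stakeholder approval must be in place."""
    return all(a["approved"] for a in record["approvals"])

# Serialize the record so it can be archived for later audits.
audit_log = json.dumps(model_record, indent=2)
```

Capturing provenance, hyperparameters, and test metrics in one record, and gating deployment on recorded approvals, is one way to make the repeatable, auditable workflow described above concrete.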

A growing number of solutions, AI platforms, and frameworks with different focuses can help companies at these different stages. Examples include IBM watsonx, Dataiku, and AI Verify. It is important to analyze carefully which solution components fit best into the company’s own IT and AI strategy, and to ensure maximum flexibility for the rapid developments in this environment.

The bottom line: the importance of robust AI governance

Implementing AI is not without its challenges. Identifying and classifying risks, complying with new regulations, and carefully evaluating AI systems are critical to success.

Robust AI governance that goes beyond traditional IT governance is essential. It must consider the specific risks and requirements of AI and ensure that systems are operated ethically, securely, and in compliance with the law.

In a world where AI is becoming increasingly important, it is critical for companies to keep an eye on the risks and implement sound AI governance. The path to innovation must be taken responsibly, and careful planning and monitoring are key to success.

As the "Head of Data Strategy & Data Culture" at O2 Telefónica, Britta champions data-driven business transformation. She is also the founder of "dy.no," a platform dedicated to empowering change-makers in the corporate and business sectors. Before her current role, Britta established an Artificial Intelligence department at IBM, where she spearheaded the implementation of AI programs for various corporations. She is the author of "The Disruption DNA" (2021), a book that motivates individuals to take an active role in digital transformation.

