Ethics, Trust, and Explainability in Artificial Intelligence (AI)

Opening up the AI "black box" and understanding AI ethics

As artificial intelligence (AI) continues to evolve at a rapid pace, becoming part of ever more business processes, and as pressure from regulators and customers keeps growing, we need to address some very deep ethical questions. AI designers and developers, in particular, can help reduce bias and discrimination by attending to these ethical dimensions and using supporting toolkits.

If you were told that your loan application had been rejected for no apparent reason, would you accept that decision? And if you knew that an autonomous car could be manipulated into misreading speed limit signs, would you drive it? Of course not.

We humans live by accepted ethical standards, which are enforced by laws, regulations, social pressures, and public discussion. While ethical norms and values may change over time and across cultures, they have played a critical role in decision making since early human civilization.

In business, the question of ethics is also not new. But as artificial intelligence (AI) continues to evolve at a rapid pace, entering an increasing number of business processes and supporting decision-making, we need to address very deep ethical questions without delay.

Ethical stumbling blocks of AI

In 2019, customer complaints surfaced accusing Apple Card’s credit scoring algorithm of gender discrimination. And security researchers at McAfee used a simple trick to fool Tesla’s intelligent cruise control: they placed a two-inch strip of tape on a 35 mph speed limit sign (slightly extending the middle of the 3), and the car’s system misread it as 85 mph and adjusted its speed accordingly.

Incidents like these are why the responsible use of data and algorithms has become a central element of competitive advantage.

While consumers are concerned about societal issues such as shared prosperity, inclusion, and the impact of AI on employment, companies are focusing on organizational implications such as:

  • Regulators such as the European Union are working on legal frameworks that are increasingly binding on companies. On April 21, 2021, the European Commission presented the first-ever legal framework for AI, which classifies AI applications by level of risk.
  • If a model unreasonably discriminates against a certain group of customers, this can lead to serious reputational damage.
  • Transparent decision making builds customer trust and increases their willingness to share data. For example, 81% of consumers say they have become more concerned about how companies use their data over the past year.

Example: IBM Ethics Guide

Accordingly, many companies have already introduced their own ethical AI guidelines. A company committed to responsible innovation for more than 100 years, IBM identifies the following five main areas in the development of responsible AI systems:

  • Accountability: AI designers and developers are accountable for ethical AI systems and their outcomes.
  • Alignment with Values: AI should be designed to align with the norms and values of its user group.
  • Explainability: AI must be designed so that humans can understand its decision-making process (see the sketch after this list).
  • Fairness: AI must be designed to minimize bias and promote inclusiveness.
  • User data rights: AI must be designed to protect users’ data and to let users retain control over data access and use.
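To make the explainability principle more concrete, here is a minimal sketch of what exposing a decision-making process can look like for the loan example above. The dataset, feature names, and logistic-regression model are hypothetical illustrations, not IBM's method; for a linear model, the per-feature contributions to the decision score can be read off directly.

```python
# Minimal sketch: explaining a single loan decision made by a linear model.
# Dataset, feature names, and the applicant's values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]

# Synthetic data standing in for a real loan portfolio.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Contribution of each feature to the decision score (coefficient * value)
# for one rejected applicant -- this is what the applicant could be shown.
applicant = np.array([-0.8, 1.2, 0.1])
contributions = model.coef_[0] * applicant
for name, value in zip(feature_names, contributions):
    print(f"{name:>15}: {value:+.3f}")
print(f"{'intercept':>15}: {model.intercept_[0]:+.3f}")
```

For more complex models, dedicated explanation methods are needed, which is where the toolkits discussed below come in.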

The Future of Ethical AI Systems

Criteria and metrics for ethical AI systems will ultimately depend on the industry and the use case in which they are applied. But AI designers and developers can help reduce bias and discrimination by keeping these five areas of ethical consideration in mind.

AI systems must remain flexible enough to be continuously maintained and improved as ethical issues are discovered and resolved. Various dashboards and toolkits can support this process. One example is AI Fairness 360, an open-source toolkit that helps investigate, report, and mitigate discrimination and bias in AI models; another is the open-source AI Explainability 360 toolkit, which helps explain the decisions made by AI algorithms.
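As a rough illustration of how such a toolkit fits into the development workflow, the sketch below uses the aif360 package (AI Fairness 360) to measure bias in a toy loan dataset and to reweigh it as one possible mitigation. The DataFrame, its column names, and the group encodings are assumptions made for the example.

```python
# Sketch of a bias check with AI Fairness 360 (pip install aif360).
# The toy DataFrame, its columns, and the group encodings are assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy loan data: "gender" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "gender":   [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 45, 80, 52, 58, 47, 75, 50],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact:", metric.disparate_impact())  # 1.0 means parity
print("Statistical parity difference:", metric.statistical_parity_difference())

# One possible mitigation: reweigh training samples before fitting a model.
reweighed_dataset = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
```

Metrics like disparate impact can be tracked continuously, so that the system is re-examined whenever the data or the model changes.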

Summary

Ethical decision making is not just another technical problem to be solved; it must be built into the AI design and development process from the very beginning. Ethical, human-centered AI must be designed and developed in accordance with the values and ethics of the society or community it affects.

For companies using AI, this should be a top priority: every employee must understand the risks and feel responsible for the success of AI in the organization.

As head of "Data & Technology Transformation" and Account Partner, Britta drives corporate transformations every day and, with the platform she founded, "dy.no", supports doers who want to change things in the corporate and business world. In 2021 she also published her book "Die Disruptions-DNA", which inspires readers to actively shape the digital transformation.
