Reality and misconceptions about the future of artificial intelligence (AI).

A deep dive into four recurring statements about AI


Artificial intelligence (AI) is changing the way machines and people work together. It is improving customer interactions across industries and driving operational and process improvements. More than 40% of companies expect AI to be a “game changer,” and governments are investing as well: the German government plans to invest €3 billion in AI research by 2025 to accelerate the adoption and development of AI technologies. Yet 77% of companies also say that business adoption is a challenge, and to date few boardrooms are willing to weigh the risks and opportunities associated with AI.

AI is different from other technology innovations. To give a more complete picture of its capabilities and the opportunities it offers, in this article I address the four statements I hear most often from customers:

“AI drives business innovation and success.”

At the heart of AI and other leading technologies is data, and there is usually no shortage of it in this age of ever-growing data volumes. According to Statista, the global big data market will grow to $103 billion by 2027, more than double its size in 2018.

AI models depend not only on the quantity but also on the quality of data, and that quality must be ensured. In addition, most organizations struggle to capture and meaningfully correlate data stored in silos across the enterprise, data from external sources, and data streaming into the enterprise in real time.

Managing big data requires long-term commitment and planning for future growth. If an organization thinks too small, an AI investment may take a long time to pay off economically and to justify its business case.

Choosing the wrong platform or tools, or integrating them poorly, can also waste valuable time and add significant cost and complexity to implementation and ongoing management. Enterprise-level governance and security strategies are critical to avoiding regulatory and compliance issues when developing new technologies. As in other areas, AI technology is evolving faster than regulation, but regulation of data (sometimes designed country by country) and of AI will only increase. Since no responsible board can afford to remain unprepared while still wanting to capture the potential, it is worth considering, for example, including AI strategy in the annual assessment of company-wide ethics and conduct policies.

“AI means a revolution in intelligence.”

Although AI has advanced by leaps and bounds recently, it is still ‘intelligent’ only in a very narrow sense of the word. It would probably be more useful to think of what we have achieved as a revolution in computational statistics rather than a revolution in intelligence.

So far, most of the progress has been in what is often called “narrow AI” (German: schwache Künstliche Intelligenz). Machine learning techniques are used to develop algorithms and systems to solve specific problems, such as natural language processing.

The more complex problems are in the area of so-called “general AI” (German: allgemeine Künstliche Intelligenz). The challenge is to develop AI that can solve general problems in the same way as humans. Many researchers believe this is still decades away from becoming a reality.

Right now, worrying about evil AI is a bit like worrying about overpopulation on Mars. The bigger danger lies with humans themselves. No object or algorithm is ever intrinsically good or evil; what matters is how it is used. A 3D printer can print prosthetic limbs or weapons. GPS was developed to guide missiles and now helps deliver pizzas. Forming an opinion about an algorithm means understanding the relationship between humans and machines.

“The decision of an algorithm cannot be traced.”

Rule-based algorithms contain instructions written by humans and are therefore easy to understand: in principle, anyone can open them and follow the logic of what happens inside. But rule-based algorithms have a major drawback: they only work for problems for which humans know how to write instructions, and they scale poorly. Machine learning algorithms, by contrast, turn out to be remarkably good at problems where writing a list of instructions doesn’t work. The path the machine takes to a solution, however, often doesn’t make much sense to a human observer.
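The contrast can be sketched in a few lines of code. The following toy spam filter is purely illustrative (the rules, training data, and function names are my own assumptions, not any real system): the rule-based version is transparent but only covers cases its author anticipated, while the learned version derives word weights from labeled examples.

```python
# Toy contrast between a rule-based classifier and a learned one.
# All rules and data here are invented for illustration.

def rule_based_spam(text: str) -> bool:
    """Hand-written rules: easy to inspect, but only cover anticipated cases."""
    words = text.lower().split()
    return "winner" in words or "free" in words

# Tiny labeled training set (assumed toy data).
training = [
    ("free money now", True),
    ("winner winner", True),
    ("meeting at noon", False),
    ("project status update", False),
]

def train_perceptron(data, epochs=20):
    """Learn per-word weights from examples instead of writing rules."""
    weights = {}
    for _ in range(epochs):
        for text, is_spam in data:
            score = sum(weights.get(w, 0.0) for w in text.lower().split())
            predicted = score > 0
            if predicted != is_spam:
                # Nudge the weights of every word in the misclassified text.
                delta = 1.0 if is_spam else -1.0
                for w in text.lower().split():
                    weights[w] = weights.get(w, 0.0) + delta
    return weights

def learned_spam(text: str, weights) -> bool:
    return sum(weights.get(w, 0.0) for w in text.lower().split()) > 0

weights = train_perceptron(training)
print(rule_based_spam("free money now"))     # True
print(learned_spam("free money now", weights))  # True
```

The rule-based function can be read line by line; the learned weights, even in this tiny example, are just numbers whose meaning is only indirect, which is exactly the traceability problem the statement refers to.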

“Artificial intelligence (AI) is highly prone to error and skews data.”

There is a curious paradox in the human relationship with machines. Research shows that people often place too much trust in things they don’t understand, yet reject algorithms as soon as they learn that an algorithm can make mistakes. Researchers call this algorithm aversion: people are less tolerant of an algorithm’s mistakes than of their own, even when their own mistakes are bigger.

AI systems are only as good as the data we feed them. Poor or low-quality data can also carry implicit racial, gender, or ideological biases. Imagine the impact on a credit provider’s brand if it were discovered that the provider routinely rejects applications because of bias in its AI training data. It is therefore critical to develop and train these systems with unbiased data and to build algorithms that can be easily explained. Several research groups, including IBM Research, are already developing methods to reduce the bias that may be present in a training data set.
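One common first step in such work is simply measuring whether outcomes differ across groups. The sketch below checks one widely used fairness signal, the gap in approval rates between two groups (a demographic-parity check); the group labels and loan decisions are invented toy data, not results from any real system.

```python
# A minimal demographic-parity check on invented toy loan decisions.
# Group names and outcomes are illustrative assumptions.

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Share of applications from one group that were approved."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(records, "A")  # 2/3
rate_b = approval_rate(records, "B")  # 1/3
# A large gap between group approval rates is one warning sign of bias.
print(round(rate_a - rate_b, 2))  # 0.33
```

A gap like this does not prove discrimination on its own, but it is the kind of measurable signal that bias-mitigation methods, such as those from IBM Research, start from.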

In addition, to build trust between humans and machines, we need to adopt policies and invest in systems that promote co-development and shared learning. The Ethics Guidelines for Trustworthy AI, produced by the European Commission’s High-Level Expert Group on AI, for example, provide a set of principles for companies operating in the European Union. As more and more products and software incorporate machine intelligence, we should look for ways to build virtuous, ethical learning cycles between the companies that produce them and the people who use them.

AI clearly has a key role to play in the business world – now and in the future. It holds unprecedented potential for efficiency gains and new business models. Because AI is unlike any other technology, it also holds the potential for unexpected consequences and disruption on an unprecedented scale. No company should deploy AI blindly and without strategy, and no responsible board can afford not to engage with and understand its potential.

Britta Daffner has been at home in the technology and data industry for more than a decade. Her credo: driving innovation and digitalization in companies through technology and modern leadership. As Practice Leader Data & Technology Transformation at IBM, she enables companies to tap the full potential of their data, and as a coach she supports changemakers who want to make a difference in the corporate and business world. In 2021 she also published her book “Die Disruptions-DNA” (www.disruptionsdna.de), which inspires readers to actively shape the digital transformation.
