Machine Learning – Basics and definition explained for beginners and managers

We explain the basics of Machine Learning and why it is so important.

Is Machine Learning really new?

Machine Learning (hereafter ML) is a sub-discipline of artificial intelligence and has been the subject of research for over 50 years. Thus ML is not new. However, after the initial successes disillusionment set in, and the field quickly faded into obscurity: the right applications simply could not be found, and data was not available in the necessary quantity and quality.

If Machine Learning is not new – Why is it so hyped?

Of course, ML algorithms have been greatly improved over the last 50 years. Nevertheless, better algorithms are not the main reason these methods are now being put into practice.

Due to the rapid increase in computing capacity (now available to everyone), it has become economically feasible to provide the enormous computing power that ML requires. Thanks to fast graphics cards, servers that would have topped the supercomputer rankings just a few years ago (and would therefore have been unaffordable) can now be rented for a few euros per hour.

In parallel, the amount of available data has also grown – this applies to a company's own data as well as to public data. Together, these provide an excellent basis for building genuinely meaningful and useful use cases for companies, for example predictive maintenance (estimating when a machine will fail), speech recognition, and price forecasting.

Even though a hype is currently developing from this, which will certainly (again) be followed by a phase of disillusionment, many business models can profit from ML or even be strategically developed further with it.

Understanding machine learning beyond the hype

What is an algorithm?

An algorithm for computers can be thought of as a recipe: it describes exactly which steps are performed, one after the other. Computers do not understand cooking recipes, only programming languages: in these, the algorithm is broken down into formal steps (commands) that the computer can execute.

Some problems can easily be formulated as an algorithm, e.g. counting from 1 to 100 or checking whether a number is prime. For other problems this is very difficult, e.g. recognizing written text or assigning keywords to texts. This is where machine learning helps: for a long time now, algorithms have been developed that analyze existing data and apply the knowledge derived from it to new data.
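The prime-number check mentioned above is an example of a problem that can be written down directly as an algorithm. A minimal sketch (in Python, chosen here only for illustration) tests every possible divisor up to the square root of the number:

```python
# A simple algorithm: check whether a number is prime
# by testing all possible divisors up to its square root.
def is_prime(n):
    if n < 2:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True

print([n for n in range(1, 20) if is_prime(n)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```

Every step is spelled out explicitly – there is nothing for the computer to "learn" here.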

Why are some algorithms called “learning”?

A machine learning algorithm has many degrees of freedom, the so-called parameters. Simplified, one parameter might, for example, associate messages containing the word "Trump" with the North American region. Typically, ML algorithms use many hundreds, often up to hundreds of thousands, of parameters. Adjusting these parameters so that the algorithm produces the correct results for the existing data is called learning.
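As a much-simplified illustration of "adjusting parameters", the following sketch fits a single parameter w of the made-up model y = w * x to some made-up data by trying candidate values and keeping the one with the smallest error – real learning procedures do this far more cleverly and for many more parameters:

```python
# Made-up known data: (x, y) pairs, roughly following y = 2x
data = [(1, 2.1), (2, 3.9), (3, 6.2)]

def error(w):
    # Total squared error of the model y = w * x on the known data
    return sum((w * x - y) ** 2 for x, y in data)

# "Learning": try candidate values 0.0, 0.1, ..., 4.9 and keep the best
best_w = min((w / 10 for w in range(0, 50)), key=error)
print(best_w)  # → 2.0
```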

Supervised Learning – What is that?

So-called "supervised learning" requires known data that already contain the logic you would like to apply to a new data set.

From this data, a training set and a test set are selected. The former is used to set the algorithm's parameters, while the latter is used to evaluate its performance. Quality metrics can be calculated on the test set, and the training process is terminated once the results are considered good enough (which may take a long time, or never happen at all!).

The algorithm learns the logic contained in this training set. An algorithm trained in this way can then apply the learned logic to new data that is sufficiently similar to the training set – for example, classifying it into the given categories product bought/not bought or cancellation/no cancellation.

You have to be very careful with some of these steps: for example, when the algorithm practices on the training set, it must not simply memorize everything "by heart", but must capture the underlying logic. If it fails to do so, the resulting problem is called "overfitting".
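The whole supervised workflow – labelled data, a training/test split, and a quality metric – can be sketched with a deliberately tiny example. The data and the 1-nearest-neighbour rule below are illustrative assumptions, not a production method:

```python
# Made-up labelled data: (feature value, label), e.g. a customer score
# and whether the product was bought (1) or not (0).
train = [(1, 0), (2, 0), (4, 0), (8, 1), (9, 1), (11, 1)]  # training set
test = [(3, 0), (5, 0), (10, 1), (12, 1)]                  # held-out test set

def predict(x):
    # 1-nearest-neighbour rule: copy the label of the closest training point
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Quality metric on the test set: share of correct predictions
accuracy = sum(predict(x) == label for x, label in test) / len(test)
print(accuracy)  # → 1.0
```

Because the quality is measured on data the algorithm has never seen, memorizing the training set "by heart" (overfitting) would show up here as poor test accuracy.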

Unsupervised Learning – What is that?

Unsupervised learning is an alternative to supervised learning when no known, logically structured data are available for practice. Algorithms that use unsupervised learning can, for example, divide a customer database into different customer groups (customer segmentation). Some algorithms decide for themselves how many such clusters to form, while others are given the number of clusters.

After this kind of machine learning, manual work follows again, and human creativity is needed to interpret the result: the clusters that were found must now be interpreted by domain experts, because the algorithm provides no explanation of why it formed the clusters the way it did.
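A deliberately tiny sketch of such a clustering algorithm is a one-dimensional k-means, where the number of clusters is given (k = 2). The purchase amounts below are made up; notice that the code groups the data but says nothing about what the groups mean:

```python
# Made-up purchase amounts of eight customers
amounts = [5, 6, 7, 8, 50, 55, 60, 65]
centers = [amounts[0], amounts[-1]]  # initial guesses for 2 cluster centers

for _ in range(10):  # a few refinement rounds
    clusters = [[], []]
    for a in amounts:
        # Assign each amount to the nearest center
        nearest = min(range(2), key=lambda i: abs(a - centers[i]))
        clusters[nearest].append(a)
    # Move each center to the mean of its cluster
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(centers))  # → [6.5, 57.5]
```

Interpreting the result – perhaps "small occasional buyers" versus "big spenders" – remains a human task.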

Another form of unsupervised learning is so-called dimension reduction. It can be used to extract so-called features from an existing data set, i.e. the components in which the data actually differ. For descriptions of clothes, for example, the colour could be extracted as a feature.
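A much-simplified stand-in for dimension reduction is to measure how much each column of a data set actually varies; real methods such as principal component analysis are more sophisticated, but the underlying idea of finding "the components in which the data differ" is the same. The data below are made up:

```python
# Made-up data: rows are items of clothing,
# columns are (colour code, size code)
rows = [(1, 5), (7, 5), (3, 5), (9, 5)]  # colour varies, size does not

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# Spread of each column: a column with zero variance carries no information
variances = [variance(col) for col in zip(*rows)]
print(variances)  # → [10.0, 0.0]
```

Here the first column (colour) carries all the information, so the second dimension could be dropped without losing anything.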

Reinforcement Learning – What is that?

Reinforcement learning currently plays a smaller role in business applications; in a broad sense it is also a supervised procedure, since a feedback signal guides the learning. The idea is to reward (and thus reinforce) successful behavior, while suppressing the behavior that has led to undesirable results.

For example, if you wanted to train an algorithm to play for money on ten one-armed bandits (each of which pays out differently well), you would first have it play five times at each machine and then play more often at the machines that produced the highest winnings in this first sample. The algorithm is still allowed to play occasionally on the machines that delivered little or no winnings, because the first five tries could have been an unlucky (and unlikely) coincidence and these might in fact be the best machines.
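The strategy described above can be sketched as follows (with three machines instead of ten and made-up payout probabilities; this is a toy illustration, not a production bandit algorithm):

```python
import random

random.seed(42)  # make the toy run reproducible

# Made-up machines: each pays out 1 with a different probability
win_prob = [0.1, 0.3, 0.8]

def play(machine):
    return 1 if random.random() < win_prob[machine] else 0

# First sample: play five times at each machine
wins = [sum(play(m) for _ in range(5)) for m in range(3)]

# Then mostly play the machine with the highest winnings so far,
# but keep trying the others occasionally, in case the first
# sample was just an unlucky coincidence.
best = max(range(3), key=lambda m: wins[m])
total = 0
for _ in range(100):
    machine = best if random.random() < 0.9 else random.randrange(3)
    total += play(machine)
```

The 90/10 split between exploiting the best-known machine and exploring the others is the core trade-off of reinforcement learning.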

How many learning algorithms are there?

There is a multitude of different learning methods; as representatives of supervised learning, only support vector machines and decision trees will be mentioned here.

For each of these methods there are different algorithms for adjusting the parameters to achieve the closest possible agreement with the known data. These algorithms are the actual learning procedures of machine learning. Examples are gradient descent, backpropagation, and genetic algorithms.
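Gradient descent, the first of these learning procedures, can be shown in its simplest form: repeatedly nudge a parameter against the gradient of the error until the error is minimal. The error function below is made up for illustration:

```python
# Minimal gradient descent: adjust parameter w to minimise the
# made-up error f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)
    w -= learning_rate * gradient  # step against the gradient

print(round(w, 4))  # → 3.0
```

Real training works the same way, only with thousands of parameters and an error measured against the training data.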

Depending on the application, certain algorithms work better or worse; the data can also influence this. In very many cases, very good results can be achieved with standard algorithms. In individual cases, however, it may be necessary to modify an algorithm or to develop your own.

Machine Learning at first still means: Manual work

As automated as all this sounds, machine learning still involves many manual steps: for example, the known data is often not available in the quality actually needed. The data must therefore usually be cleaned up first, in a step known as data cleansing.

Machine Learning is a statistical method

All three types of machine learning are statistical procedures, which means that only a large number of repetitions leads to good results. Computers handle this "dull" work very well, and thanks to greatly increased computing capacity we no longer have to wait long for the results.

Behind a successful machine learning project is always an interdisciplinary team

ML makes products and services more user-friendly, processes more efficient and forecasts more reliable. If management defines the use of machine learning as part of the corporate strategy, machine learning – combined with the right data – has the power to revolutionize the entire business model.

Against this background, the current hype that has developed around ML is very understandable.

With all these possibilities, one must not forget: machine learning is not a panacea. The decisive factor is data quality, i.e. the "fodder" for ML: "garbage in, garbage out" applies especially to ML. In addition, ML needs very large amounts of data, which are not always available.

The results produced by an ML algorithm are only as good as the people who procured and prepared suitable data with company-relevant questions in mind, and who repeatedly adjusted the algorithm's parameters until a professionally interpretable result was obtained.

In many cases it is not the technology that limits machine learning, but people's creativity. It is essential to find the use case appropriate to the business and then to design iteratively, drawing on all the domain knowledge your own employees bring to the table. Customer-centric innovation methods such as design thinking and lean prototyping make an important contribution here – not least by detecting failure early on.

 

Stephanie Fischer and Dr. Christian Winkler are founders and managing directors of datanizing, a Munich-based company that develops artificial intelligence strategies and concrete applications for organizations, which can be deployed usefully and profitably in their own operations. For years they have supported companies in designing and implementing data-driven, innovative solutions in machine learning, text analytics, and big data.


