How can we make digitization fair? Digital ethics should foster mature companies and employees who ask critical questions of technology. To ensure security, social responsibility, transparency, and individual autonomy, IT decision-makers must promote user empowerment instead of digital nudging.
Digital ethics is intended to give people guidance for making moral decisions in the globally networked world. A handbook with answers to every question digitization poses? Great! Unfortunately, it is not that simple. In fact, digital ethics aims at exactly the opposite: it should cultivate responsible companies and employees who are empowered to ask critical questions of technology and its implementation, and to derive rules for fair digitization from the answers.
Technology as a world improver or value destroyer?
What remains of the dreams of the networked sharing community that wanted to make the world a better place through disruptive innovation? After decades of hyped hopes around cyberspace, disappointment over the “broken Web” dominates public discussion. Real-world protection mechanisms are failing, society is radicalizing into echo chambers, and the tech industry is suffering a loss of trust. The negative consequences of the global platform economy can no longer be hidden: the anonymization of work, job losses through automation, precarization through exploitative working conditions, and the formation of oligopolies pose new challenges for politics and business. Fears are spreading of a transhumanist future that would make human labor (and humans in general) superfluous and usher in a feudalism 2.0 in which moneyed elites subvert the democratic constitutional state.
Information technology has carried knowledge to the far corners of the world, but it has also contributed to the spread of disinformation and power asymmetries. The Internet’s prosumers are disenfranchised as tech companies define private experiences as freely available raw material and turn them into products. A pervasive surveillance culture makes the right to secrecy a sensitive and highly vulnerable commodity. People fall by the wayside because, out of convenience, they have entrusted every last corner of their personalities to machines, which reward them with manipulation and dependency. Critics such as Tim Berners-Lee, initiator of the Contract for the Web, and Tristan Harris, founder of the Time Well Spent movement and the Center for Humane Technology, are therefore calling for new rules for Web technologies.
Digital ethics: protecting human dignity and autonomy
What does an ethicist do? They question the contemporary conventions of human action for their individual and social compatibility, in order to point out paths to a good life. Questions about how to act in the digital economy abound: How is digital enlightenment changing our understanding of privacy? How can we ensure the autonomy of the individual rather than their exploitation? To what extent are we willing to allow a datafication of our lives that offers gains in quality of life and convenience but limits our freedom of choice?
Digital ethics revolves around a fundamental philosophical problem that has preoccupied humanity since antiquity: What matters more for the good life – the pursuit of individual happiness or moral duty? Put concretely for the information age: How can we reconcile the ideal of a decentrally organized, collaborative knowledge society of autonomous individuals with the reality of capitalist surveillance management? And what happens when someone flips the opt-out switch?
In practice, digital ethics should provide a framework for making digitization humane. After all, there is a lack of values-based digital literacy in education and business, of ethics-by-design business models, and of binding codes for Big Data and AI. Trend-setting lists of values do exist within the IT industry, academia, and at the political level. The Institute of Electrical and Electronics Engineers (IEEE), Alphabet (Google), and the German Informatics Society, for example, have codes that emphasize security, transparency, social obligation, respectful and competent advancement of technology, and overall accountability. The “EU Guidelines for Trustworthy AI” stress human oversight, technical robustness, and legal compliance. Although the codes weight these issues differently, they share a common premise: technology is not an end in itself but a means to serve people.
Digital ethics starts from the premise that human dignity stands at the center of all considerations. The 10 Commandments of the German Institute for Digital Ethics emphasize the self-protection of individuals, who should reveal as little of themselves as possible in the surveilled expanses of data-hungry networks and remain critical of technology’s temptations. But is individual freedom preserved if the Internet’s compulsion toward transparency is met with concealment? Or is that already a normative self-censorship created by the system of constant surveillance? Caution is right, but it must not be played off against the value of trust – and trust can only arise where courageous, self-confident, digitally mature people lead the way as role models.
Digital maturity strengthens individual judgment and resilience
Digital maturity is an attitude that must be learned: a lifelong process of self-knowledge and self-questioning, aimed at developing oneself in the digital space in a self-determined manner, guided by knowledge of one’s own abilities as well as limitations. Digital maturity is both the will to learn digital skills in practice and the ability to adopt perspectives other than one’s own in digital communication. Beyond the technical skills required to handle hardware and software with confidence, social, psychological, and cultural competencies are essential. What is needed is broad general knowledge, a high degree of abstraction in thinking, and a holistic approach. This meta-learning, in turn, develops personal identity and sharpens judgment.
Blind faith in the power of machines is just as naive as categorical abstinence. Digital technologies have long since become systemically relevant and are used at the government level, in critical security infrastructure, and in almost every aspect of daily life. The Internet and the real world are no longer separate parallel worlds – at the latest since the Internet of Things. But the algorithms that decide what is good and what is bad are in many cases unknown to users. Added to this is the error-proneness of software, which leads to security failures, diagnostic errors, and biases such as racial profiling. Also problematic is the often inadequate documentation of development processes, which prevents even the development teams themselves from fully understanding their systems. Are humans still in control, or are we living in a black box?
Digital maturity means understanding that there is no single answer to every question. We must learn to live with uncertainties that grow as the world becomes more complex. It is about developing resilience against domination by technology. That requires a healthy distrust of the digital world’s temptations and a focus on maintaining our own mental equilibrium. Observing from a distance is not an option here – one cannot avoid diving into the postmodern cyberworld of surfaces in order to plumb its depths, phenomenologically, with one’s whole self.
Gaining trust with user empowerment
The realization that everyone is embedded in a potentially manipulative system and the sovereign exercise of one’s own agency do not have to be mutually exclusive: the more one knows about the psychological reach of algorithmic logic, the better one can outsmart it. Empowerment gives users back the confidence that they can make things happen within their own scope for decision-making.
Instead of squeezing ever more efficiency out of employees and pushing ever more personalization onto customers, good digitization should focus on the duty to create value in a measured and humane way, on the basis of proportionality. For the Internet and Big Data, this means that only through the greatest possible autonomy (data sovereignty) and a sparing use of personal data (data minimization) can companies regain the lasting trust of customers and citizens that digitization promotes sustainable well-being.
Psychological manipulation is not necessary to sell a product. Creating sustainable value instead of chasing rapid growth and short-term returns not only makes people happy but also secures the company’s long-term existence. To achieve true integrity, you have to model autonomy instead of enforcing it through command and control. Progress should support people in their positive self-perception instead of confusing them. For developers of technological systems, this means offering “added performance under the condition of controlled entropy minimization,” as Sarah Spiekermann puts it in her book Digital Ethics – A Value System for the 21st Century.
4 tips for ethical design in web development
Compliance
Compliance focuses not only on legal aspects but also on the social responsibility of a company’s own products and services. Ethical design naturally adheres to legal requirements for protecting the user, such as the GDPR’s “privacy by design” principle. It also focuses on serving the customer’s deeper social needs (exchange, information, entertainment, learning, relaxation, etc.) instead of being geared solely to sales-driven marketing specifications.
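To make “privacy by design” a bit more concrete, here is a minimal, hypothetical TypeScript sketch of purpose-bound data minimization: each processing purpose declares the only fields it may collect, and everything else is dropped before storage. The field names, purposes, and the `minimizeForPurpose` helper are illustrative assumptions, not part of any real API.

```typescript
// Hypothetical sketch: purpose-bound data minimization ("privacy by design").
// Field names and purposes are illustrative, not from any real library.

type Purpose = "newsletter" | "checkout";

// Each processing purpose declares the only fields it may collect.
const allowedFields: Record<Purpose, string[]> = {
  newsletter: ["email"],
  checkout: ["email", "name", "shippingAddress"],
};

// Drop everything the stated purpose does not strictly require.
function minimizeForPurpose(
  raw: Record<string, unknown>,
  purpose: Purpose,
): Record<string, unknown> {
  const keep = new Set(allowedFields[purpose]);
  return Object.fromEntries(
    Object.entries(raw).filter(([key]) => keep.has(key)),
  );
}

// Example: a signup form that also captured tracking data by accident.
const submitted = {
  email: "user@example.org",
  name: "Ann",
  browsingHistory: ["/pricing", "/blog"],
};
// Only the email survives for the "newsletter" purpose.
console.log(minimizeForPurpose(submitted, "newsletter"));
```

The design choice here is that minimization is enforced structurally at the point of collection, rather than left to the discipline of individual developers downstream.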
Freedom of choice
It is a commonplace that clarity, simplicity, and intuitive operation make web interfaces easier to navigate. But appealing to the user’s impulses to act (calls to action) is often confused with steering those actions through digital nudging. Exaggerated feedback in the form of rewards creates false incentives. Addictive design encourages compulsive parallel consumption, which can lead to attention lapses, information overload, and addiction. It makes more sense to support the consumer’s balance and focus through intrinsic motivation, so that their freedom of choice is preserved.
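What preserved freedom of choice can look like in practice is sketched below, in hypothetical TypeScript, as a non-nudging consent model: no optional category is pre-checked, and accepting or rejecting everything are equally easy, symmetric actions. All names (`ConsentState`, `defaultConsent`, `setAll`) are illustrative assumptions, not a real consent library.

```typescript
// Hypothetical sketch of non-nudging consent defaults. Every optional
// category starts opted out, and changing all categories at once is a
// single call in either direction - no asymmetric "dark pattern" flows.

interface ConsentState {
  necessary: true;    // required for the service to work; not a choice
  analytics: boolean;
  marketing: boolean;
}

// Opt-in by design: no optional category is pre-checked.
function defaultConsent(): ConsentState {
  return { necessary: true, analytics: false, marketing: false };
}

// "Accept all" and "reject all" are symmetric, equally easy actions.
function setAll(consent: ConsentState, granted: boolean): ConsentState {
  return { ...consent, analytics: granted, marketing: granted };
}

const initial = defaultConsent();
console.log(initial.marketing); // false: the user was not nudged into it
console.log(setAll(initial, false)); // rejecting everything is one action
```

The point of the sketch is symmetry: whatever the interface makes easy in one direction, it makes equally easy in the other, so the user’s decision - not the layout - determines the outcome.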
Mindful environment
A calm, safe environment promotes mindfulness and creates the emotional safety in which users can build trust. Rhythmic routines, including pauses, and an authentic learning environment contribute to a low-distraction grounding that favors concentration and individual engagement. Users are not diverted from their actual motivation and can experience lasting satisfaction instead of the gratification of short-lived impulses. The same applies to group dynamics: instead of building up pressure to conform through status-driven self-presentation, clearly structured and well-moderated communication spaces should convey a sense of cooperative, harmonious togetherness.
Cultural diversity
One important factor is adapting web interfaces, apps, and online stores to regional and cultural characteristics. The point is to align design and product development with the diversity of real communities rather than with an artificial homogenization that does not reflect the world as it is. How a product is framed within the customer’s own culture is a decisive factor in whether they buy it. Diversity in design can prevent misunderstandings caused by translation problems and at the same time build a stronger emotional attachment to the product. Intercultural learning within the organization likewise benefits product development and the understanding of foreign markets.