A manifesto for diversity and a call for political design
(First published in German in issue 39 of CSR MAGAZIN.)
In CSR MAGAZIN 2019, issue 33, we outlined what digital responsibility should look like. Since then, some highly innovative technical developments have taken place, but something crucial has remained the same – thankfully!
We are seeing numerous new digital innovations that carry great social and economic potential for change across a huge range of areas of life and work. Probably the best-known example is the market-ready use of so-called artificial intelligence in interactive applications such as ChatGPT and Bard, but also in lesser-known applications such as LLaMA, YouChat and Claude, and in their counterparts from China such as Ernie or HunyuanAide. Almost every company active in digital markets is engaged, to a greater or lesser extent, in applying such technologies or developing corresponding software products in-house.
Path dependency is merciless
What is new here is that experiences from past interactions within these systems can change future interactions and, at best, continuously improve results. They have, so to speak, built-in experiential learning – operating at top speed and accessible to everyone. To the quantitative data-collection frenzy, economic logic now implicitly adds a race for the fastest learning curve of the systems. There can only be one?! The best player seems to prevail once again, because path dependency is merciless and the second- and third-best systems risk going unused by customers. Is that inevitable? Or can we do things differently in the European Union, for example, with our joint commitment to an ecologically oriented social market economy?
Values for human digitalization
So what is the crucial thing that has remained the same in this context? The normative framework within which these developments take place! We still have state and national constitutions – here in Germany our Basic Law – the Charter of Fundamental Rights of the European Union, and the United Nations’ Universal Declaration of Human Rights. The values that help us ensure human-focused digitalization are still the same as those listed in the 2019 article (see above): Human Sovereignty, Human Dignity, Human Presence and Responsibility, Transparency, Revisability, Diversity, Tolerance, Respect, Humility, Prudence, Privacy, Freedom, and finally Humanity. All of these values potentially create efficiency tradeoffs in monetizing AI applications. That, too, remains the same. Because the human factor, with its dignity, cannot and must not be automated. And because comprehensive ethics can never be digitized or algorithmically programmed.

Accordingly, it is to be expected that in this area, too, great lobbying pressure will continue to be exerted on politicians to attach values to the new technology merely as a vague fig leaf and to derive as few binding standards from them as possible. It will be all the more challenging for society and politics to contain and cultivate digitalization in a targeted and collaborative manner, so that the normative achievements of the modern era are not put up for renegotiation.
“Intelligence” for an artificial something
If we are talking about AI rather than digitalization in general, we are dealing with another novelty: humans voluntarily accepting the dominance of algorithmic results, to which they erroneously attribute quality, neutrality, impartiality, logic and rationality. The choice of terminology for this technology is not innocent here. If we spoke of “high-performance statistical computational models,” the limitations of the technology would be much clearer. Instead, we speak of a trait previously attributed predominantly to humans: being intelligent. Rather than seeking to extend the reach of human intelligence through high-performance statistical computational models, intelligence is attributed entirely to a supposedly superior artificial something. The fetish of profit maximization is thus joined by the fetish of digitally mapped computational logic.
Humans make themselves slaves to statistical probabilities that only map the past and, at best, the present, recombining them and extrapolating them into the future. If AI can replicate human-like interactions and decisions with supposed perfection, we are constantly under pressure to justify our humanity. This touches a special form of confidence: our self-confidence – confidence in our power of judgment and, above all, in our orientation competence. What is morally required: to accept the supposedly perfect result of the AI, or to decide differently, contrary to the AI’s suggestion and without comparable mathematical perfection?
There is a danger that there will be nothing new under the sun, because human creativity will increasingly be replaced by the mathematical recombination of what already exists. In the final analysis, this leads to uniform assessments and evaluations in all areas of life – detached from what makes us human. How do we differ from mathematics? Are we merely biochemically programmed machines whose thinking can be mapped digitally and algorithmically? Or are we analog beings with a wide range of ideological imprints and different approaches to meaning, who can reflect, define themselves metaphysically, and change in a pluralistic world?
Orientation competence and self-confidence
If we as human beings say YES to diversity and NO to mathematical predeterminism, we avoid the danger of enslaving ourselves. In other words: if we continue to trust ourselves, AI can be used as a useful tool – just like any other technology. AI then remains a suggestion system whose results and recommendations must always be assessed by humans, and decision-making authority always remains with humans. Certainly, this requires a different set of competencies than most of our educational systems currently cultivate. A machine can gather knowledge, but reflecting, questioning critically, developing and orienting oneself independently, and finally making one’s own decisions empathically and taking responsibility for them are things that must be learned and practiced. In addition to the much-vaunted media literacy and the ability to apply digital tools, this is exactly what we need to maintain human sovereignty and dignity: more orientation competence and self-confidence, and less knowledge reproduction. Thus, as in 2019, this article again ends with a plea for diversity and a call for the active political design of even this technological development against the backdrop of our existing norms.
Emergency switches are not innovation killers!
We don’t need a renegotiation of our value base for AI either! What we need is only the concretization of values-based action for new technologies in critical fields of application … Thank God we have been on this path in the European Union since the turn of the millennium! It is not companies alone that have a duty here, but society as a whole – represented by its parliaments. Red lines in the use of AI are not innovation killers, but safety nets for the normative achievements of our humanity. They are like red traffic lights, which did not stop the innovation process in vehicle development either – they only stop vehicles from harming people! So let us design emergency switches for the use of this technology, too!