Humanity and AI Need Grounding Values for a Bright Future!
In issue 33 of CSR NEWS 2019, we discussed how to take responsibility for the digital sphere. Since then, a number of highly innovative technical developments have taken place, but something crucial has – thankfully – remained the same. We are seeing numerous new digital innovations that, especially when used in combination, have the potential to bring about major social and economic change in many areas of life and work. Probably the best-known example is the market-ready use of generative artificial intelligence. Almost every company involved in digitalization is engaged, to a greater or lesser extent, in the development and application of such technologies. In addition to the mania for quantitative data collection driven by the striving for economic success, the race for the fastest learning curve of these systems is now also built into the economic logic. Once again the best players will prevail, because path dependency is merciless: at comparable prices, customers hardly use the second- and third-best systems. Is that inevitable? Or can we approach this challenge differently?
What is the decisive factor in this context that always remains the same? The top-level normative frameworks within which these developments take place! We still have constitutions, such as the Basic Law in Germany and other national constitutions, the Charter of Fundamental Rights of the European Union and, of course, the United Nations' Universal Declaration of Human Rights. The values that help us ensure a humane digitalization are still the same as those set out in issue 33 of CSR NEWS 2019:
- Humaneness
- Human Dignity
- Human Sovereignty
- Human Responsibility and Presence
- Transparency
- Humility
- Caution
- Revisability
- Freedom
- Privacy
- Diversity
- Tolerance
- Respect
All of these values potentially entail efficiency losses in the monetization of AI applications. This, too, has remained the same every time new innovations reach the market: market logic alone stays ethically blind when it is not hedged in by a normative framework of responsibility. The human factor and its dignity cannot be automated. Comprehensive ethics can never be digital, let alone programmable. It is therefore to be expected that there will continue to be strong lobbying pressure on politicians in this area to hang values merely as a fig leaf in front of the new technology and to derive as few standards as possible from them. It will be all the more challenging for society and politics to contain and cultivate digitalization in a targeted and collaborative manner, so as not to leave the normative achievements of the modern era open to renegotiation.
When it comes to AI specifically, rather than digitalization in general, we are dealing with a further novelty. Humans voluntarily accept the technological dominance of algorithmic results, to which quality, neutrality, impartiality, logic and rationality are erroneously attributed as if they were “natural”. The choice of term for this technology is not innocent in this. If we were talking about “high-performance statistical calculation models”, the limits of this technology would be much clearer. Instead, we are talking about a characteristic that has so far been predominantly attributed to humans: being intelligent. Instead of extending the range of human intelligence through high-performance statistical calculation models, intelligence is attributed wholesale to a supposedly superior artificial something. The fetish of profit maximization is thus on the verge of marrying the fetish of digitally mapped computational logic. In the worst case, humans make themselves slaves to statistical probabilities that only depict the past and, at best, the present, recombine them and extrapolate them into the future. If AI can reproduce human-like interactions and decisions in supposed perfection, we are constantly under pressure to justify our humanity. This is about trust in a special form: self-confidence – trust in our power of judgment and, above all, in our ability to orient ourselves. What is morally required: to accept the supposedly perfect result of the AI, or to make a different decision, contrary to the AI's suggestion, without comparable perfection? There is a danger that there will be nothing new under the sun, because human creativity will increasingly be replaced by the mathematical recombination of what already exists. Ultimately, this can lead to uniform assessments and evaluations in all areas of life – beyond what defines us as human beings. How do we differ from mathematics? Are we just biologically programmed machines whose thinking can be mapped digitally and algorithmically? Or are we analog beings with a wide range of worldviews and different approaches to the world, beings who can reflect, define themselves metaphysically and change themselves in a pluralistic world?
If we as humans maintain the YES to diversity and to deviation from mathematical predeterminism, we avoid the danger of enslaving ourselves. In other words: if we continue to trust ourselves, AI can be used as a useful tool – just like any other technology. AI then remains a suggestion system whose results and recommendations must always be assessed by humans and where the decision-making authority remains with humans. This will certainly require a different set of skills than the one largely implemented in our education systems at present. Machines are capable of knowledge; but reflecting, critically questioning, developing and orienting oneself independently, and ultimately making one's own decisions empathetically and taking responsibility for them must be learned and practiced. In addition to the much-vaunted media skills and the ability to use digital tools, this is exactly what we need to preserve human sovereignty and dignity: more orientation skills and self-confidence, and less knowledge reproduction. This article ends with a plea for the acceptance of diversity and a call for the democratic political shaping of this technological development against the backdrop of our existing norms. We don't need to renegotiate our values for AI either! What we need is Re-Realization: the concretization of value-based action for new technologies in critical fields of application. It is not just companies that have a duty here, but society as a whole – represented by its parliaments. Red lines in the use and development of AI are not necessarily innovation killers, but merely a safety net for the normative achievements of humanity!
This is a translated and revised version of my German publication „Werte für eine menschorientierte KI: Nichts Neues unter der Sonne?!“, 39. CSR-MAGAZIN 2023, Künstliche Intelligenz, pp. 13-16.