Artificial Intelligence and Society: Philosophy of Fallibility
Part 23: Will Superhumans Eradicate Ordinary Human Beings?

KOBAYASHI Keiichiro
Faculty Fellow, RIETI

The idea that “everything is fallible” is the only theory that could itself be an “infallible truth.” Only a comprehensive doctrine premised on the principle of fallibility can keep the moral value \(q_{t}\) of the social system positive forever. Therefore, we believe that the ideals a society upholds must be premised on the principle of fallibility.
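
To make this claim concrete, here is a minimal formal sketch of why only a self-revising doctrine can keep \(q_{t}\) positive. The dynamics below are an assumed illustration introduced for this note, not the model developed earlier in this series: a fallibilist doctrine detects and corrects its errors each period, while an “infallible” doctrine lets uncorrected errors accumulate.

\[
q_{t+1} =
\begin{cases}
q_{t} + \varepsilon_{t}, & \text{fallibilist: errors are found and corrected,}\\[4pt]
q_{t} - \delta, & \text{infallibilist: errors go uncorrected,}
\end{cases}
\qquad \varepsilon_{t} \ge 0,\; \delta > 0.
\]

Under the first branch, \(q_{t}\) never falls; under the second, any fixed \(\delta > 0\) eventually drives \(q_{t}\) below zero, however large the starting value.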

That idea applies not only to ordinary human beings but also to superhumans, that is, human beings whose intellectual power has been enhanced by AI. Let us assume that ordinary human beings and superhumans enhanced by AI and biotechnology have been divided into two separate social classes. In that case, superhumans, too, would be aware of their own fallibility. Despite being enhanced by AI, they would understand reality through nothing more than “approximate calculations” and would be unable to “truly” understand “everything.” Pattern identification based on deep learning is likewise a form of approximate calculation performed on prepared sets of real-world patterns. Superhumans, too, would understand that all intellectual activities represent an accumulation of approximate calculations.
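
As an illustration of “approximate calculation,” consider the following minimal sketch in Python. A polynomial least-squares fit stands in here for the pattern identification of deep learning; the setup and numbers are my own and do not come from the column.

```python
# A minimal sketch (illustrative, not from this column): a polynomial
# least-squares fit stands in for the pattern identification of deep
# learning. Even a good fit is only an approximation of reality.
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)   # samples of some real-world process
y = np.sin(x)                        # the true pattern (unknown in practice)

coeffs = np.polyfit(x, y, deg=7)     # "learn" an approximate model
y_hat = np.polyval(coeffs, x)        # the model's account of reality

gap = np.max(np.abs(y - y_hat))      # residual error of the approximation
print(f"max approximation error: {gap:.2e}")  # small, but never exactly zero
```

However many parameters the fit uses, the residual shrinks but never reaches exactly zero: the model remains an approximation of the pattern, not the pattern itself.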

Superhumans who are aware of their own fallibility can be expected to create a society that is tolerant of the free activities of a great variety of beings (including ordinary human beings). If they are aware of their own fallibility, they are certain to recognize the possibility that innovations brought about by others (including ordinary human beings) could have a significant impact on themselves. Once the interactions that could arise among unforeseen innovations are taken into consideration, then from the superhumans’ point of view, respecting the continued existence of ordinary human beings, rather than wiping them out (or letting them wither into extinction), would be the most beneficial and rational decision, even for purely selfish reasons (Note 1).
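
The selfish-rationality argument can be restated as a toy expected-value comparison. All numbers below are hypothetical parameters chosen for illustration (they appear nowhere in the column): \(p\) is the per-period chance that ordinary human beings produce an innovation valuable to superhumans, \(v\) its value, and \(c\) the per-period cost of coexistence.

```python
# A toy expected-value comparison with hypothetical parameters (none of
# these numbers come from the column; they only expose the structure of
# the argument).
p = 0.02      # per-period chance ordinary humans produce a valuable innovation
v = 500.0     # value of such an innovation to superhumans
c = 1.0       # per-period cost of tolerating coexistence
periods = 100

tolerate = periods * (p * v - c)  # expected net gain from coexistence
eradicate = 0.0                   # eradication forecloses all such gains

print(f"tolerate:  {tolerate:+.1f}")   # +900.0 with these numbers
print(f"eradicate: {eradicate:+.1f}")  # +0.0
```

Whenever \(p \cdot v > c\), tolerance strictly dominates eradication, because eradication saves only the coexistence cost while foreclosing every future innovation.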

The vision of a diverse and tolerant society premised on the principle of fallibility is nothing more than what we imagine within the limits of our own thinking. One problem for me is this: when thinking about a future society in which co-existence with AI is inevitable, how far will I, a mere ordinary human being, be able to follow the reasoning of AI (and of superhumans whose intellectual power has been enhanced by AI), which is expected to transcend current human understanding? Of course, the possibility cannot be ruled out that superhumans, by following some line of reasoning that is beyond my understanding, will arrive at the conclusion that ordinary human beings should be exploited or eradicated.

Even so, there is one thing we can say for sure.

At the least, believing in the fallibility of any bleak vision of future society remains an option for us—that is, we can choose to believe that it may be wrong to assume that superhumans will eradicate mankind. In this case, fallibility is another name for hope.

Footnote(s)
  1. The logic mentioned here applies not only to the relationship between superhumans and ordinary human beings but also to the relationship between superhumans and other animals and plants. Even though intellectual activity may be the exclusive domain of Homo sapiens, by respecting biodiversity, ordinary human beings and superhumans can expect to benefit in various ways, including from resources generated through the activities of the diverse assortment of beings on the planet (e.g., drug ingredients, useful chemical substances, and raw materials). Given this expectation, even if superhumans act entirely selfishly, they are certain to consider respecting biodiversity to be a reasonable decision. This is exactly the same logic as the one applied to the relationship between superhumans and ordinary human beings that was explained in the main text.

September 21, 2023