Artificial Intelligence and Society: Philosophy of Fallibility
Part 21: AI and Anti-Data Monopoly Policy

KOBAYASHI Keiichiro
Faculty Fellow, RIETI

What is worrisome in an era when AI is used in various situations is that AI may become the subject of a “myth of infallibility.” If such a myth of infallibility, with its implication that AI is justified in limiting individual freedom, were to become commonplace in society, the consequences would be serious.

AI that evolves through deep learning is itself a product of trial and error. When AI is deployed in society, however, don’t people take the validity of its responses for granted? If society turns to AI for answers to many of its difficult questions (as when financial institutions use AI to select securities for investment or to evaluate prospective borrowers’ creditworthiness, when AI is in charge of autonomous driving, or when it assesses worker aptitude), and if decisions across society are made on the assumption of AI infallibility, human freedom could be severely undermined (Note 1).

Given the ongoing concentration of big data and the trend toward AI utilization in the real world, the risk of that kind of society developing cannot be denied. The volume of personal data collected by a handful of IT companies, including GAFA (Google, Amazon, Facebook, and Apple), is enormous. If those companies monopolize big data, and if AI systems that learn from the monopolized data come to be involved in making decisions that are critical for human society, those AI systems could become literally unquestionable.

This in no way means that “AI is absolutely right.” Still, such a situation could create a social consensus that AI may be regarded as effectively infallible. Under that consensus, the important life choices of individuals (e.g., academic and working careers, marriage, and place of residence) could come to be determined by AI on the basis of past data, which would amount to the deprivation of individual freedom (also described as the “right to stupidity,” which includes learning by trial and error).

This is the future vision of a totalitarian system in which AI manages or controls society. In that kind of society, AI’s presence would be equivalent to that of a ruler who deprives individuals of their freedom and dictates their actions (although individuals would not be conscious of being dictated to in the case of an AI-controlled society). This would represent the arrival of the kind of dystopia that was described by George Orwell in his novel 1984 (first published in 1949).

If the IT giants’ ongoing monopolization of big data continues to advance, the advent of such a dystopian future will come closer to reality. What is now occurring in the internet business resembles the monopolization problem that the world economy faced around the end of the 19th century, when huge, dominant companies wielding power across various industries caused serious economic harm through market distortions. It may be necessary to restore the soundness of the competitive environment in the AI market by enforcing anti-monopoly policy with respect to the utilization of personal and customer data, just as the United States enforced anti-monopoly policy under the Sherman Antitrust Act from the late 19th century through the early 20th century.

It is difficult to prove, on the basis of currently available economic theory, that enforcing an anti-data monopoly policy is necessary. The act of using data (and thereby promoting AI learning) has positive external effects. The larger the volume of data already in use, the more valuable additional data becomes; that is, the principle of “economies of scale” applies. Because of these economies of scale, the data utilization industry achieves greater efficiency when a small number of companies hold monopolistic (or oligopolistic) market power than when many companies compete with one another. From this kind of economic reasoning, the argument may emerge that it is appropriate to apply the “electric power industry model,” which would grant a data monopoly to the IT giants in exchange for imposing a certain degree of government regulation.
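
To see concretely why this reasoning favors concentration, consider a stylized sketch (an illustration of my own, resting on an assumed toy value function, not a model drawn from this article): suppose the value a firm can extract from n units of data is

V(n) = c·n^β, with c > 0 and β > 1.

Because β > 1, the function is superadditive: V(n1 + n2) > V(n1) + V(n2) for any positive n1 and n2. Under this assumption, pooling all data in a single firm generates more total value than dividing the same data among competing firms, which is the formal sense in which economies of scale point toward monopoly.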

A political philosophy premised on “fallibility” is likely to be able to counter that argument and justify the prohibition of data monopoly.

If the positive externalities associated with the use of AI were the only consideration, it might be said that allowing companies to gain monopolies is rational. However, the “fallibility” of AI poses another problem. Not only the companies that monopolize data but also the AI systems that learn from the monopolized data are fallible. Intelligence that learns from big data can never be absolutely right. The possibility cannot be denied, however small it may be, that the unsophisticated decision-making of humans could deliver better results than the sophisticated decision-making of AI.

In addition, when we consider the fallibility of both AI and humans in a state of Knightian uncertainty under the veil of ignorance, in which even the probability distribution is unknown, the people can be expected to agree to a ban on data access monopolies as a fair policy. That is because AI’s judgment is no different from the stupidity of individual humans in that either could turn out to be the judgment appropriate for individuals’ survival (a probabilistic choice between AI and human judgment is impossible because, in a state of Knightian uncertainty, the probability distribution is unknown). Giving many companies’ AI systems and many human individuals access to data as a source of useful inputs for decision-making, rather than limiting such access to a handful of companies’ AI systems, allows the largest number of agents to make decisions. Because banning monopolies on data access gives the largest number of individuals the largest number of options and thereby heightens our chances of survival, the people should reasonably agree to the ban as a social contract under the veil of ignorance.
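
The survival argument admits a simple formal gloss (again a sketch of my own, assuming independent decision-makers, not a derivation from the text): suppose that, when an unforeseen contingency arises, each agent with data access independently hits upon a survival-enabling judgment with some unknown probability p > 0. With k such agents, the chance that at least one succeeds is

1 − (1 − p)^k,

which increases in k for every possible value of p. The ranking “more decision-making agents is better” therefore holds without any knowledge of the probability distribution, and it is precisely this kind of distribution-free comparison that remains available under Knightian uncertainty.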

As described above, if the logic of the Rawlsian veil of ignorance is applied, it becomes clear that the people should agree to a ban on data access monopolies as a social contract.

Footnote(s)
  1. Believing in the infallibility of AI may make human society systemically prone to wrong judgment. An episode at the U.S. company Amazon.com, reported in a Reuters article (https://www.reuters.com/article/cbusiness-us-amazon-com-jobs-automation-idCAKCN1MK08G-OCABS), provides a foretaste of that risk. According to the article, at the beginning of 2017 Amazon abandoned an experimental AI-based recruiting project launched in 2014, after recognizing that the AI tool used in the project discriminated against women when scoring job applicants. The Japanese Society for Artificial Intelligence referred to this matter in its “Statement on Machine Learning and Fairness” (https://www.ai-gakkai.or.jp/ai-elsi/archives/948), issued on December 10, 2019.

July 25, 2023