Cabinet Resignation Triggered by a System Glitch
The Cabinet of Prime Minister Rutte in the Netherlands resigned en masse in January 2021, just as it was tackling the COVID-19 crisis, in order to take responsibility for a scandal triggered by a malfunctioning fraud detection system that forced more than 10,000 households to pay back childcare benefits (Note 1). Underlying the intense controversy touched off by the scandal are technical problems, such as the development of an overly rigid system that allowed no flexibility in cases of paperwork errors, and operational problems, such as the discriminatory targeting of people of particular races or with multiple nationalities (Note 2). This incident has made clear that decision-making is already being automated in administrative processes and that, in some cases, failure to appropriately control such automation could put many people in a difficult situation. Ahead of the planned establishment of a digital agency in September 2021, the digitalization of administrative processes is expected to make progress in Japan as well. This column aims to draw policy implications by discussing a draft AI regulation announced by the European Commission (EC) while looking at up-to-date trends related to digital technologies, mainly in terms of AI ethics.
After I joined the Ministry of Economy, Trade and Industry (METI) in 2011, I was involved in policy planning and development related to digitalization and AI while engaging in administrative activities related to the use of big data and open data, and in robotics demonstration projects (please note that the opinions expressed in this article are my personal views, not those of the organization to which I belong). Since 2017, I have lived in the United States as a foreign student, studying such matters as statistics and methodologies for developing AI systems. In recent years, discussions on what capabilities society thinks AI should have and how AI should be used (AI ethics) (Note 3), including responsible use of AI, removal of bias, and explainability, have been livelier than at any other time over my entire career. This trend has also become pronounced internationally, as international agreements related to AI ethics continue to be concluded, including the Ethics Guidelines for Trustworthy AI announced by the EC in April 2019 (Note 4), the OECD AI Principles (May) (Note 5), and the G20 AI Principles (June) (Note 6). This trend is evidence that AI has passed the demonstration-test phase and is coming into use in various domains of our everyday lives.
What is AI?
AI, which stands for artificial intelligence, refers to a field of study that seeks to enable machines to perform activities as if they possessed human intelligence. AI can be divided into two major categories: "strong AI," which seeks to reproduce human intelligence itself, and "weak AI," which aims to substitute machine labor for work that was previously based on human intelligence (Note 7). The scope of weak-AI capabilities includes functions such as natural language processing, image recognition, and robotics, which are based on statistical machine learning technology (Note 8). Basically, all those functions consist of the processes of analyzing inputs, making decisions, and feeding back outputs. Meanwhile, big data, cloud computing, and sensor technologies serve as a kind of foundation for the development and operation of AI systems. AI is penetrating many industrial sectors in various ways. As Andrew Ng, a noted AI advocate, puts it, AI is about to transform every industry, just as electricity did a century ago (Note 9). There is no doubt that AI will continue to be a key to the development of Japanese society.
Obstacles to AI Development
At present, AI is already being used in our everyday lives in various ways, and its use is expected to become more and more widespread. On the other hand, there are challenges specific to AI, and it is becoming increasingly difficult to ignore them as the use of AI increases. The challenges I discuss below are only a few of many. The recent increase in interest in and discussion of AI ethics probably reflects a desire to promote more responsible development of AI by appropriately addressing those challenges.
- Because AI decisions are based on a probabilistic approach, they do not guarantee perfect accuracy or correctness, and one major challenge is the risk that an erroneous judgment made by an AI system could have huge consequences. It is therefore necessary to consider who should be responsible for compensating for losses and damage caused by erroneous AI decisions. For reference, in 2018, the Ministry of Land, Infrastructure, Transport and Tourism announced its position on how to deal with losses and damage caused by autonomous driving systems (Note 10).
- AI systems learn from datasets, which may be incomplete or discriminatory. The second challenge is the risk that AI systems trained on such datasets could repeatedly make discriminatory decisions. For example, if an AI model for hiring is developed from hiring data obtained from employers with a positive bias toward job applicants with certain attributes (e.g., gender or alma mater), it could erroneously continue to recommend the hiring of people with those attributes (Note 11).
- In some cases, it is extremely difficult to explain the reasons for and the factors underlying AI decisions. The third challenge is that such cases could create doubt about the credibility of AI decisions and hinder the development potential of AI. For example, if the decision-making process of an AI system that a company uses for business purposes is completely unintelligible, the company will be unable to provide adequate explanations for erroneous decisions or to make appropriate changes to improve the system's performance.
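The dataset-bias problem described in the second bullet can be sketched with a toy example. The records, attributes, and numbers below are entirely hypothetical; the point is only that a model which learns from discriminatory historical labels will reproduce the discrimination, even for equally qualified applicants:

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical hiring decisions reproduces that bias for new applicants.
from collections import defaultdict

# Historical records as (group, qualified, hired). In this made-up past,
# qualified applicants from group "A" were hired, while equally
# qualified applicants from group "B" were rejected.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": estimate the historical hiring rate for each
# (group, qualified) combination by simple counting.
counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predict_hire(group, qualified):
    """Recommend hiring if the majority of similar past applicants were hired."""
    hired, total = counts[(group, qualified)]
    return hired / total > 0.5

# Two equally qualified new applicants receive different recommendations,
# solely because the training labels encoded past discrimination.
print(predict_hire("A", True))  # True
print(predict_hire("B", True))  # False
```

Real hiring models are far more complex than this counting rule, but the mechanism is the same: the model has no way to distinguish a legitimate pattern from an encoded prejudice, which is why dataset auditing is a recurring theme in AI ethics guidelines.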
AI Regulation by the EU
Various organizations are exploring AI ethics by considering appropriate ways of using AI in accordance with their respective circumstances. To my knowledge, there has been no legally binding international regulation concerning the use of AI. However, on April 21, 2021, the EC announced a proposal for a regulation on a European approach to AI as a draft comprehensive regulation concerning the use of AI (Note 12). The regulation is intended to ensure "excellence" and, in particular, "trust," the major elements of the regulatory framework upheld in the debate so far within the EU on the appropriate use of AI (Note 13). It is expected to take several years before the draft regulation takes effect, as it must undergo debate in the European Parliament, among other processes. The proposal lists the types of AI systems that would be subject to regulation and sets out four levels of regulatory intervention, applied according to risk level and criticality. Japanese companies providing AI services within the EU, whether paid or unpaid, could be subject to the regulation (Articles 2 and 3) (Note 14).
AI-system examples that may be subject to regulation, grouped by regulatory treatment:

Prohibited (Article 5):
- AI systems that deploy subliminal techniques in order to control human behavior, causing physical or psychological harm.
- AI systems that exploit the vulnerabilities of children or of persons with physical or mental disabilities in order to control their behavior, causing physical or psychological harm.
- AI systems used by public organizations, etc. in order to evaluate or classify the trustworthiness of persons based on their behavior and personality characteristics.

Prohibited in principle; permissible if certain requirements are met (Article 5):
- AI systems for real-time remote biometric identification that are used in public spaces for the purpose of law enforcement.

Use permissible only when the prescribed requirements are met (Articles 8 to 15):
- AI products that are required to undergo a third-party conformity assessment (such as medical equipment) and AI systems intended to be used as safety components of such products.
- AI systems for real-time or post remote biometric identification.
- AI systems intended to be used as safety components in the management and operation of critical infrastructure.
- AI systems used to assign persons to educational and vocational training institutions or to evaluate applicants' eligibility to receive training.
- AI systems used for hiring, task allocation, and performance evaluation.
- AI systems used to determine access to essential services (e.g., administrative services, loans, and emergency first response).
- AI systems related to law enforcement (e.g., evaluation of recidivism rates, and detection of the emotional state of persons through polygraphs and other tools).
- AI systems used for immigration, asylum, and border control.
- AI systems intended to assist judicial authorities in researching and interpreting facts.

Obligation to notify persons of AI use (Article 52):
- AI systems intended to interact with natural persons.
- AI systems that recognize emotions or categorize persons based on biometric information.

Obligation to disclose that content was generated or manipulated by AI (Article 52):
- AI systems that generate or manipulate content (images, audio, and video) through "deep fake" technology.

Not subject to regulation; development of voluntary codes of conduct recommended (Article 69):
- AI systems that do not belong to any of the above categories (e.g., AI systems that are used in games or that sort out spam mail).

*AI systems intended exclusively for military purposes are not subject to the regulation (Article 2).
AI systems prohibited under the draft regulation are those that involve unacceptable risks, such as the ability to cause harm through the use of subliminal techniques to manipulate human behavior. Although the draft regulation does not cite any specific example, AI systems that encourage voters who support particular political parties to cast ballots (micro-targeting) may be prohibited (Note 15) (Note 16). Meanwhile, regarding high-risk AI systems, whose use is permitted only conditionally, there are several matters of concern. For example, although Article 10 stipulates that high-risk AI systems must be developed based on datasets that are free of errors, it may be difficult or impossible in some cases to completely exclude data errors. Moreover, while datasets used for the development of AI systems are required to be "representative," companies located outside the EU may find it difficult to obtain such datasets. Providers of high-risk AI systems are also obligated to submit information proving their systems' compliance with the prescribed requirements (which may include confidential information such as the detailed specifications of the AI systems) when required by national competent authorities in EU member countries (Article 16). Therefore, it is necessary to keep a close watch on future developments with respect to points of debate such as securing the confidentiality of submitted information, a scheme of compensation for damage caused by information leakage, and the fairness of the procedures for requiring the submission of information.
Future Policy Direction in Japan
With respect to the abovementioned discussion on AI regulation in the EU, some believe that it is premature to introduce regulation in Japan or that a legally binding, wholesale regulation is unnecessary (Note 17). However, given that AI is about to change our society just as electricity did in the past, it is very important to hold more in-depth discussions on AI ethics while examining the draft EU regulation as a reference (Note 18). For many years, Japan has been promoting discussions on matters related to AI ethics. For example, Japan introduced draft principles for the development of AI systems at the meeting of G7 information and communication ministers in 2016. At present, debates are under way in various forums, such as the Conference toward AI Network Society under the Ministry of Internal Affairs and Communications (MIC) and the Expert Group on Architecture for AI Principles to be Practiced under METI (Note 19).
Looking at the draft EU regulation, I am convinced that in-depth debate is urgently needed on the AI systems with unacceptable risks that the draft regulation prohibits, such as AI systems that may cause harm through the use of subliminal techniques to manipulate human behavior. Because such AI systems contravene the AI Utilization Principles announced by MIC (Note 20), including the principles of safety and of human dignity and individual autonomy, they could have huge implications. Therefore, discussing the risks of such AI systems as the top-priority matter at the abovementioned forums under MIC and METI, rather than treating them as equal in importance to other AI ethics issues, conforms to the risk-based approach upheld by the government (Note 21). Given the various complexities inherent in AI, it is appropriate to immediately address, and devote more discussion time to, methods of treating AI systems that involve particularly high risks, as opposed to AI-related issues of less urgency (Note 22).
Furthermore, the EU's draft AI regulation may act as a catalyst for discussion on whether to introduce AI regulation in Japan. Of the 1,215 opinions submitted during a public comment process conducted by the EU on the draft AI regulation, for example, only 3% of respondents replied that the existing legal framework is sufficient for regulating AI, while 42% recognized the need for a new regulatory and legal framework. These results may serve as a reference for future discussions in Japan (Note 23). Over the next several years, developments related to AI regulation are likely to continue to draw attention from a wide variety of stakeholders following the announcement of the EU's draft AI regulation, the first such initiative in the world. Therefore, it is important for Japan as well to clarify its stance on future regulation by presenting a schedule for debate, in order to improve predictability for industry and to provide information useful as a reference for business and management decisions (Note 24).
The original text in Japanese was posted on April 26, 2021.