By Souichirou Kozuka. Explainability and accountability are key principles for human-centered AI. However, one must admit that perfect explainability would compromise the benefits of using AI.

Since the use of AI (artificial intelligence) has become common, there have been various efforts to establish good governance for its use. A well-known early initiative was the adoption of the Asilomar AI Principles by the Future of Life Institute, to which numerous researchers and companies signed up. The European Union and Japan are two jurisdictions where government and industry collaborated in producing frameworks for the responsible development and use of AI. Both the European Ethics Guidelines for Trustworthy AI and the Japanese AI principles (the AI R&D Principles and the AI Utilization Principles) emphasise that the use of AI should be human-centered, not leading to a dystopia in which AI controls humans through algorithms. The idea met with support globally, leading to the adoption of the OECD AI Principles.

Interestingly, jurisdictions have diverged in their regulatory approaches after the global recognition of the principles for human-centered AI. The European Union has worked on the Proposal for a Regulation laying down harmonised rules on artificial intelligence, in the belief that a legally binding instrument is necessary to supplement voluntary commitments made through soft law. Japan, on the other hand, now focuses on the effective implementation of the AI principles by industry. It has published the Governance Guidelines for Implementation of AI Principles and facilitates the sharing of best practices among industry members, making use of the forum that formulated its AI principles. Similarly, the Singapore government has published the Model Artificial Intelligence Governance Framework to be referenced by organisations that deploy AI. The latter approach, which focuses on implementation, gives rise to the agenda of “corporate governance for the use of AI”.

One may ask why AI specifically must be addressed, as distinguished from other kinds of technology. An important feature of AI is that it makes decisions in a black box. The AI commonly used today is based on deep learning, a technology that identifies hidden correlations through machine learning on data. While this is helpful for discovering what a human can hardly recognise, it makes it difficult for a human to review how the AI reached a decision, still less to control it. Furthermore, the AI continues learning after the system is delivered from the developer to the user, making its decisions even less controllable.

As a result, consumers cannot be sure that a decision allegedly made by AI has not been manipulated by the provider of the service deploying it, which could lead to distrust in AI. Even when the public trusts the AI’s decisions, there remains the possibility that a decision reflects a bias unacceptable to society in view of people’s fundamental rights. Such a bias can easily creep into an AI’s decisions when the data from which it learns is affected by unfair practices in society, such as discrimination by race or gender.

To solve these problems and ensure the public’s trust in AI, the user of AI has to care about the explainability of the AI’s decisions, as well as take accountability for them. Explainability and accountability are key principles for human-centered AI. However, one must admit that perfect explainability would compromise the benefits of using AI. If one tried to identify every element that led the AI to a certain decision, a huge number of parameters would have to be disclosed and turned into a human-readable format. The advantages of using AI, namely finding correlations not recognisable by a human and substituting the AI’s decision for a less efficient human one, would then be compromised to a large extent. Obviously, a balance must be struck at some point.

The Model Framework published by the Singapore government holds that the relationship between human and AI can be either “human-in-the-loop”, “human-out-of-the-loop” or “human-over-the-loop”. Under the first approach, the final decision is not left to the AI but is always reserved for a human. The second approach means that replacing the human decision with the AI’s decision is allowed. The third approach requires human oversight, so that a human can step in when an unexpected incident occurs. The Model Framework argues that the choice among these three approaches should be made on the basis of the severity of the harm that a wrong decision by the AI would cause and the probability that the AI errs.
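To make the decision rule concrete, the following is a minimal sketch, in Python, of how a company might map its own risk assessment onto the three oversight approaches. The 0–1 scoring of severity and probability, the numerical thresholds and the choose_oversight function are all illustrative assumptions of this sketch; the Model Framework itself does not prescribe numerical cut-offs.

from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human-in-the-loop"          # a human makes the final decision
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"      # a human monitors and can step in
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"  # the AI decides autonomously

def choose_oversight(severity: float, probability: float) -> Oversight:
    """Map an assessed risk to an oversight approach.

    'severity' and 'probability' are scores on a 0-1 scale produced by the
    company's own risk assessment; the cut-off values below are purely
    illustrative and are not taken from the Model Framework.
    """
    risk = severity * probability
    if risk >= 0.5 or severity >= 0.8:
        # High risk, or severe potential harm: reserve the final decision for a human.
        return Oversight.HUMAN_IN_THE_LOOP
    if risk >= 0.1:
        # Moderate risk: let the AI decide, but keep a human able to intervene.
        return Oversight.HUMAN_OVER_THE_LOOP
    # Low risk: full automation is acceptable.
    return Oversight.HUMAN_OUT_OF_THE_LOOP

# Example: a system whose errors would cause serious harm, even if errors are rare.
print(choose_oversight(severity=0.9, probability=0.05))  # Oversight.HUMAN_IN_THE_LOOP

The point of the sketch is simply that the oversight choice is a management decision driven by an explicit risk assessment, not a property of the AI system itself.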

Here lies an issue that corporate management has to decide on. It should decide how (under which approach) the AI system is used, and to what extent its decisions are explained. In making such a decision, management needs to assess the risks of deploying AI. It is also recommended that a company using an AI system adopt AI principles of its own, adapting the principles formulated globally or in its jurisdiction to its business. This is what major developers of AI systems already do today, and it is useful for identifying which issues are particularly relevant to the company’s use of AI. Depending on the potential risk, due diligence over the supply chain of the AI system might also be required, to examine how the data is collected and prepared for the training of the AI, because decisions by AI are affected by the data from which it learns. Thus, the top management of a company using, or intending to use, an AI system in its service should build up a governance system for the use of AI within the company.

 

By Souichirou Kozuka, Professor, Law Faculty of Gakushuin University (Tokyo)

The ECGI does not, consistent with its constitutional purpose, have a view or opinion. If you wish to respond to this article, you can submit a blog article or ‘letter to the editor’.

This article features in the ECGI blog collection Technology & Governance
