Regulating for “humans-in-the-loop”

27 September 2022

As algorithms are increasingly used for decision-making in critical domains, a key concern is how to capture the benefits of greater accuracy while ensuring that decisions are fair and non-discriminatory. Regulatory agencies around the world have begun taking steps to address algorithmic bias and discrimination through guidance and regulation. The OECD tracks over 700 artificial intelligence (“AI”) initiatives in 60 countries, reflecting the pressing need to address the challenges of AI governance. One of the leading efforts is the European Commission’s proposed AI regulation (the “AI Act”), circulated in April 2021, which represents an expansive and comprehensive attempt to regulate AI.

A central component of the proposed Act is a requirement that high-risk AI systems, meaning systems that pose significant risks to health and safety, be overseen by humans. A key aspect of human oversight is human involvement in any particular algorithmic decision. Article 14 explains that human oversight entails that a human must be able to “disregard, override or reverse the output of the high-risk AI system.” The approach echoes Article 22 of the General Data Protection Regulation (GDPR), which creates a right not to be subject to “a decision based solely on automated processing.” Often known as a requirement for a “human-in-the-loop,” this approach precludes or restricts fully automated decision-making, so that algorithmic predictions act as recommendations or decision aids rather than as a substitute for human decisions.

The requirement that human decision-makers retain decision-making authority and discretion in settings that incorporate AI is an emerging pillar of AI regulation. The Canadian Directive on Automated Decision-Making requires human intervention in high-impact federal agency decisions and specifies that “the final decision must be made by a human.” In the U.S., Washington’s Facial Recognition Law requires “meaningful human review,” essentially by requiring that a human have “the authority to alter the decision under review.”

Yet most substantive AI oversight requirements focus solely on algorithmic predictions, as if AI decisions were fully automated. Despite imposing a decision structure in which humans are the ultimate decision-makers, AI policies tend to focus on the properties and outcomes of algorithmic predictions in isolation. The Canadian Directive requires an Algorithmic Impact Assessment and testing of the Automated System for bias, focusing on the algorithmic outcomes themselves. Washington’s Facial Recognition Law lays down detailed protocols for the facial recognition service, such as its “potential impacts on protected subpopulations” and its error rates. Similarly, the AI Act focuses on data governance and the transparency of the algorithmic system. All these requirements implicitly assume that the outcome to be scrutinized and monitored is the algorithmic component of the decision, even though the true impact of AI systems is also the result of human decision-making.

In a recent paper with Jann Spiess and Bryce McLaughlin, “On the Fairness of Machine-Assisted Human Decisions,” we highlight the importance of distinguishing between decision-making systems of “automation,” in which algorithmic decisions are implemented directly, and systems of “assistance,” in which algorithms inform a human decision-maker. Typically, crucial properties of an algorithm, such as its accuracy and fairness, are analyzed as if the machine predictions were implemented directly. However, in critical domains in which human decision-making is considered vital or legally mandated, the impact of an algorithm depends on the human’s prior beliefs, preferences, and interpretation of the algorithmic signal.

Using a formal model, we show that the optimal design of an algorithm, such as which features to include or exclude, depends on whether the decision-making system is one of automation or assistance. For example, excluding information on protected characteristics from an algorithmic process may fail to reduce, and may even increase, ultimate disparities when there is a biased human decision-maker. Even when an algorithm itself satisfies certain fairness criteria, human decisions that rely on its predictions may introduce bias.
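To make the intuition concrete, here is a minimal simulation sketch. It is not the formal model from the paper; the decision rules, the bias term, and all parameter values are illustrative assumptions of my own. It shows only the simpler point above: an algorithm that never sees the protected attribute produces essentially no selection-rate gap when implemented directly (automation), yet the same prediction, filtered through a biased human who retains decision authority (assistance), yields disparate final decisions.

```python
# Illustrative sketch (hypothetical parameters, not the model in the paper):
# a "blind" algorithm shows no group gap under automation, but a biased human
# using the same prediction as a decision aid reintroduces a gap.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

group = rng.integers(0, 2, n)               # hypothetical protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)             # true qualification, identical across groups
signal = skill + rng.normal(0.0, 1.0, n)    # feature available to the algorithm

algo_score = 0.5 * signal                   # blind prediction: uses the signal only

# Automation: the algorithmic prediction is implemented directly.
automated = algo_score > 0

# Assistance: a human blends the recommendation with their own noisy impression
# and applies an assumed penalty to group 1 -- the "biased decision-maker".
human_read = skill + rng.normal(0.0, 1.0, n)
bias = 0.5
assisted = (0.5 * algo_score + 0.5 * human_read - bias * group) > 0

def gap(decision):
    """Difference in selection rates between group 0 and group 1."""
    return decision[group == 0].mean() - decision[group == 1].mean()

print(f"gap under automation (blind algorithm): {gap(automated):+.3f}")  # roughly zero
print(f"gap under assistance (biased human):    {gap(assisted):+.3f}")   # clearly positive
```

The numbers themselves are not the point. What the sketch illustrates is that an audit of the algorithmic scores alone would detect no disparity, while an audit of the final, human-made decisions would.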

This result provides further support for a more nuanced approach to how algorithmic inputs relate to desirable outcomes, and for the need to avoid what I call the “Input Fallacy.”

The disconnect between the regulatory requirement for “humans-in-the-loop” and oversight requirements that focus on algorithms in isolation is therefore problematic. It remains an open question whether requiring human oversight in the form of human decision-making discretion and authority is optimal or fulfills its intended purpose. But regardless of whether having a human in the loop is desirable, when we consider the impact of an algorithm we must be sensitive to how it is implemented. When regulation requires that algorithms act as decision aids to humans, oversight mechanisms should be designed to consider the combined impact of algorithmic predictions and human decisions. AI policy and guidance should therefore require impact assessments and monitoring of the decision-making system as a whole, not merely the algorithmic component of the decision.

--------------------------------

By Talia Gillis, Associate Professor of Law and Milton Handler Fellow at Columbia University.

This article reflects solely the views and opinions of the authors. The ECGI does not, consistent with its constitutional purpose, have a view or opinion. If you wish to respond to this article, you can submit a blog article or 'letter to the editor'.
