IPEN event on “Human oversight of automated decision-making”

IPEN events bring together privacy experts and engineers from public authorities, industry, academia and civil society to discuss relevant challenges and developments for the engineering and technological implementation of data protection and privacy requirements into all phases of the development process.

The EDPS and Karlstad University are hosting an Internet Privacy Engineering Network (IPEN) event on “Human oversight of automated decision-making” on 3 September 2024.

When: 3 September 2024 - 14:00-18:00 CEST
Where:

Physical attendance: Eva Eriksson lecture hall, Universitetsgatan 2, 651 88 Karlstad, Sweden (registration required - link will be available here soon)

Online participation: (connection link will be available before the event)

See our Data protection notice for further information

Human oversight of automated decision-making

EU regulations such as the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA) state that decisions that could have a significant impact on individuals should not be fully automated. Instead, human oversight should be in place to ensure that the decisions supported by automation systems (such as artificial intelligence) are fair and accountable.

An example can be found in Article 22 of the GDPR, which provides that "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her".

Another example is Article 14(2) of the AIA, which requires human oversight of high-risk AI systems to “prevent or minimise the risks to health, safety or fundamental rights”. This is supported by recital 73 of the AIA, which explains that “appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service”.

The 2019 Ethics Guidelines for Trustworthy AI include seven non-binding ethical principles for AI, which are intended to help ensure that AI is trustworthy and ethically sound. One of these seven principles is “Human agency and oversight”.

However, some authors point out that there could be a lack of clarity about what “human oversight” means, what can be expected from it, and how it can be implemented efficiently:

 

“Regulators presumably put humans in the loop because they think they will do something there. What, precisely, are their assumptions about human decision-making and the ways in which it differs from machines?” 

“Adding a ‘human in the loop’ does not cleanse away problematic decisions and can make them worse”.
Matsumi, H., & Solove, D. J. (2023). The Prediction Society: Algorithms and the Problems of Forecasting the Future.

“Human oversight policies shift responsibility for algorithmic systems from agency leaders and technology vendors to human operators.”
Green, B. (2022). The flaws of policies requiring human oversight of government algorithms.

When a human is placed in the loop carelessly, there is a high likelihood that the human will be disempowered, ineffective, or even create or compound system errors.
Crootof, R., Kaminski, M. E., & Price, W. N., II (2023). Humans in the Loop.

 

Real-life events such as the Three Mile Island accident in 1979 and the Air France Flight 447 crash in 2009 show that, when human operators are presented with inaccurate information, they are not only unable to monitor systems effectively, but can actually exacerbate the potential consequences.

In other situations, such as when Tesla's self-driving cars reportedly handed control back to human drivers seconds before impact, humans are placed in a position where they are neither prepared nor able to intervene in time to correct the behaviour of the system.

Although the organisations, and particularly the decision-makers, who choose to use automated decision-making systems must be held accountable for the decisions those systems make, it is often the human operators interacting with the systems at the end of the pipeline who are blamed for poor outcomes.

The aim of the IPEN event is to promote discussion on questions such as the following:

  • Don't the requirements for human oversight shift the burden of responsibility from the systems and their providers to the people who operate them?
  • Could operators face unavoidable liability? Suppose a human operator chooses to follow the system’s suggestion and the decision turns out to be wrong: wouldn’t that be seen as the operator’s failure to understand the limitations of the system?
    Conversely, if the operator decides against the system’s suggestion and also proves wrong, wouldn’t that result in an even worse outcome for the operator, who had clear indications to decide otherwise?
  • Article 14(2) of the AIA (Human oversight) provides that “human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used”. Are the provisions of Article 14 clear enough about what oversight measures are expected from humans/providers and what their responsibilities should be?
  • If human oversight is a risk mitigation control, how can we measure its impact?
  • What does “appropriate” human oversight mean? What are the characteristics that should be taken into account to assess if a human oversight procedure is appropriate or not?
  • Could regulations requiring human oversight be paving the way for the production of defective systems?
  • How should this oversight happen? In the testing and monitoring of the system? Are we talking about escalation procedures like in a call centre?
  • What skills should these humans have? Are we talking about engineers who know how an AI system works, or about humanists?
  • What would be the legal implications if, in the end, the AI system causes harm? Who will be legally and morally accountable: the user of the system, the provider of the system, or the overseer of the system?
  • Incorporating humans into the process is costly, may not be scalable and could reduce the speed of systems, so AI deployers might not be inclined to use human oversight. Where should the line be drawn?

Join us in this discussion on 3 September!

The agenda and timetable will be published shortly, so be sure to check this page in the coming weeks for more information.