IPEN events bring together privacy experts and engineers from public authorities, industry, academia and civil society to discuss relevant challenges and developments in the engineering and technological implementation of data protection and privacy requirements throughout all phases of the development process.
On 31 May 2023, the EDPS and INRIA organised an Internet Privacy Engineering Network (IPEN) event on explainable artificial intelligence (XAI).
Where:
- Physical attendance: Bibliothèque Marie Curie, INSA Lyon - 31 Av. Jean Capelle O, 69100 Villeurbanne, France
- Online participation
A growing number of public and private organisations are deploying or planning to deploy AI systems. Many of these systems are designed to make decisions or to assist humans in making them, sometimes with a significant impact on individuals. Given the nature and complexity of AI systems, understanding and explaining how and why a system comes to a conclusion is often a challenge.
The purpose of explainable AI (XAI) systems is to make their behaviour understandable to humans by providing explanations of the underlying decision-making processes. To be understandable to humans, systems need to be able to explain their capabilities and their understanding, and to explain how and why a particular decision was reached. However, any explanation is set in a context that depends on the task, the capabilities, and the expectations of the user of the AI system. And even the best explanations will be useless if users do not trust the efficiency or fairness of the system.
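To make this concrete, the sketch below illustrates one widely used post-hoc explanation technique, permutation feature importance, using scikit-learn. It is a minimal, hypothetical example on an arbitrary public dataset and model; it is not drawn from the event itself and stands in for the much broader family of XAI approaches.

```python
# Illustrative sketch only: permutation feature importance as one
# common post-hoc XAI technique. Dataset and model are arbitrary
# stand-ins, not related to the event.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature
# degrade the model's performance on held-out data?
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Global feature rankings like these are only one kind of explanation; as noted above, whether they actually help depends on the task and on the expectations of the person receiving them.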
With this event, the EDPS wanted to stimulate a discussion to clarify expectations regarding XAI, its limitations, and common misunderstandings about it.
AGENDA
| Time | Session | Speakers |
| --- | --- | --- |
| 14:30 - 14:40 | Welcome introduction | |
| 14:40 - 15:00 | Keynote speech | Brent Mittelstadt, University of Oxford |
| 15:00 - 15:45 | Panel 1: "XAI: from the concept to the real world application" | Ronan Hamon, Joint Research Centre; Martin van den Berg, HU University of Applied Sciences Utrecht; Moderator: Xabier Lareo, EDPS |
| 15:45 - 16:00 | Coffee break | |
| 16:00 - 16:45 | Panel 2: "Explainability and trust: the importance of the human factor" | Zachary C. Lipton, Carnegie Mellon University; Reva Schwartz, National Institute of Standards and Technology (NIST) |
| 16:45 - 17:30 | Panel 3: "What can XAI bring to data protection?" | Helena Quinn, Information Commissioner's Office; Gianclaudio Malgieri, Vrije Universiteit Brussel; Sandra Wachter, University of Oxford; Moderator: Vitor Bernardo, EDPS |
| 17:30 - 17:40 | Concluding remarks | Leonardo Cervera Navas, EDPS |
At the end of the event, a joint EDPS-ENISA-INRIA networking cocktail was offered to participants.
The event was followed by the Annual Privacy Forum 2023, also organised in Lyon, on 1-2 June, by ENISA, DG Connect and INRIA.
Videos
You can access the video recordings of the event in our Videos section or on our IPEN channel on EU Video:
- IPEN event on Explainable Artificial Intelligence (XAI) - Welcome introduction
- IPEN event on Explainable Artificial Intelligence (XAI) - Keynote speech
- IPEN event on Explainable Artificial Intelligence (XAI) - Panel 1
- IPEN event on Explainable Artificial Intelligence (XAI) - Panel 2
- IPEN event on Explainable Artificial Intelligence (XAI) - Panel 3
- IPEN event on Explainable Artificial Intelligence (XAI) - Concluding remarks
Speakers
Leonardo Cervera Navas
Leonardo Cervera Navas is the Director of the Office of the European Data Protection Supervisor (EDPS), the Data Protection Authority of the European Union. Leonardo joined the European Commission in 1999 and has worked in the data protection field in the EU institutions ever since. In 2010, he joined the EDPS as Head of the Human Resources, Budget and Administration Unit, and he was appointed Director in 2018. As Head of the Secretariat, he is a member of the Management Board of the EDPS, responsible for advising on data protection law and policy, and in charge of the coordination and implementation of the strategies and policies of the institution.
Brent Mittelstadt, University of Oxford
Professor Brent Mittelstadt is Director of Research, Associate Professor and Senior Research Fellow at the Oxford Internet Institute. He coordinates the Governance of Emerging Technologies (GET) research programme, which works across ethics, law, and emerging information technologies. Professor Mittelstadt is a leading data ethicist and philosopher specializing in AI ethics, professional ethics, and technology law and policy. In his current role he leads the Trustworthiness Auditing for AI project, a three-year multi-disciplinary project to determine how to use AI accountability tools most effectively to create and maintain trustworthy AI systems. He also serves on the Advisory Board of the IAPP AI Governance Centre.
Ronan Hamon
Ronan Hamon is a scientific project officer at the Joint Research Centre of the European Commission in Ispra, Italy. His current research interests focus on the robustness, security and explainability of machine learning, as well as on applications in cybersecurity and automated driving. He received a Ph.D. in physics in 2015 from the École Normale Supérieure de Lyon, France, on the analysis of transport networks using data-driven approaches, and has held several research positions at Aix-Marseille University and at CMRE NATO, working on topics such as computer vision, music processing, and acoustics.
Joren Verspeurt, Radix.ai
Mr Verspeurt graduated from KU Leuven (University of Leuven) with a Master of Computer Science, specializing in AI. He wrote a master's thesis on applying deep learning techniques to psychophysiological data from simple EEG/"brainwave" sensors to measure and improve the gaming experience. He is currently working at Radix.ai as a Machine Learning Engineer and Security Officer Responsible for Data Protection, focussing on the development of ethical and legally compliant AI solutions using Natural Language Processing and time series methods for clients in the HR and transport sectors.
Martin van den Berg, HU University of Applied Sciences Utrecht
Dr. Martin van den Berg is an Associate Professor in the Artificial Intelligence research group at HU University of Applied Sciences Utrecht, where he researches explainable AI with a focus on the financial sector. Dr. van den Berg holds an MSc in Business Economics and an MSc in Logistics Management, both from Tilburg University. He obtained his PhD from the VU University Amsterdam on the subject of improving IT decisions with enterprise architecture.
Zachary C. Lipton, Carnegie Mellon University
Mr Lipton is an Assistant Professor of Machine Learning and Operations Research at Carnegie Mellon University (CMU). He directs the Approximately Correct Machine Intelligence (ACMI) lab, whose research spans the theoretical and engineering foundations of robust and adaptive machine learning algorithms, applications to prediction and decision-making problems in clinical medicine and natural language processing, and the impact of machine learning systems on society.
Michaela Benk, ETH Zurich
Ms Benk is a PhD candidate at the Mobiliar Lab for Analytics at ETH Zurich, specializing in trust in human-AI interactions. Her research on the use of explainable AI techniques and their impact on user trust and decision-making draws on interdisciplinary methods and prior research in the fields of Psychology, Cognitive Science, Natural Language Processing, and Computer Science. Her aim is to develop a rigorous understanding of the underlying mechanisms of trust in AI, as well as to identify practical implications that can inform the design of explainable AI methods.
Reva Schwartz, NIST
Reva Schwartz is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST), where she serves as Principal Investigator on Bias in Artificial Intelligence for NIST’s Trustworthy and Responsible AI program. Her research focuses on evaluating AI system trustworthiness, studying AI system impacts, and driving understanding of socio-technical systems within computational environments. Reva's background is in linguistics and experimental phonetics. She advocates for interdisciplinary perspectives and for bringing contextual awareness into AI system design protocols.
Giulia Del Gamba, Intesa Sanpaolo
Giulia Del Gamba works for the bank Intesa Sanpaolo, where she deals with policies on Emerging Technologies within the Chief Data, A.I., Innovation and Technology Officer Area. She also sits on the Founding Editorial Board of Springer’s AI & Ethics Journal. Prior to this, Giulia started her career at Intesa Sanpaolo in 2016 as an AI Ethicist focussing on data protection and algorithmic trustworthiness. Last year she also held a position as Legal Advisor at the Intesa Sanpaolo Hong Kong branch. In 2017, Giulia worked as a Legal Trainee in the Policy & Consultation Unit at the European Data Protection Supervisor (EDPS).
Helena Quinn, ICO
Helena Quinn is a Principal Policy Adviser for AI and Data Science at the Information Commissioner's Office (ICO), the UK data protection regulator. She has worked at the intersection of policy and AI in academia, industry and the public sector for over eight years. Most notably, Helena was a principal author of the ICO’s guidance, ‘Explaining decisions made with AI’, and, in her previous role at the UK Competition and Markets Authority (CMA), produced a paper on the harms that algorithms can cause to competition and consumers. She has also led work on the role algorithm audits can play in the regulation of AI through the Digital Regulation Cooperation Forum, a consortium of UK regulators.
Gianclaudio Malgieri, Leiden University
Professor Gianclaudio Malgieri is an Associate Professor of Law and Technology at Leiden University (eLaw Center for Law and Digital Technologies) and the Co-Director of the Brussels Privacy Hub at the Vrije Universiteit Brussel. He is also an Associate Editor of the Computer Law and Security Review and the founder and coordinator of VULNERA, the International Observatory on Vulnerability and Data Protection, a topic on which he recently published a book with Oxford University Press ("Vulnerability and Data Protection Law", 2023). He teaches and researches AI regulation and data protection, including their intersection with other fields such as vulnerability studies, gender studies and consumer protection.
Sandra Wachter, University of Oxford
Professor Wachter is a Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford, where she researches the legal and ethical implications of AI, Big Data, and robotics as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
To learn how we process your personal data, please read the Data Protection Notice.