
Artificial Intelligence, data and our values – on the path to the EU’s digital future

Wojciech Wiewiórowski

COVID-19 has, understandably and justifiably, absorbed most of the data protection community’s attention, directing it towards pandemic-related matters, notably contact tracing apps. The judgment of the Court of Justice in the so-called Schrems II case has dominated our discussions this summer. Nevertheless, Artificial Intelligence (AI) occupies a privileged seat among the data protection hot topics of 2020.

While AI is far from being new, it has only recently become ‘mainstream’. Progress in computing and transmission hardware and software has paved the way for embedding AI components in many products and services for the general public. Expectations about the increasing use of AI and the economic advantages it offers to those who control the technology, together with its appetite for data, have given rise to fierce competition for technological leadership. In this competition, the EU strives to be a frontrunner while staying true to its own values and ideals.

As a first step towards an EU regulatory framework to address the human and ethical implications of AI, the European Commission launched two public consultations on its Communication “A European strategy for data” (the Data Strategy) and its “White Paper on Artificial Intelligence – A European approach to excellence and trust” (the White Paper).

This blogpost provides the highlights of our main observations, and we invite you to read the full details of our assessments in the EDPS opinion on the Data Strategy and the EDPS opinion on the White Paper.

1. Artificial Intelligence – yes, but the European way

We appreciate that the Commission refers to a European approach to AI, grounded in EU values and fundamental rights, and reiterates the need for compliance with European data protection legislation. It is equally important to have a coherent approach throughout the Union: any new regulatory framework for AI should be the same for EU Member States and for EU institutions, offices, bodies and agencies alike.

However, let us be prudent. AI comes with its own risks and is not an innocuous, magical tool that will harmlessly heal the world. For example, the EC White Paper considers the rapid adoption of AI by public administrations ‘essential’ in hospitals, utilities and transport services, financial supervision, and other areas of public interest, but we believe that prudence is needed. AI, like any other technology, is a mere tool and should be designed to serve humankind. Benefits, costs and risks should be weighed by anyone adopting a technology, especially by public administrations, which process great amounts of personal data.

The increasing adoption of AI has not (yet?) been accompanied by a proper assessment of its likely impact on individuals and on our society as a whole. Think especially of live facial recognition (remote biometric identification in the EC White Paper). We support the idea of a moratorium in the EU on the automated recognition of human features in public spaces: not only faces but, importantly, also gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals.

Let’s not rush AI: we have to get it right, so that it is fair and serves individuals and society at large.

2. A Data strategy for Europe

The COVID-19 crisis has made the importance of data availability and citizens’ trust crystal clear. There is more: it has also revealed that the protection of personal data is not a problem, but part of the solution. If data spaces remain true to European values, they could pave the way for an open, fair and democratic alternative to the currently predominant business model, characterised by an unprecedented concentration of data in the hands of a few powerful players. If an equitable and sustainable approach is adopted, data spaces could be the missing medium through which individuals are empowered to share their data while benefiting from a more transparent overview of the multiple uses of their data.

The context in which the consultation for the Data Strategy was conducted gave a prominent place to the role of data in matters of public interest, including combating the virus. This is good and right, as the GDPR was crafted so that the processing of personal data serves humankind. Conditions under which such “processing for the public good” can take place already exist, and without them the necessary trust of data subjects would not be possible.

However, there is substantial persuasive power in narratives nudging individuals to ‘volunteer’ their data for highly moral goals. Concepts such as ‘data altruism’ or ‘data donation’ and their added value are not entirely clear, and their scope and possible purposes need to be better defined and laid down, for instance in the context of scientific research in the health sector. The fundamental right to the protection of personal data cannot be ‘waived’ by the individual concerned, be it through a ‘donation’ or through a ‘sale’ of personal data. The data controller remains fully bound by the personal data rules and principles, such as purpose limitation, even when processing data that have been ‘donated’, i.e. when the individual has given consent to the processing.

The Commission’s strategies on data and Artificial Intelligence are well aimed at ensuring a European approach that remains true to our values.

It will be crucial to see how they are specified and put into action. In line with our institutional mission, we stand ready to advise the Commission and the EU legislator on how to ensure that technological leadership does not come at the cost of undermining fundamental rights and freedoms.

The AI regulation project and the data strategy are both long-term endeavours, and we expect to see them evolve substantially in light of the flow of contributions that the public consultations have gathered. As part of the EDPS 2020-2024 Strategy “Shaping a Safer Digital Future: a new Strategy for a new decade”, we will closely monitor developments.