
A smart approach: counteract the bias in artificial intelligence

Giovanni Buttarelli

Despite its name, artificial intelligence (A.I.) is a reality and, though much hyped, has woven its way into everyday life: navigation systems, spam filters and weather forecasts, to name but a few. Such is the potential influence of A.I. that it can be found on the political agenda, and both the White House and the House of Commons have published reports on the subject.

This amount of attention shows that it is not too early to talk about the impact of A.I. Its application is already quite widespread and its effects on data protection and privacy are evident. Investing now to consider the societal impact and related ethical issues will not slow down innovation but will provide a sound foundation for further development.

Artificial intelligence is the development of machines that can apply techniques beyond fixed programs, which makes it look as if they ‘think’ like a human to perform tasks that usually require human intelligence, such as visual perception, speech recognition, decision-making and translation.

Machine learning is one of the most researched subsets of A.I. and involves the construction of algorithms that can learn from data and make predictions using it. Arthur Samuel described it as giving computers the ability to learn without being explicitly programmed.
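The contrast with a fixed program can be sketched in a few lines. In this toy illustration (all numbers and names are invented for the example, not drawn from the article), instead of hard-coding the rule "output is twice the input", the program estimates that rule from example pairs using a least-squares fit:

```python
# Learning from data rather than explicit programming: the rule y = 2x is
# never written into the code; it is recovered from example pairs.

def fit_slope(xs, ys):
    """Least-squares slope of a line through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Training examples generated by the hidden rule y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

slope = fit_slope(xs, ys)   # the program discovers the rule: slope is 2.0
print(slope * 10)           # prediction for an unseen input x = 10 -> 20.0
```

Real machine learning systems fit models with millions of parameters rather than a single slope, which is precisely why their internal representations become hard to inspect.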

The internal models that computers create when applying machine learning, and which they then use for solving problems, are generally so different from the way we look at the world that humans cannot understand them. Machine learning algorithms represent knowledge in structures that cannot be translated into a form intelligible to us without sacrificing their meaning.

This has serious implications for data protection: we may not have the appropriate information about how our personal data is used and, importantly, how decisions concerning us are taken, making it impossible to consent meaningfully to the use (processing) of our data.

Another concern relates to the bias induced via the data used to teach the computer to understand and predict the part of the world with which it is dealing. As the machine learns from the information provided and has no means to contrast that information with a bigger picture, whatever bias is contained in the initial data set will influence the predictions made. If those predictions are used to take decisions, a vicious circle is created in which the feedback the machine receives reinforces the initial bias.
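This feedback loop can be made concrete with a toy simulation (the groups, rates and rounds below are invented for illustration only). A scoring rule is fitted to historical decisions, and its own predictions then generate the next round of "history", so an initial imbalance between two equally qualified groups is reproduced rather than corrected:

```python
import random

random.seed(0)

# Biased history: group A was approved ~80% of the time, group B only ~20%,
# even though both groups are equally qualified in this toy world.
history = [("A", random.random() < 0.8) for _ in range(100)] + \
          [("B", random.random() < 0.2) for _ in range(100)]

def train(data):
    """'Learn' a per-group approval rate from past decisions."""
    rates = {}
    for group in ("A", "B"):
        outcomes = [approved for g, approved in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

for round_number in range(3):
    rates = train(history)
    # Decisions driven by the learned rates feed straight back into the
    # history, so the initial bias persists from round to round.
    history += [(g, random.random() < rates[g])
                for g in ("A", "B") for _ in range(50)]
    print(round_number, {g: round(r, 2) for g, r in rates.items()})
```

After every round, group A's learned approval rate stays far above group B's, illustrating how a model trained on its own decisions cannot escape the bias in its starting data.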

The data protection framework in Europe requires organisations (data controllers) to be transparent about the algorithms they use. This is especially demanding in the world of machine learning, where the algorithms in use may be unknown and unpredictable even to the developers of A.I. systems, as designing the algorithms is part of the machine learning process itself.

As technology develops, we need to ensure that we are prepared for the changes it will bring. We have a window of opportunity to build the right values into these technologies before their mass adoption. The foundation can be laid by bringing together researchers, developers and data protection experts from different areas in broad networks, such as the Internet Privacy Engineering Network (IPEN), which can contribute to a fruitful inter-disciplinary exchange of ideas and approaches.

In the near future, data protection authorities, as supervisors of the use of personal data, will deal with cases where machine learning has been used for taking or supporting a decision.

Without an intelligible model to analyse, determining whether or not an individual’s data protection rights have been breached will require an analysis of the machine learning process itself.

Data protection authorities may soon have to decide whether they should develop their own resource of artificial intelligence expertise to be able to re-create and analyse the models used by the organisations under their supervision.

For this reason, data protection authorities, gathered at their annual global assembly, the International Conference of Data Protection and Privacy Commissioners, held in Marrakesh in October 2016, chose the challenges of artificial intelligence as the main topic of the conference, and the EDPS contributed to the discussions with a background paper.

This article was originally published in The Parliament Magazine supplement on 7 November 2016.