
What do we learn from Machine Learning?

Giovanni Buttarelli

The history of Artificial Intelligence (AI) can be seen as a sequence of rising expectations and frustrating disappointments. Unlike the usual hype cycle for new technologies, AI has already experienced several cycles of “peaks of inflated expectations” and “troughs of disillusionment” in the sixty years since the term “artificial intelligence” was first coined by Stanford professor John McCarthy, considered one of the “fathers” of AI, who in 1956 defined it as the “science and engineering of making intelligent machines”. When expectations for the possibilities of AI were high, and rapid progress seemed likely, popular culture often reflected the hopes and fears associated with the human fascination with “artificial beings”. One landmark representation of this cultural reflection of scientific and technological advancement is Stanley Kubrick's 1968 film “2001: A Space Odyssey”, the screenplay of which he wrote with Arthur C. Clarke. One of the main characters is the computer HAL 9000, so advanced in applying intelligent reasoning that it has developed a conscience and suffers from what might be termed a psychological conflict and personality disorder - with fatal consequences for all but one of the spaceship crew under its control.

While the story reflected the expectations of scientists at the time of its making, we now know that by 2001 neither space technology nor computer science was advanced enough to send an AI-controlled spaceship to Jupiter. More time will be needed to get there, despite mankind’s remarkable achievements and continued research on the relevant technologies.

One extraordinary feature of HAL 9000, the fictitious computer running the spaceship, was its ability to learn. The authors of the script envisaged HAL as a general-purpose device, which would acquire knowledge and capabilities by learning from its makers and other people. This contrasted with the computers of the time, which could only execute precisely designed programs - something that is still the case for most computers today.

Machine learning has been a central discipline in the field of AI for decades, and the recent progress in this discipline has played a central role in the rise of interest in AI. Progress in computer hardware and software, which has enabled faster operations, the processing of larger amounts of data, and new storage and communication possibilities, makes it possible to apply machine learning technologies to new and bigger tasks and to advance other disciplines of AI. Natural Language Processing, Image Recognition and all kinds of operations based on data analysis are making significant progress thanks to machine learning.

These new applications are significant enough to have caught the attention of the public. One of the top-ranked academic conferences on AI, Empirical Methods in Natural Language Processing (EMNLP) 2018 - where 'empirical' may properly be understood to mean data-driven - took place in Brussels a few days after the global data protection and privacy community held its annual meeting here. At EMNLP, researchers from academia and from the big technology firms reported new results in applying machine learning technology to enable computers to communicate better with humans in speech or writing - for example, in conducting dialogues about images, searches or health information. Research might also help create better tools to recognise hate speech or deceptive texts. The speed with which new research results become part of everyday products and services is astonishing, but it raises concerns that the urge to be the first to launch a new service, and the competition for market share, may overrule considerations about the societal impact of new AI services, or even prevent a proper assessment of this impact on society and the fundamental rights of individuals.

Few authorities monitor the impact of new technologies on fundamental rights as closely and intensively as data protection and privacy commissioners. At the International Conference of Data Protection and Privacy Commissioners, the 40th ICDPPC (which the EDPS had the honour to host), commissioners continued the discussion on AI which began in Marrakesh two years ago with a reflection paper prepared by EDPS experts. In the meantime, many national data protection authorities have invested considerable effort in, and contributed substantially to, the discussion. To name only a few, the data protection authorities of Norway, France, the UK and Schleswig-Holstein have published research and reflections on AI, ethics and fundamental rights. We all see that some applications of AI raise immediate concerns about data protection and privacy; but it also seems generally accepted that there are far wider-reaching ethical implications, as a group of AI researchers also recently concluded. Data protection and privacy commissioners have now made a forceful intervention by adopting a declaration on ethics and data protection in artificial intelligence, which spells out six principles for the future development and use of AI - fairness, accountability, transparency, privacy by design, empowerment and non-discrimination - and demands concerted international efforts to implement such governance principles. Conference members will contribute to these efforts, including through a new permanent working group on Ethics and Data Protection in Artificial Intelligence.

The ICDPPC was also chosen by an alliance of NGOs and individuals, The Public Voice, as the moment to launch its own Universal Guidelines on Artificial Intelligence (UGAI). The twelve principles laid down in these guidelines extend and complement those of the ICDPPC declaration.

We are only at the beginning of this debate. More voices will be heard: think tanks such as CIPL are coming forward with their suggestions, as will many other organisations.

At international level, the Council of Europe has invested efforts in assessing the impact of AI, and has announced a report and guidelines to be published soon. The European Commission has appointed an expert group which will, among other tasks, give recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.

As I pointed out in an earlier blogpost, it is our responsibility to ensure that the technologies which will determine the way we and future generations communicate, work and live together are developed in such a way that respect for fundamental rights and the rule of law is supported and not undermined. Developing such technologies in countries with the least protection for fundamental rights, controlled by authoritarian regimes, will not provide us with a sustainable and viable future infrastructure. The current debate on ethics will point to the ethical principles and values, some perhaps unconscious, to which we must pay particular attention. Policymakers and rule-makers around the globe will have to decide which laws are required to ensure that economic actors adjust their research and development strategies, as well as their business models, to bring them in line with a common understanding of what is morally sustainable in human advancement. We cannot run the risk that the pure profit motive leads to all moral standards being ignored and rewards campaigns and practices which harm individuals, groups and wider society. The recent experiences with social media underline the need for a coordinated approach, driven by ethics and law and supported by an adapted and enforceable framework. In my own part of the world, I will continue to push the EU legislator to complete the modernisation of the EU’s data protection framework through the rapid adoption of a meaningful regulation on communications privacy.

Only by setting an example in AI and other areas of technological change can we motivate the rest of the world to follow the path of democracy and fundamental rights.