Why is there a privacy problem with AI?
AI might be summarised as a collection of technologies and methods that draw on large data libraries as a “control” factor to drive responsiveness to, or enhanced analysis of, “real” data and scenarios. We have become used to real-time data; current AI technology sees computers responding to or interacting with real-time events, in real time. It isn’t true “intelligence”, but it is a significant step beyond the “hard-wired” responsiveness of predecessor technology, and it can involve machines making decisions that affect humans, whether indirectly (e.g. as a result of research that uses AI to reach its findings) or directly. This is one source of the privacy problem: do I want a machine to make decisions that affect me, and do I have a choice?
Effective AI requires oceans of data samples, and this is another source of the privacy problem. AI uses, and can be used to interact with or analyse, large data sets in many different situations, and many of these involve information about individuals. We see its application in healthcare, for example, where information-based decision-making can become quicker and more accurate. Some AI does not use personal data: it might use textual data (e.g. literary fiction), or financial or other statistical data. Where AI needs to use data libraries about humans (e.g. for research), the data is sometimes anonymised so that individuals cannot easily be identified from it, but anonymisation is technically hard to achieve, and under the GDPR individuals should be informed before their data is anonymised. Often, however, the data libraries cannot be anonymised at all. AI is used to assist with facial recognition by law enforcement agencies and service providers such as airports: image data can be analysed to identify individuals in a crowd quickly and efficiently. But identifying an individual, or a type of individual, requires a huge data library of “control” photographs, each of which will unavoidably identify the human subject of the photo.
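To illustrate why anonymisation is technically hard to achieve, the short sketch below (a toy example in Python, using entirely hypothetical records and field names) shows how a data set with names removed can still be linked back to named individuals through “quasi-identifiers” such as postcode, year of birth and sex:

```python
# Illustrative sketch only: why naive "anonymisation" can fail.
# All records and field names below are hypothetical, not a real data set.

# A "de-identified" research extract: names removed, but quasi-identifiers kept.
research_extract = [
    {"postcode": "CB2 1TN", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "CB2 1TN", "birth_year": 1991, "sex": "M", "diagnosis": "diabetes"},
    {"postcode": "LS1 4AP", "birth_year": 1984, "sex": "F", "diagnosis": "migraine"},
]

# A separate, openly available source (e.g. an electoral-roll-style list).
public_register = [
    {"name": "A. Example", "postcode": "CB2 1TN", "birth_year": 1984, "sex": "F"},
    {"name": "B. Example", "postcode": "LS1 4AP", "birth_year": 1984, "sex": "F"},
]

def reidentify(extract, register):
    """Link 'anonymised' records back to named individuals via quasi-identifiers."""
    matches = []
    for record in extract:
        for person in register:
            if all(record[k] == person[k] for k in ("postcode", "birth_year", "sex")):
                matches.append((person["name"], record["diagnosis"]))
    return matches

print(reidentify(research_extract, public_register))
# [('A. Example', 'asthma'), ('B. Example', 'migraine')]
```

In this simplified scenario, two of the three “anonymised” health records are re-identified by cross-referencing a public list, which is why removing names alone rarely amounts to true anonymisation.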
AI’s need for oceans of data brings a subtler privacy headache: acquiring that much data is often beyond the reach of any one individual or organisation. So, where personal data is involved, a number of data controllers need to pool their data (in raw or anonymised form) to feed the AI machine. Data-sharing supply chains are becoming fundamental to AI. Like all supply chains, they need to be carefully managed, because a compliance error by one party in the chain (e.g. failing to provide appropriate information to affected data subjects, failing to establish a legal basis for sharing their data for AI purposes, or failing to comply with rights exercised in relation to automated decision-making) can affect the legal and operational viability of the entire AI operation, as well as the reputation and liability of its participants.
Many have concerns about the rapid growth of AI that makes use of personal data. Data may be collected in situations where individuals are unaware of it. AI may be used to make decisions that shape people’s lives, such as whether they will be invited for interview, offered a credit card, or stopped by the police or border control authorities. The data itself may be particularly sensitive, such as images from diagnostic scans. And the way AI algorithms analyse data and reach conclusions can be difficult to understand, particularly where machine learning is used to improve the system’s effectiveness “on the job”, so it can be difficult to tell individuals what they need to be told under the GDPR.
The use of facial recognition by law enforcement authorities has encountered a strong negative reaction. The ability to compare images taken from CCTV quickly against existing “control” databases to track down suspects raises obvious privacy issues. Indeed, two US cities have recently banned the use of facial recognition technology by public authorities, and others are considering bans. Public concerns over this kind of use are made worse by statistics indicating that, with current technology, most “matches” are wrong, and false positives are more likely with ethnic minority and female faces. The UK Home Secretary recently backed trials of facial recognition technology by British police forces, but noted that legislation would be needed before it could be used more widely.
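As a rough illustration of how false positives arise, the hypothetical sketch below compares a face captured from CCTV (reduced to a made-up feature vector) against a small “control” gallery using a similarity threshold; the vectors, names and threshold are all invented for illustration, not taken from any real system:

```python
# Illustrative sketch only: a similarity threshold in a face-matching system
# trades off missed suspects against false positives. All values are made up.
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical "control" gallery of known faces, reduced to toy feature vectors.
gallery = {
    "suspect_1": [0.9, 0.1, 0.3],
    "bystander_1": [0.8, 0.2, 0.35],
    "bystander_2": [0.1, 0.9, 0.2],
}

probe = [0.85, 0.15, 0.3]   # face captured from CCTV
THRESHOLD = 0.97            # lower thresholds catch more suspects but flag more innocents

scores = {name: round(cosine_similarity(probe, vec), 3) for name, vec in gallery.items()}
print({name: s for name, s in scores.items() if s >= THRESHOLD})
# Both suspect_1 and bystander_1 score above the threshold: one true match, one false positive.
```

Tuning the threshold is a policy choice as much as a technical one: set it lower and more suspects are flagged, but so are more innocent bystanders, which is the trade-off behind the statistics on wrong “matches” mentioned above.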