AI systems on offer range from basic chatbots to complex screening processes. Developed by third-party suppliers, they are advertised as secure, effective ways to achieve an organisation’s aims. However, a suitable and reliable system is likely to require significant investigation and investment, and careless or hasty selection can lead to legal, regulatory and operational risk. Health and care organisations looking to invest in this area should ensure that their procurement processes are not overly focussed on cost, given that any initial saving could later be lost to regulatory fines, litigation and reputational damage.
The first step in the decision-making process is to be clear about what kind of system is sought and why it is needed. Any AI implemented must be appropriate for the task and capable of performing as required. The outputs wanted from such a system must be identified, as well as the extent to which the AI will need to integrate with existing arrangements.
Where individuals’ data will be processed, the legal requirement to implement privacy by design means that both the planning time for AI use and the level of detail sought from suppliers during the procurement process will be considerable. The supplier should provide sufficient information for the organisation to understand any automated decision-making, ascertain how and with what data sets the AI has been trained (to minimise the risk of building bias and discrimination into the process), confirm the security of the system, and satisfy itself as to the accuracy of the AI. Failure to obtain, review and attach due weight to such information is likely to breach the principle of accountability. Recent guidance published by the Department for Science, Innovation and Technology also strongly recommends piloting any AI technologies with a diverse range of potential users prior to adoption.
As well as completing a thorough Data Protection Impact Assessment (DPIA), organisations must also consider the wider issues of equality, human rights and ethics. Data processing is only permitted where it is lawful, transparent and fair, and the analysis of fairness is multi-faceted. The use of AI in recruitment, for example, is highly controversial. Is it fair to use AI to attempt to track a job applicant’s emotional response in a video interview? Would decisions made on such a basis, resting on the unconscious movements and expressions of an applicant that would not normally be noticed, be unfair? What about automatically rejecting applications because statistical analysis of the phrasing used by the candidate doesn’t indicate sufficient enthusiasm? Are such determinations well-founded, or based on pseudo-science? Is there a risk of discrimination, for example against stroke victims or those with facial disfigurements, those with darker skin tones (which many systems struggle to detect accurately), or those raised in different cultures or with different native languages? How will such risks be removed?
Lawful implementation of AI solutions is possible, but requires proper resourcing and attention. Health and care organisations considering using such tools should seek appropriate legal advice to avoid the inherent pitfalls.