Questions come up across many areas of application. Will police use of facial recognition technology be accurate and fair? Do we want our movements monitored by the authorities? Will driverless vehicles behave safely and responsibly? Will they use the information they collect about traffic and journeys to promote safe transport, or will it fuel manipulative advertising and unwarranted surveillance? More widely, will AI lead to job losses on a massive scale, with a few savvy entrepreneurs reaping the rewards of a shift to an automated workforce? Will deepfakes circulating online make it impossible for us to believe what we see and hear? These are just some of the problems that trouble governments and commentators, and increasingly, those in the tech industry too.
Many are now calling for a coordinated international response to these questions. Regulation of AI was a hot topic at this year’s World Economic Forum meeting, where Google and Alphabet CEO Sundar Pichai spoke of the need for a common regulatory framework for AI. He called for a global approach and praised EU efforts to engage in detail with the technology.
Is self-regulation the right approach?
For some, the best approach to promoting safe and responsible AI is self-regulation. Organisations like the Partnership on AI have developed their own approaches to implementing ethical principles in AI development. With over 100 partners from across industry and civil society, the Partnership on AI seeks to ensure that AI technologies are fair, safe, just, and inclusive.
In China, the 2019 Beijing AI Principles, released by a multi-stakeholder coalition involving major players like Baidu and Tencent, call for “the construction of a human community with a shared future, and the realization of beneficial AI for humankind and nature”.
Individual tech companies are developing their own programmes and principles. Microsoft, for example, has introduced a responsible AI programme to embed its own AI principles into development activity.
This widespread engagement with the ethical issues is impressive. But for many, the need for a specific regulatory structure is becoming more pressing, to ensure both a consistent framework and a level playing field for operators.
Writing in the Financial Times, Sundar Pichai acknowledged Europe’s data privacy framework, the GDPR, as a “strong foundation”. Pichai recognised the need for good regulatory frameworks for AI to address “safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways”.
What about existing laws?
Of course, AI is not developing in a legal vacuum. Many existing legal structures already apply to its development and use.
Where information about individuals is collected and processed, data privacy laws like the EU GDPR and the California Consumer Privacy Act will apply. These control the kinds of information that can be collected, and for what purposes, as well as the information and protections that must be provided to individuals.
Products using AI will fall within product safety rules. In the EU, for example, the General Product Safety Directive, and more specific rules applicable to products like medical devices and vehicles, set a framework for ensuring the safety of products put on the market.
If an AI product or service causes harm, then laws on compensating individuals and businesses for harm suffered will become important. For example, a healthcare technology that screens diagnostic scans but misses an obvious adverse result that a human technician would easily have spotted could become the target of damages claims. Or a vehicle that causes an accident could lead to claims against the developers and operators of the machine learning systems that support its operation.
As AI systems become increasingly sophisticated, the reverse situation could apply. Failure to use an AI technology could itself be the basis for a claim. Under this scenario, a clinician who reviewed diagnostic scans by eye, without the support of a proven AI technology, could be sued for negligently failing to apply the best approach.
Contract law will often be relevant. A consumer or business buying services will normally enter into a contract with the supplier. Contracts will define what information can be used and for what purposes, and the situations in which compensation will be payable. Controls on what is allowed in a contract, particularly where consumers are involved, can restrict areas like exclusion or limitation of liability for harm.
Legal regimes dealing with cybersecurity are also important. These laws may require minimum standards for the protection of data and IT systems, particularly in high-risk contexts like healthcare and energy networks.
Intellectual property laws will be relevant to the ownership of datasets, AI systems and technologies, and the results arising from the use of AI.
And as has been seen in recent years, competition law has a role in examining and controlling the behaviour of large tech players, particularly where they have developed a powerful position in a particular market.
While these existing legal structures provide a framework within which AI can develop, two problems arise. First, they vary considerably from one country to another. Regional harmonisation like that within the EU goes some way towards international uniformity, but detailed differences between one country’s laws and another’s make it very difficult for international businesses to design compliance into their products and services. Second, existing legal structures leave many gaps and uncertainties when applied to previously unseen situations. A couple of examples:
- If an AI system is used to identify a new drug candidate for a particular application, who is the rightful owner of that invention? Is it the developer of the AI system, or even the AI system itself? This is an area currently being explored by the World Intellectual Property Organization (WIPO).
- An AI system supports a surgeon carrying out an operation, but the procedure goes wrong. Allocating liability between the human operator, the AI developer and anyone else involved in deploying the system in that environment may be far more complex than the questions current legal systems are used to answering.
Given these problems, it is not surprising that technology companies are starting to call for specific, and internationally consistent, regulation.
A developing international consensus
Efforts at international level have made some progress in developing agreed principles.
The OECD AI Principles were adopted by 42 countries in May 2019, and formed the basis of the G20’s AI Principles, adopted in June 2019. Success within the G7 has been more mixed: six of the seven member countries are ready to move ahead with a Global Partnership on AI, but the US is less keen, with the Trump administration reportedly concerned that the body would be heavy-handed and would slow down development.
What are governments doing?
A growing number of governments around the world recognise a need to regulate the development of AI, and many have ongoing projects to address it. In many cases, this is twinned with policies aimed at promoting the home-grown AI industry.
Europe has an advanced project to shape the future of AI. The EU has recently launched a set of policy papers including a White Paper aiming to foster “a European ecosystem of excellence and trust in AI” and a Report on the safety and liability aspects of AI.
China’s Ministry of Science and Technology released a set of eight principles for AI governance in 2019. These include fairness and justice, respect for privacy, security and open collaboration.
In February 2019, President Trump signed an executive order launching the American AI Initiative. This gives Federal agencies a role in establishing guidance for AI development and use across different technologies and sectors. A consultation on draft Guidance for Regulation of Artificial Intelligence Applications, directed to Federal agencies, is currently open for comment. It calls for consideration of ten principles, including public trust, public participation, risk assessment and management, safety and security, and disclosure and transparency.
Looking across these and various other programmes and initiatives, the same themes often emerge. National approaches vary considerably in emphasis to reflect local priorities and culture, but there is a striking level of agreement about the issues to be addressed and the broad principles to be followed.
What elements are likely to be addressed in AI regulation?
We can expect to see the following areas addressed in AI regulation:
- privacy: measures to ensure that data about individuals is kept safe and not used in ways they have not agreed to.
- cybersecurity: steps to counter the risk of hacking, to avoid theft of data and falsification of important information.
- explainability and transparency: requirements to make clear how a system works, so that individuals affected by its decisions can understand how those decisions are made.
- fairness and the elimination of bias: controls to prevent systems from operating unfairly, and particularly to avoid bias on the basis of characteristics like gender, race or ability (a simple illustration of the kind of statistical check involved follows this list).
- safety: risk management measures applied throughout the development process to ensure safe products and services, and to enable traceability of processes and decisions in use.
- accountability: steps to enable the allocation of liability in a complex ecosystem.
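To make the fairness point concrete: regulators and auditors increasingly expect measurable checks rather than bare assertions. The Python sketch below computes a simple disparate impact ratio between two groups. It is purely illustrative; the function names, the example data and the 80% threshold (echoing the “four-fifths rule” used in US employment contexts) are our own assumptions, not requirements drawn from any of the frameworks discussed above.

```python
# Minimal sketch of a fairness check of the kind a regulator might expect.
# Assumptions (illustrative, not from any regulatory text): binary
# decisions, one binary protected attribute, and an 80% threshold.

def selection_rate(decisions):
    """Share of positive (e.g. 'approve') decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_group_a, decisions_group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a = selection_rate(decisions_group_a)
    rate_b = selection_rate(decisions_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan decisions (1 = approved), split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:  # illustrative threshold only
    print("Potential adverse impact: investigate before deployment.")
```

A check like this is only a starting point; fairness has several competing statistical definitions, and which one a regulator adopts will shape what developers must measure and document.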
Takeaway points
Governments around the world have AI under the microscope. Policy development tends to put coherent regulation second to strengthening national competitiveness, but AI-specific regulation and legal change are certainly on the agenda.
Businesses involved in the development and use of AI technologies should see this as an opportunity to shape the future. Broad principles have been hammered out internationally, and these are likely to inform the next steps. But the detail is yet to be worked out, and not all countries see an international approach as the best way forward. Now is the time to make the argument for international consistency and workable solutions to these questions.