Artificial intelligence is on the lips of politicians everywhere. AI is on the agenda for research institutes, international organisations and national governments around the world. The focus is welcome. Given the pace of change in AI, now is the right time to act, and an international approach to a technology that crosses borders so readily must be the right one. The question is: are ethics and the law moving far enough, and fast enough, to meet the challenge that this world-changing technology poses?
Do we need regulation?
Professor Stephen Hawking’s dark predictions about a future in which unfettered AI takes over are nearly four years old now. The debate has moved on since then, but the concerns have not gone away. Issues such as the future of employment, the prospect of wealth being concentrated in the hands of a few entrepreneurs, military uses of AI and ever more sophisticated machine learning systems acting for their own benefit rather than for the benefit of human society arise repeatedly. These will need addressing as increasingly sophisticated technology embeds itself in our lives.
There is the voluntary approach.
Organisations like e-commerce pioneer Martha Lane Fox’s Doteveryone are working to promote a voluntary, responsible approach. Doteveryone has developed Responsible Technology principles for those designing and operating technology. These are:
- Context — looking beyond the individual user and taking into account the technology’s potential impact and consequences on society.
- Contribution — sharing how value is created in a transparent and understandable way.
- Continuity — ensuring best practice in technology that accounts for real human lives.
Can a new approach change business for the better?
This voluntary approach ties in with a current trend towards running a business both for profit and as a force for good, exemplified by the B-Corp movement. An analysis by B Lab authors Charmian Love, Michelle Meagher and Jay Coen Gilbert of how to fix the problems currently facing Uber shows how the B-Corp approach can change things.
But the voluntary approach isn’t for everyone. Without regulatory standards, preferably operating internationally, there will always be a mixture of positive and negative social impact.
Others recommend a light touch. Rushing into regulation is risky, the argument goes, because it can stifle innovation in the short term while not being fit for purpose in the longer term. Instead, we can rely on liability law to place responsibility at the door of whoever is in a position to control safety. But is this enough? We have recently seen both the dangers of a light-touch approach to data privacy and the powerful impact of proactive regulation in that field, with the introduction of Europe’s General Data Protection Regulation.
What steps are law-makers taking?
A comprehensive framework for AI has yet to make its way onto the statute book. Addressing the wider issues presented by artificial intelligence and robotics is, it seems, still at the research and analysis stage. Of course, a difficult balance needs to be struck in setting regulatory standards that protect users and put social benefit at the heart of developing technology, while not stifling innovation and enterprise. On top of this, international co-operation is necessary if we are to avoid a piecemeal and conflicting set of legal structures.
At a European level, joint commitment to developing a legal and ethical framework for AI is gaining momentum. On April 10, 2018, most of the EU’s member states, plus Norway, signed a commitment to work together on AI. The aims of the cooperation are threefold: boosting business with a view to economic growth and employment; educating and reskilling people to tackle socio-economic challenges; and building the legal and ethical framework.
Will existing laws be enough?
The European Commission has followed this with a strategy paper (Communication from the Commission on Artificial Intelligence in Europe) showing how some existing legal structures can be adapted to support and frame AI development. Existing laws on product safety and liability, cybersecurity and data protection go some way towards providing a framework within which AI can operate. These areas of law will be assessed and developed to fit developments in AI, as well as in IoT and robotics. But clearly more is needed to address the wider ethical concerns.
The EU Commission plans to co-ordinate a forum for the development of ethics guidelines (the European AI Alliance) within the year.
In the UK, the Government plans to invest £9m in a Centre for Data Ethics and Innovation. This initiative sits within the UK Industrial Strategy, which identifies AI as a key sector for development.
AI and social issues
Alongside this Government effort, the social policy and ethics charity the Nuffield Foundation has recently committed £5m to launch a new institute to examine the ethical and social issues arising from the use of data, algorithms and artificial intelligence. Named after computing pioneer Ada Lovelace, the new institute aims to:
- Convene diverse voices to build a shared understanding of the ethical questions raised by the application of data, algorithms, and AI.
- Initiate research and build the evidence base on how these technologies affect society as a whole, and different groups within it.
- Promote and support ethical practices that are deserving of public trust.
A recent in-depth report by the UK House of Lords ("AI in the UK: ready, willing and able?") recommended a cross-sector AI Code embodying five principles that might form the basis of an international consensus:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Internationally, Japan’s G7 Presidency put AI on the agenda in 2016. A Joint Declaration from the April 2016 meeting committed members “to ensure that our policy frameworks take into account the broader societal and economic implications of such technologies as they are developed while remaining technology neutral.”
The OECD is taking these ideas forward with intensive discussion and debate within the organisation’s two-year “Going Digital” project. As Douglas Frantz, Deputy Secretary-General of the OECD, explains:
“We should not proceed without trying to reach a global consensus on the type of institutional framework necessary to promote and control artificial intelligence as it moves deeper into the social and economic mainstream.”
Difficult though it will be, trying to reach an international consensus on at least a framework and a set of principles to shape the development of AI, and doing so sooner rather than later, is surely the right answer.