As the EU’s ground-breaking AI Act moves into its final stages, we consider what its impact will be, with a focus on the life sciences sector.
AI has potential uses across many relevant disciplines, from drug discovery through clinical trials to marketing and pharmacovigilance. It also plays an increasing role in medical devices, for example in diagnostics and in interaction with users. While these applications are not the main focus of the regulation, innovators will need to take account of the law as it is finalised in order to avoid surprises and ensure that they can achieve compliance where necessary.
The EU’s proposed AI Act has finished its passage through the European Parliament, leading to final negotiations at Council level and possible conclusion of the text by the end of 2023. This will not be a rubber-stamping exercise: national contributions to the process so far have set out a number of concerns on issues such as member state autonomy and restriction of innovation. However, the prospect of finalising the text this year does now seem realistic.
Introduced in 2021, the legislative proposal set out a technology-neutral, risk-based system designed to provide proportionate control while allowing the positive potential of AI to be realised for both people and the environment.
We reviewed the initial proposal here. Several of the key features remain in place: broad application, extraterritorial effect, tough penalties, and a risk ladder running from minimal to unacceptable risk.
The draft law includes a set of core principles that are intended to guide the development of all AI systems:
- human agency and oversight;
- technical robustness and safety;
- privacy and data governance;
- transparency;
- diversity, non-discrimination and fairness; and
- social and environmental well-being.
Changes introduced through the legislative process include greater emphasis on supporting innovation and SMEs. Exemptions have been added for research activities and for AI components provided under open-source licences. The new law also promotes so-called regulatory sandboxes: controlled real-life environments, established by public authorities, in which AI can be tested before it is deployed.
The Act groups AI systems into risk categories, with obligations scaled to the level of risk:
- Specified “unacceptable risk” categories of AI will be prohibited. These range from systems that use subliminal, deliberately manipulative or deceptive techniques through to those intended to influence the outcome of an election or referendum. For the life sciences sector, potentially relevant prohibited areas include biometric categorisation systems that sort individuals according to sensitive or protected attributes (such as gender, race or ethnic origin).
- Systems in the next level – high-risk – will have to demonstrate compliance with principles and codes of conduct to ensure that they are used safely and in accordance with EU norms. This category includes biometric systems generally, as well as systems used to assess eligibility for public services, including healthcare. The list of high-risk systems will be subject to review and updating.
- General-purpose AI systems will be required to comply with transparency obligations and to ensure safeguards against the generation of illegal content.
- Many AI systems will fall into the lowest risk category and not be subject to regulation, although they will be expected to adhere to voluntary codes of practice.
An independent EU-level AI Office will be established to oversee the system and issue guidance, working with national supervisory bodies.
Recently introduced amendments also emphasise the need to take account of potentially overlapping legislation, such as the Medical Devices Regulation ((EU) 2017/745) and the In Vitro Diagnostic Medical Devices Regulation ((EU) 2017/746), and provide for a stronger role for national supervisory authorities.
The legislation is broadly cast, with a clear intention to influence the development of AI technology globally. It will affect both public and private sector entities, wherever they are based, if the AI system is placed on the EU market or its use affects individuals in the EU. Both developers and users of AI systems are covered. Given the breadth of the regulated risk classes, developers would be well advised to monitor the progress of the legislation and, where possible, design their systems to stay clear of the prohibited category and avoid falling unnecessarily within the high-risk category.