The EU institutions have been eyeing the artificial intelligence space for some time – due to concerns over the potential effects of AI on individual fundamental rights; due to Member States’ moves to implement diverse policies and regulations; and also due to the wish to see Europe compete with the US and China as a leader in this sector.
The twin objectives of the proposed Regulation are promoting the uptake of AI and addressing the risks associated with it. An EU-wide regulatory framework may well provide legal certainty and build public trust, and the Regulation does include some provisions to assist SMEs and encourage innovation. In reality, however, the draft Regulation focuses more on controlling undesirable uses and outcomes of AI. Like the GDPR, the proposed Regulation will make industry sit up due to its extra-territorial scope and significant penalties for non-compliance.
1. How is AI defined?
The Regulation would cover AI systems, i.e. “software that is developed with one or more of the techniques and approaches listed [below] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with […]
a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
c) Statistical approaches, Bayesian estimation, search and optimization methods.”
Users would include public authorities and EU institutions. However, AI systems developed or used exclusively for military purposes will not be covered.
Comment: This seems a very wide definition of AI – other definitions, for example, place more emphasis on AI perceiving data from its environment, which might exclude some statistical approaches. Of course, the idea is that the Regulation should be technology neutral and sufficiently flexible to accommodate future developments. The Commission is also proposing that it should be able to update the list of covered techniques and approaches from time to time (subject to receiving no objections from the EU Council or European Parliament). From an industry perspective, an additional requirement for consultation before the list is updated might provide more confidence and allow better planning.
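To illustrate how far the definition could reach, the sketch below is a purely hypothetical example (scikit-learn is used only for illustration; the Regulation names no libraries or products): a few lines of Bayesian estimation generating a credit-style prediction would, on a literal reading, already constitute an “AI system”.

```python
# Purely illustrative: a trivial statistical model that arguably falls within
# the proposed definition of an "AI system" - Bayesian estimation (limb (c))
# generating "predictions... influencing the environment" for a human-defined
# objective. All data and the lending scenario are hypothetical.
from sklearn.naive_bayes import GaussianNB

# Hypothetical applicant features (age, income) and past approve/decline labels.
X_train = [[25, 30_000], [40, 85_000], [35, 52_000], [52, 120_000]]
y_train = [0, 1, 0, 1]  # 0 = decline, 1 = approve

model = GaussianNB()        # Bayesian estimation, as listed in limb (c)
model.fit(X_train, y_train)

# The "output" that would bring the system within the definition
# (and, as credit scoring, potentially within the high risk list too).
print(model.predict([[30, 45_000]]))
```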
2. Who is covered?
The Regulation would apply to:
a) providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country;
b) users of AI systems located within the EU;
c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.
Comment: The intention is clearly to address the AI ecosystem affecting the EU – not just technology providers but also users, although the term “users” seems intended to cover business users rather than end consumers of products incorporating AI, which could need some clarification and may be seen by some as a gap. The Regulation also has extraterritorial effect by catching those outside the EU that make AI systems or their outputs available in the EU. While there is always a doubt over how this would be enforced, EU standards could not be avoided by setting up in, or exporting input data for processing to, a third country (so once again the UK will not entirely escape the EU regulatory orbit…).
3. How will AI systems be regulated?
AI systems will effectively be subject to a four-tier classification:
a) Prohibited AI Practices (never permitted)
b) High Risk AI Systems (permitted and regulated – these are the main target of the Regulation)
c) AI Systems intended for interaction with individuals (these will be subject to transparency obligations)
d) All others (not affected, except that industry will be encouraged to draw up codes of conduct under which non-high risk AI systems voluntarily adopt the high risk requirements, along with other standards such as sustainability and accessibility)
Comment: This risk-based approach seems balanced enough – there are some practices falling within the broad definition of AI (e.g. product, music, or media recommendation or search engines) which clearly shouldn’t be subject to the same level of regulation as others, though of course the boundaries between the tiers will be all important.
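Purely as an illustration for anyone triaging an AI portfolio against these tiers, a rough (and deliberately simplified) sketch might look like the following – the categories and keyword matching are our own shorthand, not the Regulation’s legal tests:

```python
# Hypothetical triage helper mapping example use cases onto the four tiers.
# The matching rules are illustrative only; real classification turns on the
# Regulation's definitions, the prohibited practices and Annex III.
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high risk - conformity assessment and ongoing obligations"
    TRANSPARENCY = "transparency obligations (interaction with individuals)"
    MINIMAL = "no new obligations (voluntary codes of conduct encouraged)"

def triage(use_case: str) -> Tier:
    prohibited = {"social scoring", "subliminal manipulation"}
    high_risk = {"credit scoring", "recruitment screening", "exam scoring"}
    interacts = {"customer service chatbot", "emotion recognition"}
    if use_case in prohibited:
        return Tier.PROHIBITED
    if use_case in high_risk:
        return Tier.HIGH_RISK
    if use_case in interacts:
        return Tier.TRANSPARENCY
    return Tier.MINIMAL

print(triage("credit scoring"))        # Tier.HIGH_RISK
print(triage("music recommendation"))  # Tier.MINIMAL
```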
4. What AI practices will be banned?
The practices that are deemed to present unacceptable levels of risk for EU values and individuals’ fundamental rights are:
a) use of subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause harm.
b) exploitation of vulnerabilities of a group due to their age, physical or mental disability in order to materially distort the behaviour of a person in that group in a manner that causes or is likely to cause harm.
c) evaluation or classification of the trustworthiness of individuals based on their social behaviour or known or predicted personal or personality characteristics, which may lead to detrimental or unfavourable treatment (i.e. social scoring).
d) use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except: (i) for targeted searches for specific potential victims of crime; (ii) to prevent a threat to life or a terrorist attack; or (iii) to detect, localize, identify or prosecute a perpetrator or suspect of a serious crime (where prior judicial/administrative authorisation has been obtained).
Comment: These seem fairly predictable “red lines” – and it’s helpful that what is banned are use cases, not systems as such. However, point (a) could warrant some clarification – where does persuasive marketing stop and where do illegal dark patterns start? And point (d) seems to enjoy very broad exceptions (even from the need for a prior court order in cases of “urgency”). The fact that the ban on ‘real-time’ remote biometric identification is limited to law enforcement also seems to indicate that such identification is permitted for commercial purposes, which may generate concerns for some.
5. What are considered high risk AI systems?
AI systems are considered “high-risk” if they are:
i) intended to be used as a safety component of a product, or are themselves a product covered by EU product safety legislation (e.g. machinery, toys, lifts, radio equipment, recreational craft equipment, medical devices); or
ii) listed in Annex III to the Regulation, which includes:
1. Biometric identification and categorisation of individuals (‘real-time’ and ‘post’);
2. Management and operation of critical infrastructure (road traffic, supply of water, gas, heating and electricity);
3. Education and vocational training (decisions on access or assignment to educational and vocational training institutions; assessments and admission tests);
4. Employment, worker management and access to self-employment (recruitment or selection; decisions on promotion and termination, task allocation, monitoring and evaluations);
5. Access to and enjoyment of essential private services and public services and benefits (eligibility for public benefits, credit scoring, dispatching of emergency services);
6. Law enforcement (offending or reoffending risk assessments, lie or emotional state detection, detection of deep fakes, evaluation of criminal evidence, profiling and risk assessment of criminal behaviour, crime analytics);
7. Migration, asylum and border control management (lie or emotional state detection, risk assessment of individuals, verification of travel documents, examination of applications for asylum, visa and residence permits);
8. Administration of justice and democratic processes (researching and interpreting facts and the law).
The Commission may also update this list, but in this case updates would need to be based on an assessment that the systems pose similar or greater risks than those already on the list.
Comment: The medical devices category may well cover some use cases, but healthcare more generally might have been expected to feature on this list.
6. What are the rules for high risk AI systems?
High-risk AI systems will need to:
- have in place a risk management system;
- where data is used for training, validation and testing, ensure that such data meets quality criteria including being “relevant, representative, free of errors and complete”. This data should also be accessible via API or other remote means to Member States’ authorities. Where “necessary” and upon “reasoned request” such authorities must also be given access to the source code of the relevant systems and relevant documentation;
- be accompanied by prescribed technical documentation;
- generate logs capable of tracing the system’s functioning and monitoring risks (a minimal sketch of what such logging might look like appears at the end of this section);
- be sufficiently transparent to enable users to interpret the system’s output and use it appropriately;
- be capable of effective human oversight (and not override or reject human control), including the ability for a human to disregard or reverse the output and to press a “stop” button;
- meet appropriate levels of accuracy, robustness and cybersecurity.
In addition to ensuring these requirements are met, providers of high risk AI systems must:
- for AI systems which enable biometric identification and categorization of individuals, ensure that they undergo the relevant conformity assessment procedures through an official notified body (with additional obligations for credit institutions) (for other high risk systems, internal assessment procedures will be sufficient);
- register in a new publicly available EU database;
- inform Member State authorities of any non-conformities and corrective actions;
- have in place a quality management system ensuring compliance with the Regulation;
- apply the CE mark of conformity where applicable;
- appoint a representative in the EU if not established there;
- have in place a post market monitoring system and plan.
Importers and distributors also have obligations to verify compliance with the Regulation, to notify competent authorities of cases of non-conformity and, within 15 days, to report any serious incident or any malfunction which breaches EU laws.
Users of high risk AI systems must use them only in accordance with the instructions, where applicable ensure data inputs are relevant, monitor usage, keep logs, and use the information provided by providers to carry out data protection impact assessments where required under the GDPR.
Comment: Again, there may be scope for more precision here – “free of errors” and “sufficiently transparent to enable users to interpret” outputs are subjective concepts that will be difficult to meet with certainty.
The focus on data is interesting (infringements of the data provisions of the Regulation fall into the category attracting the highest penalties). What we don’t have here is any obligation that algorithms or decisions should be of a certain quality, other than that systems should provide appropriate levels of “accuracy” and “robustness”. Obviously there are overlaps with other laws here, and bias, unfairness, or discrimination may be hard to detect and eliminate (and the data may in reality be the determining factor), but this is something we may see moves to strengthen.
Also, most high risk systems will not need to go through external conformity assessments. Some will query whether internal processes are sufficient where we are still talking about critical systems such as energy supply or criminal justice.
Granting authorities access to our source code sounds onerous; however, possibly the more relevant (and tougher) provision is that we should as a matter of course provide them API access to our datasets.
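Picking up the logging and traceability requirement above, a minimal sketch of what decision-level logging could look like is set out below. The schema, field names and file format are our assumptions for illustration – the draft Regulation prescribes no particular format:

```python
# Illustrative, append-only decision log to support traceability of a high risk
# AI system's outputs. Schema and storage choices are assumptions, not
# requirements set out in the draft Regulation.
import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, output, confidence: float,
                 human_override: bool = False,
                 path: str = "ai_decision_log.jsonl") -> dict:
    """Write one traceability record per automated output (JSON Lines format)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
        "human_override": human_override,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage for a credit decision.
log_decision("credit-model-1.3", {"age": 30, "income": 45_000}, "approve", 0.81)
```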
7. What about systems intended to interact with individuals?
The main additional obligation on systems intended to interact with individuals (whether or not high risk) is transparency – the systems need to be designed and developed in such a way that individuals are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context.
The obligation excludes criminal prosecution-related systems but covers other emotion recognition and biometric categorisation systems. “Deep fakes” must also be disclosed as not being authentic.
Comment: The definition of “interact” may need some clarification – presumably we interact with a search engine on a website; but do we interact with e.g. Spotify’s or Netflix’s recommendation engine which automatically presents recommendations that we can ignore? It would seem strange and arbitrary if one needed to present a warning but the others didn’t.
8. Enforcement and penalties
Where a system is not in compliance with the Regulation, Member State authorities will be entitled to demand that the operator take appropriate corrective actions, withdraw it from the market, and/or make a product recall. Where the non-compliance extends across EU borders, they should inform the Commission and/or other Member States. The Commission may also intervene if measures taken by one Member State are not approved of by another, or where a Member State finds that a system that does comply with the Regulation still poses risks to health and safety or breaches other laws.
As regards penalties, Member States will set out the relevant rules and procedures. However, the Regulation sets out the following range of penalties:
- Up to €30m or 6% of the offender’s total worldwide annual turnover for the preceding financial year (whichever is higher) for: (1) engaging in prohibited practices; or (2) breaching the requirements on data and data governance;
- Up to €20m or 4% of annual turnover for non-compliance with any of the other requirements or obligations of the Regulation;
- Up to €10m or 2% of annual turnover for the supply of incorrect, incomplete or misleading information to notified bodies and Member State authorities.
Comment: These potential penalties match or outdo the GDPR’s already eye-catching (and occasionally eye-watering) fines. In some cases of illegal data use, both sets of penalties could be applicable. There is a clear message here that the Commission takes the consequences of AI very seriously. However, although penalties will be applied on a scale and mitigating and aggravating factors should be taken into account, there is sure to be a debate over whether these levels are appropriate and, although they will be applied by Member States according to local procedures, whether additional certainty and justification can be provided at EU level.
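As a simple worked illustration of how the “whichever is higher” caps operate (the turnover figure is hypothetical):

```python
# Illustrative calculation of the maximum fine under each band:
# the higher of the fixed cap and the percentage of worldwide annual turnover.
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical group turnover of EUR 2bn
print(max_fine(turnover, 30_000_000, 0.06))  # 120,000,000 - the 6% limb applies
print(max_fine(turnover, 20_000_000, 0.04))  #  80,000,000
print(max_fine(turnover, 10_000_000, 0.02))  #  40,000,000
```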
9. Final Comments
The European Commission has set out an ambitious draft Regulation clearly geared towards “ethical AI”. The proposal will need to be agreed with the EU Council and the European Parliament (which, judging from past pronouncements, may be expected also to seek further reinforcement of individual rights and protections). Although the Regulation is intended to reflect the state of the art, it casts a wide net and would bring many high value AI systems into the EU product conformity regime.
Unlike the GDPR, the Regulation doesn’t seek to create new rights for individuals affected by AI decisions. So there is no EU-wide right to information for consumers or to an effective judicial remedy against AI operators or competent authorities. There is of course an overlap with GDPR provisions on fair and lawful data processing and automated decision-making, and with product safety laws, but this seems an area ripe for debate.
On the other hand, as under the GDPR, there will be increased focus on building systems that incorporate the provisions of the Regulation by design, and compliance will imply a significant effort in producing the required documentation and obtaining the necessary marketing approvals.
Civil liability may also be an issue – a biased output of an AI system may be caused by data or software which are working perfectly and may be hard to bring within the definition of a “defect”. And who is responsible for decisions taken autonomously by an AI system? The provider, the business user, the programmer, the data provider? As under EU general product safety regulations, if importers or distributors market products under their own names, they also become responsible for the manufacturer’s obligations. We’ll need to review insurance policies and distribution contracts and decide to what extent they do or should cover these liabilities.
The proposal will be heavily debated – probably (inevitably) it goes too far for those who want to encourage innovation and investment; and probably (also inevitably) not far enough for those whose focus is individual rights. The GDPR fell into a similar category and took around four years to agree. In a fast moving sector, stay tuned for plenty of lobbying and amendments before this becomes effective. As with the GDPR, there will be two years from final adoption to prepare, but we should take the opportunity to engage with this now in order to be well prepared (CE marks and conformity tests will require significant resources) and to provide practical input where appropriate.