A new GDPR for AI?

Published 05 May 2021

By Eva Jarbekk

The GDPR has 99 articles. The European Commission's newly proposed regulation on AI has 85. The Commission is proposing very detailed rules on how AI may lawfully be developed in Europe going forward.

Most will agree that AI offers highly interesting possibilities, both for the good of society and for new commercial opportunities. At the same time, it is recognized that AI can also be used in less attractive ways. There are ample examples of bias in algorithms with unfortunate results.

It is definitely an ambitious project the Commission is launching. It is the most detailed regulation of AI anywhere in the world, much the same way the GDPR was the most advanced privacy regulation in the world. It is expected to generate a lot of discussion, and it will most likely take some time before the regulation enters into force. Substantial changes to the provisions can be expected along the way. It may not yet be the time to study the provisions in detail, unless you are a lobbyist. Still, if you make or use AI, you should take the time to study this regulation.

Like the GDPR, it is a regulation and will therefore apply in the same manner in all European countries, as well as in Norway. The main approach of the regulation is that different types of AI must comply with different sets of rules. High-risk AI is more heavily regulated than AI with less serious consequences. The threshold for qualifying as a so-called high-risk AI system is actually quite low: even an AI system used in employment processes is considered high-risk, and such systems are already in use by many today.

All high-risk AI systems must comply with a large set of requirements. They must have a risk management system and undergo a conformity assessment, closely resembling the data protection impact assessments set out in the GDPR. To develop a high-risk AI system, one must also meet particular criteria for the development, validation and testing of data sets. It is fascinating to see that the rules require human oversight of the operation of high-risk AI systems, in order to prevent or minimise risks to health, safety or fundamental rights arising from the system. This oversight shall be ensured and built into the system before it is placed on the market. It will be interesting to see how this is supposed to be carried out in real life.

Not only the developers, but all parties involved in commercializing AI must meet new criteria and documentation requirements. The regulation addresses developers and providers of high-risk AI systems, as well as users of such systems. It even applies to importers of AI systems from third countries. In this way, the regulation also has an extraterritorial effect, much the same way as the GDPR.

There are also detailed provisions on notification to public supervisory authorities, including a duty to report serious incidents and malfunctioning. This is a well-known concept from the GDPR, but the timeframe is quite different. Under the GDPR, notice must be given within 72 hours of a breach. For AI, it is suggested that notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning, or the reasonable likelihood of such a link, and in any event no later than 15 days after the provider becomes aware of the serious incident or malfunctioning.

The sanctions for breach are set to be exactly the same as under the GDPR: €20 000 000, or 4% of total worldwide annual turnover. For particular breaches, the sanctions are even higher, up to €30 000 000 or 6% of turnover.

The coming discussions will most likely cover many aspects of the regulation. Is the threshold for "high-risk" set too low? Are the criteria too many and too complicated? And what about the sanctions: are they too high? We live in interesting times.