
The EU Artificial Intelligence Act is almost ready to go!

The Artificial Intelligence Act is a legislative proposal by the European Commission to regulate artificial intelligence (AI). Unlike other countries or territories, the EU is the first legislator to present a comprehensive proposal for the regulation of AI. The EU is attempting a balancing act: to ensure that those affected by AI do not suffer any disadvantages, while at the same time promoting innovation and giving AI as much scope for development as possible.

On 8 December 2023, after three days of debate, the European Parliament and the Council of the European Union reached a provisional agreement on the Proposal for a Regulation laying down harmonised rules on artificial intelligence (the so-called AI Act). The European Parliament's Committees on the Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE) endorsed the proposed text to a great extent, and the European Parliament held its final vote on the AI Act on 13 March 2024. The Act is expected to be passed into law in spring 2024, once it has been definitively approved by the Council of the European Union.

What relevance does this Act have for Switzerland and Swiss companies?

Swiss AI providers that intend to place AI systems on the market or put them into operation within the EU fall under the territorial scope of the AI Act. Considering that many, if not most, AI offerings are made accessible online (browser-based via website or via the cloud), plenty of Swiss AI providers will quickly fall under the AI Act's scrutiny. The AI Act will also apply to Swiss providers and users of AI systems if the results produced by the AI system are used in the EU. In addition, many Swiss AI providers will likely not develop their products for Switzerland alone. As a consequence, the new European standards of the AI Act are likely to become customary in Switzerland over the longer term as well.

How will Switzerland react?

In the past, Switzerland has been rather cautious with regard to legal regulation in the digital business field. In particular, with regard to AI, Switzerland's position has so far been not to regulate AI as an overarching phenomenon, but rather to follow a principle-based approach with sector-specific regulation on an "as needed" basis. For instance, rules on automated vehicles and similar AI tools are governed by road traffic safety statutes and subject to sector-specific regulatory authorities, while rules on automated decision-making by companies are governed by the revised Swiss data protection act; other phenomena are governed in other sector-specific areas (e.g. AI-based human assessment tools may also be governed by employment law statutes and subject to review by public employment supervisory authorities). The underlying rationale of this approach is that it avoids an inflation of bureaucratic procedures, and that AI risks are dealt with by the most competent sector-specific authority instead of a generic AI authority which tries to deal with all risks and issues at the same time. Furthermore, as technology tends to develop and change quickly, Switzerland favours "technology-neutral" laws. They remain adaptable to new technical developments and continue to apply general principles to newly evolving phenomena. This reduces the need to revise laws over and over again because they quickly become outdated.

Nevertheless, it is without question that the European AI Act will have a significant impact in Switzerland on a factual basis, i.e. through the establishment of an EU standard which most companies will strive to meet. It is likely that a similar law on AI regulation could also come into force here over time, but it is too early to make predictions on that.

How does the AI Act attempt to regulate Artificial Intelligence?

The core element of the AI Act is a risk-based approach that entails various requirements and prohibitions based on the potential capabilities and risks. The higher the risk of an AI system to the health, safety or fundamental rights of individuals, the stricter the regulatory requirements. The AI Act therefore categorises AI applications into different risk categories with different consequences:

  •    Unacceptable risk (e.g. social scoring) - the use of corresponding AI systems is prohibited.
  •    High risk (e.g. AI systems that are to be used for the biometric identification of natural persons or for the assessment of examinations).
  •    Limited risk or no risk (e.g. a spam filter).

The regulatory approach of the AI Act is essentially to (i) forbid AI systems bearing unacceptable risks, (ii) permit AI systems bearing high risks on condition that certain safeguards are applied (e.g. conformity assessments, risk management systems, technical documentation, record-keeping obligations, transparency and information to users, human oversight, accuracy, robustness and cybersecurity, quality management systems, reporting of serious incidents and malfunctions, quality criteria for training, validation and test data sets) and (iii) generally permit AI systems bearing limited or no risk. Needless to say, the qualification of unacceptable-risk AI and high-risk AI has the potential to lead to endless discussions. It is for this reason that the AI Act attempts to provide some clarity based on the following pillars:
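For readers who think in code, the tiered logic described above can be sketched informally as a simple lookup. The tier names and consequence strings below are illustrative labels drawn from this article, not the Act's official taxonomy, and the sketch is of course no substitute for a legal assessment:

```python
# Illustrative sketch of the AI Act's risk-based approach (not legal advice):
# each risk tier maps to the regulatory consequence described in the article.

RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring)",
    "high": (
        "permitted only with safeguards (conformity assessment, "
        "risk management, documentation, human oversight, ...)"
    ),
    "limited_or_none": "generally permissible (e.g. a spam filter)",
}

def regulatory_consequence(risk_tier: str) -> str:
    """Return the regulatory consequence for a given (illustrative) risk tier."""
    try:
        return RISK_TIERS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")

if __name__ == "__main__":
    print(regulatory_consequence("unacceptable"))  # prohibited (e.g. social scoring)
```

The hard legal work, as the article notes, lies entirely in deciding which tier a given system falls into, not in applying the consequence once the tier is known.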

Definition of AI systems

The responsible members of parliament have agreed on a definition which is harmonised with the definition to be used by the OECD:

Art. 3(1) AI Act: "'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying degrees of autonomy and that can generate outputs such as predictions, recommendations or decisions that affect physical or virtual environments for explicit or implicit purposes."

For an AI system to fall within the scope of the AI Act, the system must therefore have a certain degree of autonomy, i.e. an independence from the human operator or human influence.

High-risk AI systems: additional criteria for categorisation

A comprehensive list of high-risk AI systems is set out in Annex III of the AI Act. The legislator has now added an additional material condition: a high-risk AI system should only be deemed critical if it also poses a significant risk to health, safety or fundamental rights. Such a risk may arise due to the severity, intensity and probability of violations and the duration of their effects for an individual, a large number of people or a specific group of people. Should an AI system fall under Annex III while the AI provider is of the opinion that there is no significant risk, the provider must notify the competent supervisory authority, which has three months to raise objections. Systems can be introduced into the market before the expiry of these three months, but if the assessment proves incorrect, the provider can be sanctioned.

Prohibited AI systems: extended list

The use of software for biometric identification is generally forbidden. According to the current text of the statute, such recognition software may only be used in the case of serious criminal offences and with prior judicial approval. The use of AI-supported emotion recognition software in the areas of law enforcement, border management, the workplace and education is also prohibited. Furthermore, "intentionally manipulative or deceptive techniques" are prohibited. This prohibition does not apply to AI systems used for authorised therapeutic purposes on the basis of informed and explicit consent. Finally, the prohibition on predictive policing (algorithms to forecast crimes or the risk of relapse of criminal offenders) has been extended from criminal offences to misdemeanours as well.

Stricter rules for foundation models and general purpose AI

The rise of ChatGPT and other generative AI systems has prompted the legislator to also regulate "General Purpose AI systems" (GPAI) and "Foundation Models", i.e. AI systems designed without a specific purpose in mind (purposes can vary depending on their use).

The current compromise text does not categorise GPAI as high-risk per se. Only if providers integrate GPAI into AI systems which are themselves considered high-risk do the strict requirements of the high-risk category also apply to GPAI. In this case, GPAI providers should also support downstream AI system providers in complying with the regulations by providing information and documentation about the respective AI model.

Stricter requirements are also suggested for foundation models, such as risk management, quality management, data management, security and cybersecurity as well as the degree of robustness of a foundation model. The AI Act governs the obligations of providers of foundation models regardless of whether the model is provided as a stand-alone model or embedded in an AI system, under free and open-source licences, by deployment on premise, as a service or via other distribution channels. In addition to a number of detailed transparency obligations, providers of foundation models are also obliged to provide a "sufficiently detailed" summary of the use of copyright-protected training data. It is not yet clear how this should be implemented, since systems such as OpenAI's ChatGPT were trained on a data set of more than 570 GB of text data.

Establishment of an AI office

The two EU parliamentary committees agreed that the enforcement architecture of the AI Act should ideally include a central element to support a harmonised application of the AI Act. For this reason, the establishment of an EU AI Office was proposed. The tasks of this office are explained in further detail in the AI Act.

Six AI principles

Finally, the AI Act contains so-called "General Principles applicable to all AI systems". All actors covered by the AI Act should develop and use AI systems and foundation models in accordance with the following six "AI principles":

  • Human agency and oversight: AI systems should serve humans, respect human dignity and personal autonomy, and function in such a way that they can be controlled and monitored by humans. 
  • Technical robustness and safety: Unintended and unexpected damage should be minimised, and AI systems should be robust against unintended problems. 
  • Data protection and data governance: AI systems should be developed and used in accordance with data protection regulations. 
  • Transparency: Traceability and explainability must be possible and people must be made aware that they are interacting with an AI system. 
  • Diversity, non-discrimination and fairness: AI systems should include different stakeholders and promote equal access, gender equality and cultural diversity, and conversely avoid discriminatory effects. 
  • Social and environmental well-being: AI systems should be sustainable and environmentally friendly and be developed and used for the benefit of all people.

Our experts at CMS Switzerland are happy to guide you through the AI Act and its implications for your business in Switzerland in further detail.

Authors

Dr Dirk Spacek, LL.M.
Partner
Co-Head of the practice groups TMC and IP
Zurich
