
Guest post: It’s Time to Start Preparing Your Company for the European Union’s AI Act

By Maryam Salehijam, PhD

On March 13, 2024, the European Parliament voted to approve the EU’s Artificial Intelligence Act, known as the AI Act. The Council of the EU gave its final approval on 21 May 2024, and the Act will “enter into force” 20 days after its publication in the EU Official Journal.

As the European Commission explained, the AI Act aims to establish a “comprehensive legal framework on AI worldwide” for “foster[ing] trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.”

Risks from AI to health and safety include poor outcomes from algorithms (due to bias, low-quality data, and lack of transparency) and leaks of personal data (due to breaches of privacy and security in data collection or in the execution of AI algorithms). Risks from AI to democracy include the potential to mislead voters by spreading disinformation and by casting doubt on the validity of the information people encounter every day.

When the EU AI Act takes effect, it will apply to any company that develops, markets or sells AI systems in the EU and to any company that uses AI systems in the EU. In other words, much as with the EU’s General Data Protection Regulation (GDPR), companies around the world will be obligated to comply with the EU AI Act if they develop, provide, market or use an AI system in an EU Member State.


Is your company obligated to comply with the EU AI Act? What steps must your company take to get into compliance? How quickly must it act? What are the ramifications if your company doesn’t act?


If you are asking these questions, this article is for you.


EU AI Act Basics


The EU AI Act defines an artificial intelligence system as “a machine-based system that […] infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments.” The Act takes a risk-based approach, classifying AI systems according to the level of risk they pose. There are four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.

Unacceptable risk

AI systems that pose an unacceptable risk are prohibited under the AI Act. This category covers AI systems that pose a clear threat to the safety, livelihoods and rights of people, “from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.”

This includes AI systems that:

  • use manipulative or deceptive techniques to influence a person to make a decision that they otherwise would not have made in a way that causes or may cause that person or other persons significant harm;
  • target and exploit a person’s vulnerabilities arising from their age, disability or specific social/economic situation in order to influence the behaviour of that person in a way that causes or may cause significant harm to that person or other persons;
  • use biometric data to categorize individuals based on their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; or
  • create or expand facial recognition databases through untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage.

An AI system that falls into one of these categories of unacceptable risk will be banned in the EU six months after the Act “enters into force.”

High risk

The broadest category defined by the EU AI Act covers AI systems that pose a high risk of harm to individuals. Systems classified as high risk include AI technology used in critical infrastructure, education and vocational training, employment, essential private and public services (e.g., healthcare, banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g. elections).

For a full list of AI systems that the EU defines as high risk, see Article 6 of, and Annex III to, the Act. Bear in mind, however, that under the AI Act, the European Commission and the EU AI Office are to develop practical guidance on how to determine whether an AI system your company develops, markets or uses is high risk. That guidance is expected no later than 18 months after the AI Act enters into force.


High-risk AI systems will be subject to strict obligations before they can be marketed and sold. These include procedures to assess risk and mitigate harm; high-quality datasets feeding the system to minimize risks and discriminatory outcomes; detailed record keeping and documentation; clear directions to companies on how to safely deploy the system; human oversight measures to minimize risk; and a high level of accuracy and security.

Limited risk

AI systems that pose neither unacceptable risks nor high risks are treated as limited risk and subject to transparency obligations. These limited-risk systems are likely to include:

  • AI-powered Chatbots used for customer service: The Act requires that users be informed they’re interacting with a machine and have the option to speak with a human (a minimal sketch of this disclosure follows this list).
  • AI-powered Recommendations: Recommendation systems on e-commerce platforms or streaming services that use AI to suggest products or content likely fall under limited risk.
  • AI-assisted Editing Tools: Software that uses AI to check grammar, plagiarism, or suggest stylistic improvements in writing would likely be considered limited risk.
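
For companies building such chatbots, the transparency obligation can be implemented quite directly. Below is a minimal Python sketch of one way to surface the required disclosure and a human hand-off; the names (ChatTurn, handle_turn, generate_reply) are illustrative assumptions, not terminology from the Act.

```python
# Minimal sketch of the AI Act's chatbot transparency obligation:
# tell users they are talking to a machine, and let them reach a human.
# All names here are illustrative, not drawn from the Act itself.
from dataclasses import dataclass

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "Type 'human' at any time to be connected to a person."
)

@dataclass
class ChatTurn:
    text: str
    is_first_turn: bool = False

def generate_reply(text: str) -> str:
    # Placeholder for the real model call in a production system.
    return "Thanks for your message. How else can I help?"

def handle_turn(turn: ChatTurn) -> str:
    """Show the disclosure on the first turn; honour human hand-off requests."""
    if turn.text.strip().lower() == "human":
        return "Connecting you to a human agent now."
    reply = generate_reply(turn.text)
    return f"{AI_DISCLOSURE}\n\n{reply}" if turn.is_first_turn else reply

print(handle_turn(ChatTurn("Where is my order?", is_first_turn=True)))
```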


Generative AI

What about generative AI?

Large generative AI models allow for flexible generation of content, such as text, audio, images or video, that can readily accommodate a wide range of distinctive tasks. As such, they are considered general-purpose AI systems.

Under the EU AI Act, generative AI will be categorized based on the level of risk it poses. Generative AI tools such as Google’s Gemini or OpenAI’s ChatGPT are likely to be placed in the limited-risk category.

However, all generative AI systems, regardless of categorization, must be designed to prevent the generation of illegal content such as hate speech. Users of generative AI must also disclose that content was generated by an AI system. For example, under the EU AI Act, a news article written by an AI system must be clearly marked as AI-generated and must not plagiarize another’s work or violate copyright law.
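
The Act mandates the disclosure but does not prescribe its form. A minimal sketch, assuming a simple publishing pipeline, is to attach both a human-readable notice and a machine-readable provenance record to each AI-generated article; the field names below are illustrative assumptions, not a mandated schema.

```python
# Sketch of labelling AI-generated articles with a visible notice plus
# a machine-readable provenance record. The schema is an assumption for
# illustration; the AI Act requires disclosure, not this exact format.
import json
from datetime import date

def label_ai_article(body: str, model_name: str) -> str:
    notice = "Notice: this article was generated by an AI system."
    provenance = {
        "ai_generated": True,
        "model": model_name,
        "published": date.today().isoformat(),
    }
    # Embed the provenance record as an HTML comment alongside the text.
    return f"{notice}\n\n{body}\n\n<!-- provenance: {json.dumps(provenance)} -->"

print(label_ai_article("Markets rallied on Tuesday...", "example-model-v1"))
```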

Penalties for noncompliance

A company doing business in the EU that develops, markets, or deploys an AI system that falls into the unacceptable-risk category faces fines of up to €35 million or 7 percent of its worldwide annual turnover, whichever is greater.

A company doing business in the EU that fails to comply with the obligations governing high-risk AI systems faces fines of up to €15 million or 3 percent of its worldwide annual turnover, whichever is greater. The same is true for providers of general-purpose AI models.
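
Because each tier is “whichever is greater,” large companies should reason from their turnover rather than the fixed amount. A quick worked example, using a hypothetical €2 billion annual turnover:

```python
# Worked example of the fine ceilings described above: the cap is the
# greater of the fixed amount and a percentage of annual turnover.
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical €2 billion annual turnover

print(fine_cap(turnover, 35_000_000, 0.07))  # unacceptable-risk tier -> 140,000,000.0
print(fine_cap(turnover, 15_000_000, 0.03))  # high-risk tier -> 60,000,000.0
```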

These penalties are modeled on, but exceed, the penalties authorized by the GDPR. And if history is any guide, EU regulators will not shy away from imposing large fines on companies that run afoul of the EU AI Act. The seven highest fines issued for GDPR violations each exceed €100 million, and the highest, levied against Meta, exceeds €1 billion.

How to prepare your company to comply with the EU AI Act

The legal obligations under the Act will be phased in.

The first trigger is the ban on AI systems that pose unacceptable risks. That ban will be in place six months after the AI Act “enters into force.”

For general-purpose AI models, including generative AI systems, the Act’s obligations will take effect one year after the Act “enters into force.”

The obligations imposed on high-risk AI systems specifically enumerated in the AI Act will go into effect two years after the Act “enters into force.” Those include AI systems used in biometrics, critical infrastructure, education, employment, access to public services, law enforcement, and administration of justice.

Companies developing, marketing, selling, or using other high-risk AI systems will need to comply with the Act no later than three years after it “enters into force.”

Despite the phase-in, don’t delay. Start now to figure out how your company will comply with the EU AI Act.

Does your company develop, market, sell, or use AI?

The first step to understanding whether and how your company must comply with the EU AI Act is to understand the kinds of AI systems your company develops, markets, sells, or uses. In other words, do your AI systems fall into the unacceptable-risk, high-risk, limited-risk, or minimal-risk category?

This may sound like a simple inquiry, but in practice it is not.

An experienced lawyer with extensive compliance and technology expertise is the ideal person to lead the effort. As a first step, that person could create a questionnaire to help identify the AI systems created or used by your company and how those systems function.

A publicly available EU AI Act compliance-checker website can assist in creating the questionnaire, but the questionnaire should be tailored to your company.

With a detailed questionnaire in hand, the experienced lawyer or compliance professional will need to do more than just touch base with the Chief Technology Officer or the Information Technology department. They need to talk to every business unit in the company and all internal departments, including human resources, finance, and customer service.

Once it is clear which AI systems your company creates, markets, sells, or uses and how they operate, company lawyers can perform a risk analysis of each system: does it fall into the unacceptable-risk, high-risk, limited-risk, or minimal-risk category?
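
One lightweight way to capture the output of that exercise is a structured inventory, one record per system, tagged with the Act’s four risk tiers. The sketch below is illustrative only; the field names and sample entries are assumptions, not requirements of the Act.

```python
# Illustrative AI-system inventory tagged with the AI Act's risk tiers.
# Field names and sample entries are assumptions made for this sketch.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    business_unit: str
    purpose: str
    company_role: str  # e.g. "developer", "deployer", "distributor"
    risk: RiskCategory

inventory = [
    AISystemRecord("resume-screener", "HR", "rank job applicants",
                   "deployer", RiskCategory.HIGH),
    AISystemRecord("support-bot", "Customer Service", "answer FAQs",
                   "deployer", RiskCategory.LIMITED),
]

for record in inventory:
    print(f"{record.name} ({record.business_unit}): {record.risk.value} risk")
```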

Put a governance structure in place

High-risk AI systems are subject to reporting and record-keeping requirements. To comply, your company will need a governance structure in place. Create an oversight body composed of lawyers and technical employees. Ask them to create a detailed compliance checklist. Make sure everyone in the company is informed of and understands the kinds of records to maintain.
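
What those records look like will vary by company, but the habit to build is simple: one timestamped, machine-readable entry per significant event in a high-risk system’s lifecycle. A minimal sketch, with assumed field names rather than a schema prescribed by the Act:

```python
# Minimal sketch of append-only record keeping for a high-risk system.
# The fields are illustrative assumptions, not the Act's required schema.
import json
from datetime import datetime, timezone

def log_ai_event(system: str, event: str, actor: str,
                 path: str = "ai_audit.log") -> None:
    """Append one timestamped JSON record per event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "actor": actor,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_event("resume-screener", "human_override_of_model_decision", "jane.doe")
```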

Seek outside legal help

If your company doesn’t have the expertise or the bandwidth to undertake this kind of investigatory and compliance process, bring in a team of experienced lawyers to help.

Maryam Salehijam, PhD, is an Enterprise Account Executive at Axiom Law. She holds a PhD in International Business Law, an LLB in European Law, and an LLM in International Laws. She has published 14 peer-reviewed works, including the book “Mediation and Commercial Contract Law,” published by Routledge, and is a contributing author of the new “Legal Operations in the Age of AI and Data,” published by Globe Law and Business.

Source: NYPOST
