What You Need to Know About the New AI Act


Introduction

On August 1, 2024, the world’s first comprehensive regulation on artificial intelligence (AI), known as the AI Act, entered into force in the European Union. This historic legislation is set to regulate AI technologies, ensuring they are developed and used responsibly while encouraging innovation. Here’s everything you need to know about the new AI Act, its implications, and how it affects both businesses and individuals.

 

Understanding the AI Act 

The AI Act marks a significant milestone in managing the rapidly evolving landscape of AI technology. Proposed by the European Commission in April 2021 and adopted by the European Parliament and the Council in March and May 2024, respectively, this Act introduces a structured approach to regulating AI systems based on the level of risk they pose. The primary aim is to protect public safety, uphold fundamental rights, and preserve the environment, all while promoting a balanced and innovative digital economy. 

 

Main Goals of the AI Act 

The AI Act focuses on three key objectives: 


  1. Ensuring Safety and Transparency: The Act requires that AI systems used in the EU be safe, transparent, traceable, and non-discriminatory. 

  2. Protecting Fundamental Rights: It addresses concerns related to health, safety, and individual rights by setting clear obligations for AI developers and users. 

  3. Promoting Innovation: The Act supports the growth of AI technologies by providing a clear regulatory framework that encourages innovation while safeguarding public interests. 

A Risk-Based Approach 

The AI Act classifies AI systems into different risk levels, each subject to varying degrees of regulatory requirements: 


Unacceptable Risk:

AI systems posing unacceptable risks are considered threats to fundamental rights and are banned. These include: 

  1. Cognitive Behavioral Manipulation: AI systems that manipulate behavior, such as voice-activated toys encouraging unsafe behavior in children. 

  2. Social Scoring: Systems that classify individuals based on behavior or socio-economic status. 

  3. Biometric Identification: Real-time remote biometric identification systems, such as facial recognition, used in publicly accessible spaces. 


Exceptions may apply for law enforcement, where real-time biometric systems can be used in limited cases with court approval. 


High Risk:

High-risk AI systems are those that impact safety or fundamental rights and are divided into two main categories: 

  1. Products Under EU Safety Legislation: This includes AI systems in critical sectors like aviation, medical devices, and automobiles. 

  2. Specific High-Risk Areas: AI systems used in critical infrastructure, education, employment, public services, law enforcement, and migration must be registered and comply with strict requirements. 


These systems must be assessed before entering the market and throughout their lifecycle. National authorities will oversee compliance and handle complaints from individuals. 


Minimal Risk:

Many AI systems, such as spam filters or basic video games, fall into this category and face minimal regulatory requirements. Companies can voluntarily adopt additional codes of conduct for these systems. 
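To make these tiers concrete for teams triaging their own AI use cases, here is a minimal classification sketch. It is purely illustrative: the tier names follow the Act’s categories as summarized above, but the example use cases and the classify_use_case helper are hypothetical assumptions, not terminology or mappings taken from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as summarized in the AI Act (illustrative labels only)."""
    UNACCEPTABLE = "banned"              # e.g. social scoring, manipulative systems
    HIGH = "strict obligations"          # e.g. critical infrastructure, employment
    MINIMAL = "few or no obligations"    # e.g. spam filters, basic video games

# Hypothetical mapping of example use cases to tiers, for internal triage only.
# A real assessment must follow the Act's annexes and qualified legal advice.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "medical_device_diagnostics": RiskTier.HIGH,
    "email_spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known use case, defaulting to HIGH
    so that unknown cases trigger a manual review rather than slipping through."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("email_spam_filter", "recruitment_screening", "chatbot_support"):
        print(f"{case}: {classify_use_case(case).value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice, so anything unfamiliar is flagged for manual legal review rather than waved through.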


Transparency Matters 

Generative AI models like ChatGPT are not classified as high-risk but must comply with transparency requirements: 


Disclosure: AI-generated content must be clearly identified as such. 

Content Management: Models must be designed to avoid generating illegal content. 

Data Summaries: Summaries of copyrighted data used for training must be published. 
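One way a provider might approach the disclosure requirement above is to attach a machine-readable label to every piece of generated output. The sketch below is a hypothetical illustration: the GeneratedContentLabel fields and the label_generated_content helper are assumptions made for the example, not field names or wording prescribed by the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class GeneratedContentLabel:
    """Hypothetical machine-readable disclosure attached to AI-generated output."""
    ai_generated: bool
    model_name: str
    generated_at: str
    disclosure_text: str

def label_generated_content(text: str, model_name: str) -> dict:
    """Wrap model output with an explicit 'AI-generated' disclosure.
    Field names and wording are illustrative, not prescribed by the AI Act."""
    label = GeneratedContentLabel(
        ai_generated=True,
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
        disclosure_text="This content was generated by an AI system.",
    )
    return {"content": text, "label": asdict(label)}

if __name__ == "__main__":
    wrapped = label_generated_content("Example model output.", model_name="example-gpt")
    print(json.dumps(wrapped, indent=2))
```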

 

High-impact general-purpose AI models, such as advanced versions of GPT, will undergo thorough evaluations. Serious incidents involving these models must be reported to the European Commission. 

 

Encouraging Innovation 

The AI Act emphasizes the importance of fostering innovation. It mandates that national authorities provide a testing environment that simulates real-world conditions for start-ups and SMEs. This approach ensures that new AI models can be developed and tested effectively before their public release. 

 

Implementation Timeline 

The AI Act will become fully applicable 24 months after its entry into force. However, several provisions will come into effect earlier: 


Unacceptable Risk AI Systems: The ban on these systems will be enforced six months after the Act’s entry into force. 

Codes of Practice: Guidelines for AI practices will be implemented nine months after the Act takes effect. 

Transparency Rules for General-Purpose AI: These will apply 12 months post-entry into force. 

 

High-risk systems embedded in products covered by existing EU safety legislation will have a longer compliance period, with obligations becoming applicable 36 months after the Act’s entry into force. 

 

Impact on Businesses and Individuals 

For businesses, especially those involved in developing or deploying AI systems, the AI Act introduces several compliance requirements: 


Documentation and Reporting: Businesses must maintain detailed records and submit reports to ensure transparency and compliance. 

Risk Assessment: Companies must conduct thorough risk assessments for high-risk AI systems and implement mitigation strategies. 

Training and Adaptation: Organizations may need to invest in training and adapt their processes to meet new regulatory standards. 

 

Individuals will benefit from enhanced protection and transparency. The Act aims to ensure that AI systems do not infringe on personal rights or safety. By establishing clear guidelines and oversight mechanisms, the AI Act seeks to build public trust in AI technologies. 

 

Conclusion 

The AI Act represents a pivotal moment in the regulation of artificial intelligence, setting a global standard for AI governance. By focusing on risk-based regulation, transparency, and innovation, the Act aims to balance the benefits of AI with the need for robust safeguards. As the Act comes into full effect over the next few years, businesses and individuals will need to stay informed and prepared to navigate this evolving regulatory landscape. 
