Navigating the AI Act: Balancing Innovation and Compliance

On 8 December 2023, the European Parliament and the Council reached a political agreement on the groundbreaking Artificial Intelligence Act (AI Act). This regulation aims to be a step towards safeguarding fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, all while fostering research and innovation.


The AI Act, proposed by the European Commission in 2021, stands as the world's first comprehensive AI law. It establishes obligations for AI systems based on the level of risk they may pose. The outcome of the trilogue includes a ban on certain uses of AI that raise ethical concerns, such as biometric categorization, untargeted scraping of facial images, emotion recognition, social scoring, manipulation of behavior, and exploitation of vulnerabilities based on personal characteristics.

MedTech Europe expressed concerns about potential negative impacts on medical devices. It pointed to the additional compliance procedures required for AI-based medical devices, which could delay certifications and create a risk of device shortages.

The agreed text must still be formally adopted by both the European Parliament and the Council before it becomes law. The next meeting of the Parliament's Committee on the Internal Market and Consumer Protection (IMCO) is scheduled for 24-25 January 2024.

Obligations for high-risk AI systems include a mandatory fundamental rights impact assessment, extending to sectors such as insurance and banking. Citizens will have the right to lodge complaints about high-risk AI systems and to receive explanations of decisions based on such systems that affect their rights.

General-purpose AI (GPAI) systems and the models they are based on must meet transparency requirements, including technical documentation, compliance with EU copyright law, and publication of detailed summaries of the content used for training. High-impact GPAI models with systemic risk face stricter obligations: model evaluations, systemic risk assessments, adversarial testing, reporting of serious incidents to the Commission, cybersecurity safeguards, reporting on energy efficiency, and adherence to codes of practice until harmonized EU standards are established. Non-compliance can result in fines from 7.5 million euro or 1.5% of global turnover up to 35 million euro or 7% of global turnover, depending on the infringement and the size of the company.

Further information can be found here.


Date: 20/12/2023