On May 11, 2023, the IMCO (Internal Market and Consumer Protection) and LIBE (Civil Liberties, Justice and Home Affairs) committees of the European Parliament made a monumental decision to put people first in the AI Act. This decision comes at a critical moment in the global regulation of AI systems and is a massive victory for our fundamental rights. For many years, the European Digital Rights (EDRi) network and its partners have been advocating for a people-first, fundamental rights-based approach to the development and regulation of AI.
The Parliament’s vote sends a powerful message to governments and AI developers around the world. The committees have banned several uses of AI systems that pose significant threats to fundamental rights, including predictive policing systems, emotion recognition and biometric categorisation systems, and biometric identification in public spaces. These systems perpetuate systematic discrimination against already marginalised groups, including racial minorities, and turn public spaces into places of suspicion and suppression of our democratic rights and freedoms.
Banning Biometric ID in Public Places
This victory is a significant step towards protecting people in the EU from biometric mass surveillance (BMS) practices. The ban covers all real-time and most retrospective ("post") remote biometric identification (RBI) in public spaces, discriminatory biometric categorisation, and emotion recognition in unacceptably risky sectors. It is a historic step to protect people in the EU from many BMS practices by both state and private actors.
Despite these successes, there are still concerns about the definition of “high risk” AI and AI in migration. The proposed changes to the risk classification process in Article 6 of the AI Act provide a large loophole for AI developers to argue that they should not be subject to legislative requirements. This loophole favours industry actors over people’s rights and risks undermining the EU AI Act.
Furthermore, the European Parliament has not taken sufficient steps to protect the rights of migrants from discriminatory surveillance. The MEPs failed to include in the list of prohibited practices the use of AI to facilitate illegal pushbacks or to profile people in a discriminatory manner. Without these prohibitions, the European Parliament is paving the way for a panopticon at the EU border.
The European Parliament has also taken significant steps to empower people affected by the use of AI systems. The Parliament has demanded that all actors rolling out high-risk AI perform a fundamental rights impact assessment before using such systems. However, the Parliament only requires public authorities and large companies to publish the results of these assessments, meaning less public information will be available when other companies deploy high-risk systems.
Large Language Models Included
In addition, transparency requirements have been added for "foundation models", the large language models sitting behind systems like ChatGPT, including an obligation to disclose the computing power they require. Significant steps have also been taken to provide notifications and explanations to people affected by AI-based decisions or outcomes, and to provide remedies when rights have been violated.
A plenary vote with all MEPs is expected to take place in June, which will finalise the Parliament’s position on the AI Act. After that, we will enter a period of inter-institutional negotiations with the Member States before this regulation can be passed and become EU law.
The European Parliament’s decision to put people first in the AI Act is a historic step towards protecting fundamental rights in the development and regulation of AI. The Parliament’s ban on certain uses of AI systems is a significant victory for the fight against practices that violate our privacy and dignity. However, there are still concerns about the definition of “high-risk” AI and the protection of migrants’ rights. As we move forward, it is crucial that we continue advocating for a people-first, fundamental rights-based approach to AI development and regulation.