🔥 Join the Call: NO GPAI CODE WITHOUT RIGHT TO INFORMATION & TRANSPARENCY!

TO ADD YOUR NAME TO THE LIST OF SIGNATORIES, PLEASE CLICK HERE.

Friday, July 4th, 2025

Dear Vice-President Virkkunen, dear Commissioner McGrath, dear Mr. Roberto Viola, dear Ms. Sioli, dear Mr. Gross, dear Chairs and Vice-Chairs of the EU General Purpose AI Code of Practice,

We, the undersigned, thank you for yesterday’s presentation of the key changes to the EU General Purpose AI Code of Practice during the online Final Summary Plenary. One change requires our immediate response: the deletion of the public transparency provisions, which you indicated lack a legal basis in the AI Act.

We believe, however, that there is a clear legal basis for public transparency of risk management documentation, both within the AI Act itself and, more broadly, in the Union’s founding Treaties, such as Article 169 of the Treaty on the Functioning of the EU, which highlights a “right to information.” We therefore ask you to revert, on this particular topic, to the previous version of the Code of Practice, whether draft 2 (Commitment 21) or draft 3 (Commitment II.16).

First, Article 1 of the AI Act makes clear that “The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter”.

Transparency is a necessary condition for trustworthy AI: it allows affected persons to make informed decisions and to benefit from their right to an effective remedy, as ruled by the Court of Justice of the European Union. This is also why public transparency is a key principle set out in the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG) and in the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)) (2021/C 404/04), both expressly referenced in the AI Act.

Second, public transparency is an essential component of AI literacy. Article 95(2) of the EU AI Act explicitly states that codes shall include elements such as “(a) applicable elements provided for in Union ethical guidelines for trustworthy AI”, which clearly include public transparency, and “(c) promoting AI literacy”. In this regard, Article 3(56) clarifies that “‘AI literacy’ means skills, knowledge and understanding that allow (...) affected persons, taking into account their respective rights and obligations in the context of this Regulation (...) to gain awareness about the opportunities and risks of AI and possible harm it can cause”. Moreover, to avoid any misunderstanding or misinterpretation, recital 20 of the AI Act specifies that AI literacy should equip affected persons with “the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and correct enforcement.”

Third, Article 56(2)(d) states that “The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following issues: (d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks”.

The right to public transparency of the Safety and Security Frameworks (SSFs) and Model Reports in particular is strongly supported by the Act’s text and its proportionality logic. The proportionality principle both underscores the necessity of public transparency and ensures that the obligations imposed on model providers remain balanced and not unduly burdensome:

1) Appropriateness - Public disclosure of the SSFs and Model Reports effectively improves the identification, assessment and management of one or more of the systemic risks that the AI Act seeks to address.

SSFs and Model Reports provide the necessary evidence of safety with regard to systemic risks. These risks, by definition, are characterised by a significant likelihood of exposing third parties, including EU citizens, to unconsented severe risks, including to their physical integrity. Knowledge of such exposure enables society to mitigate residual risks by identifying flaws in risk assessment and management processes as early as possible, developing mitigations, and preparing and deploying defensive infrastructure where necessary. As such, public transparency regarding the measures that organisations are taking to limit the likelihood and severity of threats to EU citizens is highly appropriate.

2) Necessity - There is no less restrictive yet equally effective means to achieve the intended legal objective. In other words, there is no viable alternative that imposes a lower economic, operational, or privacy burden while still effectively managing systemic risks.

Public transparency cannot be replaced as a mechanism for informing citizens of the unconsented third-party risk to which they are exposed as a result of companies’ AI deployment activities, and of the measures put in place to mitigate it. The most minimalist implementation that would make such information available to those who want it is disclosure upon request, as opposed to disclosure by default. In terms of consequences for the company, this would be similar to public transparency, since EU citizens retain the right to share publicly any concerns they may have.

3) No Manifest Imbalance between the Costs and Benefits of the Measure - The benefits associated with public disclosure evidently outweigh the costs.

Given that companies are already assembling SSFs and Model Reports sufficient to demonstrate their compliance, the difference in cost between sharing these with the AI Office and making them publicly available, perhaps merely upon request, is minor. Public disclosure does not require producing any additional information and therefore does not add a significant burden. On the other hand, the benefits are extremely high: EU citizens become aware of, and able to react to, the unconsented third-party risk to which a given company’s AI deployment exposes them, and can judge whether the mitigations implemented by that company are proportionate to the magnitude of the expected harm.

We look forward to working with you towards a solution on this important matter. Thank you for your consideration.

Best regards,


Organisations in alphabetical order:

Centre for AI & Digital Humanism

European Writers’ Council

Le Centre pour la Sécurité de l'IA (CeSIA)

Pour Demain

SaferAI

The Future Society

Experts:

Dr. Karine Caunes, Research Associate, Lyon 3 University, Editor-in-Chief, European Law Journal

Dr. Giulia Gentile, University of Essex

Dr. Marta Bieńkiewicz

Dr. Nada Madkour, Senior AI Standards Development Researcher and Non-Resident Research Fellow, UC Berkeley
