European Commission - Directorate General for Communications Networks, Content and Technology

04/27/2026 | News release

Third GPAI Signatory Taskforce meeting – Safety and Security chapter

The March 27 meeting of the Signatory Taskforce for the General-Purpose AI (GPAI) Code of Practice focused on two topics under the Safety and Security Chapter: aggregate forecasts of risk tiers and harmful manipulation.

Aggregate forecasts (Measure 1.1(2)(c) of the Safety and Security Chapter) require signatories that are providers of models with systemic risk to include, in their frameworks, estimates of the timelines on which they reasonably foresee that their models will exceed the highest systemic risk tier already reached by any of their existing models. The relevant provision further states that such estimates 'may take into account aggregate forecasts, surveys, and other estimates produced with other providers'.

Measure 3.1 of the Safety and Security Chapter mentions 'forecasting of general trends (e.g. forecasts concerning the development of algorithmic efficiency, compute use, data availability, and energy use)' as an example of a method to gather model-independent information for the risk assessment. This creates a concrete opportunity for structured forecasting exercises conducted across providers of GPAI models with systemic risk using a standardised framework.
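To make the 'general trends' idea tangible, a minimal sketch of a model-independent trend forecast might fit a log-linear growth curve to historical training-compute figures and extrapolate to a threshold. All numbers, thresholds, and variable names below are hypothetical placeholders for illustration, not figures from the Code or the meeting.

```python
import numpy as np

# Hypothetical historical estimates of frontier training compute (FLOP).
# Placeholder values for illustration only.
years = np.array([2020, 2021, 2022, 2023, 2024, 2025])
train_flop = np.array([3e23, 1e24, 5e24, 2e25, 8e25, 3e26])

# Fit a log-linear trend: log10(FLOP) = a * year + b.
a, b = np.polyfit(years, np.log10(train_flop), deg=1)

# Extrapolate to estimate when the trend crosses a hypothetical
# risk-tier-relevant compute threshold.
threshold = 1e28  # placeholder, not a threshold from the Code
crossing_year = (np.log10(threshold) - b) / a

print(f"Fitted growth: ~{10**a:.1f}x per year")
print(f"Trend crosses 1e28 FLOP around {crossing_year:.1f}")
```

The same pattern would apply to the other trend variables mentioned (algorithmic efficiency, data availability, energy use), with the caveat that simple extrapolation is only one of many possible forecasting methods.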

To facilitate the discussion, the Forecasting Research Institute presented an introduction to the topic of forecasting. The Taskforce went on to discuss possible formats such an exercise could take. For instance, the signatories that are providers of GPAI models with systemic risk could individually answer a set of questions related to risk forecasts for the specified systemic risks (Appendix 1.4) twice a year. These individual forecasts could then be aggregated and anonymised to provide an industry-wide estimate, along the lines sketched below. Signatories raised questions, such as the appropriate cadence for such an exercise (with suggestions ranging from semi-annual to annual) and the implications for compliance. Taking all views into account, the AI Office will set out a concrete approach to aggregate forecasting and respond to the remaining open questions.
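Purely as a sketch of the aggregation step (the question format, cadence, and aggregation rule all remain open points for the AI Office to settle): each provider submits a probability forecast per question, submitter identities are dropped, and a robust summary such as the median is reported. Provider names and values below are invented.

```python
import statistics

# Hypothetical individual forecasts: probability (0-1) that a given
# systemic risk tier is exceeded within the stated horizon.
forecasts = {
    "provider_a": 0.10,
    "provider_b": 0.25,
    "provider_c": 0.15,
    "provider_d": 0.30,
}

# Anonymise: keep only the values, discarding who submitted what.
values = sorted(forecasts.values())

# Aggregate with robust summaries; the median limits the influence
# of any single outlying forecast.
q1, q2, q3 = statistics.quantiles(values, n=4)
aggregate = {
    "n_respondents": len(values),
    "median": q2,
    "interquartile_range": (q1, q3),
}
print(aggregate)
```

Publishing only the count, median, and spread is one simple way to give an industry-wide signal without revealing any individual provider's forecast.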

Concerning harmful manipulation - a specified systemic risk in Appendix 1.4(4) of the Safety and Security Chapter - the Taskforce discussed approaches to concretising relevant risk scenarios. Establishing risk scenarios for each identified systemic risk is central to the risk assessment (Measures 2.2 and 3.3 of the Safety and Security Chapter).

Performing model evaluations that are sufficiently informative for the risk assessment requires that the measured model properties be relevant to a pathway to harm that is, in turn, relevant to the systemic risk in question. For model evaluations to be sufficiently specific to the risk of harmful manipulation, they should also be targeted at relevant risk scenarios. This means that the evaluation setting (e.g. system integration and user assumptions) should reflect the conditions of such risk scenarios.
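To picture what targeting the evaluation setting could mean in practice, here is a minimal sketch in which a manipulation evaluation is parameterised along the dimensions the paragraph names. The field names and values are illustrative assumptions, not terms defined in the Chapter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationSetting:
    """Illustrative parameters tying a manipulation evaluation
    to the conditions of a specific risk scenario."""
    system_integration: str   # e.g. "chatbot", "third_party_app", "agent"
    user_assumption: str      # e.g. "vulnerable_user", "expert_user"
    interaction_turns: int    # length of the simulated conversation
    tools_enabled: bool       # whether the model can act via tools

# A hypothetical setting for a long-horizon chatbot manipulation scenario.
setting = EvaluationSetting(
    system_integration="chatbot",
    user_assumption="vulnerable_user",
    interaction_turns=50,
    tools_enabled=False,
)
```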

To kick-start signatories' discussion, Transluce presented an introduction, based on recent stakeholder input, on how such risk scenarios could be approached. For example, risk scenarios could be categorised according to the context of exposure, reflecting whether the user is interacting with a GPAI chatbot, a third-party application (such as a financial service), an agent, or disseminated AI-generated content, or whether the model interacts with an evaluator directly.
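The exposure-context categories from the presentation could, for instance, be encoded as an enumeration that evaluation suites key off, so that each scenario is evaluated under matching conditions. This is one possible encoding, not a taxonomy from the Code, and the scenario names are invented examples.

```python
from enum import Enum

class ExposureContext(Enum):
    """Contexts in which a user (or evaluator) is exposed to model
    behaviour, following the categories discussed above."""
    GPAI_CHATBOT = "gpai_chatbot"
    THIRD_PARTY_APPLICATION = "third_party_application"  # e.g. financial services
    AGENT = "agent"
    DISSEMINATED_AI_CONTENT = "disseminated_ai_content"
    DIRECT_EVALUATOR_INTERACTION = "direct_evaluator_interaction"

# Risk scenarios could then be tagged with a context, so that each
# evaluation runs under the matching exposure conditions.
scenario_contexts = {
    "long_term_companion_persuasion": ExposureContext.GPAI_CHATBOT,
    "automated_financial_advice": ExposureContext.THIRD_PARTY_APPLICATION,
}
```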

The AI Office thanked the participants for sharing their input on the implementation of these measures and invited them to propose topics of interest to be considered in the preparation of the next meeting of the Signatory Taskforce.
