09/16/2025 | Press release | Distributed by Public on 09/16/2025 10:04
Ladies and gentlemen,
Good morning from Frankfurt, or rather good afternoon to everyone present in Singapore. It's a pleasure to join you today, even remotely.
International conferences such as this one are a good opportunity for all of us to learn more about each other's perspectives and share insights on where our sector is heading.
The digital revolution is in full swing, with artificial intelligence pushing hard on the accelerator pedal. We have a clear vision of how we want this digital revolution to unfold. We want to make sure that we achieve a secure, sustainable and equitable version of new technologies and AI for our businesses and citizens. Our conviction is that this objective can only be achieved when innovation and regulation are in balance.
And in my speech today, I want to share with you how this balance can be achieved: where we see the benefits of AI, as well as the areas where careful supervision and caution are needed; the regulation and supervision that we have in place at European level, and, lastly, the initiatives taken at international level, within the IAIS.
[AI: benefits and risks]
Let me start with the great potential of AI.
We increasingly use wearable devices that measure our health, drive cars with GPS-tracking, live in homes with smart features and use social media to log every aspect of our lives. All of this generates data that can potentially be leveraged by insurers, among others. AI is the latest addition to this mix, and it has the potential to fundamentally shake up the way our businesses and markets operate.
It is no surprise that we are seeing an increasing adoption of this technology across the insurance value chain. The launch of ChatGPT in late 2022 and the consequent spread of large language models that respond remarkably well to human queries have supercharged the adoption of AI technology.
Our data shows that in Europe about half of non-life insurers and a quarter of life insurers were already leveraging AI throughout the value chain, and this trend is only going upwards.
I don't need to further underline the possibilities that artificial intelligence holds in store for us. Rather, I'm here to underline something equally important: namely, that unrestricted innovation can come at a cost - one that risks eroding trust.
EIOPA's most recent Consumer Trends Report highlights not only the benefits I just mentioned, but also several challenges related to the adoption of AI.
Some of these drawbacks are related to its limited consideration of consumers' specific circumstances, or excessive standardisation of settlement procedures. Regarding pricing and underwriting, while national authorities in Europe have reported benefits such as more precise segmentation and price optimisation, which can lower costs and increase insurability, they are also concerned that AI can propose higher premiums and reduce access to insurance for high-risk or vulnerable clients.
Data privacy, security and ethical use is another area of concern. The amount of data collected and shared with third parties is increasing. The more sensitive the data, the greater the risks that need to be managed. When we see that 24% of EU consumers do not trust insurers to collect and use their personal data in an ethical way, according to our recent Eurobarometer survey, we realise that the trust element is real and needs to be seriously looked at.
There is another risk to highlight, which links well with the theme of today's conference on enhancing inclusive insurance: namely that while the integration of AI in financial services drives valuable transformation, we must remain attentive to its implications for equality. I am thinking of the risk of algorithmic bias. When AI systems are trained on historical data that reflects decades of gender inequality, for instance, they can perpetuate and scale these biases. Without careful efforts to debias both data and algorithms, we risk automating patterns of discrimination from the past. We need to make sure that AI solutions are a force for good.
And finally, there is the concern regarding mutualisation: the idea that risks are more manageable when shared across a broad and diverse pool. This principle helps make coverage more affordable, especially for those who might otherwise be priced out due to higher risk profiles.
However, the rise of big data and advanced technology is creating pressure on this model. As insurers gain the ability to price risk with increasing precision, policies can become more personalised. While this can lead to greater efficiency and better-tailored products, it can also fragment the risk pool, offering lower premiums to low-risk individuals while potentially excluding those deemed higher risk. It is important to ensure that personalisation does not erode collective protection.
This is why EIOPA brought together a Consultative Expert Group that works on data use in insurance, to explore how data can be used to promote fairness, inclusion, and innovation while safeguarding the principle of mutualisation.
This means that rather than driving a wedge between people, technology should bring inclusiveness and economic opportunity. Rather than perpetuating biases, it should treat people fairly, regardless of their sexual orientation, race, religion, age or socioeconomic status. Rather than concentrating power and wealth in the hands of a few, AI should help bridge economic divides by fostering innovation that benefits small businesses and local communities. Rather than fuelling misinformation and eroding trust, AI should enhance transparency and democratise access to reliable information and just processes.
We can only get there if we ensure that the right set of rules are in place and that everyone plays by these rules.
This leads me to the second part of my intervention, which is regulation and supervision of AI.
[The global landscape]
Let me start by giving you the bigger picture.
What we see is that the governance of AI is rather fragmented and challenging to grasp at a global level.
The first challenge is geographical in nature. Much of today's AI infrastructure and many consumer-facing applications are concentrated in a few regions of the world. For Europe, and indeed for all jurisdictions, it is essential to have access to these systems: not only to understand how they function, given their complexity, but also to assess the risks they may pose. The reality is that while AI systems are not locally developed, they do need to be supervised locally, in line with the needs and responsibilities of each country.
To this we add a regulatory framework that is not harmonised when it comes to the usage of AI systems. This is inevitably creating challenges for companies that operate globally to navigate different jurisdictions, and for governments to effectively govern and oversee transnational AI systems.
International organisations like the United Nations, UNESCO, the OECD, and the G7 have issued principles and are working together to foster international cooperation and a shared understanding of AI risks. However, these efforts lack legal enforceability, and challenges remain in harmonising standards and definitions across borders.
So what we see is that despite divergent national and regional strategies, there is a growing recognition of the need for robust governance frameworks.
[AI ACT]
Speaking about robust frameworks, the EU is a global leader in AI regulation with its landmark AI Act, the first comprehensive legal framework on AI.
What does the AI Act bring to the table?
The AI Act is the world's first horizontal legislation that governs the development, introduction and use of AI systems in a standalone legal act. We say it is horizontal because it concerns all AI use cases regardless of whether these are deployed by financial undertakings or other institutions, be it aviation companies, hotels or car manufacturers, as well as public institutions such as law enforcement, the judiciary, or indeed EIOPA.
The AI Act introduces a risk-based approach to all AI applications across the economy, balancing innovation with trust. The goal? To create a human-centric environment where new technologies can thrive safely, responsibly, and with the confidence of businesses and consumers alike.
It defines four risk levels for AI systems: unacceptable, high, limited and minimal risk.
Systems with unacceptable risks are essentially prohibited.
For high-risk systems, the AI Act establishes robust risk management methods, such as high data quality standards as well as strong data governance and record-keeping practices. In the insurance sector, AI systems used for risk assessment and pricing in life and health insurance are deemed high-risk under the AI Act. Companies using high-risk AI systems need to retain human oversight, be able to meaningfully explain outcomes to users and inform their users upfront when they are subject to the use of high-risk systems. This should not come as "new" to insurers, as they were already regulated under Solvency II and the IDD, which set out the same expectations. Still, questions on definition and scope did come up.
Recent clarifications from the European Commission suggest that mathematical optimisation methods and traditional statistical models that insurers have been using for a long time may be excluded from the scope of the AI Act. This indicates that the AI Act's application may be more proportionate than originally anticipated.
As for the rest of AI systems in insurance that are not outright prohibited and do not constitute high risks, these continue to operate subject to existing sectoral legislation without new requirements, except that AI users must ensure AI literacy among their staff and inform customers when they are interacting with AI systems.
What I want to underline is the notion that even before the AI Act was adopted, the use of AI in insurance did not take place in an unregulated space. Due to its horizontal nature, the AI Act is to be applied in conjunction with existing sectoral legislation. For insurers, this means that the relevant provisions under Solvency II and IDD remain fully in force, including the requirements to act in the best interest of customers and to put in place an effective system of governance, which provides for a sound and prudent management of the business. The principle of proportionality, which is core to the European insurance legislative framework, also applies to the use of AI by insurance undertakings.
EIOPA has recently published an Opinion on AI governance and risk management highlighting these aspects.
With its differentiated and targeted approach, the AI Act sets the foundation for a responsible uptake of all AI in Europe. It creates a reliable environment with a light-touch approach for non-high-risk AI systems, and this is a boon for innovation. Combined with the resilience features of DORA, it lays the groundwork for a successful, responsible and ethical deployment of AI in Europe.
[Supervision of AI: EIOPA's opinion and IAIS's application paper on AI]
Now let me go back to supervision, which is the main topic of my intervention today.
Only one month ago, in August 2025, EIOPA published an opinion on AI governance and risk management to ensure a responsible use of AI systems in insurance. This opinion does not add regulation to what is already in place. Rather, it provides clarity to supervisors in the European Union on how to interpret the provisions set out in existing insurance-sector legislation in the context of AI. It provides clarity to the industry on what supervisors expect from insurance undertakings regarding the use of AI systems.
It covers considerations like data governance, record-keeping, fairness, cyber security, explainability and human oversight.
The opinion clarifies existing governance and risk management principles while remaining flexible to allow tailoring for the specific characteristics of different AI systems.
Most importantly, it follows a risk-based and proportionate approach, to balance the benefits and risks of AI systems thereby leaving room for innovation.
Now we must also acknowledge that digitalisation is a global phenomenon, and that our efforts to harness its potential and mitigate the risks must be coordinated across borders. International cooperation is essential if we are to ensure that our regulatory frameworks are consistent, effective, and responsive.
This is indeed what we are doing at the European level at EIOPA, and at international level within the IAIS, which I am proud to represent as Vice-Chair of the Executive Committee and Chair of its FinTech Committee.
In line with the work of other international standard-setting bodies, such as the OECD and the G20, the IAIS published its Application Paper on the supervision of artificial intelligence in July this year. The starting point of this paper was the conclusion that no change to the IAIS ICPs was needed to supervise AI. What was needed was guidance to supervisors on how to supervise in practice on the basis of the ICPs. The paper brings together a set of principles and requirements, such as data governance, explainability and human oversight, and explains how they could be applied in concrete AI use cases, considering the specificities of the insurance sector. And that makes it really relevant, for a lot is being written and said about AI, but here the focus is on AI in the context of insurance.
Just like EIOPA's opinion, the recommendations set out in the IAIS paper follow risk-based supervision and the proportionality principle.
Allow me to take a deeper look at this approach.
What does risk-based supervision entail? In practice it means that supervisory activities and resources need to be aligned with the level of risk that policyholders, the insurance sector or the financial system are exposed to. In the context of AI, we know that there are different types of AI systems, as well as use cases carrying different levels of risk. The example the IAIS gives in its paper speaks for itself: an AI system used for document retrieval poses less risk than a system used for determining claim payouts to policyholders. By taking into account these different levels of risk, the paper helps supervisors allocate more supervisory resources to higher-risk AI use cases, which pose greater market conduct and prudential risk.
Now let's look at proportionality. Based on this principle, supervisors should require insurers to put in place governance and risk management measures that are in line with the risk profile of the AI system they use. Indeed, higher-risk AI applications should be subject to more robust oversight and controls, whereas lower-risk systems may be subject to lighter measures. Supervisory expectations are both risk-sensitive and focused on outcomes.
By focusing on a risk-based and proportional approach, the IAIS Application Paper seeks to find the sweet spot between promoting innovation and minimising risk.
EIOPA's opinion and the IAIS Application Paper are the two most recent and important papers that give recommendations on how to supervise the use of AI in insurance based on existing legislation.
[Next steps]
As we expect a fast adoption of AI in the insurance sector, it will probably not end here. EIOPA plans to develop more detailed analyses of specific AI systems or emerging issues related to their use in insurance, and to provide guidance where appropriate. Similarly, the IAIS will continue to monitor developments and work on an internal toolkit for supervisors to benefit from. In other words, you will see the focus of our work move further into how to supervise in general, helping supervisors to organise themselves, as well as into specific AI systems for those supervisors who witness faster developments in their markets, and we will share these insights with each other.
A particularly dynamic area is Generative AI (GenAI), which introduces distinct challenges and opportunities. These systems, which can generate text, images or code, open new frontiers, from internal process automation to customer communication. But they also raise concerns about hallucinations, misuse and explainability.
To better understand emerging practices, EIOPA is currently conducting a survey on the adoption of GenAI, governance, and use cases in the insurance sector. Preliminary results suggest rapid uptake, especially in back-office functions such as document summarisation, internal tooling, and code assistance.
The use of GenAI is likely to expand, and building supervisory knowledge now will help ensure firms adopt appropriate safeguards from the start. EIOPA's ongoing work aims to foster a constructive dialogue between supervisors and industry, combining innovation with strong consumer protection and prudential safeguards. The Fintech Forum will also keep this in focus.
[CONCLUSION]
Ladies and gentlemen,
By working together on these issues and sharing our experiences, we can develop common standards and best practices, address common challenges, and create a more level playing field for insurers and reinsurers to operate in.
We are convinced that together we can unlock the full potential of digitalisation and create a safer, more innovative and more resilient insurance industry for the future, and that we can turn AI and the digital revolution into the successes they deserve to be.
Ladies and gentlemen, thank you very much for your attention.