11.11.2025 - (2025/2056(INI))
Committee on Economic and Monetary Affairs
Rapporteur: Arba Kokalari
on the impact of artificial intelligence on the financial sector
(2025/2056(INI))
The European Parliament,
- having regard to the Commission communication of 9 April 2025 entitled 'AI Continent Action Plan' (COM(2025)0165),
- having regard to the Commission communication of 24 September 2020 on a Digital Finance Strategy for the EU (COM(2020)0591),
- having regard to the report of 25 February 2025 by the European Securities and Markets Authority (ESMA) entitled 'Artificial intelligence in EU investment funds: adoption, strategies and portfolio exposures',
- having regard to the public statement of 30 May 2024 by ESMA on the use of Artificial Intelligence (AI) in the provision of retail investment services,
- having regard to the report of 10 February 2025 by the European Insurance and Occupational Pensions Authority (EIOPA) entitled 'Impact Assessment of EIOPA's Opinion on AI governance and risk management',
- having regard to the report of 30 April 2024 by EIOPA entitled 'Report on the digitalisation of the European insurance sector',
- having regard to the report of 29 November 2024 by the European Banking Authority (EBA) entitled 'Risk Assessment Questionnaire (RAQ) - Autumn 2024',
- having regard to the report of 4 August 2023 by the EBA entitled 'Machine learning for internal ratings-based models',
- having regard to the article of 26 February 2024 by Elizabeth McCaul, Member of the Supervisory Board of the European Central Bank (ECB), entitled 'From data to decisions: AI and supervision',
- having regard to the publication of 7 May 2024 by the ECB entitled 'The rise of artificial intelligence: benefits and risks for financial stability',
- having regard to the working paper of 15 December 2023 by the Organisation for Economic Co-operation and Development entitled 'Generative Artificial Intelligence in finance',
- having regard to the report of 12 December 2024 of the Bank for International Settlements entitled 'Regulating AI in the financial sector: recent developments and main challenges',
- having regard to the report of 13 June 2024 of the Bank for International Settlements entitled 'Intelligent financial system: how AI is transforming finance',
- having regard to the report of 19 December 2024 by the High-Level Panel of Experts to the G7 entitled 'Artificial Intelligence and Economic and Financial Policymaking',
- having regard to the report of 14 November 2024 by the Financial Stability Board entitled 'The Financial Stability Implications of Artificial Intelligence',
- having regard to Regulation (EU) 2024/1624 of the European Parliament and of the Council of 31 May 2024 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing[1],
- having regard to Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)[2],
- having regard to Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector and amending Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014, (EU) No 909/2014 and (EU) 2016/1011[3] (Digital Operational Resilience Act),
- having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)[4],
- having regard to Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution[5] (Insurance Distribution Directive),
- having regard to Regulation (EU) No 600/2014 of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Regulation (EU) No 648/2012[6] (Markets in Financial Instruments Regulation),
- having regard to Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Directive 2002/92/EC and Directive 2011/61/EU[7] (Markets in Financial Instruments Directive),
- having regard to Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and investment firms and amending Regulation (EU) No 648/2012[8] (Capital Requirements Regulation),
- having regard to Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC[9] (Capital Requirements Directive),
- having regard to Directive 2011/61/EU of the European Parliament and of the Council of 8 June 2011 on Alternative Investment Fund Managers and amending Directives 2003/41/EC and 2009/65/EC and Regulations (EC) No 1060/2009 and (EU) No 1095/2010[10] (Alternative Investment Fund Managers Directive),
- having regard to Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance (Solvency II)[11],
- having regard to Directive 2009/65/EC of the European Parliament and of the Council of 13 July 2009 on the coordination of laws, regulations and administrative provisions relating to undertakings for collective investment in transferable securities (UCITS)[12],
- having regard to Directive (EU) 2015/2366 of the European Parliament and of the Council of 25 November 2015 on payment services in the internal market, amending Directives 2002/65/EC, 2009/110/EC and 2013/36/EU and Regulation (EU) No 1093/2010, and repealing Directive 2007/64/EC[13] (Payment Services Directive),
- having regard to its resolution of 3 May 2022 on artificial intelligence in a digital age[14],
- having regard to Rule 55 of its Rules of Procedure,
- having regard to the report of the Committee on Economic and Monetary Affairs (A10-0225/2025),
A. whereas the EU Artificial Intelligence Act (AI Act) introduces the world's first comprehensive regulatory framework for artificial intelligence (AI);
B. whereas points 5(b) and (c) of Annex III to the AI Act define two high-risk use cases for the financial services sector, namely the use of AI systems for consumer credit scoring and creditworthiness assessments and their use for risk assessments and pricing of life and health insurance;
C. whereas an AI system is defined in the AI Act as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
D. whereas a general-purpose AI (GPAI) model is defined as an AI model, including those trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, but not including AI models that are used for research, development or prototyping activities before they are placed on the market;
E. whereas systems used to improve mathematical optimisation or to accelerate and approximate traditional, established optimisation methods, such as linear or logistic regression methods, fall outside the scope of the definition of an AI system;
State of AI adoption in financial services
1. Notes the broad and diverse adoption of AI across the EU financial services sector, with financial institutions, which have been using classical machine learning for an extended period, now gradually experimenting with the use of generative AI, including large language models (LLMs) and other foundation models, as a support tool; stresses that the majority of current AI use cases aim to streamline back-office processes, with most applications representing low-hanging fruit rather than high-risk innovation; notes, however, that the use of AI to evaluate the creditworthiness of natural persons or establish their credit score, currently defined as high-risk in the AI Act, is prevalent and increasing; stresses that the deployment of fully autonomous AI systems in the financial sector should have human oversight[15]; notes that financial institutions continue to explore use cases involving GPAI models, the greater complexity of which entails higher operational and compliance risk, but also notes that these applications largely remain in the testing phase;
2. Believes that AI is a major opportunity for EU financial institutions to develop more innovative products, streamline operations and improve competitiveness on a global scale; highlights that the use of AI in financial services has the potential to bring societal benefits, including more effective fraud detection and prevention, anti-money laundering checks and sanctions checks, customer support, transaction monitoring, personalised financial advice, environmental, social and governance data gathering, analysis and reporting, trading models and strategies, regulatory compliance assistance, market surveillance and abuse monitoring; considers that the use of AI in the financial sector should strike a balance between innovation and competitiveness on the one hand, and risk management, consumer protection and financial stability on the other hand; stresses that the benefits of AI use in financial services should be passed on primarily to end customers, for example through lower prices, better coverage, improved financial advice, greater financial inclusion and access, and enhanced financial literacy;
3. Notes that there are also risks from the use of AI in financial services; highlights that, prior to the recent breakthrough of LLMs, these risks stemmed from the quality, accuracy and representativeness of the data on which models were trained, as non-LLM AI outputs are only as reliable as the data inputs; underlines that poor data quality could lead, among other things, to discriminatory outcomes, mis-selling and reinforced systemic biases, while opaque and complex models could give rise to privacy breaches and the exclusion of vulnerable consumers through, for example, price discrimination, thereby exacerbating existing risks or creating new ones; stresses that LLMs introduce significant additional risks that can be hard to measure, including model hallucinations even where training data is of high quality; stresses that such risks and outcomes must be mitigated effectively; observes further challenges related to cybersecurity vulnerabilities and to the explainability of AI systems; stresses, therefore, the need to ensure robust data governance, rigorous testing and documentation of AI models, alongside maintaining a human in the loop and upholding a high standard for employing AI systems in consumer-facing applications;
4. Understands that financial institutions have adopted a measured approach to developing, testing and deploying AI systems, with a view to ensuring compliance with existing cross-cutting and sectoral legislation; underlines that this prudent approach may also be driven by undemonstrated customer demand, evolving customer expectations and risk considerations, which have been outlined above;
5. Highlights that the rise of AI poses challenges for supervisory authorities, particularly given the lack of AI-specific expertise and adequate supervisory tools to assess advanced machine learning and generative AI models; calls on the European and national supervisory authorities to adapt to the increasing use of AI in financial services and to monitor, assess and mitigate risks to consumers and financial stability, while being mindful not to discourage innovation through disproportionate compliance burdens or overly prescriptive regulatory approaches;
6. Notes that the concentration among AI service providers that offer investment advice may lead to herd behaviour, driven by similar models and limited data sources; urges the European and national supervisors to monitor these risks and calls on financial institutions to account for them when developing AI tools;
7. Notes the dependency of EU financial actors on third party technology providers (TPPs) to host and develop their AI models and highlights that the majority of financial firms are reliant on only a few TPPs for these services, which may lead to concentration risk and reduce the bargaining power of financial institutions when negotiating or modifying contractual terms for AI services; cautions that reliance on a small number of providers for a given service could lead to systemic risks in the event of disruptions, especially if rapid migration to alternative providers is not feasible;
8. Notes that the recently enacted Digital Operational Resilience Act (DORA) requires financial institutions to implement measures to mitigate concentration risk stemming from information and communication technology (ICT) TPPs, including contingency plans and arrangements to ensure business continuity; requests that the Commission and the European supervisory authorities assess, in particular, the feasibility of applying the exit strategies and transition provisions stipulated in DORA to AI models hosted by the infrastructure of TPPs, especially with regard to the considerable reliance on third-country TPPs for AI services;
9. Emphasises that EU companies must be able to use existing cloud infrastructure for AI development and deployment; calls for actively exploring avenues to strengthen the compatibility and interoperability of AI models and compliance frameworks with those of like-minded international partners, especially those that aspire to provide equally robust regulatory safeguards, to ensure that EU financial institutions maintain access to AI tools and suppliers and with a view to shaping balanced global standards while safeguarding legal certainty for European businesses;
10. Supports initiatives to boost AI and cloud development in the EU, especially with a view to developing AI services that are fully compliant with EU data protection and fundamental rights frameworks, while also strengthening strategic autonomy and resilience;
Regulatory landscape for AI in financial services
11. Stresses that the financial services sector is highly regulated, subject to multiple pieces of sectoral legislation at both national and EU level, requiring actors to manage risks in a variety of areas including data protection, data lineage, data quality, data governance, operational resilience, outsourcing, model risk, discriminatory outcomes, and market and credit risk, which together form the framework for AI deployment and governance in the financial services sector[16]; emphasises, however, the importance of continuously monitoring regulatory gaps and evolving use cases of AI in finance, especially with a view to safeguarding consumer rights and the right to privacy;
12. Notes that the EU has adopted a more risk-based approach to AI regulation than other jurisdictions; underlines that, while this may create challenges for the adoption and development of AI in financial services, it also offers an opportunity to build trust and support innovation, provided that the framework is clarified and implemented in a way that fosters legal certainty, proportionality and market confidence; recognises that the AI Act has not yet been fully implemented and that its practical implications have not yet been assessed;
13. Recalls that the AI Act explicitly takes into account the current financial services acquis and seeks to avoid duplication of requirements, particularly with regard to internal governance and quality management processes, by allowing for limited derogations for financial institutions in so far as equivalent requirements are laid down in EU financial services law; expresses concern that there are, nonetheless, regulatory overlaps and a lack of sufficient guidance on how these overlaps and interactions should be interpreted, which introduces undue complexity, compliance burdens and legal uncertainty, thus hindering the uptake of AI in the financial services sector; underlines the importance of guaranteeing a legal, regulatory and administrative framework that is based on certainty, predictability and stability; notes that an expansive interpretation of the AI Act, rather than a proportional one, may risk leading to undue compliance requirements for financial institutions and causing legal uncertainty; recognises the challenge arising from the fact that supervisory agencies have differing legal interpretations and expectations in terms of the application of the acquis, resulting in fragmentation of the single market; asks the Commission and the national competent authorities to identify and address any inconsistencies in the course of the AI Act's implementation and as part of the upcoming Digital Omnibus package;
14. Supports the AI Act's recommendation to designate financial competent authorities as market surveillance bodies for high-risk AI systems used in financial services; notes, however, that other national competent authorities will be responsible for supervising non-high-risk AI systems; recognises the challenges arising from having multiple supervisory agencies with competences regarding the application of the acquis; recognises, furthermore, the challenges arising from the differing legal interpretations and expectations of the various supervisory agencies, which could lead to the fragmentation of the single market;
15. Encourages the supervisory authorities to strengthen coordination, cooperation and information exchange to avoid overlapping jurisdiction claims; urges, moreover, the Commission and the supervisory authorities to strengthen cooperation with international partners in global standard-setting forums to ensure alignment and avoid the fragmentation of regulatory approaches, as well as to ensure that the EU keeps pace and aligns with global regulatory developments;
16. Notes that the General Data Protection Regulation and its requirements on data minimisation, purpose limitation, customer consent, and financial institutions' processing of personal data impose limitations on the use of AI in financial services; considers that the right balance is needed between reaping the benefits of the use of AI in financial services and the protection of consumers' data;
Recommendations to ensure responsible use of AI in financial services
17. Regrets that the EU is lagging behind in terms of AI innovation and investment, as illustrated by the EUR 33 billion in venture funding received by EU companies developing foundation models between 2018 and 2023, compared to over EUR 120 billion received by their US counterparts[17]; believes that the financial services sector, as the largest spender on ICT services and products, has the potential to act as a catalyst in mobilising private investment in AI; calls, against the backdrop of slow AI investment in the EU's financial sector, for an ambitious proposal to jump-start the European venture capital scene as part of the savings and investments union;
18. Calls on the Commission to provide clear and practical guidance, developed in consultation with the European and national supervisory authorities and stakeholders, on the application of existing financial services legislation with regard to the use of AI; considers that such guidance should aim to enable the use of AI in the financial services sector, including in a way that is ethical, responsible and transparent; calls for consistent definitions and the simplification of the regulatory framework to avoid duplicated requirements, including risk assessment reporting requirements, and cautions against a one-size-fits-all approach that places a disproportionate burden on smaller and medium-sized financial institutions; emphasises the need for a good balance between the responsible use of AI and providing enough room for innovation;
19. Calls on the Commission to explore how AI-driven tools can be used in financial markets, such as in intermediation, portfolio management and compliance automation, to contribute to the objectives of the savings and investments union, including by supporting retail investors in making informed investment decisions, enhancing financial education, fostering innovation among companies, reducing market fragmentation and ensuring a safe environment for consumers; stresses that achieving these goals requires a technology-neutral regulatory framework;
20. Believes that sectoral legislation regulating the use of AI in financial services is mainly sufficient to cover AI deployment in its current form; underlines that there should be continuous monitoring to determine if there are duplications or deficiencies in the current financial services legislation applicable to AI deployment; underlines that additional legislation would add complexity and uncertainty and ultimately risk depriving the sector of the benefits of AI use; stresses that reliance on current frameworks requires continuous supervisory attention, effective enforcement and clear allocation of responsibility for ensuring compliance, particularly in cross-border or outsourced AI deployment scenarios, as well as the monitoring and assessment of possible future gaps created by new AI developments if they create substantial risks to consumers and financial stability; strongly advises the Commission and the Member States to coordinate to avoid gold-plating relevant legislation and to prevent the creation of new barriers in cross-border markets; notes that the Commission, according to the AI Act, can assess the list of high-risk applications under Annex III to the AI Act;
21. Calls on the European and national supervisory authorities to support the responsible uptake of AI by promoting consistent interpretations and proportionate application of current regulations; believes that adequate regulation of AI deployment in the financial services sector supports uptake and societal trust in AI; emphasises that the attitude and approach of supervisors are as important as the rules themselves; recommends that supervisory efforts prioritise tangible, operational risks where identified, rather than abstract or theoretical concerns, while maintaining an active and proportionate approach to supervision, by balancing innovation and consumer protection, to manage unforeseen risks arising from the widening uptake of AI technologies; stresses the role of effectively monitoring and addressing AI-related risks, including those related to opacity, market concentration and loss of accountability, which could impact financial stability;
22. Calls on the Commission and the Member States to remove entry barriers within the EU for AI-driven innovative financial undertakings, including through streamlined licensing, cross-border scale-ups and inclusion in supervisory innovation hubs;
23. Supports research into the environmental impact of AI use, with a focus on resource intensity and long-term sustainability, in order to increase transparency and help financial institutions to assess these aspects and their own environmental footprint;
24. Believes that the increasing use of AI, which may have implications for the financial services job market, requires strong AI literacy, digital skills, and talent involvement, supported by both public-sector upskilling initiatives and market-based solutions; supports industry efforts and targeted initiatives, including public-private partnerships and reskilling programmes, to build technical and ethical AI competencies, especially regarding rights and risks, in the financial workforce; underlines the importance of developing AI strategies that enhance productivity while supporting workers' adaptation, upskilling and reallocation and ensuring meaningful human oversight and control; asks for more clarity with regard to the AI literacy requirements that the AI Act imposes on financial institutions; stresses, furthermore, the importance of ensuring and promoting equal access to AI tools and services, including for less digitally capable segments of the population;
25. Calls on the Commission and the European and national supervisory authorities to assess the added value of AI-specific regulatory sandboxes, innovation hubs and cross-border testing environments for financial services in enabling experimentation with AI-driven financial innovation, both to help start-ups test their products and to allow incumbent institutions to explore new use cases in a controlled setting, while safeguarding consumer protection and market integrity; believes that properly leveraging AI regulatory sandboxes could provide the structured, supervised testing environment necessary to facilitate innovation and responsible AI deployment within the financial services sector; encourages the European and national supervisory authorities to enhance supervisory tools and technology (SupTech) through the use of AI and integrate them into daily supervisory activities to improve the efficiency and effectiveness of financial supervision; notes that these tools are intended to support, not replace, human supervisors;
°
° °
26. Instructs its President to forward this resolution to the Council, the Commission and the governments and parliaments of the Member States.
This report examines the use and impact of AI in the financial services sector and the regulatory landscape. The Rapporteur provides policy recommendations to enable the use of AI in financial services and clarify regulatory overlaps. The report addresses aspects specific to the financial services sector and does not cover matters falling within the remit of other Committees.
The Rapporteur believes that it is crucial for the policy debate on AI in financial services to be grounded in reality and focused on tangible and plausible questions. Due consideration must be given to the existing legal framework and the practical realities of the technology's use in financial services, rather than speculating about abstract or theoretical concerns.
The report therefore starts by analysing the deployment of AI in the sector. It notes that the majority of AI use cases aim to cut costs by streamlining operations rather than to create new revenue streams. Most use cases represent low-hanging fruit rather than high-risk innovation, meaning that it is safe to say that the deployment of AI in finance has been prudent. We are far from experiencing a financial system run by, or heavily dependent on, autonomous, auto-pilot AI models that threaten financial stability and consumers' interests.
The reality is the opposite: the sector is so heavily regulated, and the fiduciary responsibility of financial institutions so highly regarded, that the lion's share of use cases are both low-risk and include a human expert in the loop. Nonetheless, the diffusion and uptake of AI technologies across the financial services sector holds significant potential. Not only can it improve the sector's efficiency, enhance consumer services and strengthen the competitiveness of European firms, but it can also support more effective anti-money laundering and fraud detection.
That is not to say that AI deployment in financial services is without risks. Data quality, explainability and transparency of AI are challenges in this domain as in others. However, the financial services sector, with its myriad of detailed directives and regulations, is well positioned to handle these risks. Financial institutions, whether banks, insurance undertakings or asset managers, are required by EU financial services legislation to have systems in place for data quality, data lineage, data governance, operational resilience, outsourcing, model risk, concentration risk, discriminatory outcomes and more, which provides a framework for AI deployment and governance. As the deployment of AI in finance continues, it will be critical to keep monitoring these risks and to provide finance experts with resources, training and AI literacy.
The alternative is to take a restrictive approach to AI deployment in finance, introducing new legislation out of fear of unknown effects or because the status quo is comfortable. Such a policy would deprive the financial services sector of the opportunity to use AI. This would ultimately undermine the sector's competitiveness, the quality of services offered and the benefits delivered to consumers. It would also have a negative impact on investment in AI technologies, considering that the financial services sector is the biggest spender on ICT services and products. Such a route should be off the table considering the global race for AI, the stark geopolitical realities underpinning it, and the fact that the EU is already lagging behind.
Pursuant to Article 8 of Annex I to the Rules of Procedure, the rapporteur declares that she included in her report input on matters pertaining to the subject of the file that she received, in the preparation of the report, from the following interest representatives falling within the scope of the Interinstitutional Agreement on a mandatory transparency register[1], or from the following representatives of public authorities of third countries, including their diplomatic missions and embassies:
1. Interest representatives falling within the scope of the Interinstitutional Agreement on a mandatory transparency register
- Associazione Bancaria Italiana
- L'Autorité de contrôle prudentiel et de résolution (ACPR), Banque de France
- BlackRock
- Bundesverband deutscher Banken e.V.
- Danske Bank A/S
- Deutsche Bank AG
- European Banking Authority (EBA)
- European Banking Federation
- European Commission (DG FISMA)
- European Fund and Asset Management Association (EFAMA)
- European Insurance and Occupational Pensions Authority (EIOPA)
- European Securities and Markets Authority (ESMA)
- Eurofinas
- Febelfin
- Fédération bancaire française
- Finance Denmark
- Finance Sweden
- Finance Watch
- Finansförbundet (Financial Sector Union of Sweden)
- Finansinspektionen (Financial Supervisory Authority, Sweden)
- Finanssiala ry - Finance Finland
- GDV - German Insurance Association
- Insurance Europe
- Klarna Bank AB
- Mastercard Europe
- Mistral AI
- Nordea Bank Abp
- OpenAI OpCo, LLC
- Rabobank
- Salesforce Inc.
- Skandinaviska Enskilda Banken AB (publ)
- Société Générale
- Stripe, Inc.
- Svenska Handelsbanken AB
- Svensk Försäkring (Insurance Sweden)
- Swedish Financial Technology Association
- Swedbank AB (publ)
- The Alan Turing Institute
- Trygg-Hansa
The list above is drawn up under the exclusive responsibility of the rapporteur.
Where natural persons are identified in the list by their name, by their function or by both, the rapporteur declares that she has submitted to the natural persons concerned the European Parliament's Data Protection Notice No 484 (https://www.europarl.europa.eu/data-protect/index.do), which sets out the conditions applicable to the processing of their personal data and the rights linked to that processing.
Date adopted: 5.11.2025
Result of final vote: + : 36   - : 14   0 : 1
Members present for the final vote: Georgios Aftias, Rasmus Andresen, Stephen Nikola Bartulica, Isabel Benjumea Benjumea, Gilles Boyer, Giovanni Crosetto, Fabio De Masi, Siegbert Frank Droese, Marco Falcone, Markus Ferber, Jonás Fernández, Claire Fita, Dirk Gotink, Enikő Győri, Michalis Hadjipantela, Eero Heinäluoma, Jaroslav Knot, Kinga Kollár, Aurore Lalucq, Rada Laykova, Marlena Maląg, Jorge Martín Frías, Costas Mavrides, Fernando Navarrete Rojas, Denis Nesci, Luděk Niedermayer, Ľudovít Ódor, Nikos Papandreou, Gaetano Pedulla', Kira Marie Peter-Hansen, Sirpa Pietikäinen, Pierre Pimpie, Jaroslava Pokorná Jermanová, Evelyn Regner, Jussi Saramo, Paulius Saudargas, Ralf Seekatz, Irene Tinagli, Pasquale Tridico, Lara Wolters, Stéphanie Yon-Courtin
Substitutes present for the final vote: Marc Botenga, Hanna Gronkiewicz-Waltz, Fernand Kartheiser, Arba Kokalari, Morten Løkkegaard, Marco Squarta, Mariateresa Vivaldini
Members under Rule 216(7) present for the final vote: Jaroslav Bžoch, Ana Catarina Mendes, Dan-Ştefan Motreanu
36 +
ECR: Bartulica Stephen Nikola, Crosetto Giovanni, Maląg Marlena, Nesci Denis, Squarta Marco, Vivaldini Mariateresa
NI: Kartheiser Fernand
PPE: Aftias Georgios, Benjumea Benjumea Isabel, Falcone Marco, Ferber Markus, Gotink Dirk, Gronkiewicz-Waltz Hanna, Hadjipantela Michalis, Kokalari Arba, Kollár Kinga, Motreanu Dan-Ştefan, Navarrete Rojas Fernando, Niedermayer Luděk, Pietikäinen Sirpa, Saudargas Paulius, Seekatz Ralf
Renew: Boyer Gilles, Løkkegaard Morten, Ódor Ľudovít, Yon-Courtin Stéphanie
S&D: Fernández Jonás, Fita Claire, Heinäluoma Eero, Lalucq Aurore, Mavrides Costas, Mendes Ana Catarina, Papandreou Nikos, Regner Evelyn, Tinagli Irene, Wolters Lara
14 -
ESN: Droese Siegbert Frank, Laykova Rada
PfE: Bžoch Jaroslav, Győri Enikő, Knot Jaroslav, Martín Frías Jorge, Pimpie Pierre, Pokorná Jermanová Jaroslava
The Left: Botenga Marc, Pedulla' Gaetano, Saramo Jussi, Tridico Pasquale
Verts/ALE: Andresen Rasmus, Peter-Hansen Kira Marie
1 0
NI: De Masi Fabio
Key:
+ : in favour
- : against
0 : abstentions