Norton Rose Fulbright LLP

September 22, 2025 | News release

The rise in AI shareholder proposals: Navigating shareholder concerns

As discussed in our previous look at the 2025 proxy season, the rapid development of artificial intelligence (AI) technologies has elevated AI to a core governance concern for shareholders. As AI continues to dominate headlines, the need to balance transparency, responsibility, and return on investment is likely to spur a growing number of AI-related shareholder proposals in the coming years.

To navigate this evolving landscape, issuers should develop an effective strategy for managing AI-related risks, including by providing clear disclosure of their risk management efforts, building AI expertise at the board level, and meaningfully engaging with shareholders on the topic of AI.

Developing an effective strategy

As widespread usage of AI grows, so does shareholder awareness of the associated risks and opportunities. While the transformative potential of AI is high, its use also carries risks relating to misinformation and disinformation, data sourcing, privacy, copyright infringement, and human rights.

Accordingly, shareholders increasingly expect boards to adopt clear and effective strategies for managing these risks, particularly where companies are using AI in their operations and systems, or where there is a clear expectation that they will.

Glass Lewis, a leading American proxy advisory firm, recommends in its 2025 Canada Benchmark Policy Guidelines that companies using or developing AI technologies consider adopting strong internal frameworks for AI oversight, clearly disclosing how their boards oversee the use or development of AI, and expanding the board's collective expertise and understanding in this area.

Clear disclosure and risk management

In 2024, more than 31% of S&P 500 companies disclosed some level of board oversight of AI. Companies in the technology sector reported the highest level of board oversight (51%), followed by companies in the communications sector (37%) and the health care and communications services industries (tied at 35%). Companies can demonstrate board oversight of AI in a number of ways, including by: establishing a special committee tasked with AI oversight or an AI-focused ethics board; expanding the scope of an existing committee (typically audit or risk) to include AI-related responsibilities; ensuring certain directors have an adequate level of AI expertise; and providing ongoing AI training and continuing education to management.

Glass Lewis recommends that all companies developing or using AI in their operations provide clear disclosure of the board's role in implementing risk management strategies for the company's use of AI.

Board expertise

Where AI is fundamental to a company's strategy, shareholders are increasingly voicing their expectation that AI expertise be represented in the boardroom. Appointing board members with proven proficiency and experience in AI-related matters can be key to building trust with shareholders and demonstrating that the company is prepared to manage the related risks.

In 2024, 20.5% of S&P 500 companies had at least one director with AI expertise on their board, nearly double the 10.5% reported in 2022, highlighting that shareholders and boards view AI competency as a high priority.

Engagement with proposals as a win

A review of the 2025 proxy season highlights that AI is increasingly being seen as a core governance issue for shareholders. For example, in 2025, Mouvement d'éducation et de défense des actionnaires (MÉDAC), a Quebec-based investors' rights group, brought AI-focused activist proposals to 14 of Canada's most prominent issuers, including all major Canadian banks, Dollarama, and BCE Inc.

Given the absence of Canadian legislation governing companies' use of AI and mitigation of the related risks, MÉDAC's proposals sought to encourage the issuers to sign the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the Code), developed by the federal government in September 2023. The Code sets out measures for organizations developing or managing the operations of advanced generative AI systems, in an effort to address and mitigate the risks of AI.

At nine issuers, MÉDAC's proposal was defeated when put to a vote (although it garnered as much as 17.4% support in one instance); at the other five, the proposal was withdrawn after the companies engaged in meaningful dialogue with MÉDAC. Although support for the proposals was modest, AI matters can be expected to remain on the ballot in the coming years, and companies will benefit from proactively developing competent, responsive strategies and AI risk management tools.

The authors would like to thank Celine Xu, articling student, for her contribution to preparing this legal update.
