09/24/2025 | Press release
Litigators who defend directors in negligence and breach of fiduciary duty cases are watching a new area of liability emerge that many boards aren't prepared for: artificial intelligence oversight.
Canadian boards of directors now face an important question: can you breach your duties by ignoring artificial intelligence? The answer is increasingly yes.
Directors who fail to address AI governance may face liability. Consider how your board would answer basic questions about its AI oversight.
Although a board's duties will vary depending on the nature of the business, answers along the lines of "we never discussed that" create potential exposure should your board's decisions come under scrutiny.
Directors must act with the care and diligence of a "reasonably prudent person" under both the Canada Business Corporations Act and provincial corporate legislation. That standard doesn't stay static - it evolves with the times.
As AI becomes increasingly material to business performance and risk management, boards that ignore AI-related technologies and risks may find it increasingly difficult to demonstrate reasonable prudence.
The Business Judgment Rule offers directors protection: it shields those who make decisions in good faith, within their authority, and with the honest belief that they are acting in the corporation's best interest - but only when those decisions are informed.
What it means to be "informed" is contextual and evolves with business realities. Canadian courts will examine the totality of the circumstances to determine whether directors took reasonable steps to understand AI's implications for their business. The focus is usually on the process by which a decision is made rather than the decision itself. The more significant the decision, the greater the obligation on directors to ensure a reasonable process was followed.
Currently, concrete AI regulation in Canada is very limited, but AI legislation is in development at both the federal and provincial levels, with several key initiatives moving through the legislative process.
When a court or decision-maker evaluates whether a board has met its duties, it will look at available standards and best practices. The regulatory frameworks (and other industry-specific standards) being developed across Canada are creating those benchmarks. Courts will ask: "What standards existed that a reasonable board should have known about and followed?" Those standards will inform what will be considered "reasonable care" for directors.
The Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems identifies measures that should be applied by all affected organizations with general-purpose generative AI capabilities, as well as additional measures for organizations whose systems are made widely available for use and are therefore exposed to a wider range of potentially harmful or inappropriate uses. Companies that have signed the Code have committed to meeting these standards - creating a clear benchmark for what constitutes reasonable AI governance.
While the Code is voluntary, it offers some direction about where regulation is headed. The federal government's appointment of an AI Minister in summer 2025 further signals regulatory momentum.
There are currently no material provincial laws that provide guidance on the scope of directors' duties related to AI. A few provinces have enacted laws addressing personal information, privacy, and the disclosure of AI use in job screening decisions; while these may not directly engage directors, they may prompt management to develop better governance policies or to update boards on AI use.
Given the provinces' broad jurisdiction and the proliferation of AI tools, it is expected that the provinces will address AI in the coming years, although it may be in a piecemeal fashion.
Financial sector directors should be aware that the Office of the Superintendent of Financial Institutions (OSFI) has updated Guideline E-23 Model Risk Management to explicitly cover AI and machine learning models. The guideline should not be read as a mere suggestion - it sets out regulatory expectations against which the conduct of financial institution directors will be judged.
Energy sector boards currently operate with less specific regulatory guidance, which means greater responsibility for proactive governance. Without clear industry standards, courts will evaluate a board's AI oversight against general corporate governance principles and cross-sectoral AI regulations. This environment calls for thoughtful, documented AI governance approaches where AI is material to the business.
Securities law implications are emerging through CSA Staff Notice and Consultation 11-348, which discusses the application of existing Canadian securities laws to the use of AI in capital markets and expressly addresses "AI-washing" - overstating AI capabilities in public communications. AI-washing litigation is increasing in the U.S. and demonstrates the litigation risk of misleading AI-related statements.
When defending directors in liability claims, the key question is always: "What would a reasonably prudent director have done in these circumstances?" In the context of AI-related director liability, courts and other decision-makers will ask the same question, evaluated against the standards and practices that were available to the board at the time.
From a litigation defense perspective, the practical steps below can help create a defensible record. What is advisable or required in an individual case will always depend on the nature of the business and other specific circumstances.
The regulatory framework in Canada has been slow to emerge but is anticipated to start picking up pace over the coming year, given the federal government's recent appointment of an AI Minister. The emerging regulatory framework should be tracked, not only for compliance purposes but also because relevant standards will inform a decision-maker's evaluation of whether directors met their duties.
Boards that educate themselves, establish governance frameworks, and document their oversight efforts and decision-making around AI will be well-positioned to demonstrate that they exercised appropriate care and took reasonable steps to understand and oversee AI's impact on their business.