Lawson Lundell LLP

Can a Board of Directors Breach Its Duties by Ignoring AI? A Litigator's View

Litigators who defend directors in negligence and breach of fiduciary duty cases are watching a new area of liability emerge that many boards aren't prepared for: artificial intelligence oversight.

Canadian boards of directors now face an important question: can you breach your duties by ignoring artificial intelligence? The answer is increasingly yes.

Directors who fail to address AI governance may find themselves facing potential liability. Consider what your board's answers would be to the following questions:

  • What steps did the board take to understand AI's impact on the company?
  • When did management first brief the board on AI risks?
  • What governance frameworks did you establish for AI oversight?
  • How did you oversee third-party AI implementation?

Although a board's duties will vary depending on the nature of the business, if the answers are variations of "we never discussed that," this creates potential liability should your board's decisions come under scrutiny.

The Standard of Care is Evolving

Directors must act with the care and diligence of a "reasonably prudent person" under both the Canada Business Corporations Act and provincial corporate legislation. That standard doesn't stay static - it evolves with the times.

As AI becomes more material to business performance and risk management, boards that ignore AI-related technologies and risks may find it increasingly difficult to demonstrate reasonable prudence.

The Business Judgment Rule Requires Informed Decision-Making

The Business Judgment Rule offers boards protection, but only if they can show their decisions were informed. The rule protects directors who make decisions in good faith, within their authority, and in the honest belief that they are acting in the corporation's best interest - but only when those decisions are informed.

What it means to be "informed" is contextual and evolves with business realities. Canadian courts will examine the totality of the circumstances to determine whether directors took reasonable steps to understand AI's implications for their business. The focus is usually on the process by which a decision is made rather than the decision itself. The more significant the decision, the greater the obligation on directors to ensure a reasonable process was followed.

Why the Emerging Regulatory Landscape Matters

Currently, concrete AI regulation in Canada is very limited, but AI legislation is in development at both the federal and provincial levels, with several key initiatives moving through the legislative process.

When a court or decision-maker evaluates whether a board has met its duties, it will look at available standards and best practices. The regulatory frameworks (and other industry-specific standards) being developed across Canada are creating those benchmarks. Courts will ask: "What standards existed that a reasonable board should have known about and followed?" Those standards will inform what will be considered "reasonable care" for directors.

Federal Developments

The Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems identifies measures that should be applied by all affected organizations with general-purpose generative AI capabilities, as well as additional measures for organizations whose systems are made widely available for use and are therefore exposed to a wider range of potentially harmful or inappropriate uses. Companies that have signed this Code have essentially committed to meeting these standards - creating a clear benchmark for what constitutes reasonable AI governance.

While this Code is voluntary, it provides some direction on what is to come. The federal government's appointment of an AI Minister in summer 2025 further signals regulatory momentum.

Provincial Developments

There are currently no material provincial laws that provide guidance about the scope of directors' duties related to AI. A few provinces have enacted laws related to personal information, privacy and disclosure of AI in job screening decisions, which may not directly engage directors, but may cause management to develop better governance policies or provide updates to boards about AI use.

Given the provinces' broad jurisdiction and the proliferation of AI tools, it is expected that the provinces will address AI in the coming years, although it may be in a piecemeal fashion.

Industry-Specific Standards

Financial sector directors should be aware that the Office of the Superintendent of Financial Institutions (OSFI) has updated Guideline E-23 Model Risk Management to explicitly cover AI and machine learning models. The guideline should not be seen as a mere suggestion - it sets regulatory expectations that will be used to judge financial institution directors' conduct.

Energy sector boards currently operate with less specific regulatory guidance, which means greater responsibility for proactive governance. Without clear industry standards, courts will evaluate a board's AI oversight against general corporate governance principles and cross-sectoral AI regulations. This environment calls for thoughtful, documented AI governance approaches where AI is material to the business.

Securities law implications are emerging through CSA Staff Notice and Consultation 11-348, which discusses the application of existing Canadian securities laws to the use of AI in capital markets and expressly addresses "AI-washing" - overstating AI capabilities in public communications. AI-washing litigation is increasing in the U.S. and demonstrates the litigation risk of misleading AI-related statements.

The Litigation Reality: What Courts Will Look For

When defending directors in director liability claims, the key question is always: "What would a reasonably prudent director have done in these circumstances?" In the context of AI-related director liability, courts and other decision-makers may evaluate what a reasonably prudent director would have done by examining:

  • What AI-related standards and best practices existed
  • Whether the board sought to understand AI's impact on their business
  • What governance frameworks the board and/or company established
  • How the board oversaw AI-related risks and opportunities
  • Whether public statements about AI capabilities were accurate and substantiated

Practical Steps to Protect Your Board

From a litigation defense perspective, below are some practical steps that can help create a defensible record. What is advisable or required in an individual case, however, will always depend on the nature of the business and other specific circumstances.

[1] Document AI education and understanding

  • Ensure board members have basic AI literacy relevant to the business
  • Create records of AI education and training initiatives
  • Document how the board educated itself about AI fundamentals and industry applications

[2] Treat AI as a strategic business issue

  • Don't delegate AI oversight entirely to IT - maintain board-level strategic oversight
  • Conduct and document regular AI risk assessments covering, at minimum:
    • Employment and HR contexts (hiring, performance evaluation)
    • Employee use of AI tools (approved and unauthorized)
  • Require regular management reporting on AI strategy, implementation, and risk management
  • Consider whether adequate resources have been allocated to AI initiatives - not only to the technology itself but also to employee training and risk management

[3] Establish governance frameworks early

  • Ensure that management develops written AI policies covering development, deployment, and monitoring, as well as ethics
  • Create reporting structures for AI oversight
  • Document oversight of third-party AI relationships and conduct due diligence on external AI providers
  • Ensure that emerging AI-related laws and regulations are tracked and that the board is kept regularly informed - not only so the company remains in compliance but also so the board understands the evolving standard of care

[4] Control AI communications

  • Ensure that management develops approval processes for all AI-related public communications, including:
    • Earnings calls discussing AI initiatives
    • Marketing materials claiming AI advantages
    • Investor presentations covering AI strategy
    • Regulatory filings mentioning AI risks or opportunities

The Bottom Line

A board of directors could be in breach of its duties if it ignores AI.

Given the proliferation of publicly available information about AI, boards are advised to educate themselves about material AI-related risks and opportunities in order to demonstrate informed decision-making. If AI is material to the business, a board that claims ignorance about AI's impact will face tough questions about whether it discharged its duties should its decisions come under scrutiny.

The regulatory framework in Canada has been slow to emerge but is anticipated to start picking up pace over the coming year, given the federal government's recent appointment of an AI Minister. The emerging regulatory framework should be tracked, not only for compliance purposes but also because relevant standards will inform a decision-maker's evaluation of whether directors met their duties.

Boards that educate themselves, establish governance frameworks, and document their oversight efforts and decision-making process when it comes to AI will be well-positioned to demonstrate that they exercised appropriate care and took reasonable steps to understand and oversee AI's impact on the business.
