Simon Fraser University

Canadian youth lobby Ottawa for action on AI safety and online harms

April 30, 2026

Canadian youth have delivered a set of AI policy recommendations to ministers, parliamentarians and senators in Ottawa as they seek to make their voices heard at a pivotal moment in the debate around AI and online harms.

The proposals include an age-verification system to restrict users' access to generative AI platforms, as well as a call for AI companies to address the addictive design of AI chatbots.

The report comes after 100 youth, split across four citizens' assemblies, discussed and debated key issues around AI chatbots, information integrity, data privacy and age assurance.

Representatives from the Gen(Z)AI project, spearheaded by Simon Fraser University's Dialogue on Technology Project, delivered their recommendations to Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, and Marc Miller, Minister of Canadian Identity and Culture, earlier today.

Liam McKay-Argyriou, an SFU undergrad in communication, took part in the initiative and is among those making the trip to Ottawa.

"While the economic benefits of AI are exciting, young Canadians have experienced how these new tools can cause serious harms that individual citizens are not equipped to address," says McKay-Argyriou.

"We need legislation that enforces clear guardrails to protect the safety of our data, mental health and democratic institutions, in the same way our government upholds safety standards for automobiles or pharmaceuticals.

"It feels empowering to share my perspective on a topic of importance to me and know my voice will be heard by decision-makers, something many youth don't have the chance to experience."

There is currently no binding legal framework regulating either AI systems or online platforms in Canada, following the collapse of both the Online Harms Act (C-63) and the Artificial Intelligence and Data Act (AIDA) in 2025.

"My message to legislators is that youth need to be consulted and involved in the policy processes," says Joie Marin, an SFU undergrad in communication.

"AI and online harms regulation is a form of care that needs to be implemented in a way that supports the digital empowerment of young people.

"Now is a critical time to act, and something must be done to keep Canadians safe in digital environments."

The recommendations put forward in the report feed directly into the process of shaping Canada's digital governance architecture, according to Fergus Linley-Mota, director of the Dialogue on Technology Project.

"Young people are on the front lines of AI technology and they're facing a whole series of disruptive changes to their lives," says Linley-Mota.

"Yet they've been largely absent from the governance processes shaping their digital lives. Gen(Z)AI was set up to change that."

The 100 youth were selected nationally by civic lottery in order to reflect Canada's geographic, linguistic and demographic diversity.

The four citizens' assemblies each tackled a specific policy theme: AI chatbots in Toronto; information integrity in Montreal; data privacy in Vancouver; and age assurance in Halifax.

After three days of discussions, youth in each location came up with issue statements and a set of recommendations for their policy area.

"Our youth participants expressed a consistent and striking ambivalence: they use AI tools, often extensively, while simultaneously distrusting the platforms that deliver them, the governments that regulate them, and the incentive structures that shape them," says Helen Hayes, project co-lead and a fellow at SFU's Morris J. Wosk Centre for Dialogue.

"This is a rational response to a governance landscape that has, until now, spoken about young people rather than with them.

"The legislation being constructed now will shape the digital lives of young Canadians for decades to come. It's vital that they have a seat at the table."

The project was carried out in partnership with McGill's Centre for Media, Technology and Democracy and Mila - Quebec Artificial Intelligence Institute.

Select recommendations

AI and chatbots

  • Mandate that AI platforms address the addictive design of AI chatbots by requiring measures such as content filters and optional data cache deletion, and explicitly providing users with the ability to determine levels of responsiveness and conversationality.
  • Mandate accessible flagging capacity for users, require platforms to regularly report these instances, in a timely fashion, to an independent body with enforcement capacity, and make such reports accessible to the Canadian public.
  • Establish a new, independent government body to enforce AI safety standards, conduct systems evaluations, algorithm audits, and risk assessments, and intake user complaints, including by offering dispute resolution and other resource mechanisms.

AI and information integrity

  • Mandate that digital platforms explicitly label AI-generated content and give users the functionality to omit this content.
  • Give people copyright over their own features and likeness, and create an online regulator to enforce the removal of non-consensual AI-generated material, including Child Sexual Abuse Material (CSAM).
  • Mandate that platforms monitor, flag, and transparently share information, with both the public and government, about the spread of mis- and dis-information, especially during high-risk moments, including elections and public health crises.

AI and data privacy

  • Mandate that platforms and AI companies provide users with meaningful and informed consent mechanisms, including by publishing a version of their terms and conditions that uses plain language and is accessible by default.
  • Impose privacy-by-default standards for all AI systems.

AI and age assurance

  • Create a standardized age-verification system to restrict users' access to generative AI platforms through the creation of an anonymized digital token system, with associated programs and accessible resources to inform the public about its implementation.
  • Mandate, in cases where age assurance is used, that companies adhere to stronger regulation, enforced by a Regulator, surrounding the use of sensitive age assurance data, including by:
    • Imposing time-limited storage;
    • Imposing safety audits on platforms and third-party data collectors;
    • Requiring leakage protection in models and training.
  • Mandate that any AI platforms accessible to children, including in educational contexts, implement safety-by-design protocols to safeguard their use and promote learning and skills development.

AVAILABLE EXPERTS

FERGUS LINLEY-MOTA, director, Dialogue on Technology Project
[email protected]

HELEN HAYES, fellow, SFU's Morris J. Wosk Centre for Dialogue
[email protected]

Youth representatives are also available for interview upon request.

CONTACT

SAM SMITH, SFU Communication & Marketing 778.782.3210 | [email protected]

Communications & Marketing | SFU Media Experts Directory 
778.782.3210 

ABOUT SIMON FRASER UNIVERSITY  
Who We Are
SFU is a leading research university, advancing an inclusive and sustainable future. Over the past 60 years, SFU has been recognized among the top universities worldwide in providing a world-class education and working with communities and partners to develop and share knowledge for deeper understanding and meaningful impact. Committed to excellence in everything we do, SFU fosters innovation to address global challenges and continues to build a welcoming, inclusive community where everyone feels a sense of belonging. With campuses in British Columbia's three largest cities (Burnaby, Surrey and Vancouver), SFU has ten faculties that deliver 368 undergraduate degree programs and 149 graduate degree programs for more than 37,000 students each year. The university boasts more than 200,000 alumni residing in 145+ countries.
