AI Safety in Montréal

1600+ members · aisafetymontreal.org

Local field-building hub serving the Montréal AI safety, ethics & governance community. We organize meetups, coworking sessions, targeted workshops, advising, and collaborations.

About

Montréal is home to one of the world's largest AI research ecosystems, anchored by Mila and surrounded by leading labs and startups. AI Safety in Montréal builds the local community of researchers, practitioners, and advocates working to ensure AI development benefits humanity.

We co-run the Mila AI safety reading group (biweekly sessions with 10–20 researchers) and serve members across AI safety, ethics, and governance.

What we do

Meetups: Regular community gatherings for AI safety researchers and practitioners in Montréal.
Coworking sessions: Focused sessions for people working on AI safety projects.
Reading group: Biweekly sessions at Mila discussing recent AI safety research with 10–20 researchers.
Workshops: Targeted sessions on specific topics, from technical alignment to AI governance.
Advising: One-on-one guidance for those looking to enter or advance in AI safety.

Past events

2025

Dec 16

Can AI systems be conscious? How could we know? And why does it matter?

Joaquim Streicher (Ph.D. candidate in Neuroscience; co-founder of MONIC)

Presentation on the debate around AI consciousness (current vs. future models), how consciousness might be assessed, and why avoiding both false negatives and false positives matters ethically; includes an introduction to MONIC. Recommended readings: Bayne et al. (2024), Butlin et al. (2023), Chalmers (2023), Colombatto & Fleming (2024), Martin, Streicher & O'Dea (2025).

Dec 2

Veracity in the Age of Persuasive AI

Taylor Lynn Curtis (Mila)

Talk on the tension between AI persuasion and ethical deployment; introduces "Veracity," a tool using AI to detect/mitigate misinformation and support data quality/user protection; closes with governance insights.

Nov 27

Tipping Points & Early Warnings: Complex Systems Theory on Catastrophic Transitions

Discussion of Scheffer et al. (Nature, 2009) on generic early-warning signals near tipping points (e.g., "critical slowing down") and their implications for AI governance.
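To give a flavour of the core idea (a minimal sketch, not material from the session): "critical slowing down" means that as a system's restoring force weakens toward a tipping point, it recovers from small shocks more slowly, which shows up as rising lag-1 autocorrelation in its time series. The toy simulation below, with illustrative parameters, makes that visible.

```python
# Minimal illustration of "critical slowing down" (illustrative, not from
# the session): simulate a noisy system with restoring force k, and watch
# lag-1 autocorrelation rise as k shrinks toward the tipping point at k = 0.
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(k, steps=20_000, dt=0.01):
    """Simulate dx = -k*x dt + noise and return lag-1 autocorrelation."""
    x = np.zeros(steps)
    for t in range(steps - 1):
        x[t + 1] = x[t] - k * x[t] * dt + rng.normal(0.0, np.sqrt(dt))
    return np.corrcoef(x[:-1], x[1:])[0, 1]

for k in (5.0, 1.0, 0.2):  # weaker restoring force = closer to the tipping point
    print(f"k = {k}: lag-1 autocorrelation = {lag1_autocorr(k):.3f}")
# The autocorrelation climbs toward 1.0 as k -> 0: the generic early warning.
```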

Nov 25

Pessimists Archive

Emma Kondrup

Activity/discussion using pessimistsarchive.org to compare historical "new technology panic" headlines (cars/radio/TV) with modern AI narratives; explores when "AI exceptionalism" (or "existentialism") is justified.

Nov 22–23

Defensive Acceleration Hackathon

Hackathon focused on "defensive acceleration" (def/acc): building tech to strengthen defenses against major threats (pandemics, cybercrime, and AI risk). Prize pool: $20,000 USD. Co-organized with Apart Research.

Nov 20

Neuronpedia 101

Discussion + demo introducing Neuronpedia concepts (models, sparse autoencoders, features/lists, feature pages), running small experiments (search, activation tests), and ending with ways to contribute.

Nov 18

Co-design a National Citizens' Assembly on Superintelligence

Short workshop to co-design a National Citizens' Assembly on Superintelligence for Canada; intended outputs: a Concept Note, a Consortium Intent Memo, and an invite list.

Nov 13

Canada's 2025 Budget vs AI risk

Discussion of AI-related parts of Canada's 2025 federal budget and how they map onto AI risk reduction / threat models (power concentration, epistemics, bio, autonomy, misuse, systemic risk, etc.).

Nov 11

If Anyone Reads It, Everyone's Welcome

Small gathering/reading-group discussion of "If Anyone Builds It, Everyone Dies," using author-suggested discussion questions. Co-organized with PauseAI Montréal.

Nov 4

International AI Safety Report – First Key Update

Walkthrough/discussion of the International AI Safety Report "First Key Update: Capabilities and Risk Implications" (dated 2025-10-14), covering recent capability gains, longer-horizon agents, and implications for bio/cyber risks, monitoring/controllability, and labor-market impacts.

Oct 30

Canada's AI Strategy Survey Jam

Hands-on group session to complete the Government of Canada's consultation survey for the next national AI strategy; includes a short briefing, an hour to fill in the survey, and a wrap-up.

Oct 28

If Anyone Builds It, Everyone Dies

Launch/discussion event for "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares): a primer on the book's claims, then discussion and audience Q&A on technical, policy, and institutional risk-reduction moves.

Oct 23

A Definition of AGI

Walkthrough of a proposal operationalizing AGI as matching the cognitive versatility and proficiency of a well-educated adult, grounded in the Cattell-Horn-Carroll (CHC) model; emphasizes concrete tests over a single benchmark.

Oct 21

Introducing PauseAI Montréal

Nik Lacombe

Introduction + discussion of PauseAI and its Montréal group; focuses on mitigating risks by convincing governments to pause development of superhuman AI.

Oct 16

Introducing aisafety.info

Olivier Coutu

Overview of aisafety.info: an intro to existential AI risk, a large FAQ, the "Stampy" chatbot, and an alignment resources dataset; includes Q&A and a request for suggestions and help with improvements.

Oct 14

Global Call for AI Red Lines

Discussion of the Global Call for AI Red Lines and what "do-not-cross" limits could look like in practice (prohibitions, treaty precedents, and Canadian roles).

Oct 7

Social Media Safety and the Unplug project

Evan Lombardi

Talk on the impacts of social media recommendation algorithms on mental health; surveys online manipulation/dark patterns, scams/deepfakes, extremist/explicit content, and mis/dis/malinformation; closes with an overview of the Unplug Project.

Oct 2

Verifying a toy neural network

Samuel Gélineau

Demo/project talk showing how to verify that a neural network satisfies a safety property (beyond tested inputs) by adapting range-analysis ideas to network weights.
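For a flavour of the technique (a minimal sketch with hypothetical weights, not the actual project code): interval (range) analysis propagates guaranteed input bounds through each layer, yielding output bounds that hold for every input in the range, so a safety property can be checked once for all of them.

```python
# Minimal sketch (not the talk's actual code): interval bound propagation
# through a tiny ReLU network. Given guaranteed input ranges, it derives
# guaranteed output ranges, so a safety property can be checked for ALL
# inputs in the box, not just tested ones. Weights here are illustrative.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread per output unit
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-2-1 network (hypothetical weights, for illustration only).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

# Input box: every coordinate in [-1, 1].
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# Example safety property: the output never exceeds some threshold.
print(f"output in [{lo[0]:.2f}, {hi[0]:.2f}]")
assert hi[0] <= 10.0, "safety property could not be verified"
```

The same center/radius trick extends to deeper networks, though the bounds generally grow looser with each layer.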

Sep 16

Towards Guaranteed Safe AI

Orpheus Lummis

Presentation of core ideas from "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems", followed by Q&A and open discussion.

Join the community

Visit aisafetymontreal.org