AI Safety in Montréal
Local field-building hub serving the Montréal AI safety, ethics & governance community. We organize meetups, coworking sessions, targeted workshops, advising, and collaborations.
About
Montréal is home to one of the world's largest AI research ecosystems, anchored by Mila and surrounded by leading labs and startups. AI Safety in Montréal builds the local community of researchers, practitioners, and advocates working to ensure AI development benefits humanity.
We co-run the Mila AI safety reading group (biweekly sessions with 10–20 researchers) and serve members across AI safety, ethics, and governance.
What we do
Past events
2025
Can AI systems be conscious? How could we know? And why does it matter?
Joaquim Streicher (Ph.D. candidate in Neuroscience; co-founder of MONIC)
Presentation on the debate around AI consciousness (current vs future models), how consciousness might be assessed, and why avoiding false negatives/false positives matters ethically; includes introduction to MONIC. Recommended readings: Bayne et al. (2024), Butlin et al. (2023), Chalmers (2023), Colombatto & Fleming (2024), Martin, Streicher, O'Dea (2025).
Veracity in the Age of Persuasive AI
Taylor Lynn Curtis (Mila)
Talk on the tension between AI persuasion and ethical deployment; introduces "Veracity," a tool using AI to detect/mitigate misinformation and support data quality/user protection; closes with governance insights.
Tipping Points & Early Warnings: Complex Systems Theory on Catastrophic Transitions
Discussion of Scheffer et al. (Nature, 2009) on generic early-warning signals near tipping points (e.g., "critical slowing down"), and implications for AI governance.
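For a feel of what "critical slowing down" looks like in data, here is a minimal Python sketch (our own illustration, not material from the paper or the session; the toy AR(1)-style system and all variable names are assumptions): as the system's recovery rate weakens toward a tipping point, lag-1 autocorrelation in a sliding window creeps upward, which is one of the generic early-warning indicators discussed in Scheffer et al.

```python
import numpy as np

# Illustrative toy system (not from Scheffer et al.):
# x[t+1] = (1 - r[t]) * x[t] + noise, where the recovery rate r shrinks over time.
# Weaker recovery => "critical slowing down" => rising lag-1 autocorrelation.

rng = np.random.default_rng(0)
steps = 2000
recovery = np.linspace(0.5, 0.02, steps)  # recovery rate decays toward 0
x = np.zeros(steps)
for t in range(steps - 1):
    x[t + 1] = (1 - recovery[t]) * x[t] + rng.normal(scale=0.1)

def lag1_autocorr(window: np.ndarray) -> float:
    """Lag-1 autocorrelation of a 1-D series (a standard early-warning indicator)."""
    return float(np.corrcoef(window[:-1], window[1:])[0, 1])

# Track the indicator in sliding windows; it should rise as recovery weakens.
window_size = 200
for start in range(0, steps - window_size + 1, 400):
    ac = lag1_autocorr(x[start:start + window_size])
    print(f"window starting at t={start:4d}: lag-1 autocorrelation = {ac:.2f}")
```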
Pessimists Archive
Emma Kondrup
Activity/discussion using pessimistsarchive.org to compare historical "new technology panic" headlines (cars/radio/TV) with modern AI narratives; explores when "AI exceptionalism" (or "existentialism") is justified.
Defensive Acceleration Hackathon
Hackathon focused on "defensive acceleration" (def/acc): building tech to strengthen defenses against major threats (pandemics, cybercrime, and AI risk). Prize pool: $20,000 USD. Co-organized with Apart Research.
Neuronpedia 101
Discussion + demo introducing Neuronpedia concepts (models, sparse autoencoders, features/lists, feature pages), running small experiments (search, activation tests), and ending with ways to contribute.
Co-design a National Citizens' Assembly on Superintelligence
Short workshop to co-design a National Citizens' Assembly on Superintelligence for Canada; intended outputs: a Concept Note, a Consortium Intent Memo, and an invite list.
Canada's 2025 Budget vs AI risk
Discussion of AI-related parts of Canada's 2025 federal budget and how they map onto AI risk reduction / threat models (power concentration, epistemics, bio, autonomy, misuse, systemic risk, etc.).
If Anyone Reads It, Everyone's Welcome
Small gathering/reading-group discussion of "If Anyone Builds It, Everyone Dies," using author-suggested discussion questions. Co-organized with PauseAI Montréal.
International AI Safety Report – First Key Update
Walkthrough/discussion of the International AI Safety Report "First Key Update: Capabilities and Risk Implications" (dated 2025-10-14), covering recent capability gains, longer-horizon agents, and implications for bio/cyber risks, monitoring/controllability, and labor-market impacts.
Canada's AI Strategy Survey Jam
Hands-on group session to complete the Government of Canada's consultation survey for the next national AI strategy; includes a short briefing, an hour to fill in the survey, and a wrap-up.
If Anyone Builds It, Everyone Dies
Launch/discussion event for "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares): primer on claims, then discussion + audience Q&A on technical/policy/institutional risk-reduction moves.
A Definition of AGI
Walkthrough of a proposal operationalizing AGI as matching the cognitive versatility and proficiency of a well-educated adult, grounded in the Cattell-Horn-Carroll (CHC) model; emphasizes concrete tests over a single benchmark.
Introducing PauseAI Montréal
Nik Lacombe
Introduction + discussion of PauseAI and its Montréal group; focuses on mitigating risks by convincing governments to pause development of superhuman AI.
Introducing aisafety.info
Olivier Coutu
Overview of aisafety.info: intro to existential AI risk, large FAQ, "Stampy" chatbot, and an alignment resources dataset; includes Q&A and requests for improvement suggestions/help.
Global Call for AI Red Lines
Discussion of the Global Call for AI Red Lines and what "do-not-cross" limits could look like in practice (prohibitions, treaty precedents, and Canadian roles).
Social Media Safety and the Unplug project
Evan Lombardi
Talk on the impacts of social media recommendation algorithms on mental health; surveys online manipulation and dark patterns, scams and deepfakes, extremist and explicit content, and mis/dis/malinformation; closes with an overview of the Unplug Project.
Verifying a toy neural network
Samuel Gélineau
Demo/project talk showing how to verify that a neural network satisfies a safety property across a whole input range (not just tested inputs) by adapting range-analysis ideas to the network's weights.
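As a rough illustration of the range-analysis idea (a minimal sketch of our own, not the speaker's code; the tiny two-layer ReLU network, its weights, and the safety threshold are all assumptions for demonstration): propagating an input interval through the weights with interval arithmetic yields output bounds that hold for every input in the range, so a safety property can be checked beyond sampled test points.

```python
import numpy as np

# Minimal interval bound propagation through a tiny 2-layer ReLU network.
# Hypothetical weights and bounds, for illustration only: interval arithmetic
# over the weights gives output bounds valid for *every* input in the region,
# not just the inputs we happened to test.

def affine_bounds(lo, hi, W, b):
    """Propagate elementwise input bounds [lo, hi] through y = W @ x + b."""
    W_pos = np.clip(W, 0, None)   # positive part of the weights
    W_neg = np.clip(W, None, 0)   # negative part of the weights
    y_lo = W_pos @ lo + W_neg @ hi + b
    y_hi = W_pos @ hi + W_neg @ lo + b
    return y_lo, y_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy network: 2 inputs -> 3 hidden ReLU units -> 1 output (made-up weights).
W1 = np.array([[ 1.0, -0.5],
               [ 0.3,  0.8],
               [-0.6,  0.4]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[0.7, -1.2, 0.5]])
b2 = np.array([0.05])

# Input region of interest: each input in [-1, 1].
x_lo = np.array([-1.0, -1.0])
x_hi = np.array([ 1.0,  1.0])

h_lo, h_hi = relu_bounds(*affine_bounds(x_lo, x_hi, W1, b1))
y_lo, y_hi = affine_bounds(h_lo, h_hi, W2, b2)

print(f"output guaranteed to lie in [{y_lo[0]:.3f}, {y_hi[0]:.3f}]")
# Example safety property (assumed threshold): output never exceeds 5 on this region.
assert y_hi[0] <= 5.0, "safety property violated somewhere in the input region"
```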
Towards Guaranteed Safe AI
Orpheus Lummis
Presentation of core ideas from "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems", followed by Q&A and open discussion.