Sûreté de l'IA à Montréal
Local hub serving the Montréal community in AI safety, ethics, and governance. We organize meetups, coworking sessions, focused workshops, mentorship, and collaborations.
About
Montréal is home to one of the largest AI research ecosystems in the world, anchored by Mila and surrounded by leading labs and startups. Sûreté de l'IA à Montréal builds the local community of researchers, practitioners, and advocates working to ensure that AI development benefits humanity.
We co-host Mila's AI safety reading group (biweekly sessions with 10 to 20 researchers) and serve members working in AI safety, ethics, and governance.
What we do
Past events
2025
Can AI systems be conscious? How could we know? And why does it matter?
Joaquim Streicher (Ph.D. candidate in Neuroscience; co-founder of MONIC)
Presentation on the debate around AI consciousness (current vs future models), how consciousness might be assessed, and why avoiding false negatives/false positives matters ethically; includes introduction to MONIC. Recommended readings: Bayne et al. (2024), Butlin et al. (2023), Chalmers (2023), Colombatto & Fleming (2024), Martin, Streicher, O'Dea (2025).
Veracity in the Age of Persuasive AI
Taylor Lynn Curtis (Mila)
Talk on the tension between AI persuasion and ethical deployment; introduces "Veracity," a tool using AI to detect/mitigate misinformation and support data quality/user protection; closes with governance insights.
Tipping Points & Early Warnings: Complex Systems Theory on Catastrophic Transitions
Discussion of Scheffer et al. (Nature, 2009) on generic early-warning signals near tipping points (e.g., "critical slowing down"), and implications for AI governance.
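As a minimal illustration of the "critical slowing down" signal discussed (not material from the session; the model, parameters, and window choices below are assumptions made for this sketch), one can simulate a noisy double-well system whose forcing slowly approaches a fold bifurcation and watch lag-1 autocorrelation rise in later windows:

```python
# Illustrative sketch only: rising lag-1 autocorrelation ("critical slowing down")
# in a noisy double-well system slowly forced toward a fold tipping point.
import numpy as np

rng = np.random.default_rng(0)

def simulate(steps=4000, dt=0.1, noise=0.05, c_max=0.37):
    """Euler-Maruyama on dx = -(x**3 - x + c(t)) dt + noise dW,
    with the forcing c ramped from 0 toward the fold at c ~ 0.385."""
    x, xs = 1.0, []
    for t in range(steps):
        c = c_max * t / steps
        x += -(x**3 - x + c) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        xs.append(x)
    return np.array(xs)

def lag1_autocorr(w):
    return float(np.corrcoef(w[:-1], w[1:])[0, 1])

xs = simulate()
trend = np.convolve(xs, np.ones(200) / 200, mode="same")   # crude detrending
resid = xs - trend
early = lag1_autocorr(resid[500:1500])
late = lag1_autocorr(resid[2500:3500])
# Autocorrelation typically rises as the system loses resilience near the tipping point.
print(f"lag-1 autocorrelation, early window: {early:.2f}, late window: {late:.2f}")
```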
Pessimists Archive
Emma Kondrup
Activity/discussion using pessimistsarchive.org to compare historical "new technology panic" headlines (cars/radio/TV) with modern AI narratives; explores when "AI exceptionalism" (or "existentialism") is justified.
Defensive Acceleration Hackathon
Hackathon focused on "defensive acceleration" (def/acc): building tech to strengthen defenses against major threats (pandemics, cybercrime, and AI risk). Prize pool: $20,000 USD. Co-organized with Apart Research.
Neuronpedia 101
Discussion + demo introducing Neuronpedia concepts (models, sparse autoencoders, features/lists, feature pages), running small experiments (search, activation tests), and ending with ways to contribute.
Co-design a National Citizens' Assembly on Superintelligence
Short workshop to co-design a National Citizens' Assembly on Superintelligence for Canada; intended outputs: a Concept Note, a Consortium Intent Memo, and an invite list.
Canada's 2025 Budget vs AI risk
Discussion of AI-related parts of Canada's 2025 federal budget and how they map onto AI risk reduction / threat models (power concentration, epistemics, bio, autonomy, misuse, systemic risk, etc.).
If Anyone Reads It, Everyone's Welcome
Small gathering/reading-group discussion of "If Anyone Builds It, Everyone Dies," using author-suggested discussion questions. Co-organized with PauseAI Montréal.
International AI Safety Report – First Key Update
Walkthrough/discussion of the International AI Safety Report "First Key Update: Capabilities and Risk Implications" (dated 2025-10-14), covering recent capability gains, longer-horizon agents, and implications for bio/cyber risks, monitoring/controllability, and labor-market impacts.
Canada's AI Strategy Survey Jam
Hands-on group session to complete the Government of Canada's consultation survey for the next national AI strategy; includes short briefing, 1-hour survey fill, and wrap-up.
If Anyone Builds It, Everyone Dies
Launch/discussion event for "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares): primer on claims, then discussion + audience Q&A on technical/policy/institutional risk-reduction moves.
A Definition of AGI
Walkthrough of a proposal operationalizing AGI as matching the cognitive versatility and proficiency of a well-educated adult, grounded in the Cattell-Horn-Carroll (CHC) model; emphasizes concrete tests over a single benchmark.
Introducing PauseAI Montréal
Nik Lacombe
Introduction + discussion of PauseAI and its Montréal group; focuses on mitigating risks by convincing governments to pause development of superhuman AI.
Introducing aisafety.info
Olivier Coutu
Overview of aisafety.info: intro to existential AI risk, large FAQ, "Stampy" chatbot, and an alignment resources dataset; includes Q&A and requests for improvement suggestions/help.
Global Call for AI Red Lines
Discussion of the Global Call for AI Red Lines and what "do-not-cross" limits could look like in practice (prohibitions, treaty precedents, and Canadian roles).
Social Media Safety and the Unplug project
Evan Lombardi
Impacts of social media recommendation algorithms on mental health; survey of online manipulation/dark patterns, scams/deepfakes, extremist/explicit content, and mis/dis/malinformation; closes with an overview of the Unplug Project.
Verifying a toy neural network
Samuel Gélineau
Demo/project talk showing how to verify that a neural network satisfies a safety property (beyond tested inputs) by adapting range analysis ideas to network weights.
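For readers unfamiliar with the range-analysis idea, here is a minimal sketch (the toy weights and the safety property are invented for illustration and are not the speaker's example): an input interval box is propagated through each layer, so the resulting output bound holds for every input in the box, not just the ones that were tested.

```python
# Illustrative sketch: interval "range analysis" through a tiny fixed-weight
# ReLU network, bounding the output for ALL inputs in a box.
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate an input box [l, u] through x -> W @ x + b (sound interval bounds)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def interval_relu(l, u):
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Hypothetical toy network: 2 inputs -> 3 hidden units (ReLU) -> 1 output.
W1 = np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.3]])
b1 = np.array([0.0, -0.1, 0.2])
W2 = np.array([[0.7, -0.5, 0.2]])
b2 = np.array([0.1])

# Safety property to check: for every input in [-1, 1]^2, the output stays below 1.0.
l, u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
l, u = interval_relu(*interval_affine(l, u, W1, b1))
l, u = interval_affine(l, u, W2, b2)
print(f"output range: [{l[0]:.3f}, {u[0]:.3f}]")
print("property 'output < 1.0' verified:", bool(u[0] < 1.0))
```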
Towards Guaranteed Safe AI
Orpheus Lummis
Presentation of core ideas from "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems", followed by Q&A and open discussion.