AI Safety Montréal

1600+ members · aisafetymontreal.org

A local hub serving Montréal's AI safety, ethics, and governance community. We organize meetups, coworking sessions, focused workshops, mentorship, and collaborations.


About

Montréal is home to one of the world's largest AI research ecosystems, anchored by Mila and surrounded by leading labs and startups. AI Safety Montréal builds the local community of researchers, practitioners, and advocates working to ensure AI development benefits humanity.

We co-facilitate Mila's AI safety reading group (biweekly sessions with 10 to 20 researchers) and serve members across AI safety, ethics, and governance.

What we do

Meetups: Regular community gatherings for AI safety researchers and practitioners in Montréal.
Coworking sessions: Focused work sessions for those working on AI safety projects.
Reading group: Biweekly sessions at Mila discussing recent AI safety research with 10 to 20 researchers.
Workshops: Focused sessions on specific topics, from technical alignment to AI governance.
Mentorship: One-on-one guidance for those looking to enter or advance in the field of AI safety.

Past events

2025

Dec. 16

Can AI systems be conscious? How could we know? And why does it matter?

Joaquim Streicher (Ph.D. candidate in Neuroscience; co-founder of MONIC)

Presentation on the debate around AI consciousness (current vs. future models), how consciousness might be assessed, and why avoiding both false negatives and false positives matters ethically; includes an introduction to MONIC. Recommended readings: Bayne et al. (2024), Butlin et al. (2023), Chalmers (2023), Colombatto & Fleming (2024), Martin, Streicher, O'Dea (2025).

Dec. 2

Veracity in the Age of Persuasive AI

Taylor Lynn Curtis (Mila)

Talk on the tension between AI persuasion and ethical deployment; introduces "Veracity," a tool using AI to detect/mitigate misinformation and support data quality/user protection; closes with governance insights.

Nov. 27

Tipping Points & Early Warnings: Complex Systems Theory on Catastrophic Transitions

Discussion of Scheffer et al. (Nature, 2009) on generic early-warning signals near tipping points (e.g., "critical slowing down"), and implications for AI governance.
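
As a rough illustration of the "critical slowing down" idea (a sketch of our own, not code from the paper), the snippet below computes the two generic indicators Scheffer et al. highlight, lag-1 autocorrelation and variance, in a sliding window over a synthetic time series whose restoring force slowly weakens; all parameter values are invented for illustration.

    import numpy as np

    def early_warning_indicators(series, window):
        # Sliding-window lag-1 autocorrelation and variance: rising trends in
        # both are the generic early-warning signals ("critical slowing down")
        # described by Scheffer et al. (2009).
        autocorr, variance = [], []
        for end in range(window, len(series) + 1):
            w = series[end - window:end]
            w = w - w.mean()  # remove the window mean before measuring fluctuations
            autocorr.append(np.corrcoef(w[:-1], w[1:])[0, 1])
            variance.append(w.var())
        return np.array(autocorr), np.array(variance)

    # Purely synthetic example: an AR(1)-like process whose restoring force
    # slowly decays toward zero, mimicking an approach to a tipping point.
    rng = np.random.default_rng(0)
    n = 2000
    x = np.zeros(n)
    for t in range(1, n):
        recovery = 1.0 - 0.9 * t / n  # restoring force weakens over time
        x[t] = x[t - 1] - recovery * x[t - 1] + rng.normal(scale=0.1)

    ac, var = early_warning_indicators(x, window=200)
    print(f"lag-1 autocorrelation: early {ac[:50].mean():.2f} -> late {ac[-50:].mean():.2f}")
    print(f"variance:              early {var[:50].mean():.4f} -> late {var[-50:].mean():.4f}")

Both indicators rise as the restoring force weakens, which is the signature the discussion connected to monitoring for abrupt transitions.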

Nov. 25

Pessimists Archive

Emma Kondrup

Activity/discussion using pessimistsarchive.org to compare historical "new technology panic" headlines (cars/radio/TV) with modern AI narratives; explores when "AI exceptionalism" (or "existentialism") is justified.

Nov. 22–23

Defensive Acceleration Hackathon

Hackathon focused on "defensive acceleration" (def/acc): building tech to strengthen defenses against major threats (pandemics, cybercrime, and AI risk). Prize pool: $20,000 USD. Co-organized with Apart Research.

Nov. 20

Neuronpedia 101

Discussion + demo introducing Neuronpedia concepts (models, sparse autoencoders, features/lists, feature pages), running small experiments (search, activation tests), and ending with ways to contribute.
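
For readers new to the underlying concepts, here is a minimal sketch (our own illustration, not Neuronpedia's code or API) of how a sparse autoencoder turns a single model activation into sparse, inspectable feature activations, the quantity an "activation test" looks at; the dimensions and weights are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_features = 16, 64  # hypothetical sizes; real SAEs are far larger

    # Random stand-ins for trained SAE weights (a real SAE would load trained ones).
    W_enc = rng.normal(size=(d_features, d_model)) / np.sqrt(d_model)
    b_enc = np.zeros(d_features)
    W_dec = rng.normal(size=(d_model, d_features)) / np.sqrt(d_features)
    b_dec = np.zeros(d_model)

    def sae_features(activation):
        # A common SAE encoder form: ReLU(W_enc @ (x - b_dec) + b_enc) gives a
        # sparse vector of feature activations, one entry per learned feature.
        return np.maximum(0.0, W_enc @ (activation - b_dec) + b_enc)

    x = rng.normal(size=d_model)  # stand-in for one model activation at one token
    f = sae_features(x)
    top = np.argsort(f)[::-1][:5]
    print("top features:", top, "activations:", np.round(f[top], 3))

    # Reconstruction from the sparse code, as used to check SAE faithfulness.
    x_hat = W_dec @ f + b_dec
    print("reconstruction error:", np.linalg.norm(x - x_hat))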

Nov. 18

Co-design a National Citizens' Assembly on Superintelligence

Short workshop to co-design a National Citizens' Assembly on Superintelligence for Canada; intended outputs: a Concept Note, a Consortium Intent Memo, and an invite list.

Nov. 13

Canada's 2025 Budget vs AI risk

Discussion of AI-related parts of Canada's 2025 federal budget and how they map onto AI risk reduction / threat models (power concentration, epistemics, bio, autonomy, misuse, systemic risk, etc.).

Nov. 11

If Anyone Reads It, Everyone's Welcome

Small gathering/reading-group discussion of "If Anyone Builds It, Everyone Dies," using author-suggested discussion questions. Co-organized with PauseAI Montréal.

Nov. 4

International AI Safety Report – First Key Update

Walkthrough/discussion of the International AI Safety Report "First Key Update: Capabilities and Risk Implications" (dated 2025-10-14), covering recent capability gains, longer-horizon agents, and implications for bio/cyber risks, monitoring/controllability, and labor-market impacts.

Oct. 30

Canada's AI Strategy Survey Jam

Hands-on group session to complete the Government of Canada's consultation survey for the next national AI strategy; includes a short briefing, an hour of survey-filling, and a wrap-up.

Oct. 28

If Anyone Builds It, Everyone Dies

Launch/discussion event for "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares): primer on claims, then discussion + audience Q&A on technical/policy/institutional risk-reduction moves.

Oct. 23

A Definition of AGI

Walkthrough of a proposal operationalizing AGI as matching the cognitive versatility and proficiency of a well-educated adult, grounded in the Cattell-Horn-Carroll (CHC) model of cognitive abilities; emphasizes concrete tests over a single benchmark.

Oct. 21

Introducing PauseAI Montréal

Nik Lacombe

Introduction + discussion of PauseAI and its Montréal group; focuses on mitigating risks by convincing governments to pause development of superhuman AI.

Oct. 16

Introducing aisafety.info

Olivier Coutu

Overview of aisafety.info: intro to existential AI risk, large FAQ, "Stampy" chatbot, and an alignment resources dataset; includes Q&A and requests for improvement suggestions/help.

Oct. 14

Global Call for AI Red Lines

Discussion of the Global Call for AI Red Lines and what "do-not-cross" limits could look like in practice (prohibitions, treaty precedents, and Canadian roles).

Oct. 7

Social Media Safety and the Unplug project

Evan Lombardi

Talk on the impacts of social media recommendation algorithms on mental health; surveys online manipulation/dark patterns, scams/deepfakes, extremist/explicit content, and mis/dis/malinformation; closes with an overview of the Unplug Project.

Oct. 2

Verifying a toy neural network

Samuel Gélineau

Demo/project talk showing how to verify that a neural network satisfies a safety property (beyond tested inputs) by adapting range-analysis ideas to network weights.
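
As a hedged sketch of the range-analysis idea (our own, not the speaker's actual implementation), the snippet below propagates an input interval through a tiny two-layer ReLU network with interval arithmetic, yielding output bounds that hold for every input in the range rather than only for tested points; the weights and the safety property are invented for illustration.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        # Bound W @ x + b for every x with lo <= x <= hi (elementwise),
        # by splitting W into its positive and negative parts.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def interval_relu(lo, hi):
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Toy 2-layer ReLU network with invented weights.
    W1 = np.array([[1.0, -2.0], [0.5, 1.5]]); b1 = np.array([0.1, -0.3])
    W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.0])

    # Example safety property: output stays below 5 for all inputs in [-1, 1]^2.
    lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
    lo, hi = interval_affine(lo, hi, W2, b2)
    print(f"output bounds: [{lo[0]:.2f}, {hi[0]:.2f}]")
    print("property 'output < 5' holds for the whole input range:", bool(hi[0] < 5.0))

Because the bounds are computed over the entire input box, a "True" result covers inputs that were never explicitly tested, which is the point the talk made about going beyond test cases.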

Sept. 16

Towards Guaranteed Safe AI

Orpheus Lummis

Presentation of core ideas from "Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems", followed by Q&A and open discussion.

Join the community

Visit aisafetymontreal.org ↗