
AI Safety Unconference
The AI Safety Unconference (AISU) brings together researchers from leading organizations to share work, launch collaborations, and advance the field. Our unconference format puts participants in the driver’s seat—sessions emerge from the community rather than being pre-planned.
2026 edition
A hybrid AISU is planned for 2026. Sign up to be notified.
Visit website ↗
Format
- Lightning talks — Short presentations on recent work or ideas
- Moderated discussions — Structured conversations on key topics
- 1:1 sessions — Facilitated networking and collaboration
- Breakout groups — Self-organized deep dives
2024 VAISU retrospective
We co-organized the Virtual AI Safety Unconference 2024 (VAISU), held May 23–26, 2024, in partnership with AI Safety Camp.
What is VAISU?
As an unconference, VAISU is a collaborative and inclusive online event designed to feature the sessions, discussions, and contributions of the community.
The purpose is to reduce AI risk by facilitating progress in AI safety R&D through a high-quality research event. The event enables information sharing, collaboration through connection and trust-building, active research work, skill-building, and more.
Sessions and talks relate to the question: “How do we ensure the safety of AI systems, in the short and long term?” This includes topics such as alignment, corrigibility, interpretability, governance, and strategy.
The event runs over a week, with four days of sessions across multiple tracks. Sessions are scheduled according to participant preferences via a custom algorithm, and a chat platform and a networking system are provided.
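The scheduling algorithm itself has not been published; as a rough illustration only, a greedy assignment that maximizes summed participant preference might look like the sketch below (all names, slots, and scores are hypothetical):

```python
# Illustrative sketch of preference-based session scheduling; the actual
# VAISU algorithm is not public. Greedily assigns each session to the time
# slot that maximizes the summed preference of participants available at
# that slot (single-track for brevity).

sessions = ["interp", "governance", "agent-foundations"]
slots = ["thu-16utc", "fri-10utc", "fri-18utc"]

# preference[person][session] on a 0-5 scale, e.g. from a registration form.
preference = {
    "alice": {"interp": 5, "governance": 1, "agent-foundations": 3},
    "bob":   {"interp": 2, "governance": 4, "agent-foundations": 0},
    "carol": {"interp": 4, "governance": 3, "agent-foundations": 5},
}
# availability[person] is the set of slots that person can attend.
availability = {
    "alice": {"thu-16utc", "fri-10utc"},
    "bob":   {"fri-10utc", "fri-18utc"},
    "carol": {"thu-16utc", "fri-18utc"},
}

def slot_value(session, slot):
    """Summed preference of everyone who could attend this placement."""
    return sum(
        prefs[session]
        for person, prefs in preference.items()
        if slot in availability[person]
    )

schedule = {}
# Most-demanded sessions pick their slots first.
for session in sorted(sessions, key=lambda s: -sum(p[s] for p in preference.values())):
    free = [t for t in slots if t not in schedule.values()]
    schedule[session] = max(free, key=lambda t: slot_value(session, t))

print(schedule)
```

A production version would also need to handle multiple parallel tracks, host availability, and fairness across time zones.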
Recordings
23 videos of talks and sessions were recorded and published. Watch them on YouTube.
Prize winners
We offered a 2,000 USD-equivalent prize to the top 10 sessions, as voted by participants, based on the criteria of “impact on research, insight, distillation, and engagement”.
The winners are:
- Provably Safe AI - Steve Omohundro
- Making sure LLMs cannot do hidden reasoning - Filip Sondej
- We Are Not Prepared - Joe Rogero
- Conscious AI & Public Perceptions: Choose-Your-Own-Adventure! - Jay Luong & Nicoleta Kyosovska
- ProgressAlign: Towards Moral Progress Algorithms Implementable in the Next GPT - Tianyi (Alex) Qiu
- Fundamental Controllability Limits: limits to AGI controlling effects of own interacting components - Remmelt Ellen
- AI alignment: Should it be just about humans? - Tse Yip Fai
- AI Ethics in Practice: Real World Challenges - Sarah D’Andrea
- Experiments in Local Community building - Gergő Gáspár
- Artificial Wisdom for Alignment - Madhusudhan Pathak
Highlights and notes
- The event featured 40 sessions and 400 registered participants, of whom approximately 140-150 joined the event Discord server and approximately 60-100 engaged in sessions. For comparison, the 2023 edition had 41 sessions, 14 volunteers, 2 staff, and ~280 registered participants.
- The schedule was optimized for participant preferences and availability using a custom scheduling/allocation algorithm.
- 19 participants responded to the post-event survey, reporting an average attendance of 4.5 sessions. Respondents benefited in particular from new connections, the opportunity to present or run a session, and broad learning. Relative to the counterfactual, ~63% rated the event “somewhat of a net positive”, 21% “Significantly more valuable than the counterfactual”, ~11% “A game changer for my career or otherwise”, and ~5% “A waste of time”.
- Regarding our goal of “building the network” through connection, collaboration, and trust: the custom matchmaking/recommender system we created was weak due to a lack of foresight (we did not collect enough participant data in the registration form) and because we did not allocate enough time to it; a sketch of the general approach appears after this list. We estimate 30-100 new we’ll-stay-in-touch connections, based on an average of 1.8 per respondent in the post-event form.
- We perceive the event as net positive. There is still significant room for improvement in the execution of this event concept, especially by clarifying the event management methodology, inviting more senior researchers, and improving the matchmaking solution. We consider the continuation of the AI safety unconference series to be an achievement in itself.
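For a sense of what such a matchmaking recommender can look like, here is a minimal sketch that ranks 1:1 matches by cosine similarity over self-reported interest vectors; the names, topics, and scores are hypothetical, and this is not the system we actually used:

```python
# Hypothetical matchmaking sketch: rank 1:1 matches by cosine similarity
# over self-reported interest vectors. All data here is illustrative.

import math

# Interests scored 0-5 per topic, e.g. from a registration form;
# TOPICS documents the ordering of each profile vector.
TOPICS = ["alignment", "interpretability", "governance", "strategy"]
profiles = {
    "dana": [5, 4, 1, 2],
    "eli":  [4, 5, 0, 1],
    "fei":  [1, 0, 5, 4],
}

def cosine(u, v):
    """Cosine similarity between two interest vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(person, k=2):
    """Return the k most similar other participants for 1:1 matchmaking."""
    scores = [
        (other, cosine(profiles[person], vec))
        for other, vec in profiles.items()
        if other != person
    ]
    return sorted(scores, key=lambda pair: -pair[1])[:k]

print(recommend("dana"))  # "eli" ranks above "fei" for dana
```

Richer registration data (availability, seniority, goals) would allow better matches, which is exactly the foresight gap noted above.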
Onward to the next unconference
We thank all session hosts, participants, and supporters; Linda Linsefors and AI Safety Camp for starting the series and funding some team stipends; the organizer team; and the LTFF for a grant that enabled more organizer time and funded the VAISU Prize!
If you participated in the event and benefited from it, consider donating to sustain the unconference series.
We are planning an improved upcoming unconference event; stay tuned.