Speculative Friction in AI design and governance
The Speculative Friction Educational Workshop Series brings together researchers and practitioners to examine the role of friction in the design and governance of artificial intelligence systems. Through lectures, collaborative workshops, and foresight activities, participants explore how moments of uncertainty or disagreement in socio-technical systems can reveal important insights about AI accountability, institutional design, and human–AI interaction. A central question we ask is: how do we remove unproductive frictions, and where could intentional frictions help people think critically, make better decisions, and maintain agency in increasingly automated environments?
Upcoming events
>> April 9th, 2026, lecture at the AI, Algorithms, and Society class at Arizona State University
>> April 11th, 2026, in-person at WikiCredCon - Wikimedians Strengthening Knowledge and News Credibility on the Internet
Designing Productive Friction to Safeguard Credibility
Friction offers a useful lens for assessing how AI tools and other emerging technologies are reshaping Wikipedia editing by asking not only what these systems make easier, but also where intentional pauses, checks, and safeguards are needed to protect credibility, safety, and human judgment. Grounded in the idea that productive friction could create space for critical thinking, this approach can inform best practices for AI-assisted fact-checking by encouraging verification, appropriate reliance, transparency about system limits, and stronger editorial workflows around explanation and source checking. It also helps identify emerging challenges to platform credibility and safety, including overreliance on AI outputs, manipulative interface design, and recommender or notification systems that can distort attention, trust, and participation patterns. In that sense, friction is not about slowing Wikipedia down for its own sake, but about designing the right forms of procedural, social, and technical friction so that high-risk uses of AI become harder, while trustworthy collaboration, learning, and collective accountability become easier.
>> May 1st, 2026, online workshop at Data & Society Research Institute [invite only]
Past events
>> April 30th, 2025, lecture at the Computer Science and Engineering Department at the University of California, Santa Cruz
>> July 29th, 2024, Decentralized Web Camp
>> March 22nd, 2023 - Speculative Friction Community Call
>> January 19th, 2023 - An online panel discussion and workshop. Read about the outcomes of the event here - Building Positive Futures for Generative AI Adoption in Healthcare
What kinds of design fiction and constructive friction could contribute towards improved transparency, evaluation, and human agency in the context of generative AI systems?
There is a growing awareness of different kinds of frictions in the context of building, evaluating, and regulating generative AI models and their downstream impacts. Resisting the status quo around friction in AI innovation, we intend to open and join new discursive spaces grounded in a "speculative everything" approach to the blurry boundaries between fact, fiction, and friction in AI. Learn more in this blog post.
Join us for a panel discussion and interactive workshop that will explore:
- Approaches to disrupting manipulative design patterns
- Speculative design as a method to interrogate social norms and values
- Human-centered and values-centered generative AI evaluation methods
Feel free to reach out to me if you or your organization is interested in learning more or getting involved in this broader initiative - b.rakova@gmail.com
Invited speakers and facilitators:
Gemma Petrie - Principal Researcher at Mozilla focused on the intersection of people, competition, and policy. Her research explores the dynamics of the internet ecosystem in order to advocate for technology that puts people first. Recent projects have explored how design frictions like browser choice screens can improve transparency, give people meaningful agency, and help them make choices that better align with their preferences.
Sophia Bazile - Futures Literacy and Foresight practitioner interested in capacity-building for transformative inner, organizational, and wider sociocultural change. Her approaches are multi-/trans-/interdisciplinary, rooted in decolonial praxis and a commitment to collective un/learning through perpetual experimentation. She has designed, curated, and convened inquiries around how technologies facilitate reciprocal relationships between planetary beings - and the kinds of imagined and imposed relationships they amplify, disrupt, or inhibit. AI Cosmologies invites us to expand our ways of being/doing, think-feeling, sensing, and relating to, with, and through emerging technologies, “data”, and “artificial” intelligences. Broadly, the notion of AI ethics is fundamentally relational: responsive, emergent, and requiring a multitude of wisdoms, id/entities, histories, and existing and not-yet-imagined connections to being.
Richmond Y. Wong - an Assistant Professor of Digital Media at Georgia Tech's School of Literature, Media, and Communication. He directs the Creating Ethics Infrastructures Lab, where his research seeks to create social, cultural, and organizational environments that can support technologists and designers in ethical decision-making. This includes creating design approaches that propose alternate ways to consider human values, supporting worker and community-led actions, improving organizational ethics review practices, and understanding the role of law and policy. Recent projects include studying technology workers’ organizational practices related to ethics, and creating design activities to help people talk through issues related to privacy and surveillance.
Tyler (T) Munyua - an Artist, Creative World Builder, and actor in the AI and Tech space. They are also the Wrangler Program Assistant on the Mozilla Festival Team. T activates conversations on the intersections of art, law and tech, and is working to amplify and spotlight the work of African artists and creatives using their craft as an instigator for decolonial work.
Bogdana (Bobi) Rakova - Senior Trustworthy AI Fellow at Mozilla Foundation, where she works on generative AI socio-technical evaluations and participatory mechanism design grounded in a human-centered approach, equity, access, and consent. Previously, Bogdana was a research manager on a Responsible AI team in industry, leading algorithmic auditing projects and working closely with ML Ops, ML development, and legal teams on strategies for human-centered model evaluation, documentation, AI literacy, and transparency. She’s also held positions with IEEE’s Global Ethics Standards initiative, Partnership on AI, Samsung’s Innovation Think Tank team, and others. She holds a degree in computer science and also draws on the fields of science and technology studies, technology policy, organizational studies, systems thinking, and social and environmental justice.
* About the image above - it is inspired by the work of Ruha Benjamin contrasting artificial intelligence with collective wisdom. The image on the left shows a permanent wave machine from the 1920s (unknown source), while the image on the right is a retro-futurism experiment by the artist Ethiopia Ringaracka.
With the kind support of the Mozilla Foundation (2022-2024).