Speculative Friction in AI design and governance

Check out our project zine here.
Speculative Friction in AI design and governance is a community exploring how (un)productive friction shapes our interactions with AI systems. Instead of assuming technology should always be seamless, the project asks what kinds of frictions help people think critically, make better decisions, and maintain agency in increasingly automated environments. Through essays, interviews, and experiments in speculative design, we examine the dynamic relationship between form, function, fiction, and friction in AI systems and the institutions that govern them.
Learn more about our recent work on Substack. On the blog, you can read interviews about AI design and governance, featuring people who are transforming AI-related frictions across legal, technical, product, and design spaces. You will find dialogues and perspectives from practitioners and researchers working on AI safety, policy, and accountability - exploring, for example, hype as governance, cautionary tales from regulators, ideas on collective bargaining through code, and friction-in-design best practices.
The site also hosts the Legal Fiction as Institutional Imagination series - creative and speculative contributions, including design fiction and artistic interventions. These stories and experiments imagine alternative institutional arrangements for governing AI, helping expand our collective imagination about what responsible AI systems and infrastructures could look like.
The importance of friction
The disability community sees friction as access-making - disabled people’s acts of non-compliance bring awareness to gaps and opportunities for improvement in technology products and services. Anthropologist Anna Tsing studies collaboration with friction at its heart - friction as the embodiment of interconnection across differences. Policy experts have proposed friction-in-design regulation that distinguishes frictions by type, effect, architectural design, purpose, intended impact, scope, and governance. The Friction Project at Stanford Graduate School of Business teaches leaders how to identify where to avert and repair bad organizational friction and where to maintain and inject good friction. These are only a few recent examples of how friction has made its way into the public discourse around technology.
From dark design patterns to design friction
AI systems are infrastructure. One metaphor for friction in AI is road signs or speed bumps on residential streets. No one advocates placing speed bumps on every street; they are deployed selectively on shared roads by communities committed to safety and non-discrimination. Similarly, we can think of design frictions in AI as points of conscious decision-making during users’ interaction with a technology. What if we could have safety-enabling frictions in the context of how we design, build, and regulate generative AI? Otherwise, we’re left with a “frictionless” experience, which more often than not has led to the proliferation of what researchers call dark design patterns: patterns that steer users towards specific, predefined choices. Instead, technology companies could use intentional design friction to signal to their users that they value consumer agency and choice. Researchers have proposed that such frictions can disrupt “mindless” automatic interactions such as infinite scrolling, prompting moments of reflection and more “mindful” behaviors. For example, recent Mozilla research demonstrates that interventions such as browser choice screens can improve competition, giving people meaningful agency, transparency, and feelings of control.
Speculative design as a method to interrogate social norms and values
Engaging in social dreaming and collective imaginaries allows us to step outside of the status quo, to suspend our disbelief, and imagine alternatives. Ultimately, it is a catalyst for change not in a distant future but in the present moment. Speculative design is a systemic inquiry through which designers envision, reason about, and offer for debate aspects of alternate futures. Design fiction is an approach often used in speculative design to engage people with “technological futures” and artifacts that make discussions and debates more tangible. Design fiction artifacts can be technical or not. They serve as props - not to predict the future, but to use design to open up possibilities that can be discussed, debated, and used to collectively define a preferable future for a given group of people. Design fictions have started to emerge in combination with other methodologies within the field of Value-Sensitive Design as a means of surfacing responsible AI concerns and broader downstream risks and social implications of technology. This opens up space for questions such as: How do we conceptualize unknown unknowns? Do we dismiss them altogether or invite a sense of humble curiosity and deep contextual bravery?
Human-centered and values-centered generative AI evaluation methods
Evaluation methods are a cutting-edge area of research in AI. There’s a limit to more general and abstract evaluations that ask questions such as: should a chatbot be allowed to give mental health advice, or to discriminate based on race or sexual orientation? Amplifying human choice and agency in generative AI requires builders to consider evaluation strategies that center their intended or unintended users in the particular context where the technology is deployed. Design friction offers one way to do that. For example, consider user agreements as a type of design friction to anticipate and repair harms of LLMs, or to solicit expert input on the use of multi-modal voice technology during health consultations.
Team / Contact
Bogdana (Bobbi) Rakova - b.rakova@gmail.com
Bogdana is a senior data scientist on the DLA Piper AI team with a background in computer science, ML engineering, and cross-disciplinary socio-technical research. She was previously a Senior Trustworthy AI Fellow at the Mozilla Foundation, where she worked on generative AI socio-technical evaluations and participatory mechanism design centered on equity, access, and consent. Bogdana has held positions as a data scientist on the Responsible AI team at Accenture, a research fellow at Partnership on AI, and a senior ML research engineer at the Think Tank Team innovation lab at Samsung Research, and has been a key contributor to IEEE standards.
With the kind support of the Mozilla Foundation (2022-2024).
