


A cartoon by Emily Bernstein
August 5, 2021

Speculative Friction Living Archive


A community of practice exploring what kinds of constructive frictions and design fictions could contribute to improved transparency, evaluation, and human agency in the context of generative AI systems and the data and labor pipelines they depend on.

A library of cognitive, organizational, technological, and design frictions that could contribute to more positive social outcomes.

Opening and joining new discursive spaces grounded in a speculative everything approach to the blurry boundaries between fact, fiction, and friction in AI.
 

The importance of friction

The disability community sees friction as access-making: disabled people's acts of non-compliance bring awareness to gaps and opportunities for improvement in technology products and services. Anthropologist Anna Tsing studies collaboration with friction at its heart, treating it as the embodiment of interconnection across differences. Policy experts have proposed friction-in-design regulation that analyzes frictions by type, effect, architectural design, purpose, intended impact, scope, and governance. The Friction Project at Stanford Graduate School of Business teaches leaders how to identify where to avert and repair bad organizational friction and where to maintain and inject good friction. These are only a few recent examples of how friction has made its way into public discourse around technology.

From dark design patterns to design friction

AI systems are infrastructure. One metaphor for friction in AI is road signs or speed bumps on residential streets. No one advocates placing speed bumps on every street; they are deployed selectively on shared roads by communities committed to safety and non-discrimination. Similarly, we can think of design frictions in AI as points of conscious decision-making during a user's interaction with a technology. What if we could have safety-enabling frictions in the way we design, build, and regulate generative AI? Otherwise, we're left with a "frictionless" experience which, more often than not, has led to the proliferation of what researchers have called dark design patterns: patterns that steer users toward specific predefined choices. Instead, technology companies could use intentional design friction to signal to their users that they value consumer agency and choice. Researchers have proposed that such frictions can disrupt "mindless" automatic interactions such as infinite scrolling, prompting moments of reflection and more "mindful" behaviors. For example, recent Mozilla research demonstrates that interventions such as browser choice screens can improve competition, giving people meaningful agency, transparency, and feelings of control.
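To make the speed-bump metaphor concrete, here is a minimal, purely illustrative sketch of a design friction that interrupts infinite scrolling: after a set number of uninterrupted feed items, the interface pauses and asks for an explicit decision before serving more. The class name, threshold, and prompt wording are hypothetical, not taken from any real product or from the research cited above.

```python
class FrictionFeed:
    """Illustrative feed that inserts a 'speed bump' every N items."""

    def __init__(self, items, bump_every=5):
        self.items = items
        self.bump_every = bump_every  # items served before a reflection pause
        self.served = 0

    def next_item(self, user_confirmed=False):
        # Insert a conscious decision point instead of endless autoloading:
        # once the threshold is hit, return a prompt rather than content,
        # until the user explicitly confirms they want to continue.
        if self.served and self.served % self.bump_every == 0 and not user_confirmed:
            return f"PAUSE: You've viewed {self.served} items. Keep scrolling?"
        item = self.items[self.served % len(self.items)]
        self.served += 1
        return item
```

The design choice here is that the friction is opt-through, not blocking: the user retains agency and can continue, but only after a deliberate choice rather than a reflexive swipe.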

Speculative design as a method to interrogate social norms and values

Engaging in social dreaming and collective imaginaries allows us to step outside of the status quo, to suspend our disbelief and imagine alternatives. Ultimately, it is a catalyst for change not in a distant future but in the present moment. Speculative design is a systemic inquiry through which designers envision, reason about, and offer for debate aspects of alternate futures. Design fiction is an approach often used in speculative design to engage people with "technological futures" and artifacts that make discussions and debates more tangible. Design fiction artifacts can be technical or not. They serve as props: not attempts to predict the future, but uses of design to open up possibilities that can be discussed, debated, and used to collectively define a preferable future for a given group of people. Design fictions have started to emerge in combination with other methodologies within the field of Value-Sensitive Design as a means of surfacing responsible AI concerns and broader downstream risks and social implications of technology. This opens up space for questions such as: How do we conceptualize unknown unknowns? Do we dismiss them altogether or invite a sense of humble curiosity and deep contextual bravery?

Human-centered and values-centered generative AI evaluation methods

Evaluation methods are a cutting-edge area of research in AI. There's a limit to general and abstract evaluations that ask questions such as: should a chatbot be allowed to give mental health advice, or to discriminate on the basis of race or sexual orientation? Amplifying human choice and agency in generative AI requires builders to consider evaluation strategies that center their intended, or unintended, users in the particular context where the technology is deployed. Design friction offers one way to do that. For example, consider user agreements as a type of design friction to anticipate and repair harms of LLMs, or to solicit expert input on the use of multi-modal voice technology during health consultations.


Contact


Bogdana Rakova
b.rakova@gmail.com
Bogdana is a Senior Trustworthy AI Fellow at Mozilla Foundation, where she works on generative AI socio-technical evaluations and participatory mechanism design that centers human needs, equity, access, and consent. Previously, Bogdana was a research manager on a Responsible AI team in industry, leading algorithmic auditing projects and working closely with ML Ops, ML development, and legal teams on strategies for human-centered model evaluation, documentation, AI literacy, and transparency. She has also held positions with IEEE's Global Ethics Standards initiative, Partnership on AI, Samsung's Innovation Think Tank team, and others. She holds a degree in computer science while also learning from the fields of science and technology studies, technology policy, organizational studies, systems thinking, and social and environmental justice.

With the kind support of Mozilla Foundation.