
Can We Shape AI with Deliberative Societies?

A comprehensive report from our workshop exploring how deliberative methods can foster inclusive AI development and policy, bringing together researchers, policymakers, and practitioners.

Published Oct 12, 2024


We recently hosted a workshop, "Can We Shape AI with Deliberative Societies?", which brought together AI researchers, social scientists, policymakers, and practitioners in a collaborative environment grounded in ethnographic and deliberative approaches. The discussions examined how deliberative methods can foster inclusive AI development and policy, under a central theme: can communities, empowered to express their values and perspectives, meaningfully shape AI systems? The workshop was part of London Data Week, supported by the Mayor of London.

Event Highlights

The workshop, organized by the Equiano Institute in partnership with Omer Bilgin, Maximilian Kroner Dale, and Shannon Hong at the University of Oxford, together with Joël Christoph and Patti Garcia, was a vibrant hub of ideas and collaboration. Our diverse audience, ranging from AI researchers to members of language communities, created a unique atmosphere of shared learning and discovery. Our goal was to address core issues in AI benchmarking and to open up discussion on developing plural and culturally normative benchmarks.

Talks

The morning session featured presentations from Maximilian Kroner Dale, Luke Thorburn, Yasushi Sakai, and Omer Bilgin. Their work is informing how we engage with technology and guiding how we can collaboratively shape deliberative futures.

The afternoon session featured Trupti Patel, Zarinah Agnew, and Malak Sadek. This was a great opportunity to learn about the latest advances in AI and deliberation across government, civil society, and academia. Malak argued for why we need more value-sensitive AI, Zarinah discussed public views, and Trupti explored public voices towards norm-preserving AI.

Panel Session

We explored Public Values, Views, and Voices to understand how they inform AI embedded in diverse societal contexts. Focusing on deliberative and agentic societies, we convened a panel on ethnography and deliberation in AI featuring Reema Patel (ESRC Digital Good Network, Elgon Social Research), Flynn Devine (Boundary Object Studio), and Celyn Bricker (Policy Lab UK).

The panel explored the application of ethnography to policymaking, with Celyn highlighting the use of ethnography, filmmaking, and metaverse technologies to address policy issues such as subsurface science. Reema discussed the role of ethnography in understanding and addressing exclusion in participatory settings, referencing Contact Theory, which outlines the conditions under which intergroup contact reduces prejudice: equal status among groups, shared common goals, cooperative effort without competition, and support from authorities or social norms. Flynn introduced the concept of Systemic Deliberative Design as a framework for inclusive policy development.

Key Deliberations

The breakout rooms explored various themes related to the role of AI in social systems. Participants emphasized the importance of agency, trust, and democracy, raising concerns about how AI may influence decision-making processes, impact human autonomy, and reshape existing power structures. Discussions also highlighted the need for inclusive governance, transparency, and accountability in the design and deployment of AI technologies.

Group 1 – Trust & Transparency "Trust isn't a feature you bolt on at launch; it's the cumulative sum of every decision the public never sees." Participants from historically marginalized communities were blunt: past tech rollouts eroded their default assumption of good intent. They want explainability by design, public redress channels, and third-party audits baked into any AI stack.

Group 2 – Agency & Value Alignment "AI should expand my choices, not replace them." This table swapped horror stories of opaque recommendation engines with moments where AI felt like a creative collaborator. Their consensus: human-in-the-loop isn't enough; we need human-in-the-mandate—systems that continuously surface conflicting values and let users re-negotiate them.

Group 3 – Democratic Governance & Accountability "If AI shapes my kids' credit score, I deserve a vote in the rules before it ships." The group sketched out co-governance models: public data trusts, citizen juries, and open-source sandboxes where communities can fork an algorithm and stress-test it against their own realities.

Every group circled the same paradox:

"We want global standards and local control. We want rapid innovation and time for democratic deliberation."

The Problem with Current Approaches

The research identified three critical gaps in current approaches:

The Representation Gap: Who gets to participate in AI governance discussions? Workshop findings showed that even well-intentioned efforts often reach "the 10% of technically literate people" while excluding those most affected by AI systems—communities with limited digital access, non-English speakers, and those with historical distrust of institutions.

The Understanding Gap: How can communities meaningfully engage with technologies they don't understand? Participants noted the tension between wanting bottom-up input and the reality that "people just don't understand enough about AI technologies to begin with."

The Implementation Gap: Even when community input is gathered, how does it actually shape AI systems? Too often, the answer is: it doesn't.

Public Values, Views and Voice

The framework is built around what researchers call the "Three Vs":

Values: Navigating trade-offs in pluralistic societies

Views: Capturing diverse perspectives on AI systems and regulations

Voice: Amplifying marginalized communities in AI governance

What This Looks Like in Practice

Workshop participants described feeling genuinely heard, not just consulted. As one noted: "The people in the room should be in control of what happens to the data product that emerges out of the room." The aim is to fundamentally restructure how AI systems learn and adapt, drawing on ongoing community input rather than static training data.

Challenges and Limitations

Scaling deliberative processes to large populations remains difficult. Ethnographic data can be subjective. And there is always the risk that sophisticated-sounding processes become new forms of manipulation. Perhaps most critically, the Deliberative Society approach requires significant investment in community capacity-building. As one participant noted: "You have to educate people on lower level first... if you want to achieve results that are going to be more fair."

The Path Forward

The Deliberative Society framework is still in development, with materials open-sourced for collaborative development. The researchers outline several critical next steps:

  • Developing scalable tools for community deliberation
  • Creating metrics to assess inclusivity and democratic legitimacy
  • Building technical infrastructure that can actually implement community decisions
  • Training facilitators who can bridge technical and community knowledge

But perhaps the most important step is conceptually moving from "AI + Democracy" towards democratic systems that learn and remember through collective deliberation.

Why This Matters Now

AI systems could either support democratic participation or deepen existing asymmetries in social interaction. A key question emerges: can communities meaningfully contribute to monitoring and shaping AI before harm occurs? Early discussions are promising, but realizing this potential will require a fundamental rethinking of current approaches to AI governance.

Gratitude

Special thanks to Luke Thorburn, Jennifer Ding, and Dr Catherine Healy at King's College London for graciously hosting our workshop. We also appreciate the support of the Mayor of London, Sadiq Khan, whose efforts to foster innovation and inclusivity in technology have been invaluable. Thank you all for your contributions to the success of London Data Week, including the core organising team of Jonas Kgomo, Joël Christoph, and Omer Bilgin. We thank the audience for their great support of this participatory workshop, and Colleen McKenzie for helping us run Talk To The City at the AI Objectives Institute.
