
Democratic institutions face an unprecedented challenge: how to meaningfully process the vast volumes of citizen input generated through consultations, town halls, and online forums. Traditional methods of analysing public feedback—manual reading and categorisation—become impractical when thousands or tens of thousands of comments arrive on a single policy proposal. This bottleneck often forces governments to either ignore much of the input or reduce it to crude sentiment tallies that strip away the nuance and reasoning behind citizens' positions.

Argument mining and deliberation summarisation address this challenge through natural language processing systems designed to automatically extract structured arguments from unstructured text. These tools identify claims, the supporting or opposing reasons offered for those claims, and the relationships between different positions. By mapping the logical structure of public discourse, they transform sprawling comment threads and deliberation transcripts into navigable argument graphs that reveal patterns of agreement, disagreement, and the emergence of consensus on specific points.
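The argument graph described above can be made concrete with a minimal data structure. The sketch below is illustrative, not drawn from any particular argument-mining system: it assumes extracted claims and premises are stored as nodes, with typed "support" and "attack" edges between them, and shows how such a graph makes simple analytical questions (how much backing does a given claim have?) directly answerable.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentNode:
    """A claim or premise extracted from a citizen comment."""
    node_id: str
    text: str
    kind: str  # "claim" or "premise"

@dataclass
class ArgumentGraph:
    """Nodes plus typed edges: premises either support or attack claims."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source_id, target_id, relation)

    def add_node(self, node: ArgumentNode) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, source_id: str, target_id: str, relation: str) -> None:
        assert relation in ("support", "attack")
        self.edges.append((source_id, target_id, relation))

    def tally(self, claim_id: str) -> dict:
        """Count supporting vs. attacking premises attached to one claim."""
        support = sum(1 for _, t, r in self.edges if t == claim_id and r == "support")
        attack = sum(1 for _, t, r in self.edges if t == claim_id and r == "attack")
        return {"support": support, "attack": attack}

# Hypothetical extracted arguments from a development-plan consultation.
graph = ArgumentGraph()
graph.add_node(ArgumentNode("c1", "The plan should include more green space.", "claim"))
graph.add_node(ArgumentNode("p1", "Parks improve air quality in dense areas.", "premise"))
graph.add_node(ArgumentNode("p2", "Green space reduces land available for housing.", "premise"))
graph.add_edge("p1", "c1", "support")
graph.add_edge("p2", "c1", "attack")
print(graph.tally("c1"))  # {'support': 1, 'attack': 1}
```

Real systems must also infer the nodes and edges from free text, which is the hard part; the graph representation itself, however, is what turns sprawling threads into something navigable.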
The practical value of these systems lies in their ability to make large-scale democratic participation analytically tractable without sacrificing depth. When a city government seeks input on a new development plan, argument mining can cluster thousands of submissions by theme, identify which concerns appear most frequently, and—crucially—surface minority viewpoints that might otherwise be overlooked in aggregate summaries. Advanced implementations can detect when different commenters are making essentially the same argument in different words, preventing the illusion that a coordinated campaign represents broad consensus. For policymakers, this means receiving structured briefings that preserve the diversity of public opinion rather than flattening it into binary for-or-against tallies. The technology also supports more informed facilitation of ongoing deliberations by highlighting areas where participants have reached agreement, where fundamental disagreements persist, and which questions require further discussion.
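Detecting when different commenters make essentially the same argument in different words is typically done by measuring textual similarity and grouping near-duplicates. The sketch below is a deliberately simple stand-in for the embedding-based methods production systems use: it assumes plain token-overlap (Jaccard similarity) with a hypothetical threshold of 0.5 is enough to link restatements, and groups linked comments with a small union-find.

```python
import re
from itertools import combinations

def tokenize(text: str) -> set:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Overlap of two token sets; 1.0 means identical vocabularies."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def group_similar_comments(comments: list, threshold: float = 0.5) -> list:
    """Single-link grouping: comments whose token overlap meets the
    threshold are treated as restatements of the same argument."""
    parent = list(range(len(comments)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    token_sets = [tokenize(c) for c in comments]
    for i, j in combinations(range(len(comments)), 2):
        if jaccard(token_sets[i], token_sets[j]) >= threshold:
            parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(len(comments)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

comments = [
    "The new towers will worsen traffic on Main Street.",
    "Traffic on Main Street will get worse with the new towers.",
    "We need more affordable housing units in this plan.",
]
print(group_similar_comments(comments))  # [[0, 1], [2]]
```

Grouped this way, a thousand copies of one talking point count as one argument with a thousand endorsements rather than a thousand distinct concerns, which is exactly the distinction between coordinated campaigns and broad consensus that the text describes.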
Several governments and civic organisations have begun piloting these tools in participatory budgeting exercises, constitutional consultations, and urban planning processes. Research suggests that when deployed with appropriate governance safeguards—including transparency about how arguments are categorised and opportunities for citizens to review and correct automated interpretations—these systems can enhance rather than diminish democratic legitimacy. However, the technology raises important questions about whose voices are most easily "mined." Comments that follow conventional argumentative structures may be more readily captured than those expressing concerns through narrative, emotion, or culturally specific rhetorical forms. As civic institutions increasingly adopt these tools to manage the scale of digital participation, the challenge will be ensuring that efficiency gains do not come at the cost of excluding perspectives that are less algorithmically legible. The trajectory of this technology will likely involve not just technical refinement but ongoing negotiation about what counts as a valid contribution to public deliberation and who gets to decide.
Maintainers of Polis, an open-source tool used by governments (including Taiwan and Bowling Green, Kentucky) to visualize consensus in large-scale discussions using machine learning.
A research group at the University of Dundee dedicated to argument mining, computational argumentation, and the Argument Web.
An AI platform that allows a single moderator to converse with a large group in real time, using algorithms to find representative consensus.