The Missing Link Between Evidence and Democracy
The evaluation field has reached a curious juncture. We have sophisticated frameworks, rigorous methodologies, and decades of accumulated knowledge about what works. Yet the promise of evaluation - to inform better decisions, to drive learning, to improve outcomes - remains largely unfulfilled in policy circles. This paradox reveals something fundamental: we have been approaching evaluation as a technical exercise when it is, at its core, a political one.
The persistent gap between evidence and action
For years, evaluation has been conducted as if it exists in a vacuum, separate from the messy realities of power, politics, and public discourse. We produce reports that governments commission but rarely implement. We generate evidence that organisations acknowledge but struggle to act upon. We create knowledge that citizens cannot access, let alone use to hold their leaders accountable.
The resistance to evaluation findings runs deeper than institutional inertia. Many organisations continue to view evaluation as a punitive tool rather than a learning opportunity. This defensive stance is understandable—admitting what doesn't work requires vulnerability that few institutions are prepared to embrace. But this resistance perpetuates a cycle where evaluation becomes performative rather than transformative.
Citizens excluded from their own evidence
Meanwhile, citizens—the very people whose lives are affected by the programmes and policies being evaluated—remain largely excluded from the conversation. Most evaluation reports are written in technical language for technical audiences, stored in digital repositories that require insider knowledge to navigate. When citizens do gain access to findings, they often lack the mechanisms to meaningfully engage with the evidence or influence the decisions that follow.
This exclusion is particularly troubling given our current information landscape. We live in an era where information travels at unprecedented speed, yet much of it lacks curation, verification, or credible sources. Research from MIT shows that fake news spreads up to 10 times faster than true reporting on social media, whilst 66% of Americans believe that 76% or more of news on social media is biased. In this context, the fundamental premise of evidence-based decision-making—that quality information is valuable and will naturally influence rational actors—begins to feel almost naive.
The retreat from collective citizenship
The decline of traditional notions of citizenship compounds these challenges. The ideal of citizen participation in the public sphere gained remarkable momentum from the 1950s through the 1980s—exemplified by the civil rights movement's grassroots organising, community-led activism, and the broader democratisation of civic engagement. Yet this has given way to more individualistic approaches. We see this in the rise of corporate lobbying that prioritises narrow interests over the collective good: in the EU, lobbying spending has grown sharply since 2012, with tech firms alone spending €113 million in 2023.
Meanwhile, patterns of civic engagement are shifting in telling ways. Whilst formal volunteerism has declined significantly in some contexts—with only 25% of American teenagers volunteering in 2015, down from 28% in 2005, ending three decades of rising civic participation among young people—there is more to the picture when we look at the type of volunteerism. In Brazil, for instance, individual volunteering has been replacing institutional volunteering, according to a 2018 analysis by IBGE. This suggests a shift away from collective, organised civic participation towards more individualised forms of engagement. The spaces for genuine collective dialogue and institutional civic participation have indeed shrunk.
Yet perhaps this moment of crisis also presents an opportunity for the evaluation community to fundamentally reimagine its role.
Reclaiming evaluation's political purpose
As Carol Weiss observed decades ago, evaluation operates within political contexts that shape both the questions asked and the use of findings. If we accept this reality, then evaluators cannot remain neutral observers standing outside the political fray. We must engage actively in the public sphere, not as advocates for particular policies, but as champions for evidence-informed deliberation.
This requires a shift from viewing citizens as passive beneficiaries of evaluation insights to recognising them as essential partners in the evaluation process itself. Over the past several years I have worked in different settings, using Evaluation Reference Groups both formally and informally to demonstrate how community members can shape evaluation questions, interpret findings, and develop action plans. When people participate meaningfully in generating evidence about their own lives, they become invested in acting on that evidence.
But participation alone is insufficient. Citizens need the tools, knowledge, and institutional mechanisms to transform evidence into influence. This means evaluation reports written in accessible language, findings disseminated through channels that communities actually use, and formal processes that connect evidence to decision-making.
It also requires evaluators to develop what we might call "political literacy"—an understanding of how power operates, where decisions get made, and how evidence can be strategically deployed to influence those processes. This doesn't mean abandoning rigour or neutrality in our methods, but rather acknowledging that the pursuit of evidence is inherently aimed at change.
The uncomfortable questions we must face
The questions this raises are not comfortable ones. How do we balance methodological integrity with the messy realities of advocacy? How do we maintain credibility whilst engaging more directly in political processes? How do we ensure that evidence serves collective rather than partisan interests?
I don't pretend to have the answers. But I believe these are the right questions for our field to grapple with as we consider evaluation's future. The alternative—continuing to produce technically sound but politically irrelevant studies—serves no one well, least of all the communities whose lives hang in the balance of the decisions our evidence might inform.
The path forward requires courage from evaluators to step beyond our traditional boundaries and wisdom from citizens to engage constructively with complex evidence. Most fundamentally, it requires recognising that evaluation's natural habitat is not the academic conference room or the consultant's office, but the public square where citizens and leaders negotiate the future of their communities.
Only when we create truly empowered citizens—people with access to evidence, skills to interpret it, and mechanisms to act on it—will evaluation fulfil its promise of contributing to better decisions and better lives.