Does anyone care for evidence?

Chinmaya Holla
Mar 1, 2021

I considered and rejected “What’s the Impact of Evaluations” as the title for this post. Feel free to use it; I think it’s a good title.

Summary:

  • Policymakers value evidence
  • Most evidence goes unused; consumption likely follows a power law
  • Three interlinked ways to think about this:
  1. Most evidence *is* pretty useless for policymakers; we need to radically overhaul the type and quality of evidence being produced
  2. It is good to have a corpus of evidence since we are not great at predicting what evidence will get used; most of it will get used at some point
  3. There is nothing to worry about; we are doing great

Hjort, Moreira, Rao & Santini have a new paper out where they look at whether policymakers care about using research and, if so, how much:

“…in Brazil, we work with 2,150 municipalities and their mayors. We use experiments to measure mayors’ demand for research information and their response to learning research findings. In one experiment, we find that mayors and other municipal officials are willing to pay to learn the results of impact evaluations, and update their beliefs when informed of the findings.
They value larger-sample studies more, while not distinguishing on average between studies conducted in rich and poor countries. In a second experiment, we find that informing mayors about research on a simple and effective policy (reminder letters for taxpayers) increases the probability that their municipality implements the policy by 10 percentage points.”

A 2014 World Bank publication looked at consumption of reports put out by the World Bank. Let’s assume downloads of a report are correlated with consumption of evidence in that particular report:

As the chart in that report shows, downloads seem to follow a power law: a few reports attract the bulk of the downloads, while most are barely downloaded at all.
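To make that intuition concrete, here’s a minimal sketch of the kind of check you could run yourself; the download counts below are made up for illustration, not taken from the World Bank data:

```python
# Minimal sketch: do (hypothetical) report download counts look power-law-ish?
# Under a power law, log(downloads) falls roughly linearly with log(rank).
import numpy as np

# Made-up download counts, sorted from most to least downloaded --
# NOT the actual World Bank numbers.
downloads = np.array([5200, 1800, 950, 400, 210, 120, 60, 30, 12, 5, 2, 1])
ranks = np.arange(1, len(downloads) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(downloads), 1)
print(f"log-log slope: {slope:.2f}")
# A steep negative slope means a handful of reports account for most downloads.
```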

How should we think about this?

  1. Most evidence *is* pretty useless for policymakers; we need to radically overhaul the type and quality of evidence being produced: There is considerable pushback against the kind of evidence that development economists are producing (let’s call it the “feigned ignorance” critique). Policymakers go: “Wtf, I knew this already. What’s the point of you being an expert if you tell me you solved a puzzle that no one cared about?” (Counterpoint)
  2. It is good to have a corpus of evidence since we are not great at predicting what evidence will get used; most of it will get used at some point, but tweak the processes of research: Two points here. One, given Overton windows and all that, you never know when unused evidence will become relevant (sometimes catastrophically so). Two, we need to think about improving the process of drawing lessons from evidence and where that evidence comes from. This, from a Vivalt and Coville paper:

“…We find that policy-makers care more about attributes of studies associated with external validity than internal validity, while for researchers the reverse is true. We also find evidence of asymmetric updating on good news relative to one’s prior beliefs and “variance neglect” — an insensitivity to confidence intervals. Finally, we show that information affects a real-life allocation decision.”

“…policy-makers put the most weight on impact and whether it was recommended by a local expert, followed by some small weight on being an RCT.”

  3. There is nothing to worry about; we are doing great: Maybe. I don’t know.

https://gfycat.com/rightwhirlwindboilweevil
