Dear European Research Council, evaluating grant programs is harder than you think

Today the European Research Council tweeted about a study that supposedly shows how successful their research grants are.

ERC grants provide a lot of money to up-and-coming and established researchers based in Europe to carry out larger research projects and agendas. Of course we would like to know whether the money is well spent. That’s why the ERC commissioned this study, which found that “73% of projects evaluated have made breakthroughs or major scientific advances” (follow the link in the tweet). Great success, huh? Well, I was not so happy.

The relevant metric for judging whether ERC grants are effective is not whether they create scientific breakthroughs but whether they create additional breakthroughs that otherwise wouldn’t have occurred. That’s what I meant by counterfactual thinking in my tweet. We have to compare the status quo, i.e. the number of breakthroughs that happened in projects with ERC funding (the factual situation), with a hypothetical (counterfactual) situation in which these projects hadn’t received an ERC grant.

Maybe the projects would never have been carried out without the grant. But maybe they would have gotten funding from another source, for example a national science foundation. In that case it could very well be that exactly the same scientific breakthroughs would have happened. Then the additionality of an ERC grant (what statisticians call a treatment effect) would have been zero.
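To make the point concrete, here is a minimal simulation with made-up numbers (purely illustrative, not taken from the ERC study): suppose 73% of funded projects produce breakthroughs, but a sizeable share of those projects would have found funding elsewhere and succeeded anyway. The naive “success rate” then vastly overstates what the grants actually add.

```python
import random

random.seed(42)

# All numbers below are hypothetical, chosen only to illustrate the logic.
N = 10_000            # funded projects
P_ALT_FUNDING = 0.6   # share that would have found funding elsewhere
P_SUCCESS = 0.73      # breakthrough probability for a funded project

breakthroughs_factual = 0         # what a "success story" evaluation counts
breakthroughs_counterfactual = 0  # what would have happened without the grants

for _ in range(N):
    succeeds_when_funded = random.random() < P_SUCCESS
    funded_elsewhere = random.random() < P_ALT_FUNDING
    if succeeds_when_funded:
        breakthroughs_factual += 1
        # Without the grant, only projects with alternative funding
        # would still have produced their breakthrough.
        if funded_elsewhere:
            breakthroughs_counterfactual += 1

naive_rate = breakthroughs_factual / N
treatment_effect = (breakthroughs_factual - breakthroughs_counterfactual) / N
print(f"naive success rate:             {naive_rate:.2f}")
print(f"additional breakthroughs (ATE): {treatment_effect:.2f}")
```

With these invented parameters the naive rate sits near 0.73, while the share of breakthroughs actually attributable to the grant is only around 0.29: same data, very different policy conclusion.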

Teasing out the treatment effect from non-experimental data is tough, but it can be done. It requires more sophisticated methods and know-how than the authors of the linked study brought to the task, though. That’s for sure.
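One standard quasi-experimental design (my example, not something the linked study used) exploits the fact that grant applicants are scored by review panels and funded only above a cutoff: applicants just below and just above the cutoff are nearly identical, so comparing their outcomes approximates the treatment effect. A sketch on simulated data, with all numbers invented:

```python
import random

random.seed(0)

CUTOFF = 0.5
BANDWIDTH = 0.05   # "just above/below" window around the cutoff

def breakthrough(score: float, funded: bool) -> bool:
    """Simulated outcome: probability rises with applicant quality,
    and funding adds a true effect of +0.2 (our invented ground truth)."""
    return random.random() < min(1.0, 0.5 * score + (0.2 if funded else 0.0))

near_funded, near_unfunded = [], []
for _ in range(100_000):
    score = random.random()              # reviewer score in [0, 1]
    funded = score >= CUTOFF
    if abs(score - CUTOFF) < BANDWIDTH:  # keep only marginal applicants
        (near_funded if funded else near_unfunded).append(
            breakthrough(score, funded)
        )

# Difference in breakthrough rates at the cutoff estimates the effect.
effect = (sum(near_funded) / len(near_funded)
          - sum(near_unfunded) / len(near_unfunded))
print(f"estimated effect at the cutoff: {effect:.2f}")
```

Because marginal winners and losers are so similar, the estimate recovers roughly the simulated +0.2 effect rather than the inflated raw success rate of the winners.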

I admit that I felt a bit triggered when I read the tweet this morning. In my job market paper I evaluate the effectiveness of another European grant program for innovative young companies. And I see this type of naive “success story” evaluation all the time in policy documents: “projects that we’ve funded created this and that”. Great, but that tells us nothing about whether the grants are necessary or worth the money. Maybe we are spending millions of taxpayers’ money on programs that create little or no extra value. Or maybe the grants are super effective and we should increase their budget immediately. We simply cannot tell from the study.

Proper evaluation of grant programs is important. We want to spend our money effectively in order to make a real contribution to society. And we want to learn which kinds of programs work and which don’t. We actually have the statistical tools to do a sound evaluation. So please, ERC, next time hire people with the right skills for the job.


Innovation on (government) demand?

Next week we are organizing the 7th ZEW/MaCCI Conference on the Economics of Innovation and Patenting in Mannheim, and the program will be great. We will have Bronwyn Hall from Berkeley and Pierre Azoulay from MIT as keynote speakers. I’m definitely looking forward to hearing them speak.

I myself will present a new project on the relationship between public procurement and innovation. In brief, the research question is the following. Continue reading Innovation on (government) demand?

European Integration in Science and Technology Policy

We recently published a new discussion paper (updated in September 2017) that I had been working on for quite some time. My coauthor and I study the effectiveness of subsidies for research and development (R&D) at the European level. Subsidies to support R&D activities by private firms are an essential part of science and technology policy in all OECD countries. Economic theory tells us that firms (especially small and young ones) invest too little in innovation, either because they’re financially constrained or because their ideas get imitated. Governments should therefore step in to boost R&D and enhance the competitiveness of the economy. Continue reading European Integration in Science and Technology Policy