Labor unions may affect innovation negatively

An interesting paper by Daniel Bradley, Incheol Kim, and Xuan Tian was recently published in Management Science (link to the SSRN version).

Continue reading Labor unions may affect innovation negatively


How effective are patents really?

Today, an interesting NBER working paper by Deepak Hegde from NYU Stern and coauthors was published:

We provide evidence on the value of patents to startups by leveraging the random assignment of applications to examiners with different propensities to grant patents. Using unique data on all first-time applications filed at the U.S. Patent Office since 2001, we find that startups that win the patent “lottery” by drawing lenient examiners have, on average, 55% higher employment growth and 80% higher sales growth five years later. Patent winners also pursue more, and higher quality, follow-on innovation. Winning a first patent boosts a startup’s subsequent growth and innovation by facilitating access to funding from VCs, banks, and public investors.
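The identification idea behind the “lottery” in the abstract is an examiner-leniency instrument: because applications are assigned to examiners as good as randomly, an examiner's propensity to grant shifts whether a startup receives a patent for reasons unrelated to the startup's underlying quality. Here is a minimal sketch of that logic in Python (entirely simulated data with made-up parameters, not the paper's data or code):

```python
# Toy examiner-leniency design: leniency instruments for the patent grant.
# All parameters below (0.5 true effect, leniency range, etc.) are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

quality = rng.normal(size=n)               # unobserved startup quality
leniency = rng.uniform(0.2, 0.8, size=n)   # examiner grant propensity (as-if random)
grant = (rng.uniform(size=n) < leniency + 0.1 * quality).astype(float)
growth = 0.5 * grant + quality + rng.normal(size=n)   # assumed true effect: 0.5

# Naive OLS is biased upward: better startups both win patents and grow more.
c = np.cov(growth, grant)
naive = c[0, 1] / c[1, 1]

# The Wald/IV estimate through leniency recovers roughly the assumed 0.5.
iv = np.cov(growth, leniency)[0, 1] / np.cov(grant, leniency)[0, 1]

print(f"naive OLS: {naive:.2f}, IV via leniency: {iv:.2f}")
```

In the actual study, the leniency measure would be constructed from examiners' grant decisions rather than observed directly; the toy version compresses that into a single draw per application.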

Continue reading How effective are patents really?

Judea Pearl on Angrist and Pischke

Today, Judea Pearl commented on a new NBER working paper by Josh Angrist and Jörn-Steffen Pischke in an email to subscribers of the UCLA Causality Blog. I think the text is too good to remain hidden in a mailing list, though. That’s why I will quote it here:

Overturning Econometrics Education
(or, do we need a “causal interpretation”?)

My attention was called to a recent paper by Josh Angrist and Jörn-Steffen Pischke titled “Undergraduate econometrics instruction” (an NBER working paper):
http://www.nber.org/papers/w23144?utm_campaign=ntw&utm_medium=email&utm_source=ntw

This paper advocates a pedagogical paradigm shift that has methodological ramifications beyond econometrics instruction. As I understand it, the shift stands contrary to the traditional teachings of causal inference, as defined by Sewall Wright (1920), Haavelmo (1943), Marschak (1950), Wold (1960), and other founding fathers of econometrics methodology.

In a nutshell, Angrist and Pischke start with a set of favorite statistical routines, such as IV, regression, and differences-in-differences, and then search for “a set of control variables needed to insure that the regression-estimated effect of the variable of interest has a causal interpretation”. Traditional causal inference (including economics) teaches us that asking whether the output of a statistical routine “has a causal interpretation” is the wrong question to ask, for it misses the direction of the analysis. Instead, one should start with the target causal parameter itself and ask whether it is ESTIMABLE (and if so, how), be it by IV, regression, differences-in-differences, or perhaps by some new routine that is yet to be discovered and ordained by name. Clearly, no “causal interpretation” is needed for parameters that are intrinsically causal: for example, “causal effect”, “path coefficient”, “direct effect”, “effect of treatment on the treated”, or “probability of causation”.

In practical terms, the difference between the two paradigms is that estimability requires a substantive model, while interpretability appears to be model-free. A model exposes its assumptions explicitly, while statistical routines give the deceptive impression that they run assumption-free (hence their popular appeal). The former lends itself to judgmental and statistical tests; the latter escapes such scrutiny.

In conclusion, if an educator needs to choose between the “interpretability” and “estimability” paradigms, I would go for the latter. If traditional econometrics education is tailored to support the estimability track, I do not believe a paradigm shift is warranted towards an “interpretation-seeking” paradigm such as the one proposed by Angrist and Pischke.

I would gladly open this blog for additional discussion on this topic.

I tried to post a comment on NBER (National Bureau of Economic Research), but was rejected for not being an approved “NBER family member”. If any of our readers is an “NBER family member”, feel free to post the above.

Note: “NBER working papers are circulated for discussion and comment purposes.” (page 1).

Judea

Update: The text has since been published on the UCLA Causality Blog.
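To make Pearl's estimability point concrete, here is a minimal Python sketch (simulated data and an assumed structural model; nothing below comes from either paper). The target causal parameter is the effect of X on Y; the assumed model, in which Z confounds X and Y, tells us the parameter is estimable by backdoor adjustment for Z:

```python
# A toy illustration of the "estimability" direction: start from the causal
# parameter, then use the (assumed) model to decide how to estimate it.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Assumed structural model: Z -> X, Z -> Y, and X -> Y with effect beta = 2.0.
z = rng.normal(size=n)
x = 1.5 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

def ols(target, design):
    """Least-squares coefficients for the given design matrix."""
    return np.linalg.lstsq(design, target, rcond=None)[0]

# The model identifies beta via backdoor adjustment for Z ...
adjusted = ols(y, np.column_stack([x, z, np.ones(n)]))[0]    # ~ 2.0
# ... while the unadjusted slope estimates a different, non-causal quantity.
unadjusted = ols(y, np.column_stack([x, np.ones(n)]))[0]     # ~ 3.4 here

print(f"adjusted (identified): {adjusted:.2f}, unadjusted: {unadjusted:.2f}")
```

The direction of the analysis is the point: the model licenses the adjusted regression as an estimator of the target parameter, whereas asking after the fact whether the unadjusted regression “has a causal interpretation” gets things backwards.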

Causality for Policy Assessment and Impact Analysis

Here is a great introductory lecture on causal inference and the power of directed acyclic graphs / Bayesian networks. It repeats a point I made earlier on this blog: big data alone, without a causal model (i.e., theory) to support it, is simply not sufficient for making causal claims.

Continue reading Causality for Policy Assessment and Impact Analysis
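As a quick illustration of that point, consider a toy simulation (an assumed DAG with an unobserved common cause u; all numbers are made up) in which the true causal effect of x on y is zero, yet the naive estimate converges to a nonzero value no matter how much data we collect:

```python
# More data alone does not produce causal answers: under confounding,
# the naive slope converges to the wrong number as n grows.
import numpy as np

rng = np.random.default_rng(2)

def naive_slope(n):
    u = rng.normal(size=n)          # unobserved common cause
    x = u + rng.normal(size=n)      # "treatment"; its true effect on y is 0
    y = u + rng.normal(size=n)      # outcome driven only by u
    c = np.cov(x, y)
    return c[0, 1] / c[0, 0]        # regression slope of y on x

for n in (10**3, 10**5, 10**7):
    print(f"n = {n:>10,}: naive slope = {naive_slope(n):.3f}")  # -> ~0.5, not 0
```

No amount of extra data moves the slope toward the true effect of zero; only a causal model can tell us that u has to be measured, blocked, or instrumented around.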