
Papers that caught my attention last week

A lot of interesting papers came across my desk this past week. Here are a few that I think may be useful to you too (and why):

Practical advice on robust standard errors in (not so small) samples: Imbens and Kolesár have an old working paper, just published in REStat, that makes three main points:
  • Use their improved method for calculating robust standard errors (due to Bell and McCaffrey 2002) in small and moderately sized samples, and you will see improvements over the standard methods packaged in Stata.
  • Heteroskedasticity-robust standard errors work well when the sample size is large or the distribution of the regressors is balanced. In small or even moderately sized samples, the bias can be substantial if the distribution of the regressor is skewed (e.g., the control group is much larger than the treatment group). They note that this point is understood in the theoretical literature but not in the empirical one.
  • Finally, they suggest a modification for cases with clustering.
Why you should care: I used to gloss over papers like this whenever I saw the “small samples” caveat. The statement in the abstract of “even in samples of 50 clusters or more” got my attention. Now I have to care, too, especially if I have a regressor that is skewed: that does not happen often in the field experiments I design, but we all run analyses where there are more children in one group than the other, etc. Plus, they have R code for implementing these confidence intervals at https://github.com/kolesarm/Robust-Small-Sample-Standard-Errors.
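
If you want a quick sense of what this looks like in practice, here is a minimal R sketch. The HC2 part below uses the sandwich and lmtest packages; the Bell-McCaffrey degrees-of-freedom adjustment itself is what the repository above implements, so treat this as a starting point rather than their method end to end. The toy data and numbers are made up.

library(sandwich)
library(lmtest)

# Toy data with a skewed regressor: 10 treated vs. 90 control, heteroskedastic errors
set.seed(123)
n <- 100
treat <- c(rep(1, 10), rep(0, 90))
y <- 1 + 0.5 * treat + rnorm(n, sd = 1 + treat)

fit <- lm(y ~ treat)

# HC1 is roughly what Stata's ", robust" reports; HC2 is the building block for
# the Bell-McCaffrey / Imbens-Kolesar correction
coeftest(fit, vcov. = vcovHC(fit, type = "HC1"))
coeftest(fit, vcov. = vcovHC(fit, type = "HC2"))

# The full recommendation pairs the HC2 variance with Satterthwaite-style degrees
# of freedom (and a clustered analogue); see the code in the repository linked above.

In this toy example, the gap between the HC1 and HC2 standard errors is exactly the kind of thing that tends to widen as the treatment group shrinks relative to control.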
 
Long-term effects of cash transfers: Araujo, Bosch, and Schady have a new NBER WP out that examines the effects of a cash transfer program for children under 6 a decade later. They find no effects of assignment to early vs. late treatment on performance in a variety of tests. Looking at older children (those making decisions about secondary schooling in eligible households), they find at best modest effects on school attainment, but no gains in employment. Inframarginal transfers may be one culprit (75% of the comparison group completed secondary school)…

These findings are in line with what we’re seeing in Malawi for older adolescents who received cash transfers for two years: schooling gains not translating into better outcomes in employment, empowerment, or health (slides I presented last week at a conference at UC Berkeley here; working paper forthcoming soon, and I will blog about it when it’s out). Even in subgroups where the schooling gains are large, they translate into some reductions in fertility, early childbearing, and marriage, but not into gains in other important domains.

Why you should take a look: Frankly, because too much “give poor people cash” can be hazardous to your intellect. Plus, and I say this with love, you probably won’t come across this paper in blogs by Dylan Matthews or Chris Blattman.

Should you predict data rather than collect it? My colleagues Tomoki Fujii and Roy van der Weide have a new paper that examines this question analytically by “maximizing statistical precision [or minimizing financial costs] under a budget constraint [statistical precision constraint]” for a wide set of parameter values. They find that the gains are modest at best and conclude that “you should collect new data when you need it and use predicted data to leverage existing data sources.”

This is a ubiquitous problem in development, especially when updating poverty or nutrition measures, and it is important for development practitioners to have some rules of thumb. Their findings are not surprising: predicting your outcome variable might pay off if (i) it is very expensive to collect; (ii) predictors of the outcome indicator are cheaply available (or cheap to collect); and (iii) the data are not spatially correlated.

Why you should care: Nonetheless, I did not have a prior on how much money one could save (holding precision constant) or vice versa...
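
To fix my own intuition, here is a back-of-the-envelope simulation in R. It is emphatically not the authors’ model, and every number in it (sample sizes, the predictive power of the cheap covariates, the assumption that those covariates are already in hand for everyone) is made up; it just shows the flavor of the trade-off between surveying the expensive outcome directly and predicting it from cheap covariates.

set.seed(1)
n_sims <- 5000      # Monte Carlo replications
N      <- 5000      # households whose cheap covariates are already in hand (assumed)
n_full <- 500       # outcome interviews you could afford without prediction (assumed)
n_half <- 250       # outcome interviews under the cheaper, prediction-based design (assumed)
r2     <- 0.6       # assumed predictive power of the cheap covariates

est_direct  <- numeric(n_sims)
est_predict <- numeric(n_sims)

for (s in 1:n_sims) {
  x <- rnorm(N)                                     # cheap covariate, known for everyone
  y <- 2 + sqrt(r2) * x + sqrt(1 - r2) * rnorm(N)   # expensive outcome, true mean = 2
  # Design A: measure the outcome directly on n_full households
  est_direct[s] <- mean(y[sample(N, n_full)])
  # Design B: measure it on only n_half households, fit a prediction model, predict for all N
  sub <- sample(N, n_half)
  fit <- lm(y[sub] ~ x[sub])
  est_predict[s] <- mean(coef(fit)[1] + coef(fit)[2] * x)
}

sd(est_direct)    # sampling error, direct survey of 500
sd(est_predict)   # sampling error, survey of 250 plus prediction

With these made-up numbers the two designs end up roughly equally precise, so the prediction route halves the outcome-collection bill; lower the assumed predictive power and the savings disappear. The paper does this properly, for a wide set of parameter values, with the budget (or precision) constraint made explicit, and allowing for spatial correlation, which this toy ignores.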

How much do you spend on the health of the non-human members of your family? When my first dog, Dizzy, had four lesions in his brain and went through a course of chemotherapy and steroids, he seemed to recover after about six months. We had one MRI from January showing the masses in his brain, and now he seemed better. When I called our specialist to see what was next, he suggested going back on chemotherapy, which had been messing badly with his liver: his reasoning (I am approximating) was that you don’t mess with the brain; you go after it. I will never forget his response when I objected and asked whether we should not check his brain again first: “If this were a human patient, that would be the first thing his doctor would order: another MRI. But I don’t know any dog in New Zealand that has had two MRIs of his brain.” (Again, I am paraphrasing.) Well, we went ahead and did a second MRI, and, miracle, the masses had disappeared: Dizzy lived a fairly happy, healthy, and chemo-free 18 months after that. And, needless to say, there was no price on that (I will probably be expelled from the ranks of economists for saying so)!

Einav, Finkelstein, and Gupta have a new paper, titled “Is American Pet Health Care (Also) Uniquely Inefficient?”, that documents the similarities between human and pet healthcare in the U.S. In this short paper, they document four similarities: (i) rapid growth in spending as a share of GDP over the last two decades; (ii) a strong income-spending gradient; (iii) rapid growth in the employment of healthcare providers; and (iv) a similar propensity for high spending at the end of life.

The “uniquely inefficient” apparently comes from a debate about why the level (and growth) of US healthcare spending is high compared with other nations despite outcomes that are no better. Particular institutional features of the US healthcare sector, such as insurance, public-sector reimbursement, and regulation, have been proposed as one explanation of this “uniquely inefficient” system. Following the recommendation of Chandra et al. (forthcoming), the authors propose that insights might be gained from looking at other industries in the U.S. and choose the case of pet health care. While the institutional setting is quite different in this sector (less than 1% insurance coverage and much less regulation), there are striking similarities, which may point to the role of preferences over health.

Why you should care: I am not sure…It’s an interesting paper that caught my attention.
 
P.S. In my last post, I promised to talk about power calculations and optimal design of field experiments. We ran into some messy variance-covariance matrices in the final proofs for our paper, which will require an additional few weeks of work. For that reader who is waiting with bated breath, I promise a resolution (along with an updated WP) before Thanksgiving…
 

Authors

Berk Özler

Lead Economist, Development Research Group, World Bank
