Published on Development Impact

What does a game-theoretic model with belief-dependent preferences teach us about how to randomize?


The June 2017 issue of the Economic Journal has a paper entitled “Assignment procedure biases in randomized policy experiments” (ungated version). The abstract summarizes the claim of the paper:
“We analyse theoretically encouragement and resentful demoralisation in RCTs and show that these might be rooted in the same behavioural trait – people’s propensity to act reciprocally. When people are motivated by reciprocity, the choice of assignment procedure influences the RCTs’ findings. We show that even credible and explicit randomisation procedures do not guarantee an unbiased prediction of the impact of policy interventions; however, they minimise any bias relative to other less transparent assignment procedures.”

Of particular interest to our readers might be the conclusion: “Finally, we have shown that the assignment procedure bias is minimised by public randomisation. If possible, public lotteries should be used to allocate subjects into the two groups”.

Given this recommendation, I thought it worth discussing how they get to this conclusion, and whether I agree that public randomization will minimize such bias.

Their model
The key assumptions in their model are as follows:

  1. The outcome of interest (e.g. employment) depends not only on the intervention (e.g. job training), but also on the effort (e.g. job search behavior) individuals exert, and individuals face a (quadratic) cost of exerting effort.
  2. Individuals do not just care about their payoffs from treatment, but also the way they are treated. If they feel treated badly, they resent the experimenter, feel discouraged, and exert less effort. If they think they have been treated well, they are encouraged, want the program to succeed, and exert more effort.
  3. When public randomization is used, an individual’s beliefs about the payoff the experimenter intends to give him or her are not influenced by their treatment assignment. In contrast, if the assignment mechanism is not public randomization, the higher the payoff the individual believes the experimenter intends to give him or her, the more encouraged and less resentful they are.
The result is that the difference in effort levels between the two groups is larger when assignment into the two groups is done directly (e.g. via private randomization) than through public randomization. In this model, private randomization always leads to an over-estimate of the treatment effect, because the control group is demotivated, while the treatment group is motivated and works harder than it would if everyone were being treated.

Even public randomization does not guarantee unbiasedness in this model, since people can also view themselves as more (or less) deserving of treatment than others in the population, and so when treatment is given to only some people via public randomization, they may still feel encouragement or resentment at being treated or not while others aren’t. But this bias is smaller than with private randomization.
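To fix ideas, here is a minimal simulation sketch of this mechanism. The functional forms and parameter values are my own illustrative choices rather than the paper’s, and for simplicity the reciprocity term is set to zero under a public lottery, so the sketch abstracts from the smaller residual bias the authors derive in that case:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative (made-up) parameter values -- not taken from the paper.
tau = 1.0         # direct effect of the program on the outcome
b = 0.5           # marginal return to effort in the outcome
c = 1.0           # quadratic effort-cost parameter: cost(e) = c * e**2 / 2
kindness = 0.3    # encouragement felt by the treated under private assignment
resentment = 0.3  # demoralization felt by controls under private assignment

def simulate(procedure):
    """Estimated treatment effect under a given assignment procedure."""
    T = rng.integers(0, 2, n)  # 50/50 assignment to treatment
    if procedure == "private":
        # Treated individuals feel kindly treated; controls feel resentful.
        recip = np.where(T == 1, kindness, -resentment)
    else:
        # Public lottery: assignment carries no signal about the experimenter's intentions.
        recip = np.zeros(n)
    # Choosing effort e to maximize (b + recip)*e - c*e**2/2 gives e* = (b + recip)/c.
    effort = np.maximum((b + recip) / c, 0.0)
    y = tau * T + b * effort + rng.normal(0, 1, n)
    return y[T == 1].mean() - y[T == 0].mean()

print("Private assignment:", round(simulate("private"), 3))
print("Public lottery:    ", round(simulate("public"), 3))
print("True policy effect:", tau)
```

With these made-up numbers, the private-assignment comparison over-estimates the true effect of 1.0 by roughly b·(kindness + resentment)/c = 0.3, while the public lottery recovers it.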

How plausible is all of this, and should we all just publicly randomize?
The authors say that they want “a unified theory based on one behavioural trait (i.e. reciprocity) that can jointly explain the Hawthorne and John Henry effects in RCTs”.

But there are several issues with this.
  1. The classic John Henry effect is of course the opposite of what is being proposed here – rather than discouragement, those in the control group seek to show they can succeed without treatment and would exert extra effort. But as I have previously discussed, the evidence for the existence of John Henry effects in practice is really slim – I cited an early version of this paper in that post, and noted that it gave two old examples from the U.S. It now also notes the Bulte et al. paper that Berk has previously raised concerns about, and speculates about Progresa, noting that the randomization method is not clearly explained.
  2. Evidence for Hawthorne effects is also somewhat limited. Jed provided an overview of what the health literature had to say – most effects seemed very short-lived. Dave Evans discussed this in education, where there were some effects on teacher effort of being observed – note that this was more about being observed than about treatment status.
  3. The model assumes that program participants have full information about who is getting treated and how many, regardless of the assignment mechanism. But one can easily imagine that the very act of holding a public randomization ceremony both i) makes the treatment seem much more important and salient in people’s lives; and ii) makes the identity of winners and losers in the randomization much more visible, leading to resentment and pride. If so, the authors’ conclusion that public randomization will have less bias than private randomization would no longer hold.
Given these issues, I don’t think we can conclude that public randomization will always be better or less biased than private randomization. It is certainly more transparent, but it is not always the case that more transparency will lead to less behavioral response.

What the paper does suggest to me is that we should think more about testing the importance of these mechanisms. The following design comes to mind, where we randomize the way randomization is done for different individuals in the experiment (just don’t then ask me whether this first randomization should itself be public or private), allocating individuals to the following groups:
  • Public randomization: individuals in this group get invited to a public randomization ceremony, and they clearly and transparently see the reason they are selected or not is due to random chance. They also then see others get selected or not, and any resentment or elation from relative comparisons will apply.
  • Private randomization: Individuals in this group are told that they were randomly allocated by computer to treatment or control status. They won’t know how many were chosen, or necessarily know the identities of who else was chosen, but may harbor some doubt as to whether it was really random selection or not.
  • Random selection which participants take to be non-random: This would require some consideration, since it would involve deception. But I could imagine having a random draw at a government ministry, with someone from the ministry drawing the names out of a bag. Then participants in the treatment group could be told “You were selected for this program by officials from the Ministry of Education”, and those in the control group told “You were not selected for the program when officials from the Ministry of Education made the selection”. This would be technically correct, although it omits that the draw was done randomly.
Under the assumptions of this paper, the behavioral response in effort to treatment status is expected to be different under these three ways of doing the random selection. So comparing the three groups would help to rule out assignment procedure bias being important in your study.
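As a rough sketch of what such a design and its analysis could look like (the arm labels, the placeholder data-generating process, and the regression specification below are hypothetical illustrations, not taken from the paper or any existing study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 3_000

# Stage 1: randomize the assignment procedure itself, in three equal-probability arms.
procedure = rng.choice(["public", "private", "perceived_nonrandom"], size=n)

# Stage 2: within each procedure arm, randomize treatment status 50/50.
treated = rng.integers(0, 2, size=n)

# Placeholder outcome with a common treatment effect of 1.0; in a real study this
# would be the outcome measured after the intervention.
y = 1.0 * treated + rng.normal(0, 1, size=n)

df = pd.DataFrame({"y": y, "treated": treated, "procedure": procedure})

# If the assignment procedure does not matter for the behavioral response, the
# treated x procedure interaction terms should be jointly zero.
model = smf.ols("y ~ treated * C(procedure)", data=df).fit()
print(model.summary())
```

A joint test of those interaction terms then asks whether the estimated treatment effect varies with how the random assignment was carried out and perceived.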

Anybody know of studies which have tried this or something similar?
 

Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
