Wednesday, February 17, 2016

Winking and Mizer (2013) on a Field Experiment with the Dictator Game

Jeffrey Winking and Nicholas Mizer, “Natural-Field Dictator Game Shows No Altruistic Giving.” Evolution and Human Behavior 34(4): 288–293, July 2013.

• Laboratory experiments tend not to be fully anonymous, and the setting itself might spur pro-social behavior; in other words, the external validity of lab experiments is questionable. [Recall Levitt and List (2007).]

• The authors run the dictator game – player one receives a windfall and, if he or she chooses, can split it with player two – in a field setting, where participants do not know that they are taking part in an experiment. In laboratory versions of the dictator game, almost 2/3 of dictators (player ones) offer some money, and the average offer is about 28% of the endowment. 

• The field version involves Confederate 1 waiting at a bus stop. The (involuntary, as it were) subject comes to the stop to wait for a bus. Confederate 1 moves off a bit to take a call, turning his back on the subject. Confederate 2 walks by quickly, talking on the phone, and seems to notice casino chips in his pocket (three $5 chips and five $1 chips). Confederate 2 tells the subject that he is late for the airport, and offers him the chips. (In a second condition, Confederate 2 mentions that the subject could split the chips with Confederate 1.) Confederate 1 eventually sidles back after Confederate 2 has left, giving the subject an opportunity to split the chips. After 30 seconds, the experimenters reveal what they are up to, everyone enjoys a hearty laugh (I made that part up), and the subject completes a questionnaire for further payment. There is yet a third condition involving folks from the same bus stops playing an acknowledged dictator game experiment using casino chips. 
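As a quick back-of-the-envelope check (my own arithmetic, not a calculation from the paper): the chips sum to $20, so a subject behaving like the average laboratory dictator would hand over something like $5–6 worth of chips.

```python
# Back-of-the-envelope arithmetic (mine, not the paper's): what would a
# "typical lab" dictator offer look like with this endowment of chips?

chips = {5: 3, 1: 5}                 # three $5 chips and five $1 chips
endowment = sum(value * count for value, count in chips.items())

lab_average_share = 0.28             # ~28% of the endowment, per the lab literature cited above
typical_lab_offer = lab_average_share * endowment

print(f"Endowment: ${endowment}")                             # $20
print(f"Typical lab-style offer: ${typical_lab_offer:.2f}")   # about $5.60
```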

• In the field experiment, no one shared any chips. In the third condition (where the experiment was explicit), most subjects shared some chips, behaving in the standard laboratory fashion. 

• One possible confounding factor: it may be that the subjects usually would share the chips, but they just weren’t that keen on Confederate 1!

Disclosing Conflicts of Interest: Loewenstein, Cain, and Sah (2011)

George Loewenstein, Daylian M. Cain, and Sunita Sah, “The Limits of Transparency: Pitfalls and Potential of Disclosing Conflicts of Interest.” American Economic Review 101(3): 423–428, 2011 [pdf available here].

• Disclosure of conflicts of interest to customers would presumably eliminate any potential problems with such conflicts, if customers were standard rational actors. 

• But how do people respond to mandated disclosures? Those providers who have conflicts might increase the bias in their advice, realizing that their advice will tend to be discounted. They might even feel that the disclosure provides them with a “moral license” to mislead. 

• The recipients of the disclosure might think that the disclosure itself is a signal that the provider must be trustworthy. Further, they recognize that the rejection of the provider’s advice is now a sort of tacit accusation that the provider is corrupt – and people are wary of sending such signals. The desire to help out the provider might also push consumers to accept the conflicted advice. 

• The authors describe some experiments in which mandatory disclosure of conflicts of interest did indeed lead to worse advice, and acceptance of worse advice, with degraded outcomes for consumers (relative to when conflicted advisors were not required to disclose their conflicts). 

• If the disclosures were made not by the provider but by someone else, then consumers were better able to respond appropriately. It appears that it is when there is “common knowledge” of the conflict – the provider knows it, the consumer knows it, the provider knows that the consumer knows it, and so on – that the consumer’s incentive to go along with the biased advice is maximized. More time to respond to biased advice – a sort of cooling-off period – is also helpful to consumers. 

• An in-depth follow-up article on disclosure previously received the BE Outlines treatment.

Kamenica, Mullainathan, and Thaler (2011) on Poorly Informed Consumers

Emir Kamenica, Sendhil Mullainathan, and Richard Thaler, “Helping Consumers Know Themselves.” American Economic Review 101(3): 417–422, 2011 [pdf available here].

• Do cell phone users know how many minutes they will talk under various pricing plans? Cell phone companies might be better informed than consumers – and might offer contracts that consumers will wrongly think are their best option. This is the issue explored by Kamenica, Mullainathan, and Thaler (2011). 

• If price menus are fixed, the more information about her preferences a customer has, the better off she is. But when firms can choose pricing terms, increased information for consumers does not necessarily make consumers better off. 

• As in the Choice Architecture article, RECAP (Record, Evaluate, and Compare Alternative Prices) is suggested. The idea is that firms must disclose their pricing schemes, along with each consumer’s own usage information. Presumably this information could be used by third-party firms to compare plans and recommend to each consumer the plan that is best for him or her. 
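A minimal sketch of what such a third-party comparison might look like, using hypothetical plan terms and a made-up usage record (none of these numbers come from the paper): take each plan’s machine-readable pricing rule and the consumer’s disclosed usage history, compute the bill under every plan, and recommend the cheapest.

```python
# Minimal RECAP-style comparison sketch (hypothetical plans and usage, not from the paper).
# Each plan is a pricing rule: minutes used -> monthly bill.

def flat_plan(minutes):
    """Unlimited talk for a flat $60/month."""
    return 60.0

def metered_plan(minutes):
    """$20/month for 300 minutes, then $0.10 per extra minute."""
    return 20.0 + max(0, minutes - 300) * 0.10

plans = {"flat": flat_plan, "metered": metered_plan}

# Recorded usage (minutes per month) that the firm would have to disclose under RECAP.
usage_history = [850, 920, 780]

def recommend(plans, usage_history):
    """Average each plan's bill over the recorded usage and return the cheapest plan."""
    avg_cost = {name: sum(rule(m) for m in usage_history) / len(usage_history)
                for name, rule in plans.items()}
    best = min(avg_cost, key=avg_cost.get)
    return best, avg_cost

best, costs = recommend(plans, usage_history)
print(costs)                        # {'flat': 60.0, 'metered': 75.0}
print("Recommended plan:", best)    # flat
```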

• “Adverse targeting” (p. 418) is the authors’ term for the phenomenon whereby firms offer consumers pricing plans that will tempt those consumers but end up being costly to them. 
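To make adverse targeting concrete, here is a hypothetical worked example (my numbers, not the authors’), reusing the two plans from the sketch above: a consumer who underestimates her calling picks the plan that looks cheapest given her belief, and ends up paying more than she would have under the alternative.

```python
# Hypothetical adverse-targeting arithmetic (numbers are mine, not the authors').
# Same two plans as in the RECAP sketch above: a consumer who expects light usage
# is tempted by the metered plan, but her actual usage makes it the costlier choice.

def flat_plan(minutes):
    return 60.0                                   # unlimited, $60/month

def metered_plan(minutes):
    return 20.0 + max(0, minutes - 300) * 0.10    # $20 base, $0.10/min over 300

believed_minutes = 250     # what the consumer thinks she will use
actual_minutes = 900       # what she actually uses

# She compares plans using her (mistaken) belief and picks the metered plan...
print(metered_plan(believed_minutes), flat_plan(believed_minutes))   # 20.0 vs 60.0
# ...but at her actual usage the metered plan costs $20 more per month.
print(metered_plan(actual_minutes), flat_plan(actual_minutes))       # 80.0 vs 60.0
```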

• The requirement to reveal pricing is meant in part to avoid price shrouding, where important secondary prices – late fees, luggage fees, internet hook-up charges – are not made readily available to consumers. 

• For RECAP to help consumers, the consumers must want the information, and have easy means of responding to the information. For new sorts of services, estimating a consumer’s usage from past behavior is not possible. [I am heartened to learn from David Halpern's Inside the Nudge Unit that RECAP has been abandoned as a term because no one could remember what it stood for.]