Sunday, September 23, 2018

Monterosso and Ainslie (2007) on Recovery from Addiction

John Monterosso and George Ainslie, “The Behavioral Economics of Will in Recovery from Addiction.” Drug and Alcohol Dependence 90(Supplement 1): S100-S111, September 2007.

• Addicts have severe self-control issues with respect to the object of their ardor.

• The goal of this article is to suggest that behavioral economics ideas are fruitful not only in thinking about the process of developing an addiction, but also in understanding recovery from addiction.

• Dynamic inconsistency seems to be tied up in addiction: addicts often express (quite credibly) desires to quit or cut down, but do not follow through on those desires.

• But dynamic inconsistency might also suggest pathways out of addiction – and many treatment programs look to develop these pathways, which involve internal, not external, commitment devices (“private side-betting”).

• Drug use would not be a problem if the pain arose immediately, and the gratification arrived after a delay.

• “Hyperbolic discounting implies that the increase in valuation that occurs when moving a fixed unit of time closer to an expected outcome is proportionately greater the closer one is to that outcome. Think of the experience of waiting for an additional day for an important event that is a year off, versus for one that is imminent [p. 3].”

• Hyperbolic discounting sets one up for dynamic inconsistency of the immediate-gratification variety; further, addicts tend to display higher discount rates than non-addicted people (though causality might run both ways). How is it that people with this sort of discounting ever recover, and become dynamically consistent with respect to their intentions to indulge in drugs, say? (A numerical sketch of hyperbolic discounting appears at the end of this post.)

• A hyperbolic discounter has a multitude of different (time-based) “interests” – it is Jekyll and Hyde and the rest of the London population, too.

• How does one “interest” protect itself against foreseeable future “interests”? One approach is to make the tempting act unavailable, or raise its cost – perhaps by announcing to your social circle that you are on a diet, for instance, or are having a Dryuary. Or maybe you can deflect your attention (subconsciously) from activities that lead to your problem activity, or develop a repugnance towards them.

• These approaches are not really about willpower; rather, they signal a sophisticated understanding of your own future lack of willpower. And yet many treatment programs harp on building willpower.

• Another method to bolster willpower is a form of mental accounting: you bundle current choices with a string of future choices. In this way, a choice to drink today isn’t just about drinking or not drinking today: a choice to drink means that you will make a similar choice in future days – and that prospect might be sufficiently harrowing to keep you from drinking today.

• Experiments show that humans and non-human animals do choose less impulsively when they know that the current choice will bind similar choices down the road.

• What if you knew that having a drink today would have no effect on your future behavior, because you were pre-determined to drink every future day? Or what if you were told that you were pre-determined never to drink again, whether or not you drink today? In either case, there would be little point in abstaining today: current abstention seems to be bolstered by the notion that it can influence future choices. Since people do abstain, they must see a link between today’s choice and future choices.

• Bundling can arise when someone sees that “I’ll smoke today and quit tomorrow” will also apply tomorrow, ad infinitum. Then, the actual choices today are “I’ll smoke today and forever” or “I’ll stop today and not relapse.” The personal “rule” becomes “never smoke.” (The sketch at the end of this post shows how bundling can flip the choice toward abstaining.)

• The situation is like an intrapersonal repeated prisoners’ dilemma: the only reason you choose to “cooperate” today is if you can thereby make it more likely that your future selves also will choose to cooperate.

• But recall: in iterated prisoners’ dilemmas, it is hard to restore cooperation after a single defection. Likewise, a single lapse from abstinence by a recovering addict can lead to a binge. This sort of behavior looks like it is better described by the “bundling” model of willpower, not a story of binding commitments.

• Why would an addict fall off the wagon?: “[T]o the extent that her abstinence is based on a bundling effect, the primary danger comes from factors that reduce her differential expectation of future abstinence as a function of current abstinence [p. 9].”

• This reduction in the perceived link between current abstinence and future abstinence can derive from overconfidence, underconfidence, or rationalization.

• Twelve-step programs emphasize that willpower is unreliable, yet their adherents seem to do better (than those in other treatment forms) in overcoming cravings.

• Twelve-step treatments seem to respond to the threats created by overconfidence, underconfidence, and rationalization. How? (1) powerlessness and its related credos; (2) the focus on abstinence and the permanence of addiction; and (3) the adoption of doable goals, such as “one day at a time,” while tracking the abstinence streak.

• “When a person structures her choices with personal rules she can be expected to express different preferences than she would if she were making a choice just on the basis of its own merits, and these preferences are apt to differ as well among categories of reward, according to their temporal distribution, emotional relevance, dangerousness, impulse control history, and doubtless many other factors [p. 12].” 

• There is a possibility for a deleterious positive feedback loop, where proximity to the temptation good (or a cue) increases the probability of consumption, which increases appetite, which further increases the probability of consumption…
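
• For readers who want to see the mechanics behind the bullets above, here is a minimal numerical sketch (mine, not the authors’) of hyperbolic discounting, preference reversal, and bundling; the reward sizes, delays, and discount parameter k are illustrative assumptions, not numbers from the paper.

```python
# A rough numerical sketch of two ideas in the paper: (i) hyperbolic
# discounting produces preference reversals as a temptation draws near, and
# (ii) bundling today's choice with a string of future choices can flip the
# decision back toward abstinence.  The one-parameter hyperbolic form
# V = A / (1 + k*D), and all amounts, delays, and k below, are my own
# illustrative assumptions, not numbers from Monterosso and Ainslie.

def value(amount, delay, k=1.0):
    """Hyperbolically discounted present value of `amount` at `delay` periods."""
    return amount / (1.0 + k * delay)

SMALL, LARGE = 5.0, 8.0   # smaller-sooner vs. larger-later reward
GAP = 1                   # the larger reward arrives one period after the smaller one

# (i) Preference reversal: from far away, waiting looks better; up close,
# the immediate reward wins.
far = 10
print(value(SMALL, far), value(LARGE, far + GAP))  # ~0.45 < ~0.67: plan to wait
print(value(SMALL, 0), value(LARGE, GAP))          # 5.00 > 4.00: indulge when the moment comes

# (ii) Bundling: treat today's choice as deciding the next n days at once.
def bundled(amount, first_delay, n):
    """Summed present value of receiving `amount` on each of n consecutive days."""
    return sum(value(amount, first_delay + t) for t in range(n))

for n in (1, 10):
    indulge = bundled(SMALL, 0, n)    # indulge today (and, if bundled, every day)
    abstain = bundled(LARGE, GAP, n)  # abstain today (and, if bundled, every day)
    print(n, round(indulge, 2), round(abstain, 2))
# n=1:  5.0 vs 4.0    -> an isolated choice favors indulging
# n=10: 14.64 vs 16.16 -> the bundled choice favors abstaining
```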

Wednesday, August 22, 2018

Banker et al. (2017) on Sticky Anchors

Sachin Banker, Sarah E. Ainsworth, Roy F. Baumeister, Dan Ariely, and Kathleen D. Vohs, “The Sticky Anchor Hypothesis: Ego Depletion Increases Susceptibility to Situational Cues.” Journal of Behavioral Decision Making 30(5): 1027-1040, December 2017 [pdf here].

• “Ego Depletion”: Exerting self-control undermines self-control in slightly later situations. 

• Is ego depletion a thing? Maybe, maybe not (pdf here); like many behavioral concepts (for example, grit [pdf here]), ego depletion has its detractors.

• In a dictator game-type setting, does less altruistic behavior (from ego depletion) come about because self-control over selfishness is undermined, or because people are more inclined to follow salient situational cues? This latter possibility is the sticky anchor hypothesis: depleted people are more suggestible or manipulable.

• Some dictator game experiments show that depleted people do indeed keep more of the monetary stake for themselves. 

• Are the dictator game findings due to unleashed selfishness or to sticking with the default? To test between the two mechanisms, the authors switch the default: they conduct reverse dictator games. The other (anonymous) party is endowed with the full stake, but you, as the dictator, can choose to take some or all of it. The default, now, is the unselfish setting, so if depleted people have a hard time overcoming defaults, they will leave more of the cash with the other player (relative to the amount left by undepleted folks). Alternatively, if depletion makes you selfish, depleted folks will take more of the endowment from the other player. 

• The depletion manipulation (in Experiment 1, with overall N=54) involves writing some text without using the letters A or N. (Those in the “undepleted” camp write without using the letters X and Z. Incidentally, this reminds me of the singular novel Ella Minnow Pea.) That is, it is attention control that is the source of ego depletion within this experiment.

• On average, those in the depleted condition take less money for themselves ($2.62) than do those in the undepleted condition ($3.69). So, ego depletion does not seem to increase selfishness; rather, it makes it harder to overcome the influence of environmental cues: defaults become more sticky. Notice that neither group is particularly generous. 

• But maybe the attention control manipulation means that the depleted also feel like failures, because they perform poorly at writing without A’s or N’s: perhaps they take less money in the reverse dictator game because they feel they don’t deserve it. Vicarious depletion [pdf here] to the rescue! In Experiment 2, depletion is induced vicariously, by having subjects take the perspective of a waiter who is really hungry… (alternatively, not so hungry…).

• Once again, the depleted folks (that is, the now vicariously depleted folks, who feel no shame) stick more closely to the anchor: it doesn’t seem to be a lack of desert that causes them to take less money (or fewer lottery tickets) for themselves. 

• But maybe it isn’t general environmental cues at work, maybe it is just our old friend, the status quo effect. So now (Experiment 3) the authors offer either a high or a low anchor before subjects decide how much money to take. Note that the default (the anonymous other gets all the cash) is unchanged, but there is a new situational cue, the anchor. (Subjects are first asked whether they want to take more or less than the anchor; only then are they asked for the precise amount they want to take.) Will depleted people respond to the anchor (more than non-depleted people do), or just to the default? 

• Both depleted and non-depleted subjects respond to the anchor. But the depleted respond more, particularly in the low-anchor treatment. The influence of environmental cues, and not the status quo per se, is what leads to different behavior by depleted folk relative to the undepleted.

Sunday, August 12, 2018

Shrader, Wooten, White, et al. (2017) on Using Loss Aversion to Motivate Students

Rebekah Shrader, Jadrian James Wooten, Dustin R. White, et al., “Improving Student Performance through Loss Aversion.” December 12, 2017; updated version available here.

• Pairs of nearly identical courses are offered, where one element of each pair calculates student points as losses from a perfect base: a score of 50 means the student has lost 50 points, as opposed to the usual (control) case where points accumulate as assignments are completed correctly. (A small sketch of the two frames appears at the end of this post.)

• The idea is to see if the “loss framing” triggers greater student effort in a bid to avoid or minimize losses (as opposed to hoping to acquire gains); that is, the authors are testing to see if enlisting aversion towards losses via the grading framework leads to better student performance. 

• Students were not informed when they signed up for their classes that they were part of a field experiment. 

• The loss framing (“counting down”) was associated with higher grades – some 2.6 to 4.2 percentage points higher. 

• Did students perform better in the loss framework simply because it was unusual? 

• A couple of related papers, not (yet?) covered by Behavioral Economics Outlines, are Roland G. Fryer, Jr., Steven D. Levitt, John A. List, and Sally Sadoff, “Enhancing the Efficacy of Teacher Incentives Through Framing: A Field Experiment” (April 2018, pdf here), and Steven D. Levitt, John A. List, Susanne Neckermann, and Sally Sadoff, “The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance,” American Economic Journal: Economic Policy 8(4): 183-219, November 2016.
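
• A tiny sketch, just to make the two grading frames concrete; the 100-point course total and the sample scores are my own illustrative assumptions, not details from the paper.

```python
# The two grading frames report the same underlying performance; only the
# reference point differs.  The 100-point total and the sample scores are
# illustrative assumptions, not data from the paper.

TOTAL = 100
scores = [18, 15, 20, 17]   # hypothetical points earned on four assignments

earned = sum(scores)        # gain frame (control): accumulate points from 0
lost = TOTAL - earned       # loss frame (treatment): count down from a perfect base

print(f"Gain frame: {earned}/{TOTAL} points earned")
print(f"Loss frame: {lost} points lost (grade = {TOTAL - lost}/{TOTAL})")
# Both frames imply the same grade (70/100); the loss frame is meant to make
# each missed point salient as a loss, which is the lever the authors test.
```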

Clark and Lisowski (2017) on Prospect Theory and Moving

William A. V. Clark and William Lisowski, “Prospect Theory and the Decision to Move or Stay.” Proceedings of the National Academy of Sciences of the United States of America 114(36): E7432–E7440, September 5, 2017.

• Clark and Lisowski examine residential moves (of 70 kilometers or more) in Australia between 2010 and 2014. 

• The analysis assumes that the status quo residence represents the reference point. 

• The authors argue that the endowment effect in housing occurs because residents learn more about advantages and disadvantages of their housing, and that this raises “use values” relative to “exchange values.” 

• Previous empirical evidence indicates that the probability of moving decreases with the duration of living in the current residence; the authors, therefore, include a duration variable, as well as an indicator for owning versus renting, among their independent variables. 

• Clark and Lisowski also possess a variable that captures the extent of self-reported risk aversion on the part of the surveyed individual. It turns out that people who don’t move are quite likely to be in the top half of the population in terms of this measure of risk aversion. 

• Movers tend to be younger, and they tend to be renters in their initial residence. Couples with kids are less likely to move.  

• Both duration and home ownership are associated with a decreased probability of relocation, which the authors interpret as an endowment effect -- but are these really endowment effects?

Friday, August 10, 2018

Chen and Schonger (2016) on Ambiguity Aversion

Daniel L. Chen and Martin Schonger, “Is Ambiguity Aversion a Preference?” TSE Working Paper No. 16-703, December 2016.

• Ambiguity aversion has been implicated in many real-world phenomena, including the equity premium puzzle: the stock market operates under Knightian uncertainty (ambiguity), not risk, and so ambiguity-averse investors need compensation to buy stocks. Overly punitive plea bargains are acceptable to ambiguity-averse defendants…

• But perhaps the sort of behavior exhibited in the Ellsberg paradox is not really indicative of underlying preferences – perhaps it is a mistake, the use of a decision heuristic in inappropriate circumstances. Perhaps people are not actually ambiguity averse. (A sketch of the Ellsberg logic appears at the end of this post.)

• Maybe people (rightly) shy away from unfamiliar offers, especially when the person making the offer possesses superior information – and this is the situation when experimental participants are presented with the Ellsberg game. Subjects suspect that the experimenter actually knows how many red and blue balls are in the urn.

• Chen and Schonger set up an Ellsberg experiment where the experimenter is not the party responsible for the contents of the ambiguous urn; rather, the choices of other subjects determine the contents. 

• Every subject decides which of two symbols to send to the others. In experiment 1, the symbol that gets the most “votes” is the symbol that will appear in the “ambiguous” urn for other participants (which need not be the same for all participants, incidentally). 

• All experiments involve a toss of a fair coin, where the subjects can choose to bet on either heads or tails, along with the two ambiguous options. A correct outcome yields 4€. The bet that is played for each participant is the one to which that participant assigned the highest valuation among the four bets.

• People turn out to prefer the ambiguous bets! “For each of the 16 sessions, individuals were more likely to bet on a symbol with subjective uncertainty, and in all but 2 of the 16 sessions, both bets with subjective uncertainty were more popular than the bets with objective uncertainty [p. 15].” This remains true in design 2, where there is a full-on urn and not just a specific symbol chosen by others.
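
• For reference, a minimal sketch (mine, not the authors’) of the logic behind the classic Ellsberg pattern: no single subjective probability for the ambiguous urn can rationalize strictly preferring the known 50/50 urn for bets on both colors. The prize size echoes the 4€ payoff above; the probability grid is illustrative.

```python
# Why the classic Ellsberg pattern is inconsistent with assigning any single
# subjective probability to the ambiguous urn.  The prize size echoes the
# 4 euro payoff above; the probability grid is an illustrative assumption.

PRIZE = 4.0

def ev(p):
    """Expected value of a bet that pays PRIZE with probability p."""
    return p * PRIZE

known = ev(0.5)   # betting on either color of the known 50/50 urn

# The Ellsberg pattern: strictly prefer the known urn whether betting on red
# or on black in the ambiguous urn.  If the ambiguous urn is assigned some
# subjective probability p of red, both ambiguous bets cannot be worth less
# than the known bet, since p and 1 - p cannot both be below 0.5.
rationalizing_ps = [p / 100 for p in range(101)
                    if ev(p / 100) < known and ev(1 - p / 100) < known]
print(rationalizing_ps)   # [] -- no single p rationalizes the pattern
```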

Monday, July 30, 2018

Golman, Loewenstein, Moene, and Zarri (2016) on Belief Consonance

Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri, “The Preference for Belief Consonance.” Journal of Economic Perspectives 30(3): 165-188, Summer 2016 [pdf].

• People like to have beliefs that accord with the beliefs of others. Sharing beliefs enhances our connection to our group. 

• Much of world conflict is about beliefs – often about rather subtle differences in beliefs. Recall that people protect beliefs in which they have invested heavily. 

• Conflict over small differences in beliefs might arise because our beliefs are most threatened by those who are otherwise similar to us. 

• People do not like to have their beliefs challenged, so media have incentives not to challenge beliefs.

• Beliefs might come first, and only then do we develop the “rational” reasons that we hold them; see Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon, 2012. 

• Belief consonance can be self-reinforcing: when someone stubbornly refuses to agree with me, I can attribute her stubbornness to her own interest in protecting her initial beliefs – and therefore I do not have to reconsider my own beliefs. 

• In the interest of possessing consonant beliefs, all people might believe X, but believe that everyone else believes “not X”: “pluralistic ignorance”. See Timur Kuran, Private Truths, Public Lies: The Social Consequences of Preference Falsification, Harvard, 1998. 

• In trust and dictator games, people are more generous when paired with members of their own political party.

• Two (complementary?) approaches to the enticements of belief consonance: (1) desire to match beliefs with a group that you are in, or want to join; and (2) desire to maintain desirable beliefs about yourself.

• On the whole, belief consonance probably is detrimental to society.

Friday, July 20, 2018

Bénabou and Tirole (2016) on Motivated Beliefs

Roland Bénabou and Jean Tirole, “Mindful Economics: The Production, Consumption, and Value of Beliefs.” Journal of Economic Perspectives 30(3): 141-164, Summer 2016.

• People value their beliefs, both directly and for instrumental reasons; therefore, beliefs can withstand counter-evidence, and people may prefer to steer clear of evidence.

• Positive beliefs can be nurtured by responding asymmetrically to good and bad news.

• Shared beliefs also can be resistant to evidence, possibly with dreadful consequences.

• Bénabou and Tirole treat beliefs as if they were standard economic goods or assets, which people consume, invest in, produce, and so on.

• Optimism can serve as a sort of commitment device to stick with long-term projects and to avoid temptation. False beliefs can also influence other people in a way that might serve your interests. Religious beliefs might contribute both to self-discipline and to an improved view of yourself and your future.

• Methods to protect valued beliefs include “strategic ignorance, reality denial, and self-signaling [p. 144].” More educated people are not better shielded from employing these methods. 

• Confirmation bias allows people to believe that their previous views have been corroborated. They anticipate this future corroboration, and this allows them to take on risky projects. 

• Emotional responses to a challenge to your beliefs are a signal that your beliefs are something you are trying to protect. Standard rationality suggests that challenges should be welcomed -- shades of J. S. Mill.

• There’s a trade-off between accepting bad news and optimizing decisions given reality versus living in blissful ignorance for a while before the piper must be paid.

• Because it is easier to recall our actions than it is to recall our motives, we might try to self-signal, by choosing actions that will allow us to later have a good (but distorted) view of ourselves. 

• In laboratory experiments, subjects have a harder time remembering (accurately) their past failures.

• Self-deception rises with the size of the sunk costs. People persuade themselves of the future value of these investments. 

• People are quite good at believing they had sound reasons for their bad behaviors. 

• Blind persistence in social projects in the face of bad news might make the situation more tolerable. But if persistence can lead to bigger losses, watch out for “Mutually Assured Delusion.” Unfortunately, it is in this situation that denial becomes contagious. 

• Organizational failures might be a result of bad beliefs as much as bad incentives. 

• “Just-world” beliefs and the extent of the welfare state are negatively correlated. 

• Constitutional guarantees of free speech and a free press might serve as a commitment device to allow dissent even in future bad states; they also reduce the protection of current beliefs, because of the increased likelihood that those beliefs will be challenged anyway.