Alberto Alemanno, “Nudging Healthier Lifestyles: Informing the Non-communicable Diseases Agenda with Behavioural Insights.” Chapter 14 in A. Alemanno and A. Garde, eds., Regulating Lifestyle Risks: The EU, Alcohol, Tobacco and Unhealthy Diets, Cambridge University Press, 2015.
• The European section of the World Health Organization has begun an initiative to combat “lifestyle” non-communicable diseases: those associated with tobacco, alcohol, and goods high in salt, sugar, and fat.
• Regulations commonly applied to lifestyle goods include mandatory information disclosure, marketing limitations, and taxes and other devices that reduce availability.
• Behavioral economics-style nudges fit well with lifestyle issues. They tend to be low cost and preserve individual choice. Lifestyle nudges might include disclosure rules, default settings, and simplification. For instance, the extremely graphic photos required on cigarette packs, and nutrition fact panels, are nudges of a sort.
• The European Union (oddly) is requiring that cigarette packs not disclose tar, nicotine, and carbon monoxide content. The rationale is that such information might lead smokers to think that some cigarettes are less harmful than others. (There is some evidence that consumers smoke low tar and nicotine cigarettes more intensely, undoing any relative health benefits.)
• Thaler and Sunstein’s “publicity principle” (borrowed from John Rawls, building on Kant) is that governmental nudges must be defensible if everyone is informed about them. If people would be upset to learn that they had been misled, even if the misleading resulted in better choices for them, then the misleading is wrong.
• Much of the evidence for nudges comes from the laboratory, and nudges might not work in the real world. Or a nudge might work at first but become less effective over time. Note, however, that marketing experts think that nudges work.
Sunday, July 26, 2015
Saturday, July 25, 2015
Sunstein on Choosing Not to Choose
Cass R. Sunstein, “Choosing Not to Choose.” Duke Law Journal 64(1): 1-52, October 2014.
• This article essentially responds to a critique of one common behavioral economics-style policy intervention, the strategic use of default settings. The critique is that using default settings to nudge people is an affront to their autonomy, even if it is easy to choose something other than the default.
• Sunstein’s response is to suggest that a failure to provide a default could just as easily be an affront to autonomy: often, people do not want to choose, they want a well-chosen default in place so that they can avoid choosing.
• Therefore, mandating an active choice (as opposed to setting up a default) can itself be paternalistic, and maybe not even in a libertarian fashion. Sunstein calls it “choice-requiring paternalism” (page 7).
• A compromise between using a default and requiring an active choice would be to ask people if they want to choose, knowing that if they do not, they can “opt out” of choosing and receive a default choice (which they could alter) instead; this approach is called “simplified active choosing” (page 8).
• Of course, the question of whether you want to choose might itself be an unwelcome query.
• If you can’t trust the default, there is much to be said for active choosing. But if the situation is unfamiliar, then why choose, if the default setters can be trusted?
• Active choice might make more sense in situations where you can learn about choices over time, or develop your preferences.
• The rise of big data means that sellers might know better than consumers what the consumers want.
• One reason not to want to choose is to deflect responsibility. Also, people have plenty of decisions to make (200 a day for food, we are told) and might prefer to focus their autonomy on an important but manageable subset of decisions.
• Sometimes active choice is to protect third parties, or to provide credible proof that a choice was made in a deliberate fashion. This might be the case with organ donation, for instance.
• Is GPS undermining our ability to navigate? Does Pandora hinder learning about different types of music? Jane Jacobs noted that cities bring about unwelcome but nevertheless consciousness-expanding encounters, by providing an architecture of serendipity.
• People seem to value more highly options they have chosen than options they have been assigned, that is, they have a bias in favor of choice.
• Would you be willing to have your book purchases chosen by big data? How would you feel about being defaulted into a regime where big data chose your books and charged your bank account? (You could return the books for a refund.) Would you feel differently if we were talking not about books but about household goods like paper towels or detergent?
• In summary, choice can sometimes be a curse, undermining your autonomy and your welfare.
• This article later spawned a book with the same title.
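The delegation tradeoff Sunstein describes can be caricatured in a few lines of Python: choose actively when deciding is cheap relative to the expected loss from a possibly ill-fitting default, and delegate otherwise. The function and all numbers below are illustrative assumptions, not anything from the article:

```python
def should_delegate(decision_cost, p_default_wrong, loss_if_wrong):
    """Delegate to the default when deciding yourself costs more than
    the expected loss from accepting a possibly ill-fitting default."""
    expected_default_loss = p_default_wrong * loss_if_wrong
    return decision_cost > expected_default_loss

# Familiar, high-stakes choice: deciding is cheap, default errors are costly.
print(should_delegate(decision_cost=1.0, p_default_wrong=0.3, loss_if_wrong=50.0))  # False
# Unfamiliar, low-stakes choice with a trustworthy default setter.
print(should_delegate(decision_cost=5.0, p_default_wrong=0.05, loss_if_wrong=10.0))  # True
```

The "unwelcome query" point fits here too: asking whether you want to choose imposes its own (small) decision cost.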
Loewenstein, Bryce, Hagmann, and Rajpal, “Warning: You are About to Be Nudged”
George Loewenstein, Cindy Bryce, David Hagmann, and Sachin Rajpal, “Warning: You are About to Be Nudged,” March 28, 2014.
• Does informing people about the use of a behavioral nudge – here, default choices – alter their behavior relative to using the nudge without informing them?
• The experiment involves a hypothetical directive concerning end-of-life care. Subjects could choose the “Prolong” option, in which medical authorities do whatever is necessary to keep someone alive, despite the potential for more suffering, or subjects could choose the “Comfort” option, in which doctors would try to make the patients comfortable, at some cost in terms of longevity. Subjects also could choose to deputize their relatives and doctors to make the choice for them at the appropriate time.
• Each subject was provided with a default option, either “Prolong” or “Comfort,” though it was easy to override the default. Most people preferred the “Comfort” alternative, and the default setting did not influence these preferences in the aggregate. Telling people in advance about the extraordinary staying power of defaults had no effect relative to telling them after their initial choice.
• Along with the general Prolong/Comfort option, there were questions concerning five specific medical interventions. Choices on whether to pursue these options showed a significant influence from the default settings, even when subjects were pre-informed that they were being “defaulted.” Some of the influence remained for the respondents who were post-informed that they had been defaulted, and were asked to choose again without a specified default.
• Pre-informing people of the default had little impact on diminishing the power of the default nudge; likewise, post-informing them of the default, and giving them the opportunity to choose again, did not diminish the power of the default. Nonetheless, these findings take place in an experimental context in which default pressures themselves are not large.
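A rough sketch of how one might tabulate such results: compare, across disclosure conditions, the share of subjects whose final choice matched their assigned default. The counts below are invented for illustration, not the paper’s data; the pattern they mimic is the paper’s finding that disclosure barely dents the default effect:

```python
# Hypothetical counts: subjects whose final choice matched their assigned
# default, under three disclosure conditions (numbers are made up).
conditions = {
    "uninformed":    {"matched_default": 62, "n": 100},
    "pre-informed":  {"matched_default": 58, "n": 100},
    "post-informed": {"matched_default": 55, "n": 100},
}

for name, c in conditions.items():
    rate = c["matched_default"] / c["n"]
    print(f"{name:>13}: {rate:.0%} chose the defaulted option")
```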
Bartling, Fehr, and Herz, “The Intrinsic Value of Decision Rights”
Björn Bartling, Ernst Fehr, and Holger Herz, “The Intrinsic Value of Decision Rights.” University of Zurich, Department of Economics Working Paper No. 120, April 19, 2013 [updated version available].
• Consider a principal-agent situation, where the principal would like to have a task accomplished. The principal can control fully the choice of task and effort, or can delegate the choices to an agent with different preferences -- though the delegation, if it takes place, can specify a minimum effort level. The question that the authors explore is whether the principal is willing to sacrifice some expected return just to keep control.
• In their experiment, the answer is… “Yes”: principals give up more than 16% in certainty equivalent terms to control the choices. The higher the stakes, the greater the intrinsic value that principals place on control. Also, and oddly, the closer the alignment between principal and agent preferences, the greater the intrinsic value of control to the principal, even though the agent would make similar choices to what the principal makes, and the principal knows that.
• Note that the intrinsic value of control or ownership is non-transferable; it is subject to a sort of endowment effect. Sometimes proposed corporate mergers become undone because neither group of executives is willing to cede control.
• Entrepreneurs and scientists seem to sacrifice income for control.
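The 16% figure can be made concrete with a toy certainty-equivalent calculation. The square-root utility function and the lottery below are illustrative assumptions, not the experiment’s parameters:

```python
import math

def certainty_equivalent_sqrt(outcomes, probs):
    """Sure payoff with the same expected utility as the lottery,
    under square-root (risk-averse) utility."""
    eu = sum(p * math.sqrt(x) for p, x in zip(probs, outcomes))
    return eu ** 2  # invert u(x) = sqrt(x)

# Lottery the principal faces if she delegates to the agent.
ce_delegate = certainty_equivalent_sqrt([100.0, 25.0], [0.7, 0.3])
# A principal sacrificing 16% of that certainty equivalent to keep
# control would accept any sure payoff above this threshold:
keep_control_threshold = 0.84 * ce_delegate
print(round(ce_delegate, 2), round(keep_control_threshold, 2))  # 72.25 60.69
```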
Saturday, July 18, 2015
M. Ryan Calo (2013), “Code, Nudge, or Notice?”
M. Ryan Calo, “Code, Nudge, or Notice?” University of Washington School of Law Research Paper, February 7, 2013.
• Code (including physical and virtual architecture), nudges, and information disclosure (notice) can be alternatives to formal law. They can alter behavior, but they might not include the procedural safeguards, nor the transparency, that commonly accompany law. Some nudges, for instance, can be invisible and unknown to the public. Like placebos, they might even lose their effectiveness if they were known.
• Consider driving. Speed bumps are a type of “code”; visual illusions like those lines on Lake Shore Drive are a type of nudge; and “kids at play” signs are a species of notice.
• If code precludes violations, then the potentially beneficent role of civil disobedience is undermined. Code makes some types of legal activity impossible; for instance, some fair uses of copyrighted material are ruled out by Digital Rights Management techniques.
• Calo invokes a rather singular definition of “nudges,” such that they necessarily take advantage of decision-making biases (such as the status quo bias) to push people in a preferred (by whom?) direction. Code rebiases, as opposed to debiases.
• Does frequent nudging lead to infantilization?
• Notice works where informed decision making works (and informing works, too). Some scholars take a very negative view of mandated information disclosure, on the grounds that it is frequently ineffective and even counterproductive.
• Are extremely graphic cigarette warnings a case of information provision (notice), or nudge? The answer might determine the constitutionality of mandates for such warnings.
Cooper and Kovacic (2012) on Regulators
James C. Cooper and William E. Kovacic, “Behavioral Economics: Implications for Regulatory Behavior.” Journal of Regulatory Economics 41: 41-58, 2012.
• The paper models regulation as a principal-agent problem. The principal is the legislative or executive overseer, whose preferences are assumed to be short-sighted. The agent is the regulator, who thinks she knows the (socially) optimal policy – but she might be biased -- and pays a price when the policy choice varies from what she views as best. The regulator also has career concerns that favor obeying the legislator.
• If the regulator puts no weight on pleasing her boss, or if the overseer is unable to punish a wayward agent, the regulator will choose what she thinks is the first-best policy.
• Behavioral biases (like availability, optimism, hindsight) make the regulator more like the short-term politician. The call to “do something” might push the confirmation bias in the direction of the principal’s preferences, too. The status quo bias and the confirmation bias have ambiguous implications for the regulator’s policy choices.
• Regulators only hear about some issues when intervention is requested by a constituent – creating an anchoring effect. Publicly announced positions also generate an anchor.
• Regulators do not operate in a competitive market environment, and their sources of feedback are not as strong or timely as firms often receive – so biases can persist. Poor regulators generally do not exit the profession. Indeed, the feedback received from legislators is likely to result in regulators who are too short-term oriented, or whose biases make them behave as if they were.
• Internal and external adversarial proceedings might aid regulatory choices. The regulators can set up an A-Team and a B-Team and let them present their cases. (The FTC has economists work up a case independently of lawyers.) More ex post evaluations, focused on outcomes, not outputs, along with longer tenure for regulators, could help.
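One standard way to formalize the overseer-regulator tension is with quadratic losses, under which the chosen policy is a weighted average of the regulator’s and the overseer’s ideal points. This is a stylized sketch in that spirit, not the authors’ exact model:

```python
def regulator_policy(own_ideal, overseer_ideal, career_weight):
    """Minimizes (x - own_ideal)**2 + career_weight * (x - overseer_ideal)**2;
    the optimum is a weighted average of the two ideal points."""
    return (own_ideal + career_weight * overseer_ideal) / (1 + career_weight)

print(regulator_policy(10.0, 4.0, 0.0))  # 10.0: no career concerns -> regulator's first-best
print(regulator_policy(10.0, 4.0, 2.0))  # 6.0: career concerns pull policy toward the overseer
```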
Friday, July 17, 2015
Rizzo and Whitman (2009) on the Slippery Slopes of Soft Paternalism
Mario J. Rizzo and Douglas Glen Whitman, “Little Brother Is Watching You: New Paternalism on the Slippery Slopes [pdf].” Arizona Law Review 51: 685-739, 2009.
• A slippery slope argument is one suggesting that adopting a policy in the instant case will increase the probability of another, substantially more worrisome policy in a “danger case” (Schauer’s terminology) down the road.
• When cases vary along a continuum, then it might be hard to settle on a stable dividing line; such situations seem ripe for slippery slopes. Legal precedents can slide down the slope, for example, because of the inability to distinguish nearly (but not precisely) identical cases.
• Policies involve fixed costs that, once paid, make it inexpensive (in terms of resources) to slide down the slope.
• The common law typically endorses business customs and practices as its method of establishing defaults. Choice architects are not similarly disciplined, and hence are likely to both shift and increase transaction costs.
• Soft paternalists shift the frame into one where intervention is the default condition, and the only question is how much to intervene. They do not sufficiently distinguish between public and private policies.
• There is an inescapable arbitrariness in assigning any single discount rate for welfare comparisons involving quasi-hyperbolic people. A similar point applies to siding with cold state or hot state preferences.
• Soft paternalism can crowd out, perhaps completely, self-regulation.
• Quasi-hyperbolic policymakers will overvalue short-term gains and undervalue long-term costs.
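The quasi-hyperbolic (beta-delta) preferences mentioned above can be illustrated in a few lines. The parameter values are arbitrary, chosen only to show the familiar reversal between near-term and distant tradeoffs:

```python
def beta_delta_value(rewards, beta=0.7, delta=0.95):
    """Present value of a reward stream (t = 0, 1, 2, ...) under
    quasi-hyperbolic discounting: u_0 + beta * sum of delta**t * u_t."""
    return rewards[0] + beta * sum(
        delta ** t * r for t, r in enumerate(rewards[1:], start=1)
    )

# Viewed today, a small reward now beats a larger one tomorrow...
now_small = beta_delta_value([10, 0])
later_big = beta_delta_value([0, 13])
# ...but the same tradeoff pushed one period out favors waiting:
planned_small = beta_delta_value([0, 10, 0])
planned_big   = beta_delta_value([0, 0, 13])
print(now_small > later_big, planned_small < planned_big)  # True True
```

The “inescapable arbitrariness” point is visible here: no single exponential discount rate reproduces both comparisons at once.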
Glaeser (2006), “Paternalism and Psychology”
Ed Glaeser, “Paternalism and Psychology [pdf].” University of Chicago Law Review 73(1): 133-156, Winter 2006.
• This article is a relatively early contribution in what has become a crowded area, critiques of the policy prescriptions -- typically soft or hard paternalistic policies -- that often accompany behavioral economics analyses.
• The sorts of shortfalls from full rationality that behavioral economics documents strengthen the case against government intervention. (Glaeser focuses on bounded rationality, not self-control shortcomings.) The key consideration is that mistakes are not exogenous, but respond to incentives.
• Consumers have better incentives to make choices that are valuable to them than do bureaucrats, who are choosing for other people. Lab experiments generally do not capture the full scope of methods that people have for informing themselves about important decisions.
• A firm might find it in its interest to persuade people to buy its product or behave in a certain fashion. It is probably cheaper to convince one bureaucrat (or a small number of bureaucrats) than it is to directly influence millions of consumers; therefore, leaving decisions in the market and not coerced by bureaucrats will mute the power of such a firm.
• Consumers have better incentives to be informed and to choose wisely in their market activity than in their voting behavior. Politicians can be poor agents for voters.
• Soft paternalism can become costly if implemented unwisely by bumbling politicians.
• Do we want government to have more scope for, and practice at, persuading citizens? Propaganda is effective. Many policy positions that now are widely reviled were once promoted by the government.
• The emotional tax of soft paternalism might be dominated by an explicit tax – and soft paternalistic policies are harder for citizens to monitor. Further, political economy considerations suggest that soft paternalism is a road to hard paternalism.
Camerer, et al. (2003) on Asymmetric Paternalism
Colin Camerer, Samuel Issacharoff, George Loewenstein, Ted O'Donoghue, and Matthew Rabin, “Regulation for Conservatives: Behavioral Economics and the Case for ‘Asymmetric Paternalism’.” University of Pennsylvania Law Review 151(3): 1211-1254, 2003.
• An asymmetric paternalistic regulation creates large gains for the less-than-rationals, while imposing little upon rationals.
• Are there predictable circumstances in which people tend to be less than rational?
• Asymmetric paternalism as a method for correcting internalities. The tradeoff is meant to assure that the interventions do not impose high costs on rationals.
• It cannot be taken as given that people’s choices maximize their well-being. It’s an empirical issue, of course.
• Many current regulations are asymmetrically paternalistic.
• Defaults have to take into account what would be the most common best choice, as well as the possibility that the costs of error are asymmetric.
• Cooling-off periods allow bad decisions to be reversed, but generally lower the value of the decision when it is rational. If sellers bear costs when decisions are reversed, they might want to ensure rational deliberation ex ante. Should consumers be allowed to waive cooling-off periods?
• Constitutions implement the equivalent of cooling-off periods.
• Are there private incentives to provide paternalistic interventions? Maybe the reason such interventions are needed is the reason they will not be demanded.
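The asymmetry can be expressed as a simple expected-welfare calculation: a policy passes the test when the gains to the boundedly rational outweigh the small costs imposed on the rational. This is a sketch with invented numbers, not a formula from the article:

```python
def net_benefit(p_boundedly_rational, gain_to_them, cost_to_rationals,
                implementation_cost=0.0):
    """A policy is asymmetrically paternalistic when it helps the
    boundedly rational a lot while burdening the rational very little."""
    return (p_boundedly_rational * gain_to_them
            - (1 - p_boundedly_rational) * cost_to_rationals
            - implementation_cost)

# E.g. a cooling-off period: large gains for the 20% prone to regret,
# a small inconvenience for everyone else. (Illustrative numbers.)
print(round(net_benefit(0.2, gain_to_them=100.0, cost_to_rationals=2.0), 2))  # 18.4
```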
Monday, July 13, 2015
Sayette et al. (2008), “Exploring the Cold-to-Hot Empathy Gap in Smokers.”
Michael A. Sayette, George Loewenstein, Kasey M. Griffin, and Jessica J. Black, “Exploring the Cold-to-Hot Empathy Gap in Smokers.” Psychological Science 19: 926-932, 2008.
• The cold-to-hot empathy gap is the notion that people (when in the cold state) underestimate the extent to which visceral factors will impact their future decisions (made in the hot state).
• The experimental set-up: smokers know that in the next session they will be in a craving state. They are asked to precommit to a willingness to pay (wtp) to accept craving (in the form of delayed access to a smoke). Some participants are asked for this wtp when they are craving, and others are asked when they aren’t.
• In session 2, now with everyone in the hot state (craving), they are given a chance to revise their wtp.
• Results: people in the cold state in the first session revise upward their wtp in the second session: they seem to suffer from a cold-to-hot empathy gap. The cold group also seems to underpredict the depth of their future cravings.
Bernheim and Rangel (2004) on Cues and Addiction
B. Douglas Bernheim and Antonio Rangel, “Addiction and Cue-Triggered Decision Processes [pdf].” American Economic Review 94(5): 1558–1590, 2004.
• Three premises: (1) use among addicts is frequently a mistake; (2) experience sensitizes users to environmental cues that trigger mistaken usage; (3) addicts understand their susceptibility to cue-triggered mistakes.
• Addictive substances interfere with how the brain forecasts near-term hedonic rewards, leading to the cue-conditioned problems.
• Upon exposure to cues, you might enter a “hot” mode, in which you consume the substance irrespective of preferences. Sensitivity to cues depends on past consumption. “Wanting” a drug is not the same as “liking” the drug; see Robinson and Berridge (1993, 2001).
• Some patterns of addictive behavior that the model is consistent with: (1) unsuccessful attempts to quit and recidivism even though the short-term, painful withdrawal costs have already been incurred; (2) cue-triggered recidivism; (3) self-described mistakes, even in the act of consuming; (4) precommitment strategies; and (5) the use of behavioral and cognitive therapy.
• The model involves a consumer who enters each period in a cold state, but who can choose among different lifestyles for that period; each lifestyle has a different prospect of presenting cues, and hence of being forced to mistakenly consume the drug.
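The cue mechanism in these premises can be sketched numerically. This is a toy illustration of the logic, not the authors' formal model; the cue probabilities, starting sensitivity, and sensitization rate are invented for illustration.

```python
import random

def simulate_use(cue_prob, periods=500, sensitivity=0.1,
                 sensitization=0.02, seed=1):
    """Each period, a cue arrives with probability cue_prob (set by the
    chosen lifestyle); a cue triggers hot-mode consumption with
    probability equal to current sensitivity, which rises with each
    past use (premise 2: experience sensitizes users to cues)."""
    rng = random.Random(seed)
    uses = 0
    for _ in range(periods):
        if rng.random() < cue_prob and rng.random() < sensitivity:
            uses += 1                         # mistaken, hot-mode use
            sensitivity = min(1.0, sensitivity + sensitization)
    return uses

# A cue-poor lifestyle yields fewer mistaken uses than a cue-rich one,
# which is why the cold-state choice of lifestyle matters in the model.
low_exposure = simulate_use(cue_prob=0.05)
high_exposure = simulate_use(cue_prob=0.60)
```

Note how sensitization makes exposure self-reinforcing in the sketch: each triggered use raises the chance that the next cue also triggers use.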
Loewenstein (1996) is “Out of Control”
George Loewenstein, “Out of Control: Visceral Influences on Behavior [pdf].” Organizational Behavior and Human Decision Processes 65: 272-292, March 1996.
• People can make choices that are mistaken, and make them in full knowledge, at the time of choosing, that they are making a mistake. This disjunction between perceived self-interest and behavior is caused by intense visceral factors such as cravings, “drive states” (hunger, thirst, sexual desire), moods and emotions, and pain.
• While rational choices require that visceral states be taken into account, many types of self-destructive behavior seem to involve an excessive influence of visceral factors on choices. Indeed, some intense visceral factors seem to preclude “decision making” – no one chooses to fall asleep while driving.
• Sales people, con men, cults: all take advantage of visceral factors. They tend to apply pressure for immediate action – visceral factors fade over time.
• People under the sway of intense visceral factors tend to narrow their focus. Addicted people narrow their focus to the addictive good; people become more self-centered.
• Loewenstein’s seven propositions amount to: “visceral factors operating on us in the here and now have a disproportionate impact on our behavior [p. 276].” Alternatively, the same factors, past or future, or experienced by someone else, are underweighted, despite substantial experience with visceral factors.
• Quasi-hyperbolic or other non-exponential discounting approaches have two significant limitations: (1) people seem to have such preferences only with respect to certain kinds of choices; and, (2) time delay is not the only feature of a choice situation that seems to involve impulsivity. Physical proximity and sensory contact, for instance – the smell of cookies! – also play a role.
• Almost any cue associated with a reward can produce an appetitive response – especially a priming dose.
• Vividness (such as terrorist attacks) affects decisions. Does vividness alter subjective probabilities, or does it intensify the emotions associated with thinking about the outcome?
• Though visceral states have an undue influence on behavior, people are not fully sophisticated about the extent to which their own future (or past!) behavior will respond to visceral forces. A recovering alcoholic might underestimate how difficult abstaining will be if he goes to the office Christmas party.
• Rational addiction is wrong because it doesn’t fit the facts – e.g., addicts should buy in bulk to save money on their long term habit; further, the “rapid downward hedonic spiral” is “difficult to understand” as a rational choice. So why do people become addicted? In part, because they underestimate the influence of craving and withdrawal on their future behavior. They wrongly believe they will be able to quit.
• People blame themselves for a lack of past effort because they discount the effect of fatigue.
• The visceral model suggests a multiple-selves interpretation. The farsighted self is the one relatively immune from visceral factors, and is much more consistent over time than the selves who are influenced by fluctuating visceral factors.
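For reference, the quasi-hyperbolic (β-δ) discounting that Loewenstein finds limited can be sketched in a few lines, including the preference reversal it generates. The β, δ, and dollar values are illustrative assumptions, not taken from the paper.

```python
def beta_delta_value(reward, delay, beta=0.7, delta=0.99):
    """Quasi-hyperbolic present value: full weight on an immediate
    reward, beta * delta**t on a reward delayed t >= 1 periods."""
    return reward if delay == 0 else beta * (delta ** delay) * reward

# Viewed from today, $110 in 31 days beats $100 in 30 days...
prefers_larger_later = beta_delta_value(110, 31) > beta_delta_value(100, 30)
# ...but once day 30 arrives, $100 now beats $110 tomorrow: a reversal
# driven purely by time delay, with no role for physical proximity or
# sensory cues -- exactly the limitation Loewenstein points out.
prefers_smaller_now = beta_delta_value(100, 0) > beta_delta_value(110, 1)
```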
Becker, Grossman, and Murphy (1991) on Rational Addiction
Gary S. Becker, Michael Grossman and Kevin M. Murphy, “Rational Addiction and the Effect of Price on Consumption [pdf].” American Economic Review 81(2): 237-241, May 1991.
• This paper provides a capsule summary of what might be considered the non-behavioral model of addiction, the approach to addictive behavior based on full-on standard rational economic choice; the original, fuller treatment is Becker and Murphy (1988). (A still earlier precursor is Stigler and Becker (1977), "De Gustibus Non Est Disputandum.")
• A consumer's preferences can be described by the utility function U(t) = u[c(t), S(t), y(t)], where y is a non-addictive good, c is an addictive good, and S is the stock of addictive capital; t represents the time period. c(t), therefore, is consumption of the addictive good in time period t.
• Reinforcement: the higher the stock of addictive capital, the higher the current consumption of the addictive good. The stock of addictive capital S comprises past consumption of the addictive good, though this stock depreciates at a constant per period rate, so if someone chooses to go cold turkey, S would wither away over time.
• Rationality here, as elsewhere, means having fixed, forward-looking preferences.
• There exists a low-consumption, unstable steady state, as well as a high-consumption (addicted), stable steady state. At a steady state, each period's chosen (optimal) consumption of the addictive good (c) equals the depreciation in S, so that the next period, the consumer wakes up with an unchanged stock of addictive capital -- and hence once again chooses the same consumption c, and so on.
• Implications: long-run elasticities are greater in magnitude than short-run elasticities; current consumption responds to anticipated future price changes; past, current, and future consumption are mutually complementary. These implications can be tested, and the tests (generally on legal addictive behaviors such as smoking) tend to support rational addiction as opposed to myopic (non-forward-looking) behavior.
• The government cannot help rational addicts by making it harder for them to procure their drug of choice. They freely chose to become addicted, knowing the consequences of their behavior. If they had to do it all over again, they would make the same choices. Addicts might not be very happy, but their other choices were even less palatable to them than was becoming an addict. They probably have high discount rates, however, as the future costs of current consumption of the addictive good did not weigh heavily in their decisions.
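The stock dynamics behind reinforcement and the steady-state condition (consumption exactly offsetting depreciation) can be sketched mechanically, ignoring the optimizing choice of c. The depreciation rate and stock levels here are illustrative assumptions, not values from the paper.

```python
def update_stock(S, c, dep=0.2):
    """Addictive capital: next period's stock is the undepreciated
    part of the current stock plus current consumption."""
    return (1 - dep) * S + c

# Steady state: consumption exactly offsets depreciation (c = dep * S),
# so the consumer wakes up with an unchanged stock every period.
S_star = 10.0
c_star = 0.2 * S_star
steady = update_stock(S_star, c_star)       # equals S_star

# Cold turkey: with c = 0, the stock withers away geometrically.
S = S_star
for _ in range(20):
    S = update_stock(S, 0.0)                # S falls toward zero
```

In this mechanical sketch, any constant consumption level c drives the stock toward c/dep regardless of the starting point; the economics of the model lies in how the forward-looking consumer chooses c given that dynamic.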
Saturday, July 11, 2015
Burke, Luoto and Perez-Arce (2014) on Soft and Hard Commitments
Jeremy Burke, Jill E. Luoto and Francisco Perez-Arce, “Soft versus Hard Commitments: A Test on Savings Behaviors [pdf].” RAND Labor & Population WR-1055, July 2014.
• One approach to increasing savings is to offer people “commitment accounts,” which make it hard to withdraw funds and which might even result in losses if the saver does not live up to her commitments. Most people, however, turn down the opportunity to open commitment savings accounts.
• This paper looks at a softer approach, where people are given a convenient way to save money, and are encouraged to do so, but without any commitment. The idea is that more people will find such accounts attractive, perhaps raising total savings relative to both commitment accounts and the status quo.
• An online experiment is conducted with US subjects who indicated that they wanted to save more. They know they will be given $50, $100, or $500 (usually $50!), which they can receive after a brief delay, or they can save some or all of the money over the subsequent six months. Before they know which amount they are given, they are asked to make decisions regarding saving the different amounts at an annualized interest rate of 30%.
• The savings options are not the same for everyone, however. Rather, the subjects are randomly selected to either the control – a standard savings account with no withdrawal restrictions – or to a soft or a hard account. The hard account allows no withdrawals until the six months have passed; the soft account is like the control, except that subjects receive subtle, active suggestions to save. Irrespective of the account they are selected for, the vast majority of subjects save some of their experimental windfall. Nonetheless, take-up is highest for the soft account, and the amount initially saved also is highest for the soft account – including among the most impatient savers.
• After six months, the soft account leads to higher savings than does the control account. Nonetheless, as money is withdrawn from the soft accounts over the six months, the hard account leads to even higher total savings at the end of six months.
Atalay et al. (2014) on Prize-Linked Savings Accounts
Kadir Atalay, Fayzan Bakhtiar, Stephen Cheung, and Robert Slonim, “Savings and Prize-Linked Savings Accounts [pdf].” Journal of Economic Behavior & Organization 107, Part A: 86-106, November 2014.
• Lotteries are popular in the US, and poorer households tend to spend a relatively larger share of their income on lotteries.
• A prize-linked savings (PLS) account is one that enters savers in a lottery while protecting the principal, the amount of funds deposited by savers. The prize for the lottery typically is financed by paying lower interest on savings than would be paid in the absence of a lottery. It is as if a portion of all interest payments is confiscated and turned into a prize that goes to just one of the savers; the lottery usually gives each saver a probability of winning proportional to her share of overall deposits.
• PLS accounts are common in some countries, but essentially illegal in the US due to anti-gambling laws. The experiments described in this article aim to determine if US residents would find PLS accounts attractive, and whether PLS accounts would raise total savings (as opposed to just diverting savings from standard savings accounts).
• The experiments are web-based. Participants are asked about how they would allocate $100 between cash (to be received in 2 weeks’ time); standard lottery tickets (not a PLS); and standard savings (available in 10 weeks). The participants receive some recompense but not, for the most part, the actual results of their investment decisions – the starting $100 (largely) is imaginary. Later, they repeat their choices, with a PLS as an additional option.
• The experimental results suggest that the introduction of PLS accounts would increase total savings markedly. Further, much of the money invested in PLS accounts would be drawn from funds that otherwise would have gone to purchasing standard lottery tickets. These effects are particularly pronounced among lower-income people and those with little savings.
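The prize-funding mechanics described above can be sketched in a few lines. The deposit amounts and interest rates below are invented for illustration.

```python
def pls_terms(deposits, market_rate, pls_rate):
    """Prize-linked savings: principal is protected, savers earn the
    (lower) PLS rate, and the forgone interest funds a single prize
    won with probability proportional to each saver's deposit share."""
    total = sum(deposits)
    prize = (market_rate - pls_rate) * total
    win_probs = [d / total for d in deposits]
    # Expected payout = guaranteed PLS interest + lottery expected value.
    expected = [pls_rate * d + p * prize
                for d, p in zip(deposits, win_probs)]
    return prize, win_probs, expected

prize, probs, exp_pay = pls_terms([100, 300, 600],
                                  market_rate=0.05, pls_rate=0.02)
# The prize pool is the forgone interest on total deposits, and each
# saver's expected payout equals the market interest on her deposit:
# the lottery here is actuarially fair, trading a certain interest
# payment for a skewed gamble on the same expected value.
```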
Schwartz et al. (2014), “Healthier By Precommitment”
Janet Schwartz, Daniel Mochon, Lauren Wyper, Josiase Maroba, Deepak Patel, and Dan Ariely, “Healthier by Precommitment.” Psychological Science 25(2) 538–546, 2014.
• People find it hard to generate the persistent willpower needed to keep actions consistent with intentions. Features such as diffuse, future rewards from resisting temptation might contribute to this difficulty.
• Financial incentives have been shown to help with weight loss, quitting smoking, and complying with a medical regimen. They might (but only might) even spur the formation of habits, giving temporary financial incentives a lasting impact.
• Sophisticated folks are aware of their self-control shortfalls, and might welcome commitment devices.
• The field experiment involved an existing healthy eating program in South Africa that gives 25% cash back at the end of the month for healthy food purchases. Some participants were given the option of forfeiting their cash back if they did not increase the healthy component of their groceries by five percentage points. The shoppers had no direct positive incentive to take part; they could only lose money relative to not participating (as in an earlier field experiment). A control group was informed about the possibility of such a commitment contract, but was not offered the option to make the commitment.
• More than one-third of the households offered the commitment contract chose to join the commitment scheme. They were allowed to drop out after one month, however, and about one in six did drop out.
• The committed shoppers did increase their consumption of healthier items relative to the non-committed shoppers. The commitment seems crucial for converting intentions into future action. But shoppers nonetheless failed to live up to their commitments on average: “…in any given month only one-third of the committed households met their goal.”
• The standard ethical query applies: Is it ethical to offer (relatively poor?) people a commitment device when it is more than conceivable that many of them will suffer a loss of funds because they will not fulfill their commitment?
Karlan and Linden (2014) on Savings for Education in Uganda
Dean Karlan and Leigh L. Linden, “Loose Knots: Strong versus Weak Commitments to Save for Education in Uganda [pdf].” NBER Working Paper No. 19863, January 2014.
• Low threshold commitment devices (loose knots) might attract more participants, but not lead to as much behavior change as would stronger commitments.
• A field experiment involved 136 rural or near-rural primary schools in Uganda. Some schools were given a “strong” commitment savings product – withdrawals took the form of vouchers that could be used only to purchase school supplies; other schools received a weaker (loose knots) version, where withdrawals could be made in cash and spent on anything.
• Students deposited more money in the soft commitment account than in the hard commitment account, which led to more school supplies purchased and better test scores.
• The savings account employed a double-locked lock box kept at the school, with the funds transferred to a bank at the end of a trimester. The accounts earned no interest, despite significant inflation. The savings decisions were taken in public.
• When disbursements were made from the accounts, a small market was set up at each school to sell school supplies, tutoring, etc. With vouchers, this market provided the only legitimate use for the savings (other than re-deposit).
• Parent outreach does not affect savings, but it helps direct saved funds towards education. The loose knot (cash) accounts with parent outreach formed the preferred treatment in terms of incentivizing the purchase of more school supplies.
• This commitment savings intervention was very costly to implement relative to the increased savings.
Wednesday, July 8, 2015
Giné, Karlan, and Zinman (2010) on Committing to Quit Smoking
Xavier Giné, Dean Karlan, and Jonathan Zinman, “Put Your Money Where Your Butt Is: A Commitment Savings Account for Smoking Cessation.” American Economic Journal: Applied Economics 2(4): 213-235, January 2010.
• Likely heavy smokers were randomly chosen to be offered a “Committed Action to Reduce and End Smoking” (CARES) savings account. After six months, a urine test was given to indicate whether the saver had given up smoking. A failed test meant that the money in the account was forfeited. A second treatment involved giving smokers aversive cue cards as opposed to the opportunity to open a CARES account.
• CARES accounts were accompanied by weekly visits from a bank employee to collect additional deposits. Participants were urged to save the money they otherwise would have spent on cigarettes. The deposit collection seemed to be important for getting people to take up CARES.
• Only eleven percent (a total of 83) of the smokers offered CARES signed up. Smokers randomly offered CARES were a bit more likely to pass a second, surprise urine test after one year. (That is, most did not quit smoking, but quit rates were about 1/3 higher than for smokers not offered CARES.) The cue cards didn’t help induce smoking cessation, though almost everyone who was offered the cards took them.
• About 2/3 of CARES clients lost their deposits by failing the urine test. They tended to have lower deposits, though, and may have cut down on their smoking (and spending on smoking), even if they didn’t quit. Is it ethical to offer relatively poor people a commitment savings account in the foreknowledge that many of them will end up unable to collect their savings?
Ashraf, Karlan, and Yin (2006), on Commitment Savings Accounts
Nava Ashraf, Dean Karlan, and Wesley Yin, “Tying Odysseus to the Mast: Evidence From a Commitment Savings Product in the Philippines.” Quarterly Journal of Economics 121(2): 635-672, 2006.
• The authors conduct a natural field experiment to see if people will open a savings account that has no advantages except for barriers to withdrawal. The offered SEED accounts (“Save, Earn, Enjoy Deposits”) prevent depositors from accessing funds unless a target deposit amount or date is met. Most of the participants who opened accounts chose the date-based method.
• Individuals were randomly chosen to be offered a SEED account, and about 28% of those who received the offer accepted it. Others were offered nothing or were given encouragement to save more. All the people involved were bank clients who already had a regular savings account.
• All participants were given a survey aimed at identifying customers who had time inconsistent preferences. The survey indicated that 27.5% of respondents were hyperbolic, while a surprising 19.8% were reverse hyperbolic, more patient today than for future choices. Hyperbolic women (but not men) are more likely to take up the SEED offer.
• The Intent to Treat (ITT) effect reveals the impact of being offered (not necessarily accepting) the SEED account. The ITT effect involved a significant increase in savings. (The encouragement-to-save option did not increase savings.) The Treatment on the Treated effect reveals the increase in savings for those who opened a SEED account relative to controls who would have opened one if offered; here, it is about four times higher than the ITT effect.
• Is it ethical to offer relatively poor people a type of savings account whereby it is more than conceivable that they will never be able to recover their funds because they did not reach their savings goal? After one year, only 6 of the 62 participants who opened an amount-based account achieved their goal and hence could access their funds (page 657).
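The ITT/ToT arithmetic can be sketched in a few lines. Under one-sided noncompliance (controls cannot open a SEED account), the effect on actual account openers is the ITT effect scaled up by the take-up rate, the so-called Bloom estimator. In the sketch below the savings figures are hypothetical; only the 28% take-up rate comes from the summary above:

```python
# Sketch of the Intent-to-Treat (ITT) vs. Treatment-on-the-Treated (ToT)
# arithmetic under one-sided noncompliance (controls cannot open a SEED
# account). The savings numbers are hypothetical; only the 28% take-up
# rate is taken from the summary above.

def itt_effect(mean_offered, mean_control):
    """ITT: difference in mean savings between those randomly offered
    the account (whether or not they accepted) and controls."""
    return mean_offered - mean_control

def tot_effect(itt, takeup_rate):
    """Bloom estimator: the effect on actual openers is the ITT
    scaled up by the take-up rate."""
    return itt / takeup_rate

itt = itt_effect(mean_offered=330.0, mean_control=100.0)  # hypothetical pesos
tot = tot_effect(itt, takeup_rate=0.28)

print(round(itt), round(tot))  # 230 821 -- ToT is about 3.6x the ITT
```

With a 28% take-up rate, the ToT effect is mechanically about 1/0.28, or roughly 3.6 times, the ITT effect, consistent with the "about four times higher" figure in the summary.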
Kube et al. (2012) on Non-Monetary Gift Exchange in the Workplace
Sebastian Kube, Michel André Maréchal, and Clemens Puppe, “The Currency of Reciprocity: Gift Exchange in the Workplace.” American Economic Review 102(4): 1644-1662, June 2012.
• The authors set up a field experiment, where people were hired for only three hours to type bibliographical information into a database. The workers did not know at the time that they were participating in an experiment.
• The workers were paid a pre-agreed wage, but at the start, most of them (all except for the baseline sample) were presented with an unexpected gift. The main issue was whether they received a monetary bonus of 7 euro, or a thermos bottle that cost 7 euro.
• The cash gift did not increase worker productivity relative to the “no gift” baseline, but the bottle brought about a 25% productivity hike. When workers could choose between the two gifts, most (18 of 22) took the cash, but having the choice still boosted their performance, as much as receiving the bottle without a choice did. Whether or not workers were informed of the price of the thermos bottle made no difference.
• The largest performance enhancement came when the gift was seven euro cleverly presented in the form of an origami shirt-person.
• Quality (accuracy) of typing performance (as opposed to quantity) also improved with the gift treatments, including with the cash gift. Origami cash also had the largest quality improvement, whereas the thermos bottle barely affected quality relative to no gift; cash (sans origami) was marginally better than the bottle on the quality dimension.
• Is it the thought that counts?
Monday, July 6, 2015
On Varying the Stakes in Ultimatum Games (2011)
Steffen Andersen, Seda Ertaç, Uri Gneezy, Moshe Hoffman, and John A. List, “Stakes Matter in Ultimatum Games.” American Economic Review 101: 3427–3439, December 2011.
• A standard result is that varying the stakes does not lead to much of a change in the outcomes of ultimatum game (and related game) experiments. The ultimatum game is of interest in itself, but also because it seems to hold lessons for any “take-it-or-leave-it” bargaining situation.
• Andersen et al. (2011) challenge this standard result. In particular, they hope to see if “proposers” offer more “unfair” splits when the stakes are high, and if responders turn down unfair splits, even when the stakes are significant.
• In the reported experiments, conducted in villages in India, the stakes are altered by a factor of 1000. The highest-stake version is on the order of one year’s income.
• The ultimatum game that the authors employ is structured in such a way as to nudge proposers into making “unfair” offers. Otherwise, the experimenters suspect that there will not be enough unfair offers to test reliably the willingness of responders to turn down unfair offers at high stakes. (The ultimatum game as it is typically implemented has its own share of nudge issues.)
• In the experiments, raising the stakes monotonically decreases the average percentage of the pie “offered,” though the absolute monetary amount offered increases. At the highest stakes, there is but one rejection in 24 trials. Nevertheless, at the second-highest level of stakes (about one month's income), more than one-quarter of the proposals are rejected.
• Is it ethical to go to relatively poor villages and offer some people the potential for one year's or one month's income -- along with the (likely) prospect that some of those selected people will proceed to "lose" that stake, after being nudged towards an "unfair" offer that raises the probability of their receiving nothing? Behavioral economics experiments sometimes challenge the Kantian precept that people are to be treated as ends in themselves, not means to the ends of others.
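One way to see why rejections might thin out only at very high stakes: if the money-equivalent "fairness cost" of swallowing an unfair split does not scale with the pie, then the same unfair share becomes acceptable once the absolute offer is large enough. Below is a minimal sketch with an entirely hypothetical responder model and parameters; nothing here comes from the paper:

```python
# A minimal responder model showing why rejections can thin out at very
# high stakes even as the offered *share* becomes less fair. The
# functional form and parameters are hypothetical illustrations,
# not the paper's model.

def responder_accepts(offer, pie, fairness_weight=400.0):
    """Accept iff the money forgone by rejecting exceeds a fixed,
    money-equivalent 'fairness cost' of swallowing an unfair split."""
    shortfall = max(0.0, 0.5 - offer / pie)   # deviation from a 50/50 split
    fairness_cost = fairness_weight * shortfall
    return offer >= fairness_cost

for pie in [20, 200, 2000, 20000]:            # stakes varied 1000-fold
    offer = 0.15 * pie                        # the same 'unfair' 15% share
    print(pie, responder_accepts(offer, pie))
# rejected at the two lowest stakes, accepted at the two highest
```

The design choice doing the work: the fairness cost is fixed in money-equivalent terms rather than scaling with the pie, so the same 15% share is rejected at low stakes but accepted once 15% amounts to real money.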
Levitt and List (2007) on Laboratory Experiments
Steven D. Levitt and John A. List, “What Do Laboratory Experiments Measuring Social Preferences Reveal about the Real World?” Journal of Economic Perspectives 21(2): 153-174, Spring, 2007.
• Model notation: action choice a; wealth W; stakes (or value) v; moral cost M; social norms n; and scrutiny s.
• The higher the negative financial externality an action imposes on others, the higher the moral cost M is taken to be. (Levitt and List also assume that this externality increases with the stakes v). M is higher the greater the deviation between action a and the social norm n. M also is raised by increased scrutiny s.
• Individual utility is U(a, v, n, s) = M(a, v, n, s) + W(a, v). Higher stakes v can raise W while raising M, too, but the authors assume that W rises more quickly with v. (Another interpretation might be that the norm changes with v, so that selfish behavior receives more social imprimatur.)
• Scrutiny is different and typically more intense in the lab, exaggerating pro-social behaviors. (Alternatively, scrutiny from one’s children or other family members has no lab parallel.) Further, lab participants might believe that an experiment demands some pro-social behavior.
• Behavior might be sensitive to factors that unavoidably vary between the lab and the real world: the experimenter cannot fully control the context. Participants bring context with them, and hence are playing a different game.
• Lab participants self-select into experiments, directly or indirectly; but then, market participants self-select, too.
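A toy parameterization may help fix ideas. The utility function U(a, v, n, s) = M(a, v, n, s) + W(a, v) is the paper's, but the functional forms and numbers below are illustrative assumptions; the point is only the comparative static that heavier scrutiny pushes behavior toward the social norm:

```python
# A toy parameterization of the Levitt-List utility
#   U(a, v, n, s) = M(a, v, n, s) + W(a, v).
# The equation's form is the paper's; the functional forms and numbers
# are illustrative assumptions only. Action a is the share of stakes v
# kept for oneself; n is the social norm; s is scrutiny.

def wealth(a, v):
    """Selfish payoff from keeping share a of stakes v."""
    return a * v

def moral_cost(a, v, n, s, externality=0.05, norm_weight=1.0):
    """Negative moral payoff: rises (in magnitude) with the stake-scaled
    externality and with scrutinized deviation from the norm n."""
    return -(externality * a * v + norm_weight * s * max(0.0, a - n))

def utility(a, v, n, s):
    return moral_cost(a, v, n, s) + wealth(a, v)

def best_action(v, n, s, grid=101):
    """Grid-search the share a in [0, 1] that maximizes utility."""
    return max((i / (grid - 1) for i in range(grid)),
               key=lambda a: utility(a, v, n, s))

print(best_action(v=10.0, n=0.2, s=1.0))    # 1.0  (low, field-like scrutiny)
print(best_action(v=10.0, n=0.2, s=20.0))   # 0.2  (heavy, lab-like scrutiny)
```

The comparative static is the paper's central worry about lab evidence: cranking up scrutiny s moves the chosen action from pure self-interest toward the social norm, so lab behavior can look more pro-social than field behavior.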
Fehr and Gächter (2000) on Reciprocity
Ernst Fehr and Simon Gächter, “Fairness and Retaliation: The Economics of Reciprocity.” Journal of Economic Perspectives 14(3): 159-181, Summer, 2000.
• Positive reciprocity is when a kind act is met with kindness in return; negative reciprocity is when an unkind or unfair act is met with retaliation.
• The existence of a subset of reciprocal actors can enforce cooperative norms, though details of the environment will matter as to whether cooperation will out.
• In the Ultimatum Game, offers of less than 30% of the stake often get rejected, indicating that some types of negative reciprocity are common. But some 20 or 30 percent of folks do not reciprocate. People might be a little more likely to be negative reciprocators (punishing unfair acts) than positive reciprocators (rewarding good behavior).
• In some settings, the behavior of reciprocal people and self-interested people eventually becomes indistinguishable, whether for cooperating or free riding; that is, their motives are different, but their actual behaviors can be identical. Opportunities to punish free riders are key to sustaining cooperation.
• Reciprocity can promote contract enforcement.
• The provision of explicit incentives in a contractual relationship can engender mistrust and lead to lessened effort. As a result, firms might prefer incomplete contracts that lead to a sort of “gift exchange” and high effort.
Baumeister (2014) on Inhibition and Ego Depletion
Roy F. Baumeister, “Self-regulation, Ego Depletion, and Inhibition.” Neuropsychologia 65: 313–319, 2014.
• Note that most moral rules concern inhibitions, “do nots”; most emotion regulation is about controlling bad feelings. The average person in a country like the US spends 3 to 4 hours per day inhibiting.
• Regulation seems to depend on a limited resource, like energy, and it can be depleted; in some experiments depleted people perform worse on follow-up tasks that require perseverance.
• Ego depletion is the name given to being in a state where the ability to inhibit desires, to sustain willpower, is weakened. Depleted people, therefore, fail to inhibit behaviors that they otherwise would inhibit: aggression, inappropriate sexual responses, overeating, and impulsive spending, for instance.
• Nonetheless, ego depletion doesn’t mean that the fuel tank is empty; rather, it is more like the body makes an effort to conserve a depletable resource that is only somewhat compromised, as our bodies do with muscles. Ego depletion, therefore, can be overcome with directed effort.
• Making choices also depletes willpower; symmetrically, exercising self-control harms subsequent decision-making.
• Despite some previous findings suggesting the contrary, it does not appear to be the case that ego depletion is equivalent to a fall in blood sugar. Nonetheless, low levels of blood sugar “predict poor self-regulation.” And, depleted people who get a hit of glucose then behave as if they are not depleted.
• The ability to inhibit is partly a trait (recall the marshmallow test) and partly a state. Fatigue is a marker of being in a depleted state. Depletion doesn’t create new feelings, but intensifies existing feelings, while making it harder to inhibit acting on those feelings.
Friday, July 3, 2015
Baumeister (2013) on Ego Depletion and Willpower as a Muscle
Roy F. Baumeister, “Self-Control, Fluctuating Willpower, and Forensic Practice.” Journal of Forensic Practice 15(2): 85-96, 2013.
• Self-control and the ability to meet external standards of behavior depend on a limited resource (willpower) that fluctuates in quantity.
• Criminals seem to be poor at self-control. But it is not just ne’er-do-wells who succumb to moments of weakness – moments that combine a strong temptation with temporarily low willpower. A hardened criminal and a model citizen might differ by only a few moments of willpower lapses brought on by many potential stresses and hassles.
• Alcohol is the great underminer of self-control, in virtually all dimensions. The diminished control operates in part through impaired self-monitoring. Alcohol probably doesn’t make someone more aggressive per se, but it undermines self-constraints when a situation of potential aggression arises.
• The energy to override impulses is key to self-control, and that energy can be run down: “ego depletion.” Experiments show that exerting willpower in one domain leaves less willpower available to override temptation in another domain.
• The stock of willpower has a general character; people who have self-control for one task or behavior have it more generally (which is not to say that it cannot be depleted).
• Energy depletion has physical dimensions. Blood glucose levels fall following the exertion of self-control. Low glucose levels seem to be the basis for low willpower; consuming energy-rich food restores glucose and willpower. [The connection between glucose and willpower remains controversial.]
• Significant amounts of criminal activity take place in a low glucose state. The poor diet of gang members can undermine their behavior.
• The immune system uses up lots of glucose when fighting off germs, but not much otherwise. So someone can suddenly experience willpower shortfalls even before knowing they are sick.
• Decision making leads to ego depletion! And the relationship is symmetric, in that after you exert self-control, you make bad decisions.
• Depleted people avoid or postpone decisions, and are bad at compromise; when they make decisions, those decisions tend to be impulsive.
• Stress harms self-control – even the belief that your life is stressful harms self-control. People who exhibit self-control tend not to be too stressed. Good habits and routines are markers of self-control, and help control stress, too.
• There seems to be a vicious cycle, whereby poor self-control leads to bad situations which cause stress and further undermine willpower….
• Good life outcomes are associated with intelligence and self-control. Increasing intelligence is hard, but self-control can be improved. Building up self-control in an arbitrary way – using your opposite hand to brush your teeth – seems to make more willpower available for meaningful activity…
• …but people are often motivated to work on self-control for something meaningful, like getting in shape; their exercise will have self-control benefits in many unrelated areas.
Kaur, Kremer, and Mullainathan (2010) on Self-Control at Work
Supreet Kaur, Michael Kremer, and Sendhil Mullainathan, “Self-Control and the Development of Work Arrangements.” American Economic Review 100: 624-628, May 2010.
• The starting point is the notion that self-control shortcomings make it likely that workers will not work as hard as THEY would like – and workplace organization can counteract these self-control problems.
• Once again we are faced with the welfare question of whose side we are on, the patient long-run worker or the present-biased worker who makes all the current decisions.
• Work often involves a long lag between effort and reward; regular pay can reduce that lag for a worker.
• Having the work pace set by some outside force (the assembly line) is a type of commitment device.
• The production setting can involve cues such as uniforms that might promote work effort.
• Co-workers will be the source of peer effects, which can operate through various channels, including emulation and monitoring; the peer effects might or might not contribute to production efficiency.
• The article describes an experiment with Indian piece-rate workers engaged in data entry. Instead of the standard piece-rate, they could choose a target output, and be penalized (by losing half their wages) if they failed to reach it. About one-third of those offered these commitment contracts accepted them.
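A sketch may show why a sophisticated present-biased worker would accept such a contract, even though it only adds a penalty: the looming penalty gives today's self a reason to keep working that the delayed wage alone does not. The beta-delta setup and all parameters below are hypothetical, not the paper's:

```python
# Sketch of why a sophisticated present-biased worker might accept the
# (otherwise dominated) commitment contract: the threatened penalty
# offsets the beta-discounting of the delayed wage. The beta-delta
# setup and all parameters are hypothetical, not the paper's.

BETA = 0.6      # present bias: delayed pay is felt at only 60% today

def cost(e):
    """Immediate, convex effort cost of entering e records."""
    return 0.02 * e ** 2

def effort_today(penalized_below=None):
    """Effort chosen by today's self: beta-discounted pay minus cost.
    Under the commitment contract, half the wage is forfeited if
    output falls short of the chosen target."""
    def payoff(e):
        pay = 1.0 * e                      # piece rate of 1 per unit
        if penalized_below is not None and e < penalized_below:
            pay *= 0.5                     # forfeit half the wages
        return BETA * pay - cost(e)
    return max(range(0, 31), key=payoff)

print(effort_today())                      # 15 (no commitment)
print(effort_today(penalized_below=20))    # 20 (works up to the target)
```

From the long-run self's perspective (beta = 1), the higher effort is worth it (20 - 8 = 12 versus 15 - 4.5 = 10.5 here), which is why a sophisticate might sign a contract whose only feature is a penalty.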
Wednesday, July 1, 2015
O’Donoghue and Rabin Again, This Time, “Incentives and Self-Control” (2006)
Ted O’Donoghue and Matthew Rabin, “Incentives and Self Control.” In Richard Blundell, Whitney Newey, and Torsten Persson, eds., Advances in Economics and Econometrics: Volume 2: Theory and Applications (Ninth World Congress), Cambridge University Press, 2006, pp. 215-245.
• Present-biased people might gain through commitment, such as commitments to study or to exercise. Heterogeneity among individuals and uncertain future costs and opportunities necessitate some flexibility in plans.
• The case against exponential discounting is similar to the case against expected utility theory: a slight bias for today versus next week, which seems perfectly reasonable, implies ridiculous decisions at longer time frames for exponential discounters.
• Sophisticates can predict their own future self-control problems whereas naïfs are (blissfully?) unaware.
• People do not use the same discount rate for all decisions; they both plan for the long-term and give way to short-term indulgence. They hold both savings and credit card balances.
• You can alter incentives to influence present-biased folks without affecting exponential discounters. You can manipulate defaults, as with Save More Tomorrow, or mandate active choice.
• If the choice is to quit a habit now or later, many people will postpone their quitting, but if the choice is now or never, they will quit now. Naïfs don’t choose an addicted life course – they just choose one more day of addiction, again and again and again.
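The "one more day" result can be made concrete with a small beta-delta sketch (delta = 1). Suppose each day of indulgence pays off immediately but imposes a larger cost the next day; the numbers below are illustrative, not from the chapter:

```python
# The "one more day" result in a small beta-delta sketch (delta = 1).
# Each day of indulgence pays 10 immediately and costs 18 the next day.
# Numbers are illustrative, not from the chapter.

BETA = 0.5
PLEASURE, HARM = 10.0, 18.0

def today_value(quit_day):
    """Value, from day 0's perspective, of indulging on days
    0..quit_day-1 and abstaining thereafter. Anything that happens
    after today is scaled down by BETA."""
    total = 0.0
    for t in range(quit_day):
        gain = PLEASURE if t == 0 else BETA * PLEASURE
        loss = BETA * HARM          # the harm always lands in the future
        total += gain - loss
    return total

print(today_value(0))    # 0.0    quit now
print(today_value(1))    # 1.0    one more day beats quitting now...
print(today_value(30))   # -115.0 ...but a month of 'one more day' is awful
```

Offered quit-now-or-later, the naif postpones (1.0 beats 0.0) every single day, and tomorrow's self faces the identical choice again; offered quit-now-or-never, quitting now wins easily (0.0 beats -115.0).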
O’Donoghue and Rabin (2003) on Paternalism and Sin Taxes
Ted O’Donoghue and Matthew Rabin, “Studying Optimal Paternalism, Illustrated by a Model of Sin Taxes." American Economic Review 93(2): 186-191, May 2003.
• “Economists will and should be ignored if we continue to insist that it is axiomatic that constantly trading stocks or accumulating consumer debt or becoming a heroin addict must be optimal for the people doing these things merely because they have chosen to do it [page 186].”
• In the quasi-hyperbolic utility function, β < 1 implies a time-inconsistent preference for immediate gratification. In the welfare (efficiency) analysis, this preference for immediate gratification is treated as an error. That is, society sides with the long-run Dr. Jekyll, not the impatient short-run Mr. Hyde.
• In the model, potato chips have present benefits but future costs. The quasi-hyperbolic decision maker, or at least the Mr. Hyde component of that decision maker, undervalues those future costs.
• What is the most efficient way for the government to raise (a given amount of) revenue through taxes on the two goods, carrots and potato chips? If there were no present bias, both goods should be taxed equally. But if some consumers display present bias, efficiency suggests taxing the tempting good, potato chips, at significantly higher rates.
• High taxes on potato chips do not harm fully “rational” consumers much, but help present-biased people significantly by internalizing the “internality.”
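The logic of the chip tax can be sketched numerically. The numbers below are invented for illustration and are not the paper's calibration; the rebate assumption follows the standard treatment in which tax revenue is returned lump-sum.

```python
# Minimal illustrative model of a "sin tax" correcting an internality.
# Parameter values are hypothetical.
BETA = 0.5        # present-bias factor
BENEFIT = 10      # immediate enjoyment of a bag of chips
FUTURE_COST = 8   # true delayed health cost
PRICE = 3

def buys_chips(tax):
    # Present-biased choice: the future cost is down-weighted by beta.
    return BENEFIT - PRICE - tax > BETA * FUTURE_COST

def true_surplus(tax):
    # Long-run ("Dr. Jekyll") welfare counts the full future cost;
    # tax revenue is assumed rebated, so it is not a social loss.
    return BENEFIT - PRICE - FUTURE_COST if buys_chips(tax) else 0

print(buys_chips(0), true_surplus(0))   # buys, but true surplus is -1
print(buys_chips(4), true_surplus(4))   # a tax of 4 deters the purchase
```

Note that a consumer with no present bias (β = 1) would not buy at these numbers even untaxed, so the tax barely distorts rational behavior while steering the biased consumer away from a choice her long-run self regrets.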
• Offering commitment options to sophisticated present-biased people could help them, with little or no cost to those who are not biased. (A sophisticated present-biased person is someone who understands that she is present biased, and hence might be willing to pre-commit in such a way as to restrain her future choices.)
• More generally, policy might want to take into account the possibility of less-than-rational behavior. Some policies that might be valuable include mandatory cooling-off periods, required information disclosure, and careful selection of default options.