Showing posts with label confirmation bias. Show all posts

Monday, July 15, 2019

Gal and Rucker (2018) on the Loss of Loss Aversion

David Gal and Derek D. Rucker, “The Loss of Loss Aversion: Will It Loom Larger Than Its Gain?” Journal of Consumer Psychology 28(3): 497-516, July 2018.

• Social scientists seem to all but universally believe in loss aversion, the notion that losses “loom larger” psychologically than similarly sized gains.

 Gal and Rucker claim that the actual evidence does not support any general tendency for losses to loom larger than gains: everything depends upon the context.

• There’s a bit of circularity in the promotion of loss aversion: some phenomenon (like the equity premium puzzle or the endowment effect) is “explained” by loss aversion, and then the existence of the phenomenon (the equity premium, the endowment effect) is taken to be evidence that loss aversion is pervasive. 

• The status quo bias might reflect a preference for inaction – a preference that can exist in the absence of loss aversion, arising from the lack of a motive for action, from economizing on processing costs, or from the tendency to regret errors of commission more than errors of omission. 

 When asked to trade their original good for an essentially identical one, loss aversion is not implicated – but people still show a large status quo bias. Action v. inaction confounds the loss-gain story. 

 Is the endowment effect just a case of status quo bias – one that therefore does not require loss aversion? 

 The “retention paradigm” recasts endowment effect experiments as willingness-to-pay (WTP) to obtain an item v. WTP to retain an item – so now the “inaction” choice is not to have the good in both cases. (That is, the confounding of loss aversion with inaction is sidestepped.) 

 If loss aversion is active, then in the retention paradigm, the WTP to retain will be higher than the WTP to obtain. But in the experiments, there was no premium to retain a good or service. For “mundane” goods – mugs, notebooks – obtaining tended to have a higher WTP than retaining. 

 One has to be creative to come up with reasonable “retain” scenarios! Fixing a broken phone, perhaps? 

 Analogous experiments ask if you would like to receive $0 for a good you own, or exchange it for another good. The second condition swaps the owned and alternative good. Mug v. $5 shows no endowment effect – even though the standard exchange paradigm (that is, not the "retention" paradigm) with these goods shows a significant endowment effect. 

 For the standard “loss aversion ratio” test, not accepting the bet is the status quo. This test, too, can be recast: Would you rather receive $0 with a probability of 1 or take a 50-50 bet with the possibility of winning or losing $15? People seem to have a slight preference for the risky alternative. 
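 The loss-aversion prediction for this recast bet can be made explicit. Assuming the textbook piecewise-linear value function with loss-aversion coefficient λ (my illustration, not the article's notation):

```latex
v(x) =
\begin{cases}
x, & x \ge 0,\\
\lambda x, & x < 0,
\end{cases}
\qquad
V(\text{bet}) = \tfrac{1}{2}\,v(15) + \tfrac{1}{2}\,v(-15) = \tfrac{15}{2}\,(1-\lambda).
```

 With any λ > 1, V(bet) < 0 = v($0), so loss aversion predicts rejecting the bet; a slight preference for the risky alternative is evidence against λ > 1 at these stakes.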

 When stakes are higher, preferences shift toward the sure thing – but this could reflect risk aversion, not loss aversion. (And, loss aversion is generally taken to be independent of the stakes.) 

 When you ask people directly about the psychological impact of winning or losing something – that is, how they feel about these events – losing doesn't dominate in terms of the magnitude of feelings. How do you feel about losing a mug versus winning a mug? 

 How do you feel about losing $3 versus winning $3? What about $100? At the low stakes, people seem to care more about the gain. 

 Loss “frames” are not generally more motivating than “gain” frames (despite some evidence to the contrary in certain domains). 

 Why is loss aversion so popular given its questionable evidentiary base? Perhaps in part due to a status-quo bias among researchers(!), or confirmation bias within Kuhnian “normal science.” 

 Loss aversion holds intuitive appeal: we all feel some losses acutely. And the name “loss aversion” itself is persuasive.

Tuesday, June 25, 2019

Golman, Hagmann, and Loewenstein (2017) on Information Avoidance

Russell Golman, David Hagmann, and George Loewenstein, “Information Avoidance.” Journal of Economic Literature 55(1): 96-135, March 2017 [pdf].

• Why do you not want to see my outline of this article?


• Rational people in an individual decision-making situation – that is, a situation where information possession holds no implications for interpersonal strategic concerns – should always be happy to receive accurate information if it is freely available. But people often actively avoid acquiring such information.
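The "rational" benchmark here is the standard value-of-information result (my gloss of the familiar textbook argument, not a passage from the article): for a single decision-maker with action a, state θ, and a free signal s,

```latex
\max_a \, \mathbb{E}_\theta\!\left[u(a,\theta)\right]
\;\le\;
\mathbb{E}_s\!\left[\,\max_a \, \mathbb{E}_\theta\!\left[u(a,\theta)\mid s\right]\right],
```

because the uninformed optimal action remains available after observing s. Active avoidance of free information therefore signals that something outside this model is at work: belief-based utility, strategic concerns, or self-control motives.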

• Patients avoid medical information, teachers don’t read their evaluations, and, in down markets, investors don’t check the value of their portfolios. Golman et al. refer to “active information avoidance” as involving the avoidance of information even if the info is free and the person knows it is available.

• A studied ignorance might lend us moral wiggle room: for instance, if we “don’t know” the conditions on factory farms, we can live with our dining choices. (OK, that example is mine, not in the article.) We might want to maintain plausible deniability as a way to avoid unwelcome responsibilities.

• Even if you learn information, you might (strangely?) choose to ignore it.

• We might have protected beliefs, so we actively avoid information that might challenge those beliefs. We could go so far as to seek to silence dissenters from our views.

• Surely some information avoidance is good: if you will eat dessert anyway, perhaps it is best not to know its caloric load. Perhaps you can live a more pleasant life by not knowing of some looming medical problem, or about your partner’s infidelity.

• When new information becomes universally available, confirmation biases can lead to further polarization of opinions. New (controverting) information might even remind you of (and sustain) the reasons that you held your initial beliefs in the first place.

• Intelligence does not protect people against biased beliefs; rather, it might make it easier to justify one’s preferred beliefs.

• People can self-sabotage – undertake actions not fitted to their abilities, for instance – to avoid learning the quality of their talents.

• People might avoid information because compound lotteries (lotteries that resolve over time) might best be enjoyed by learning only the final, overall outcome. (Do I want to know the score of the White Sox game in the third inning? If the Sox are behind, I will be sad; and if they are ahead, then the only new “news” I can get later occurs if they lose, and the loss will then be extra painful.)

• Sometimes people are just plain curious: they seek out information that is of no instrumental value to them.

• Humans often are mistaken in thinking that the receipt of bad news will unleash anxiety – and therefore they might be mistaken in trying to shield themselves from bad news. Sometimes the elimination of uncertainty can begin the process of adaptation to the new normal.

• Bad news attracts our attention, and so we might try to avoid potentially bad news as a method to prevent this “distraction” from occurring.

• Regret aversion might lead people to avoid information on how things would have turned out if they had taken a different path.

• People tend to be overly optimistic, and such optimism appears to hold benefits. Information might be avoided or ignored for the sake of sustaining optimism – within individuals as well as within groups.

• One reason people might try to maintain existing beliefs is that they are serviceable, even if flawed – you can only rebuild a leaky boat at sea piecemeal; you can’t tear it down and start fresh. 

• If you are committed to a belief, you will not want to acquire information challenging it. The belief might even be part of your self-image, your identity.  

• Information avoidance might help with self-control issues, such as side-stepping temptations or maintaining motivation. 

• Spoiler alert! People might avoid information to maintain pleasant suspense.

Monday, July 30, 2018

Golman, Loewenstein, Moene, and Zarri (2016) on Belief Consonance

Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri, “The Preference for Belief Consonance.” Journal of Economic Perspectives 30(3): 165-188, Summer 2016 [pdf].

• People like to have beliefs that accord with the beliefs of others. Sharing beliefs enhances our connection to our group. 

• Much of world conflict is about beliefs – often about rather subtle differences in beliefs. Recall that people protect beliefs in which they have invested heavily. 

• Conflict over small differences in beliefs might arise because our beliefs are most threatened by those who are otherwise similar to us. 

• People do not like to have their beliefs challenged, so media have incentives not to challenge beliefs.

• Beliefs might come first, and only then do we develop the “rational” reasons that we hold them; see Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion, Pantheon, 2012. 

• Belief consonance can be self-reinforcing: when someone stubbornly refuses to agree with me, I can attribute her stubbornness to her own interest in protecting her initial beliefs – and therefore I do not have to reconsider my own beliefs. 

• In the interest of possessing consonant beliefs, all people might believe X, but believe that everyone else believes “not X”: “pluralistic ignorance”. See Timur Kuran, Private Truths, Public Lies: The Social Consequences of Preference Falsification, Harvard, 1998. 

• In trust and dictator games, people are more generous when paired with members of their own political party.

• Two (complementary?) approaches to the enticements of belief consonance: (1) the desire to match beliefs with a group that you are in, or want to join; and (2) the desire to maintain desirable beliefs about yourself. 

• On the whole, belief consonance probably is detrimental to society.

Friday, July 20, 2018

Bénabou and Tirole (2016) on Motivated Beliefs

Roland Bénabou and Jean Tirole, “Mindful Economics: The Production, Consumption, and Value of Beliefs.” Journal of Economic Perspectives 30(3): 141-164, Summer 2016.

• People value their beliefs, both directly and for instrumental reasons; therefore, beliefs can withstand counter-evidence, and people may prefer to steer clear of evidence.

• Positive beliefs can be nurtured by responding asymmetrically to good and bad news.

• Shared beliefs also can be resistant to evidence, possibly with dreadful consequences.

• Bénabou and Tirole treat beliefs as if they were standard economic goods or assets, which people consume, invest in, produce, and so on.

• Optimism can serve as a sort of commitment device to stick with long-term projects and to avoid temptation. False beliefs can also influence other people in ways that might serve your interests. Religious beliefs might contribute both to self-discipline and to an improved view of yourself and your future. 

• Methods to protect valued beliefs include “strategic ignorance, reality denial, and self-signaling [p. 144].” More educated people are not better shielded from employing these methods. 

• Confirmation bias allows people to believe that their previous views have been corroborated. They anticipate this future corroboration, and this allows them to take on risky projects. 

• Emotional responses to a challenge to your beliefs are a signal that your beliefs are something you are trying to protect. Standard rationality suggests that challenges should be welcomed – shades of J. S. Mill.

• There’s a trade-off between accepting bad news and optimizing decisions given reality, versus living in blissful ignorance for a while before the piper must be paid. 

• Because it is easier to recall our actions than it is to recall our motives, we might try to self-signal, by choosing actions that will allow us to later have a good (but distorted) view of ourselves. 

• In laboratory experiments, subjects have a harder time remembering (accurately) their past failures.

• Self-deception rises with the size of the sunk costs. People persuade themselves of the future value of these investments. 

• People are quite good at believing they had sound reasons for their bad behaviors. 

• Blind persistence in social projects in the face of bad news might make the situation more tolerable. But if persistence can lead to bigger losses, watch out for “Mutually Assured Delusion.” Unfortunately, it is in this situation that denial becomes contagious. 

• Organizational failures might be a result of bad beliefs as much as bad incentives. 

• “Just-world” beliefs and the extent of the welfare state are negatively correlated. 

• Constitutional guarantees of free speech and a free press might serve as a commitment device to allow dissent even in future bad states – and they might reduce the protection of current beliefs, because those beliefs are more likely to be challenged anyway.

Monday, January 16, 2017

Morewedge and Giblin (2015) Look to Explain the Endowment Effect

Carey K. Morewedge and Colleen E. Giblin, “Explanations of the Endowment Effect: An Integrative Review.” Trends in Cognitive Sciences 19(6): 339-348, June 2015 [pdf].

• Endowment effects tend to be demonstrated in one of two ways: (1) the “exchange paradigm,” where people who are endowed with a good are less willing to trade it for another good than pre-endowment preferences would suggest; and, (2) differences between the willingness to pay (WTP) for a good by a non-owner and the willingness to accept payment (WTA) by an owner – differences which seem to materialize about the time the non-owner becomes an owner. 

• Loss aversion is a standard underlying explanation for the endowment effect; once you own a good (or even expect to), your ownership becomes your reference point, and situations where you no longer own it are evaluated as losses relative to the reference point. 
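The size of the WTA-WTP gap that loss aversion implies can be sketched with the textbook piecewise-linear value function (loss-aversion coefficient λ; my illustration, not the authors' notation). A buyer treats the payment as a loss, while a seller treats giving up the good (worth g in consumption terms) as a loss:

```latex
g - \lambda\,\text{WTP} = 0 \;\Rightarrow\; \text{WTP} = g/\lambda,
\qquad
\text{WTA} - \lambda g = 0 \;\Rightarrow\; \text{WTA} = \lambda g,
\qquad
\frac{\text{WTA}}{\text{WTP}} = \lambda^{2} > 1.
```

So any λ > 1 mechanically generates an endowment effect; the mechanisms the authors review are candidate sources of that asymmetry.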

• The authors look at five underlying mechanisms that are claimed to give rise to loss aversion and endowment effects, and offer a more general framework: “attribute sampling bias.” 

• Buyers and sellers pay attention to reference prices (low for buyers, high for sellers) that will increase their transactional utility. Prices are just one example of how buying and selling activate different cognitive frames that direct attention and recall to different pieces of information. Buyers and sellers do not have the same information readily in mind, so the endowment effect does not arise because they value identical attributes differently. People even have trouble recalling frame-inconsistent information. 

• Psychological ownership raises wta for two reasons: (1) ownership creates a connection between your identity and the good; the better that you feel about yourself, the better you will feel about the good. Losing the good can seem like a threat to your identity. (2) ownership helps you remember good feelings about the good, which otherwise might be harder to access. 

• Their attribute sampling bias explanation resembles the well-known phenomenon where people seek out information that confirms their current beliefs. 

• Sadness tends to reduce or reverse the endowment effect – it creates an implicit interest in changing your circumstances, so non-owners are more willing to acquire a good and owners are more willing to sell a good. But most frames in the exchange setting (other than sadness) tend to bias people towards maintaining the status quo. 

• People are eager to trade away a “bad” – an eagerness that can’t be explained by accounts of the endowment effect in which whatever you own, you value more highly. Further, endowment effects are larger for goods (like environmental goods) with vague attributes, giving more room for attention to a biased sample of attributes to take effect.