We ignore what doesn’t fit with our biases – even if it costs us

Authored by newscientist.com and submitted by mvea

We can’t help but be more welcoming of information that confirms our biases than facts that challenge them. Now an experiment has shown that we do this even when it means losing out financially.

Most research on confirmation bias has focused on stereotypes that people believe to be true, says Stefano Palminteri at École Normale Supérieure (ENS) in Paris. In such experiments, people hold on to their beliefs even when shown evidence that they are wrong. “People don’t change their minds,” says Palminteri.

But those kinds of beliefs tend not to have clear repercussions for the people who hold them. If our biases cost us financially, would we realise that they are not worth holding on to?

To find out, Palminteri and his colleagues at ENS and University College London set 20 volunteers a task that involved learning to associate made-up symbols with financial reward. In the first of two experiments, the volunteers were shown two symbols at a time and had to choose between them. They then received a financial reward that varied depending on their choice.

By repeating this many times, the volunteers learned how much the various symbols were worth. However, they saw this information only for the symbols they had chosen.

In the second experiment, the same volunteers were again asked to choose between pairs of abstract symbols. This time, they were told the value of both the symbol they had chosen and the one they hadn’t.

The first experiment helped the volunteers learn which symbols were most valuable, but the second was designed to show them that the symbols they hadn't chosen could be more valuable.

However, the second experiment did not change the participants’ preferences. Despite the lesson that certain symbols were more valuable, they continued to choose those they had learned to favour in the first experiment. This meant that they kept dismissing symbols that would pay them more.
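A minimal sketch (not the study's code) of how such stickiness can arise: an agent that updates its value estimates with a larger learning rate for choice-confirming outcomes keeps picking a lower-paying option even under complete feedback. All learning rates, reward probabilities and starting values below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: confirmatory news is weighted four times
# more heavily than disconfirmatory news.
ALPHA_CONF, ALPHA_DISCONF, BETA = 0.4, 0.1, 5.0

p_reward = {"A": 0.4, "B": 0.6}   # B actually pays more often than A
q = {"A": 0.5, "B": 0.0}          # the agent starts out favouring A

n_choose_a = 0
for trial in range(500):
    # softmax choice between the two options
    p_a = 1.0 / (1.0 + np.exp(-BETA * (q["A"] - q["B"])))
    chosen = "A" if rng.random() < p_a else "B"
    unchosen = "B" if chosen == "A" else "A"
    n_choose_a += chosen == "A"

    # complete feedback: both outcomes are revealed
    r = {k: float(rng.random() < p) for k, p in p_reward.items()}

    # factual update: a positive prediction error confirms the choice
    pe = r[chosen] - q[chosen]
    q[chosen] += (ALPHA_CONF if pe > 0 else ALPHA_DISCONF) * pe

    # counterfactual update: a negative prediction error confirms the choice
    pe = r[unchosen] - q[unchosen]
    q[unchosen] += (ALPHA_CONF if pe < 0 else ALPHA_DISCONF) * pe

print(f"chose the lower-paying option A on {n_choose_a / 500:.0%} of trials")
```

With these assumed parameters, the favoured option's estimated value settles well above its true pay rate while the ignored option's is dragged below its own, so the agent keeps choosing A most of the time despite B paying more.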

This suggests that people generally ignore new information that counters their beliefs, even though doing so costs them financially, says Palminteri. “It’s as if you don’t hear the voices in your head telling you that you’re wrong, even if you lose money,” he says.

Palminteri hopes that we can learn to be aware of our own biases, but says that will be hard – if a person believes they are not biased, it is difficult to shift this belief. And even if some people are aware they are biased, it is probably impossible to eliminate all their biases. “Complete objectivity is probably something we will never fully achieve,” says Palminteri.

Our faith in our biases can make us believe we are right even when we are wrong. “In the end, people will have the impression that they are performing better than they actually are,” says Palminteri. “That could increase self-confidence, and provide a motivational boost.”

Journal reference: PLoS Computational Biology, DOI: 10.1371/journal.pcbi.1005684

tacotaskforce on September 4th, 2017 at 13:57 UTC »

Maybe I am completely misunderstanding this experiment, but this sounds like it has to do with risk aversion, not bias.

runner-33 on September 4th, 2017 at 12:51 UTC »

This could also have an impact on science, since bias prevents researchers from interpreting experimental results with an open mind.

To put it differently: Are the best scientists the ones with the lowest confirmation bias?

mvea on September 4th, 2017 at 11:23 UTC »

Journal reference:

Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing

Stefano Palminteri, Germain Lefebvre, Emma J. Kilford, Sarah-Jayne Blakemore

PLoS Comput Biol 13(8): e1005684.

DOI: https://doi.org/10.1371/journal.pcbi.1005684

Link: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005684

Abstract

Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.
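For readers who want the mechanics, here is a minimal sketch of a single valence-dependent update of the kind the abstract describes, with separate learning rates for confirmatory and disconfirmatory prediction errors. The function name and parameter values are illustrative assumptions, not taken from the paper.

```python
def biased_update(q_chosen, q_unchosen, r_chosen, r_unchosen,
                  alpha_conf=0.4, alpha_disconf=0.1):
    """One valence-dependent learning step (illustrative parameters).

    Confirmatory evidence (a positive prediction error on the chosen
    option, or a negative one on the unchosen option) is scaled by the
    larger learning rate, so beliefs drift toward the current choice.
    """
    pe_c = r_chosen - q_chosen        # factual prediction error
    pe_u = r_unchosen - q_unchosen    # counterfactual prediction error

    q_chosen += (alpha_conf if pe_c > 0 else alpha_disconf) * pe_c
    q_unchosen += (alpha_conf if pe_u < 0 else alpha_disconf) * pe_u
    return q_chosen, q_unchosen
```

Setting alpha_conf equal to alpha_disconf recovers an unbiased learner; the asymmetry between them is what produces the confirmation-like pattern described above.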

Author summary

While the investigation of decision-making biases has a long history in economics and psychology, learning biases have been much less systematically investigated. This is surprising as most of the choices we deal with in everyday life are recurrent, thus allowing learning to occur and therefore influencing future decision-making. Combining behavioural testing and computational modeling, here we show that the valence of an outcome biases both factual and counterfactual learning. When considering factual and counterfactual learning together, it appears that people tend to preferentially take into account information that confirms their current choice. Increasing our understanding of learning biases will enable the refinement of existing models of value-based decision-making.