
How the Human Brain Works – Putting Aside Bias as New Information Enters

  • September 10, 2018

From love and politics to health and finances, humans can sometimes make decisions that appear irrational, or dictated by an existing bias or belief.

But a new study from Columbia University neuroscientists uncovers a surprisingly rational feature of the human brain: A previously held bias can be set aside so that the brain can apply logical, mathematical reasoning to the decision at hand.

These findings highlight the importance that the brain places on the accumulation of evidence during decision-making, as well as how prior knowledge is assessed and updated as the brain incorporates new evidence over time.

As an example, consider an oncologist who must determine the best course of treatment for a patient diagnosed with cancer.

Based on the doctor’s prior knowledge and her previous experiences with cancer patients, she may already have an opinion about which treatment combination (e.g., surgery, radiation and/or chemotherapy) to recommend, even before she examines this new patient’s complete medical history.

But each new patient brings new information, or evidence, that must be weighed against the doctor’s prior knowledge and experiences.
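To make that prior-plus-evidence logic concrete, here is a minimal Bayes-rule sketch in Python. It is purely illustrative and not drawn from the study; the probabilities, the biomarker scenario, and the bayes_update helper are all hypothetical.

```python
# Minimal Bayes-rule sketch: a prior belief, updated by one new piece of
# evidence. All numbers and the scenario are hypothetical.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Prior: from past patients, the doctor believes this treatment succeeds
# 70% of the time for this cancer type.
prior = 0.70

# New evidence: a (hypothetical) biomarker seen in 20% of patients who
# respond to the treatment but in 60% of those who do not.
posterior = bayes_update(prior, p_evidence_if_true=0.20, p_evidence_if_false=0.60)
print(f"belief after the new evidence: {posterior:.2f}")  # ~0.44: the prior is revised down
```

The point is simply that the prior is not discarded; it is reweighed against whatever the new patient's evidence says.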

How do we handle new information that contradicts our biases?

The central question the researchers of today’s study asked was whether, and to what extent, that prior knowledge would be modified when someone is presented with new or conflicting evidence.

To find out, the team asked human participants to watch a group of dots as they moved across a computer screen, like grains of sand blowing in the wind.

Over a series of trials, participants judged whether each new group of dots tended to move to the left or right — a tough decision as the movement patterns were not always immediately clear.

As new groups of dots were shown again and again across several trials, the participants were also given a second task: to judge whether the computer program generating the dots appeared to have an underlying bias.

Without telling the participants, the researchers had indeed programmed a bias into the computer; the movement of the dots was not evenly distributed between rightward and leftward motion, but instead was skewed toward one direction over another.
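For a concrete picture of such a stimulus stream, here is a rough sketch of how a biased block of motion trials could be generated. The 80/20 skew, the coherence levels, and the make_block helper are assumptions for illustration; the article does not give the study’s actual parameters.

```python
import random

def make_block(n_trials, p_right=0.8, seed=0):
    """Generate one block of dot-motion trials with a built-in directional skew."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        # The "correct" direction is drawn from a skewed, not 50/50, distribution.
        direction = "right" if rng.random() < p_right else "left"
        # Varying motion coherence (the fraction of dots moving together) is
        # what makes single trials hard to judge.
        coherence = rng.choice([0.032, 0.064, 0.128, 0.256])
        trials.append((direction, coherence))
    return trials

block = make_block(20)
print(sum(d == "right" for d, _ in block), "of 20 trials drift rightward")
```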

The study, which was co-led by Zuckerman Institute Principal Investigator Daniel Wolpert, PhD, took two approaches to evaluating how the bias was learned. First, implicitly, by monitoring the influence of the bias on the participants’ decisions and their confidence in those decisions.

Second, explicitly, by asking people to report the most likely direction of movement in the block of trials.

Both approaches demonstrated that the participants used sensory evidence to update their beliefs about the directional bias of the dots, and that they did so without being told whether their decisions were correct.

The researchers argue that this occurred because the participants’ brains were considering two situations simultaneously: one in which the bias exists, and a second in which it does not.

The researchers were amazed at the brain’s ability to weigh and switch between these multiple, realistic representations with an almost Bayesian, mathematical quality.
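One way to picture that “two situations at once” idea is an observer who keeps a running probability on H_bias (the generator favors one direction) versus H_fair (left and right are equally likely), reweighting them after each trial using only its own noisy percept, with no feedback. The sketch below is a hedged illustration of that style of computation, not the study’s actual model; the 80/20 bias hypothesis, the 90% perceptual reliability, and the update_belief helper are all assumptions.

```python
import random

def update_belief(p_bias, percept, p_right_if_bias=0.8):
    """Posterior probability that 'a rightward bias exists' after one percept."""
    like_bias = p_right_if_bias if percept == "right" else 1.0 - p_right_if_bias
    like_fair = 0.5  # under the no-bias hypothesis, either direction is equally likely
    numerator = like_bias * p_bias
    return numerator / (numerator + like_fair * (1.0 - p_bias))

rng = random.Random(1)
belief = 0.5  # start undecided between "biased" and "fair"
for _ in range(50):
    true_dir = "right" if rng.random() < 0.8 else "left"  # the generator really is biased
    # Perception is noisy: the observer reads the motion correctly 90% of the time.
    percept = true_dir if rng.random() < 0.9 else ("left" if true_dir == "right" else "right")
    belief = update_belief(belief, percept)

print(f"belief that a rightward bias exists after 50 trials: {belief:.2f}")
```

Notably, the update uses only the observer’s own percepts, mirroring how participants learned the bias without feedback; a fuller model would also weight each update by the confidence the researchers measured.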

Find out more at zuckermaninstitute.columbia.edu.
