Common test used to determine racism is flawed, says professor
For decades the psychology community has been using a test developed by researchers at Harvard to determine if someone has hidden prejudices.
Known as the Implicit Association Test, the tool is supposed to reveal hidden truths by delving into the unconscious. Test-takers can discover they are racist, among other things, even if they are unaware of their implicit bias.
The problem, according to Ulrich Schimmack, a psychology professor at U of T Mississauga, is that the test can't delve into our unconscious and reveal our hidden prejudices. Instead, the test is "basically a roulette wheel that comes up with a random number."
Yet the tests have been used in countless academic studies and have been employed by companies and institutions in their implicit bias training sessions. They have been featured in mainstream media and are freely available to anyone through Project Implicit, a website hosted by Harvard and marketed to the general public as a means of determining their implicit bias.
Schimmack says the test became popular after it was published some 20 years ago because psychologists had been fascinated with the unconscious but had no scientific method to measure it. The promise of the IAT was that it would "finally provide a scientific window to the unconscious."
Schimmack, who studies happiness and wellbeing, was curious whether he could use the same method in his research. If he could, he would no longer have to rely solely on self-reporting, where it can be hard for people to admit their life isn't going well. He and his team undertook a study to see if the measure was valid.
"We found our happiness IAT was giving us random results. It wasn't useful at all to measure a person's happiness," Schimmack says. "Then you wonder, if it's not working for this, when does it work and what does it really measure?"
He published those first findings 10 years ago and continued his research without the use of IAT. More recently, though, he noticed another decade had passed and the test was still being widely used and taken at face value, so he took a closer look at the tool.
Those who take an IAT are required to categorize words and images as quickly as possible. In the racial bias test, participants are asked to pair faces of white European descent with words related to "bad" concepts and African American faces with words related to "good" concepts, and vice versa. Based on speed and accuracy, a score is calculated and a person's implicit bias is determined.
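The scoring logic can be sketched in a few lines. This is a simplified, hypothetical illustration only: the actual IAT "D score" algorithm (Greenwald, Nosek and Banaji, 2003) includes additional rules for trimming outlier trials and penalizing errors that are not reproduced here, and the reaction times below are invented.

```python
from statistics import mean, stdev

def d_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style score: the difference in mean response times
    between the two pairing conditions, divided by the pooled standard
    deviation of all trials."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical reaction times (milliseconds) for one test-taker
congruent = [620, 580, 640, 600, 610]    # pairings the taker completes faster
incongruent = [700, 690, 720, 680, 710]  # pairings the taker completes slower

score = d_score(congruent, incongruent)
# A larger positive score is interpreted as a stronger implicit preference.
```

Schimmack's critique is precisely that this kind of estimate is dominated by trial-to-trial noise: factors unrelated to attitude shift the reaction times, so the resulting score is unstable.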
Schimmack says many things can influence how fast people can do the task, which introduces other sources of variation that have nothing to do with attitude. The scores can also diverge from a person's actual behavior.
"Twenty years of research produced very little evidence that the IAT test predicts any real world behavior," Schimmack says. "On top of that, some of the articles that claim it does, on close inspection, fail to show that."
For example, one study claimed the IAT could predict that some people would not vote for Barack Obama due to anti-Black implicit bias. But on closer inspection, Schimmack says, that claim did not hold up.
Schimmack says his colleague also found that, in simulations, IAT scores could not predict if a police officer was more likely to shoot a black suspect or a white suspect.
Others have also criticized the authors of the IAT for making bold claims without demonstrating that their measurement predicts any real world behavior of discrimination, Schimmack notes.
Racial bias is a reality, he continues, but many discussions of this important issue are based on research findings that rely on the flawed IAT measures. For example, some argue implicit bias training is not useful because it doesn't change IAT scores. But if IAT scores are not valid, they make an ineffective evaluation tool: implicit bias training might be effective in changing certain behaviors without changing IAT scores.
"(IAT) shouldn't be used as a criterion to evaluate whether implicit bias training was successful. It shouldn't be used in a training context," Schimmack says, adding it's questionable, if not unethical, to put a test out there that claims to provide scientific feedback when that's not the case.
"For the most part it has been oversold and it offers very little," Schimmack says.