I’m a scientist, and I want to believe that I work in a meritocracy. Even more, I want to believe that I can trust the quality of my own judgment. I am a professionally trained objective evaluator of facts. So I can trust myself to evaluate facts objectively, right?
Wrong. Last week I attended a presentation called “The Fallacy of Fairness: Confronting Bias in Academic Science” by Jo Handelsman, the associate director for science at the White House Office of Science and Technology Policy. The lecture hammered home a few key points for me, but the most important was “you are not immune to this.”
“Most people intend to be fair,” Handelsman insisted. “If you ask them, ‘When you do this evaluation, are you planning to be fair?’ they will 100 percent say yes. But most of us carry these unconscious, implicit prejudices and biases that warp our evaluation of people or the work that they do.” The biases Handelsman is referring to are most readily measured in hiring studies, in which hiring managers are asked to evaluate potential candidates for a job or a promotion. With astonishing reliability, evaluators assign higher scores to the exact same application when the name at the top is male rather than female. These studies are “absolutely canonical” in the social psychology literature, and their results have remained shockingly consistent over the past four decades, despite all of the social progress this country has made.
I bet I can guess what you’re thinking right now. It’s the same thing I thought, and according to Handelsman, it’s the response she gets from every single audience when she shares her data: “It’s not like that here.” The scientists in particular claim, “We’re trained to be objective, so the bias studies don’t apply to us.” Well, scientists, take a look at the data. Turns out this applies to everyone.
But now the data get even more interesting. Studies have shown time and again that evaluators are biased in their decisions of whom to hire or promote based on the gender and race of the applicants. But get this: The gender or race of the evaluators themselves — that is, the gender or race of the person reading the application — does not affect the results at all. In other words, I am no less biased against women just because I’m a woman. As Handelsman assured her audience: “We all carry these biases, and it’s not some horrible plot by white men to keep everybody else out of the academy. This is just something cultural that happens to all of us.”
So what can you do to minimize the impact of your implicit biases when you’re evaluating resumes, applications and manuscripts? One big step in the right direction is blind review: if you don’t know the race or gender of the applicant, you can’t be biased by that information. The other important safeguard is to define explicit, quantitative criteria for judgment before you have seen any applications, so that your biases can’t color how you interpret the information as it comes in. As Handelsman warns, “Being aware of the issues of prejudice, meaning to be fair and inclusive, does not by itself fix the problem because this is something unconscious that we don’t intend.”
That said, the more we know, the more we can work to minimize the impact of bias in science and medicine. In Handelsman’s words, “we’re all in this together.”
Thank you for this article. You can test your own implicit biases at the following website: https://implicit.harvard.edu/implicit/takeatest.html.
This was a good article. How can you get into the field?
So very sad, but true. Hiring and promotions are based on much more than the resume, however. The interview is crucial for assessing drive, which is the most important quality. Someone truly driven (and capable enough) to pick up skills and perform will overcome almost any obstacle. The rest is icing. Gender, race and nationality are irrelevant. Some proficiency in written and spoken English is a must, however.