Published: Feb. 15, 2021

Banner image: Morgan Klaus Scheuerman, an Information Science PhD student, has been awarded the Microsoft Research Fellowship for 2021. He studies how and why facial recognition technologies get it wrong. Credit: Casey Cass/CU Boulder

Growing up in a traditional blue-collar family in one of the most conservative counties in Maryland, Morgan Klaus Scheuerman knew early on what it’s like to feel marginalized.

He opted to wait until his mid-20s to come out. High school, he recalls, was filled with dark days.

“There were things said, without knowing I was queer, that were really upsetting. I felt pretty hopeless at times.”

Four-year college seemed out of reach, as no one in his family had gone before and money was tight. So, after graduation, Scheuerman took a customer service job at the local Best Buy, started saving, and tried to imagine a future beyond a hometown that many peers never left.

“I honestly thought that would be me,” he says.

Instead, Scheuerman, now 29, is among the most coveted young minds in the field of social computing. With stints at Google and Facebook already under his belt, and his facial analysis research earning international accolades, he was just awarded Microsoft’s prestigious 2021 Research Fellowship. That includes two paid years to finish his PhD in Information Science at CU Boulder and a chance to collaborate with Microsoft researchers.

Morgan seated in his office

Information Science student Morgan Klaus Scheuerman | Credit: Casey Cass/CU Boulder

“We have labor laws and advertising laws and housing laws against racial and gender discrimination, but there are no laws specific to embedding discrimination into algorithms.”
–Morgan Klaus Scheuerman

His work, as he puts it, seeks one fundamental goal: to show tech companies marginalized people matter.

To do so, he studies, literally, how computers see us, focusing on the facial analysis software ubiquitous in everything from cell phones and computers to surveillance cameras at airports and malls. Already, his work and that of others has found such platforms frequently misidentify those who are not white, male and cisgender. He wants to understand why.

Where in the making of such products do things go wrong? Can they be improved? And should, he dares to ask, some technologies not be made at all?

“We have labor laws and advertising laws and housing laws against racial and gender discrimination, but there are no laws specific to embedding discrimination into algorithms,” Scheuerman says. “The only way we discover such discrimination is when it happens to us.”

When the security camera gets it wrong

In January 2020, Detroit police pulled into the driveway of a Black man named Robert Williams, handcuffed him in front of his two young daughters and hauled him to jail. Police determined later that a facial recognition service had incorrectly matched his driver’s license photo to a still image from a security video of a shoplifting incident.

When police asked him if the video image was him, Williams responded: “No. You think all Black men look alike?”

All charges were dropped. But according to press reports, at least three Black men are known to have been wrongly arrested in the United States based on glitches in facial recognition software, and countless others have been profiled or harassed.

“The fear that a lot of people have had over this technology is already being realized,” says Scheuerman. While facial recognition software can be remarkably accurate when assessing the gender of white men, it misidentifies women of color one-third of the time, recent research shows.

In one particularly egregious case of mistaken identity, Google’s photo-categorization software began in 2015 to label Black people as “gorillas.” Google promptly apologized and took steps to fix the problem.

Scheuerman’s own research has shown that, although the systems are adept at identifying cisgender women (those assigned female at birth and identifying as such) or cisgender men, they falter when faced with people who don’t neatly fit those binary categories.

“While there are many different types of people out there, these systems have an extremely limited view of what gender looks like,” says Scheuerman, who identifies as male.

Notably, when submitting his own picture—his long, blue-tinted hair framing his high cheekbones—to several facial recognition platforms, half got his gender wrong. Such mistakes could potentially lead to real harm, he warns. A match-making app could set someone up on a date with the wrong gender. A mismatch between what a facial recognition program sees and the documentation a person carries could prevent someone from clearing airport security.

Facial analysis software could also be used as a tool for discrimination. For instance, the Chinese government has reportedly been using facial analysis to identify and target members of the Uighur ethnic minority group.

Then, there are more subtle effects, Scheuerman notes.

“These systems run the risk of reinforcing stereotypes of what you should look like if you want to be recognized as a man or a woman, and that impacts everyone.”

Diversifying the pipeline

So, how does an artificial intelligence platform come to recognize one person as a Black woman while pegging another as a white man? Why do they get it wrong? “It is a very human problem,” Scheuerman explains.

Through online recruiting services like Amazon Mechanical Turk, tech companies often crowdsource workers to sort photo after photo into categories, including Black or white, male or female.

That data is then used to train computer algorithms that in turn teach our smartphones.

But each of those human annotators comes with his or her own cultural biases and sometimes the so-called “training data” itself lacks diversity, his research suggests.

Elsewhere in the process, computer scientists and engineers make other decisions, considering what technology should be developed, how it could be used, and who might be hurt by it.

A graphic showing how facial analysis software works

Facial recognition software tends to work better in identifying white cisgender individuals, research shows. Illustration: Morgan Klaus Scheuerman

Often, marginalized communities are left out of those discussions.

“I want to know when and where decisions are made in the pipeline and how researchers, practitioners and policymakers can intervene to shape a more equitable future,” he says.

‘Queer is your superpower’

Scheuerman’s advisor Jed Brubaker, an assistant professor in the Department of Information Science who recruited Scheuerman to join his Identity Lab, says that people from marginalized communities tend to bring wholly unique perspectives to the field of social computing.

“In a way, queerness is your superpower,” Brubaker said. “Having been on the outside lets you see things from multiple angles and consider things that are invisible to most people.”

Scheuerman’s unique blend of technical knowledge and social science expertise has also served him well, Brubaker says.

“It is a rare student who can bridge both.”

After Best Buy, Scheuerman worked as a barista to help pay his way through college. He credits the early classes he took in gender studies at Goucher College in Baltimore for emboldening him to come out to his family. They surprised him with their warm acceptance.

His master’s work at the University of Maryland, studying how different computer systems impact teens, the visually impaired and racial minorities, led him in 2018 to CU Boulder, home to one of the earliest information science departments in the country.

Only 10 students nationwide, out of more than 1,000 applicants, were awarded the Microsoft Fellowship.

“This entire area just felt so inaccessible to me growing up,” he says. “I just feel so thankful.”

Scheuerman stresses that his intent is not to do away with technology. He uses facial analysis software every day when he logs into his phone or tags people on social media. But he would like to play some small role in bringing a new mindset to the field.

“I would like to see companies begin to prioritize people over technological innovation,” he says. “I don’t want the systems we design to hurt anyone—whether it’s intentional or not.”

This article is featured in the Spring 2021 digital issue of CMCI Now magazine.