As algorithms and chatbots spread across daily life, a few crucial questions have emerged. Is AI safe to use? Is it ethical? What protections could help ensure privacy, transparency, and equity as these tools are increasingly used across society?
Psychologists may be among the most qualified to answer those questions, given their training in research methodology, the ethical treatment of participants, psychological impact, and more.
“One of the unique things psychologists have done throughout our history is to uncover the harm that can come about by things that appear equal or fair,” said Adam Miner, PsyD, a clinical assistant professor of psychiatry and behavioral sciences at Stanford University, citing the amicus brief filed by Kenneth Clark, PhD, and Mamie Phipps Clark, PhD, in Brown v. Board of Education.
When it comes to AI, psychologists have the expertise to question assumptions about new technology and examine its impact on users. Psychologist Arathi Sethumadhavan, PhD, the former director of AI research for Microsoft’s ethics and society team, has conducted research on DALL-E 2, GPT-3, Bing AI, and others.
Sethumadhavan said psychologists can help companies understand the values, motivations, expectations, and fears of diverse groups that might be impacted by new technologies. They can also help recruit participants with rigor based on factors such as gender, ancestry, age, personality, years of work experience, privacy views, neurodiversity, and more.
With these principles in mind, Sethumadhavan has incorporated the perspectives of different affected stakeholders to shape products responsibly. For example, for a new text-to-speech feature, she interviewed voice actors and people with speech impediments to understand and address both the benefits and harms of the technology. Her team learned that people with speech impediments were optimistic about using the product to boost their confidence in job interviews and even in dating, and that synthetic voices capable of changing over time would better serve children using the service. She has also applied sampling methods frequently used by psychologists to increase the representation of African Americans in speech recognition data sets.
“In addition, it’s important that we bring in the perspectives of people who are peripherally involved in the AI development life cycle,” Sethumadhavan said, including people who contribute data (such as images of their face to train facial recognition systems), moderators who collect data, and enrichment professionals who label data (such as filtering out inappropriate content).
Psychologists are also taking a close look at human-machine interaction to understand how people perceive AI and what ripple effects those perceptions could have across society. One study by psychologist Yochanan Bigman, PhD, an assistant professor at the Hebrew University of Jerusalem, found that people are less morally outraged by gender discrimination caused by an algorithm than by the same discrimination committed by humans (Journal of Experimental Psychology: General, Vol. 152, No. 1, 2023). Study participants also felt that companies bore less legal liability for algorithmic discrimination.
In another study, Bigman and his colleagues analyzed interactions at a hotel in Malaysia that employed both robot and human workers. After hotel guests interacted with robot workers, they treated human workers with less respect (working paper).
“There was a spillover effect, where suddenly we have these agents that are tools, and that can cause us to view humans as tools, too,” he said.
Many questions remain about what causes people to trust or rely on AI, Sethumadhavan said, and answering them will be crucial for limiting harms, including the spread of misinformation. Regulators, meanwhile, are scrambling to decide how to contain the power of AI and who bears responsibility when something goes wrong, Bigman said.
“If a human discriminates against me, I can sue them,” he said. “If an AI discriminates against me, how easy will it be for me to prove it?”