Findings

AI and unintended consequences

What will researchers miss by relying on it?

Harvey ’20MPH, ’31MD/PhD
Alex Eben Meyer


Artificial intelligence (AI) is increasingly used to maximize productivity in scientific research. But a new paper, coauthored by Yale anthropologist Lisa Messeri, suggests there may be risks in relying too much on AI for research. Messeri—also the author of In the Land of the Unreal—says such overreliance on AI may lead to a “narrowing of the kinds of science being done.”

The concerns aren’t new. Generative AI tools have already caused controversy among scientists. ChatGPT, a popular AI chatbot, has been listed as an author on research papers. AI tools could even replace human subjects in providing data for research.

So, what are the implications for scientific research going forward? That’s the question Messeri and coauthor Molly Crockett, a psychologist at Princeton, tackle in their recent paper. They propose four archetypes to explain how AI fits into the scientific research process. One archetype, “AI as Quant,” describes AI’s potential to analyze data too complex for humans. The other three—“AI as Oracle,” “AI as Surrogate,” and “AI as Arbiter”—each capture a distinct role AI can play in research. Messeri and Crockett argue that scientists who rely too much on AI are susceptible to “illusions of understanding” that limit understanding rather than increase it. (The paper was published in the March 7 edition of Nature.)

Scientists and institutions have a responsibility to question AI, Messeri says. Scientists can harness AI to handle busywork, but should ensure it is not replacing their understanding of complex processes. As for institutions, Messeri worries that they may overinvest in AI in pursuit of productivity, leaving other types of knowledge production—such as qualitative work that AI cannot do—overlooked.

Describing science as a deeply human endeavor, Messeri explains that she is motivated by her own love for science. Many exciting discoveries will come from AI in science, but she encourages scientists to shift the question from “Can we use AI in science?” to “Should we use AI in science?” Her work highlights the importance of considering the practical aspects of AI in science—but also the longer-term consequences.
