The journal Perspectives on Psychological Science, one of the flagship publications of the Association for Psychological Science, has published two new special issues, both edited by SFI researchers. Both take an unusual approach for a disciplinary journal, drawing on research from many fields to address two especially timely topics: one special issue dives into the power and perils of human collectives, while the other takes on the rapid advances in AI. Contributors come from disciplines spanning psychology, anthropology, economics, computer science, social science, political science, and physics. The editors hope the special issues will encourage discussion and lead to more interdisciplinary collaborations.
Two’s a couple. Three’s a collective.
From orchestras to organized labor, service clubs to soldiers, congregations to crews, humans working in groups can accomplish much more than the simple sum of their members. In recent years, that superpower of human collectives has been amplified by the Internet and social media, which allow people to form, join, and act as collectives with unprecedented ease.
Collectives are already known to behave differently from individuals in important ways, write editors SFI Professor Mirta Galesic, SFI External Professor Henrik Olsson (Complexity Science Hub), and David Garcia, professor at the University of Konstanz in Germany, in the editorial for the special issue “The Psychology of Collectives.”
Consider, for example, how it feels to watch a sports event alone on TV versus in a stadium with other fans. At home, you might feel disappointed if your team loses. At the stadium, you might come away with hope for future wins and pride in the fans' unity. The quality of the experience changes completely. Collectives are also known to shape the emotions and beliefs of individuals, as can be readily seen in religious, political, and many other types of collectives.
Because psychologists have long focused on individuals, they have some catching up to do on collective behaviors, explains Galesic. To help, this special issue includes articles written by scholars from different disciplines that explore how collectives can increase polarization, create echo chambers, affect an individual’s willingness to cooperate, and strengthen views that reinforce racial segregation. The publication also offers articles on collective memory, modeling of crowds, how individual minds interact, and technology’s influence on collectives.
Algorithms: friends and/or foes?
These days we’re swimming in algorithms, those complex sets of rules or instructions that allow our phones, computers, cars, social media, etc., to operate. Like a lot of new technologies, they have created problems and raised some big, fundamental questions, says Galesic, who is also a co-editor of the special issue “Algorithms in Our Lives,” along with SFI Professor Melanie Mitchell and Sudeep Bhatia, an associate professor at the University of Pennsylvania’s Wharton School. These big questions include: How do algorithms make decisions? How do humans make decisions? And how do we put these very different kinds of decision-making together?
The articles in this issue explore the contrasting but interacting nature of how algorithms influence human psychology and, vice versa, how our psychology influences algorithms. Among the topics explored are how algorithms can be combined with human judgments to improve forecasting of geopolitical events; what makes false, divisive, and infuriating content go viral on social media; what children can do that AI can't do (yet); and whether it's reasonable to use psychological tests designed for humans to assess AI.
In these early days of AI, it's common for machine and human decisions to be at odds because humans and algorithms often have different objectives, the editors write. The algorithms behind many social media platforms, for example, are designed to maximize clicks regardless of the quality of the content. That can leave people less informed. More consequential mismatches have been found in AI systems used in criminal justice and mortgage lending, which mirror and reinforce society's racial biases and fuel discrimination.
We have a long way to go, and more dialogue between disciplines is vital, says Galesic. But for starters, "Psychologists and computer scientists need to come together to better understand the possibilities and challenges of emerging AI technologies," she says. "We need to design algorithms that support healthy development of both individuals and collectives."