
© 2026 Santa Fe Institute. All rights reserved. This site is supported by the Miller Omega Program.


Language — and AI — offers window into human minds, societies, and biases

Open books. (image: Patrick Tomasso/unsplash)
June 2, 2022

The Turkish word o is a non-gendered pronoun that translates as either “he” or “she.” Yet for a long time, if you plugged the sentence O bir doktor into Google Translate, it would come back as, “He is a doctor.” Switch doktor to hemşire—nurse—and the translation would read, “She is a nurse.”

That was a bias in the Google Translate algorithm, and it stemmed from perceptions embedded in language and human minds. While this particular Google problem has been fixed, many others remain.

“Human beings are biased,” says SFI External Professor Mahzarin Banaji. “So if you use the output from human minds to train an artificial system, it will by necessity learn the biases inherent in the human data.” 

It’s an issue up for discussion at a two-day SFI working group meeting titled “Language as a window into mind and society.” Banaji, a Harvard psychologist, organized the meeting as an opportunity for computer scientists, psychologists, and linguists to learn from each other’s work.

The purpose of language is communication — but it’s also much more. “We can elevate our mental states by the poems and novels we read,” Banaji says. “We can also do terrible things with language. We can hurt people, we can lie and deceive.” 

Thanks to databases as wide-ranging as the Internet, researchers can now quantify such biases and harms by analyzing billions of words and sentences to determine which concepts society associates with groups of people defined by race, ethnicity, gender, and other characteristics. For example, men are widely associated with engineering, technology, power, religion, sports, war, and violence, whereas women are associated with sex, lifestyle, appearance, toxic language, and profanities.

“This poses a very challenging socio-technical problem,” says University of Washington computer scientist Aylin Caliskan, who will present her research on gender bias in word embeddings at the SFI meeting. 
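The kind of measurement Caliskan's research relies on can be sketched in a few lines. The idea, in simplified form, is to compare how close a word's vector sits to words from two contrast groups using cosine similarity; a word embedding association test of this type was introduced by Caliskan and colleagues, though the sketch below is not their implementation. The four 3-dimensional vectors here are made-up toy values chosen to make the pattern visible; real studies use high-dimensional embeddings trained on billions of words.

```python
import math

# Toy "embeddings" (illustrative values only; real word vectors are
# learned from large text corpora and have hundreds of dimensions).
vectors = {
    "engineer": (0.9, 0.1, 0.2),
    "nurse":    (0.1, 0.9, 0.3),
    "he":       (0.8, 0.2, 0.1),
    "she":      (0.2, 0.8, 0.2),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word, group_a, group_b):
    """Mean similarity to group_a minus mean similarity to group_b.
    Positive values mean `word` leans toward group_a."""
    sim_a = sum(cosine(vectors[word], vectors[g]) for g in group_a) / len(group_a)
    sim_b = sum(cosine(vectors[word], vectors[g]) for g in group_b) / len(group_b)
    return sim_a - sim_b

bias_engineer = association("engineer", ["he"], ["she"])
bias_nurse = association("nurse", ["he"], ["she"])
print(f"engineer vs. he/she: {bias_engineer:+.3f}")
print(f"nurse vs. he/she:    {bias_nurse:+.3f}")
```

With these toy vectors, “engineer” scores positive (closer to “he”) and “nurse” scores negative (closer to “she”), mirroring the occupational stereotypes the article describes. In embeddings trained on real text, the same calculation surfaces those associations because they are statistically present in the training data.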

Machines use algorithms embedded with implicit bias to make crucial decisions that affect people’s lives — everything from job candidacy and university entrance to recidivism prediction. 

Caliskan gives an example of a woman applying for a tech job. If her resume contains words that reflect gender — a reference to a women’s college or sports team, perhaps — machines may perceive her as a less-than-ideal fit for the job, which historically is associated with men. 

“These are not very optimistic research findings,” Caliskan says, although awareness of the problem is increasing. 

As Banaji says, there is an aspiration that one day we will design machines that make better decisions than humans do. After all, language is a reflection of humanity’s wondrous potential. 

“Some of the gifts that evolution has given our species, such as language, are so basic and so familiar to us that we just fail to be gobsmacked by it as we should be,” she says. “We should be just astounded by the capacity, and its role in improving judgment and decisions.”





News Media Contact

Santa Fe Institute

Office of Communications
news@santafe.edu
505-984-8800






