Rosetta Stone, 2016. Courtesy of Wikimedia Commons
Zoom
CounterBalance Seminar
9 AM US Mountain Time
Speakers:
Simon DeDeo and Juliet Shen

Our campus is closed to the public for this event.

Overview: 

During the next year, SFI’s CounterBalance seminars will run a special series of meetings, called Deconstructing Meaning, examining the technical and ethical complexities of content parsing in the age of AI. Co-hosted by Siegel Family Endowment, the series is organized in collaboration by the Santa Fe Institute, the Trust & Safety Professional Association, and Google. Members of the CounterBalance community can access past events in the series through this link, using the email address they registered with.

Background:

The proliferation of digital content presents significant challenges for technology platforms, which bear the onus of curating content responsibly. Accurate parsing of this content is key to upholding content responsibility, a broad term describing the maintenance of healthy online communities, the protection of users, and the preservation of platforms’ reputations. Failure to parse content accurately can lead to the spread of misinformation, hate speech, and other harmful material. Additionally, regulators striving to prevent societal harms increasingly scrutinize tech platforms’ content moderation practices, and reliable, robust parsing techniques help demonstrate due diligence and compliance with emerging regulations on online content. Despite advances in content moderation techniques and technologies, accurate content parsing remains elusive: distinguishing between intent, sentiment, and context poses intricate technical and ethical dilemmas. The rise of generative AI further complicates this landscape, with its ability to produce human-quality text that can both illuminate and obfuscate meaning.

Natural language processing (NLP) and machine learning form the backbone of content parsing. Challenges arise due to the nuances of human expression and the increasing sophistication of generative AI models. Intent, for example, may be shrouded in sarcasm or disguised as humor, while sentiment can be multifaceted and easily misconstrued. Understanding context demands significant knowledge about cultural references, current events, and individual backgrounds. Additionally, real-time content parsing poses substantial computational demands. 

Addressing the complex challenges of content parsing, with its intertwined hardware, software and ethical considerations, holds profound significance for tech platforms, regulators, and the future of every online participant’s experience. 


Third Session

The third virtual 85-minute session will take place on January 16 at 9 AM US Mountain Time and will explore the technological considerations that enable (or hinder) machines from achieving human-like comprehension of text, such as the difficulties of understanding context, sarcasm, irony, and cultural nuance. The discussion will also highlight potential solutions and emerging technologies that aim to bridge this gap, including advanced machine learning techniques, knowledge graphs, and multimodal approaches. The session is structured as a salon discussion, with initial remarks by speakers Simon DeDeo (SFI & Carnegie Mellon University) and Juliet Shen (Columbia), followed by a roundtable discussion with a panel of experts.

Speakers

Simon DeDeo, SFI External Professor & Associate Professor at Carnegie Mellon University
Juliet Shen, Product Lead at SIPA’s Trust and Safety Tools Consortium, Columbia

Panelists

Laura Alemany, Professor at Universidad Nacional de Córdoba and Board Member at AI League for Good
Ian Eisenberg, Head of AI Governance Research at Credo AI
Timothy Quinn, Founder at Dark Data Project

Moderator

Sujata Mukherjee, Trust and Safety Leader

Organizing Committee

Jason Djang, Senior Program & Strategy Lead, Americas, Trust & Safety Global Engagements at Google
Jan Eissfeldt, Global Head, Trust & Safety at Wikimedia Foundation
Amanda Menking, Research and Program Director at Trust and Safety Foundation
Sujata Mukherjee, Trust and Safety Leader
William Tracy, Vice President for Applied Complexity, SFI
Charlotte Willner, Executive Director at Trust and Safety Professional Association
