Interactions with AI tools change the way their users understand the world. As AI becomes more proficient and more ubiquitous, these tools will increasingly disrupt the distribution of interpretive authority in society.
At present, AI tools seem to shift interpretive authority towards the firms that provide AI services. These firms typically rely on their trust and safety departments to minimize their AI's potential to offend or harm users. This “do no harm” ethos is an admirable stance for technology firms. The risk, however, is that these AIs shift interpretive authority away from the epistemic institutions that have historically guided their communities' understanding of the world, such as churches, universities, unions, and clubs.
Over the last year, different groups of researchers, coders, and nonprofits have developed tools that allow epistemic institutions to use AI while maintaining their own interpretive authority. These tools include orchestration layers, user-level APIs, and system prompt editors.
This 90-minute virtual meeting will showcase some of these tools and discuss how epistemic institutions can deploy them.
Speakers
Umang Bhatt, Assistant Professor in Trustworthy Artificial Intelligence at the University of Cambridge
Bernie Boscoe, Assistant Professor of Computer Science at Southern Oregon University
Joshua Joseph, Chief AI Scientist at the Berkman Klein Center for Internet & Society at Harvard University
Vinay Rao, Chief Technology Officer at ROOST (Robust Open Online Safety Tools)
Vincent Sha, Technical Lead and Founding Member at Open Forum for AI, Carnegie Mellon University
Organizers
Umang Bhatt, Assistant Professor in Trustworthy Artificial Intelligence at the University of Cambridge
Allison Stanger, Professor of International Politics + Economics at Middlebury College; Science Board Member + External Professor at SFI
William Tracy, Vice President for Applied Complexity, SFI