“CyborgMeld” by Randy Adams

In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing posed his famous test: If a computer can engage in conversation in such a way that a human judge cannot distinguish the computer from a human being, it passes the test and we say it can think. While Turing’s test met with no end of objections, it became a catalyst for many of the advances in artificial intelligence and machine learning that we see today. To a striking degree, computers can do many of the things we previously thought only intelligent minds could do — they can translate between languages, recognize faces, beat human champions at complex games, and drive cars in traffic-filled cities.

Yet one area of human intelligence is notably unyielding to machines: the meaning of things. While humans are able to understand the situations they encounter, artificially intelligent systems do not possess the same understanding. When Google translates for us, it does not grasp the meaning of what it displays. The outputs that machines learn to display do not contain the rich meanings that humans see in them.

In 1986, mathematician Gian-Carlo Rota wondered “whether or when artificial intelligence will ever crash the barrier of meaning.” Rota’s reflection continues to challenge scientists today, and it inspires SFI’s workshop, “Artificial Intelligence and the ‘Barrier of Meaning,’” which takes place in Santa Fe October 9–11, 2018. The workshop examines the key impediments to building machines that understand meaning. It asks what meaning would look like for artificial intelligence, and to what extent understanding is necessary for artificially intelligent machines to approach human-level abilities in language, perception, and reasoning.

The rise of big data in the past decade has meant that computers are increasingly successful at performing tasks that we usually assume require intelligence, according to SFI Science Board Member Melanie Mitchell (Portland State University), who is co-organizing the workshop with Science Board Member Barbara Grosz (Harvard University). But in some cases, Mitchell remarks, modern computers are surprisingly fragile. One clear sign that computers do not function like intelligent beings, even when they perform tasks that appear intelligent, is that they can be tricked by what are called “adversarial examples” into making mistakes that humans would not make. Researchers at Carnegie Mellon, for example, developed specially patterned eyeglass frames that cause facial-recognition systems to misidentify the person wearing them.
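To make the idea concrete, here is a minimal sketch of one common way adversarial examples are generated, the fast gradient sign method (FGSM). It is not the Carnegie Mellon glasses attack described above; the pretrained ResNet model, the random stand-in image, and the perturbation size are all illustrative assumptions, and the weights argument assumes a recent version of torchvision.

```python
# Minimal FGSM (fast gradient sign method) sketch in PyTorch.
# Illustrative only: the model and random "image" are stand-ins, not the
# facial-recognition system or the glasses attack mentioned in the article.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

# A stand-in input (batch of 1, 3x224x224 pixels in [0, 1]); a real attack
# would start from an actual photograph, normalized as the model expects.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Record the class the unmodified model assigns to this input.
logits = model(image)
original_pred = logits.argmax(dim=1)

# Take the gradient of the loss with respect to the input pixels
# (rather than the model weights, as in ordinary training).
loss = F.cross_entropy(logits, original_pred)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.03  # perturbation size; small enough to be barely visible
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    adversarial_pred = model(adversarial).argmax(dim=1)

print("original class:", original_pred.item(),
      "adversarial class:", adversarial_pred.item())
```

The perturbation is imperceptible to a person, yet it can flip the model’s prediction, which is the brittleness Mitchell points to.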

Mitchell asks, “Does the lack of understanding inevitably render these systems fragile, unreliable, and vulnerable to attacks?”

The workshop brings together a diverse group of researchers from disciplines including psychology, biology, social science, information theory, and artificial intelligence. Mitchell hopes it will be the first of many such conferences. Computers have made impressive advances, she remarks, but getting them to, say, make sense of a text remains remarkably difficult. Workshop participants will ask questions about new developments and revisit questions that often leave them divided, such as the long-standing cognitive science question: How much of intelligence is innate to humans?