
May 22-23, Gurley Forum
Freeman Dyson wrote that “A good scientist is a person with original ideas. A good engineer is a person who makes a design that works with as few ideas as possible.” Dyson's premise is that ideas that work need to be simple enough to implement and powerful enough to solve problems. Emergent Engineering expands this idea to consider good designs that work within, or against, evolving systems. In these cases, the functional outcomes of the system are not preordained but emergent; the intentions of the engineers are deformed by the unanticipated responses of adaptive agents.
These include attempts to Engineer-X, where X = {environments, minds, societies, markets, political institutions, and companies}. The question that we are asking is why finding "designs (or technologies) that work with as few ideas as possible" has proved so difficult in complex systems. And, of course, what might an effective science of Emergent Engineering look like?
Based on recent deliberations at SFI, we are organizing these questions into a meeting aimed at engaging practitioners with four themes:
When should an emergently engineered technology be complicated and when should it be simple? The game of cricket has 42 laws. The U.S. Constitution has 4,543 words. The Uber application contains around 10 million lines of code. A 3 nm chip packs around 300 million transistors per square millimeter and requires more than 10 million lines of register-transfer-level (RTL) code to describe the flow of digital signals between hardware registers and the logical operations performed on them.
Questions:
How should we think about the complication of a technology in relation to the adaptive system and functions that it seeks to support?
Is the complication of a technology inversely related to the complication of the system it seeks to regulate, and if not, what relationship might we expect?
John Rawls described the veil of ignorance as a willful ignorance required to achieve just outcomes. Ignorance provides a mechanism for avoiding a slew of biases and neutralizing ineffective heuristics. Sealed-bid auctions, for example, are conducted behind a veil of ignorance to achieve fair allocations of goods.
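As a toy illustration of an engineered veil of ignorance, the sketch below runs a sealed-bid second-price (Vickrey) auction in Python: no bidder sees another's bid before the allocation is decided, and the winner pays the second-highest bid, which makes truthful bidding a dominant strategy. The bidder names and values are invented for illustration.

```python
# Toy sealed-bid second-price (Vickrey) auction: the "veil" is that no bidder
# sees any other bid before the allocation is decided.

def vickrey_auction(sealed_bids):
    """Return (winner, price) for a dict of {bidder: bid}.

    The winner is the highest bidder; the price is the second-highest bid,
    so no bidder gains by misreporting their true value.
    """
    if len(sealed_bids) < 2:
        raise ValueError("need at least two sealed bids")
    ranked = sorted(sealed_bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

if __name__ == "__main__":
    # Hypothetical bids, submitted without knowledge of one another.
    bids = {"alice": 120, "bob": 95, "carol": 110}
    winner, price = vickrey_auction(bids)
    print(f"{winner} wins and pays {price}")
```

The fairness here is procedural: it comes from what the mechanism withholds, not from what it reveals.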
Modern LLMs are veiled by complexity: trillions of parameters place their decisions beyond human comprehension. The inability to interrogate the decision process of an LLM limits its credibility and its use in consequential settings, from legal proceedings to drug development.
Questions:
Veils of ignorance are engineered, whereas veils of complexity tend to be accidental. Is there a means of seeing through a veil of complexity?
When might a veil of complexity be desirable? One such example might be caching and automaticity in the nervous system, where direct access to complicated policies is obscured (see the sketch below).
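As a loose software analogy, rather than a claim about the nervous system, the sketch below uses memoization to create a deliberate veil of complexity: callers receive a fast cached answer while the complicated policy that produced it stays hidden behind the cache. The function names and the toy "decision" are hypothetical placeholders.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def learned_policy(state):
    """Stand-in for a complicated, slowly computed policy.

    After the first call for a given state, callers get the cached answer
    and never re-enter the expensive computation: the policy's internal
    reasoning is effectively veiled behind the cache.
    """
    # Placeholder for an expensive deliberative computation.
    return sum(ord(c) for c in state) % 3  # arbitrary "decision"

# First call pays the full cost; later calls are automatic lookups.
print(learned_policy("red light ahead"))
print(learned_policy("red light ahead"))  # served from cache
```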
Companies, corporations, and cities all seek to achieve large-scale positive emergent returns. Data from urban scaling supports the idea that wealth production scales super-linearly with population size. Unfortunately, so do the incidence of crime and the rate of disease transmission.
Invention in science does not scale with the size of teams; if anything, the contrary relation holds: per capita novelty falls off as groups grow in size. Chess is not a rule system that works for teams, and a sport like football is not a rule system that works for pairs.
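To make the two scaling claims concrete, the sketch below evaluates the power-law form Y = Y0 * N^beta used in urban-scaling studies, with a super-linear exponent of roughly 1.15 for urban outputs such as wages (and for crime), against a sub-linear exponent of 0.8 for total team novelty; the team exponent is an illustrative assumption, not a measured value.

```python
# Toy comparison of super-linear vs. sub-linear power-law scaling.
# Urban outputs: Y = Y0 * N**beta with beta ~ 1.15, so per-capita output rises with N.
# Team novelty: total output assumed to scale as N**0.8 (illustrative),
# so per-capita novelty falls as teams grow.

def per_capita(y0, n, beta):
    """Per-capita quantity under a power-law scaling Y = y0 * N**beta."""
    return y0 * n ** beta / n

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"city of {n:>10,}: per-capita output x{per_capita(1.0, n, 1.15):.2f}")

for n in (1, 2, 5, 10, 50):
    print(f"team of {n:>3}: per-capita novelty x{per_capita(1.0, n, 0.80):.2f}")
```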
Questions:
Is there any principled way of knowing when a technology (mechanical or cultural) will scale up in an adaptive setting?
Are there emergent outcomes that can only be achieved at the limits of very small or very large scales of implementation?
Traditional technology is highly non-adaptive, and the time scales of engineering are considerably longer than the time scales of the adaptive response to technology. This presents significant problems for engineering operating systems and mission-critical software, from flight control systems to stock exchanges, which need to respond to changing demographic regularities.
On the horizon, a number of adaptive technologies are being planned to interact with their adaptive target populations: learning robots, agentic AIs, blockchain-based smart contracts, anti-tumor viruses, and personalized software tutors.
Questions:
How might we optimize the time scales of adaptive engineered systems to coordinate effectively with an emergent response?
Open-ended evolution will provide novel solutions but will also generate unanticipated outcomes. How might we preempt these outcomes, or engineer kill switches for adaptive technology (sketched below)?
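One minimal pattern for the kill-switch question is a watchdog that halts an adaptive process whenever a monitored metric drifts outside a pre-declared envelope. The sketch below shows only that pattern; the agent, the metric, and the bounds are invented placeholders, not a proposal from the meeting.

```python
import random

class KillSwitch:
    """Halt an adaptive process when a monitored metric leaves its envelope."""

    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        self.tripped = False

    def check(self, metric):
        # Trip permanently the first time the metric leaves [lower, upper].
        if not (self.lower <= metric <= self.upper):
            self.tripped = True
        return self.tripped

def adaptive_step(state):
    """Stand-in for an adaptive agent: drifts randomly each step."""
    return state + random.gauss(0.0, 0.1)

if __name__ == "__main__":
    random.seed(0)
    state = 0.0
    guard = KillSwitch(lower=-1.0, upper=1.0)  # invented envelope
    for step in range(10_000):
        state = adaptive_step(state)
        if guard.check(state):
            print(f"kill switch tripped at step {step}, state={state:.2f}")
            break
    else:
        print("agent stayed within its envelope")
```

The hard part, as the question above suggests, is declaring a sensible envelope before the unanticipated outcomes are known.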
This event is supported by the Robert Wood Johnson Foundation Grant #81366, as part of a three-year research program at SFI on Emergent Engineering, and by the Siegel Family Endowment.