Summertime Frenzy 1: Singapore and Santa Fe

Tanner Lund
11 min read · Sep 18, 2023


This is part 1 of a 3-part series spanning my travels across 3 continents this summer and the lessons I learned about collective intelligence.

Music to set the mood for the trip

Intro

This summer I had 3 conferences, a summer camp, and a hackathon to attend across 3 continents, all while balancing work where possible and slipping in my first real vacation in a while along the way.

Singapore — SREcon23 APAC

I had two talks to give at SREcon APAC, the same as the last time I attended in 2019. Being an industry conference, it was largely focused on practical concerns, including how-to guides for various technologies, insights into how various organizations (like Sony PSN) manage their operations, and stories about lessons SREs have learned the hard way while operating in complex, dynamic socio-technical environments.

Anyway, here are my two talks:

Functional Resonance Analysis: Diagramming Your System

Nobody’s system works exactly the way they think it does. On top of that, systems of people and software are constantly changing, resulting in a regular need to update our limited understanding of how things actually work — where the sources of our success are, where our risks are, and how things behave.

The video for the other talk is still private at this time…

Patterns, Not Categories: Learning Across Incidents

Outage pattern analysis is hard! There have been many attempts to learn across multiple incidents. Folks look for categories, tags, causes, etc. to identify what’s brittle or risky in their system, sometimes even using statistical models to help make sense of the data. However, the results often prove unsatisfying or non-actionable, or they don’t tell you anything you didn’t already know from other sources.

I do really think patterns are the path forward for people trying to figure out commonalities across complex system failures. They form a nice midpoint between “laws” and “anecdotes”. Laws need to be general enough to apply to all situations. Anecdotes are hardly transferable to another situation. Complex systems have lots of interactions and therefore are full of special, anecdotal cases…but there are still universal laws. To express learning between the two extremes you need something like patterns.

I did meet some nice folks in Singapore. It was a good reminder that the nature of in-person interactions is quite different — and more prone to serendipity — than the usual online ones. I bonded with folks over food, learned about several companies, systems, and countries, and just generally was in a more pensive mood.

When you do not walk the same paths as usual, you are less likely to think the same thoughts. If you like your situation, stay remote or keep doing what you’re doing. However, if you need a change you should go for a walk. Go meet someone. Go open yourself up to serendipity.

You never know who or what you might find around the corner

Santa Fe — Santa Fe Institute’s Collective Intelligence Symposium & Short Course

The Santa Fe Institute (SFI) is a big name in the complexity science space and is well known among those who study systems, multi-agent simulations, and emergent phenomena. This was the first year that they decided to put on a symposium like this, drawing folks from various academic disciplines and a non-trivial number of interested participants from various industries (including software) as well.

I used it as an opportunity to ponder.

To the ancients, a labyrinth was a way to trap monsters or evil spirits. Medieval Christians turned them into places to walk, ponder, and pray. I walked this one a couple of times during the week.

What is Collective Intelligence?

Much of the symposium was spent musing upon how Collective Intelligence (CI) should be defined. There is no consensus on this, which poses a challenge that the organizers wanted to address. Here are some of the suggested definitions and attributes of CI as well as example systems:

“Adaptive behavior of groups”

Intelligence can be tied to adaptability. To some extent I find this direction promising, as adaptation is more necessary in less predictable environments, even when you include things like pre-adaptation (planning?). Guy Theraulaz argues that CI is “adaptability and solving generic tasks”. Related:

  • Synchronicity — looking for patterns when causality is not clear (careful here)
  • Swarming — complex collective behavior that derives from simple individual decisions
  • Stigmergy — indirect coordination, often through a mediator, such as ants coordinating through pheromone trails on the ground (a minimal sketch follows this list). Contrast this with niche construction?
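Here is a minimal sketch of stigmergy in code, purely my own toy and not anything presented at the symposium: ants choose between two paths in proportion to pheromone strength, reinforce whichever path they take, and the environment “forgets” through evaporation.

```python
import random

# Minimal stigmergy sketch (illustrative toy, not from any talk):
# ants choose between two paths, deposit pheromone on the one they
# take, and pheromone evaporates each step. Coordination happens
# entirely through the shared environment.

DEPOSIT = 1.0      # pheromone laid per ant per trip
EVAPORATION = 0.1  # fraction of pheromone lost per step

pheromone = [1.0, 1.0]  # initial pheromone on paths A and B

for step in range(200):
    for _ in range(10):  # 10 ants per step
        total = pheromone[0] + pheromone[1]
        choice = 0 if random.random() < pheromone[0] / total else 1
        pheromone[choice] += DEPOSIT
    # evaporation: the environment "forgets", shaping the outcome
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]

print(f"pheromone after 200 steps: A={pheromone[0]:.1f}, B={pheromone[1]:.1f}")
```

Positive feedback through the environment usually locks the colony onto one path, without any ant ever communicating directly with another.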

Examples of animal coordination include:

  • Wasps use 3 or so rules to decide where to build parts of their nest
  • Ants follow simple rules for nest construction as well. The evaporation rate of their pheromones due to the environment affects the resulting structure’s shape
  • Fish follow a basic “boid”-like model for school swimming — and fish size matters (a toy sketch follows this list)
  • Swarming vs schooling vs milling fish
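For the curious, here is a minimal boids-style sketch using Reynolds’ three classic rules (separation, alignment, cohesion). The weights and parameters are my own arbitrary choices, not the researchers’ actual fish models:

```python
import numpy as np

# Minimal boids-style flocking sketch (my own illustration; the
# speakers' fish models are more sophisticated). Each agent follows
# three local rules: separation, alignment, and cohesion.

rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 10, (N, 2))   # positions
vel = rng.normal(0, 1, (N, 2))     # velocities

def step(pos, vel, radius=2.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)
        if not nbrs.any():
            continue
        cohesion = pos[nbrs].mean(axis=0) - pos[i]     # steer toward neighbors' center
        alignment = vel[nbrs].mean(axis=0) - vel[i]    # match neighbors' heading
        separation = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid crowding
        new_vel[i] += dt * (0.5 * cohesion + 0.5 * alignment + 0.3 * separation)
    return pos + dt * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)

# Polarization near 1.0 means everyone is moving the same way (schooling)
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print(f"polarization: {np.linalg.norm(headings.mean(axis=0)):.2f}")
```

Polarization near 1 corresponds to schooling; low polarization with coherent rotation would look like milling, and low polarization without it like swarming.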

Something brought up very frequently at SFI is the idea of “the edge of chaos”, which is attributed to Chris Langton. I think Wolfram also discussed it? Supposedly life, or lifelike behavior, is most generally found on the boundary between orderly and chaotic behavior. That is what makes it unpredictable but not “just random”.
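Langton explored this with cellular automata, and Wolfram’s “Class 4” rules are the textbook illustration. Here is a minimal sketch of one such rule, Rule 110, which sits at that boundary (and is even Turing-complete):

```python
# Elementary cellular automaton sketch: Rule 110 is a standard
# example of "edge of chaos" behavior (Wolfram Class 4): neither
# frozen nor purely random.

RULE = 110
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # single seed cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # each cell's next state is looked up from the rule number's bits,
    # indexed by the (left, center, right) neighborhood
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```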

It’s important to be able to filter; to know what is signal and what is not.

Universal Laws of Collective Intelligence

Physicists seek to “see things simply” and find “unified principles”. First principles are superior to analogy, etc. Yet, how can you do that in the sort of systems that gave rise to chaos theory? If each system is sensitive to initial conditions and is unique in its expression of the many interactions between its parts, how can we find and define laws about them?

Random quotes and assertions:

“We’ve got no money so we’ve got to think!”

“Standing on the shoulders of giants is Collective Intelligence”

“Economies of scale are Collective Intelligence”

Part of the argument for economies of scale is that each human’s energy use is equivalent to that of a 30,000 kg gorilla. We are able to expend far more energy per person due to our collective action.
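A back-of-the-envelope check of that figure using Kleiber’s law (metabolic rate scaling roughly as mass^0.75). The specific numbers below are my own rough assumptions, not the speaker’s:

```python
# Back-of-the-envelope check of the "30,000 kg gorilla" claim using
# Kleiber's law (metabolic rate ~ mass**0.75). These numbers are my
# rough assumptions, not the speaker's.

human_mass = 65.0          # kg
human_basal_power = 90.0   # watts, roughly basal human metabolism
societal_power = 11_000.0  # watts per capita, rough total energy use
                           # in a wealthy industrial society

# Mass of an animal whose metabolism alone would burn societal_power:
# societal_power / human_basal_power = (M / human_mass) ** 0.75
M = human_mass * (societal_power / human_basal_power) ** (1 / 0.75)
print(f"equivalent animal mass: {M:,.0f} kg")  # ~39,000 kg
```

That lands in the same ballpark as the quoted 30,000 kg, which is all a claim like this needs.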

There was a side discussion about collective stupidity for a bit here. I found it arrogant — participants were eager to give examples of collective action they thought was stupid, as if they were the arbiters of what is a “good decision”. That is part of the whole problem! What even is intelligence? What is a “good decision”? Society is far from settled on the answers to those questions, but don’t let it stop you from calling people names.

Similarly, it was asserted (and taken at face value) that “more intelligent people have fewer children”. That’s…not the causal variable. It is a good way to pat yourself on the back for your life decisions though.

Back to Laws

Laws can be universal or contingent.

Universal: physics, geometry, logic. Invariant.

Contingent: emergent, biological, effective, based on history.

“Evolution discovers physics”. Contingent laws then must have some relationship to universal laws. Of what nature are the laws governing intelligence, individual or collective?

“New physics emerges”

Some argue that we ought to start our search for definitions by considering computation. Specifically, things like the generalized Landauer bound (a baseline calculation follows the list below).

  • Ribosomes are pretty close to the theoretical limit of energy efficiency for what they do
  • All principles are bounded, so considering bounds and limits is fundamental to definition
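For the baseline: the classic Landauer bound puts a floor of k_B·T·ln 2 on the energy needed to erase one bit; the generalized versions discussed extend this to other operations. A quick calculation:

```python
import math

# Classic Landauer bound: erasing one bit of information costs at
# least k_B * T * ln(2) in energy. The "generalized" versions extend
# this; this is just the textbook baseline.

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # kelvin, roughly body temperature

bound = k_B * T * math.log(2)
print(f"minimum energy to erase one bit at {T} K: {bound:.2e} J")
# ~3e-21 J, the scale against which claims like "ribosomes operate
# near thermodynamic limits" get measured
```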

There was an argument at some point that there are 3 levels of life:

  • Materials
  • Constraints
  • Optimizations

This sounds interesting, but I ask you: what variable is optimized in a basketball game? I fear we emphasize optimization far too much. It plays a part yes, but do you really think it’s the guiding principle?

“Inventing Temperature” — It is possible to come to measure something that once was thought impossible to define. I don’t think current attempts have been much good, but we can learn to quantify, as we did with information.

Observations on Life

  • Equilibrium may be equivalent to death. If you have no change you are not living.
  • Life has broken symmetry. It is chiral. It’s spatiotemporal. It’s temporally irreversible.

Herbert Simon argued that life had internal simplicity but external complexity. Humans are simple, so the complexity comes from our environment.

While most simulations and formulations of artificial life do have simple, ludic environments, and adding complexity to those environments would significantly affect behavior, this view is not very “agentic”. Humans are quite complex individually. We start acting more similarly when in groups, but that is then a group property, not an individual one.

Mikhail Bongard spoke of recognition as compression. We predict what people are going to say to us; it helps comprehension and makes up for holes in language — a form of tacit knowledge.

“Language is a cultural technology”

  • This one is under fire in the age of GPT.

There was far too much time dedicated to discussing GPT. I’m not opposed to such discussion, and in fact I think it is necessary. LLMs and other recent advances represent a new type of intelligence, along a slightly different axis than we are used to. It is just that everything I heard at SFI betrayed limited experience with LLMs and more or less consisted of things I’d already seen many times before on Twitter. SFI should be able to provide a richer discussion than that, no?

Jessica Flack discussed ideas on coarse graining, which has to do with getting macro variables in complex environments. “Putting out of focus irrelevant details”.

Do we see predictable relationships among macro variables? (See “Inventing Temperature” again.) A major part of this is termed “downward causation”: macro behaviors driving micro behavior.
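Here is a toy illustration of coarse graining (my own, not Flack’s actual methodology): block-average a noisy micro-state into a handful of macro variables, putting the irrelevant details out of focus.

```python
import numpy as np

# Minimal coarse-graining sketch (my own toy, not Flack's method):
# take a noisy micro-state on a grid and "put irrelevant details
# out of focus" by block-averaging into macro variables.

rng = np.random.default_rng(1)
micro = rng.random((64, 64)) < 0.3   # micro-state: 64x64 binary grid
BLOCK = 8

# Macro variable: fraction of active sites in each 8x8 block
macro = micro.reshape(8, BLOCK, 8, BLOCK).mean(axis=(1, 3))

print(micro.shape, "->", macro.shape)        # (64, 64) -> (8, 8)
print(f"global density: {micro.mean():.3f}")
print(f"macro densities:\n{np.round(macro, 2)}")
```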

I’m not sure I see evidence that the arrow of causality goes in that direction in the examples she gives, but I’m open to the idea that just as behaviors can emerge on a system level that are not shown in the parts, this emergent behavior can result in things that reinforce/continue that behavior. I’d need to think about it more.

Variable Scaling, Small to Large

Supposedly, energy-dominated variables tend to scale sublinearly, while information-dominated variables tend to scale superlinearly. If this is the case, then when we look at system or collective behavior we should be asking: what kind of variables are you looking at, and what are their attributes? (A toy classification sketch follows the list below.)

  • Inter-species metabolism?
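A toy sketch of how you might classify a variable this way: fit a power law y = c·x^β on log-log axes and check whether β sits below or above 1. The data here is synthetic (β = 0.75, Kleiber-like metabolic scaling); nothing in it comes from the talks:

```python
import numpy as np

# Toy sketch: classify a variable as sub- or superlinear by fitting
# a power law y = c * x**beta on log-log axes. The data is synthetic
# (beta = 0.75, Kleiber-like), purely for illustration.

rng = np.random.default_rng(2)
x = np.logspace(0, 4, 50)                       # e.g., body mass or city size
y = 3.0 * x**0.75 * rng.lognormal(0, 0.1, 50)   # noisy sublinear variable

beta, log_c = np.polyfit(np.log(x), np.log(y), 1)
kind = "sublinear" if beta < 1 else "superlinear"
print(f"fitted exponent beta = {beta:.2f} -> {kind}")
```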

Information bottlenecks can arguably be quite valuable, something my labmate Michael Crosscombe would probably agree with. On the near side of a bottleneck, the world is more predictable, which maybe helps survival. The constraints of bottlenecks (and the interesting things that happen when you open them up and mix formerly isolated things together) result in a lot of diverse life and behaviors.

Constraints breed creativity. Creativity is tied to intelligence. Constraints are where intelligence makes the difference?

I know neither the artist nor the collective of people who supported and influenced them

“Evolution has stupid, pass/fail feedback”

Tools for system modeling!

I haven’t used either of them but they look interesting.

Iain Couzin has spent a lot of time studying collectives. He asks: “what is the relationship between individual and collective intelligence?” Based on his years of research, he asserts:

  • Individuals become less sensitive in groups — it results in fewer false positives
  • Uninformed individuals promote democratic consensus in animal groups
  • All flocking models are wrong
  • Human and sociotechnical systems are not under the same selection pressure as schools of fish (and other collective systems), so we must be measured in what we assume holds true for them
  • “Approach marvelous systems with curiosity”. Respect and awe ought to be commonplace when looking at such majestically orchestrated wonders.

Real decision making does not reflect common game-theoretic or Bayesian principles. Instead, decisions are driven by:

  • Differences between options
  • Time
  • Geometry/Energy/Cost

Decisions are more or less done as bifurcations: complex choices become a series of binary choices made one after another.

Interestingly, this is more or less in harmony with Naturalistic Decision Making as established by Gary Klein et al. It’s a field that studies macro-cognition, often by looking at human expertise and decision making in real life situations. Statistical intelligence is cool and powerful and interesting, but it is not the human method of thinking.

Pondering

Francois Chollet’s “On the Measure of Intelligence” is another interesting take from an AI mainstay on what intelligence really is and how to find or create it. I agree with a lot of his criticisms and his general sense, but I find the alternative metrics he proposes also fall short of what I imagine.

David Krakauer had the following to say:

  • IQ is BS (already established by Taleb, but good to see in this context). It’s driven by “physics envy”: the desire to look as rigorous as physics, which leads to all kinds of window dressing that doesn’t actually advance the science.
  • Intelligence is more like architecture, at least according to the Frank Lloyd Wright definition of architecture:

That in which all the parts are related to the whole and the whole is related to the parts.

The “collective” part of collective intelligence is:

  • communication
  • coordination
  • consensus
  • computation?

At different scales of collectives, either the computation or the other three are the dominant part of intelligence. I would assume that has to do with the costs of and constraints at each scale.

He also briefly mentioned “dual coding theory”, which suggests that the brain uses verbal and visual channels for information processing. If so, then the information is inherently tied to the way it is received. I suppose that makes sense. And if so, then information considered as a pure, quantifiable concept is insufficient to describe intelligence or life.

Smart people do a lot with very little

Does this perhaps relate to energy efficiency? Informational efficiency? Better prediction or encoding of the environment’s advantages in one’s bones? Is this where adaptability comes into play?

We are forced into abstractions, general principles, and science because we forget stuff. They are all coping mechanisms for our limited memory, he argues. I find that interesting, as I’m pretty sure forgetting is beneficial. That doesn’t make him wrong, but it implies a chain of compensation and adjustments (downsides of remembering everything → forget stuff to help decision making/manage mental constraints → develop abstractions to compensate for the downsides of *not* remembering everything).

I wonder if there is a cleaner way to deal with the downsides of remembering everything?…

(You will note that in all of what I’ve mentioned, there is nothing about risk taking. It’s a glaring omission!)

“Reframing Superintelligence” by Eric Drexler takes a different approach to defining what everybody has been talking about lately by looking at things in terms of systems of advanced intelligent capabilities. Instead of “a genius AI”, we have a suite of intelligences good at various things. The reframing, in some ways, is simply specifying the details others don’t bother to mention. After all, even you and I are made of specialized sub-components. Perhaps all intelligence is collective?

One thing someone else proposed is that we can forget about evaluating whether something is intelligent entirely. Instead, we can look at system states and ask “How did we get here?” or “Where are we going?”. This is closer to the epistemologically pluralistic methods of studying systems found in Resilience Engineering, with which I have deep familiarity. They also mentioned that you can use Ising models to study systems (a minimal sketch below).
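For reference, here is a minimal 2D Ising model with Metropolis updates. This is my own sketch of the kind of model they meant, not code from the symposium: local spin interactions plus one global parameter (temperature) produce or destroy collective order.

```python
import numpy as np

# Minimal 2D Ising model with Metropolis updates: a standard toy for
# studying how local interactions plus a global parameter (temperature)
# produce collective order. My own sketch, not code from the symposium.

rng = np.random.default_rng(3)
N, T = 32, 2.0                       # lattice size, temperature
spins = rng.choice([-1, 1], (N, N))

def sweep(spins, T):
    for _ in range(N * N):
        i, j = rng.integers(N, size=2)
        # energy change of flipping spin (i, j), periodic boundaries
        nbr_sum = (spins[(i+1) % N, j] + spins[(i-1) % N, j]
                   + spins[i, (j+1) % N] + spins[i, (j-1) % N])
        dE = 2 * spins[i, j] * nbr_sum
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):
    sweep(spins, T)

# Magnetization near 1 means collective order emerged from local rules
print(f"magnetization at T={T}: {abs(spins.mean()):.2f}")
```

Below the critical temperature (about 2.27 in these units) the lattice magnetizes; above it, order dissolves. A tidy toy for asking “how did we get here?” about a system state.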

Don’t forget that trust drives collective intelligence. It’s hard to work with someone you don’t trust. We’re in this together.
