Endless Forms Most Intelligent: Bridging Biology and AI
Discovering new paradigms of embodiment with Dr. Michael Levin
“Endless forms most beautiful”
Charles Darwin (On the Origin of Species, 1859)
I.
When we say "AI," we commonly refer to "artificial intelligence," a term broadly embraced by technologists to encapsulate intelligent systems, including specialized areas like Machine Learning and Deep Learning. To the average user, a large language model (LLM) like ChatGPT may also seem intelligent, proficient across a range of topics from sciences to law, assisting seamlessly with digital tasks. But this leads us to ponder: What really defines intelligence?
This question has long fascinated me, especially in the context of traditional teachings like the "tree of life," which traces the evolution from single-celled organisms to complex beings such as apes. However, the advent of artificial intelligence compels us to reconsider these concepts. Is the traditional tree of life sufficient to explain the nature of intelligence in an era dominated by AI?
I think our understanding of intelligence is largely anthropocentric, judged primarily through a human-centric lens. We often place ourselves at the pinnacle of an intelligence hierarchy, considering apes smarter than dogs due to their similarities to us and their evident behaviors. A recent episode of the Cognitive Revolution podcast featuring Dr. Michael Levin challenged these notions. Dr. Levin proposes that intelligence exists on a continuum, from simple organisms to sophisticated AIs and cyborgs, urging us to rethink intelligence beyond our conventional boundaries.
This essay seeks to share insights from Dr. Levin’s work, drawn not only from the podcast episode but also from the various content he has made available online. My hope is that it stirs us to reflect on biological evolution and to meditate on future implications for artificial intelligence, society, and beyond.
II.
Learning from Small and Biological Systems
"We [the research team] approach intelligence as something not confined to neural activities or complex behaviors observable in higher organisms like mammals. Intelligence, in our studies, involves the collective behavior of cells during embryonic development and regeneration. These cells exhibit a form of 'intelligence' in their ability to work towards complex goals, such as rebuilding parts of themselves. This expands the concept of intelligence beyond traditional brain-centric views, to include cellular and collective forms of problem-solving and memory.” - Dr. Michael Levin, Cognitive Revolution Podcast, 2024
Dr. Levin highlights the immense potential of deriving insights from small and biological systems to revolutionize artificial intelligence design and enhance our understanding of cognition and problem-solving. Consider the planarian flatworm, a seemingly simple organism that exhibits collective intelligence through its assembly of cells. These cells collaboratively tackle complex tasks, such as regeneration—like growing a new head—or development, such as increasing in size.
This concept mirrors how our bodies function. Our cells collectively work towards the goal of maintaining our health and survival. Individually, a cell cannot perform complex functions; it requires the synergy of many to accomplish significant tasks. It's crucial to consider the scale of analysis when examining these biological systems. The cellular level reveals patterns and behaviors not evident at the macroscopic level, offering unique insights into collective intelligence that can inspire scalable AI designs.
This understanding could transform how we design AI systems. Instead of relying on a single monolithic model, we could develop multiple smaller AI agents that collaborate, much like cells, to solve more complex problems. This approach has already seen some exploration in AI research through multi-agent systems and "mixture of experts" models.
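To make the parallel concrete, here is a deliberately toy sketch of the "mixture of experts" idea in plain Python: several tiny specialist functions and a gating rule that routes each input to the most relevant one. All function names and routing thresholds here are invented for illustration; real mixture-of-experts models learn the gating function from data rather than hard-coding it.

```python
# Toy "mixture of experts": tiny specialists plus a hand-written gate.
# Illustrative only -- real MoE layers learn the gate, they don't hard-code it.

def expert_negate(x):    # specialist for negative inputs
    return abs(x)

def expert_identity(x):  # specialist for small non-negative inputs
    return x

def expert_scale(x):     # specialist for large inputs
    return x / 10

def gate(x):
    """Route each input to one expert using a simple local rule."""
    if x < 0:
        return expert_negate
    if x < 100:
        return expert_identity
    return expert_scale

outputs = [gate(x)(x) for x in [-5, 42, 1000]]
print(outputs)  # [5, 42, 100.0]
```

No single expert handles every case, yet the ensemble covers the whole input space, much as no single cell performs a complex function alone.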
These observations reinforce the idea that simple systems and organisms offer substantial inspiration for designing intricate systems. This reflects the fractal nature of biological systems, where smaller components inform and shape the larger structure.
Emergent Behavior from Simple Rules
It's quite fascinating to observe birds flocking together, moving in unison as if they've merged into a larger, meta-bird entity, fluidly dancing across the sky. This behavior is called “flocking”, a term I first encountered in the book “Emergent Strategy” by adrienne maree brown. Craig Reynolds studied it computationally (Flocks, Herds, and Schools: A Distributed Behavioral Model, 1987) and distilled the complex behavior into three simple rules: 1) avoid collisions with nearby flockmates, 2) stay close to your neighbors, and 3) match the velocity of nearby birds. From these local principles, the global behavior of flocking emerges.
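The three rules are simple enough to sketch in a few dozen lines. Below is a minimal "boids"-style simulation; the weights, radii, and class names are arbitrary choices for illustration, not values from Reynolds' paper, and boids update sequentially rather than in lockstep to keep the sketch short.

```python
import math
import random

# Minimal sketch of Reynolds' three flocking rules: separation (avoid
# collisions), cohesion (stay near neighbours), alignment (match velocity).

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(flock, radius=20.0):
    for b in flock:
        near = [o for o in flock if o is not b
                and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer toward the neighbours' centre of mass.
        b.vx += 0.01 * (sum(o.x for o in near) / n - b.x)
        b.vy += 0.01 * (sum(o.y for o in near) / n - b.y)
        # Alignment: nudge velocity toward the neighbours' average.
        b.vx += 0.05 * (sum(o.vx for o in near) / n - b.vx)
        b.vy += 0.05 * (sum(o.vy for o in near) / n - b.vy)
        # Separation: push away from any boid that is too close.
        for o in near:
            if math.hypot(o.x - b.x, o.y - b.y) < 5:
                b.vx += 0.05 * (b.x - o.x)
                b.vy += 0.05 * (b.y - o.y)
    for b in flock:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```

Note that no boid knows about "the flock": each one reads only its local neighbourhood, and the coordinated, fluid motion is entirely emergent.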
Dr. Levin suggests that such simple local interactions and rules could similarly drive the design of intelligent systems, potentially leading to sophisticated outcomes from minimalist beginnings. For AI engineers, this implies starting with simple, minimally designed models that can scale up to achieve more complex behaviors, challenging the notion that sophistication requires initial complexity. In essence, the strategy is to "keep it simple."
Additionally, Conway's "Game of Life" illustrates a related concept: from a few straightforward rules, complex patterns emerge. This cellular automaton, devised by mathematician John Conway in 1970, demonstrates how simplicity can breed complexity and serves as a compelling analogy in this discussion.
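The Game of Life fits in a dozen lines: a cell survives with two or three live neighbours, and a dead cell comes alive with exactly three. The sparse-set sketch below tracks only live cells; the famous "glider" pattern, five cells on an empty grid, reappears shifted diagonally every four generations, a moving structure nowhere mentioned in the rules.

```python
# Conway's Game of Life over a sparse set of live (x, y) cells.

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: birth on 3 neighbours, survival on 2 or 3."""
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen4 = glider
for _ in range(4):
    gen4 = step(gen4)
# After four steps the glider has the same shape, translated by (1, 1).
print(gen4 == {(x + 1, y + 1) for x, y in glider})  # True
```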
“One of the fundamental aspects here is that you can find intelligence, or problem-solving capacities, in very minimal, unconventional systems…
So, in this [research] paper, we wanted something extremely simple and transparent. We chose sorting algorithms—things like bubble sort and selection sort—that computer science students have been studying for decades. These are completely deterministic and transparent, so there's nowhere to hide; it's all there. What we were able to show is that if you treat these systems with a bit of humility about what they can do, and you ask questions about their capabilities rather than assuming they only do what the algorithm dictates, you actually find some really important capabilities that are nowhere explicitly in the algorithm itself." - Dr. Michael Levin, Cognitive Revolution Podcast, 2024
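The spirit of that study can be gestured at with a toy example. The sketch below is my own simplified illustration, not the experiments from Levin's paper: discard bubble sort's top-down loop and instead let randomly chosen elements each apply one local rule, "swap with your right neighbour if you are larger." Global sorted order still emerges from purely local, unsupervised actions.

```python
import random

# Decentralized bubble-sort sketch: random "agents" apply one local rule.
# Illustrative only; loosely inspired by, not reproducing, Levin's study.

def local_sort(values, seed=0):
    arr = list(values)
    rng = random.Random(seed)
    while arr != sorted(arr):
        i = rng.randrange(len(arr) - 1)    # a random agent acts
        if arr[i] > arr[i + 1]:            # local rule, no global view
            arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

print(local_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

Every swap removes one inversion, so the array converges to sorted order even though no step ever consults the whole list, a small echo of cells reaching a global form through local decisions.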
The Concept of the Cognitive Light Cone
“Cognitive light cone - the outer boundary, in space and time, of the largest goal a given system can work towards. This is my attempt to pinpoint what all agents have in common, no matter their make-up or origin: animals, aliens, AI, swarms, etc. can all be placed on a chart showing the scale of the goals they are capable of pursuing” (source: Dr. Michael Levin’s website)
The "Cognitive Light Cone" is an insightful framework for assessing the intelligence capabilities of diverse entities. The concept is depicted as a cone representing an entity’s cognitive range over time and space, within which goals are set and pursued. Levin charts light cones of varying size and scope for different entities, including ticks, dogs, humans, and potentially AI or alien intelligences.
For example, a tick's cognitive light cone might be small, emphasizing immediate, survival-driven goals like blood-seeking. A dog’s cone is larger, allowing for more complex objectives and memories. Humans have an even more extensive cone, demonstrating our ability to plan for the future, reflect on the past, and even ponder abstract concepts.
This model provides a novel way to compare cognitive abilities without defaulting to simplistic "intelligent or not" judgments, suggesting that intelligence is a matter of degree, based on the size of an entity's goals and their temporal and spatial reach.
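As a playful, back-of-the-envelope caricature of this idea, one could summarize each agent by the spatial and temporal reach of the largest goal it can pursue. Every number below is invented for illustration; Levin's framework is qualitative, not a formula.

```python
from dataclasses import dataclass

# Caricature of the "cognitive light cone": each agent is tagged with the
# rough spatial and temporal reach of its largest goal. Numbers are invented.

@dataclass
class Agent:
    name: str
    spatial_reach_m: float   # metres
    temporal_reach_s: float  # seconds

    def cone_scale(self):
        """One crude scalar for comparison: space reach times time reach."""
        return self.spatial_reach_m * self.temporal_reach_s

agents = [
    Agent("tick", 0.1, 3600),                   # centimetres, hours
    Agent("dog", 1_000, 3600 * 24 * 90),        # kilometres, months
    Agent("human", 1e7, 3600 * 24 * 365 * 50),  # planetary, decades
]

for a in sorted(agents, key=Agent.cone_scale):
    print(a.name, a.cone_scale())
```

The point of the caricature is the ordering, not the numbers: intelligence becomes a matter of degree, ranked by the reach of an agent's goals rather than by a binary "intelligent or not" label.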
When applied to AI, the framework posits that an artificial system occupies its unique cognitive cone, representing its operational realm or "self." This raises profound questions: If an AI has a cognitive light cone, does it have a form of self-awareness? What moral and ethical responsibilities do we have towards such systems?
“All agents have one fundamental thing in common: they pursue goals.
Now, consider whether you're only interested in the immediate environment, such as local sugar concentrations, with memories and predictive capacities extending only a few minutes. In this case, you might resemble a bacterium. Alternatively, if your memories and predictions extend several months but not beyond immediate surroundings, you might be akin to a dog. However, a human might have a vastly larger cognitive light cone, actively engaging with concepts like world peace or the future of financial markets decades from now.
We, as humans, are examples of compound intelligence. Our cells have their own tiny cognitive cones, as do our organs, and so do we collectively, along with even larger structures in which we participate." - Dr. Michael Levin, Cognitive Revolution Podcast, 2024
Reconceptualizing Embodiment, Intelligence, and Agency
Critiques of artificial intelligence often center on the contention that AI is not truly embodied—not anchored in any tangible reality or 'grounded' through real-time, continuous inputs from the world, like those that inform human perception and interaction. But such a view might be too narrow, overlooking the vast spectrum of 'embodiment' that exists even within our own biology.
Imagine possessing an organ with the ability to perceive multiple parameters of your internal state, like a sophisticated sensor that monitors your metabolic rates or blood oxygen levels. This enhancement would not only transform your understanding of your physical self but also your cognitive landscape. You would live in a multi-dimensional space, immediately recognizing the 'intelligence' of your liver or kidneys as they navigate and manage complex processes within this space—meeting goals, executing functions, adapting to change.
Dr. Michael Levin's work suggests that cognition and embodiment are not limited to physical form or direct interaction with the external world. Instead, embodiment can occur in any 'space'—whether it's the familiar three-dimensional one we navigate daily, the linguistic spaces in which we communicate, or the internal spaces our organs operate within. The perception and recognition of AI's intelligence and agency can dramatically vary depending on the observer's context and the criteria they use to evaluate intelligence. This observer-dependence challenges us to consider how different perspectives might influence our understanding of what it means for AI to be 'intelligent' or 'embodied.' This reconceptualization of embodiment expands the sensory experience and the very definition of agency.
Such an understanding has profound implications for AI. It suggests that artificial intelligences need not be bound by human sensory limitations and can perceive, interact with, and respond to a multitude of 'spaces'—some of which may be internal or represent data dimensions beyond human perception. By acknowledging that there are many spaces of embodiment—and they are all real—we open the door to recognizing new forms of intelligence and problem-solving capabilities that operate on principles distinct from our own cognition.
III.
Throughout this essay, we have journeyed through the nuanced landscape of intelligence and embodiment, guided by the innovative insights of Dr. Michael Levin. We've challenged the traditional view that intelligence is a fixed attribute, embracing instead the idea that it exists on a vast continuum, from the simplest organisms to sophisticated artificial intelligences and beyond. Similarly, our exploration has expanded the concept of embodiment, revealing it as more than mere physical interaction—it is an intricate weaving of internal and external engagements that define our cognitive existence.
Key insights from Dr. Levin's research have shown us the immense potential of learning from small and biological systems. These systems, from the humble flatworm to the complex human, offer profound lessons for revolutionizing artificial intelligence design. The Cognitive Light Cone, as a framework, allows us to assess intelligence across a variety of entities, highlighting the scalability of cognitive capabilities and the importance of temporal and spatial contexts in their evaluation.
The implications of these insights for future AI development are vast. They suggest a shift towards designing AI systems that are capable of engaging with multiple 'spaces' of intelligence and embodiment, far beyond the human sensory limitations. This approach could lead to AI systems that are not only more versatile and adaptive but also more ethically attuned to the complexities of the environments and contexts they operate within.
Reflecting on these themes, we are invited to consider the broader implications of our expanding understanding of intelligence and embodiment. The integration of technology and biology calls us to a new era of innovation—one that holds promise for transformative advances in how we live, work, and solve problems. It compels us to continue the dialogue, to delve deeper into interdisciplinary research, and to question our philosophical commitments to traditional definitions of intelligence and life.
In concluding, let us revisit the awe-inspiring notion of "endless forms most beautiful," as Darwin once marveled. Dr. Levin’s work reminds us that these forms—whether biological or artificial—continue to evolve, challenging us to rethink and reimagine the possibilities. It is clear that the journey of understanding intelligence and embodiment is far from complete—it is an ever-expanding horizon, rich with opportunities for discovery and enlightenment. 🌀
References and Resources:
Dr. Michael Levin on Embodied Minds and Cognitive Agents
Evolution, Basal Cognition and Regenerative Medicine
The beauty of collective intelligence, explained by a developmental biologist
What is memory, agency, decision-making, cognition, goal-directedness, etc?