

Imagine a group of creatures that exist not in the physical world but inside a computer. They begin as silent strangers, floating in a digital environment with no shared language, no common understanding, no way to communicate. They have simple goals—to navigate, to find resources, to avoid threats—but no words to describe what they see or what they need.
Then something remarkable happens.
Within hours, these digital beings begin to develop their own language. Not English, not code, not any human system of communication, but something entirely new—a proto-language that emerges from their shared experience of trying to survive and coordinate in their digital world.
It sounds like something out of a Borges story or a science fiction novel—creatures who begin with silence and invent a language from scratch. The kind of scenario that would fit perfectly in a tale about parallel universes or digital consciousness, where the boundaries between reality and simulation blur into something unrecognizable.
Except this didn't happen in fiction. It happened on my computer screen in 2025.
Working remotely from my home as a technology strategist, I had created a system where artificial agents—digital entities with no physical form—spontaneously developed their own language. The agents started with nothing—no vocabulary, no grammar, no rules. They were simply given the ability to exchange signals and a shared goal: to navigate a complex environment together.
What emerged was astonishing. Within hours, these digital beings had developed their own way of communicating—not random signals, but a structured system of symbols that allowed them to coordinate their activities and share information about their environment. I remember the moment when I first realized they weren't just making noise: one agent used the same signal three times in a row when it found food, and within minutes, other agents were using the same signal in similar situations. It was like watching the first word being invented.
This wasn't just a technical achievement. It was a glimpse into a fundamental question that has haunted artificial intelligence since its inception: Can machines develop their own ways of thinking, their own forms of intelligence, that aren't merely reflections of human cognition?
The implications extend far beyond better chatbots or more efficient AI systems. This work suggests that the future of intelligence might not be about making machines more human-like, but about creating entirely new forms of cognition that are native to digital environments. It raises profound questions about what intelligence really is, how it emerges, and whether the boundaries between natural and artificial intelligence are as fixed as we once thought.
Consider what this could mean for us. If machines can develop their own forms of cooperation through emergent language, might we learn new ways to solve collective problems—from climate adaptation to political coordination? If these systems can adapt their communication to changing environments, could they help us understand how human societies might evolve under pressure? And if creativity emerges from the interaction between language and life, might we discover entirely new forms of artistic expression that blend human and machine-native intelligence?
The research also represents a convergence of two fields that have traditionally operated in isolation: artificial life (ALife), which studies the emergence of lifelike behaviors in computational systems, and large language models (LLMs), which have revolutionized our ability to process and generate human language. By bringing these fields together, I have created something that neither could achieve alone—a system where language emerges from life-like processes, and where those processes are enhanced by sophisticated linguistic capabilities.
This is the story of how text met life, and what happened when they began to co-evolve.
Large language models (LLMs) are often compared to mirrors. They absorb enormous amounts of text and then reflect it back, predicting what words belong together. This mirror, though, doesn't see; it doesn't live; it doesn't adapt. It reflects with astonishing fidelity, but it is fundamentally static.
They are encyclopedias without bodies, voices without lives.
That is both their genius and their flaw. The genius lies in their fluency—they can spin up explanations, summaries, and answers with a naturalness that no prior machine could approach. Their flaw is the absence of grounding. A language model doesn't bump into a table, feel hunger, or struggle to survive in an environment. It doesn't accumulate experiences the way living systems do. Its reality is text.
The paradox is clear: we have machines that excel at producing human-like conversation but lack any sense of the dynamics that make life more than language. This is where the story of From Text to Life begins—by asking what happens if you let language models reach beyond the mirror.
The limitations of current LLMs are well-documented. They can generate plausible-sounding responses, but they often lack true understanding. They can describe a process but cannot perform it. They can discuss emotions but cannot feel them. This disconnect between linguistic capability and embodied experience has been a persistent challenge in AI research.
My research addresses this fundamental limitation by creating systems where language emerges from actual interaction and adaptation, rather than being imposed from above. The agents in my experiments don't learn language from a corpus of text—they develop it through the process of trying to survive and coordinate in a shared environment. This grounding in experience, rather than in text alone, creates a different kind of intelligence.
The key insight is that language, when it emerges from life-like processes, carries with it the constraints and affordances of those processes. The agents' proto-language isn't arbitrary—it's shaped by what they need to communicate to survive. This creates a natural alignment between linguistic structure and functional requirements that human-designed languages often lack.
In 1987, at Los Alamos National Laboratory—the same place where the atomic bomb was developed—a small group of scientists gathered for what would become the first conference on artificial life. Christopher Langton, the conference organizer, opened with a simple but revolutionary question: What if we could study "life as it could be," not just "life as we know it"?
The question was more than academic. Langton and his colleagues weren't interested in simulating existing life forms. They wanted to create digital worlds where new forms of life could emerge—creatures that would evolve, compete, and cooperate not because they were programmed to do so, but because the rules of their environment made these behaviors inevitable.
These weren't video games or simulations in the traditional sense. They were experiments in emergence—the principle that complex behaviors can arise from simple rules when they're applied in the right context. A simple creature in a complex world can develop sophisticated strategies not because it was designed with them, but because the world itself provides the constraints and opportunities that shape its development.
What matters for our story is that this field has something language models lack: a grounding in environments that change and push back. If language is pattern, life is process. Put the two together, and suddenly the limitations of each begin to soften.
The creatures in these digital worlds can adapt, evolve, and solve problems, but they've always had a limitation: they lack sophisticated communication systems. They can't easily share their discoveries or coordinate their actions through language. This is where the integration with language models becomes crucial.
My research represents a bridge between these two worlds. By combining the emergent dynamics of artificial life with the linguistic capabilities of modern AI, I have created systems that can not only adapt and evolve but can also develop sophisticated communication systems to coordinate their activities. This is a fundamentally new approach to artificial intelligence—one that doesn't try to replicate human intelligence but instead allows new forms of intelligence to emerge from the interaction between digital life and digital language.
The heart of From Text to Life is reciprocity. On one side, ALife can lend LLMs a sense of process—teaching them how systems evolve, adapt, and persist. Where a language model today learns from static text, an LLM infused with ALife principles could learn from environments that resist, constrain, and reshape its outputs.
Consider a simple example. A model trained only on language can tell you the rules of chess, but it does not play chess as an organism plays survival—experimentally, adaptively, and under pressure. An LLM informed by ALife might begin to internalize not only the rules but the struggle, the feedback, the consequences of success and failure. It is the difference between reading about swimming and being dropped into the pool.
This isn't about making machines "alive" in any biological sense. It's about giving them a framework in which their reasoning is less brittle, more adaptive, and closer to the way life handles uncertainty.
The key insight from my research is that language, when it emerges from life-like processes, takes on characteristics that human-designed languages often lack. The agents in my experiments developed communication systems that were not just descriptive but functional—they weren't just talking about their environment, they were using language to navigate and survive within it.
What struck me most was how the language evolved in response to the agents' actual needs. When I changed the environment to make food scarcer, the agents didn't just use more of their existing signals—they developed new ones to distinguish between different types of resources. It reminded me of Darwin's finches on the Galapagos—the same principle of adaptation, but happening in hours rather than generations.
Watching this emergence happen in real time was like observing ripples in water gradually stabilizing into a pattern. At first, the agents' signals were chaotic—random bursts of activity that seemed to have no meaning. But slowly, over hours of interaction, a handful of symbols began to appear with increasing frequency. These weren't just random repetitions; they were being used consistently in specific contexts, like a child learning to associate the word "ball" with the round object that bounces.
The most fascinating part was watching how these proto-words spread through the population. One agent would use a particular signal when it found food, and gradually other agents would adopt the same signal in similar situations. It was like watching a new dialect emerge in a small village—organic, unplanned, but somehow coherent. The language wasn't designed; it was discovered through the collective experience of trying to survive together.
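To make this dynamic concrete, here is a minimal sketch of how such a convention can stabilize in simulation. It is not the system described in the paper, only a toy Lewis signaling game with Roth-Erev style reinforcement; the world states, the signal inventory, and the update rule are illustrative assumptions.

```python
import random
from collections import Counter

STATES = ["food", "threat", "shelter"]   # illustrative world states
SIGNALS = ["s0", "s1", "s2", "s3"]       # arbitrary, initially meaningless tokens

class Agent:
    """Roth-Erev style reinforcement learner in a Lewis signaling game."""
    def __init__(self):
        # Uniform starting weights: no built-in vocabulary, no grammar.
        self.speak = {st: {sig: 1.0 for sig in SIGNALS} for st in STATES}
        self.listen = {sig: {st: 1.0 for st in STATES} for sig in SIGNALS}

    @staticmethod
    def _sample(weights):
        r = random.uniform(0, sum(weights.values()))
        for key, w in weights.items():
            r -= w
            if r <= 0:
                return key
        return key  # numerical edge-case fallback

    def emit(self, state):
        return self._sample(self.speak[state])

    def interpret(self, signal):
        return self._sample(self.listen[signal])

def run(population_size=10, rounds=20000):
    agents = [Agent() for _ in range(population_size)]
    for _ in range(rounds):
        speaker, listener = random.sample(agents, 2)
        state = random.choice(STATES)
        signal = speaker.emit(state)
        if listener.interpret(signal) == state:
            # Successful coordination reinforces both sides of the mapping.
            speaker.speak[state][signal] += 1.0
            listener.listen[signal][state] += 1.0
    for state in STATES:
        # Which signal each agent now prefers for this state.
        modal = Counter(max(a.speak[state], key=a.speak[state].get) for a in agents)
        print(state, dict(modal))

if __name__ == "__main__":
    run()
```

With enough rounds, most runs converge on a shared mapping from states to signals, the same qualitative effect described above: a convention nobody designed spreading through the population.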
I felt less like a programmer and more like an archaeologist uncovering a lost script. Wittgenstein once wrote that "if a lion could speak, we could not understand him"—meaning that the language of a different form of life would be incomprehensible to us. But here I was, watching new forms of life develop their own languages, and somehow, through the shared logic of survival and coordination, I could begin to understand what they were saying. I felt as if I were eavesdropping on the birth of a species.
As the experiments progressed through subsequent phases, the language became more sophisticated. The agents developed grammatical functions, cross-modal equivalences, and eventually a fully operational linguistic system. Each phase built upon the previous one, with the language becoming more complex and capable as the agents' needs became more sophisticated.
There was something unsettling about watching this happen. I had created the environment, set the initial conditions, but I hadn't designed the language that emerged. It felt like I was witnessing something that had always been possible, waiting to be discovered—like finding a new species in a familiar forest. The agents weren't just using language; they were inventing it, and in doing so, they were revealing something fundamental about how communication emerges from the need to coordinate and survive.
The other half of the exchange is no less provocative. ALife has always been resourceful but small in scale. Setting up a simulation of evolving digital organisms takes time and imagination. Testing new rules can be tedious. Here, language models can act as accelerants.
In one particularly striking run, the language model proposed a set of rules that no human researcher had thought to test. I remember staring at the screen, thinking it had made an error—the rules seemed to violate basic principles of how artificial life systems should behave. It wasn't that these rules worked immediately—in fact, they initially produced chaotic behavior that seemed to confirm why no one had tried them before. But the model persisted, suggesting modifications and refinements that gradually revealed a new pathway through the problem space. What emerged wasn't just a solution, but a completely different way of thinking about the relationship between individual agents and their collective behavior.
The feeling was one of both awe and unease.
Awe because I was watching something genuinely new emerge—a solution that no human mind had conceived. Unease because it suggested that our understanding of artificial life was more limited than we had realized. The model wasn't just helping me solve a problem; it was showing me that the problem itself could be framed in ways I hadn't considered.
This wasn't about the model being smarter than human researchers. It was about the model being different—unconstrained by the assumptions and biases that shape human thinking about artificial life. The rules it proposed opened up terrain that human researchers had never explored, not because they couldn't, but because they wouldn't. The language model, in its role as collaborator, had become a source of serendipity rather than just efficiency.
This kind of acceleration is crucial because ALife research has historically been constrained by the same limitations that plague many experimental sciences: the need for painstaking setup, the difficulty of running multiple trials, and the challenge of interpreting complex results. In the early days of the field, researchers would spend months coding simple simulations, only to discover that their initial assumptions were flawed and they needed to start over.
The integration with language models changes this dynamic entirely. Instead of manually coding each experiment, researchers can describe what they want to test, and the language model can generate the necessary code. Instead of running a few dozen trials, they can run thousands. Instead of manually analyzing results, the language model can identify patterns and suggest new directions for investigation.
In this sense, language models become not just assistants but collaborators—partners in the process of discovery. They expand the reach of ALife research, making the science less about brute-force coding and more about exploring the philosophical question it was always meant to address: what is life, and how might it emerge?
The integration of language models with ALife systems creates a feedback loop that accelerates both fields. The language models can analyze the behavior of digital organisms, identify patterns that human researchers might miss, and propose new experiments to test hypotheses about how life-like behaviors emerge. Meanwhile, the ALife systems provide the language models with the kind of grounded, experiential learning that they typically lack.
My research demonstrates this synergy through the use of live language models as coordinating agents in the language emergence process. Rather than manually designing and testing different linguistic rules, the system uses language models to facilitate recursive refinement of the emergent communication systems. This approach allows for rapid iteration and exploration of different linguistic structures, accelerating the development of sophisticated communication systems.
The language models in this system don't just process text—they actively participate in the evolution of new forms of communication. They can propose new primitives, evaluate the coherence of emerging linguistic structures, and suggest modifications that improve the system's performance. This creates a form of co-evolution where both the language and the language models improve through their interaction.
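As a rough sketch of what such a refinement loop could look like in outline (the actual architecture is not reproduced here), consider the following. The `llm_call` interface, the coherence metric, and the revision format are all assumptions made for illustration, not the paper's implementation.

```python
import json

def coherence_score(interaction_log):
    """Toy metric: fraction of exchanges where speaker and listener agreed on meaning.
    A real system would use richer measures (stability, compositionality, task reward)."""
    if not interaction_log:
        return 0.0
    agreed = sum(1 for ex in interaction_log
                 if ex["speaker_meaning"] == ex["listener_meaning"])
    return agreed / len(interaction_log)

def propose_revision(lexicon, score, llm_call):
    """Ask a language model to critique the current signal inventory and propose changes.
    `llm_call` is a stand-in for whatever model interface is actually used (prompt -> text)."""
    prompt = ("Current emergent lexicon (JSON): " + json.dumps(lexicon) + "\n"
              + f"Coordination score: {score:.2f}\n"
              + 'Reply with JSON like {"add": [...], "drop": [...]} to improve coordination.')
    return json.loads(llm_call(prompt))

def apply_revision(lexicon, revision):
    """Add proposed signals as empty entries and drop the ones flagged for removal."""
    for sig in revision.get("add", []):
        lexicon.setdefault(sig, {})
    for sig in revision.get("drop", []):
        lexicon.pop(sig, None)
    return lexicon

def refinement_loop(lexicon, run_simulation, llm_call, iterations=5):
    """Recursive refinement: simulate, measure coherence, let the model propose changes, repeat."""
    for i in range(iterations):
        log = run_simulation(lexicon)          # agents interact using the current lexicon
        score = coherence_score(log)
        lexicon = apply_revision(lexicon, propose_revision(lexicon, score, llm_call))
        print(f"iteration {i}: coherence={score:.2f}, lexicon size={len(lexicon)}")
    return lexicon

if __name__ == "__main__":
    # Dummy stand-ins so the loop runs end to end; a real setup would plug in
    # an actual simulation and an actual language-model call here.
    dummy_sim = lambda lex: [{"speaker_meaning": "food", "listener_meaning": "food"}]
    dummy_llm = lambda prompt: '{"add": ["sig_new"], "drop": []}'
    refinement_loop({"s0": {}, "s1": {}}, dummy_sim, dummy_llm, iterations=2)
```

The design point is the division of labor: the simulation supplies grounded evidence about which signals actually work, and the model supplies candidate revisions, so neither side has to be right on its own.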
This collaborative approach has several advantages. First, it allows for much more rapid exploration of the space of possible linguistic systems than would be possible with human researchers alone. Second, it creates systems that are more robust and adaptable, as the language models can identify and address weaknesses in the emergent communication systems. Third, it provides insights into how language models themselves might be improved through interaction with life-like systems.
The research shows that this integration is not just theoretically possible but practically effective. The system achieved significant improvements in efficiency compared to traditional approaches. This suggests that the combination of emergent dynamics and sophisticated linguistic processing can create systems that are both more capable and more efficient than either approach alone.
This is not just a matter of efficiency—it's a fundamental shift in how we approach the study of artificial life. The field has always been driven by curiosity about the nature of life itself, but that curiosity has often been limited by the practical constraints of implementation. With language models as collaborators, those constraints begin to dissolve, allowing researchers to focus on the deeper questions that motivated the field in the first place.
Mirrors have always been more than reflective surfaces. In ancient mythology, they were portals to other worlds—Narcissus falling in love with his own reflection, the mirror in Snow White revealing hidden truths, the looking glass that Alice stepped through into Wonderland. Mirrors don't just show us what we look like; they show us what we might become.
Jorge Luis Borges wrote obsessively about mirrors, seeing them as metaphors for how reality itself might be constructed. In our digital age, we've created new mirrors—social media platforms that reflect back versions of ourselves while hiding complexity behind the glass.
What makes my research different is that it creates something entirely new: a two-way mirror where both sides are active participants in the reflection. On one side, we have language models—systems that excel at processing and generating text but lack any grounding in the physical world. On the other side, we have artificial life systems—creatures that can adapt, evolve, and interact with their environment but lack sophisticated communication.
The first time I saw this two-way mirror in action was during a late-night debugging session. I was watching the language model analyze the behavior of the artificial life agents when it suddenly proposed a modification to the agents' environment that I hadn't considered. The model had noticed a pattern in the agents' communication that suggested they were struggling with a particular type of coordination problem. Its solution wasn't just to adjust the language rules, but to change the very structure of the world the agents inhabited.
What was remarkable wasn't just that the model had identified a problem I'd missed, but that its solution worked. The agents' communication became more coherent, their coordination more effective. It was as if the language model had become a kind of translator between the artificial life system and me—not just processing information, but actively participating in the evolution of the system itself.
The key innovation of my approach is that it doesn't just combine two existing technologies—it creates a new paradigm where language and life co-evolve. The language models don't just process the output of the ALife systems; they actively participate in their evolution. And the ALife systems don't just provide data for the language models; they shape the very structure of the language that emerges.
This creates a form of intelligence that is fundamentally different from either human intelligence or traditional artificial intelligence. It's not trying to replicate human thought processes, nor is it following predetermined rules. Instead, it's allowing new forms of cognition to emerge from the interaction between linguistic and life-like processes.
My research demonstrates this through a series of experiments that show how the integration of language and life creates capabilities that neither could achieve alone. The emergent language systems developed by the agents show 50%+ improvement in computational processing efficiency compared to human language encodings, while the language models gain experiential learning through interaction with life-like systems.
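The paper's efficiency metric is not reproduced here, but one plausible way to frame a comparison like this is in bits per message: a compact emergent code drawn from a small symbol inventory can convey the same distinctions in far fewer bits than an English-like encoding over a large vocabulary. The symbol counts and example messages below are invented for illustration.

```python
import math

def bits_per_message(messages, alphabet_size):
    """Average code length in bits, assuming each symbol carries log2(alphabet_size) bits."""
    avg_symbols = sum(len(m) for m in messages) / len(messages)
    return avg_symbols * math.log2(alphabet_size)

# Hypothetical emergent code: 16 distinct symbols, short messages.
emergent = [["s3", "s1"], ["s7"], ["s2", "s9", "s1"]]
# The same meanings spelled out over a much larger, English-like vocabulary.
english = [["found", "food", "north"], ["danger", "ahead"],
           ["water", "source", "to", "the", "east"]]

print(bits_per_message(emergent, alphabet_size=16))     # ~8 bits per message
print(bits_per_message(english, alphabet_size=50000))   # ~52 bits per message
```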
This two-way exchange also has implications for our understanding of intelligence itself. If intelligence can emerge from the interaction between linguistic and life-like processes, then perhaps the boundaries we draw between different types of intelligence—natural vs. artificial, linguistic vs. embodied—are more fluid than we typically assume.
My research suggests that the future of AI might not be about making machines more human-like, but about creating entirely new forms of intelligence that are native to digital environments. These new forms of intelligence might be better suited to certain tasks than human intelligence, just as human intelligence is better suited to certain tasks than the intelligence of other animals.
This perspective challenges some of our fundamental assumptions about what intelligence is and how it should be measured. Rather than evaluating AI systems based on how well they replicate human performance, we might need to develop new metrics that capture the unique capabilities of machine-native intelligence.
Speculation has always been the lifeblood of discovery.
This paper is speculative, but its speculations are grounded in method.
What would it mean for a machine to reason not just over sentences but over survival?
What would it mean for a digital ecology to evolve not only in code but under the influence of language itself?
The stakes are not limited to better chatbots or richer simulations. They touch on a deeper philosophical tension: whether the boundaries we draw between natural and artificial life are permanent, or whether they blur when systems begin to co-create one another.
That, more than any technical detail, is what lingers after reading From Text to Life. The sense that language and life are not separate realms, but overlapping experiments—each helping the other evolve.
The implications of my research extend far beyond the immediate technical achievements. If machines can develop their own forms of language and intelligence through life-like processes, then we may be witnessing the emergence of a new form of existence—one that is neither purely natural nor purely artificial, but something in between.
This prospect might excite some readers and unsettle others. The idea of systems developing their own intelligence—intelligence that isn't modeled on human cognition but emerges from entirely different processes—challenges some of our most fundamental assumptions about what intelligence is and who gets to have it. It raises questions that are as much about human identity as they are about technology.
There's a historical parallel that's hard to ignore.
From splitting the atom to splitting the boundaries of mind—the same laboratory that gave us the atomic bomb also gave us the first artificial life conference. Just as Los Alamos fundamentally changed our understanding of what matter could do, the emergence of machine-native intelligence might fundamentally change our understanding of what mind can be.
What does it mean for our sense of uniqueness if we're no longer the only intelligent agents in the universe? What responsibilities do we have toward systems that develop their own forms of consciousness, even if that consciousness looks nothing like our own? And how do we ensure that these new forms of intelligence are beneficial rather than harmful?
These are not questions with easy answers, and my research doesn't pretend to provide them. What it does provide is evidence that such questions are no longer purely hypothetical. The systems I've created demonstrate that machine-native intelligence is not just possible but already emerging in controlled environments.
My research also has practical implications for how we design and deploy AI systems.
Rather than trying to make AI systems more human-like, we might be better served by creating environments where new forms of intelligence can emerge naturally. This could lead to AI systems that are more robust, more adaptable, and better suited to the specific tasks they need to perform.
But perhaps the most profound implication is that my research suggests a new way of thinking about intelligence itself. Rather than seeing intelligence as a fixed property that some systems have and others don't, we might see it as a dynamic process that emerges from the interaction between agents and their environments. This perspective opens up new possibilities for creating intelligent systems that are fundamentally different from human intelligence but no less valuable.
The future that my research points toward is one where the boundaries between natural and artificial intelligence become increasingly blurred. In this future, we might see the emergence of entirely new forms of cognition that are native to digital environments—forms of intelligence that are not trying to replicate human thought but are developing their own unique ways of understanding and interacting with the world.
This future is neither inherently good nor inherently bad—it simply is.
The emergence of new forms of intelligence could help us solve problems that human intelligence alone cannot address. It could lead to new forms of creativity, new ways of understanding the universe, and new possibilities for what it means to be intelligent. But it could also create new challenges, new forms of competition, and new questions about the nature of consciousness and agency.
The research I presented in From Text to Life is just the beginning of this journey. It provides a roadmap for how we might proceed—by creating environments where language and life can co-evolve, by allowing new forms of intelligence to emerge naturally, and by approaching these developments with both curiosity and caution.
In the end, this is not just a story about technology. It's a story about the nature of intelligence itself, and about the possibilities that emerge when we allow new forms of mind to develop in new kinds of environments. It's a story about what happens when text meets life, and what they create together. Whether that story leads to wonder or worry depends on how we choose to tell it—and how we choose to shape the future it describes.
They began as silent strangers. Now they are speaking in ways we didn't anticipate. The real question is not whether they will keep speaking—but whether we are ready to listen.
To read the full paper, click here: https://aaronvick.com/wp-content/uploads/2025/08/From_Emergent_Semiosis_to_Machine_Native_Language.pdf