In anthropology, liminality describes the disorientation that occurs in the middle stage of a ritual, when participants no longer hold their pre-ritual status but have not yet transitioned to their new status. They stand at a threshold, neither here nor there, betwixt and between. This concept, first articulated by Arnold van Gennep and later expanded by Victor Turner, offers us a powerful lens through which to understand our current moment with artificial intelligence.
We are collectively experiencing a liminal turn.
The systems we've built for centuries assumed human intelligence as their foundation. Our organizations, institutions, economies, and social structures all evolved around distinctly human capacities and limitations. But now we find ourselves in an uncanny valley of collaboration, no longer operating in purely human systems, yet not fully integrated with the artificial intelligences we've created, and unsure how, or whether, we want to be. We stand at a threshold, and the ground beneath us feels uncertain.
Drift Club takes its name from the geophysicist Alfred Wegener, who wrote: "Continental drift, faults and compressions, earthquakes, volcanicity, transgression cycles and polar wandering are undoubtedly connected causally on a grand scale." Just as Wegener recognized the powerful forces that reshape our world, we recognize that our technological and social landscapes are undergoing tectonic shifts with the rise of artificial intelligence. The transformations occurring across our systems of collaboration are interconnected, with changes in one domain inevitably affecting others.
The Vertigo of Liminality
There's a particular vertigo that accompanies liminal states. When the boundaries blur between what is human and what is machine, when agency becomes distributed across networks of intelligence both organic and synthetic, our traditional frameworks for understanding collaboration begin to falter. Who is the designer and who is the designed? Where does decision-making authority reside? How do we attribute creativity, responsibility, and care?
Consider the disorientation that occurs when engaging with large language models. The conversation feels human, yet we know it isn't, not entirely. The text that appears before us exists in a liminal space between human and non-human authorship. It draws from the collective intelligence of humanity (through its training data) yet produces combinations and insights that weren't explicitly programmed. This creates a strange loop of agency where we find ourselves simultaneously in the roles of user, collaborator, and co-creator.
This vertigo isn't merely philosophical; it has profound implications for how we design systems, build organizations, and govern technologies. When intelligence and agency are distributed across human-AI networks, our traditional approaches to collaboration require fundamental reconsideration.
The Insufficiency of Current Paradigms
Our existing models for human-computer interaction and organizational design emerged from a world where the boundaries between human and machine were clear. We designed interfaces that treated computers as tools, organizations that positioned technology as infrastructure, and governance frameworks that assumed human decision-makers at every critical juncture.
These paradigms are increasingly insufficient in a world where:
Intelligence is distributed across networks of humans and machines, with emergent properties that neither could achieve alone.
Agency becomes ambiguous, with decisions emerging from complex interactions between human intentions, algorithmic processes, and systemic dynamics.
Creativity occurs at the intersection of human imagination and machine capability, challenging our notions of authorship and innovation.
Learning happens bidirectionally, with humans and AI systems co-evolving through their interactions.
The liminal nature of our current technological moment demands new frameworks, not just for designing better AI systems, but for designing better collaborative systems that incorporate both human and artificial intelligence.
Designing for Liminal Collaboration
What does it mean to design for collaboration in this threshold space? At Liminal Systems, we believe it requires a fundamental shift in how we think about design itself. Rather than designing static systems with fixed boundaries, we must design for emergence, adaptation, and evolution.
This means embracing several key principles:
Permeable Boundaries: Recognizing that the distinctions between human and machine intelligence are increasingly fluid, and designing interfaces and organizations that allow for this permeability rather than enforcing rigid separations.
Distributed Agency: Moving beyond models of centralized control to systems where agency is thoughtfully distributed across networks of human and artificial intelligence, with appropriate safeguards and feedback mechanisms.
Adaptive Learning: Creating environments where both humans and AI systems can learn from each other continuously, evolving their capabilities and relationships over time.
Ethical Scaffolding: Building ethical frameworks that can evolve alongside technological capabilities, addressing novel questions of responsibility, accountability, and care in human-AI systems.
Metacognitive Design: Developing systems that help both humans and AI reflect on their own thinking processes, biases, and limitations, enabling more thoughtful collaboration.
These principles aren't merely theoretical; they have practical implications for everything from user interface design to organizational structure to regulatory approaches. In future posts, we'll explore how these principles can be applied across different scales of collaboration, from individual human-AI interactions to team dynamics to institutional architectures.
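To make one of these principles slightly more concrete, here is a minimal, hypothetical sketch of what "Distributed Agency" with a safeguard and a feedback mechanism could look like in practice. Every name in it (Proposal, CollaborationLoop, the confidence threshold) is illustrative rather than a reference to any existing system; it simply shows one way a decision might flow between an AI component and a human reviewer while leaving a trail both can learn from.

```python
# Hypothetical sketch: distributed agency with a human safeguard and an audit trail.
# Names and structure are illustrative assumptions, not an existing library or product.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Proposal:
    """An action suggested by an AI component, awaiting shared decision-making."""
    action: str
    rationale: str
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

@dataclass
class CollaborationLoop:
    """Routes proposals between an AI suggester and a human reviewer,
    recording every decision so both sides can reflect on it later."""
    suggest: Callable[[str], Proposal]   # AI side: task -> proposal
    review: Callable[[Proposal], bool]   # human side: approve or reject
    audit_log: List[dict] = field(default_factory=list)

    def decide(self, task: str, auto_threshold: float = 0.95) -> bool:
        proposal = self.suggest(task)
        # Safeguard: anything below the threshold is routed to a human reviewer.
        approved = proposal.confidence >= auto_threshold or self.review(proposal)
        self.audit_log.append({
            "task": task,
            "action": proposal.action,
            "confidence": proposal.confidence,
            "approved": approved,
        })
        return approved

# Example wiring with stubbed AI and human sides, purely for illustration:
loop = CollaborationLoop(
    suggest=lambda task: Proposal(f"draft: {task}", "pattern match", 0.6),
    review=lambda p: p.confidence > 0.5,  # stand-in for an actual human judgment
)
print(loop.decide("respond to customer email"), loop.audit_log)
```

The point isn't this particular mechanism but its shape: agency shared across the loop, explicit places where a human can intervene, and a record from which both parties can keep learning.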
The Opportunity in Liminality
While liminal states are disorienting, they are also pregnant with possibility. Turner noted that liminal periods are often characterized by "communitas," a sense of equality, solidarity, and togetherness that transcends normal social boundaries. In these threshold spaces, new forms of connection and collaboration become possible.
Perhaps the liminal turn in our relationship with artificial intelligence offers a similar opportunity. As we navigate this threshold together, we have the chance to reimagine not just our technologies, but our social structures, our economic systems, and our relationship with intelligence itself.
This reimagining won't happen automatically. It requires intentional design, thoughtful experimentation, and inclusive dialogue. It demands that we bring together diverse perspectives, from systems design and organizational psychology to philosophy and anthropology to the lived experiences of people engaging with AI in different contexts.
That's what Liminal Systems aims to provide: a space for this cross-disciplinary exploration, grounded in systems thinking but open to the full spectrum of human experience and imagination.
An Invitation to the Threshold
As we launch this publication, we invite you to join us at the threshold. To embrace the vertigo of liminality not as something to be feared, but as an opening to new possibilities. To explore with us how we might design for adaptive, intelligent collaboration in a world where the boundaries between human and machine intelligence are increasingly permeable.
In the coming weeks and months, we'll delve deeper into specific aspects of this challenge:
How might we design systems that distribute agency across human-AI networks in ways that enhance rather than diminish human capability and autonomy?
What new organizational forms might emerge that leverage the unique capabilities of both human and artificial intelligence?
How can we create ethical frameworks that evolve alongside technological capabilities, addressing novel questions of responsibility and care?
What speculative futures might we imagine, and how might we prototype and test these possibilities in safe but meaningful ways?
The liminal turn in our collective story is just beginning. The systems, organizations, and societies we design in this threshold space will shape our relationship with intelligence for generations to come. Just as tectonic forces reshape continents, our collective efforts will reshape the landscape of human-AI collaboration. Let's navigate these transformative shifts together, with curiosity, creativity, and care.
Welcome to Liminal Systems.
This is the inaugural post of Liminal Systems, a publication exploring how we design for adaptive, intelligent collaboration at every level of scale. If these ideas resonate with you, we invite you to subscribe and join the conversation.