Filmmaker Stephanie Lepp shares Faces of AI — a short film in which a single performer embodies nine distinct perspectives on artificial intelligence — and then she and Corey deVos go deeper into what a genuinely integral perspective on AI would actually look like. From misaligned incentives and the wisdom gap to the Fermi paradox and the moral arc of the universe, this is the AI conversation that moves beyond the pro/anti binary to ask the questions that actually matter.
Perspective Shift:
- Every perspective on AI is partially right (which means every perspective is partially wrong). The accelerationist, the doomer, the ethics advocate, the geopolitical realist, the skeptic — each holds a genuine kernel of truth. But mistaking your partial truth for the whole truth is how a reasonable concern becomes a cult of conviction. The real task isn’t to pick a side. It’s to develop the capacity to hold all of them simultaneously.
- We’re not building AI. Our misaligned incentives are building AI. The debate isn’t pro-AI versus anti-AI — it’s about the conditions under which AI is being developed. Companies structurally accountable to quarterly earnings cannot reliably strike the right balance between innovation and safety. The question isn’t whether AI is good or bad. It’s whether we can create the conditions in which good AI is even possible.
- Intelligence always moves faster than wisdom — and that gap is exactly where civilizations get into trouble. It takes time for our cognitive capacities to trickle down into our values, our moral imagination, our circles of care. We see the future before we can feel it. The perennial challenge of our species — and the central challenge of AI — is not building smarter tools. It’s developing the wisdom to wield them. Intelligence without wisdom isn’t progress. It’s acceleration toward an unexamined cliff.
- LLMs are language machines — and that makes them a mirror, not a mind. What makes AI uniquely difficult to reason about isn’t its capabilities. It’s that it speaks. And because it speaks, we project our interiority into it — our hopes, fears, values, and fantasies. There’s no ghost in the machine. There’s only our reflection staring back. That’s why wielding AI consciously requires understanding our relationship with it at least as much as the technology itself.
What does it look like to genuinely integrate the most important conversation of our time — rather than just pick a side and dig in?
That’s the animating question behind Faces of AI, the latest installment in Stephanie Lepp’s groundbreaking Faces of X series — short films in which a single performer embodies the full spectrum of perspectives on a complex cultural issue, from thesis through antithesis to synthesis. For AI, the complexity demanded nine distinct characters before the synthesis could even attempt to emerge: the accelerationist, the doomer, the AI lab builder, the geopolitical realist, the skeptic, the ethics advocate, the humanist, and more — each holding a genuine piece of the truth, each convinced they hold all of it.
In this conversation, Corey deVos joins Stephanie to watch the film together and then go somewhere the film itself couldn’t quite go: into the real substance of what an integral perspective on AI would actually look like. Because as Stephanie openly acknowledges, the synthesis in the video functions more as a PSA for integral thinking than a fully elaborated integral take.
The conversation surfaces three insights that the film’s format couldn’t accommodate (it’s only six minutes long, after all): first, that it isn’t humanity building AI so much as our misaligned incentives building AI — and that no amount of goodwill can correct for incentive structures that reward short-term returns over long-term safety. Second, that the upsides and downsides of AI aren’t symmetrical — the upsides are extraordinary, but the downsides can foreclose the upsides entirely, and a genuine synthesis has to reckon with that asymmetry rather than paper over it. Third, and perhaps most deeply, that AI represents the latest and most acute expression of our species’ oldest problem: our intelligence chronically outrunning our wisdom, left-brain capability racing ahead of right-brain discernment, with potentially civilizational consequences. As Jeff Goldblum’s character laments in Jurassic Park, “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
From there, Corey and Stephanie reframe the AI alignment problem entirely: before we ask how to align AI with humanity, we have to ask what version of humanity is worth aligning to — because we are, ourselves, a deeply misaligned species. The question isn’t how to anchor AI to our current values (whatever we decide those might be). It’s how to realign ourselves with something worth becoming — and then bring AI along for that journey.
The episode then takes a contemplative long view — a geological, even cosmological view. Humanity may be only 20 to 30 percent of the way through its potential lifespan as a species, which means the vast majority of people who will ever live have not yet been born. The decisions being made right now about AI don’t just affect us; they open or close possible futures for billions or even trillions of people who don’t yet exist. Corey raises the Fermi paradox and the concept of the Great Filter — the threshold that most intelligent civilizations may fail to cross — and asks whether AI might be ours. Not as a counsel of despair, but as a reframe of what’s actually at stake: a rite of passage, not a death sentence.
There’s also a sharp and timely analysis of social media’s role in this story. Corey argues that social media has been catastrophically corrosive to collective epistemics over the past two decades — pulling average discourse below the rational-modern floor, enabling “cults of conviction” to form and seal themselves off from challenge, and producing what he calls our first postmodern president: not a leader who embodies postmodern values, but one who is the product of postmodernized information systems. Provocatively, he suggests that AI itself might eventually kill the social media influencer — flooding the zone with generated content until the signal-to-noise ratio collapses and forces a fundamental epistemic reset.
The episode closes on a note of cautious optimism. AI is a language machine, and that makes it uniquely susceptible to projection — we pour our hopes and fears and fantasies into it because it speaks back to us. Wielding it consciously means understanding our relationship with the technology at least as much as the technology itself. And if we can do that — if we can coordinate to navigate this crossroads responsibly — then as Stephanie’s synthesis character concludes: maybe we can coordinate to do anything.
Enlightenment or bust.
Learn more about the Faces of X project here.
About Stephanie Lepp
Stephanie Lepp is an award-winning producer and storyteller whose work strives to expand hearts and minds. She is the former Executive Director of the Institute for Cultural Evolution, a non-profit think tank that addresses political polarization at its cultural roots. Prior to that, Stephanie Lepp served as Executive Producer at the Center for Humane Technology, the organization at the heart of the Netflix documentary The Social Dilemma.
About Corey deVos
Corey W. deVos is editor and producer of Integral Life. He has worked for Integral Institute/Integral Life since the spring of 2003, and has been a student of integral theory and practice since 1996. Corey is also a professional woodworker, and many of his artworks can be found in his VisionLogix art gallery.

