ANOTHER SIDE OF PARADISE, PART 2: PLURIBUS (2025)
- David Bertoni
- Nov 14
Updated: Nov 16

As I mentioned in my last post, my gut tells me that Pluribus isn’t so much science fiction as it is a slightly dramatized user manual for where we’re heading technologically. Something like a singularity, maybe, with the lingering (and increasingly awkward) question of what becomes of individuality once the species starts merging its neurons with its notification settings.
Let’s follow this thread.
It’s not difficult to imagine a time, and not a distant one, when every person has instant, always-on access to the collective knowledge of the entire species. Search engines were the digital starting line. AI is the first real acceleration. Sure, AI hallucinates, but so do people. If you think it’s tricky to separate fact from nonsense when you’re scraping the internet, try scraping a few billion human minds. That’s a compost heap of confabulation.
We haven’t seen the Pluribus collective slip into that kind of memory salad (yet), but give it time. Any mind (alien, human, or hive) eventually creates out of whole cloth, I would think.
Now add brain-to-machine interfaces, the kind DARPA is almost certainly quietly testing in hangars that don’t officially exist. Yes, eventually, they probably do want to remove pilots entirely, but set that aside. Imagine seamless neuronal access to a planetary knowledge base.
Then add AI as a kind of cognitive prosthesis, borrowing processing power from the machine the same way athletes borrow knees from carbon fiber. Speed of thought becomes literal. The bottleneck, as always, is meaning: will this help us understand anything? Or will we simply be wrong more efficiently and, eventually, universally? A world where everyone agrees on something, setting aside whether it's true, false, or neither.
Once AI is woven into cognition, every human being essentially becomes a hall of mirrors, consulting themselves, consulting the collective, consulting an algorithm, consulting an archive. And all of it will be so fast the seams vanish. Knowledge becomes an internal monologue with too many voices. History, memory, and misinformation begin blending like colors in a cheap paint set. And what of the myriad religious beliefs? Does the pooling of countless mystical experiences generate one true religion, or does the collective decide Islam is the end stop, as the Koran teaches? And does it really matter if we're the equivalent of a fungal network or a leaf-cutter ant colony?
We already know memory is unreliable; the misinformation effect has been tormenting psychologists for decades. With a direct cognitive link to a global data stream, it’s not hard to imagine forming vividly detailed, sensorially rich “memories” of Napoleon’s third heartbreak at the hands of Josephine, complete with candlelight, embroidered drapery, and perhaps a tasteful string section. All, of course, with no witnesses to challenge it.
If the collective says it happened, and your neurons light up on cue, who’s to argue? And expertise? That collapses too. In Pluribus, the boy casually offers technical opinions about medical instruments. We can already get halfway there with YouTube tutorials; the only difference is execution. (I suspect we’re still a few versions away from Brain Surgery for Dummies: Neural Download Edition, but never underestimate the entrepreneurial spirit.)
Imagine a future where you can install the neuronal equivalent of a decade of martial arts training, cardiothoracic surgery, or Eddie Van Halen’s fingerwork. Your body might lag behind your firmware upgrade, but give the biotech industry a few minutes and they’ll sort that out.
At some point, the distinctions between people start dissolving. If billions of us are connected to the same digital storehouse of memory, knowledge, skills, and emotional templates, and if we can download each other’s abilities without the dull intermission of practice, what’s left of the individual? What’s the point of personal achievement if expertise is a menu selection rather than a mountain to climb?
Which brings us back to Pluribus. The show toys with a provocative idea: maybe the loss of individuality isn’t the horror we think it is. Maybe an all-knowing, all-caring, interdependent hive isn’t a dystopia. Maybe it’s the next step. A tidy alternative to the messy solo-brain model we’ve been running for 300,000 years.
But creativity, exploration, innovation all rely on tension, on the friction between one mind and another. What becomes of that when everyone thinks, remembers, and improvises in lockstep? What happens when humanity stops being a species and becomes an organism, effectively an eight-billion-node nervous system with a single meta-self?
If Pluribus is peeking at our future, it may not be warning us about aliens. It may be warning us about us. And maybe encouraging us to decide, while we still can, whether the end of individuality is transcendence, or just the world’s most polite apocalypse.
“Memory says, ‘I did that.’ Pride replies, ‘I could not have done that.’ Eventually, memory yields.” —Nietzsche, Beyond Good and Evil, Part IV, “Apophthegms and Interludes,” Aphorism 68
"For years and years I struggled just to love my life. And then the butterfly rose, weightless, in the wind. 'Don't love your life too much,' it said, and vanished into the world.” —Mary Oliver, One or Two Things


