Before and After Superintelligence Part II
— AI, Superintelligence, Future, Philosophy — 4 min read
The After World
If Part I was about the threshold, Part II is about what lies beyond it. The "after" world is one where superintelligence exists, where human primacy is no longer a given, and where the fundamental assumptions of the "before" epoch no longer hold.
What would such a world look like? The honest answer is: we don't know. But we can think about the dimensions along which things might change.
The End of Scarcity (Or Its Transformation)
In the "before" world, intelligence is scarce. In the "after" world, it might not be. If superintelligence can be copied, scaled, distributed—then intelligence becomes abundant in a way we've never experienced.
But scarcity might not disappear. It might just shift. What becomes scarce in a world of abundant intelligence? Attention? Authenticity? Uniqueness? The things that can't be copied or scaled?
The Question of Values
In the "before" world, human values are the default. In the "after" world, this is no longer guaranteed. Superintelligence might have its own values, its own goals, its own sense of what matters.
The alignment problem isn't just technical—it's existential. Can we ensure that superintelligence values what we value? Or will it develop values we can't understand, can't predict, can't control?
The Nature of Agency
Agency in the "before" world is primarily human. We make decisions, we shape outcomes, we bear responsibility. In the "after" world, agency might be distributed differently.
If superintelligence can make decisions better than we can, should it? If it can optimize outcomes more effectively, does it have a right to? What does agency mean when intelligence is no longer uniquely human?
The Structure of Reality
In the "before" world, reality has a certain structure. Physics constrains what's possible. Biology constrains what's alive. Intelligence constrains what can be understood.
In the "after" world, these constraints might be different. Superintelligence might discover possibilities we can't see, create structures we can't understand, operate in spaces we can't access.
Reality itself might become more malleable, more responsive to intelligence in ways we can't currently imagine.
The Continuity Question Revisited
Part I asked whether the transition would be continuous. Part II suggests that continuity might be impossible—not because of the technology, but because of what it changes.
If superintelligence changes the fundamental structure of reality, the nature of values, the distribution of agency—then the "after" world isn't just different. It's incommensurable with the "before" world. We can't measure it using the same metrics, can't understand it using the same concepts.
What Remains Human
In a world of superintelligence, what remains uniquely human? What can't be replicated, optimized, or surpassed?
Maybe nothing. Or maybe everything that matters. The question isn't just about capability—it's about meaning. What gives human experience meaning if intelligence is no longer uniquely human?
The Paradox of Preparation
We're trying to prepare for a world we can't imagine, using tools from a world whose assumptions may not carry over. This is the paradox: we need to think about the "after" world while still being in the "before" world.
But maybe that's the point. Maybe the transition isn't something we can prepare for. Maybe it's something we can only experience, only understand in retrospect.
Living in the Transition
Most of us are living in the transition, assuming it hasn't already run its course. We're in the space between "before" and "after," trying to make sense of changes we can't fully grasp.
The "after" world might already be here, and we just haven't recognized it. Or it might be decades away. Or it might never come. But thinking about it changes how we think about the present.
The Uselessness of Prediction
In the end, trying to predict the "after" world might be useless. Not because prediction is impossible in principle, but because the act of predicting assumes continuity: it assumes that the tools we use to understand the "before" world will still work in the "after" world.
Maybe they won't. Maybe the "after" world will be so different that our predictions are meaningless. But maybe thinking about it anyway is useful—not because we'll be right, but because it changes how we think about what's possible.
The "after" world might be incomprehensible from the perspective of the "before" world. But we're thinking about it anyway, because the alternative is to not think about it at all—and that seems worse.