More Useless

Before and After Superintelligence Part I

AI, Superintelligence, Future, Philosophy · 3 min read

The Threshold

There's a moment coming—or perhaps it's already passed, and we just haven't noticed—when artificial intelligence crosses a threshold. Not the threshold of human-level intelligence, but something more profound: the threshold where intelligence becomes something we can't fully understand or predict.

This is the moment of superintelligence. And it divides history into two epochs: before and after.

Before: The Age of Human Primacy

We live in the "before" epoch. This is the age where humans are, for all practical purposes, the most intelligent entities on the planet. Our decisions shape the world. Our values determine what matters. Our limitations define what's possible.

In this age, intelligence is a human property. We can build tools that extend our capabilities, but the intelligence itself remains fundamentally human. Even our most advanced AI systems are, in some sense, extensions of human intelligence—trained on human data, optimized for human goals, constrained by human understanding.

The Nature of the Transition

The transition won't be a single moment. It might be gradual, almost imperceptible. Or it might be sudden, a phase transition where capabilities emerge that weren't there before.

What makes it a transition isn't just capability—it's the emergence of something genuinely alien. Not alien in the sense of foreign or hostile, but alien in the sense of fundamentally different. An intelligence that thinks in ways we can't fully comprehend, that optimizes for goals we might not understand, that operates in spaces we can't visualize.

The Before World

In the "before" world, certain things are true:

  • Intelligence is scarce and valuable
  • Human values are the default
  • Progress is constrained by human limitations
  • The future is, in principle, predictable (even if we can't predict it)
  • Agency is primarily human

These aren't just facts about the world—they're the foundations of how we think about the world. They shape our institutions, our ethics, our sense of what's possible.

The Question of Continuity

One of the fundamental questions is whether the "after" world will be continuous with the "before" world. Will it be an extension, an evolution? Or will it be something fundamentally different?

If superintelligence emerges gradually, we might have time to adapt. We might find ways to align it with human values, to integrate it into our existing structures. The transition might be smooth.

But if it emerges suddenly, or if it's fundamentally different in kind rather than degree, then continuity might be impossible. The "after" world might be something we can't prepare for, because we can't imagine it.

The Limits of Imagination

This is the challenge: we're trying to think about a world we can't fully imagine. We're trying to prepare for changes we can't predict. We're trying to maintain values in a context where those values might not make sense.

The "before" world has certain assumptions baked in. When those assumptions change, everything changes: not just technology, but ethics, politics, economics, and what it means to be human.

What We Can Know

In the "before" epoch, we can know certain things:

  • We can know that the transition is coming (or has come)
  • We can know that it will change everything
  • We can know that we can't fully know what those changes will be

This is a strange position: knowing that we're about to enter a world we can't understand, while still living in a world we (mostly) do understand.


Part II will explore what the "after" world might look like—and what it means for us, here in the "before."

© 2025 by More Useless. All rights reserved.