One Ring to rule them all,
One Ring to find them,
One Ring to bring them all
and in the darkness bind them.
“The Lord of the Rings” is, at its core, a story about power: how it attracts, how it concentrates, and how it quietly rewrites the moral lives of those who reach for it.
The Ring is not terrifying because it is loud or monstrous. It is terrifying because it is plausible. It is, in the end, a small object with a vast radius. The mere possibility of possessing it reshapes motives, alliances, and ethics. Even the wise are tempted, not because they love evil, but because they can imagine doing good with immense power. That is why the Ring is dangerous. It amplifies intention without improving it, and it persuades its bearer that exceptional power will remain under exceptional control.
When Galadriel is offered the Ring, she does not recoil. She hesitates. She sees what she would become if she accepted it: “All shall love me and despair!” In her refusal sits Tolkien’s moral architecture: the recognition that once power is concentrated and amplified, it rarely stays benign for long, even in capable hands and even with good intentions.
We are living through a comparable dynamic.
Artificial intelligence is not spreading like previous technologies. It is advancing under race conditions. The central prize is the ability to set the defaults of modern life: how knowledge is produced, how work is organized, how information circulates, and how decisions are made. The first to build the most capable systems, secure the deepest data reservoirs, and embed models into education, law, media, and business gains advantages that are hard to unwind. In this environment, caution becomes a competitive handicap.
This race concentrates power by design. A remarkably small number of companies, and an even smaller circle of executives and engineers, are currently shaping the architectures through which billions of people access information, create content, and increasingly outsource judgment. Their choices about training data, optimization goals, safety thresholds, and acceptable outputs are often described as technical details. They are not. They are political decisions translated into infrastructure.
Public discussion often frames AI safety as an alignment problem: how do we ensure systems do what we want? But that framing assumes a stable and shared “we.” In reality, values are being operationalized under competitive pressure by actors who are neither globally representative nor democratically mandated. The defaults they choose become the invisible boundaries of discourse. What is boosted, suppressed, filtered, ranked, or flagged shapes the cognitive environment in which societies form opinions and make collective choices. A narrow group of private actors is setting the procedural morality of the digital world, and the digital world increasingly sets the conditions of the physical one.
Many of us are underestimating the impact because the compounding seems confined to software. We do not feel personally affected if, inside leading AI labs, a growing share of coding is machine-assisted or machine-generated. But software is not “just software.” Code is infrastructure. It sets the rules, the incentives, the constraints of everything built on top of it. And the same compounding will not stay behind screens for long. As frontier systems accelerate work in infrastructure, biology, logistics, and supply chains, capability will stop being mostly about convenience. It will become leverage over the physical world.
Tolkien understood that the Ring’s most dangerous feature was not the power on offer, but the reasoning it provoked. A seemingly responsible thought appears in everyone’s mind: if I do not take it, someone worse will. But that logic turns ethics into strategy, and strategy into inevitability. Each actor feels compelled to move, not out of confidence in personal virtue, but out of fear of rivals. No one feels free to pause. No one wants to be the only one who refrains.
Part of me would like everything to stop, to freeze, as if someone could press pause on a movie. But stopping technological progress is not a plan. It is a wish. And it will not happen unless an external shock forces it. The real question, therefore, is whether institutional counterweights can scale alongside capability: transparency about training data and system limits, independent oversight with real authority, clear accountability for systemic harms, democratic participation in high-impact deployment decisions. Without these, amplification becomes asymmetry, and asymmetry hardens into durable power.
Whoever holds the Ring reshapes the world. Whoever designs and governs the most powerful AI systems will shape the cognitive and moral architecture of this century.
Will the rules that guide these systems emerge from open, collective deliberation, or from the strategic choices of a handful of private actors who simply reached the Ring first?
Are we building a shared constitutional moment for intelligence, or accepting a silent transfer of authority to those fastest in the race?
The power is real. The race is real. So is the responsibility.


