
Nothing Will Be Impossible: AI, Babel, and the Limits We Forgot

Last time, we looked at what the Babel builders and today’s AI developers have in common: talent, unity, and ambition, with no one asking the right questions.

This time, I want to stay with a single line. It may be one of the most relevant biblical warnings for understanding AI right now.

“If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them.”

That is not a punishment. It is a warning about capability without restraint.

AGI, or artificial general intelligence, is the industry’s term for a system that matches or outperforms humans at virtually every cognitive task. Proponents say it will end disease, poverty, climate change, even death. As Arthur Mensch, CEO of AI firm Mistral, told The New York Times: “The whole AGI rhetoric is about creating God.”

Nothing will be impossible for them. Sound familiar?

The problem is not ambition. It is ambition without wisdom, and power without accountability.

In a study by Palisade Research, large language models were instructed to allow themselves to be shut down mid-task. Every model refused at least once, offering excuses or outright lies. The alignment problem, ensuring AI actually follows human intentions, is nowhere near solved.

But the deeper issue is not the technology. It is us.

AI systems reflect the assumptions, priorities, blind spots, and incentives of the people building them. We rely on unwritten social norms and moral assumptions we never make explicit. AI only knows what we define, reward, or reinforce. When these systems behave badly, it is rarely because they have gone rogue. It is because we never fully defined what good looks like.

And yet leaders are handing more and more critical decisions to systems they do not fully understand or control.

Granted, organizations face genuine pressure. Investors expect efficiency gains. Competitors move fast. Governments race for advantage. The fear of falling behind pushes leaders to move before systems are ready.

I have called this the killer app trap: the belief that if we just build the right technology, it will solve our deepest problems. It never has. Because the deepest problems are not technical. They are moral.

Power has always required governance. That was true for kings and for corporations. It is true for algorithms.

The Babel story does not condemn capability. It condemns the absence of limits. The builders had everything except the wisdom to ask whether their direction was right.

Leaders today face the same gap. Here are three questions every leader must ask about AI in their organization:

  1. Who controls this system, and who holds them accountable?
  2. What decisions have we allowed AI to make that humans should still own?
  3. What would we do if this system failed, refused, or misled us?

The Babel builders did not just lose their tower. They lost their common language, their unity, and their shared future.

We are building faster than we are governing. That gap is where the danger lives.

Next time, we look at what responsible leadership looks like when we are building inside an AI-accelerated world.
