If We Can’t Slow AI, We Must Accelerate Responsibility

AI is not incremental progress. It is a structural shift.

Will it make the world better?

We don’t know. Anyone claiming certainty (on either side) is guessing. History hasn’t been written yet.

Instead of reacting emotionally, swinging between hype and fear, I believe we should approach this the way we approach markets:

In scenarios.

On the stock market, you don’t only look at what has already happened. You don’t just digest last quarter’s numbers. You model possible futures. Optimistic cases. Base cases. Downside risks. You prepare for volatility before it hits.

Right now, many people are still processing what AI has already changed. But equally important questions are:

Where are we heading?
What could the second- and third-order effects look like?
And how do we manage and prepare for the risks and structural transitions ahead?

Structural Future #1: AI overtakes most work

AI radically increases productivity. One person orchestrates fleets of agents. Small teams build what once required hundreds. Creativity expands. Innovation is unleashed.

Maybe we work less. Maybe work becomes optional. Maybe we spend our time only on what intrinsically excites us.

I feel that upside. It’s real. I’ve already offloaded parts of the “boring” work to AI. And this is just the beginning.

But what if AI can and does do everything?

Even the most optimistic structural scenario brings disruptive questions:

If agents do the work, what do humans do?
Where do meaning and contribution come from?

Will I sit in the sun, playing with my kids without the need to work? What will my kids do?
Do they still need to learn? To go to school?

These are second- and third-order effects. Even the most positive scenario implies disruption. And we know disruption is rarely painless.

Some argue universal basic income is the answer.

But work is not just income. Work structures identity. It provides status, belonging, rhythm, purpose.

If AI reshapes work at scale, we are not just facing economic disruption; we are facing an identity shift.

Even if productivity rises, what happens if meaning declines?

GDP models don’t capture meaning, just money.

Structural Future #2: AI amplifies us

Here, AI removes the boring parts. Humans focus on creative, relational, strategic work. Professions transform rather than disappear.

Some jobs may not exist in five years. New ones will emerge.

Perhaps each person can do more of what they do well, and less of what is replaceable.

Maybe we work less for others and reclaim more time for ourselves and our families.

This is the version many hope for.

And I believe it is possible, but only if we remain intentional about how we steer innovation.

The Risk Layer: Transition and Externalities

Separate from these structural futures is another dimension: how well we manage acceleration and its side effects.

Right now, development is moving faster than society can think and act.

The negative sides of AI are not speculative. They are already here, and they are real, measurable, and often hidden from everyday view.

  • AI systems consume enormous amounts of energy through data centers. Expansion plans are massive.
  • Bias is amplified by the way these systems are built and trained.
  • There is a serious copyright issue that many would call infringement. AI systems have been trained on the work of millions. Now value flows not back to those creators, but primarily to AI companies. Without the data (humanity’s data!) these systems would be little more than lines of code.
  • Economic power is concentrating. Current valuations of AI companies do not suggest a future sustained by $20-per-month hobbyist plans. Access may not remain democratized.

These are not distant risks. They are present realities.

And here is the deeper concern: We are already struggling to address today’s problems, while the pace of development keeps accelerating.

Governance, regulation, infrastructure adaptation, and public understanding move slower than model releases. The gap is widening.

The Acceleration Power Struggle

Why is that gap so hard to close?

Because acceleration itself is structurally incentivized.

In markets, incentives drive behavior. The same applies here.

AI development is entangled with power: corporate power, national power, economic dominance. Acceleration is not just innovation. It is strategic advantage.

The fastest wins.

This is not just competition between companies. It is geopolitical. A race between countries.

If one actor slows down to address environmental cost, bias, or societal adaptation, another actor gains ground.

That makes “let’s slow down” far more complicated than it sounds.

So we are in a paradox:

We need time to manage AI’s externalities. But the competitive dynamics push us to move faster.

And this is precisely why we must be intentional about what we accelerate.


Where I stand

I am not anti-AI. I don’t want to take the joy out of innovation.

But enthusiasm alone is not strategy.

I use AI not only because it is exciting, but because I feel I must. If you don’t understand a transformative technology, you lose the ability to influence it. Avoiding it won’t make it safer. Expertise might.

Would I prefer slower acceleration? Yes. But realistically, that is unlikely in the current geopolitical and economic system.

So here is what I advocate for:

Let’s celebrate innovation — and invest as aggressively in mitigating AI’s environmental impact, bias, governance gaps, access inequality, and potential labor and identity shifts as we do in scaling capability.

Right now, most effort goes into making AI more capable. Far less goes into addressing the risks already visible — and even less into preparing for the structural changes that may still come.

Top priorities for innovation should be:

  • Cleaner AI: A drastically reduced environmental footprint.
  • Fairer AI: Societal impact matters as much as benchmarks.
  • Responsible leadership: Leaders who prioritize long-term societal stability over short-term competitive wins.
  • Broad access: Systems trained on humanity’s collective knowledge must not become tools reserved for those who can afford massive monthly spend.
  • Societal preparation for labor transformation: Serious thinking about how work, income, education, and identity evolve if AI meaningfully reduces the need for human labor.
  • Clear guardrails for a world of advanced autonomy: Frameworks that define limits, accountability, and control if AI systems approach or exceed human capability in key domains.

Acceleration is built into the system. Incentives push toward capability, scale, dominance. Responsibility does not emerge automatically from those dynamics. It requires restraint. It requires shifting priorities. And that does not happen naturally in competitive markets.

That is why awareness matters.

Celebrating new capabilities must be matched by equally serious scrutiny of environmental impact, bias, copyright, power concentration, as well as the long-term societal consequences of automation.

Unless society understands not only what AI can do, but also what it costs and what it may fundamentally change, there will be no meaningful pressure on those who can steer it.
And without collective awareness, the incentives driving acceleration will continue unchecked.

Dr. Michaela Greiler