
Why introducing AI fails

AI, software engineering, organizational development, leadership

by Mischa Ramseyer

The wrong starting point

Most organizations know they need to do something with AI. What is missing is a viable entry point. The obstacle is not skepticism but responsibility: instead of building, they explain; instead of anchoring responsibility, they prepare.

AI introductions start where they cannot have any effect — on paper.

Why “introduction” is misleading

The term suggests that AI can be prepared, explained, and then rolled out — like a new tool or a method. AI-powered software engineering does not work that way. AI only unfolds its impact where software is actually built. As long as nothing is built, responsibility stays abstract, decisions can be postponed, and assumptions remain untested.

As long as nothing is built, everything stays abstract

Strategies, programs, and guidelines follow a familiar logic: first create clarity, then act. With AI, this logic delays the real confrontation with a highly complex, fundamental topic. Responsibility cannot be delegated to concepts. It only becomes real when software exists and has to run.

When organizational rules stop working

Once real systems are built, two logics collide: that of organizations built on clean handoffs, clear lines, predictability, and exact responsibilities, and that of work that demands continuous decisions, co-creation, and innovation. What used to work in segmented processes (stage-gates, approvals, handovers) is now concentrated on a few engineers.

Processes must be rethought, radically simplified, and automated.

What used to work in separated roles — business analysis, architecture, testing, operations — now concentrates at the engineer level. Not as a method, but as a consequence: AI accelerates code production, not processes or team dynamics. Responsibility concentrates where software is built.

Structures and processes must adapt to the work — not the other way around.

Why “taking everyone along” stalls progress

Many organizations try to soften this break with change management: take everyone along, avoid overwhelm, explain first. That only smooths the surface. The real, deep change in software engineering only becomes visible once real systems are running:

  • Some cannot. The new role demands a breadth not everyone brings: understanding business requirements, architecture, UX, testing, and operations simultaneously. Years of specialization become a constraint.

  • Some will not. Twenty years of experience in a role — and now that should suddenly matter less? Statements like “Besides, I still write much better code than a machine!” grow louder. The resistance is understandable. It changes nothing about the reality.

  • Some do not engage. Technically open, but unwilling to work in a fundamentally different way. When AI writes the code, it demands continuous leadership, attention, and control — not occasional instruction.

More detail on how software engineering changes can be found in our article AI makes coding easier — software engineering gets harder.

Conclusion

AI introductions do not fail because of the technology, but because of a false ambition: take everyone along, avoid risk, explain first, then act. This path leads nowhere. What is needed instead is a different entry point. One that makes responsibility real, not theoretical. One that starts with those who want to, can, and are ready.

What that entry point looks like is the subject of the next article, The right entry into AI-powered software engineering.