AI WITHOUT HYPE. WHAT DECISION-MAKERS REALLY NEED TO KNOW.

6 Reasons Why Most AI Projects Fail.

The situation

A company decides to use AI. Management is excited, the budget is approved, the vendor is hired. Three months later, the results are sobering. The output doesn’t fit. Employees aren’t using the system. The project is at risk of failing.

Not an isolated case. The norm.

The problem

Everyone is betting on AI today. And rightly so. AI is a powerful tool that can create real value. But between the promise and the reality, there is often a wide gap.

Most AI projects don’t fail because of the technology. They fail because of the people and processes behind it. Because of false expectations, poor foundations, and inadequate preparation.

As described in the first article of this series, many companies still lack a solid understanding of what AI can and cannot do. That is where almost every problem starts.

The root cause

A tool is only as good as the person using it. Those who hold a hammer wrong will miss the nail. Those who use AI without a plan, without clean data, and without a clear goal will get results nobody can use.

The problem is rarely the AI itself. It lies in the preparation, the execution, and the mindset.

What can we take away from this?

1. No concrete goal

I come from the engineering world. There, systems are described in requirement specifications: documents with unambiguous requirements. You have to think a system through properly and develop the right requirements before you start building.

The same applies to AI projects. You have to think a system through properly before writing a single prompt. Otherwise the prompt stays too vague, the AI makes assumptions you don't actually want, and you give it too much room for interpretation.

AI can support requirement development and remind you of things you may have overlooked yourself. But the human must retain control and question the results critically.

2. AI is used even though there is a better solution

My principle is: as much AI as necessary, as little as possible.

Automations without variables always deliver the same output. And that is exactly what you want in production: stability and reproducibility. A simple if-then must not be handed over to AI. It has to be implemented in rigid structures and clear logic. AI introduces variability into the system. Sometimes that is intentional — but often it is not.
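A minimal sketch of this principle, using a hypothetical discount rule: deterministic business logic belongs in plain code, where the same input always produces the same output, rather than in a prompt.

```python
def discount_rate(order_total: float) -> float:
    """Fixed if-then rule: same input always gives the same output."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

# Reproducible by construction: no model, no variability.
print(discount_rate(1200))  # deterministic: always 0.1
```

Handing this rule to an AI would only add a source of variability to something that is trivial to implement as rigid logic.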

3. Data hygiene

AI can only be as good as the data it draws on. Outdated, inconsistent, or otherwise contaminated data will most likely produce output that misses the desired result.

At Volkswagen we had a blunt saying for this: garbage in, garbage out. It applies to AI just as much today. Those who start an AI project with bad data should not be surprised by bad results.
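One way to act on "garbage in, garbage out" is a simple pre-flight check before data ever reaches the AI step. The sketch below is hypothetical (field names and the freshness threshold are assumptions, not from the article): it rejects records that are incomplete or stale.

```python
from datetime import date, timedelta

def is_clean(record: dict, max_age_days: int = 365) -> bool:
    """Reject records with missing fields or data older than the threshold."""
    required = ("id", "text", "updated_at")
    if any(record.get(key) in (None, "") for key in required):
        return False  # incomplete record: garbage in
    age = date.today() - record["updated_at"]
    return age <= timedelta(days=max_age_days)

records = [
    {"id": 1, "text": "current spec", "updated_at": date.today()},
    {"id": 2, "text": "", "updated_at": date.today()},  # empty field
]
clean = [r for r in records if is_clean(r)]
```

The point is not this particular check, but that data quality is verified before the AI step, not debugged after it.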

4. Acceptance

AI is on the rise and by now an established tool. Nevertheless, there are legitimate doubts and a certain caution in companies when dealing with it.

Sometimes this is because employees have not yet engaged with AI. Sometimes it is because someone fears being replaced by it and would rather block the technology than learn to work with it and capture the efficiency gains.

Both are a risk to project success. Those who introduce AI without bringing their people along will fail at the execution — not the technology.

5. Human in the loop

During my time as an engineer in the automotive industry, we often spoke about HiL. We meant Hardware in the Loop — the relevant control unit embedded in a test environment.

I want to use the abbreviation differently now: Human in the Loop. One of the most important principles within an AI-supported workflow.

Of course you want to automate as much as possible. But it makes sense to build in checkpoints at certain stages. Points where an employee reviews and approves the results. A kind of quality control of the workflow at critical junctures. Not because you don’t trust the AI, but because responsibility must remain with the human.

6. The first prompt is rarely the last

Those who write a prompt for the first time often expect perfect results immediately. The disappointment follows quickly.

The reason: the prompt was not yet precise enough. Too many assumptions were left open. The task was formulated too broadly. AI needs precise inputs to deliver precise outputs. Prompt engineering is a discipline in its own right and requires iteration, patience, and an understanding of how AI works.
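A small illustration of the difference (both prompts are invented for this example): the vague version leaves room for assumptions, while the revised version pins down format, scope, and audience.

```python
vague = "Summarize this report."

precise = (
    "Summarize the attached quarterly report in exactly 5 bullet points "
    "for a non-technical management audience. Cover only revenue, costs, "
    "and headcount. Do not speculate beyond the numbers given."
)
```

The second prompt is typically the result of several iterations, not the first attempt.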

Those who understand this approach AI projects with the right mindset: as an engineering process, not as pressing a button.

Am I against AI?

Anyone reading both articles in this series might think I am against AI. I am not.

I find AI extraordinarily useful. But you have to be deliberate about how and where you deploy it. Because that is the reason you use AI in the first place: to save time and achieve meaningful, reproducible results.

Those who keep that in mind build systems that actually work.

Does this sound familiar?

In many companies, this is exactly where unnecessary time losses and structural problems arise. Often this goes unnoticed for a long time — until projects start to stall.