AI & Automation
In the automotive industry, time is money, but test vehicles are even more money. A prototype or pre-production vehicle costs between 150,000 and 800,000 euros, depending on its stage of development. Anyone who works in the field knows the situation: vehicle development is divided into large areas such as powertrain, electrical/electronics, or driver assistance systems. Each area is responsible for its own test vehicle planning, because vehicles are typically configured for a specific area and cross-utilization between areas is rarely possible.
Within a single area, however — such as driver assistance systems — the situation looks different. Multiple departments with different test tasks, different requirements for the vehicle’s development status, and different time needs all compete for the same pool of test vehicles. Camera-based systems, radar sensors, fusion algorithms, homologation, software validation — every unit has its requirements, its deadlines, and its dependencies.
Today, this is mostly coordinated by a single person or a small team that functions as a human optimization algorithm. Requirements are collected by email, consolidated into spreadsheets, and conflicts resolved in alignment meetings. By the time the finished plan is distributed, it is already outdated — because one department changed its test scope, a vehicle unexpectedly had to go into the workshop, or in the worst case was involved in an accident.
This is not a niche problem. It is a structural problem that affects every larger development area, every OEM, and every major Tier 1 supplier. And it is a problem that can be significantly reduced with a combination of automation, AI, and a well-thought-out system architecture.
A spreadsheet can plan, but it cannot think. A calendar tool can display bookings, but it cannot recognize that two consecutive bookings on the same vehicle require a conversion that takes four weeks and shifts every subsequent appointment. An email can communicate a change, but it cannot automatically check which other bookings are affected and whether a conflict arises.
The real problem is not the absence of data. It is the absence of logic that can do something with that data. And the knowledge lives in people. When the person who holds the overview leaves the company or is absent for an extended period, the process collapses.
What is needed is not a better Excel. What is needed is a living system.
The system thinks in two clearly separated phases.
In the planning phase, the entire test requirement is collected once at the start of a development project, processed through an AI optimization analysis, and translated into an initial plan. At the end of this phase, a fixed test vehicle fleet is established. How many vehicles are procured or reserved is decided in this phase. From that point on: work with what is available. Plan changes are possible, but the fleet size is set. If a vehicle is lost due to an accident or irreparable defect, a defined replacement or redistribution process kicks in — also mapped in the system.
In the operational phase, the plan lives. Departments report changes, vehicles go into the workshop, deadlines shift. The system processes this continuously, updates the plan automatically, communicates changes to all affected parties, and escalates to a human precisely when it cannot find a solution itself or when a decision requires human responsibility.
These two phases are reflected in four workflows that communicate with each other.
The first workflow is triggered manually once. Each participating department receives a link to a structured input form. What is collected there is not free text, but structured data: number of vehicles required, test duration, required development status of the vehicle, flexibility in the time window, dependencies on other departments, priority of the test activity.
All inputs are stored in a central database. As long as responses from individual departments are still outstanding, the system sends automatic reminders. Once all departments have responded, the workflow hands the collected data to the AI.
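As a sketch, the structured record collected per department might look like the following. The field names and values are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestRequirement:
    """One department's structured test requirement (hypothetical schema)."""
    department: str
    vehicles_needed: int            # number of vehicles required
    duration_weeks: int             # test duration
    required_build_stage: str       # required development status, e.g. "VP"
    earliest_start_week: int        # flexibility in the time window
    latest_end_week: int
    depends_on: list = field(default_factory=list)  # departments that must test first
    priority: int = 3               # 1 = highest

req = TestRequirement(
    department="Radar Sensors",
    vehicles_needed=2,
    duration_weeks=6,
    required_build_stage="VP",
    earliest_start_week=10,
    latest_end_week=26,
    depends_on=["Camera Systems"],
    priority=2,
)
print(req.department, req.vehicles_needed)
```

Because the data is structured rather than free text, the later optimization step can consume it directly, without anyone re-typing emails into a spreadsheet.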
The AI receives a classic optimization task: how many test vehicles are needed at minimum, and how must the usage times be arranged so that as many departments as possible can meet their requirements while using as few vehicles as possible? The result is not a blind calculation output, but an explained proposal: which departments can test sequentially on the same vehicle? Where are there conflicts that require a decision? Which department is flexible enough in terms of timing to shift?
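The core of the scheduling question can be sketched as greedy interval partitioning: for fixed bookings, the minimum number of vehicles equals the maximum overlap. A real optimizer would also handle flexible time windows, build stages, and cross-department dependencies; this minimal sketch covers only fixed bookings:

```python
import heapq

def min_vehicles(bookings):
    """Greedy interval partitioning for fixed bookings.

    bookings: list of (start_week, end_week) tuples.
    Returns (vehicle_count, assignment), where assignment maps each
    booking to a vehicle id. Reuses a vehicle whenever one is free.
    """
    bookings = sorted(bookings)
    free_at = []                 # min-heap of (end_week, vehicle_id)
    assignment = []
    vehicles = 0
    for start, end in bookings:
        if free_at and free_at[0][0] <= start:
            _, vid = heapq.heappop(free_at)   # reuse a freed vehicle
        else:
            vid = vehicles                    # one more vehicle is needed
            vehicles += 1
        heapq.heappush(free_at, (end, vid))
        assignment.append(((start, end), vid))
    return vehicles, assignment

count, plan = min_vehicles([(1, 4), (2, 6), (5, 8), (7, 9)])
print(count)   # → 2: two vehicles suffice for these four bookings
```

The explanatory part of the AI's output, which departments can share a vehicle sequentially and who is flexible enough to shift, is exactly what this assignment list encodes.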
The test vehicle manager receives this proposal for review and approval. Only after his approval does the system switch to the operational phase.
From the moment the operational phase is active, Workflow 2 handles all incoming change requests. It is triggered when a department submits a change request to the plan via the web interface.
Importantly, changes to the approved plan are not granted automatically. Anyone requesting a change must justify it. This is not a bureaucratic hurdle, but a necessary barrier against uncontrolled replanning, which in practice quickly leads to chaos because every change can have knock-on effects on other departments.
The process works as follows: the department submits the change request with justification. The AI then analyzes the change request, checks the impact on the overall plan, and generates a structured approval template for the test vehicle manager. This template contains the department’s original justification, the AI’s feasibility analysis, the affected bookings, and the possible consequences of approval or rejection.
The test vehicle manager decides on the basis of this template. He approves, rejects, or returns a modified counter-proposal. Only after approval is the plan updated and all affected departments notified.
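A minimal sketch of such an approval template. The keys and the shape of the inputs are assumptions for illustration:

```python
def build_approval_template(request, analysis):
    """Assemble the structured decision basis for the test vehicle manager
    from the department's request and the AI's analysis (hypothetical shape)."""
    return {
        "request_id": request["id"],
        "department": request["department"],
        "justification": request["justification"],   # original justification
        "feasible": analysis["feasible"],            # AI feasibility verdict
        "affected_bookings": analysis["affected"],   # bookings that would shift
        "if_approved": analysis["if_approved"],      # consequences of approval
        "if_rejected": analysis["if_rejected"],      # consequences of rejection
    }

template = build_approval_template(
    {"id": "CR-017", "department": "Fusion Algorithms",
     "justification": "Test scope extended after supplier delay."},
    {"feasible": True,
     "affected": ["V-07: homologation start shifts by 2 weeks"],
     "if_approved": "Homologation moves to week 34; no further conflicts.",
     "if_rejected": "Fusion tests run on reduced scope in the original slot."},
)
print(template["feasible"])   # → True
```

The point of the fixed structure is that the manager always decides on the same decision basis, regardless of which department asked.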
If the AI detects a conflict during analysis that goes beyond a simple feasibility check, it escalates directly to Workflow 4. The manager then receives the full conflict briefing instead of a simple approval template.
Workflow 3 is technically the most demanding, because it introduces a dimension into the planning that no conventional calendar tool knows: the configuration state of a vehicle.
A test vehicle is not a generic asset that can simply be passed on. When one department tests a vehicle with a specific sensor configuration, a particular software version, and defined add-on components, and the next department needs the same vehicle with a fundamentally different configuration, there is work in between. Conversion time, workshop capacity, possibly waiting time for parts or a new software calibration.
Workflow 3 is triggered as soon as a new booking comes in for an already occupied test vehicle. It loads the preceding booking and compares the configurations. If no conversion is necessary, it schedules a direct handover. If a conversion is necessary, it checks workshop capacity and the available time between bookings.
If everything fits, a conversion block is entered in the plan and the responsible workshop is notified. If it does not fit, the case goes to Workflow 4.
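The decision logic of Workflow 3 can be sketched as a configuration diff followed by a capacity check. The one-week-per-changed-component estimate and the config field names are placeholder assumptions:

```python
def plan_handover(prev_config, next_config, gap_weeks, workshop_free_weeks,
                  conversion_weeks_per_change=1):
    """Compare two vehicle configurations and decide between direct handover,
    a scheduled conversion block, or escalation (sketch of Workflow 3)."""
    changed = {key for key in prev_config.keys() | next_config.keys()
               if prev_config.get(key) != next_config.get(key)}
    if not changed:
        return {"action": "direct_handover"}
    needed = len(changed) * conversion_weeks_per_change
    if needed <= gap_weeks and needed <= workshop_free_weeks:
        return {"action": "schedule_conversion", "weeks": needed,
                "changes": sorted(changed)}
    return {"action": "escalate_to_workflow_4",
            "weeks_needed": needed, "gap_weeks": gap_weeks}

result = plan_handover(
    {"software": "SW 21.4", "front_radar": "Gen5", "lidar": None},
    {"software": "SW 22.1", "front_radar": "Gen5", "lidar": "Proto-B"},
    gap_weeks=3, workshop_free_weeks=2,
)
print(result)   # → schedule a 2-week conversion: lidar and software change
```

In practice the duration estimate would come from the workshop itself, per component and per build stage, but the three-way branching is the structural point.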
For the web interface this means: every test vehicle has not just booking blocks, but also explicit conversion periods as separate planning elements. Visible to all parties involved, with the respective configuration status before and after the conversion.
Workflow 4 is the core of the entire concept, because it defines the relationship between human and system.
The system is not an autonomous decision-maker. It is a structured assistant that resolves all solvable cases on its own and delivers the best possible decision basis for all unsolvable ones. Workflow 4 ensures that this boundary is cleanly maintained.
When Workflow 2 or 3 detects an unresolvable conflict, Workflow 4 loads all relevant data and passes it to the AI. The AI does not generate a generic error message template, but a specific escalation briefing: what is the concrete problem, which bookings are affected, which solution options has the system already evaluated and why did they not work, what decision options exist and what are the consequences of each.
This briefing goes to the test vehicle manager, including a direct link to the escalation view in the web interface. There he sees everything at a glance, makes his decision, or enters his own proposed solution. The system translates this decision into concrete plan changes, updates all affected bookings, and informs the departments involved.
The manager decides. The system executes.
Beyond the workflow logic, there are several technical aspects that need to be thought through for a serious implementation.
Test vehicles as data objects: every vehicle in the system is an independent object with a complete configuration history. Which software version is currently running, which add-on components are installed, what development status is present. This information is not static — it changes with every conversion, and every change is versioned so that the plan is traceable at any point in time. If a vehicle is permanently lost, it is deactivated in the system and a replacement process is initiated.
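A minimal sketch of such a vehicle object with a versioned configuration history. Names and structure are assumptions; the key idea is that conversions append new versions rather than overwrite old ones:

```python
from datetime import date

class TestVehicle:
    """A test vehicle as a data object with versioned configuration history."""

    def __init__(self, vin, initial_config):
        self.vin = vin
        self.active = True
        # history of (valid_from, config) pairs, appended in chronological order
        self._history = [(date.min, dict(initial_config))]

    def apply_conversion(self, on, changes):
        # each conversion appends a new version; old versions stay untouched
        new_cfg = {**self._history[-1][1], **changes}
        self._history.append((on, new_cfg))

    def config_on(self, day):
        # walk the history to find the configuration valid on a given date
        cfg = self._history[0][1]
        for since, version in self._history:
            if since <= day:
                cfg = version
        return cfg

    def deactivate(self):
        # a permanently lost vehicle is deactivated, not deleted:
        # its history must remain traceable
        self.active = False

v = TestVehicle("WVWZZZ-TEST-001", {"software": "SW 21.4", "lidar": None})
v.apply_conversion(date(2024, 6, 1), {"software": "SW 22.1"})
print(v.config_on(date(2024, 5, 1))["software"])   # → SW 21.4
print(v.config_on(date(2024, 7, 1))["software"])   # → SW 22.1
```

Because every version carries its valid-from date, the plan can be reconstructed as it stood at any point in time, which is exactly what the auditability requirement below demands.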
Configuration compatibility: not every department can work with every vehicle configuration. A chassis test does not require a specific software version; an ECU test does. The system must know these dependencies and take them into account when making assignments.
Differentiated conflict types: a scheduling conflict is different from a configuration conflict, which is different from a capacity bottleneck in the workshop. The AI must distinguish between these types because the solution options differ and the decision basis for the manager is different.
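As an illustration, the conflict types could be modeled as an explicit enumeration that steers what the escalation briefing focuses on. The mapping below is hypothetical:

```python
from enum import Enum, auto

class ConflictType(Enum):
    SCHEDULING = auto()         # two bookings overlap on one vehicle
    CONFIGURATION = auto()      # required config cannot be reached in time
    WORKSHOP_CAPACITY = auto()  # conversion needed, but no workshop slot

# hypothetical mapping: which solution space the escalation briefing explores
ESCALATION_FOCUS = {
    ConflictType.SCHEDULING: "alternative time windows and priority trade-offs",
    ConflictType.CONFIGURATION: "alternative vehicles with compatible configurations",
    ConflictType.WORKSHOP_CAPACITY: "external workshop or shifted conversion slot",
}

print(ESCALATION_FOCUS[ConflictType.WORKSHOP_CAPACITY])
```

Typing the conflict explicitly is what lets the system tailor the decision basis instead of emitting one generic "conflict detected" message.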
Auditability: in automotive development with its quality assurance processes, not only the result is relevant but also the path to it. Every plan change, every approval, every rejection, every escalation, and every management decision is documented with a timestamp and justification. This is not a nice-to-have — it is a fundamental requirement.
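A sketch of such an append-only audit trail; in a real implementation this would be an immutable database table, and the field names here are assumptions:

```python
import time

audit_log = []  # stand-in for an append-only database table

def record(event, actor, justification, **details):
    """Append one audit entry; every entry carries a timestamp,
    the acting party, and a justification."""
    entry = {"ts": time.time(), "event": event, "actor": actor,
             "justification": justification, **details}
    audit_log.append(entry)
    return entry

record("change_approved", actor="test_vehicle_manager",
       justification="Supplier delay confirmed; homologation buffer sufficient.",
       request_id="CR-017")
print(len(audit_log))   # → 1
```

Entries are only ever appended, never edited or deleted, so the path to every planning decision remains reconstructible.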
The system is only as good as its interface. A Gantt-style view per test vehicle shows at a glance who is using which vehicle when, which conversion work is planned, where gaps exist, and which approvals or escalations are open.
Each department sees its own bookings as well as the immediately relevant neighboring bookings on the same vehicles. The test vehicle manager sees everything. Changes are not reported by email, but requested directly via the interface, with a mandatory field for justification. Approval templates and escalations have their own views with all the information needed for a decision.
This is the actual comparison with the status quo: instead of a centrally managed document that only one person truly understands, every stakeholder has access to their relevant section of the plan — in real time, always current, with a clear structure for changes and decisions.
The obvious benefit is the savings in test vehicles through better utilization. If the system saves two or three vehicles, the implementation pays for itself within the first project.
The more lasting benefit is different: the knowledge stays in the system. Today, the quality of test vehicle planning depends significantly on individual people. When these people leave the company, the planning knowledge goes with them. In the system, every decision, every conflict, every solution is documented. Onboarding shrinks from months of familiarization to a structured ramp-up.
Added to this is response speed. Today it takes days for a reported change to be incorporated, communicated, and aligned. In the system this happens within hours — in simple cases within minutes.
And finally: control without added effort. The approval process in Workflow 2 ensures that no uncontrolled changes are made to the plan without the test vehicle manager needing to sit in alignment meetings every day. He decides in an informed way when it is necessary, and the system takes care of the rest.
Test vehicle planning is not a glamorous problem. It is an operational problem that arises in every development project, generates significant coordination effort, and has a direct impact on costs and development speed. AI and automation cannot solve everything here — nor should they. But they can raise the large part of the process that is today manual, error-prone, and time-consuming to a quality and speed that is simply not achievable with human coordination alone.
What remains is the decision when the system reaches its limits. That is the point where the human is needed, and that is where human judgment genuinely belongs.
Many companies struggle with exactly these problems — often without consciously recognizing them.
Together we can quickly identify where automation and AI can create concrete value.