Whether MiL, SiL, PiL, HiL, unit test, software test or integration test: the world of automotive software testing is full of technical terms, and two people can easily understand different things by the same term. The resulting misunderstandings can make effective collaboration difficult – we have experienced such situations ourselves. Let’s clear things up a bit and start at the very beginning.
The automotive world is constantly evolving and new terms such as “software-defined vehicle” testify to the importance of software for vehicles today.
In development, formerly purely mechanical domains have increasingly been extended with software-based and digital functions. The functions and behavior of a vehicle are now realized almost exclusively through software. And wherever software is mentioned, testing comes up immediately as well. But why software and testing? In the vehicle, software always runs on hardware, and together they form an ECU (Electronic Control Unit). Some vehicles are equipped with up to 150 ECUs – running around 100 million lines of code (LOC). The ECUs communicate and interact with each other to realize specific vehicle functions and make them tangible for customers.
With so much programming code, what could go wrong?! Let’s take a look at an example of a vehicle function that customers can experience directly: the display of traffic signs in the instrument cluster. This is how it works: a camera captures the road ahead, an ECU detects and classifies the traffic sign in the image, and the result is transmitted over the vehicle network to the instrument cluster, which displays it to the driver.
The connection of all sensors, actuators and control units is known as the networking architecture; for a vehicle, it takes at least three years to develop until it is ready for series production. The correct interaction of all the sensors, actuators and control units involved naturally shapes the functionality and quality of the vehicle. To verify this interaction, a vehicle must be tested repeatedly and iteratively in multiple stages.
The big challenge is that parts of a vehicle are often developed more as a product and less as a project, which means that a great many people from several companies and departments are involved in the creation of an automobile.
In summary, the development of an automobile is much more complex than one might first think. This is due on the one hand to the organizational framework and on the other to the large number of system components containing software. The complexity increases further because functions are experienced through the interaction of several system components.
To manage all this, a vehicle requires – and receives – a large number of tests. What exactly is tested, at which test level, and how, is the subject of the next section.
The terms test object, system-under-test and test element are often used synonymously. According to ISTQB, a test object is defined very generically as a “work product to be tested”. Accordingly, a test object can be almost any work product – a single software unit, an integrated software build, an ECU, or an entire system.
In the following, we use the terms test object and system-under-test synonymously for everything that is to be tested.
A test case always consists of at least these two pieces of information:
1. Test data that defines how a test object should be stimulated.
2. Expected values for the test object that define which results or states the test object should produce during stimulation.
Optionally, a test case can be enriched with further relevant information, such as preconditions. A typical precondition for the test object “ECU” is: the ECU is awake and ready to receive messages.
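To make this concrete, here is a minimal sketch in C (all names and values are hypothetical) of how such a test case could be captured as data: the test data that stimulates the test object, the expected value, and the precondition noted alongside.

```c
#include <stdint.h>

/* Hypothetical test case record for an ECU function that shows a
 * detected speed-limit sign in the instrument cluster. */
typedef struct {
    const char *precondition;  /* e.g. "ECU is awake and ready to receive messages" */
    uint8_t     stimulus_kph;  /* test data: speed-limit value reported by the camera */
    uint8_t     expected_kph;  /* expected value: speed limit shown in the cluster */
} TestCase;

static const TestCase test_cases[] = {
    { "ECU awake, bus communication running",  50,  50 },
    { "ECU awake, bus communication running", 120, 120 },
    { "ECU awake, bus communication running",   0,   0 },  /* no sign detected */
};
```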
Test data and expected values in this form are required for test cases at all test levels and in every test execution. Expected values come from various sources of information – also called test oracles. A test oracle can be an existing system (as a benchmark), a specification, or the expert knowledge of an individual. In no case should the code under test itself serve as the source of information.
Dynamic testing is the execution of a test object. Most people associate the term testing with dynamic testing.
In dynamic testing, a test case is created and executed that stimulates the test object with the test data. The stimulation causes the test object to either perform a calculation or change its state. The reaction of the test object is recorded and compared with the expected value. If the reaction equals the expectation, the test case is considered passed; if not, it is considered failed.
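Expressed in code, a dynamic test might look like the following sketch (C, with a hypothetical function under test): the test object is stimulated with the test data, its reaction is recorded and compared with the expected value, and the result is pass or fail.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical test object: limits a requested torque to a maximum value. */
static int32_t limit_torque(int32_t requested_nm, int32_t max_nm)
{
    return (requested_nm > max_nm) ? max_nm : requested_nm;
}

int main(void)
{
    const int32_t stimulus_nm = 450;  /* test data */
    const int32_t max_nm      = 300;
    const int32_t expected_nm = 300;  /* expected value */

    /* Stimulate the test object and record its reaction. */
    const int32_t actual_nm = limit_torque(stimulus_nm, max_nm);

    /* Compare the reaction with the expected value: equal -> passed. */
    if (actual_nm == expected_nm) {
        printf("Test passed\n");
        return 0;
    }
    printf("Test failed: expected %d, got %d\n", (int)expected_nm, (int)actual_nm);
    return 1;
}
```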
The opposite of dynamic tests are static tests. In static tests, the test object is not stimulated, but analyzed statically. An example of a static test is the review of a source code file.
Automotive SPICE indirectly assigns test levels via its process model and includes the following five test-related processes:
1. SWE.4 Software Unit Verification
2. SWE.5 Software Integration and Integration Test
3. SWE.6 Software Qualification Test
4. SYS.4 System Integration and Integration Test
5. SYS.5 System Qualification Test
When mapping test levels to Automotive SPICE, it should be noted that these processes expect more activities than just dynamic testing.
But at which test levels are the test cases actually executed, and for what purpose? We start at the smallest level: the code, because these are the test objects that can be tested earliest.
Software programming is followed by development-related unit tests, which are also referred to as module tests or functional tests. In unit testing, the smallest software components, the units, are tested.
Units are changed frequently; therefore, unit tests must often be adapted, supplemented and executed again. Unit tests have two main goals:
1. Early quality assurance
2. Fast detection of cross-effects in code changes
Unit tests are usually the first to be automated in software development.
Because the software and its components are constantly adapted and changed, embedding these tests in a Continuous Integration approach is extremely useful and already well established. The repetition of tests, regardless of the test level, is referred to as regression testing and is required, among other things, by Automotive SPICE for software unit verification. The simplest way to implement regression tests is to automate them and execute them in a Continuous Integration environment.
Unit tests are followed by software integration tests. Integration means assembling individual software components and testing them together; the focus here is on the compatibility of the software components with each other. Integration tests usually take place in several stages: depending on the size of the overall software, anywhere from a few to several hundred intermediate stages may be planned. The number and selection of intermediate stages ultimately follow from the software architecture and the software design – the more elements and levels there are, the more intermediate stages can be expected in integration testing.
Typically, integration tests are developed bottom-up by first integrating and testing a few units, about 3-5, with each other. The resulting composite is then integrated with other already tested composites or other units at the next intermediate stage and tested again. This chain of iterations continues until the entire software for an ECU has been built and tested.
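As a simplified sketch of this bottom-up idea (C, all names hypothetical): two units that have already been unit-tested are integrated into a composite, and the integration test exercises the interface between them rather than their internal logic.

```c
#include <assert.h>
#include <stdint.h>

/* Unit A (hypothetical): converts a raw wheel-speed sensor value to km/h. */
static uint16_t speed_from_raw(uint16_t raw)
{
    return (uint16_t)(raw / 4u);
}

/* Unit B (hypothetical): decides whether a speed warning must be shown. */
static int warning_required(uint16_t speed_kph, uint16_t limit_kph)
{
    return speed_kph > limit_kph;
}

/* Composite of A and B: the test object of the integration test. */
static int warn_from_raw(uint16_t raw, uint16_t limit_kph)
{
    return warning_required(speed_from_raw(raw), limit_kph);
}

int main(void)
{
    /* Integration test cases: stimulate the composite, check the interaction. */
    assert(warn_from_raw(520, 120) == 1);  /* 130 km/h > 120 km/h -> warning    */
    assert(warn_from_raw(400, 120) == 0);  /* 100 km/h <= 120 km/h -> no warning */
    return 0;
}
```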
The high number of integration stages initially sounds like a lot of effort, but it has the clear advantage that errors are found faster and more reliably. In our experience, the effort of setting up an additional intermediate stage is compensated for by the reduced effort for creating test cases once that stage is in place.
What else speaks in favor of integration testing? Errors found can be more easily narrowed down to their cause and analysis is therefore significantly simplified.
And best of all: Experience shows that most software errors are found in integration tests.
For those who are not yet convinced, any Continuous Integration approach provides for exactly these testing stages.
With the integration tests completed, the software test follows, which is usually executed on the target hardware. The test object in software testing is identical to the last test object in integration testing: the fully integrated software. The two test levels differ in their purpose, however: integration testing focuses on the interaction of the software components, while the software test checks the fully integrated software against its software requirements.
The software test is followed by further integration tests. This time, however, not at the level of software, but at the level of system components. The procedure is the same as for software integration tests. An ECU is tested in conjunction with one or more sensors or actuators, and further components are added bit by bit until the system is in place.
The final tests take place in the system test. Here, all system components are integrated into one system and tested. The focus of the system test is to determine compliance with the system requirements and the system’s readiness for delivery.
In automotive development, there are now a few additional organizational challenges, such as the question: What is a system? For the automotive OEM, it is a vehicle. But how does a supplier who provides a subsystem, such as a powertrain or a software component, answer the question? In this case, the test object must be specified more closely for the test stages.
From a contractual point of view, there is also a further test stage: the acceptance test, which is carried out by the customer. From a contractual perspective, acceptance is a declaration that the development (software, hardware, system, etc.) meets the contractual criteria. With the acceptance, the remaining payment is due and the warranty begins.
A test environment is like a training ground for the test object and its fellow players. It should match the real production environment as closely as possible so that tests of the interaction with other players, states and signals are as meaningful as possible.
In this context, there is often talk of in-the-loop tests, such as:
1. Model-in-the-loop (MiL)
2. Software-in-the-loop (SiL)
3. Processor-in-the-loop (PiL)
4. Hardware-in-the-loop (HiL)
The term before “in-the-loop” denotes the type of test object. “In-the-loop” refers to a particular kind of interaction between the test object and components of the simulated production environment: the environment reacts to the states and calculations of the test object. These tests are thus the opposite of open-loop tests, in which no reactions of the environment are simulated.
The advantage of “in-the-loop” as opposed to open-loop tests is the better approximation to a real production environment. However, the setup of an in-the-loop environment is more complex.
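To illustrate the difference, here is a minimal closed-loop sketch in C (controller, plant model and all values are hypothetical): in every cycle, the simulated environment reacts to the output of the test object and feeds the new state back into the next cycle, whereas an open-loop test would simply replay a fixed input sequence.

```c
#include <stdio.h>

/* Test object (hypothetical): simple proportional cruise controller. */
static double controller(double target_kph, double actual_kph)
{
    const double kp = 0.5;
    return kp * (target_kph - actual_kph);  /* acceleration request */
}

int main(void)
{
    const double target_kph = 100.0;
    double speed_kph = 0.0;  /* state of the simulated environment (plant) */

    /* Closed loop: the environment reacts to the controller's output. */
    for (int cycle = 0; cycle < 50; ++cycle) {
        const double accel = controller(target_kph, speed_kph);
        speed_kph += accel * 0.1;  /* very simple plant model, dt = 0.1 s */
    }

    printf("Speed after 50 cycles: %.1f km/h\n", speed_kph);
    /* In an open-loop test, speed_kph would instead follow a pre-recorded
     * input sequence, ignoring the controller's reaction. */
    return 0;
}
```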
In the automotive environment, development is often model-based. Most models are created with MATLAB/Simulink or TargetLink. The models are usually validated as Model-in-the-Loop (MiL) in the form of unit and software integration tests directly in the development environment.
This type of dynamic testing uncovers errors in the control strategy and logic. The simulated embedded system is executed within a likewise simulated model of its environment. The advantage of testing this early is that errors can be detected and corrected quickly, while the model is still being built.
In Software-in-the-Loop (SiL) testing, code is tested on a PC. The code is either handwritten or generated from a model, and the test scope differs between the two types of code.
SiL is used for the unit testing and integration testing stages, and in some cases also for software testing. Hardware does not yet play a role at this stage.
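A rough sketch of the SiL idea (C, all names hypothetical): the production code, represented here by a single step function of the kind that code generators produce, is compiled for the development PC and driven by a test harness instead of real sensor signals.

```c
#include <stdio.h>
#include <stdint.h>

/* Production code under test (hypothetical), e.g. generated from a model:
 * one step of a wiper control function driven by a rain-intensity input. */
static uint8_t wiper_step(uint8_t rain_intensity)
{
    if (rain_intensity > 70u) return 2u;  /* fast */
    if (rain_intensity > 20u) return 1u;  /* slow */
    return 0u;                            /* off  */
}

int main(void)
{
    /* SiL harness: the step function runs on the PC with simulated inputs. */
    const uint8_t rain[]     = { 0u, 30u, 90u };
    const uint8_t expected[] = { 0u, 1u,  2u };

    for (unsigned i = 0u; i < sizeof rain / sizeof rain[0]; ++i) {
        const uint8_t actual = wiper_step(rain[i]);
        printf("rain %3u -> wiper level %u (%s)\n",
               (unsigned)rain[i], (unsigned)actual,
               actual == expected[i] ? "as expected" : "unexpected");
    }
    return 0;
}
```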
The code tested in SiL cannot be executed on an embedded ECU; for that, the code must be compiled for the target processor. The code produced in this way can be tested in two ways:
1. On an instruction set simulator of the target processor
2. On the real target processor, for example on an evaluation board
In both cases, one speaks of Processor-in-the-Loop (PiL) and actually means the test of software built for the target processor architecture. Strictly speaking, the test stage could therefore also be called Target Software-in-the-Loop.
The main goal of Processor-in-the-Loop tests is to detect compiler errors or, in the case of software components that are very close to the hardware, such as drivers or the control of actuators, to check the compatibility of hardware and software components at an early stage.
The next logical step is testing hardware: that is, the finished software on the physical ECU with peripherals. Now the focus is on how the inputs and outputs, communication buses and other interfaces interact in real time. The term for this is Hardware-in-the-Loop (HiL). HiL tests start with an ECU and can be implemented up to the system network level. There are HiL test benches that can test entire vehicles with correspondingly high costs for setup and operation. They are nevertheless well established, since carrying out manual vehicle tests is also expensive and much more time-consuming from an organizational point of view.
In vehicle testing, the ECUs, actuators and sensors are tested in their final target environment. The vehicles are usually tested under different environmental conditions, for example in cold, temperate and hot climates. Even today, these tests are mainly performed manually; in some cases, measurements are recorded automatically and later evaluated with the help of tools. This test stage takes place at every OEM. To perform vehicle tests, the vehicle and all of its components must be available. However, these tests scale poorly, because manual tests require trained drivers and physical vehicles.
The density of terms and information makes one thing clear: knowledge of backgrounds, processes and communication between projects can be the key to developing, testing and successfully implementing embedded systems both effectively and efficiently.
In automotive software testing, there are many approaches and methodologies for which, in our opinion, there is neither right nor wrong, but rather favorable and unfavorable constellations. Of course, this depends on various parameters, the organizations involved and ultimately on the people working together.
We are fans of early testing and recommend starting right at the unit testing level. Moving on to integration tests, different functions can then be tested to give development direct feedback. All of this results in fast, cooperative product development.