Case Study – Reshaping End-of-Line Testing and HMI Verification With a Custom Automated Testing System


The Problem

The client is one of the world's most prominent manufacturers of household equipment. Their premium home appliances offer many features controlled via different types of display interfaces. To ensure that every appliance works correctly, the client needed a way to thoroughly test how the HMI (human-machine interface) responds to finger-operated commands.


A large number of features and settings requires a complex navigation menu with many screens, allowing for hundreds of possible navigation combinations. If testing were done manually, the process would take a significant amount of time, slowing down production.

With an average of 200 different screens and multiple submenus per appliance, it is virtually impossible to detect all bugs manually. Moreover, regular software updates and changes in display design require additional testing within a very short timeframe, often just a couple of weeks.

The Solution


NOVELIC brought its testing, embedded software/hardware, and mechanical engineering teams together to develop an optimal solution for the testing environment and all its components.

Testing setup

We built a custom testing system consisting of a high-resolution, industrial-grade color camera, a robotic arm with 4 or 6 DOF (degrees of freedom), and custom software for both the camera and the robotic arm.

The camera takes real-time images of the interface that is being tested. These images are processed using custom-made software and classified as one of around 80 different screen types.

The robotic arm can imitate a range of human finger movements.

This setup allows for the testing of interfaces that are button-only, screen-only, or a combination of a screen with buttons.


Our engineers developed custom software based on Python and OpenCV to preprocess and process images from the camera and to calibrate and control the robot. The robot-control software can run independently or together with the camera.

Image software features (see figure).

Robot software features (see figure).

How it works

Each test case contains a set of instructions for the robot (e.g. navigate to a screen, check a carousel, click, confirm). Intelligent navigation software uses these instructions to steer the robotic arm. The camera sends real-time screen images to the image software, which recognizes and classifies the screen type and assists the navigation software in executing the instructions. After each step, the extracted information is compared with the expected result, and a Pass/Fail verdict is returned. If a step cannot be executed properly (e.g. due to a bug), the software instructs the robotic arm to proceed to the next step and continue testing.
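The step-by-step execution with its continue-past-failure behavior can be sketched as follows; all function and step names here are hypothetical:

```python
def run_test_case(steps, execute_step, expected):
    """Run one test case step by step, continuing past failures.

    steps: list of instructions (e.g. "navigate", "click", "confirm")
    execute_step: callable that performs one step and returns the
        observed screen state
    expected: list of expected states, one per step
    Returns a list of (step, "PASS"/"FAIL") results.
    """
    results = []
    for step, want in zip(steps, expected):
        try:
            got = execute_step(step)
            results.append((step, "PASS" if got == want else "FAIL"))
        except Exception:
            # A step that cannot be executed (e.g. due to a bug) is
            # marked FAIL, and testing continues with the next step.
            results.append((step, "FAIL"))
    return results
```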

Because every step of the process is recorded, timestamped, and stored in the cloud, all the steps that led to a bug can be traced back, the test reproduced if needed, and the bug reported to the client. Since the system operates unsupervised once activated, one person can run multiple systems at the same time, significantly reducing testing time.
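A minimal sketch of such a timestamped step log is shown below; the record fields are assumptions, and the cloud upload itself is omitted:

```python
import json
import time

def log_step(records, step, result):
    """Append a timestamped record of one executed test step."""
    records.append({
        "timestamp": time.time(),
        "step": step,
        "result": result,
    })

def export_log(records):
    """Serialize the run so a failing sequence can be replayed later."""
    return json.dumps(records)
```

Replaying the logged step sequence against the same appliance is what makes a reported bug reproducible.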



7x faster FIT testing


8x faster smoke testing


Improved bug detection