THE CONTRIBUTION OF AI TO SOFTWARE TESTING - INTERVIEW WITH BRUNO LEGEARD
In recent years, innovations in machine learning and artificial intelligence have contributed to the evolution of many industries. The testing world is one of them. But how can AI be part of test automation? How can it help business teams to better understand this field? What are its gains, risks and limitations? Discover the answer to these questions in our interview with Bruno Legeard.
Bruno Legeard is an expert in test automation, who has been working for several years on the use of AI to facilitate software testing. Professor of Software Engineering at the University of Franche-Comté, he supervises several PhD theses on this topic. He is also one of the co-founders of Smartesting, which develops AI-based test automation tools, and he contributed to the new AI Testing certification within the ISTQB.
You gave a talk on using execution traces to optimize automated regression testing with AI. Can you tell us a little more about that?
Ever since we became aware of the need to automate tests, the question has been: how do we do it? Current practice relies on hand-coded automated scripts, and that approach is difficult today because it induces a very significant maintenance effort. This is where AI and new technologies come into play: to reduce this maintenance effort on the one hand, and to ensure that automated tests cover the key paths used in production on the other. This is the subject we addressed, and for which we proposed an AI-based solution: analyzing software usage traces to complete the coverage of automated tests. Testing the paths that matter, and knowing how to do so based on an analysis of what users are really doing.
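To make the idea concrete, here is a minimal Python sketch of that kind of trace analysis, assuming usage traces are available as session logs of user actions and that the paths already exercised by the suite are known. The inputs `sessions` and `covered` are illustrative stand-ins, not Smartesting's actual format.

```python
from collections import Counter

# Minimal sketch: rank real user paths from production traces and flag
# the frequent ones that no automated test currently covers.

def path_of(session):
    """Reduce a session (list of events) to a tuple of action names."""
    return tuple(event["action"] for event in session)

def uncovered_key_paths(sessions, covered_paths, top_n=10):
    """Return the most frequent production paths missing from the test suite."""
    frequencies = Counter(path_of(s) for s in sessions)
    return [
        (path, count)
        for path, count in frequencies.most_common()
        if path not in covered_paths
    ][:top_n]

# Example: two recorded sessions, one different path already automated.
sessions = [
    [{"action": "login"}, {"action": "search"}, {"action": "checkout"}],
    [{"action": "login"}, {"action": "search"}, {"action": "checkout"}],
]
covered = {("login", "browse", "checkout")}
print(uncovered_key_paths(sessions, covered))
# -> [(('login', 'search', 'checkout'), 2)]
```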
Apart from this maintenance effort, what other gains can AI bring to test automation?
There is a set of AI-based techniques starting to emerge around several topics in test automation. For example, take a classic automated test, such as a web end-to-end test: because it simulates the user, it needs to be able to activate objects in the interface. And broken tests often come from a change in these graphical objects. Machine learning can find the right object and automatically modify the script to fix the locator, i.e. the identification of the graphical object that must be activated during the test. But that's just one element.
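As an illustration of how such locator repair can work, here is a minimal sketch: when a recorded locator breaks, it scores the candidate elements on the new page against the attributes the element had when the test last passed and picks the closest match. The attributes, weights and threshold are illustrative assumptions; real tools learn them from data.

```python
# Fixed weights for illustration only; self-healing tools learn these.
WEIGHTS = {"id": 3.0, "name": 2.0, "text": 2.0, "tag": 1.0, "class": 1.0}

def similarity(expected, candidate):
    """Weighted count of attributes that still match."""
    return sum(
        weight
        for attr, weight in WEIGHTS.items()
        if expected.get(attr) and expected.get(attr) == candidate.get(attr)
    )

def heal_locator(expected, candidates, threshold=3.0):
    """Return the best-matching element, or None if nothing is close enough."""
    best = max(candidates, key=lambda c: similarity(expected, c), default=None)
    if best is not None and similarity(expected, best) >= threshold:
        return best
    return None

# The submit button's id changed, but its text and tag did not.
known = {"id": "submit-btn", "tag": "button", "text": "Pay now"}
page = [
    {"id": "pay-button", "tag": "button", "text": "Pay now"},
    {"id": "cancel", "tag": "button", "text": "Cancel"},
]
print(heal_locator(known, page))  # -> the "Pay now" button
```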
The second topic is the relevance of automated tests, i.e., making sure that the key user paths are covered by the automated tests. And there's a third important topic: prioritizing tests at runtime. When you have end-to-end tests that verify user journeys work, execution can take time. One solution to this problem is to prioritize the execution of automated tests based on learning from previous results, i.e. a prediction of which tests will detect an anomaly.
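A minimal sketch of the prioritization step itself, assuming each test already carries a predicted failure probability (how such probabilities can be learned is sketched after the next answer); the test names and scores are made up for illustration:

```python
# Order an end-to-end suite so the tests most likely to reveal an anomaly
# run first. The probabilities would come from a model trained on past runs.

def prioritize(tests, predicted_failure_probability):
    """Sort test ids by descending predicted probability of failure."""
    return sorted(tests, key=lambda t: predicted_failure_probability[t], reverse=True)

probs = {"checkout_flow": 0.82, "login_flow": 0.05, "search_flow": 0.31}
print(prioritize(list(probs), probs))
# -> ['checkout_flow', 'search_flow', 'login_flow']
```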
So the idea is to run first the scenarios that are most likely to fail, in order to get quick feedback on, for example, a bug fix?
Absolutely. The magic of AI and machine learning is that the rules are not defined by humans: the model is learned by the algorithms themselves, and that is what allows us to predict with good reliability. Let's imagine we have 300 tests to run, and 17 of them will fail and reveal anomalies. With the prediction, these 17 tests will be among the first 20 to be executed. If we could run those 300 tests in 3 minutes, we wouldn't need this, but reality is often different: the 300 tests run in 4 to 5 hours, so it's in our interest to have reliable prioritization. In our work, the AI is a supervised learning technique: we learn from the history, the context in which the test is executed, its duration, the changes that have been made, and so on. AI should be seen as a daily facilitator for testers.
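As a rough illustration of that supervised-learning step, here is a minimal sketch using scikit-learn. The features (recent failure rate, duration, number of changed files the test touches) echo the signals mentioned in the answer, but the exact feature set, labels and model choice are illustrative assumptions, not the actual pipeline described in the interview.

```python
from sklearn.ensemble import RandomForestClassifier

# One row per test per past run:
# [recent_failure_rate, duration_minutes, changed_files_touched]
X = [
    [0.40, 12.0, 5],
    [0.00,  1.5, 0],
    [0.10,  3.0, 1],
    [0.55,  9.0, 4],
    [0.02,  2.0, 0],
    [0.30,  7.5, 3],
]
y = [1, 0, 0, 1, 0, 1]  # 1 = the test failed in that run

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability that each test fails in the upcoming run; these scores
# feed the prioritization shown earlier.
upcoming = [[0.45, 10.0, 4], [0.01, 2.5, 0]]
print(model.predict_proba(upcoming)[:, 1])
```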
Conversely, what do you see as the limits of AI in test automation?
In a few years, it seems quite clear to me that we will be able to develop intelligent and autonomous testing systems. Things are starting to appear in the laboratory around self-adaptive tests. The number one subject is regression testing: tests that are difficult to create and to maintain. I would bet that in 5 to 10 years we will have this type of robot, with capabilities that, in certain environments, automatically replace the creation of tests by testers. This somewhat repetitive part of the work, even though it requires the right profiles and good business knowledge, will be partly replaced. But testers have a very strong added value on all the quality engineering activities, and the implementation of intelligent and autonomous test robots will come as a help and a reinforcement of their activities.
AI is quite a complex topic. What challenges do you think it can pose in test automation?
When you implement learning techniques on data sources, if those data sources are not reliable, then the AI results will not be reliable either. If you're trying to predict defect rates or the risks associated with different software components based on the development history, then that historical data has to be archived, maintained and reliable. This is a real problem: it is not enough to say "I just have to plug in this AI engine to get this result", for example a prediction of a component's error rate. If you don't have the right data up front, the prediction will never work. Testers have a role to play in ensuring that the data used by the AI is of good quality.
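As a small illustration of that point, here is a minimal sketch of a data-quality gate one might put in front of such a prediction: it refuses to train a defect-prediction model on a development history that is too sparse or incomplete. The required fields and thresholds are illustrative assumptions, as are the commented-out helper names.

```python
# Basic reliability checks on the development history before training.
REQUIRED_FIELDS = {"component", "commit_date", "defect_count"}

def history_is_reliable(records, min_records=200, max_missing_ratio=0.05):
    """Require enough history, with few incomplete records."""
    if len(records) < min_records:
        return False
    incomplete = sum(
        1 for r in records
        if not REQUIRED_FIELDS.issubset(r) or any(r[f] is None for f in REQUIRED_FIELDS)
    )
    return incomplete / len(records) <= max_missing_ratio

# Train the error-rate predictor only when the history passes the gate.
# history = load_history()             # hypothetical loader
# if history_is_reliable(history):
#     model.fit(to_features(history))  # hypothetical feature extraction
```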
*End-to-end testing: Testing of a complete integrated system to verify that all integrated components work in the final environment on targeted user paths.
*Script: Description of the different steps to test one or several functionalities on a web or mobile application.
What did you think of this interview? Take the opportunity to discover our other interviews with Marc Hage Chahine and Xavier Blanc!