AI, A DECISION SUPPORT TOOL - INTERVIEW WITH MARC HAGE CHAHINE
In recent years, innovations in machine learning and artificial intelligence have contributed to the evolution of many industries. The testing world is one of them. But how can AI be part of test automation? How can it help business teams to better understand this field? What are its gains, risks and limitations? Discover the answers to these questions in our interview with Marc Hage Chahine.
Expert in methods and tools at Sogeti, Marc Hage Chahine has always worked in testing and has always wanted to share his experience. He is lucky enough to be able to do this in his daily work, but also through writing articles on the Tester's Tavern blog or in magazines like Programmez! Marc Hage Chahine also has the pleasure of participating in the organization of major testing events such as the STLS (in Sophia Antipolis) and the JFTL.
Test automation is a rapidly evolving market. According to the latest estimates from Research and Markets (2021)*, its revenues could reach $49 billion by 2026. What do you think the future holds for test automation?
Test automation has been touted as the near future for over twenty years. We keep talking about it as a replacement for manual testing. Yet we can see that it doesn't replace manual testing and that it won't, and also that implementing it is complex. If we look at the latest CFTL surveys, test automation has not progressed over the last two years, even though we are in agile contexts where automation is increasingly necessary.
Why isn't it moving forward?
For a long time, automation remained very technical, a matter of code and development. But for a number of years now, we have been seeing tools that require little or no code or scripting. They emerged with Keyword Driven Testing*, whose best-known representative is Robot Framework®. This is generation zero of testing for non-technical people. We have Gherkin®, which approaches automation on the same principle. Other tools are becoming more and more mature, accessible and functional. But even when they are accessible, they remain fairly technical tools, still based on code and scripting.
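To make the keyword-driven principle concrete, here is a minimal sketch in Python (an illustration of the idea, not how Robot Framework® is implemented; all keyword names and test data are hypothetical). Test cases are written as plain rows of keywords and arguments, and a small engine maps each keyword to code:

```python
# Minimal keyword-driven testing sketch (hypothetical, for illustration only).
# A test case is plain data: a list of (keyword, arguments) rows.
# The engine maps each keyword to a Python function, so non-technical
# testers only write the rows, never the code behind them.

KEYWORDS = {}

def keyword(name):
    """Register a function as an executable keyword."""
    def register(func):
        KEYWORDS[name] = func
        return func
    return register

@keyword("Open Page")
def open_page(url):
    print(f"Opening {url}")  # a real tool would drive a browser here

@keyword("Input Text")
def input_text(field, value):
    print(f"Typing '{value}' into {field}")

@keyword("Page Should Contain")
def page_should_contain(text):
    print(f"Asserting page contains '{text}'")

def run_test(name, steps):
    """Execute a test case described purely as keyword rows."""
    print(f"=== {name} ===")
    for kw, *args in steps:
        KEYWORDS[kw](*args)

# The test case itself: readable rows, no scripting required.
run_test("Valid Login", [
    ("Open Page", "https://example.com/login"),
    ("Input Text", "username", "demo_user"),
    ("Input Text", "password", "demo_pass"),
    ("Page Should Contain", "Welcome"),
])
```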
Since then, new tools have been coming out: Keysight® with Eggplant, Tosca® from Tricentis, or the latest version of UFT®, where they are trying to add AI, blocks and modules so that people can use them more easily. There is also a lot of image recognition, which lets you be code agnostic and have a single script to run your test on different phones (iOS, Android), tablets, PCs and even embedded systems. The next evolution will be AI. We are seeing the emergence of tools such as Gravity®, from Smartesting, which analyze logs to propose paths and automate them.
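To illustrate the image-recognition idea (a simplified sketch of the principle, not how Eggplant or UFT work internally; the file names are hypothetical), a script can locate a button by its picture instead of by platform-specific identifiers, for example with OpenCV template matching:

```python
# Sketch of image-based element location (illustrative; commercial tools
# use far more robust recognition). File names are hypothetical.
import cv2

def find_on_screen(screenshot_path, template_path, threshold=0.9):
    """Return the (x, y) of the best match for a UI element's image,
    or None if nothing on screen resembles it closely enough."""
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    return best_loc if best_score >= threshold else None

# The same "click the Submit button" step can run unchanged on iOS,
# Android or desktop: only the screenshot differs, not the script.
position = find_on_screen("current_screen.png", "submit_button.png")
if position:
    print(f"Click at {position}")
else:
    print("Element not found")
```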
Be careful: just because all these tools exist doesn't mean they will all be used. There are price barriers and the question of whether or not the investment is worth making. The teams in charge of product development also need the right policy, the right testing strategy and the ability to manage quality. Don't forget: to have good automated tests, you must first have good tests. The need for automation must be felt by the teams, and automation must not rest solely on the tester.
Speaking of AI, how do you see it fitting into test automation?
For me, AI is there to facilitate decisions and actions. It is nothing more than automation that goes beyond mere execution. It has to help make choices, but AI will not do everything. You need data, you need information, and therefore you need people behind it. With AI, you can beat Go or chess champions, but a chess AI is not going to beat a Go champion. It is very specific, and it is therefore on a specific case that it will be more efficient than us and allow us to progress. There are a lot of use cases for AI, especially prioritizing tests, which leads to selection. For example, we have 500 regression tests: which ones should we run first to find the most bugs? And in the end we can only run 400 or 450, so we have to determine which ones we won't run.
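As a hedged sketch of what such prioritization could look like (the scoring model, weights and fields are assumptions for illustration; real AI-based tools learn them from project history), here is a small Python example that ranks a regression suite and selects what fits in a time budget:

```python
# Illustrative test-prioritization sketch (hypothetical scoring model).
# We rank tests by recent failure rate and whether they cover changed
# code, then keep only what fits in the available execution time.
from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    failure_rate: float       # share of recent runs that failed (0..1)
    touches_changed_code: bool
    duration_minutes: float

def priority(test: RegressionTest) -> float:
    """Higher score = run earlier; crude proxy for 'likely to find a bug'."""
    score = test.failure_rate
    if test.touches_changed_code:
        score += 0.5
    return score

def select_tests(tests, time_budget_minutes):
    """Take the highest-priority tests that fit in the time budget;
    the remainder are the ones we consciously decide not to run."""
    selected, spent = [], 0.0
    for t in sorted(tests, key=priority, reverse=True):
        if spent + t.duration_minutes <= time_budget_minutes:
            selected.append(t)
            spent += t.duration_minutes
    return selected

suite = [
    RegressionTest("login", 0.10, True, 5),
    RegressionTest("checkout", 0.30, False, 8),
    RegressionTest("search", 0.02, False, 3),
]
for t in select_tests(suite, time_budget_minutes=10):
    print(t.name)
```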
What gains can AI bring to test automation?
AI can help on several points. It can help with analysis, especially of failed tests; this is one of the use cases put forward by the tool vendors: identifying which tests are failing and why, and even making them run again by correcting the scripts directly. AI will also help with maintenance and upkeep by allowing better management of the test repository. For example, if there are duplicate tests, the AI will tell us we should think about deleting them because they are not useful, or it will tell us to rework the data of certain tests, and so on. And AI will also help with construction: we say what we want to do and the scripting is done afterwards, which is what scriptless tools already do.
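The duplicate-test example can be approximated very simply: compare how much two tests' steps overlap and flag near-identical pairs. A minimal sketch, assuming a test is just a set of step descriptions and using Jaccard similarity with an arbitrary threshold (real tools would also compare data, logs and coverage):

```python
# Illustrative duplicate-test detection (simplified assumption: a test is
# just its set of steps; thresholds and test names are hypothetical).
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two step sets: 1.0 means identical steps."""
    return len(a & b) / len(a | b)

tests = {
    "login_ok":       {"open login page", "enter credentials", "submit", "check welcome"},
    "login_valid":    {"open login page", "enter credentials", "submit", "check welcome"},
    "login_bad_pass": {"open login page", "enter credentials", "submit", "check error"},
}

# Flag pairs of tests whose steps overlap enough to suspect duplication.
for (name_a, steps_a), (name_b, steps_b) in combinations(tests.items(), 2):
    similarity = jaccard(steps_a, steps_b)
    if similarity >= 0.9:
        print(f"Possible duplicates: {name_a} / {name_b} ({similarity:.0%})")
```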
The only real limit of AI will be what we imagine doing with it. The other limits will be the implementation costs, and also whether it is worth doing, because having imagined something doesn't make it genuinely interesting. For example, flying cars have been imagined for 50 years and could very well be built, but in the end they are useless: they bring no added value and consume too much energy. There has to be a need.
What do you think the risks of AI might be for test automation and the testing profession?
There are a lot of risks with AI. The first is a bias related to overconfidence: if the AI says so, the AI is right, so I do what the AI says. This is often related to the fact that we don't have the time. AI should be a decision-support tool, but AI should not make the decision. If we don't have the time to think, to look at the indicators and the data, we are no longer doing decision support; we have moved to decision making. And this is the real risk with AI, knowing that AI is not perfect. It lives on data, and data, even when updated, relevant and representative, can change from one day to the next. Let's take a very current example, the presidential election polls: Macron was at 20-25% in the polls; then there is new data, the war in Ukraine, and he rises to 30-35%; then there is new data, the consulting firms affair, and he falls back to 25%. The truth of one day is not the truth of the next, and AI is not able to anticipate this.
Personally, how confident are you in AI?
By default, I don't have confidence. I see potential, possibilities, contexts where it will work. For example, for Gravity®, two experience reports on APIs will be presented at the JFTL (French Software Testing Day). In that context, within that framework, it works well, so I will be confident about implementing a time-saving solution in a similar context. Otherwise, there is no confidence as long as there is no feedback: by default, it doesn't work.
But then, what would be the ideal context for this to work?
The ideal context is anything repetitive and boring: tasks that are not necessarily difficult, that are individually easy but not very interesting to do. Each step is easy, but the sequence becomes tiring and ends up difficult. All of this lends itself to automation, and that has been the case throughout the history of humanity: tools were made because it was easier to break a coconut with a sharp stone. And each time, more and more time was freed up.
What did you think of this interview? Take the opportunity to discover our other interviews with Bruno Legeard and Xavier Blanc.