AI AND USER BEHAVIOR - INTERVIEW WITH XAVIER BLANC
In recent years, innovations in machine learning and artificial intelligence have contributed to the evolution of many industries. The testing world is one of them. But how can AI be part of test automation? How can it help business teams better understand this field? What are its gains, risks and limitations? Discover the answers to these questions in our interview with Xavier Blanc.
Xavier Blanc is a specialist in software quality. Director of the Laboratoire Bordelais de Recherche en Informatique (LaBRI), he co-founded ProMyze, a start-up that helps development teams define and share good software development practices and thus improve the quality of their work. He is also a Professor at the University of Bordeaux, specializing in software engineering.
AI is a topic at the heart of current trends, and it is starting to make its way into the testing world in particular. Personally, how do you see AI fitting into test automation?
In particular, AI can help automation by not leaving the automation engineer alone: how do we assist automation engineers in their work so they can deliver better-quality tests? After all, when you write an automated test, you repeat operations, and repetition is exactly where AI has shown its strength. When we start repeating ourselves, when data already exists and we do things we have already done, or that other people have done, that is where AI can help us add quality. We can take an application, ask the AI to analyze it, and have it help us find the most relevant tests.
There is a second point: diagnosis. We have our test base, we know how to automate it, it runs very well, and yet sometimes a diagnosis is still difficult to make. A test tells us something doesn't work, it's red, so there's a bug; but it isn't actually a bug, it's something contextual: maybe the network is misbehaving, or there is latency. And we lose time on the diagnosis. We have experience, but the problem is so complicated that intuition is not enough, and that is where AI can quickly bring big benefits.
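The kind of diagnosis described above can be approximated even with very simple learned rules. Here is a minimal sketch, with entirely hypothetical function names, markers and thresholds (it is an illustration of the idea, not a real ProMyze tool), assuming we know a failure's error message, whether the test passed on retry, and its recent failure rate:

```python
# Toy classifier: is a red test a real bug or a contextual (flaky) failure?
# All names, markers and thresholds below are hypothetical illustrations.

def diagnose_failure(error_message: str, passed_on_retry: bool,
                     recent_failure_rate: float) -> str:
    """Return 'contextual' for likely environment issues, 'bug' otherwise."""
    contextual_markers = ("timeout", "connection refused", "latency", "dns")
    looks_environmental = any(m in error_message.lower()
                              for m in contextual_markers)
    # A failure that disappears on retry, or that matches known network
    # symptoms while the test is otherwise stable, is probably not a bug.
    if passed_on_retry or (looks_environmental and recent_failure_rate < 0.1):
        return "contextual"
    return "bug"

print(diagnose_failure("Connection refused by host", False, 0.02))  # contextual
print(diagnose_failure("AssertionError: expected 3, got 4", False, 0.4))  # bug
```

In practice, the point of involving AI would be to learn such markers and thresholds from historical runs rather than hard-coding them as above.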
What are the challenges that AI can pose for test automation?
AI is going to have a hard time being very contextual. The applications we test are similar, but each has its specificities, and these specificities mean that choices that apply to some applications do not apply to others. Building an AI that answers every problem on Earth is not even an ambition. What we will try to build instead is an AI capable of adapting to its context. It will therefore have to learn very quickly, guess the specificities of that context, and be very relevant. And that is where we will face enormous difficulties in the months and years to come: finding this context and knowing how to make decisions that may be radically different from one application to another, rather than settling for a dominant average. That average is where AI is strong at the moment: it has a lot of points and it knows how to find where the curve passes between all these points, but it will have to learn to discard the points that make no sense in the context of use of this or that application.
What would be the ideal context for AI to fit into test automation, in this case?
I would like to see AI embedded in applications. We often have masses of data and we spend time analyzing it: we run the algorithms and end up with an AI, but it lacks a bit of reactivity. Having the AI directly in the application would let it react very quickly to user behavior. User behavior changes overnight; it is almost impossible to predict, and we are always a little behind, because first we collect the data, then we clean it, and only then do we run the algorithms.
I think we are going to have AIs embedded in applications that will minimize the time of this feedback loop. The AI will be able to give recommendations: users are doing this today, you should look at these kinds of tests, and so on. We are already seeing this in security: tools monitor all the time, which allows them to detect new attacks. They also use collaborative AI: they have multiple sites and try to pool knowledge. We could imagine the same kind of thing between different applications, noticing that users change, adopt this or that behavior, or, on the contrary, that there are anomalies. And if there is an AI embedded in the application, we should get very quick feedback to the developers, to anticipate bug fixes and be much more reactive.
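The "very quick feedback" idea can be illustrated with an intentionally naive anomaly check (my own toy example, not a product described in the interview): flag a usage metric as anomalous when today's value deviates strongly from the recent mean.

```python
# Toy in-application monitor: flag a day as anomalous when a usage metric
# deviates by more than `threshold` standard deviations from recent history.
# The metric name and threshold are illustrative assumptions.

def is_anomaly(history, today, threshold=3.0):
    mean = sum(history) / len(history)
    var = sum((v - mean) ** 2 for v in history) / len(history)
    std = var ** 0.5 or 1.0  # guard against a zero spread
    return abs(today - mean) > threshold * std

daily_checkouts = [100, 104, 98, 101, 99, 103]
print(is_anomaly(daily_checkouts, 102))  # False: within normal variation
print(is_anomaly(daily_checkouts, 160))  # True: behavior changed overnight
```

An embedded AI would replace this fixed rule with a model that keeps re-learning what "normal" looks like, which is exactly what shortens the feedback loop.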
In your opinion, how could AI help business teams, who do not have a technical profile, in test automation?
If we have an AI in the application, it should be able to speak two languages: the language of the business people and the language of everyone working on the development of the application. With enough intelligence in the AI, we could imagine that it understands the users better and goes to the business people to tell them exactly what the users are doing. For an e-commerce site, this would make it possible to know what they buy. The business people could then start reacting as well, proposing this or that based on the AI's data. And the AI should also tell the developers where there is a problem, for example on a given screen.
It would also allow business people to run business tests, such as A/B testing. Right now, we put metrics in place and look at them afterwards, but we could also know in advance whether a change is worth it. If I do this, what would the AI predict? Will it be interesting or not? We could imagine the same thing on the technical side: if I do this, will I introduce new bugs or not?
What risks can AI pose to the testing world?
To use the metaphor of the points: AI is very good at finding the curve that goes through all these points, but if you ask it what the next point is, it will give you the next point on the curve it has drawn. The problem is that if you don't feed it the right points, or if you overfit, the AI will lead you completely astray.
And a second problem that can happen is that there may be interesting points that are not on the curve. We will also have to accept that there are times when we may have to pull the plug on the AI and take risks. The winners will be those who manage to take risks against the AI's predictions.
So there are these two pitfalls: an AI that is not very smart and sends us in the wrong direction, and an AI that is smart but not bold, that will never propose a point outside the expected direction even though that point could turn out to be a huge success. AI computes averages, so the risk is that we get stuck in a routine.
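The curve metaphor can be made concrete with a small curve-fitting toy (my own illustration, not something from the interview, and the data is invented): a polynomial flexible enough to pass through every training point exactly can extrapolate wildly, while a simple straight-line fit stays close to the underlying trend.

```python
# Toy illustration of the metaphor: exact interpolation vs. a simple line.
# Data is synthetic: y = 2x on [0, 1], with one slightly noisy measurement.

def lagrange_predict(xs, ys, x_new):
    """Evaluate the polynomial that passes exactly through every point."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x_new - xj) / (xi - xj)
        total += term
    return total

def line_predict(xs, ys, x_new):
    """Least-squares straight line through the points, evaluated at x_new."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my + slope * (x_new - mx)

xs = [i / 7 for i in range(8)]   # 8 training points in [0, 1]
ys = [2 * x for x in xs]
ys[3] += 0.1                     # one slightly noisy measurement

# Extrapolate to x = 1.5, outside the training range (true value: 3.0).
print(abs(line_predict(xs, ys, 1.5) - 3.0) < 0.2)       # True: stays near 3.0
print(abs(lagrange_predict(xs, ys, 1.5) - 3.0) > 10.0)  # True: goes wild
```

One tiny perturbation is enough: the "perfect" curve amplifies it enormously outside the data, which is the first pitfall; and by construction neither model will ever propose a point off its own curve, which is the second.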
On that note, how much confidence do you have in AI?
I have absolute confidence in AI, if we consider that the AI is the algorithm that finds the curve passing through all the points. That is not a very honest answer, but I don't want to blame the AI because we gave it bad points. AI learns, so if I feed it unintelligent things, it will learn nonsense. The difficulty is giving it intelligent things to learn from. We also have algorithms that are beginning to understand in which order information should be presented so that the AI can learn quickly, but asking the AI itself to sort out which information to use and which to reject is going too far: that sorting is not the AI algorithm's job, it happens upstream, and that is where we will have to be a little more demanding about how we feed in data. So yes, I'm confident, and I think we will achieve even higher confidence rates if we are able to give it the right points. Do I have confidence in the points we give it? Much less, but for me that is no longer the AI's responsibility.
Can you tell us about the work you are currently doing on testing and AI?
We are working a lot on what users do in web applications, and we are starting to have quite a few probes that let us measure what they do. From that, an AI should be able to tell us who our users are. Many known technologies can perform this kind of classification: there is the mainstream user, the very rare user, and so on. The idea is that we should then be able to provide automated test sets in an automatic way, saying: if you want to test what the average user does, you just have to do this; if you want to test the trendy users, this is what you should do. The difficult part of the work is to take all the actions human beings perform on applications and see whether we can classify them, categorize those points, and ask the AI to provide different curves. Once we have these behavioral models, we can ask ourselves the question: won't these models help us test the application better? This is in line with everything I said: currently, these behavior models are built a posteriori, and we would like to have them in real time, to know whether they change from day to day, by integrating the AI directly into the application.
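The pipeline described above can be sketched in miniature (invented session data and a deliberately crude grouping, standing in for the real classification an AI would perform): group recorded sessions by their action sequence, then derive a priority test for the mainstream behavior and a list of rare behaviors for edge-case tests.

```python
# Toy version of "who are our users?": group sessions by action sequence,
# then derive test targets. The session data below is entirely invented.
from collections import Counter

sessions = [
    ("home", "search", "product", "checkout"),
    ("home", "search", "product", "checkout"),
    ("home", "search", "product", "checkout"),
    ("home", "account", "settings"),
    ("home", "search", "product", "checkout"),
    ("home", "wishlist", "product"),
]

counts = Counter(sessions)
mainstream, freq = counts.most_common(1)[0]        # the "average user" path
rare = [path for path, n in counts.items() if n == 1]  # edge-case paths

# The mainstream path becomes the priority automated test; rare paths can
# feed exploratory or edge-case test sets.
print(mainstream)  # ('home', 'search', 'product', 'checkout')
print(len(rare))   # 2
```

A real system would replace the exact-match grouping with clustering over richer session features, and, per the interview's point, recompute these groups continuously inside the application rather than a posteriori.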
What did you think of this interview? Take the opportunity to discover our other interviews with Marc Hage Chahine and Bruno Legeard!