ICST 2020
Sat 24 - Wed 28 October 2020 Porto, Portugal

Performance testing, involving performance test case generation and execution, remains a challenge, particularly for complex systems. Different application-, platform- and workload-based factors can influence the performance of the software under test. Common approaches for generating platform-based and workload-based test conditions often rely on analysis of the system model or source code, real-usage modelling, and use-case-based design techniques. However, those artifacts are not always available during testing, and creating a detailed performance model is often difficult. On the other hand, test automation solutions such as automated test case generation can reduce effort and cost while potentially improving coverage of the intended test criteria. Furthermore, if the testing system can learn the optimal way (policy) to generate the test cases, then the learnt policy can be reused in further testing situations, such as testing variants or evolved versions of the software, or when factors of the testing process change. This capability can lead to additional cost and computation time savings in the testing process. In this research, we have developed an autonomous performance testing framework using model-free reinforcement learning augmented by fuzzy logic and self-adaptive strategies. It learns the optimal policy for generating different platform-based and workload-based test conditions without access to the system model or source code. The use of fuzzy logic and a self-adaptive strategy helps to tackle uncertainty and improves the accuracy and adaptivity of the proposed learning. Our evaluation experiments showed that the proposed autonomous performance testing framework is able to generate test conditions efficiently and in a way adaptive to varying testing situations.
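
To make the idea of model-free policy learning for test condition generation concrete, the sketch below shows a minimal tabular Q-learning loop that selects platform-based stress actions until the system under test approaches a response-time requirement. This is an illustrative assumption of how such an agent could be structured, not the framework described in the paper (which additionally uses fuzzy logic and self-adaptive strategies); the stress actions, reward shaping, and the `apply_stress` and `measure_response_time` stubs are hypothetical.

```python
# Minimal sketch of model-free (Q-learning) generation of platform-based
# test conditions. All names, actions, and the reward function are
# illustrative assumptions, not the authors' implementation.

import random
from collections import defaultdict

ACTIONS = ["cpu_stress", "memory_stress", "disk_stress", "no_op"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration
TARGET_RESPONSE_TIME = 2.0                  # seconds; hypothetical requirement

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def apply_stress(action):
    """Hypothetical stub: inject the chosen platform-based test condition."""
    pass

def measure_response_time():
    """Hypothetical stub: run the workload and measure the SUT response time."""
    return random.uniform(0.5, 3.0)

def discretize(response_time):
    """Coarse state: how close the SUT is to violating its requirement."""
    return min(int(response_time / TARGET_RESPONSE_TIME * 4), 4)

def run_episode(max_steps=20):
    state = discretize(measure_response_time())
    for _ in range(max_steps):
        # Epsilon-greedy selection over stress actions.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(q_table[state], key=q_table[state].get)
        apply_stress(action)
        rt = measure_response_time()
        next_state = discretize(rt)
        # Reward grows as the response time approaches the target threshold.
        reward = rt / TARGET_RESPONSE_TIME
        # Standard Q-learning update.
        best_next = max(q_table[next_state].values())
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
        state = next_state
        if rt >= TARGET_RESPONSE_TIME:      # intended performance breach reached
            break

for episode in range(100):
    run_episode()
```

Once such a policy has converged, the learnt Q-values could in principle be reused when testing a variant or an evolved version of the software, which is the reuse scenario the abstract refers to.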