I have a script which makes 8 requests to my web server for static content. When I ran the test two weeks ago I got the following result:
When I re-ran the exact same script on the exact same infrastructure yesterday, I got the following result twice:
In the last two attempts the test keeps running for more than 20 minutes (rather than the 20 minutes defined in the script), and Gatling's behavior is different compared to the first attempt.
I couldn't find anything different between the time I ran the first test and the last two.
However, would it be right, in your opinion, to at least cut the simulation.log file so that I only look at data from the first 20 minutes of the test?
I ask because I am doing a performance test on a system, and multiple runs are being done to ensure that any performance conclusions are valid, so that I can then say, for example, that in a 20-minute test the system was capable of serving 400,000 transactions (roughly 333 transactions per second).
Yes, I am using the standard write to simulation.log. So could that result in Gatling not sending any requests to the server?
I ask because my Azure VM, which has 14 GB of RAM and 8 cores, has more than 19 GB of disk taken up by the accumulated test results, and the total disk space is 30 GB.
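For future runs I am thinking of adding a quick pre-flight check so that a nearly full disk is caught before Gatling struggles to write simulation.log. A rough sketch (the path and the 2 GB threshold are just placeholders for my setup):

```scala
import java.io.File

// Pre-flight check: refuse to start a run when the results disk is nearly full.
// The path and the 2 GB threshold are placeholders for my setup.
object DiskSpaceCheck {
  def main(args: Array[String]): Unit = {
    val resultsDisk = new File("/") // disk holding the Gatling results folder
    val freeGb = resultsDisk.getUsableSpace / (1024.0 * 1024 * 1024)
    if (freeGb < 2.0) {
      Console.err.println(f"Only $freeGb%.1f GB free - clean old results before running")
      sys.exit(1)
    }
    println(f"Disk check OK: $freeGb%.1f GB free")
  }
}
```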
I have removed all the previous results and re-run the test; however, the same thing happened again. Is there a way to at least force the test to stop exactly after 20 minutes?
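From the Gatling docs it looks like maxDuration on setUp might be the hard cap I need. Something like this sketch is what I have in mind (the base URL, request, and injection profile are placeholders, and method names may differ slightly between Gatling versions):

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StaticContentSimulation extends Simulation {

  // Placeholder protocol and scenario; my real script fires 8 static requests.
  val httpProtocol = http.baseUrl("http://my-server")

  val scn = scenario("Static content")
    .exec(http("request_1").get("/index.html"))

  setUp(
    scn.inject(constantUsersPerSec(10).during(20.minutes)) // placeholder injection profile
  ).protocols(httpProtocol)
    .maxDuration(20.minutes) // hard cap: abort everything still running at 20 minutes
}
```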
Thanks. Given that I have already executed the tests, is there a way to regenerate the results showing only the data for the first 20 minutes of the test? I know I can use -ro, but is there a way to restrict the time window?
But which lines should I remove? The 20th minute is 142660075…, however, after that line there are still lines with an earlier timestamp. My idea was to find the first timestamp greater than 142660075000 and remove all the lines following it, but that is not the right way to do it, I think, right?
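To be concrete, instead of cutting at the first out-of-range line, I am thinking of filtering every record by its own timestamp, since simulation.log is apparently not strictly ordered by time. A rough sketch (the tab-separated layout and the timestamp column index are assumptions about my Gatling version's log format, so I would verify them against the real file first):

```scala
import scala.io.Source
import java.io.PrintWriter

// Keep only simulation.log records whose timestamp falls within the first
// 20 minutes; lines without a numeric timestamp (RUN header etc.) are kept.
// ASSUMPTION: fields are tab-separated and the timestamp sits in column 5 --
// this varies between Gatling versions, so check the actual file layout.
object TrimSimulationLog {
  def main(args: Array[String]): Unit = {
    val cutoffMillis = args(0).toLong // epoch-millis cutoff, i.e. run start + 20 minutes
    val tsColumn     = 5              // assumed timestamp column index

    val in  = Source.fromFile("simulation.log")
    val out = new PrintWriter("simulation-trimmed.log")
    try {
      for (line <- in.getLines()) {
        val keep = line.split("\t").lift(tsColumn)
          .flatMap(f => scala.util.Try(f.toLong).toOption)
          .forall(_ <= cutoffMillis) // true when no timestamp or within the window
        if (keep) out.println(line)
      }
    } finally {
      in.close()
      out.close()
    }
  }
}
```

Then I could replace simulation.log in the run folder with the trimmed file and point -ro at that folder to regenerate the report.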