I’m curious if there might be a way to capture the error rate during execution and adjust the load accordingly.
For example: we ramp up from 0 to 50 users with a 0% error rate, then continue ramping toward 100 users. At 75 users we begin seeing 10% of requests error out (the upper threshold), so the tool ramps down until the lower error threshold, say 1%, is reached, which happens at 65 users.
Effectively, this would dynamically adjust the load to keep the error rate at or below 1%, helping identify the "sweet spot" for the application's load.
I know it’s a bit challenging because it involves tracking the error rate over some time span (you would want to sample the responses from the last minute, for example, rather than the entire execution).
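This isn’t a Gatling feature, but the feedback loop described above can be sketched in plain Python (all names here are hypothetical, not part of any Gatling API): a sliding window tracks the error rate over the last minute of responses, and a simple threshold rule ramps the user count up or down.

```python
import time
from collections import deque

class SlidingErrorRate:
    """Track the error rate over the last `window` seconds of responses,
    rather than over the entire execution."""
    def __init__(self, window=60.0):
        self.window = window
        self.samples = deque()  # (timestamp, is_error) pairs

    def record(self, is_error, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, is_error))
        self._evict(now)

    def rate(self, now=None):
        now = time.monotonic() if now is None else now
        self._evict(now)
        if not self.samples:
            return 0.0
        errors = sum(1 for _, is_error in self.samples if is_error)
        return errors / len(self.samples)

    def _evict(self, now):
        # Drop samples that have aged out of the window.
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()

def adjust_users(current_users, error_rate, upper=0.10, lower=0.01, step=5):
    """One controller step: ramp down past the upper error threshold,
    ramp up below the lower one, hold steady inside the band."""
    if error_rate >= upper:
        return max(0, current_users - step)
    if error_rate <= lower:
        return current_users + step
    return current_users
```

The step size and thresholds would need tuning per application; a fixed step is the crudest possible controller, but it captures the ramp-down-at-10%, ramp-up-below-1% behavior described above.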
Currently not possible.
And load is not just about the number of virtual users (and which number? arrival rate? number of concurrent users?) but also about how they use think time, what kind of requests they execute, which load model you use (open vs. closed), etc.
Plus, the error rate could be related to what happened before. Decreasing the load doesn’t guarantee that your system self-heals; it may not recover at all.
I’m not sure a tool can effectively replace a human tester for this.
All true. That being said, sometimes it would be nice to dynamically adjust the load.
For example: how many concurrent requests can we have before we consistently violate our SLA? To answer those kinds of questions, I use RampUsers over a long duration and then inspect the results. But on occasion I have wanted to ramp up slowly until we are averaging around 500ms response times, and then tweak the injection profile to sustain that load. It’s doable manually, but it is not fun.
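The manual tuning loop could be sketched like this (a hypothetical controller step, not anything Gatling provides): keep growing the load until the mean response time enters a band around the target, back off if it overshoots, and otherwise hold.

```python
def next_user_count(users, avg_response_ms, target_ms=500.0,
                    tolerance_ms=50.0, step=10):
    """One step of a crude ramp-and-hold controller: grow the load until
    mean latency reaches the target band, then sustain that load."""
    if avg_response_ms < target_ms - tolerance_ms:
        return users + step          # still under target: keep ramping
    if avg_response_ms > target_ms + tolerance_ms:
        return max(1, users - step)  # overshot: back off
    return users                     # within the band: sustain this load
```

Called once per sampling interval against the last minute's average, this would converge on roughly the load that sustains ~500ms responses, which is exactly the "sweet spot" hunt described above.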
Sometimes I want to use Gatling for automated tasks, not just load testing. For example: add these 5 million records to the performance-lab database, as fast as possible. Adding a record means making a RESTful call. To be “as fast as possible”, I had to split my feeder into small chunks and wrap Gatling in a shell script that looped itself using “exec $0”; that way I could tweak the number of concurrent users, save the script, and have the change take effect with the next chunk. Tuning the process that way was tedious. Doable, but it would have been cool to build an auto-throttling feature.
There are other uses for real-time access to the execution statistics, I’m sure. But I admit, it is low priority.
Thanks guys. I appreciate the thoughtful response!