Currently I'm testing a web service using the open load model.
Every time I run a test, the active user count suddenly jumps to e.g. 1000/2000 active users. I know that in an open model new users keep arriving no matter how many users are already there, but this seems like an extreme reaction. Why does it suddenly increase by e.g. 1500 active users? I think the IOException: Premature close / request timeout errors are caused by this behaviour, so there may be a (small) delay/bottleneck within the SUT. However, it now looks like Gatling is starting too many active users in (almost) one shot, so it is not clear to me whether the SUT is the root cause or whether there is an issue with Gatling.
Check attached screenshots for more details.
Remark: I also tried the latest Gatling version; no difference at all.
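For what it's worth, the spike size itself can follow from Little's law: in an open model the number of concurrent users is roughly arrival rate × response time, so if the SUT stalls for a few seconds the active-user count balloons even though the injector never speeds up. A minimal sketch (plain Python, illustrative numbers only, not your actual injection profile):

```python
# Little's law: concurrent_users ~= arrival_rate * response_time.
# In an open model the injector keeps starting new users at the
# configured rate, so a stalled SUT inflates the active-user count
# on its own, with no change on the load-generator side.

def active_users(arrival_rate_per_s: float, response_time_s: float) -> float:
    """Approximate steady-state concurrent users (Little's law)."""
    return arrival_rate_per_s * response_time_s

rate = 100.0  # users/s, constant -- hypothetical arrival rate

healthy = active_users(rate, 0.2)   # 200 ms responses
stalled = active_users(rate, 15.0)  # 15 s stall (e.g. GC pause or queueing)

print(healthy)  # 20.0
print(stalled)  # 1500.0
```

With these illustrative numbers, a response-time jump from 200 ms to 15 s takes the concurrent-user count from 20 to 1500 without the injector starting users any faster, which would make a sudden jump of ~1500 active users consistent with a short stall in the SUT rather than with the load generator misbehaving.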
I saw something like this on Friday. In my case, my hypothesis was that it had to do with dynamic server startup, with requests queuing up while waiting for the newly started server to be ready to serve them. Notice the linear saw-tooth of response times? The first requests wait the longest, and as more requests come in they wait less and less time, suggesting that when the spike ends it's because all requests suddenly complete at once.
In your case, it could be elastic resources, or more likely garbage collection. The regular saw-tooth wave with periodic large spikes suggests that the small spikes are minor garbage collections and the big spikes are major garbage collections.
When running load tests, it’s always a good idea to have instrumentation running on the application under test. What’s going on during the spikes? Is it doing a garbage collection? Or is there something else going on?
Here’s a capture from FrontLine with your injection profile.
Gatling works fine and is merely the messenger.
The spikes you see are a result of your system under load forcefully closing connections (hence the premature close).