setUp(scn.inject(constantUsersPerSec(10) during (2 minutes))).protocols(httpConf)
And my scenario takes about 1 second to complete in the ‘real world’.
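For context, the full simulation looks roughly like the sketch below (the base URL and endpoint are hypothetical placeholders; `scn` and `httpConf` are the names from the snippet above):

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ConstantRateSimulation extends Simulation {

  // Hypothetical protocol configuration
  val httpConf = http.baseUrl("http://example.com")

  // A scenario with a single call that completes in roughly one second
  val scn = scenario("Single call")
    .exec(http("request").get("/endpoint")) // hypothetical endpoint

  // Open workload model: start 10 new users every second for 2 minutes
  setUp(
    scn.inject(constantUsersPerSec(10) during (2.minutes))
  ).protocols(httpConf)
}
```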
Looking at the output in my console the ‘active’ count is always zero.
At first I would expect this number to be 10, but (I think) I understand that it is zero because my scenario finishes too fast to be able to ‘hold’ 10 concurrent users.
Yes. Every tenth of a second, a new user gets fired up. It fires off a request and, a moment later, it ends. The short moments when the console stats are sampled just happen never to coincide with an outstanding request.
On the other hand, I see in the nice Gatling report that it actually works: my scenario does show a constant rate of 10 in ‘Active users along the simulation’.
Sort of. There were 10 users active during that second, but likely never all of them active at the same time.
Is this the correct assumption?
And another question: given a scenario like this
setUp(scn.inject(rampUsersPerSec(1) to (50) during (5 minutes))).protocols(httpConf)
I do get a graph with ‘#Active users along the simulation’ looking like an evenly increasing line going from 0 to 50 over 5 minutes.
My question is: in which graph would I spot an expected increase in response time?
First, look at responses per second. Until that graph flattens, you won’t see an increase in response times. If your requests are finishing in 100–250 milliseconds, then with 50 users you may not be taxing the test system yet.
Ramp up higher, and monitor load (CPU usage and I/O especially) on the test system as the load increases. Once CPU starts maxing out, then you can expect response times to change.
In ‘Response Times Percentiles over Time’ I see spikes of higher response times. Is this because my scenario includes just one single call, so each time that call happens it shows up as an isolated point in that graph?
If you have a single short and fast request, and then a pause, then yes, you will have just spikes. Try turning off the pauses and see how it behaves.
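To illustrate, if the scenario currently looks something like the first sketch below, removing the `pause` step is what produces a continuous load (the endpoint and names are hypothetical):

```scala
// With a pause: each user is idle most of the time,
// so response times appear only as isolated spikes
val withPause = scenario("With pause")
  .exec(http("request").get("/endpoint")) // hypothetical endpoint
  .pause(1) // one second of think time; remove to get continuous load

// Without the pause: each user issues requests back to back
val withoutPause = scenario("No pause")
  .exec(http("request").get("/endpoint"))
```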
I would expect the response time to be a graph that increases (and correlates) with the #users being added to the simulation (ramping up).
Only without the pauses. With pauses, you have to have a LOT more users.
My hypothesis is that if I add even more users to my ramp-up scenario, and probably tweak the duration down, I would see more of a ‘trend’ of higher response times instead of the ‘spikes’ I get now.
Just make the scenario loop without pauses. That way each virtual user produces a steady stream of transactions. Then you will start to see the graph with trends happening quite easily.
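One way to sketch that loop, assuming Gatling’s `during` construct and a hypothetical endpoint:

```scala
import scala.concurrent.duration._

// Each user loops for 5 minutes with no pauses,
// producing a continuous stream of requests
val loopingScn = scenario("Steady stream")
  .during(5.minutes) {
    exec(http("request").get("/endpoint")) // hypothetical endpoint
  }

// Closed workload model: 50 concurrent users for the whole run,
// instead of a per-second arrival rate
setUp(
  loopingScn.inject(atOnceUsers(50))
).protocols(httpConf)
```

Note the switch from an open model (`rampUsersPerSec`) to a closed one (`atOnceUsers` plus a loop): here the number of concurrent users is fixed, and throughput becomes a function of response time.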