Hi,
I think it is worth taking time over this.
If you read the following paper:
http://users.cms.caltech.edu/~adamw/papers/openvsclosed.pdf
and other books on the subject, you'll find they typically describe two or three types of workload model: open, partly open and closed workloads.
I don’t have any numbers, but my guess is that most systems under test with tools like these have open or partly open workloads. Of the 10 sites examined in the paper above, only 2 were definitely closed (clearly not a large enough sample!! but it would appear to be a reasonable indication).
It turns out that a lot of popular load tools like JMeter, The Grinder and LoadRunner, without the standard tweaks, model closed systems: there is a virtual user (implemented as a thread) that loops around, starting a new session each iteration.
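As a rough sketch in Scala (not any particular tool's actual code), the closed model boils down to something like this, where the next session cannot start until the previous one has finished:

    // Minimal sketch of one closed-model virtual user: sessions run
    // back to back, so if the system under test slows down, the rate
    // of new session starts drops with it.
    def closedVirtualUser(runSession: () => Unit, iterations: Int): Unit =
      for (_ <- 1 to iterations) {
        runSession() // requests and think times happen inside the session
      }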
Others like httperf model open workloads.
And Gatling is capable of modelling all three.
To convert a tool that can only model a closed workload into one that can model open or partly open workloads, most of these tools add a feature called pacing, which enforces a constant session start (arrival) rate for the looping users.
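In Scala-flavoured pseudocode (again just a sketch, not any tool's real implementation), pacing amounts to padding each loop iteration out to a fixed interval:

    // Each looping user starts a new session every pacingMillis,
    // sleeping away whatever time the session did not use. If the
    // session overruns the interval, the next start is simply late.
    def pacedVirtualUser(runSession: () => Unit,
                         pacingMillis: Long,
                         iterations: Int): Unit =
      for (_ <- 1 to iterations) {
        val start = System.currentTimeMillis()
        runSession()
        val remaining = pacingMillis - (System.currentTimeMillis() - start)
        if (remaining > 0) Thread.sleep(remaining)
      }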
This works well if you get the pacing right, but in my own experience it can be very difficult to get right when you want an accurate test of a highly variable environment like a (retail) web site.
If you set the pacing too high then you have to create more threads than necessary, which can cause scheduling issues on the load generator, or mean you need more generator servers.
If you set it too low then you risk running out of pacing time and delaying the start of the next vuser iteration. If session start times are delayed, problems like coordinated omission can occur.
If your (session duration / ramp time) ratio is too high, the load can be uneven. I.e. if the start of the session is a login and the end is an order confirmation, then all the logins will happen at the same time, and some time later all the order confirmations will happen at the same or similar times.
Similar to the previous point, if the start (and therefore end) times of the vuser sessions are uneven, load can be starved: vusers stop doing work as their sessions end and they wait for the pacing time to complete.
These problems can be avoided but I have seen them happen in highly experienced teams.
For Gatling, when testing open or partly open systems, pacing is not needed, as it can already provide an arrival rate out of the box. What is not present is a guarantee about the inter-arrival time distribution.
There is already an exponential think time for the inter-request pauses:
https://github.com/excilys/gatling/wiki/Structure-Elements#pause
So this (an exponential distribution) needs to be applied to the injection rate as well, so that we get overlapping requests but with the right mean arrival rate.
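For illustration only (this is not the Gatling API; the function name and parameters are made up), an exponentially distributed inter-arrival schedule with a given mean rate could be generated like this:

    import scala.util.Random

    // Inverse-transform sampling from an exponential distribution:
    // gaps with mean 1000 / arrivalsPerSecond ms give a Poisson
    // arrival process with the requested mean arrival rate.
    def exponentialGapsMillis(arrivalsPerSecond: Double,
                              count: Int,
                              rnd: Random = new Random()): Seq[Long] = {
      val meanGapMillis = 1000.0 / arrivalsPerSecond
      Seq.fill(count) {
        // 1.0 - nextDouble() is in (0, 1], so log() never sees zero
        (-meanGapMillis * math.log(1.0 - rnd.nextDouble())).toLong
      }
    }

Feeding gaps like these to the injector gives bursts and quiet periods around the configured mean, rather than perfectly even spacing.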
If your system is closed, then injecting the number of concurrent users and looping them with during is the way to model it. A call center data entry system would be a good example: the call center operators loop round scenarios as they take different calls. If the data entry system slows down, the applied load backs off, as the operators cannot proceed to the next steps in their workflows.
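As a back-of-the-envelope illustration of that back-off (plain arithmetic, and the operator count and call durations are made up):

    // With a fixed pool of operators, throughput is capped at
    // operators / session duration, so a slower system means less load.
    def sessionsPerHour(operators: Int, sessionSeconds: Double): Double =
      operators * 3600.0 / sessionSeconds

    // sessionsPerHour(50, 360.0)  // 6-minute calls -> 500 sessions/hour
    // sessionsPerHour(50, 480.0)  // 8-minute calls -> 375 sessions/hour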
In terms of the DSL, for me the current approach is a breath of fresh open air, with a clear separation between the scenario and the test parameters.