Hi Joerg,
Any suggestion on how I can make the “never have more than N parallel virtual users running” work?
for the case where you want cUps() users arriving, but no more than N active/running users, there is a ticket for this, so it’s not implemented yet: https://github.com/gatling/gatling/issues/1647
The 3000 and 4 are arbitrary numbers. Most likely I am confusing “virtual” users somehow.
Probably not, but it will likely be useful for you to determine whether the API requests are all independent of each other, or how they are related. You can discard the notion of (implied concurrent) virtual users if they do not repeat sessions and are independent of each other; instead, you would need the user arrival rate from production data.
Basically, I have a CSV file that I generated from a production log file. The CSV file contains the REST API requests I want to play back.
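For replaying requests from a CSV, a feeder is the usual Gatling mechanism. A minimal sketch, assuming a file named requests.csv with a header row and a “path” column (both names are illustrative, not from this thread):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ReplaySimulation extends Simulation {
  // assumed CSV layout: header row with a "path" column,
  // one recorded production request per subsequent row
  val requestsFeeder = csv("requests.csv").queue // stop when the file is exhausted

  val scn = scenario("replay")
    .feed(requestsFeeder)
    .exec(http("replayed request").get("${path}"))
}
```

Each injected user pulls the next row from the feeder, so the playback order follows the log.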
Sounds good
When I use the throttle functionality right now it injects 4 users no matter what state the last 4 users are in (finished or still running).
Throttle is a rate limiter.
You are looking for a concurrency limiter (as far as you have stated the requirement: active users = 4).
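The distinction can be illustrated outside Gatling: a concurrency limiter caps how many requests are in flight at once, regardless of how fast they arrive, whereas a rate limiter only caps how often they start. A plain-Scala sketch using a semaphore (all names here are illustrative):

```scala
import java.util.concurrent.{Executors, Semaphore, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object ConcurrencyLimiterDemo {
  // Run `tasks` simulated requests through a limiter that allows at most
  // `limit` of them to be active simultaneously; return the peak concurrency.
  def run(tasks: Int, limit: Int): Int = {
    val limiter = new Semaphore(limit)
    val active  = new AtomicInteger(0)
    val peak    = new AtomicInteger(0)
    val pool    = Executors.newFixedThreadPool(16)

    (1 to tasks).foreach { _ =>
      pool.execute { () =>
        limiter.acquire() // block until one of the `limit` slots is free
        try {
          val now = active.incrementAndGet()
          peak.updateAndGet(p => math.max(p, now))
          Thread.sleep(10) // simulated request duration
          active.decrementAndGet()
        } finally limiter.release()
      }
    }
    pool.shutdown()
    pool.awaitTermination(30, TimeUnit.SECONDS)
    peak.get()
  }

  def main(args: Array[String]): Unit =
    println(run(tasks = 40, limit = 4)) // peak concurrency stays at or below 4
}
```

No matter how many tasks arrive or how long each one takes, the semaphore keeps the active count at or below the limit, which is the behaviour you are asking for and which throttle does not provide.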
To investigate this I set up the following test:
a request in a SUT takes a configurable N seconds.
val request = http("request").get("http://localhost/SUT/request")
val scn = scenario("scn").exec(request)
I apply an open model workload of cUPS(n) with each user only making 1 request.
- user arrival rate = request throttle rate = response time = 1
setUp(
  scn.inject(constantUsersPerSec(1) during (testDuration seconds))
    .protocols(httpConf)
    .throttle(jumpToRps(1), holdFor(2 hours))
)
the console prints every 5 seconds:
waiting: 100000 / running: 0 / done:0
waiting: 99995 / running: 1 / done:4
waiting: 99990 / running: 1 / done:9
waiting: 99985 / running: 1 / done:14
waiting: 99980 / running: 1 / done:19
waiting: 99975 / running: 1 / done:24
waiting: 99970 / running: 1 / done:29
waiting: 99965 / running: 1 / done:34
- user arrival rate = request throttle rate = 1
response time = 4
setUp(
  scn.inject(constantUsersPerSec(1) during (testDuration seconds))
    .protocols(httpConf)
    .throttle(jumpToRps(1), holdFor(2 hours))
)
the console prints every 5 seconds:
waiting: 100000 / running: 0 / done:0
waiting: 99995 / running: 4 / done:1
waiting: 99990 / running: 4 / done:6
waiting: 99985 / running: 4 / done:11
waiting: 99980 / running: 4 / done:16
waiting: 99975 / running: 4 / done:21
waiting: 99970 / running: 4 / done:26
waiting: 99965 / running: 4 / done:31
- user arrival rate = 2
request throttle rate = 1
response time = 1
setUp(
  scn.inject(constantUsersPerSec(2) during (testDuration seconds))
    .protocols(httpConf)
    .throttle(jumpToRps(1), holdFor(2 hours))
)
the console prints every 5 seconds:
waiting: 200000 / running: 0 / done:0
waiting: 199990 / running: 7 / done:3
waiting: 199980 / running: 12 / done:8
waiting: 199970 / running: 17 / done:13
waiting: 199960 / running: 22 / done:18
waiting: 199950 / running: 27 / done:23
waiting: 199940 / running: 32 / done:28
waiting: 199930 / running: 37 / done:33
waiting: 199920 / running: 42 / done:38
waiting: 199910 / running: 47 / done:43
waiting: 199900 / running: 52 / done:48
waiting: 199890 / running: 57 / done:53
…
Throttle does its job, as the rps is constant across all 3 tests, but the number of active (running) users differs in each test for different reasons.
It is also worth noting that throttle constrains requests from starting, not completing. So the “active (running) users” figure is a mix of users in the middle of a scenario and users delayed/queued waiting to make their first request (arguably those are not really running, as they have not done any work yet).
If you are happy that, as the response time of your SUT varies, the active users will drift above or below 4, then this seems like a reasonable setup. If active users must be constant, and you want them to be executing a request rather than being delayed/queued or in a pause, then I think the only choice is to revert to a closed model with a total of 4 users that loop the scenario.
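A sketch of that closed model, reusing the request, httpConf, and testDuration from the snippets above (the exact injection step is an assumption; on recent Gatling versions constantConcurrentUsers(4) is a more direct way to express the same profile):

```scala
// exactly 4 users, each looping the scenario until the test window ends
val loopingScn = scenario("scn").forever(
  exec(request)
)

setUp(
  loopingScn.inject(atOnceUsers(4))
    .protocols(httpConf)
).maxDuration(testDuration seconds)
```

With this setup a new request starts only when one of the 4 users finishes its previous one, so exactly 4 requests are in flight regardless of response time.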
This is a good way of testing and is what I am doing at this point. It is also actually more like the real-world scenario, which is why I like Gatling: another 4 users could arrive at any given time, and they don’t care how many users are already active.
agree with that!
The reason I want to limit to a constant number of running requests is simply debugging. It makes analysing logs on the backend (database, API, etc.) somewhat easier, as I know I only ever have n requests (for example, 4) running.
Sounds reasonable for early or simplified testing.
I don’t think throttle is widely applicable though (or suitable as a default way of modelling the load), especially if you are using cUps().
Thanks,
Alex