Gatling version: 3.11.5.2 (must be up to date)
Gatling flavor: java kotlin scala javascript typescript
Gatling build tool: maven gradle sbt bundle npm
I read the guidelines and how to ask a question topics.
I provided a SSCCE (or at least, all information to help the community understand my topic)
I copied the output I observe and explained what I think it should be.
Hi Gatling Community,
I am having trouble understanding how the Closed Model works and, consequently, how I can create load with my desired parameters.
The relevant part of my simulation (one of the variants) looks as follows:
private final ScenarioBuilder pacedSearch = scenario("StaticSearchScenario")
    .pace(Duration.ofMillis(9500), Duration.ofMillis(10500))
    .exec(staticSearch);

{
    setUp(
        pacedSearch.injectClosed(
            constantConcurrentUsers(10).during(120)
        )
    ).protocols(httpProtocol);
}
To my surprise, the generated load looked like this:
================================================================================
2024-07-22 11:54:10 GMT 30s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=2395 KO=0 )
> staticSearch (OK=2395 KO=0 )
---- StaticSearchScenario ------------------------------------------------------
active: 10 / done: 2395
================================================================================
So after 30 s of the run, the achieved request rate is ~80 req/s (2395/30), produced by 10 active users, which therefore run with an effective pacing of ~125 ms (1000/80 × 10). In other words, the load is throttled by the tested system, not by the load generator configuration.
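Just to show my arithmetic explicitly (a throwaway check, not part of the simulation; the numbers are taken from the stats block above):

```java
public class RateCheck {
    public static void main(String[] args) {
        int done = 2395;        // requests completed after 30 s (from the console stats)
        int elapsedSec = 30;
        int activeUsers = 10;

        // Overall request rate achieved by the whole population.
        double reqPerSec = (double) done / elapsedSec;            // ~79.8 req/s

        // With the load spread over 10 users, the effective gap between
        // one user's consecutive requests.
        double perUserPaceMs = 1000.0 / reqPerSec * activeUsers;  // ~125 ms

        System.out.println("rate (req/s): " + reqPerSec);
        System.out.println("per-user pace (ms): " + perUserPaceMs);
    }
}
```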
I expected each user's pacing to be 10 s on average, as configured on the pace() element, and therefore the total load with 10 users to come to 1 req/s (and last for 2 min).
Additional info: the tested search request responds in under 30 ms, so it is rather quick. This is of course much less than the desired pacing of 10 s.
What am I missing here? Why is the set pacing not effective?
Some further questions and observations:
-
I am having trouble understanding the timing of users entering the system in the Closed Model here. I am used to defining ramp-up/test-duration conditions as: I want this many users to start executing, with their start times uniformly distributed over a given period, and then to run their scenario (with or without pacing) for a given amount of time. However, in Gatling there is no users(userNo).rampUp(ruSec).duration(durationSec).
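For reference, the closest translation of that pattern I can come up with uses an open-model injection plus a loop inside the scenario. This is my assumption of how to express it (userNo, ruSec, durationSec, staticSearch and httpProtocol are placeholders for my own definitions), not something I found spelled out in the docs:

```java
// Assumed equivalent: userNo users, start times spread uniformly over ruSec,
// each user then iterating the scenario for durationSec.
ScenarioBuilder loopedSearch = scenario("StaticSearchScenario")
    .during(durationSec).on(           // each user keeps iterating for durationSec
        exec(staticSearch)
    );

{
    setUp(
        loopedSearch.injectOpen(
            rampUsers(userNo).during(ruSec)  // start times uniformly spread over ruSec
        )
    ).protocols(httpProtocol);
}
```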
-
In the example shown at the beginning, all the users start at once, don't they? That is not ideal: in bigger tests one does not want to start all the users at once (it is also not realistic for a browser-based application, with people gradually logging in at the office in the morning).
-
I see I could use the construct below to ramp the load up gradually, but where do I say that I want this 20-user load to run for, say, 10 min?
rampConcurrentUsers(10).to(20).during(10)
-
I could maybe do the following, but after the ramp-up period wouldn't the 20 users send their requests all at once? That is a bad idea; I want the requests to be spread out fairly regularly.
scn.injectClosed(
    rampConcurrentUsers(10).to(20).during(120),
    constantConcurrentUsers(20).during(600)
)
-
I can solve the above problems with… an Open Model (kind of) plus a during loop:
private final ScenarioBuilder pacedDurationSearch = scenario("StaticSearchScenario")
    .during(600).on(
        pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch)
    );

{
    setUp(
        pacedDurationSearch.injectOpen(rampUsers(10).during(10))
    ).protocols(httpProtocol);
}
-
The above is a bit of a conceptual mix-up, isn't it? The scenario defines how long the simulation runs, hmm… strange… And the pace() element suddenly works as expected: one iteration takes 10 s on average! The model is both open and closed! The number of users in the system is constant (10) over 10 min, their request send times are regularly distributed over a 10 s interval, and the average load rate is 1 req/s. I would still call it a closed-model scenario, because if response times somehow degrade badly, the request rate will go down while the number of users in the system stays constant.
-
One, maybe not so obvious, good thing about that last example is that the HTTP connections are reused across iterations, a property I often need in my tests. This way I am saying: "these are returning users, and they get to keep their connection." When not reusing connections and generating lots of traffic, one runs into the problem of exhausting all the outgoing ports, because closed sockets are retained for a while in TIME_WAIT (at least on Windows). This concept of "the same" vs. "not the same" user, which determines whether a new connection is opened, is not very prominent in Gatling, is it? One cannot declare a group of users as "returning users" vs. "new users", or is there an abstraction/setting for it?
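For completeness, the only related knob I have found so far is shareConnections() on the HTTP protocol, which, as I read the docs, makes all virtual users draw from a shared connection pool instead of each opening its own. A sketch, assuming I am reading it right (the base URL is a placeholder):

```java
// shareConnections() pools connections across virtual users, so new users can
// reuse existing sockets; it does not, as far as I can tell, model
// "returning users" vs. "new users" as distinct groups.
HttpProtocolBuilder httpProtocol = http
    .baseUrl("https://example.invalid")  // placeholder, my assumption
    .shareConnections();
```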
Regards,
Tomasz G.