Thank you for your prompt and detailed response, Stephane.
This makes me think that if I want pacing at the scenario level, I need to code it manually. The code below works as desired in terms of pacing (I could still make it more reusable…); I now get 1 req/s with 10 users. A minor thing, but it would be nice to have this as a standard building block (if I may say so).
private final ScenarioBuilder pacedSearch = scenario("StaticSearchScenario")
    .exec(s -> s.set("startTimeMillis", System.nanoTime() / 1000000))
    .exec(staticSearch)
    .pause(s -> {
        long startTimeMillis = s.getLong("startTimeMillis");
        long pauseMillis = (10000 + ThreadLocalRandom.current().nextInt(-500, 501))
            - (System.nanoTime() / 1000000 - startTimeMillis);
        return Duration.ofMillis(pauseMillis > 0 ? pauseMillis : 0);
    });
{
    setUp(
        pacedSearch.injectClosed(constantConcurrentUsers(10).during(240))
    ).protocols(httpProtocol);
}
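To make the pacing more reusable, the pause computation could be factored into a small helper that is independent of the Gatling session. This is just a sketch; the class and method names are mine, not part of any library:

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

final class Pacing {
    // Remaining pause so that one iteration takes roughly targetMillis in total,
    // with a random jitter of up to +/- jitterMillis; never negative.
    static Duration remainingPause(long targetMillis, long jitterMillis, long elapsedMillis) {
        long jitter = jitterMillis == 0
                ? 0
                : ThreadLocalRandom.current().nextLong(-jitterMillis, jitterMillis + 1);
        long pause = targetMillis + jitter - elapsedMillis;
        return Duration.ofMillis(Math.max(pause, 0));
    }
}
```

The pause lambda in the scenario above would then reduce to something like `.pause(s -> Pacing.remainingPause(10000, 500, System.nanoTime() / 1000000 - s.getLong("startTimeMillis")))`.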
In terms of the load profile, it unfortunately creates “marching elephants”; no wonder, since all the users start at once and there is no ramp-up phase.
Ok, so… I figure I cannot use the following to spread out the load evenly:
rampConcurrentUsers(0).to(10).during(10)
because here there is a ramp-up phase, which spreads out the users, but… there is no execution phase?!? This is kind of surprising for a load tool and goes against my intuition, but so be it :-).
Looking at the docs… and there is the Meta DSL to the rescue.
{
    setUp(
        pacedSearch.injectClosed(
            incrementConcurrentUsers(10)
                .times(1)
                .eachLevelLasting(240)
                .separatedByRampsLasting(10)
                .startingFrom(0)
        )
    ).protocols(httpProtocol);
}
And the load profile is regular at last!
Not bad. Still, I (often) have another requirement (as usual in life :-)). Here every user is a “new user” who connects to the system and then goes away. That is not the desired connection-management profile, and, as mentioned already, it can relatively easily hit the “port opening” limit on Windows. I would need the users to be “returning users”, so that N users would keep using, more or less, the same N connections. I can imagine that at some point I will want to mix “new users” and “returning users” in one test, but that is still in the future…
As mentioned in the previous post, I can achieve the above by using a during().on() loop at the scenario level, thus prolonging the lifetime of the users to the duration of the entire test…
and that would almost match my needs.
The next step is to run a stepped (or capacity, or scalability) simulation. The challenge here is that users belonging to the later steps need shorter and shorter lifetimes. I think this again requires some coding… I will open another thread for that… it is doable, but it has some (unexpected?) side effects.
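For what it is worth, the shrinking lifetimes could be computed with a small helper. This is only a sketch under my own assumption about the shape of the stepped test (every step lasts the same time, steps start one step apart, and all users run until the common end of the test); the helper name is mine:

```java
final class SteppedLoad {
    // Users injected at step stepIndex (0-based) must live only for the
    // remaining steps so that everybody stops at the common end of the test.
    static int lifetimeSeconds(int stepIndex, int totalSteps, int stepSeconds) {
        return (totalSteps - stepIndex) * stepSeconds;
    }
}
```

Each step's scenario would then use `.during(lifetimeSeconds(i, n, 240)).on(...)` instead of a fixed `.during(240)`.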
One last thing to mention here: a possibility would be to do the below, but these bigger and bigger ramp-ups are not really desired. And the connection-management pattern suffers again… conceptually, it is one group of users finishing work and another, bigger one, coming in, and so on. I would prefer to add users while the already-running ones continue to run.
private ScenarioBuilder pacedDurationSearchWithName(String name) {
    return scenario(name).during(240).on(
        pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch));
}
{
    setUp(
        pacedDurationSearchWithName("StaticSearchScenario-" + 0).injectOpen(nothingFor(0 * 240), rampUsers(1 * 5).during(10)),
        pacedDurationSearchWithName("StaticSearchScenario-" + 1).injectOpen(nothingFor(1 * 240), rampUsers(2 * 5).during(10)),
        pacedDurationSearchWithName("StaticSearchScenario-" + 2).injectOpen(nothingFor(2 * 240), rampUsers(3 * 5).during(10))
    ).protocols(httpProtocol);
}
Of course, normally I would create the above list with the help of a loop that would create as many steps as needed.
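Such a loop could first compute the per-step injection parameters and then build the populations from them. The sketch below only computes the numbers (the helper name and the {delay, users} pair layout are mine), mirroring the hand-written setUp() above:

```java
final class LoadSteps {
    // Step i waits i * stepSeconds, then injects (i + 1) * usersPerStep users.
    // Returns one {delaySeconds, userCount} pair per step.
    static int[][] steps(int numSteps, int stepSeconds, int usersPerStep) {
        int[][] result = new int[numSteps][2];
        for (int i = 0; i < numSteps; i++) {
            result[i][0] = i * stepSeconds;        // feeds nothingFor(...)
            result[i][1] = (i + 1) * usersPerStep; // feeds rampUsers(...)
        }
        return result;
    }
}
```

Each pair i would then map to `pacedDurationSearchWithName("StaticSearchScenario-" + i).injectOpen(nothingFor(delay), rampUsers(users).during(10))`, with the resulting populations collected into an array and passed to `setUp(...)`.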
Regards,
Tomasz G.