Modelling load (Closed Model, but not only)

Gatling version: 3.11.5.2 (must be up to date)
Gatling flavor: java kotlin scala javascript typescript
Gatling build tool: maven gradle sbt bundle npm

I read the guidelines and the how-to-ask-a-question topics.
I provided an SSCCE (or at least all the information to help the community understand my topic).
I copied the output I observe and explained what I think it should be.

Hi Gatling Community,
I am having trouble understanding how the Closed Model works and, consequently, how I can create my load with the desired parameters.

The relevant part of my simulation (one of the variants) looks as follows:

    private final ScenarioBuilder pacedSearch = scenario("StaticSearchScenario")
        .pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch);

    {
        setUp(
            pacedSearch
                .injectClosed(
                    constantConcurrentUsers(10).during(120)
                )
        ).protocols(httpProtocol);
    }

To my surprise, the generated load looked like this:

================================================================================
2024-07-22 11:54:10 GMT                                      30s elapsed
---- Requests ------------------------------------------------------------------
> Global                                                   (OK=2395   KO=0     )
> staticSearch                                             (OK=2395   KO=0     )

---- StaticSearchScenario ------------------------------------------------------
          active: 10     / done: 2395
================================================================================

So after 30 s of the run the achieved request rate is ~80 req/s (2395/30), produced by 10 active users, which means each user effectively runs with a pacing of ~125 ms (1000 ms / 80 req/s × 10 users). The load is actually throttled by the tested system, not by the load generator configuration.

I expected the pacing of each user to be 10 s on average, as configured on the pace() element, and therefore the total load with 10 users to come to 1 req/s (and last for 2 min).

Additional info: the tested search request responds in under 30 ms, so it is rather quick. This is of course much less than the desired pacing of 10 s.

What am I missing here? Why is the set pacing not effective?

Some further questions and observations:

  • I am having trouble understanding the timing of the users entering the system in the Closed Model here. I am used to defining the ramp-up/test duration conditions to say: I want that many users to start executing, with their start times uniformly distributed over a given period, and then to run their scenario (with pacing or without) for a given amount of time. However, in Gatling there is no users(userNo).rampUp(ruSec).duration(durationSec).

  • In the example shown at the beginning, all the users start at once, don’t they? That is not ideal; in bigger tests one does not want to start all the users at once (it is also not the reality of a browser-based application, where people gradually log in at the office in the morning).

  • I see I could use this construct to ramp up the load gradually, but where do I say that I want this 20-user load to run for (let’s say) 10 min?

    rampConcurrentUsers(10).to(20).during(10)
    
  • I could maybe do the below, however after the ramp-up period the 20 users would start sending their requests all at once, no? That is a bad idea; I want the requests to be somewhat regularly spread out.

    scn.injectClosed(
      rampConcurrentUsers(10).to(20).during(120),
      constantConcurrentUsers(20).during(600)
    )
    
  • I can solve the above problems with… an Open Model (kind of) plus a during loop

      private final ScenarioBuilder pacedDurationSearch = scenario("StaticSearchScenario")
        .during(600).on(
          pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch)
        );
    
      {
        setUp(
          pacedDurationSearch.injectOpen(rampUsers(10).during(10))
        ).protocols(httpProtocol);
      }
    
  • The above is a bit of a conceptual mix-up, isn’t it? The scenario defines how long the simulation runs, hmm… strange… The pace() element suddenly works as expected; one iteration takes 10 s on average! The model is both open and closed!!! The number of users in the system is constant (10) over 10 min, their request sending times are regularly distributed over a 10 s interval, and the average load rate is 1 req/s. I would still call it a closed model scenario, because if the response times somehow degrade and become very long, the request rate will go down while the number of users in the system remains constant.

  • One, maybe not so obvious, good thing about that last example is that the HTTP connections get reused across iterations, which is a property I often need in my tests. This way I am saying “these are returning users and they get to keep their connection”. When not reusing connections and generating lots of traffic, one runs into the problem of using up all the outgoing ports, as there is that retention period (at least on Windows). This concept of “the same” vs “not the same” user, which determines whether a new connection is opened, is not very prominent in Gatling, is it? One cannot declare a group of users as “returning users” vs “new users”, or is there an abstraction/setting for it? (See the sketch after this list.)
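If I read the docs correctly, one related option is the HTTP protocol’s shareConnections(), which makes virtual users draw from a single shared connection pool instead of each opening its own. A minimal sketch (the base URL is illustrative):

    // assumption: a shared pool lets short-lived virtual users reuse
    // connections, mitigating ephemeral port exhaustion on the generator
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("https://system-under-test.example") // illustrative
        .shareConnections();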

Regards,
Tomasz G.

First, injection profiles only control when new virtual users get started. A closed workload injection profile starts new virtual users in order to maintain a target number of concurrent users.

They don’t control individual virtual users’ journeys; those are defined by scenarios.
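For example (a sketch; scn and the durations are illustrative):

    // open model: you control the arrival rate of new virtual users
    scn.injectOpen(constantUsersPerSec(1).during(120));

    // closed model: you control the number of concurrent virtual users;
    // a new user is only started to replace one that has completed
    scn.injectClosed(constantConcurrentUsers(10).during(120));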

pace is a kind of pause whose value depends on when a given virtual user last passed here. It only makes sense inside a loop.

Your current result is perfectly expected:

  • your scenario doesn’t include a loop, so your pace doesn’t do anything (see the sketch below)
  • your scenario only consists of one single request, without any pause, so all virtual users complete very fast, just to be replaced by new ones, which in turn complete very fast, etc., resulting in a high throughput.
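To make the pace effective, it needs to sit inside a loop. A minimal sketch reusing the names from the simulation above (the name and the 120 s loop duration are illustrative):

    // pace() measures from the virtual user's previous pass through this
    // point, so inside the loop each iteration starts ~10 s after the last
    private final ScenarioBuilder loopedPacedSearch = scenario("StaticSearchScenario")
        .during(120).on(
            pace(Duration.ofMillis(9500), Duration.ofMillis(10500))
                .exec(staticSearch)
        );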

Thank you for your prompt and detailed response, Stephane.

This makes me now think that if I want pacing at the scenario level I need to code it manually. I found the code below to work as desired in terms of pacing (I could still make it more reusable… a sketch of one way to factor it out follows the snippet); I can now get 1 req/s with 10 users. A minor thing, but it would be nice to have this as a standard building block (if I may say so).

    private final ScenarioBuilder pacedSearch = scenario("StaticSearchScenario")
        // record the iteration start time in the session
        .exec(s -> s.set("startTimeMillis", System.nanoTime() / 1000000))
        .exec(staticSearch)
        // pause for whatever remains of a randomized 9.5-10.5 s pacing interval
        .pause(s -> {
            long startTimeMillis = s.getLong("startTimeMillis");
            long pauseMillis = (10000 + ThreadLocalRandom.current().nextInt(-500, 501))
                - (System.nanoTime() / 1000000 - startTimeMillis);
            return Duration.ofMillis(Math.max(pauseMillis, 0));
        });

    {
        setUp(
            pacedSearch.injectClosed(constantConcurrentUsers(10).during(240))
        ).protocols(httpProtocol);
    }
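For instance, the pacing logic could be factored out along these lines (a hypothetical paced() helper, not part of the Gatling DSL):

    // hypothetical helper: wraps any chain with start-time tracking and a
    // pause that tops the iteration up to a random duration in [min, max]
    private ChainBuilder paced(Duration min, Duration max, ChainBuilder chain) {
        return exec(s -> s.set("startTimeMillis", System.nanoTime() / 1000000))
            .exec(chain)
            .pause(s -> {
                long elapsed = System.nanoTime() / 1000000 - s.getLong("startTimeMillis");
                long target = min.toMillis()
                    + ThreadLocalRandom.current().nextLong(max.toMillis() - min.toMillis() + 1);
                return Duration.ofMillis(Math.max(target - elapsed, 0));
            });
    }

The scenario above would then become scenario("StaticSearchScenario").exec(paced(Duration.ofMillis(9500), Duration.ofMillis(10500), staticSearch)).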

In terms of the load profile it unfortunately creates “marching elephants”. No wonder: all the users start at once; there is no ramp-up phase.

Ok, so… I figure I cannot use the below to spread out the load regularly

    rampConcurrentUsers(0).to(10).during(10)

because here there is a ramp-up phase, which spreads out the users, but… there is no execution phase?!? This is kind of surprising for a load tool and goes against my intuition, but so be it :-).

Looking at the docs… and there is the Meta DSL to the rescue.

    {
        setUp(
            pacedSearch.injectClosed(
                incrementConcurrentUsers(10)
                    .times(1)
                    .eachLevelLasting(240)
                    .separatedByRampsLasting(10)
                    .startingFrom(0)
            )
        ).protocols(httpProtocol);
    }

And the load profile is regular at last!

Not bad, but I (often) have another requirement (as usual in life :-)). Here every user is a “new user” that connects to the system and goes away. That is not the desired connection management profile, and, as mentioned already, it can relatively easily hit the “port opening” limit on Windows. I would need the users to be “returning users”, so N users would keep using, more or less, the same N connections. I can imagine that at some point I will want to mix “new users” and “returning users” in one test, but that is still in the future…

As mentioned in the previous post, I can achieve the above by using the during().on() loop on the scenario level, thereby prolonging the lifetime of the users to the duration of the entire test…
and that would almost match my needs :-).

The next step is to run a stepped (or capacity, or scalability) simulation. Here the challenge is that users belonging to the later steps would need to have shorter and shorter lifetimes. I think this again needs some coding… I will open another thread for that… it is doable, but it has some (unexpected?) side effects.
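(For the record, that coding would presumably be along these lines, with each later step looping for less time so that all steps end together; a hypothetical sketch:)

    // hypothetical: pass the loop duration in, so step i can use
    // totalSec - i * stepSec and all steps finish at the same time
    private ScenarioBuilder pacedDurationSearch(String name, int loopSec) {
        return scenario(name).during(loopSec).on(
            pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch));
    }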

Last thing to mention here: one possibility would be to do the below, but these bigger and bigger ramp-ups are not really desired, and there is the connection management pattern again. Conceptually it is one group of users finishing work and another, bigger one coming in, and so on. I would prefer to add users while the already running ones continue to run.

    private ScenarioBuilder pacedDurationSearchWithName(String name) {
        return scenario(name).during(240).on(
            pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch));
    }

    {
        setUp(
            pacedDurationSearchWithName("StaticSearchScenario-" + 0).injectOpen(nothingFor(0*240), rampUsers(1*5).during(10)),
            pacedDurationSearchWithName("StaticSearchScenario-" + 1).injectOpen(nothingFor(1*240), rampUsers(2*5).during(10)),
            pacedDurationSearchWithName("StaticSearchScenario-" + 2).injectOpen(nothingFor(2*240), rampUsers(3*5).during(10))
        ).protocols(httpProtocol);
    }

Of course, normally I would create the above list with the help of a loop, creating as many steps as needed.
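For example (a sketch assuming java.util.List, java.util.stream.IntStream/Collectors and Gatling’s PopulationBuilder are imported; the step count of 3 is illustrative):

    {
        // step i waits out the previous steps (i * 240 s), then ramps in
        // (i + 1) * 5 users over 10 s, matching the hand-written version
        List<PopulationBuilder> steps = IntStream.range(0, 3)
            .mapToObj(i -> pacedDurationSearchWithName("StaticSearchScenario-" + i)
                .injectOpen(
                    nothingFor(i * 240),
                    rampUsers((i + 1) * 5).during(10)
                ))
            .collect(Collectors.toList());

        setUp(steps).protocols(httpProtocol);
    }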

Regards,
Tomasz G.
