Modelling load - regular stepped load

Gatling version: 3.11.5 (must be up to date)
Gatling flavor: java kotlin scala javascript typescript
Gatling build tool: maven gradle sbt bundle npm

I read the guidelines and the "how to ask a question" topics.
I provided an SSCCE (or at least all the information to help the community understand my topic).
I copied the output I observe and explained what I think it should be.

Hi Gatling Community,
I continue here with my attempts at the load modelling that I started in another thread.

In short, I want a stepped load with regular ramp-up phases that spread the users evenly, adding only the new users at each step. I also want the users to be “returning users”, i.e. not opening a new connection to the system on each scenario iteration. Finally, I want to pace the duration of the scenario iterations (the tested search request executes very fast, ~30 ms).

I could do that with the code presented below, and I am quite happy with the results.


I have regular ramp-ups, the requests are evenly spread out, the users “return” with each scenario iteration. As one can see around 20:20, I was able to find the throughput limit of the tested system.
So far so good, but here comes the nitpicking :slight_smile:. This load profile is created using one scenario per load step, so I end up with 22 scenarios, and this has its consequences:

  • Look at the left graph above, the list of scenario names covers a part of the image… a bit unreadable.
  • The console output shows the progress of all the 22 populations every 5 seconds, that is hardly readable. In abstract terms I would prefer to think that I have just a single population of users that grows…
  • And lastly, since I configured logging to a file, my logs are filled with the below information, just endless entries telling me that a Scenario has finished injecting. Once would be enough.
3813870 [GatlingSystem-akka.actor.default-dispatcher-8] INFO  i.g.c.c.inject.open.OpenWorkload - Scenario io.gatling.core.scenario.Scenario@5c892028 has finished injecting
3813870 [GatlingSystem-akka.actor.default-dispatcher-8] INFO  i.g.c.c.inject.open.OpenWorkload - Scenario io.gatling.core.scenario.Scenario@3487f356 has finished injecting
...
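As a side note for anyone hitting the same log noise: if the goal is only to silence these entries, one option is to raise the log level for the injector's logger, whose name matches the abbreviated class shown in the log lines above. This is a sketch only, assuming Logback is the logging backend (as in the default Gatling setup):

```xml
<!-- logback.xml: silence the per-scenario "has finished injecting" INFO entries.
     Logger name expanded from the abbreviation i.g.c.c.inject.open.OpenWorkload. -->
<logger name="io.gatling.core.controller.inject.open.OpenWorkload" level="WARN"/>
```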

Altogether, I can do what I need to, but sometimes it feels unnecessarily difficult. I am not actually sure if there are better ways to create the desired load with Gatling, or if it just wasn’t designed to fit my particular needs.
I already have an idea of how I could achieve this load profile with just a single scenario… it will be even more complicated… I might be back once I get there…
In the meantime, I would be grateful for any ideas that simplify my solution.

import static com.load.RegularSteppedLoad.regularSteppedLoad;
...
    {
        setUp(
            regularSteppedLoad(staticSearch)
                .scenarioName("StaticSearchScenario")
                .paceMinMillis(987)
                .paceMaxMillis(1013)
                .startingFrom(0)
                .incrementUsers(5)
                .times(22)
                .separatedByRampsLastingSec(1)
                .eachLevelLastingSec(5 * 60)
                .build()
        ).protocols(httpProtocol);
    }
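To make the timing explicit: with this builder, step k first waits k × (ramp + level) seconds (the nothingFor part) and then stays active for (times − k) × (ramp + level) seconds, so every scenario ends at the same instant. A small standalone sketch of that arithmetic, using the values from the setUp above:

```java
public class ScheduleSketch {
    public static void main(String[] args) {
        int times = 22;          // number of load steps
        long rampSec = 1;        // separatedByRampsLastingSec
        long levelSec = 5 * 60;  // eachLevelLastingSec
        long stepSec = rampSec + levelSec;

        for (int stepIdx = 0; stepIdx < times; stepIdx++) {
            long delay = stepIdx * stepSec;                   // nothingFor(...)
            long active = (long) (times - stepIdx) * stepSec; // during(...)
            // delay + active is constant: every step ends at the same time
            System.out.println("step " + stepIdx + ": start=" + delay
                    + "s, end=" + (delay + active) + "s");
        }
        // Total test length: 22 * 301 s = 6622 s (~110 min)
        System.out.println("total=" + times * stepSec + "s");
    }
}
```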

which uses the below class

package com.load;

import io.gatling.javaapi.core.*;
import static io.gatling.javaapi.core.CoreDsl.*;
import java.time.Duration;
import java.util.LinkedList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class RegularSteppedLoad {
    private String scenarioName = "Scenario";
    private ChainBuilder chain;
    private long paceMinMillis = 1000;
    private long paceMaxMillis = 1000;
    private int times = 1;
    private int incrementUsers = 1;
    private int startingFrom = 0;
    private long separatedByRampsLastingSec = 1;
    private long eachLevelLastingSec = 10;

    public static RegularSteppedLoad regularSteppedLoad(ChainBuilder chain) {
        return new RegularSteppedLoad().chain(chain);
    }

    public List<PopulationBuilder> build() {
        return IntStream.range(0, times).mapToObj(this::loadStepPopulation)
            .collect(Collectors.toCollection(LinkedList::new));
    }

    private PopulationBuilder loadStepPopulation(final int stepIdx) {
        return createScenarioWrapWithDurationPace(
            scenarioName + "-" + stepIdx,
            chain,
            (long) (times - stepIdx) * (separatedByRampsLastingSec + eachLevelLastingSec),
            paceMinMillis, paceMaxMillis
        ).injectOpen(
            nothingFor(stepIdx * (separatedByRampsLastingSec + eachLevelLastingSec)),
            rampUsers(stepIdx == 0 ? startingFrom + incrementUsers : incrementUsers).during(separatedByRampsLastingSec)
        );
    }

    private ScenarioBuilder createScenarioWrapWithDurationPace(String scenarioName, ChainBuilder chain, long duringSec,
                                                               long paceMinMillis, long paceMaxMillis) {
        return scenario(scenarioName)
            .during(duringSec).on(
                pace(Duration.ofMillis(paceMinMillis), Duration.ofMillis(paceMaxMillis))
                    .exec(chain)
            );
    }

    public RegularSteppedLoad scenarioName(String scenarioName) {
        this.scenarioName = scenarioName;
        return this;
    }

    private RegularSteppedLoad chain(ChainBuilder chain) {
        this.chain = chain;
        return this;
    }

    public RegularSteppedLoad paceMinMillis(long paceMinMillis) {
        this.paceMinMillis = paceMinMillis;
        return this;
    }

    public RegularSteppedLoad paceMaxMillis(long paceMaxMillis) {
        this.paceMaxMillis = paceMaxMillis;
        return this;
    }

    public RegularSteppedLoad times(int times) {
        this.times = times;
        return this;
    }

    public RegularSteppedLoad incrementUsers(int incrementUsers) {
        this.incrementUsers = incrementUsers;
        return this;
    }

    public RegularSteppedLoad startingFrom(int startingFrom) {
        this.startingFrom = startingFrom;
        return this;
    }

    public RegularSteppedLoad separatedByRampsLastingSec(long separatedByRampsLastingSec) {
        this.separatedByRampsLastingSec = separatedByRampsLastingSec;
        return this;
    }

    public RegularSteppedLoad eachLevelLastingSec(long eachLevelLastingSec) {
        this.eachLevelLastingSec = eachLevelLastingSec;
        return this;
    }
}

Regards,
Tomasz G.

It really seems to me that you are overly complicating things, and that you could achieve the same result with a simple loop, controlling the arrival rate of the new users with an open injection profile.
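For reference, that suggestion could look roughly like this in Gatling's Java DSL. This is a sketch only, not a drop-in replacement; staticSearch, httpProtocol, and the numbers are reused from the original post as placeholders. The open-model counterpart of the increment profile, incrementUsersPerSec, ramps the arrival rate in steps from a single scenario:

```java
// Sketch: one scenario with a stepped *open* injection profile.
setUp(
    scenario("StaticSearchScenario")
        .exec(staticSearch)                 // each arriving user runs the chain once
        .injectOpen(
            incrementUsersPerSec(5)         // +5 new users/sec at each step
                .times(22)                  // 22 load steps
                .eachLevelLasting(5 * 60)   // 5 minutes per level
                .separatedByRampsLasting(1) // 1-second ramps between levels
                .startingFrom(0)            // initial arrival rate
        )
).protocols(httpProtocol);
```

Note that an open profile changes the user semantics: every arrival is a new virtual user, which is worth weighing against the “returning users” requirement from the original post.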

I read your question quite a few times. Here are my questions; in case I have understood something wrong, let me know:

  • I’m not sure what exactly you want to do with your scenarios. Do you want to execute them in parallel or sequentially?
  • Regarding “I also want the users to be the ‘returning users’ and therefore not opening a new connection to the system on each scenario iteration”: did you use some kind of loop like forever()? Is your system a closed model or an open model?

Hi trinp,
Thank you for your interest

I only have multiple scenarios as a “hack” to ultimately create a stepped load model (as the attached screenshots show). When the scenarios run they do run in parallel, as I want them to. Initially most of the scenarios are waiting and, with time, more and more of them start ramping up the users, which gives the desired effect of increasing the load.

I attached nearly the full code in my question, so, as you can see yourself, in the method createScenarioWrapWithDurationPace() I use the .during().on() loop, not the forever() loop. I believe my system is a closed model but, as mentioned in the previous post, it has some properties of both models.

Your question actually made me think of an idea that I have not tried yet (thank you for that), so let me run this:

    private final ScenarioBuilder pacedForeverSearch = scenario("PacedForeverSearch")
        .forever().on(
            pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch)
        );

    {
        setUp(
            pacedForeverSearch.injectClosed(
                incrementConcurrentUsers(10)
                    .times(4)
                    .eachLevelLasting(4 * 60)
                    .separatedByRampsLasting(10)
                    .startingFrom(0)
            )
        ).protocols(httpProtocol);
    }

And it nearly works as desired :slight_smile:, but there is an ugly catch :frowning:. Below you can see a Grafana graph for that run (please ignore the hiccup at the start); it looks awesome. However, the reason I am attaching a Grafana graph is that my simulation keeps running indefinitely, and I could only stop it with Ctrl+C. As a result, Gatling did not generate its normal output with the usual graphs (hmm… I guess I can still try with gatling -ro mysimulationfolder… will do later).


I would need a way to tell the forever() loop to terminate the last step at some point, any ideas?

Regards,
Tomasz G.

You can use maxDuration() to force shut down after a desired time.


@trinp thank you so much, I had an “oh, my gosh!” moment :slight_smile:, this is it!

I don’t know how I could have missed it, especially after telling my colleagues “read the docs thoroughly, a solution should be there” so many times… it just escaped my eyes.

Ultimately this does it for my use case

    private final ScenarioBuilder pacedForeverSearch = scenario("PacedForeverSearch")
        .forever().on(
            pace(Duration.ofMillis(9500), Duration.ofMillis(10500)).exec(staticSearch)
        );

    {
        setUp(
            pacedForeverSearch.injectClosed(
                incrementConcurrentUsers(10)
                    .times(4)
                    .eachLevelLasting(4 * 60)
                    .separatedByRampsLasting(10)
                    .startingFrom(0)
            )
        )
        .protocols(httpProtocol)
        .maxDuration(4 * (10 + 4 * 60));
    }

… and it is rather mind-boggling how many ideas went through my head, up until this point, on how to make the users execute for a given time and then stop when desired… it felt like forcing an open door :slight_smile:.

Thank you and regards,
Tomasz G.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.