Scope of pace element

Hi,

Wondering how to handle pacing in case of multiple requests in a single scenario.

Example: My scenario consists of two requests, request_1 and request_2, and I want them to run at 60 and 6 requests/minute respectively.

But the scenario always picks up the slowest pace value to execute all the requests inside it.

Workaround: create multiple scenarios.

Expectation:

Request_1 → 60 requests/min
Request_2 → 6 requests/min

Actual:

Request_1 → 6 requests/min
Request_2 → 6 requests/min

ScenarioBuilder scn = scenario("Search")
    .forever().on(
        pace(1)
            .exec(http("request_1")
                .post("/service/search/country")
                .body(StringBody("{Post Body}")))
            .pace(10)
            .exec(http("request_2")
                .post("/service/search")
                .body(StringBody("Post Body"))));

    {
        setUp(scn.injectOpen(atOnceUsers(1)).protocols(httpProtocol))
                .maxDuration(Duration.ofMinutes(2));
    }

I even tried to define the pace at the exec level, but I'm seeing the same behavior:

ScenarioBuilder scn = scenario("Search")
    .forever().on(
        // pace(1)
        exec(http("request_0")
            .post("/service/search/country")
            .body(StringBody("{Post Body}")))
            .pace(1)
        // .pace(10)
        .exec(http("request_1")
            .post("/service/search")
            .body(StringBody("Post Body")))
            .pace(10));

    {
        setUp(scn.injectOpen(atOnceUsers(1)).protocols(httpProtocol))
                .maxDuration(Duration.ofMinutes(2));
    }

Hi @AshishChadda,

A scenario is like the script that an actor reads to know what to do, how to interact, and what to say.

So, if I translate that into a human-readable script:

Scene: Search (<= that is the name of the scenario)

First, request_1:

  • wait 1 second since the last time you did this request (no wait if this is the first time) (<= this is the pace(1))
  • send the letter "{Post Body}" in an envelope to this address: "/service/search/country"

Then the second, request_2:

  • wait 10 seconds since the last time you did this request (no wait if this is the first time) (<= this is the pace(10))
  • send the letter "Post Body" in an envelope to this address: "/service/search"

Start again from the beginning of the scenario (<= forever())

In other words, your virtual user (the actor) will send both letters at the same rate: both requests happen once per loop iteration, and an iteration cannot be shorter than the largest pace, so everything settles at the slowest rate.
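That arithmetic can be sketched without Gatling at all. The `PaceSim` class below is a hypothetical, Gatling-free model of `pace` (the real implementation lives inside Gatling): each paced block waits until its own interval has elapsed since that block's last start, but both blocks share one loop, so the iteration length is dictated by the largest pace.

```java
import java.util.HashMap;
import java.util.Map;

// Simulates one virtual user looping over two paced blocks, in virtual time.
// pace(name, n) waits until n seconds have elapsed since that block's own
// last start, mimicking the behavior described above (hypothetical model,
// not Gatling's actual code).
class PaceSim {
    double now = 0;                               // virtual clock, in seconds
    final Map<String, Double> lastStart = new HashMap<>();

    double pace(String name, double interval) {
        Double last = lastStart.get(name);
        if (last != null && now - last < interval) {
            now = last + interval;                // wait out the remainder
        }
        lastStart.put(name, now);
        return now;
    }

    public static void main(String[] args) {
        PaceSim sim = new PaceSim();
        int r1 = 0, r2 = 0;
        // Run the "scenario" loop for 60 virtual seconds.
        while (sim.now < 60) {
            sim.pace("request_1", 1);  r1++;      // pace(1).exec(request_1)
            sim.pace("request_2", 10); r2++;      // pace(10).exec(request_2)
        }
        // Both requests execute once per loop, and the loop is throttled by
        // the slowest pace, so both land at the same ~6/min rate.
        System.out.println("request_1: " + r1 + "/min, request_2: " + r2 + "/min");
    }
}
```

Running it for one virtual minute shows both requests landing at the same ~6/min rate, which matches the "Actual" numbers in the question rather than the hoped-for 60/min for request_1.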

Now, consider this code:


  ScenarioBuilder scn = scenario("Search")
    .forever().on(
      pace(10).exec(
        randomSwitchOrElse().on(
          Choice.withWeight(90.0, exec(http("request_1")
            .post("/service/search/country")
            .body(StringBody("{Post Body}"))))
        ).orElse(exec(http("request_2")
            .post("/service/search")
            .body(StringBody("Post Body")))
        )
      ));

Your virtual user will execute a request only once per 10 seconds: request_1 about 90% of the time and request_2 the rest.

Does that suit your requirements?

Cheers!

Thanks, @sbrevet, for the explanation. Actually, I have 100+ APIs with different TPS requirements. I guess the only feasible way is to create 100+ scenarios to achieve this.

I was trying to compare it with JMeter's "Constant Throughput Timer", which behaves differently.

So we can say that in Gatling, the scope of pace is the entire scenario rather than a single request: all the requests in the scenario effectively execute at the rate of the slowest of them.

Weird requirements, but so be it!
They should be independent: one scenario per kind of user. You have 100+ kinds of users, so yes, 100+ scenarios.
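The one-scenario-per-pace workaround could look like this (a sketch reusing the endpoints and pacing from the question; the injection profile and scenario names are illustrative):

```java
// One scenario per pacing requirement, each with its own forever() loop.
ScenarioBuilder fast = scenario("Search fast")
    .forever().on(
        pace(1)                                  // ~60 requests/min
            .exec(http("request_1")
                .post("/service/search/country")
                .body(StringBody("{Post Body}"))));

ScenarioBuilder slow = scenario("Search slow")
    .forever().on(
        pace(10)                                 // ~6 requests/min
            .exec(http("request_2")
                .post("/service/search")
                .body(StringBody("Post Body"))));

{
    // Each scenario gets its own virtual user, so each pace only
    // throttles its own request instead of the shared loop.
    setUp(
        fast.injectOpen(atOnceUsers(1)),
        slow.injectOpen(atOnceUsers(1))
    ).protocols(httpProtocol)
     .maxDuration(Duration.ofMinutes(2));
}
```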

It's one way, for sure. That it's the only one and the easiest, I doubt.
I used only one of the conditional statements from the Gatling API.

But if you want parallel execution, different users and different scenarios make sense.

No, pace exists to avoid hitting a specific endpoint too often, and the threshold may be different for each endpoint, but not in the way you think.

For instance:

  ChainBuilder pollRequest = pace(10).exec(
      http("poll call")
          .get("/listen")
          .check(jsonPath("$.messages.id").ofList().saveAs("messageIds")));

  ChainBuilder readMessage = pace(1).exec(
      http("read message").get("/message/#{messageId}"));

  ScenarioBuilder scn = scenario("Reader").forever().on(
      exec(
          pollRequest,
          exec(session -> session.set("read", "0")),
          foreach("#{messageIds}", "messageId")
              .on(readMessage)
      )
  );

Here, we know the server will flag bad behavior if the same user (authentication token) polls more than once every 10 seconds, or reads messages faster than 1 per second.

Depending on the answer from the /listen request, we don't know in advance how long to wait, or whether there was a message to read in the last loop or not.

So no, it is not the “slowest of them”.


Thanks again, @sbrevet. Most of our projects have similar performance requirements: TPS varies across different endpoints. But I understand the concept of a scenario now.

Actually, I was getting confused between a JMeter thread group and a Gatling scenario. Now I have better clarity.