Add option to "collapse" terminal logging

Gatling build tool: npm

Hi, I’m currently working on a concurrent workflow, so I have to define many scenarios, each running a single API call. The structure currently looks like this:

export default simulation((setUp) => {
  setUp(
    scenario1.injectOpen(constantUsersPerSec(1).during(1)),
    scenario2.injectOpen(constantUsersPerSec(1).during(1)),
    scenario3.injectOpen(constantUsersPerSec(1).during(1)),
    scenario4.injectOpen(constantUsersPerSec(1).during(1)),
    ...
  );
});

The terminal logging output looks like this in the scenario section:


There will be a lot of scenarios printed out, so I think a collapse option for the terminal logging would be nice. Or, if I missed anything in the docs, please point it out for me :slight_smile: Thank you!

Hi,

I’m not sure we’re going to do this.
However, instead of using tons of scenarios, you could use a switch
component.

Hi,
Here is my tested code. Sorry, I’m just playing in Scala for quick reference at home.

        .exec(
          uniformRandomSwitch(
            forever(
              pace(2000.millis)
                .exec(reqresCall.translateCall())
            ),
            forever(
              pace(1000.millis)
                .exec(reqresCall.googleCall())
            )
          )
        )

So my workflow is that each user runs a forever loop. As the code states, I would expect translateCall() at 30 req/min and googleCall() at 60 req/min. With the same code, I get three different outputs.
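For reference, the expected figures follow directly from the pace values: a single user looping with pace(p millis) can issue at most 60000 / p requests per minute. A quick plain-Java sketch of that arithmetic (not Gatling code):

```java
public class PaceRates {
    // Max requests per minute for one user looping with pace(paceMillis)
    static long reqPerMin(long paceMillis) {
        return 60_000L / paceMillis;
    }

    public static void main(String[] args) {
        System.out.println("pace(2000 ms) -> " + reqPerMin(2000) + " req/min"); // 30
        System.out.println("pace(1000 ms) -> " + reqPerMin(1000) + " req/min"); // 60
    }
}
```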
The correct one

========================================================================================================================
2025-07-15 17:41:32 GMT                                                                              60s elapsed
---- Requests -----------------------------------------------------------------------|---Total---|-----OK----|----KO----
> Global                                                                             |        90 |        90 |         0
> call google                                                                        |        60 |        60 |         0
> call translate                                                                     |        30 |        30 |         0

---- DemoSimulation_Scn ------------------------------------------------------------------------------------------------
[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||]     0%
          waiting:         0 / active:         2  / done:         0
========================================================================================================================

And an unexpected one:

========================================================================================================================
2025-07-15 17:43:10 GMT                                                                              60s elapsed
---- Requests -----------------------------------------------------------------------|---Total---|-----OK----|----KO----
> Global                                                                             |       119 |       119 |         0
> call google                                                                        |       119 |       119 |         0

---- DemoSimulation_Scn ------------------------------------------------------------------------------------------------
[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||]     0%
          waiting:         0 / active:         2  / done:         0
========================================================================================================================

It seems like the distribution may get stuck, so I’m not really sure I should go with switch as you suggested. While I haven’t tried randomSwitch, I am a bit wary of putting long values into the switch rates, as I have many small scenarios…

Edit: each scenario needs only 1 user, so the total is 2, as can be seen in the terminal logging.

> While I haven’t tried randomSwitch, I am a bit wary of putting long values into the switch rates, as I have many small scenarios…

I don’t get your issue, but you could use a doSwitch where you would implement your own key function based on the forever loop index.


Just to sum-up:
1/ Terminal scenario collapse: declined
2/ I think uniformRandomSwitch has an issue where users are not evenly distributed when there is a forever loop inside it
3/ I will try doSwitch as suggested, thanks

There’s no issue with uniformRandomSwitch.
When running your code with 2 users, there’s a 50% chance that they both go to the same branch.
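To illustrate that point with a plain Java sketch (not Gatling code): with 2 users each landing on one of 2 uniformly random branches, 2 of the 4 equally likely assignments put both users on the same branch, i.e. a 50% chance that one branch gets no user at all:

```java
public class SameBranchOdds {
    // Count (same-branch, total) over all equally likely branch assignments
    // for 2 users; branch names are just for illustration.
    static int[] sameVsTotal(String[] branches) {
        int same = 0, total = 0;
        for (String first : branches) {
            for (String second : branches) {
                total++;
                if (first.equals(second)) same++;
            }
        }
        return new int[] { same, total };
    }

    public static void main(String[] args) {
        int[] r = sameVsTotal(new String[] { "translate", "google" });
        System.out.println("P(both users on the same branch) = " + r[0] + "/" + r[1]); // 2/4
    }
}
```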

doSwitch actually helped me out! :smiley:
Thanks a lot!

Sample code in Scala for reference:

      val csvFill = csv("keyVar.csv").queue()
      val DemoScenario_Scn = scenario("DemoSimulation_Scn")
        .feed(csvFill)
        .exec(
          doSwitch("#{keyVarTest}")(
            "1" -> forever(
              pace(2000.millis)
                .exec(reqresCall.translateCall())
            ),
            "2" -> forever(
              pace(1000.millis)
                .exec(reqresCall.googleCall())
            )
          )
        )

I needed to make a CSV feed for this condition!

Why not just use Session#userId() and some modulo?
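For anyone else landing here, a plain Java sketch (not Gatling code) of the modulo idea: Gatling assigns user ids sequentially, so userId % branchCount spreads users evenly over the branches without needing a CSV feeder. The branch count of 3 is just an example:

```java
import java.util.Map;
import java.util.TreeMap;

public class ModuloBranching {
    // Map a sequential user id onto one of branchCount branches
    static int branchFor(long userId, int branchCount) {
        return (int) (userId % branchCount);
    }

    public static void main(String[] args) {
        // 12 sequential users over 3 branches -> 4 users per branch
        Map<Integer, Integer> counts = new TreeMap<>();
        for (long id = 1; id <= 12; id++) {
            counts.merge(branchFor(id, 3), 1, Integer::sum);
        }
        System.out.println(counts); // {0=4, 1=4, 2=4}: an even split
    }
}
```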


Hi @trinp,

I also faced similar situations not long ago, so maybe my findings will be helpful.

This behavior can be achieved in different ways — each with its own pros and cons.

ClosedModel

/// Pros: Assign specified users to a specific flow
/// Cons: RPS for specific requests drops if response time > 1s

  static long duration = 60;
  private static final ScenarioBuilder scenarioA = scenario("demo")
          .doSwitchOrElse(session -> session.userId()).on(
                  onCase(1).then(during(duration).on(pace(1).exec(dummy("requestA","#{randomInt(500,1500)}")))),
                  onCase(2).then(during(duration).on(pace(2).exec(dummy("requestB","#{randomInt(500,1500)}"))))

          ).orElse(
                  dummy("otherFancyStuff","#{randomInt(500,800)}"),
                  pause(1)
          );


  {
    setUp(scenarioA.injectClosed(constantConcurrentUsers(5).during(duration)));
  }

OpenModel - FizzBuzz style


/// Pros: Double rates can be used (e.g., 0.5, 0.7)
/// Cons: RPS for specific requests drops if response time > 1s

    static long duration = 60;
    static double rate = 1;
    private static final ScenarioBuilder scenarioB = scenario("demo").exec(

            /// 1/1 rate
            dummy("requestA", "#{randomInt(300,500)}"),

            /// 1/2 rate
            doIf(session -> session.userId() % 2 == 0).then(
                    dummy("requestB", "#{randomInt(300,500)}")
            ),
            /// 1/3 rate
            doIf(session -> session.userId() % 3 == 0).then(
                    dummy("requestC", "#{randomInt(300,500)}")
            )
    );


    {
        setUp(scenarioB.injectOpen(constantUsersPerSec(rate).during(duration)));
    }
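A back-of-the-envelope check (plain Java, not Gatling) of the FizzBuzz-style rates above: every sequential user id fires requestA, every 2nd id fires requestB, every 3rd fires requestC, so at 1 user/s the effective rates are 1, 0.5 and roughly 0.33 req/s:

```java
public class FizzBuzzRates {
    // Count how many of the first n sequential user ids fire each request
    static int[] counts(int n) {
        int a = 0, b = 0, c = 0;
        for (long userId = 1; userId <= n; userId++) {
            a++;                      // requestA: every user
            if (userId % 2 == 0) b++; // requestB: every 2nd user
            if (userId % 3 == 0) c++; // requestC: every 3rd user
        }
        return new int[] { a, b, c };
    }

    public static void main(String[] args) {
        int[] r = counts(60);
        System.out.println("A=" + r[0] + " B=" + r[1] + " C=" + r[2]); // A=60 B=30 C=20
    }
}
```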

OpenModel (this is the optimal approach if you need to hold a strict RPS)

 /// Pros:
 //  - Double rates can be used (e.g., 0.5, 0.7)
 //  - Response time does not affect RPS for specific requests
 /// Cons: each scenario needs a unique name =(

    static long duration = 60;
    static double rate = 1;
    private static final ScenarioBuilder scenarioA = scenario("demo1").exec(
            dummy("requestA", "#{randomInt(300,500)}")
    );

    private static final ScenarioBuilder scenarioB = scenario("demo2").exec(
            dummy("requestB", "#{randomInt(300,500)}")
    );


    {
        setUp(
                scenarioA.injectOpen(constantUsersPerSec(rate).during(duration)),
                scenarioB.injectOpen(constantUsersPerSec(rate / 2).during(duration))
        );
    }

Hi @slandelle
Your Session#userId() suggestion helped me clean up the code. I don’t really get what you mean by using modulo, but I’m happy to see someone review my tiny piece of code.

Hi @i-nahornyi
Your suggestions showed me a new way to simulate concurrency for the load; for me, however, using pace is enough (for now). The last OpenModel (the optimal one) was what I had been following in the first place, but the terminal log was piling up too much, which is what caused me to raise this topic to see if Stéphane would tweak the terminal log a bit.
In the end, this approach got me the concurrency I need while minimizing the terminal log:

        .exec(
          doSwitch(session => session.userId)(
            1 -> forever(
              pace(2000.millis)
                .exec(reqresCall.translateCall())
            ),
            2 -> forever(
              pace(1000.millis)
                .exec(reqresCall.googleCall())
            ),
            3 -> forever(
              pace(3000.millis)
                .exec(reqresCall.youtubeCall())
            )
          )
        )

The forever is a little ugly, but since I want the same user to repeat requests and the number of users is not big (only 12-14 users needed), this way is optimal for me.

Thanks guys for helping me out, I appreciate the effort.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.