Why does Gatling stop the simulation when any scenario exits and doesn't wait until the end?

Let’s say I have this configuration

val scn = (name: String) => scenario(name)
  .forever {
    exec(request)
  }

setUp(
  scn("scn1").inject(atOnceUsers(1))
    .throttle(
      jumpToRps(1), holdFor(10 seconds)
    ),
  scn("scn2").inject(atOnceUsers(1))
    .throttle(jumpToRps(1), holdFor(20 seconds))
).protocols(http.baseURLs(url))

I would expect the whole simulation to run for 20 seconds - until everything has finished. What actually happens is that the simulation stops after 10 seconds, right after the first scenario finishes.

---- Requests ------------------------------------------------------------------
> Global                                                   (OK=20     KO=0     )
> scn1 / HTTP Request                                      (OK=10     KO=0     )
> scn2 / HTTP Request                                      (OK=10     KO=0     )

---- scn1 ----------------------------------------------------------------------
[--------------------------------------------------------------------------]  0%
        waiting: 0      / active: 1      / done:0     
---- scn2 ----------------------------------------------------------------------
[--------------------------------------------------------------------------]  0%
        waiting: 0      / active: 1      / done:0

Have you read the doc? https://gatling.io/docs/2.3/general/simulation_setup/?highlight=throttle#throttling

“If your injection lasts less than the throttle, your simulation will simply stop when all the users are done. If your injection lasts longer than the throttle, the simulation will stop at the end of the throttle.”

Hi Stephane,

This is not what I meant. One way or another, the simulation stops once any of the scenarios is done - whether the injection finishes or the throttling finishes.

What I’m trying to say is that since I have two parallel scenarios, I want the simulation to finish when all of them are done.

And BTW I think there is also a bug, because even if the injection lasts, say, 5 seconds and the throttling 10 seconds, the simulation ends after 10 seconds, not after 5 as written in the documentation.

One way or another, the simulation stops once any of the scenarios is done - whether the injection finishes or the throttling finishes.

You’re getting “done” wrong. A scenario is “done” once all the users of this scenario complete their journey. If you use a forever loop, they never complete.
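If the users are supposed to finish on their own, the loop has to be bounded. A minimal sketch, assuming the same `request` and the Gatling 2.3 DSL, replacing forever with during (the 20-second window is illustrative):

```scala
// Bounded alternative to forever: each user loops over the request for a
// fixed window and then completes, so the scenario can actually reach "done".
val scn = (name: String) => scenario(name)
  .during(20 seconds) {
    exec(request)
  }
```

With a bounded loop the run ends when the last user finishes its window, without needing the throttle to kill it.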

What I’m trying to say is that since I have two parallel scenarios, I want the simulation to finish when all of them are done.

Don’t use a throttle shorter than your intended duration, then. Throttling means requests get enqueued; when the throttling terminates, all the pending requests would be released at once, hence the forceful kill.

And BTW I think there is also a bug, because even if the injection lasts, say, 5 seconds and the throttling 10 seconds, the simulation ends after 10 seconds, not after 5 as written in the documentation.

Same point as above, you’re confusing injection duration and completion.

Let me describe the use case.

The overall simulation is as follows (pseudocode):

prepare:
  http_request = repeat forever

run:
  inject N users at once

  for step in 1..total_steps:
    jump to rps (step_size * step)
    hold for N seconds

As you can see, I want to throttle in steps and hold each step for a while. But the thing is that I want separate stats for each step. To do this I set up several scenarios like this:

(1 to totalSteps).map(step => {
  defaultScenario(loadStepName(step))
    .inject(
      nothingFor(startTime(step)),
      atOnceUsers(concurrency)
    )
    .throttle(
      jumpToRps(0),
      holdFor(startTime(step)),
      jumpToRps(rps(step)),
      holdFor(holdStepForSec),
      jumpToRps(0),
      holdFor(theRest)
    )
}).toList

And there you can see that at the end of each scenario I need to have

jumpToRps(0), holdFor(theRest) // huge time

to prevent premature exit.

So I do see your point, but my question is: can I have separate stats for each step without spawning parallel scenarios to do that?

Try wrapping the requests of a particular “step” in a uniquely named group, e.g. a name that includes the step’s parameters. The stats of each group will then be reported separately, though be warned that some stats, like requests per second, will not be relevant.
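If the only reason for the parallel scenarios is per-step stats, one scenario could instead pick the group name dynamically from the elapsed time. A sketch of the step-selection logic only; `currentStep` and its parameters are hypothetical helpers, not part of the Gatling API:

```scala
// Hypothetical helper: map elapsed seconds since the run started to the
// index of the currently active load step (0 = still warming up).
def currentStep(elapsedSec: Long, warmupSec: Long,
                holdStepForSec: Long, totalSteps: Int): Int = {
  if (elapsedSec < warmupSec) 0
  else {
    val step = (elapsedSec - warmupSec) / holdStepForSec + 1
    math.min(step, totalSteps.toLong).toInt // clamp to the last step
  }
}
```

Inside the scenario this could feed a dynamic group name, e.g. `group(session => loadStepName(currentStep(...)))`, assuming your Gatling version accepts an `Expression[String]` group name, so every request is attributed to the step active when it fired.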

I already have that, am I missing something?

setUp(
  warmupScenario
    .inject(
      atOnceUsers(concurrency / warmupCoeff)
    )
    .throttle(
      (stepThrottling(stepSize / warmupCoeff, holdStepForSec / warmupCoeff, maxRps / warmupCoeff)
        :+ jumpToRps(0)
        :+ holdFor(theRest)): _*
    )
    +: loadScenarios()
).protocols(http.baseURLs(url))
  .assertions(
    (assertLoadScenarios()
      :+ (global.responseTime.percentile3 lt SLO_LATENCY)
      :+ (global.failedRequests.percent lt SLO_MAX_FAILURES_PCT)
      :+ (global.allRequests.count is maxRequestCount * remoteHosts)): _*
  )
  .maxDuration(warmupTime + loadTime)

def warmupScenario() = scenario("Warming Up")
  .feed(replayFileFeeder.circular)
  .forever {
    group("Warming Up") {
      exec(emptyHttpRequest.silent)
    }
  }

def defaultScenario(name: String) = scenario(name)
  .feed(replayFileFeeder.circular)
  .forever {
    group(name) {
      exec(emptyHttpRequest)
    }
  }

def loadScenarios(): List[PopulationBuilder] = {
  val totalSteps = maxRps / stepSize
  val startTime = (step: Int) => warmupTime + ((step - 1) * holdStepForSec)
  val rps = (step: Int) => step * stepSize

  (1 to totalSteps).map(step => {
    defaultScenario(loadStepName(step))
      .inject(
        nothingFor(startTime(step)),
        atOnceUsers(concurrency)
      )
      .throttle(
        jumpToRps(0),
        holdFor(startTime(step)),
        jumpToRps(rps(step)),
        holdFor(holdStepForSec),
        jumpToRps(0),
        holdFor(theRest)
      )
  }).toList
}

def assertLoadScenarios(): List[Assertion] = {
  val totalSteps = maxRps / stepSize
  (1 to totalSteps).map(step => details(loadStepName(step)).responseTime.percentile3 lt SLO_LATENCY).toList
}

def stepThrottling(stepSize: Int, holdStepForSec: Int, maxRps: Int): List[ThrottleStep] = {
  (1 to (maxRps / stepSize)).toList.flatMap(step => List(jumpToRps(step * stepSize), holdFor(holdStepForSec seconds)))
}
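For reference, the shape of the ramp that stepThrottling builds can be modeled with plain pairs; this model function is illustrative only and not part of the simulation:

```scala
// Model the throttle profile as plain (targetRps, holdSeconds) pairs,
// mirroring the jumpToRps/holdFor sequence that stepThrottling emits.
def rampPairs(stepSize: Int, holdStepForSec: Int, maxRps: Int): List[(Int, Int)] =
  (1 to (maxRps / stepSize)).toList.map(step => (step * stepSize, holdStepForSec))
```

For example, `rampPairs(10, 30, 30)` yields `List((10, 30), (20, 30), (30, 30))`: three steps of 10 rps each, held for 30 seconds apiece.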

def loadStepName(step: Int): String = {
  s"Load Step #${step} [ rps: ${step * stepSize} ]"
}