Steady-state workload with ramp-up prelude

Gatling version: 3.11.5
Gatling flavor: java kotlin scala javascript typescript
Gatling build tool: maven gradle sbt bundle npm

I read the guidelines and the "how to ask a question" topics.
I provided an SSCCE (or at least all the information needed to help the community understand my topic).
I copied the output I observe and explained what I think it should be.

I am using scenario.injectOpen(constantUsersPerSec(rate).during(duration)) to drive a steady-state workload for 20 minutes.
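
For reference, here is a minimal sketch of that setup, assuming the Java DSL; the base URL, endpoint, and rate are placeholders:

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;
import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;
import java.time.Duration;

public class SteadyStateSimulation extends Simulation {
    // Placeholder value; the real rate is in the hundreds of requests per second.
    double targetRate = 200;

    HttpProtocolBuilder httpProtocol = http.baseUrl("https://system-under-test.example");

    ScenarioBuilder scn = scenario("steady state workload")
        .exec(http("request").get("/endpoint"));

    public SteadyStateSimulation() {
        setUp(
            scn.injectOpen(constantUsersPerSec(targetRate).during(Duration.ofMinutes(20)))
        ).protocols(httpProtocol);
    }
}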

At this point I am driving workloads of hundreds of requests per second, and I am seeing poor response times for the initial 2-5 minutes of each run. I would very much like to ramp this open-model workload up gradually to ease the system into the heavy load. The goal of this performance test is neither stress testing nor capacity testing: I want to see what load the system can process and to study system behavior under that load.

I experimented with

scenario.injectOpen(
    rampUsersPerSec(lowRate).to(targetLoad).during(rampUpInterval)
).andThen(
    scenarioCopy.injectOpen(constantUsersPerSec(targetLoad).during(steadyStateInterval))
)

but the behavior of that was not acceptable. Since andThen() requires the first injection (and all of its users) to finish completely before the second one starts, the ramp is followed by a gap in the workload, and the steady-state load then arrives in a rush all over again. It is also unsatisfying to have to use a copy of the same scenario just to get around the requirement that scenario names be unique.
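
Spelled out as a full setUp inside the simulation (reusing the httpProtocol from the sketch above; the names and values are illustrative), that attempt looks roughly like this:

// Two definitions of the same scenario, only because scenario names must be unique.
ScenarioBuilder rampScn = scenario("workload - ramp")
    .exec(http("request").get("/endpoint"));
ScenarioBuilder steadyScn = scenario("workload - steady state")
    .exec(http("request").get("/endpoint"));

double lowRate = 10;
double targetLoad = 200;
Duration rampUpInterval = Duration.ofMinutes(5);
Duration steadyStateInterval = Duration.ofMinutes(20);

setUp(
    rampScn.injectOpen(
        rampUsersPerSec(lowRate).to(targetLoad).during(rampUpInterval)
    ).andThen(
        // andThen() waits for every user of the ramp population to terminate,
        // so the load dips before the steady-state population starts in a rush.
        steadyScn.injectOpen(
            constantUsersPerSec(targetLoad).during(steadyStateInterval)
        )
    )
).protocols(httpProtocol);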

Is there a solution to my problem that I am missing?


In general terms, I picture something like the following as a natural solution to my problem:

scenario.injectOpen(rampUsersPerSec(lowRate).to(targetRate).during(rampInterval).constantUsersPerSec(targetRate).during(steadyStateInterval))


For a scenario I need to model as a pseudo-closed workload, I found a way to emulate this basic behavior, starting with:

injectOpen(rampUsers(userCount).during(rampInterval))

and then, inside the scenario, looping for the entire steady-state interval with pacing before the request block, so that the sessions started by rampUsers emulate users coming and going at a metered rate (roughly as sketched below). This is not a satisfying solution, so I hope I do not have to expand its use.
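
Concretely, the emulation looks something like this, again assuming the Java DSL; the user count, pace interval, and durations are placeholders:

Duration rampInterval = Duration.ofMinutes(5);
Duration steadyStateInterval = Duration.ofMinutes(20);
Duration paceInterval = Duration.ofSeconds(10);
int userCount = 500;

// Each session started by rampUsers loops for the whole steady-state window;
// pace() makes every iteration take at least paceInterval, which meters the
// request rate and approximates users coming and going.
ScenarioBuilder pseudoClosed = scenario("pseudo-closed workload")
    .during(steadyStateInterval).on(
        pace(paceInterval)
            .exec(http("request").get("/endpoint"))
    );

setUp(
    pseudoClosed.injectOpen(rampUsers(userCount).during(rampInterval))
).protocols(httpProtocol);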

In general, I would find the ability to run a ramp-up followed by a steady-state load to be a practical tool for my Gatling toolbox.

I've added a section to the documentation; please check the Gatling injection scripting reference.
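
In short, you can pass a sequence of injection steps to a single injectOpen call; they execute one after the other, so something like this (using your placeholder names) runs the ramp and then immediately holds the steady state, all on one scenario:

setUp(
    scn.injectOpen(
        // ease into the load first...
        rampUsersPerSec(lowRate).to(targetRate).during(rampUpInterval),
        // ...then hold the target rate for the steady-state window
        constantUsersPerSec(targetRate).during(steadyStateInterval)
    )
).protocols(httpProtocol);

There is no gap between the steps and no need for a duplicated scenario.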

Stephane,

Thank you for your quick response. The solution you document works just as I hoped for my test case. In addition, I should be able to go back and use this same approach to clean up the implementation of a previous scenario.
