(rampUsers over period) rate is off


I have the following problem. I want to run 9000 short-lived scenarios over a period of 9 hours (so that on average, there would be 1000 scenarios per hour):

setUp(scn.inject(rampUsers(9000) over (9 hours)).protocols(httpConf))
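For reference, the schedule implied by this injection profile works out to one new user every 3.6 seconds. A quick arithmetic check (illustrative only, not Gatling code):

```scala
// Intended spacing between user starts for rampUsers(9000) over (9 hours)
object IntendedRate extends App {
  val users = 9000
  val rampSeconds = 9 * 3600                        // 32400 s total ramp
  val secondsPerUser = rampSeconds.toDouble / users // 3.6 s between user starts
  val usersPerHour = users / 9.0                    // 1000 users per hour on average
  println(s"$secondsPerUser s between users, $usersPerHour users/hour")
}
```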

However, Gatling does not honor this contract.
Each session creation is logged, and the resulting log file contains 9000 such records, with the first and the last being:

19:13:35.276 [INFO ] SignupSimulation - Session(Signup and daily activity,1...
02:43:31.916 [INFO ] SignupSimulation - Session(Signup and daily activity,9000...

Note that the first user was created at 19:13, while the last at 02:43 the next morning.
So instead of 9 hours, the ramp-up took only about 7.5 hours.

Hardware monitoring has also registered system load only during this shorter period of time.

What could be the problem?

Gatling version is 2.3.0


I’ve just added a test (on master) verifying user scheduling.
I got a +4 seconds drift for a ramp of 9000 users over 9 hours.

I sure want to investigate this drift, but I frankly have no idea what happened on your side.

If you can come up with a reproducer (that I don’t have to run for 9hrs of course), I’ll gladly investigate.


Thanks for looking into this.

I, too, have run a couple of tests, and they were correct. And so were my other runs of the same Gatling script (albeit with different ramp-up settings; they are specified via a config file, not hard-coded).

I even started suspecting that my memory had failed me and that the actual ramp-up settings were different for that run. But no: at startup, my Gatling script logs its entire configuration (including the ramp-up settings) to a file, and the settings checked out. They truly were 9000 users over 9 hours.

Also of note: 9 hours / 7.5 hours is exactly 120%. Might that hint at something?


*Stéphane Landelle*
*GatlingCorp CEO*

Turns out the problem is 100% reproducible, both on an Amazon EC2 instance and on a laptop.

here’s the complete code:

import io.gatling.core.Predef._
import io.gatling.http.Predef.http
import io.gatling.http.protocol.HttpProtocolBuilder
import org.slf4j.{Logger, LoggerFactory}

import scala.concurrent.duration._

class Test extends Simulation {

  val logger: Logger = LoggerFactory.getLogger(this.getClass)
  val httpConf: HttpProtocolBuilder = http
  val scn = scenario("Test").exec({ session => logger.info(session.toString); session })

  setUp(scn.inject(rampUsers(9000) over (9 hours)).protocols(httpConf))
}

This code writes a log file with one record per session created, and the following shows that 20 sessions are started each minute:

Alekseis-MBP-2:logs aleksei$ cut -b 1-16 Alekseis-MBP-2.lan.log | uniq -c
  19 2017-10-10 13:39
  20 2017-10-10 13:40
  20 2017-10-10 13:41
  20 2017-10-10 13:42
  20 2017-10-10 13:43

which means the simulation of 9000 users will finish in exactly 7.5 hours (450 minutes at 20 users per minute), and it actually does, too.
Not in 9 hours as requested.
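A quick check of that arithmetic (purely illustrative):

```scala
// At the observed rate of 20 new users per minute,
// 9000 users take 450 minutes, i.e. exactly 7.5 hours.
object FinishTimeCheck extends App {
  val usersPerMinute = 20
  val totalUsers = 9000
  val minutes = totalUsers / usersPerMinute // 450 minutes
  val hours = minutes / 60.0                // 7.5 hours
  println(s"$totalUsers users at $usersPerMinute/min finish in $hours hours")
}
```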

please run my snippet and let's compare the results.

and here’s the console output from Gatling, which also shows that 100 users were executed in 300 seconds (so 9000 users will get executed in 7.5 hours)

It seems there is a precision-loss issue: 9 hours × 60 × 60 / 9000 users = 3.6 seconds per user, but Gatling appears to round this down to 3 seconds per user.
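A sketch of how such truncation would produce exactly the observed timings. This is illustrative arithmetic only, not Gatling's actual scheduling code:

```scala
import scala.concurrent.duration._

object TruncationSketch extends App {
  val users = 9000
  val ramp: FiniteDuration = 9.hours
  // Exact spacing between user starts:
  val exactSpacing = ramp.toSeconds.toDouble / users      // 3.6 s per user
  // If the spacing were truncated to whole seconds (the suspected bug),
  // integer division drops the fractional part:
  val truncatedSpacing = ramp.toSeconds / users           // 3 s per user
  // The ramp then finishes early:
  val actualRampHours = truncatedSpacing * users / 3600.0 // 7.5 hours
  println(s"exact: $exactSpacing s, truncated: $truncatedSpacing s " +
    s"-> ramp lasts $actualRampHours h")
}
```

Note that 3.6 / 3 = 1.2, which would also explain the exact 120% ratio observed earlier in the thread.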

It looks like there’s indeed a rounding issue for injection rates that are lower than 1 user per second.
Then, it seems it’s been fixed in master/upcoming Gatling 3.
Code has changed a lot there, and I don’t think it can be backported.

The other guys at GatlingCorp and I are pretty swamped at the moment, so I don’t think we’ll have the cycles to investigate any time soon, sorry.

Hi Stéphane,

I am seeing a similar issue with Gatling 2.3, but in my case the simulation lasts longer than expected.

The simulation should last 300 seconds (scn.inject(rampUsersPerSec(0) to (75) during (300 seconds))), but it lasts 360 seconds. It gets even worse if the duration is set to 6000 seconds.

Would it come from the same rounding issue?
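For what it's worth, the overshoot here is also a factor of exactly 1.2 (360 s / 300 s), the same ratio as the undershoot in the original report. A back-of-the-envelope check (this is just an observation, not a diagnosis):

```scala
object OvershootRatio extends App {
  val expectedSeconds = 300.0
  val observedSeconds = 360.0
  val overshoot = observedSeconds / expectedSeconds // 1.2
  println(s"overshoot factor: $overshoot (original report's ratio: ${9.0 / 7.5})")
}
```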

Would you recommend using a different version of Gatling?


I just ran a test with the same injection profile and scenario as you (a single request per VU, rampUsersPerSec(0) to 75 during 300) with Gatling 2.3.0: