Throttle Usage

I have recently started using Gatling to run performance tests that ramp up to a peak of 600 rps.

I know I can simply inject these ramps:

constantUsersPerSec(100) during(1 min),
constantUsersPerSec(200) during(1 min),
constantUsersPerSec(300) during(1 min)
This setup does work, but from the way I have seen others use it, .throttle seems to be the right way to do it.

But the throttle functionality just doesn't seem to work. In the few set-ups that do compile, nothing happens. I have searched every bit of documentation I can find and read through many, many other topics from this group, but I just can't get it to produce anything.

(This is probably more than you need to see my mistake, but here is what I have been trying to do.)

object RESTrequest {

  val 1Feeder = ssv("file1.ssv").circular
  val 2Feeder = ssv("file2.ssv").circular
  val 3Feeder = ssv("file3.ssv").circular

  val headers_10 = Map(
    "Content-Type" -> "application/json",
    "Accept-Charset" -> "ISO-8859-1,utf-8;q=0.7,*;q=0.7",
    "Keep-Alive" -> "115",
    "Connection" -> "keep-alive",
    "X-Requested-With" -> "XMLHttpRequest")

  val initiateC2C = feed(1Feeder)
    .basicAuth("${username}", "${password}")


val scn = scenario("TestScenario").exec(RESTrequest.initiateC2C)

setUp(scn).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf)

//or perhaps
setUp(scn.inject(atOnceUsers(2000)).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf))

I have tried all sorts of different setUp configurations, with every combination of brackets, but I'm just not having any luck.

It is likely there is some key part of the Gatling system I don't understand, but any help would be greatly appreciated.

setUp(scn).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf)

Won't compile, as you don't define an injection profile.

setUp(scn.inject(atOnceUsers(2000)).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf))

LGTM. Just be aware that your throttle will be lifted after 10 seconds, and everything will probably blow up in your face then, as you inject 2000 users :slight_smile:

What’s your problem exactly? What kind of rps profile do you get, and how is it different from what you expect?

I also have problems understanding the difference between "inject" and "throttle" and how they interact.

It seems (from trying different combinations) that "inject" is essential and "throttle" optional. But the documentation implies that "throttle" can be applied directly to a scenario. That gives me a compilation error: "value throttle is not a member of io.gatling.core.structure.ScenarioBuilder"

When I try:

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)

I get 3 requests fired almost concurrently.

When I try:

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(100 seconds)).protocols(httpConf)

I get 3 requests fired one per sec.

Why does the holdFor time (10 vs 100 seconds) make a difference, since the whole thing should not take more than 3 seconds anyway?

Any insight greatly appreciated.

I love it when "Albert Einstein" has problems understanding something I did; serves you right! :wink:

I just pushed some documentation improvements, but still:

  • You have to understand that throttle is a bottleneck. You still have to define inject, and inject sufficient load to reach your bottleneck.
  • It seems you're confusing atOnceUsers and constantUsersPerSec.
  • Your two setups should produce the same thing; this looks like a bug. I will try to reproduce.
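To make the first bullet concrete, here is a plain-Scala sketch (not Gatling DSL; achievedRps is a made-up name for illustration) of throttle acting as a bottleneck: the rate you observe is the injected rate capped by the throttle ceiling, so a throttle on its own produces nothing.

```scala
// Throttle never generates load by itself: it only caps what inject offers.
def achievedRps(offeredRps: Double, throttleRps: Double): Double =
  math.min(offeredRps, throttleRps)

// Injecting only 1 user/s under a 100 rps throttle stays at 1 rps:
// the throttle is irrelevant until the offered load reaches the ceiling.
assert(achievedRps(1.0, 100.0) == 1.0)
// Injecting 2000 users at once easily saturates a 2 rps throttle.
assert(achievedRps(2000.0, 2.0) == 2.0)
```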

Mmm, can’t reproduce.

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(100 seconds)).protocols(httpConf)

give me exactly the same thing, as expected (the scenario has only one request):

  • 1 request at t0
  • 1 request at t0 + 1s
  • 1 request at t0 + 2s
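With one request per user, that schedule can be modeled in plain Scala (throttledTimes is a hypothetical helper, not part of Gatling): atOnceUsers queues everyone at t0, and jumpToRps(r) releases the queued requests at r per second.

```scala
// Request i of a throttled atOnceUsers batch fires at t0 + i / rps seconds,
// assuming one request per user and holdFor outlasting the whole schedule.
def throttledTimes(users: Int, rps: Int): Seq[Double] =
  (0 until users).map(i => i.toDouble / rps)

// 3 users under jumpToRps(1): t0, t0 + 1s, t0 + 2s, as observed above.
assert(throttledTimes(3, 1) == Seq(0.0, 1.0, 2.0))
```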

If you carry on past t0 + 10 seconds, the request rate changes once the holdFor has expired, which gives a different result for the two configs at, say, 20 seconds into the test.
The throughput will be the same between the two for t < 10 s and t > 100 s.


it is worth understanding where throttling came from:

it was introduced before injection rates (e.g. constantUsersPerSec()):

so it was created in a world where only closed models (where N users are injected and they loop) were possible.
The workaround for tools that can only do this (like JMeter, LoadRunner, etc.) when you want to fix the rate is to provide pacing (LoadRunner), a Constant Throughput Timer (JMeter) or throttling (Gatling). See for example: "…we have a fixed quantity of users but we need to test the application with a different number of requests per minute. Using the Thread Group parameters we can manage the number of users but not the frequency of requests. So, how can we deal with this scenario?…"

the closed workload model, where users loop, is similar to a call center where a fixed number of identifiable users (e.g. "Alice") or analogous software processes loop around doing the same or similar tasks, and their throughput is determined by the response time of the tasks they apply to the system. I can't think of many real-world examples where throttle would be applied to that closed workload.

the open workload model, where independent users arrive at a rate (like public websites, physical retail shops, API requests, etc.), has the rate pre-existing, so there is generally no need to throttle in this case either, as the injection rate should provide it. There are use cases, though, where to make tests simpler, or to hit some complex test constraint, as in Joerg's case, you could throttle per scenario or globally.

the question then may be whether you should inject open (constantUsersPerSec) or closed (atOnceUsers). There are several threads on that topic.
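As a rough plain-Scala illustration of the two profiles (openArrivals and closedArrivals are made-up names, not Gatling API): in the open model users arrive spread over time at a fixed rate, while in the closed model they all arrive at t0 and then loop.

```scala
// Open model: constantUsersPerSec(rate) during(durationSec) gives
// arrivals evenly spaced at 1/rate second intervals.
def openArrivals(rate: Int, durationSec: Int): Seq[Double] =
  (0 until rate * durationSec).map(i => i.toDouble / rate)

// Closed model: atOnceUsers(n) starts every user at t0; throughput
// then depends on response times as the users loop.
def closedArrivals(n: Int): Seq[Double] = Seq.fill(n)(0.0)

assert(openArrivals(1, 3) == Seq(0.0, 1.0, 2.0))
assert(closedArrivals(3) == Seq(0.0, 0.0, 0.0))
```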


Here is the output I get for setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)

ok so we forgot to check whether your scenario loops?

val scn_sleep_closed = scenario("scn_sleep").exec(forever() { req_sleep })

My test didn’t loop.

Which version do you use?

Could you share your Simulation, please?

Many thanks for the responses.

I am not looping (not knowingly at least)
The simulation is listed at the top of the thread.
I am using 2.0.0RC2

RE: the behaviour being different for t > 10 and t < 100: there are only 3 requests being sent at 1 rps, so the whole thing should take place within the t < 10 interval in both cases. Am I missing something?

I am getting the impression I should be using only inject, rather than inject + throttle, as I am trying to run a ramp test with steps of constant request rates.
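For reference, a stepped inject-only ramp like the one in the first post (100, 200, then 300 users per second, one minute each) offers a total load that can be sanity-checked in plain Scala (totalUsers is a hypothetical helper, not Gatling code):

```scala
// Each step is a (rate in users/s, duration in seconds) pair;
// with an open model the total user count is just rate * duration summed.
def totalUsers(steps: Seq[(Int, Int)]): Int =
  steps.map { case (rate, durSec) => rate * durSec }.sum

// The three one-minute steps from the first post:
assert(totalUsers(Seq((100, 60), (200, 60), (300, 60))) == 36000)
```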

Could be related to this:
Could you upgrade to RC3, please?

Upgraded to RC3 but it did not make a difference. Still getting different behaviour for the two values of “holdFor”

The simulation is listed at the top of the thread.

Well, I'm sure it's not. What you first posted doesn't even compile (variable names such as 1Feeder are illegal; they can't start with a number).
Could you share the exact Simulation, please?

You can send privately if that’s an issue for you.

Yep, apologies both, I got mixed up with the other example on this thread.
I couldn't reproduce either.
Have you tried the following?

.inject(constantUsersPerSec(1) during(3 seconds) )

which worked for me.

Unless your real system has 3 users/requests arriving at exactly the same time, and there is a component in the real system that is not in the SUT (system under test) that throttles those requests to 1 rps, the above may read and work better.

Clearly there's a defect somewhere, so it should continue to be followed up with Stéphane.

OK I can’t fool you :slight_smile:
Here is a simulation that I just ran that displays the behaviour I mentioned with RC3:

Thanks again for your help guys

Regrettably, the site and APIs we are hitting are all within our internal system, so we can't put them out (and if we did, you wouldn't get anything back). Albert here just put up a scrubbed version of what we are trying to achieve. This works fine using just the injects (of however many requests we want to send), and we can create the flow we need within those injections. I believe we are implementing an open workload model, where we need to check how many concurrent users our service can handle using ramps. We pull the user data out of .ssv files to fill in the user details each time (the next packet is just the next entry down in those files).

It would be really good to know what is happening with these throttles, but I think you have pointed us towards the solution for today's problem :slight_smile: We want to keep using the platform, though, so we obviously want to find out what is happening.

The Gatling docs are good, clear, and concise, but they just lack some of the finer details when it comes to using different aspects together.

thanks again

I changed your sample to:

package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
import io.gatling.core.controller.throttle._

class TestThrottle extends Simulation {

object RESTrequest {

val headers_10 = Map(
  "Content-Type" -> "application/json",
  "Accept-Charset" -> "ISO-8859-1,utf-8;q=0.7,*;q=0.7",
  "Keep-Alive" -> "115",
  "Connection" -> "keep-alive",
  "X-Requested-With" -> "XMLHttpRequest")

val initiateRequest = exec(http("Post")
//.basicAuth("user1", "1234")

val httpConf = http
  .acceptEncodingHeader("gzip, deflate")
  .userAgentHeader("Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20110302 Oracle/3.6-4.0.1.el5_6 Firefox/3.6.14")

//Now, we can write the scenario as a composition
val scn = scenario("Scenario Name").exec(RESTrequest.initiateRequest)

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)

I get the proper expected behavior, whatever the holdFor duration: a constant 1 rps from t0 to t0 + 2s: