load testing - throttling

Hello,
I was trying to do a simple load test of my locally developed REST service, but I’m unable to set up Gatling to do that. The attached file contains the script I used. It seems I’ve completely misunderstood some key concept…
What I want to do is make Gatling fire 100 GET requests per second (10 in parallel) for a period of 1 minute at http://localhost:9080/moods. The script, however, makes only 10 requests and exits almost immediately. Can you explain what I’m missing?

Load.scala (680 Bytes)

Throttling is a limit: you still have to inject enough load.
Try injecting 100 users per second instead.
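For example (a minimal sketch in Gatling 2 syntax; the class and request names are placeholders, the URL is taken from your post):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class OpenLoad extends Simulation {
  val httpConf = http.baseURL("http://localhost:9080")

  // Each user performs a single GET and exits.
  val scn = scenario("moods").exec(http("get moods").get("/moods"))

  // Start 100 new users every second for 1 minute: ~100 requests/s,
  // regardless of how long each individual response takes.
  setUp(scn.inject(constantUsersPerSec(100) during (1 minute)))
    .protocols(httpConf)
}
```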

Great, thanks, that explained a lot… once I used

setUp(scn.inject(rampUsers(100) over(30 seconds)).protocols(httpConf))

I got 100 requests spread over a period of 30 seconds. However, is there a way to do what I originally wanted (make Gatling fire 100 GET requests per second, 10 in parallel, for a period of 1 minute on http://localhost:9080/moods)?

Hi,
If you want to fire 100 GET requests per second (10 in parallel) for a period of 1 minute
then one candidate way of implementing this is to have 10 users injected executing a scenario that loops forever (with a maxDuration on the simulation).
If there were no pause in the scenario and only one request per loop, then the response time of the GET request would need to be exactly (or slightly less than) 0.1 seconds to achieve the total of 100 rps (requests per second).
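A sketch of that looping approach (Gatling 2 DSL; the class and request names are placeholders):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class ClosedLoad extends Simulation {
  val httpConf = http.baseURL("http://localhost:9080")

  // 10 users, each looping on the GET as fast as responses come back.
  val scn = scenario("moods")
    .forever(exec(http("get moods").get("/moods")))

  setUp(scn.inject(atOnceUsers(10)))
    .protocols(httpConf)
    .maxDuration(1 minute) // stop the whole simulation after one minute
}
```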

The problem is that we don’t know the response time in advance, and it will vary. That directly affects your throughput, so the requirement will be hard to meet.
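The arithmetic here is Little’s law for a closed workload: throughput ≈ users / response time. A quick plain-Scala check (the helper names are my own):

```scala
// Little's law for a closed workload: throughput = users / responseTime.

// Response time each request must hit for N looping users to reach a target rate.
def requiredResponseTimeSecs(users: Int, targetRps: Double): Double =
  users / targetRps

// Throughput actually achieved at a given response time.
def achievedRps(users: Int, responseTimeSecs: Double): Double =
  users / responseTimeSecs

val needed = requiredResponseTimeSecs(10, 100.0) // 0.1 s per request
val degraded = achievedRps(10, 0.25)             // only 40 rps at 250 ms responses
println(s"needed=$needed s, degraded=$degraded rps")
```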

  1. You could use pacing (http://gatling.io/docs/2.0.0-RC2/general/scenario.html?highlight=pacing#pace) to achieve a constant throughput with a fixed number of looping users. But then your users will sometimes be idle, not sending any request. Pacing also carries some risk: it can cause coordination with the system under test that invalidates percentile results, etc.
    Pacing is the workaround for thread-per-user tools like JMeter to approximate an open workload with an arrival rate. There may be a use case for it in Gatling, but for the most part constantUsersPerSec() will provide a cleaner workload.

  2. If the response time were well below 0.1 s, you could use throttling. However, if the response time degrades beyond 0.1 s, then with 10 looping users the throughput will drop accordingly, missing the 100 rps target.

  3. There is an open ticket here: https://github.com/gatling/gatling/issues/1647
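To make option 2 concrete, a throttling sketch (Gatling 2 DSL; the class and request names are placeholders):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class ThrottledLoad extends Simulation {
  val httpConf = http.baseURL("http://localhost:9080")

  val scn = scenario("moods")
    .forever(exec(http("get moods").get("/moods")))

  // throttle is only a cap: the 10 looping users still have to be able
  // to produce at least 100 rps for the limit ever to be reached.
  setUp(scn.inject(atOnceUsers(10)))
    .protocols(httpConf)
    .throttle(jumpToRps(100), holdFor(1 minute))
}
```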

Having said that, where does the requirement for 10 in parallel come from?
Typically a workload either applies a request rate (an open model) or a level of parallel load/concurrency (a closed model), not both at the same time (although it is possible). It’s worth checking whether the requirement itself is valid.

thanks
Alex

That’s what I call a reply :smiley:
Thanks a lot, much appreciated