Incorrect req/sec being simulated

I created a scenario that runs 50 users with a pace of 5 seconds, with the objective of achieving 10 req/sec for the block of requests (1, 2 and 3).
But what is actually happening is that the "Global Information" section of the report shows about 10 req/sec, and each request (1, 2 and 3) shows about 3.3 req/sec.

I was assuming that my setup would show 30 req/sec under "Global Information" and 10 req/sec for each request type. What am I missing here?

```scala
def workload(step: Int, pacing: Int, duration: Int) = scenario(s"Workload Step $step")
  .during(duration seconds) {
    pace(pacing seconds)
      .exitBlockOnFail {
        feed(conversationIdFeeder)
          .feed(requestIdFeeder)
          .group("Request1") { exec(Request1) }
          .feed(requestIdFeeder)
          .group("Request2") { exec(Request2) }
          .feed(requestIdFeeder)
          .group("Request3") { exec(Request3) }
      }
  }

setUp(
  workload(1, 5, 250).inject(
    nothingFor(10 seconds),
    rampUsers(50) over (50 seconds),
    nothingFor(220 seconds)
  )
).protocols(httpProtocol)
```
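For reference, the arithmetic behind that expectation (a quick sketch; the numbers come from the setUp above) — compare with the actual stats below:

```scala
// Back-of-the-envelope expectation for the steady state:
// 50 users each iterating once per 5 s pace => 10 iterations/sec,
// and each iteration fires 3 requests.
val users = 50
val paceSeconds = 5
val requestsPerIteration = 3

val iterationsPerSec = users.toDouble / paceSeconds           // 10.0
val expectedPerRequest = iterationsPerSec                     // 10 req/s per request type
val expectedGlobal = iterationsPerSec * requestsPerIteration  // 30 req/s overall
```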

| Requests | Total | OK | KO | % KO | Req/s |
|---|---|---|---|---|---|
| Global Information | 2464 | 2464 | 0 | 0% | 10.855 |
| Request1 | 822 | 822 | 0 | 0% | 3.621 |
| Request2 | 821 | 821 | 0 | 0% | 3.617 |
| Request3 | 820 | 820 | 0 | 0% | 3.612 |

Is there any fundamental limitation in Gatling that I am missing? I added more users, keeping the other settings the same, but the load still isn't increasing.

my2cents: caching

Caching should improve response times, which is not happening. Why would caching affect the load being injected? Can you put in a few more cents and elaborate? The three requests in my simulation are REST APIs.

I tried with disableCaching at the httpProtocol level as well, but no luck. Any advice would be really helpful. Thank you!
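For clarity, this is roughly what I mean by disabling caching at the protocol level (a minimal sketch; the base URL is a placeholder):

```scala
// Hypothetical protocol definition; only disableCaching matters here.
val httpProtocol = http
  .baseURL("http://myservice.example.com") // placeholder
  .disableCaching // stop Gatling from honouring Expires/Cache-Control/ETag headers
```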

I removed during and pace from the scenario and instead injected 10 users per second with constantUsersPerSec. This should have given me 10 req/sec for each request type. Still, the rate for each request is being limited to 2 req/sec! I understand that if the combined response time were > 1 sec the RPS would drop, but 90% of response times are less than 1 sec, so there is no reason for it to be limited to 2 req/sec. So frustrating. I have spent 3 days on this investigation. As I am out of options now, my manager has advised me to explore another tool, but I really don't want to give up :frowning:

Attaching simulation.log

```scala
val workload = scenario("Workload")
  .exitBlockOnFail {
    feed(conversationIdFeeder)
      .feed(requestIdFeeder)
      .group("Request1") { exec(Request1) }
      .feed(requestIdFeeder)
      .group("Request2") { exec(Request2) }
      .feed(requestIdFeeder)
      .group("Request3") { exec(Request3) }
  }

setUp(
  workload.inject(
    nothingFor(10 seconds),
    rampUsers(10) over (10 seconds),
    constantUsersPerSec(10) during (200 seconds),
    nothingFor(10 seconds)
  )
).protocols(httpProtocol)
```

simulation.log (496 KB)

Your maths are wrong.
You don't have a constant number of concurrent users, as you use `rampUsers`. You only get the numbers you expect during the plateau, but the global stats account for the ramp-up and the ramp-down.

Where has it gone wrong?
I am assuming your response relates to my first scenario (the one with during and pace): I was ramping up 50 users over 50 sec and then keeping them running for 250 sec with a pace of 5 sec, so 50 users with a pace of 5 sec should have given me 10 req/s for each request. I understand "Global Information" includes the ramp-up and ramp-down; I sent the table only for reference. I see the same stats even during the 250 sec steady state.

In the second scenario, constantUsersPerSec(10) should have fired 10 req/s for each request regardless of server response times, since each arriving user executes the block exactly once. All I see is 2 req/s.

Alright, I have figured out the problem. Nothing is wrong with the scenario configuration. One of the feeders in my scenario was calling an external jar, and that call was taking considerable time to generate the feed values.

I believe the execution time of feeders isn't logged, so it went unnoticed (please correct me if I am wrong). I now pre-generate the required feed values and load them with a csv feeder.
Thank you for your help.
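In case it helps anyone else, the workaround looks roughly like this (the file name and column are made up):

```scala
// request_ids.csv is pre-generated offline (the heavy work now happens
// before the test), with a header row followed by one value per line:
//   requestId
//   a1b2c3d4...
val requestIdFeeder = csv("request_ids.csv").circular // parsed once, at startup
```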

Glad you found it out.
All the built-in feeders that we provide perform data fetching and parsing on simulation start-up, so they can't suffer from such a flaw.
Performance-wise, doing something like tons of SQL requests would be a bad idea.
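To illustrate the trap (a hypothetical sketch, not the actual code): a custom Iterator-based feeder computes each record lazily, on the virtual user's critical path, and that time is not reported in the stats:

```scala
// expensiveEncrypt() stands in for the slow external-jar call (hypothetical).
// Each record is built at feed time, i.e. inside the virtual user's flow,
// so a slow generator silently throttles the injected rate.
val requestIdFeeder = Iterator.continually(
  Map("requestId" -> expensiveEncrypt())
)
```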

Thanks. The external jar was doing heavy encryption.

One question: in the following scenario I was intending to inject load in steps of 2 tps (2 tps -> 4 tps -> 6 tps and so on), but it didn't happen accurately. 4/8/12 tps were injected correctly, whereas 2/6/10 tps show hiccups. If Gatling uses AsyncHttpClient, why would this happen?

The scenario had the following type of setup:

```scala
def workload(step: Int, pacing: Int, duration: Int) = scenario(s"Workload Step $step")
  .during(duration seconds) { pace(pacing seconds).exitBlockOnFail { ...
```

Could you provide a full gist, please?

Simulation log for the test attached. Scenario definition and setup definition below.

```scala
def pVusers: Int = Integer.getInteger("vusers", 20)
def pPacing: Int = Integer.getInteger("pacing", 10) // in seconds
def pTps: Int = Integer.getInteger("tps", 2) // pVusers / pPacing

def pRampTime: Int = Integer.getInteger("rampUpTime", 5) // in seconds
def pStepTime: Int = Integer.getInteger("stepTime", 60) // in seconds
def pNumSteps: Int = Integer.getInteger("numSteps", 20) // number of steps of incremental load

// scenario
def workload(step: Int, pacing: Int, duration: Int) = scenario(s"STEP NO $step")
  .during(duration seconds) {
    pace(pacing seconds)
      .exec(
        exitBlockOnFail {
          feed(requestIdFeeder)
            //.group("ION PRVS GF") {
            .exec(session => session.set("url", spBrokerURL))
            .group("RequestGroup1") { exec(request1) }
            .feed(requestIdFeeder)
            .group("RequestGroup2") { exec(request2) }
            .feed(requestIdFeeder)
            .group("RequestGroup3") { exec(request3) }
        }
      )
  }

val stepsParams = 1 to pNumSteps map { i =>
  workload(i, pPacing, (pNumSteps - i + 3) * (pRampTime + pStepTime))
    .inject(
      nothingFor(((i - 1) * (pRampTime + pStepTime)) seconds),
      rampUsers(pVusers) over (pRampTime seconds),
      nothingFor(pStepTime seconds)
    )
}

println("Load Pattern: " + stepsParams + "\n")

setUp(stepsParams: _*)
  .protocols(httpProtocol)
  .assertions(
    global.responseTime.max.lessThan(10000)
  )
```
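To sanity-check what that setUp schedules, here is a small sketch (using the default parameter values above) that prints each step's start offset and scenario duration:

```scala
// With the defaults: pVusers = 20, pRampTime = 5, pStepTime = 60, pNumSteps = 20.
// Step i waits (i-1)*(ramp+step) seconds before ramping in 20 more users,
// so the active user count (and hence the paced tps) climbs step by step.
(1 to pNumSteps).foreach { i =>
  val startOffset = (i - 1) * (pRampTime + pStepTime)
  val runsFor = (pNumSteps - i + 3) * (pRampTime + pStepTime)
  println(s"step $i: starts at ${startOffset}s, scenario runs for ${runsFor}s, +$pVusers users")
}
```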

Sorry, simulation log attached now.

simulation.log (2.34 MB)

Did anyone have a chance to look into this? I am facing this issue quite frequently.
The issue is: I was intending to inject load in steps of 2 tps (2 tps -> 4 tps -> 6 tps and so on), but it didn't happen accurately. 4/8/12 tps were injected correctly, whereas 2/6/10 tps show hiccups. If Gatling uses AsyncHttpClient, why would this happen?

Here is a graph from another test.