Constrained SUT? No more than 7 reqs/sec, no matter what (!)

I have run a simulation with three REST API calls under different load models (both closed and open), and also a simulation with just one isolated call to a different service, all passing through a Citrix NetScaler (a reverse proxy with SSL termination).

The conclusion is that I do not get a throughput greater than 7 reqs/sec.

John Arrowwood has given me a basic explanation of this phenomenon before:

What typically happens with resource-constrained systems is, as you give it more work to do, it takes longer to get the work done, but the overall throughput remains relatively constant. So if you have 100 users trying to do 100 things per second, and that happens to be the limit of what the system can do, then if you give it 200 users trying to do 200 things, it will happily do it, but instead of being able to process each request in a second, it will take 2 seconds each. End result is still 100 transactions per second.

This is exactly what I experience.
Given very little load I see an increase in responses per second, but once it reaches 7 reqs/sec the response time goes up and the throughput stays at 7 reqs/sec.
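The saturation behaviour John describes can be sketched with a little queueing arithmetic. This is just an illustration, not our system: `service_time` and `capacity` are made-up numbers matching his 100-users example.

```python
def steady_state(users, service_time=1.0, capacity=100.0):
    """Closed system with zero think time: each user issues its next
    request as soon as the previous one completes.

    Returns (throughput_per_sec, response_time_sec)."""
    offered = users / service_time          # what the users try to push
    if offered <= capacity:
        # below saturation: throughput scales with load, latency is flat
        return offered, service_time
    # at saturation: throughput is pinned at capacity, and Little's law
    # (concurrency = throughput * response_time) stretches the latency
    return capacity, users / capacity

print(steady_state(100))   # (100.0, 1.0)  -> at the limit, 1 s each
print(steady_state(200))   # (100.0, 2.0)  -> same throughput, 2 s each
```

With capacity set to 7 instead of 100, this is exactly the flat-throughput, rising-latency curve described above.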

What I wonder is: what could be causing this?

I have experimented with disabling keep-alive:


allowPoolingConnections = true


But no luck.

Any ideas where to go from here to try to pinpoint what causes this constraint on 7 reqs/sec?

Some sort of starvation of connections?


That’s only something you can find out if you monitor the SUT’s resources (memory, CPU, network).
7 rps is very low, so there might be some kind of global lock somewhere (like database table locking).

Ok, I will ask the network guys and some developers on the project to set up some monitoring then.

7 is ridiculously low. Maybe the application has thread limit constraints? If the CPU and/or IO is not pegged, check for something like that.

Thanks, what is meant by the CPU being pegged? Is “pegged” the same as fixed at a certain limit?

English Idiom. Pegged = 100% or close to. :slight_smile:

Ok, so if the CPU and/or disk IO is NOT maxed out, there might be a thread limit constraint in the app. I understand.

Exactly. For example, in a JVM app, you’d typically see some threads blocked on some SQL request or some synchronized access.
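That failure mode is easy to reproduce in miniature. A generic sketch (in Python rather than a JVM app; the lock plus sleep stands in for a slow SQL call running under a global lock):

```python
import threading
import time

GLOBAL_LOCK = threading.Lock()

def handle_request():
    # Stand-in for a request handler whose slow part (e.g. an SQL call)
    # runs under one global lock, so all handlers serialize on it
    with GLOBAL_LOCK:
        time.sleep(0.05)

def wall_time(n_threads):
    """Run n_threads concurrent handlers and return total wall time."""
    threads = [threading.Thread(target=handle_request) for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Doubling the workers roughly doubles the wall time: throughput is
# capped at ~20 req/s by the lock, no matter how many threads you add,
# while the CPU sits almost idle.
```

The signature in a thread dump would be many threads parked waiting on the same monitor while CPU usage stays low.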

Hello again,
I have included another call in my simulation, which goes to the same base URL and through the same infrastructure.

This is my simulation now:


package com.firm

import io.gatling.core.Predef._
import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef._
import scala.concurrent.duration._

class idpSimulation extends Simulation {

  val httpConf = http
    .headers(Map("Content-Type" -> "application/x-www-form-urlencoded; charset=utf-8"))
    .authorizationHeader("Basic bW9iaWllX2NsaWVydDptbrJpbGVfc2VjcmV0")

  // The two REST calls; the real names and paths are omitted here,
  // so these .exec steps are placeholders for them
  val scn = scenario("Scenario")
    .exec(http("call 1").post("/..."))
    .exec(http("call 2").post("/..."))

  setUp(scn.inject(constantUsersPerSec(30) during (10 seconds))).protocols(httpConf)
}


Now I get 7 reqs/sec for each call, and 14 all together.
Does this indicate that there is no throughput problem in my server and load balancer, but rather a constraint in my laptop or my Gatling test setup?

Are you sure that your load balancer doesn’t have some security that won’t let you open more than 7 HTTP connections from a single IP?

I am not sure about that.
Is this the default for common load balancers, to restrict connections from a single IP?
Do you think this could have something to do with my test? Could you inspect it?
BTW: would it be possible to ask you to run the test from your computer? The REST service I am testing against is available online. Just to rule out the test itself as a ‘bottleneck’?
I could send the test to you by email.


But I get 14 reqs/sec in sum if I add the two HTTP calls together, not 7.
So it seems like each HTTP call/.exec hits a hard limit at 7 reqs/sec, but the entire test (with two calls going through the load balancer) can reach 14, double that.
What are the symptoms when isolated calls have a hard limit, but altogether I can get more throughput?

Find the source of the problem.

How many do you get if you bypass the load balancer?

How many if you go through the load balancer, but to a static resource, such as a small .html file?

Ok, thanks. I can then do the following:
1) deploy a simple index.html page to the server and run a GET against it with the same load;
2) run the test directly against one of the two servers behind the load balancer, thus bypassing it;

to compare.

If I get better throughput, then the load balancer might be the bottleneck here, right?
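As a quick cross-check outside Gatling for step 1, here is one way to time raw throughput. A rough sketch: the URL is a placeholder for wherever the static index.html ends up, and the worker/request counts are arbitrary.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def measure_throughput(do_request, total=50, workers=10):
    """Fire `total` requests across `workers` threads and return req/s.
    `do_request` is any zero-argument callable performing one request."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(do_request) for _ in range(total)]
        for f in futures:
            f.result()                      # re-raise any request error
    return total / (time.perf_counter() - start)

# Example (placeholder URL): point it at the static page behind the LB,
# then directly at one backend, and compare the two numbers.
# url = "http://example.com/index.html"
# rps = measure_throughput(lambda: urllib.request.urlopen(url).read())
```

If this harness also tops out near 7 req/s through the load balancer but not against a backend directly, that points away from the Gatling setup.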