I have run a simulation with 3 REST API calls under different load models (both closed and open), and also a simulation with just one isolated call to a different service, all passing through Citrix NetScaler (a reverse proxy with SSL termination).
The conclusion is that I do not get a throughput greater than 7 reqs/sec.
John Arrowwood has previously given me a basic explanation of this phenomenon:
What typically happens with resource-constrained systems is, as you give it more work to do, it takes longer to get the work done, but the overall throughput remains relatively constant. So if you have 100 users trying to do 100 things per second, and that happens to be the limit of what the system can do, then if you give it 200 users trying to do 200 things, it will happily do it, but instead of being able to process each request in a second, it will take 2 seconds each. End result is still 100 transactions per second.
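The arithmetic above can be sketched quickly. A minimal model, assuming a hypothetical fixed server capacity (7 req/s here, matching the observed ceiling): in a closed system, throughput saturates at capacity while Little's law forces response time up.

```python
# Little's law sketch: throughput saturates while latency grows.
# CAPACITY is an assumption matching the observed 7 req/s ceiling.
CAPACITY = 7.0  # req/s

def steady_state(users, service_time=1.0):
    """Return (throughput, response_time) for a closed system of
    `users` concurrent users, each issuing one request at a time."""
    offered = users / service_time          # what the users try to push
    throughput = min(offered, CAPACITY)     # the server cannot exceed capacity
    response_time = users / throughput      # Little's law: L = X * R
    return throughput, response_time

print(steady_state(5))    # below the knee: 5 req/s at 1 s each
print(steady_state(14))   # saturated: still 7 req/s, now 2 s each
```

This reproduces the pattern described: double the users past the knee, and you get the same throughput at double the response time.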
This is exactly what I experience.
Given very little load I see an increase in responses per second, but once it reaches 7 reqs/sec the response time goes up and the throughput stays at 7 reqs/sec.
What I wonder is: what could be causing this?
I have experimented with disabling keep-alive:

```
allowPoolingConnections = true
```

But no luck.
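One thing worth double-checking is the flag's polarity: `allowPoolingConnections = true` keeps connection pooling (and thus keep-alive) enabled, and `true` is the default anyway. Assuming Gatling 2.x and that I recall the `gatling.conf` layout correctly, actually disabling it would look like:

```
gatling {
  http {
    ahc {
      allowPoolingConnections = false  # turn off connection reuse / keep-alive
    }
  }
}
```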
Any ideas where to go from here to try to pinpoint what causes this 7 reqs/sec constraint?
That’s only something you can find out by monitoring the SUT resources (memory, CPU, network).
7 rps is very low, so there might be some kind of global lock somewhere (like database table locking).
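The global-lock hypothesis is easy to model. A toy sketch (plain Python, not Gatling, and the ~143 ms critical section is a made-up number chosen so the cap lands at 7): when every request must hold one shared lock, adding workers raises latency but not throughput.

```python
import threading
import time

HOLD = 1 / 7  # hypothetical ~143 ms critical section per request

def worker(lock, completed, deadline):
    while True:
        with lock:
            if time.perf_counter() >= deadline:
                return
            time.sleep(HOLD)       # "work" done while holding the shared lock
            completed.append(1)

lock = threading.Lock()
completed = []
deadline = time.perf_counter() + 2.0   # run for 2 seconds
threads = [threading.Thread(target=worker, args=(lock, completed, deadline))
           for _ in range(20)]         # 20 concurrent "users"
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"~{len(completed) / 2.0:.1f} req/s despite 20 workers")
```

Throughput stays pinned near 7 req/s no matter how many workers are added, which is exactly the saturation shape described above.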
Now I get 7 reqs/sec for each and 14 altogether.
Does this indicate that there is no throughput problem in my server and load balancer, but rather a constraint in my laptop or my Gatling test setup?
I am not sure about that.
Is this a default for common load balancers, to restrict connections from single IPs?
You do not think this has something to do with my test? Could you inspect it?
BTW: would it be possible to ask you to run the test from your computer? The REST service I am testing against is available online. Just to rule out the test itself as a ‘bottleneck’?
I could send the test to you by email.
But I get 14 reqs/sec in sum if I add the two HTTP calls together, not 7.
So it seems like each HTTP call/`.exec` hits a hard limit at 7 reqs/sec, but the entire test (with two calls going through the load balancer) can reach 14, i.e. double.
What does it indicate when isolated calls have a hard limit, but altogether I can get more throughput?
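That signature (a fixed cap per call, with caps adding up across calls) is exactly what a per-route or per-virtual-server rate limit on the load balancer would produce. A toy sketch, with a hypothetical 7 reqs/sec cap applied independently per route:

```python
RATE_PER_ROUTE = 7  # hypothetical per-route cap, in req/s

def served_per_second(routes, offered_per_route):
    """Requests served in one second under an independent per-route limit."""
    return {route: min(offered_per_route, RATE_PER_ROUTE) for route in routes}

one_call = served_per_second(["/api/a"], offered_per_route=20)
two_calls = served_per_second(["/api/a", "/api/b"], offered_per_route=20)
print(sum(one_call.values()))   # 7  -> a single call hits the cap
print(sum(two_calls.values()))  # 14 -> two routes, each capped independently
```

A per-source-IP limit on total connections would cap the whole test at 7, not 14, so the per-route shape fits the observations better.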
Ok, thanks. I can then do the following to compare:
1) deploy a simple index.html page to the server and run a GET against it with the same load, and
2) run the test directly against one of the two servers behind the load balancer, thus bypassing it.
If throughput is better, then the load balancer might be the bottleneck here, right?
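For the comparison in step 2, a minimal harness sketch (plain Python, not Gatling) that can be pointed either at the load balancer or at a backend directly and reports achieved reqs/sec. The URL here is a local stand-in server just to keep the script self-contained; swap `BASE_URL` for the real endpoints.

```python
import http.server
import socketserver
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Quiet(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

# Local stand-in server; replace BASE_URL with the LB or backend URL.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Quiet)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f"http://127.0.0.1:{server.server_address[1]}/"

def one_request(_):
    with urllib.request.urlopen(BASE_URL) as resp:
        return resp.status

N, WORKERS = 200, 10
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    statuses = list(pool.map(one_request, range(N)))
elapsed = time.perf_counter() - start
rps = N / elapsed
print(f"{N} requests in {elapsed:.2f}s -> {rps:.1f} req/s")
server.shutdown()
server.server_close()
```

Running the same harness against the load balancer and then against a single backend makes the comparison concrete: if the backend alone goes well past 7 reqs/sec, the load balancer (or a limit configured on it) is the prime suspect.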