Request timeout and HTTP connection behaviour

Hi all,
I want to understand how the requestTimeout configuration in the gatling.conf file works.
Case: only the requestTimeout field in the ahc settings is used.
Basically, if a request times out, does Gatling close that HTTP connection and open a new one, or does that connection stay while another HTTP connection is also opened?

In my tests, when I receive these timeouts, Gatling’s HTTP connections are not being closed; instead they keep increasing along with the number of users.
How can this be handled?
Can any other field in the ahc settings help with this situation?

Can someone help me understand all of this better?

Gatling closes the connection on request timeout.
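
For reference, the field in question lives in gatling.conf. A minimal sketch of where it sits (from memory, so double-check against the gatling-defaults.conf shipped with your version; Gatling 2.x nests it under the ahc block, Gatling 3.x has it directly under http):

    gatling {
      http {
        requestTimeout = 60000   # timeout in millis for a single request (60 s is the default)

        # Gatling 2.x puts it under the AsyncHttpClient section instead:
        # ahc {
        #   requestTimeout = 60000
        # }
      }
    }

Once that timeout fires, the request is failed and the connection it was using is closed, so timeouts alone should not make connections accumulate.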

From this group’s terms:

  • Make sure you’re using an up-to-date Gatling version
  • Provide a Short, Self Contained, Correct (Compilable), Example (see http://sscce.org/)

It looks like you’re using an old Gatling version, possibly buggy.
Please upgrade (current is 3.5.1) and provide a reproducer if you still face an issue.

We are facing increased TCP connections when testing a simulation. The ramp-up time is 1 minute, going from 1 to 6,000 users per second; the number of requests is around 60-80k per second, whereas the TCP connections are reaching 400-500k.

Sorry, but that’s not something we can help with here. This can only be investigated in your infrastructure, hence it falls under consulting.

Your virtual users, and hence their TCP connections, are piling up.
This is caused by either the client (the Gatling Load Generators) or the system under test (network, target application) being saturated.
When the piling up starts to happen, is the CPU or the bandwidth usage very high?
If so, you have to run your test on either larger Load Generators, or more of them.
Otherwise, your system under test is saturated and can’t cope with the load you’re throwing at it.

Hi @slandelle, thanks for the response. When the ramp-up time is 10 minutes for 1 to 6,000 users per second, we see no issues with the TCP connections, but with the aggressive ramp-up we face this challenge. We want to test this sudden-surge traffic use case, but the connections are spiking. What could contribute to the increased TCP connections? We are running multiple pods in the backend to cope with this traffic, but the sudden spike is causing this delay.
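
Roughly, the two ramps being compared look like this in the Gatling Scala DSL (class name, endpoint and scenario below are placeholders; only the injection profiles come from the description above):

    import scala.concurrent.duration._
    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class SurgeSimulation extends Simulation {

      // placeholder protocol and scenario; only the injection profiles matter here
      val httpProtocol = http.baseUrl("https://system-under-test.example.com")
      val scn = scenario("surge").exec(http("request").get("/"))

      setUp(
        // aggressive surge: ramp from 1 to 6000 new users per second within 1 minute
        scn.inject(rampUsersPerSec(1).to(6000).during(1.minute))
        // gentler ramp that showed no connection pile-up:
        // scn.inject(rampUsersPerSec(1).to(6000).during(10.minutes))
      ).protocols(httpProtocol)
    }

With an open injection profile like this, every new virtual user opens its own connections, so an aggressive ramp multiplies connections as fast as it multiplies users.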

Yes, the CPU does show maximum utilisation.

we are running multiple pods in the backend to cope with this traffic

The bottleneck could be your ingress/edge, not the service itself.

Yes, the CPU does show maximum utilisation.

Then you need more CPU, hence more load generators.
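
As a rough sketch of what “more load generators” means in practice: keep the total target rate but split it across injectors, e.g. three generators each injecting a third of the 6,000 users/sec peak (the three-way split below is hypothetical; scenario and endpoint are placeholders):

    import scala.concurrent.duration._
    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class SurgePerGenerator extends Simulation {

      // hypothetical split: 6000 users/sec total, divided across 3 identical load generators,
      // each one running this same simulation with its share of the rate
      val generatorCount = 3
      val peakUsersPerSec = 6000.0 / generatorCount

      val httpProtocol = http.baseUrl("https://system-under-test.example.com") // placeholder
      val scn = scenario("surge").exec(http("request").get("/"))

      setUp(
        scn.inject(rampUsersPerSec(1).to(peakUsersPerSec).during(1.minute))
      ).protocols(httpProtocol)
    }

Each generator only simulates its share, so together they reproduce the full surge while spreading the CPU cost.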

@slandelle just checking if you can help with some pointers. As the load test begins, for the first 1-2 minutes the connection count is around 1k for 20-25k RPS. After 2 minutes we start seeing increased latency from the backend server, and at the same time we see TCP connections spiking to 300-400k along with a lot of 504/503 error responses; latency shoots up to 20-30 seconds. We are using an AWS ALB as the ingress. What could be the cause? Any ideas/pointers on what to look into?

we start seeing increased latency from the backend server

This is what you have to fix.

So Gatling is merely the messenger.
Your application struggles to handle your load while new virtual users keep arriving, causing a snowball effect.
Why your application is struggling is for you to figure out.