A few questions about timeouts and connection strategy

I’ve been porting my tests from gatling-2.0.0-M3a to RC3 and RC4 and I noticed some changes that I want to understand:

Originally I wanted to upgrade from Gatling M3a because when I ran a test with more than 3000 requests per minute I started getting Remotely Closed connections:

09:56:16.180 [WARN ] i.g.h.a.AsyncHandler - Request 'AJAX_READ' failed: Remotely Closed [id: 0xc44e7696, …

Seeing that my servers were OK, I suspected the client had some issue, so I decided to move to RC3. When I had ported the whole suite, I executed the same test and started seeing

10:37:11.934 [WARN ] i.g.h.a.AsyncHandlerActor - Request 'HTML_READ' failed: java.util.concurrent.TimeoutException: Request timed out to xxx/xx.xx.xx.xx:80 of 60000 ms

instead of the Remotely Closed connections, again only when the test goes above 3000 rpm. Fine, so my next step was to change the following settings in gatling.conf, under the ahc section, from 60000 to 120000:

  • connectionTimeout

  • idleConnectionInPoolTimeoutInMs

  • idleConnectionTimeoutInMs

  • requestTimeoutInMs

But nothing changed: the behavior stayed the same, and the TimeoutExceptions kept showing "of 60000 ms".

Finally I updated to RC4 and the results were even worse: more timeouts than with RC3, this time before reaching the 3000 rpm line. I also noticed that RC4 creates more connections even when executing the same test.

To summarize, my questions are:

  • Are we having false Remotely Closed / TimeOut exceptions with the tool?
  • Why am I seeing such a difference in the amount of connections created comparing the same test executed from RC3 and RC4?

But nothing changed: the behavior stayed the same, and the TimeoutExceptions kept showing "of 60000 ms".

Props were renamed. Update your gatling.conf file (see gatling-default.conf inside gatling-core jar).
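For illustration, the renamed ahc settings look roughly like this. The new names below are taken from a later 2.0 gatling-default.conf and the old-name mapping is my best guess, so double-check against the copy bundled in your gatling-core jar, as names varied between milestones:

```hocon
ahc {
  connectTimeout = 120000              # formerly connectionTimeout
  pooledConnectionIdleTimeout = 120000 # formerly idleConnectionInPoolTimeoutInMs
  readTimeout = 120000                 # formerly idleConnectionTimeoutInMs
  requestTimeout = 120000              # formerly requestTimeoutInMs
}
```

Unknown keys in gatling.conf are silently ignored, which is why editing the old names had no effect.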

Remotely Closed

Remotely Closed used to happen a lot because the time window between the moment AHC fetched a connection from the pool and the moment it was used to write the request was too wide, so chances that the server closed it in the meantime were higher.
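A minimal sketch of that race (this is not Gatling/AHC code, just an illustration): a pooled keep-alive connection becomes unusable if it sits idle longer than the server's keep-alive timeout before the client finally writes the request.

```scala
// A pooled connection remembers when it last carried traffic.
case class PooledConnection(lastUsedMillis: Long)

// The server silently closes sockets idle longer than its keep-alive timeout,
// so by the time the client writes, the connection may be "Remotely Closed".
def isStale(conn: PooledConnection, nowMillis: Long, serverKeepAliveMillis: Long): Boolean =
  nowMillis - conn.lastUsedMillis > serverKeepAliveMillis

val conn = PooledConnection(lastUsedMillis = 0L)
// Narrow checkout-to-write window: the connection is still usable.
println(isStale(conn, nowMillis = 4000L, serverKeepAliveMillis = 5000L)) // false
// Wide window: the server closed the socket in the meantime.
println(isStale(conn, nowMillis = 6000L, serverKeepAliveMillis = 5000L)) // true
```

Shrinking that window (or validating connections at checkout) is what reduces the error rate.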

Are we having false TimeOut exceptions with the tool?

Will investigate. Any chance you can provide a reproducer?

I’ve updated the props and now it’s working great.

As for the Remotely Closed connections, I don't see a cause yet, but looking at the results of some tests I can see a critical difference in the number of HttpSessions created during the test.

With M3a, the number of sessions across a two-day test was:

But with RC3 and RC4 (I still need to test with RC5) the number keeps growing, and the test starts showing Remotely Closed connections and timeouts.

To give more context, the simulation is

class Standard extends Simulation {

  val httpConf = http
    // ... (protocol configuration lost in the original post)
    .extraInfoExtractor(extraInfo => {
      List[String]() //, session.get("REQUEST_TYPE").get
    })

  val totalUsers = System.getProperty("totalUsers").toInt
  val actionsPerUser = System.getProperty("actionsPerUser").toInt
  val rampTime = System.getProperty("rampTime").toInt
  var sessionMaxTime = System.getProperty("sessionMaxTime").toInt

  var pauseLow = 1
  var pauseHigh = sessionMaxTime / actionsPerUser

  val scenarios_actions = scenario(actionsPerUser + " actions per user")
    // ... (lost in the original post)
    .pause(pauseLow, pauseHigh)
    // ... (switch over the action chains; the call site was lost in the original post)
    (
      StandardTestProfile.static_perc -> StandardTestProfile.static,
      StandardTestProfile.threads_perc -> StandardTestProfile.threads,
      StandardTestProfile.search_perc -> StandardTestProfile.search,
      StandardTestProfile.browse_perc -> StandardTestProfile.browse,
      StandardTestProfile.docs_perc -> StandardTestProfile.docs,
      StandardTestProfile.profile_perc -> StandardTestProfile.profile,
      StandardTestProfile.write_perc -> StandardTestProfile.write
    )
    .pause(pauseLow, pauseHigh)

  setUp(scenarios_actions.inject(rampUsers(totalUsers) over (rampTime seconds))).protocols(httpConf)
}


With these values predefined
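As a rough sanity check on the load (all numbers below are hypothetical, since the predefined values didn't survive in the post), the request rate implied by those parameters can be estimated like this:

```scala
// Hypothetical values; the real ones come from -DtotalUsers, -DrampTime, etc.
val totalUsers = 6000
val rampTime = 120       // seconds
val actionsPerUser = 10
val sessionMaxTime = 600 // seconds

val pauseLow = 1
val pauseHigh = sessionMaxTime / actionsPerUser // 60 s, as in the simulation above
val avgPause = (pauseLow + pauseHigh) / 2.0     // average think time between actions

// Users arrive uniformly during the ramp...
val arrivalsPerSecond = totalUsers.toDouble / rampTime
// ...and while active, each issues roughly one request per avgPause seconds.
val requestsPerMinute = totalUsers / avgPause * 60

println(f"~$requestsPerMinute%.0f rpm with all users active")
```

With these numbers the test sits well above the 3000 rpm line where the errors appeared.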





Is there any change in the tool that may be causing this behavior?

That’s weird.

How are connections (Remotely Closed connections and Timeouts) related to HttpSessions in your application?
Is there any chance you’re using SSL session tracking instead of cookies?
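For reference, in a Servlet 3.0+ webapp the tracking mode is configured in web.xml; cookie-only tracking looks like this (if SSL is used instead, every new TLS session means a new HttpSession):

```xml
<session-config>
  <tracking-mode>COOKIE</tracking-mode>
</session-config>
```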

I don’t know, I will dig into it to see if it’s related.
But if the webapp is the same, the tests are the same (apart from the code refactoring required to port from M3a to RC5), and the load applied is the same, why am I seeing this difference? Is it possible that something internal changed, causing that difference in the number of sessions?

More than one year passed between M3a and RC1, so a lot has changed. Also, I'm in the middle of a complete refactoring of AsyncHttpClient.
I’ll end up pinning this issue down for sure.

Just one question: do you use keep-alive?

Yes, my properties are

allowPoolingConnections = true # Allow pooling HTTP connections (keep-alive header automatically added)
allowPoolingSslConnections = true # Allow pooling HTTPS connections (keep-alive header automatically added)

And you don’t explicitly set Connection:close header in your requests?

No, these are our most used headers:

Could you give RC5 a try, please?

Double-checking the results, I found a mistake: the results shown were from different webapps (strictly speaking, the same webapp but with different data attached to it).
I've run some comparative tests again between M3a and RC5 and now, at least in terms of HttpSessions, they are the same. I will keep working on the comparison.

Sorry for the confusion

That’s good news, this was driving me crazy…

Just for the record: I initially started this thread because I saw a difference in my app's response times between M3a/RC3 and RC4/RC5. At first, because of the misplaced results, I blamed the number of HttpSessions; after establishing that this wasn't the problem, I kept getting bad response times, until I found

compressionEnforced = true # Support gzipped responses

in the file gatling.conf

This property is false by default, and if I'm not wrong, it didn't exist in previous versions, or at least it was true by default. Once I changed it, everything worked fine no matter whether I used M3a, RC3, RC4 or RC5.

All right!!!

See: https://github.com/gatling/gatling/issues/2162
What happened is that in older versions of AsyncHttpClient (and Gatling), compression was automatically enabled. The only way to turn it off was in the conf file. This made it impossible to mix populations that enable/disable compression.

The proper comment should be: # Enforce gzip/deflate when Accept-Encoding header is not defined
Setting compressionEnforced to true reverts to the old behavior.

I thought the change would go mostly unnoticed, as the Recorder generates this header.
IMHO, the proper solution for you would be to add the proper Accept-Encoding header in your HttpProtocol, and leave compressionEnforced set to false.
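A sketch of that suggestion (this needs the Gatling 2 DSL on the classpath, so it isn't standalone, and the baseURL is hypothetical):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

val httpConf = http
  .baseURL("http://example.com")         // hypothetical
  .acceptEncodingHeader("gzip, deflate") // ask for compression explicitly
// ...and keep compressionEnforced = false in gatling.conf
```

This way, mixing populations that do and don't use compression stays possible.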

I'm very sorry for this; I should have advertised this change properly.

Note that you should experience another difference in response time: connect time is now properly accounted for in response time.