I’ve been porting my tests from gatling-2.0.0-M3a to RC3 and RC4 and I noticed some changes that I want to understand:
Originally I wanted to upgrade from M3a because, when I ran a test at more than 3000 requests per minute, I started getting Remotely closed connections.
Since my servers looked fine, I suspected an issue on the client side, so I decided to move to RC3. After porting the whole suite, I ran the same test and started seeing:
10:37:11.934 [WARN ] i.g.h.a.AsyncHandlerActor - Request ‘HTML_READ’ failed: java.util.concurrent.TimeoutException: Request timed out to xxx/xx.xx.xx.xx:80 of 60000 ms
Instead of the Remotely closed connections; again, this only happens once the test goes above 3000 rpm. Fine, so my next step was to change the following settings in the ahc section of gatling.conf from 60000 to 120000:
connectionTimeout
idleConnectionInPoolTimeoutInMs
idleConnectionTimeoutInMs
requestTimeoutInMs
But nothing changed: the behavior stayed the same, and the TimeoutExceptions kept reporting “of 60000 ms”.
Finally I updated to RC4, and the results were even worse: more timeouts than with RC3, this time before even reaching the 3000 rpm mark. I also noticed that RC4 creates more connections when executing the same test.
To summarize, my questions are:
Are we having false Remotely Closed / TimeOut exceptions with the tool?
Why am I seeing such a difference in the number of connections created when running the same test on RC3 and RC4?
But nothing changed: the behavior stayed the same, and the TimeoutExceptions kept reporting “of 60000 ms”.
Props were renamed. Update your gatling.conf file (see gatling-default.conf inside gatling-core jar).
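For reference, here is a minimal sketch of the ahc section of gatling.conf with the renamed timeout keys as I remember them; treat the exact names as an assumption and verify them against gatling-default.conf for your version:

    ahc {
      connectTimeout = 120000               # was connectionTimeout
      pooledConnectionIdleTimeout = 120000  # was idleConnectionInPoolTimeoutInMs
      readTimeout = 120000                  # was idleConnectionTimeoutInMs
      requestTimeout = 120000               # was requestTimeoutInMs
    }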
Remotely Closed
Remotely closed used to happen a lot because the time window between the moment AHC fetched a connection from the pool and the moment it actually wrote the request was too wide, so the chances that the server had closed it in the meantime were higher.
Are we having false TimeOut exceptions with the tool?
Will investigate. Any chance you can provide a reproducer?
I’ve updated the props and now it’s working great.
Regarding the Remotely closed connections, I haven't found a cause yet, but looking at the results of some tests I can see a significant difference in the number of HttpSessions created during the test.
With M3a, the number of sessions across a two-day test was:
But with RC3 and RC4 (I still need to test RC5), the number keeps growing, and the test starts showing Remotely closed connections and Timeouts.
How are connection issues (Remotely Closed connections and Timeouts) related to HttpSessions in your application?
Is there any chance you’re using SSL session tracking instead of cookies?
I don’t know, I will dig into it to see if it’s related.
But if the webapp is the same, the tests are the same (apart from the refactoring required to port from M3a to RC5), and the applied load is the same, why am I seeing this difference? Is it possible that something internal changed that causes this difference in the number of sessions?
More than a year passed between M3a and RC1, so a lot has changed. Also, I'm in the middle of a complete refactoring of AsyncHttpClient.
I’ll end up pinning this issue down for sure.
Double-checking the results, I found a mistake: the results shown came from different webapps (strictly speaking, the same webapp but with different data attached to it).
I've re-run some comparative tests between M3a and RC5, and now, at least in terms of HttpSessions, they behave the same. I'll keep working on the comparison.
Just for the record: I initially started this thread because I saw a difference in my app's response times between M3a/RC3 and RC4/RC5. At first I blamed the number of HttpSessions for the skewed results; after establishing that this wasn't the problem, I kept getting bad response times until I found
compressionEnforced = true # Support gzipped responses
in the file gatling.conf
This property is false by default and, if I'm not wrong, it either didn't exist in previous versions or was true by default. Once I changed it, everything worked fine regardless of whether I use M3a, RC3, RC4 or RC5.
See: https://github.com/gatling/gatling/issues/2162
What happened is that in older versions of AsyncHttpClient (and Gatling), compression was automatically enabled. The only way to turn it off was in the conf file. This made it impossible to mix populations that enable/disable compression.
The proper comment should be: #Enforce gzip/deflate when Accept-Encoding header is not defined
Setting compressionEnforced to true reverts to the old behavior.
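To make the two states concrete (same key as above, illustrative values only):

    compressionEnforced = true   # old behavior: gzip/deflate requested even without an explicit Accept-Encoding header
    compressionEnforced = false  # new default: compression is only negotiated when the request itself sets Accept-Encoding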
I thought the change would mostly go unnoticed, as the Recorder generates this header.
IMHO, the proper solution for you would be to add the proper Accept-Encoding header to your HttpProtocol and leave compressionEnforced set to false, along the lines of the sketch below.
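For instance, a minimal sketch of such an HttpProtocol definition; the base URL and header value are placeholders, and the builder method names should be double-checked against your Gatling 2 version:

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    // Sketch only: send Accept-Encoding explicitly from the protocol
    // instead of enforcing compression globally in gatling.conf
    val httpProtocol = http
      .baseURL("http://example.com")           // placeholder base URL
      .acceptEncodingHeader("gzip, deflate")   // ask the server for compressed responses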
I'm very sorry for this; I should have advertised this change properly.
Note that you should also see another difference in response time: connect time is now properly accounted for in response time.