I was helping a colleague with a Gatling script for some pre-production testing - basically sending JSON over a REST API as part of a server-to-server communication.
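To give an idea of the shape of the script (this is a minimal sketch in the Gatling 2.2 Scala DSL - the host, path and payload are placeholders, not our real setup):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class JsonPostSimulation extends Simulation {

  // Hypothetical target; in reality this points at the pre-production server
  val httpProtocol = http
    .baseURL("http://target-server:8080")
    .contentTypeHeader("application/json")

  // Each virtual user executes exactly one request, so ~5,000 users
  // means ~5,000 sessions (and, as it turns out, ~5,000 sockets)
  val scn = scenario("Single JSON POST")
    .exec(
      http("send payload")
        .post("/api/endpoint") // placeholder path
        .body(StringBody("""{"key":"value"}""")).asJSON
        .check(status.is(200))
    )

  setUp(scn.inject(rampUsers(5000) over (60 seconds)))
    .protocols(httpProtocol)
}
```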
Initially we were puzzled that Gatling reported a few long-runners that did not show up in the server's access.log - re-running the same tests with JMeter showed that the long-runners are caused by the time it takes to establish the connection to the server (this is not directly visible in Gatling AFAIK).
So we had a look at the number of connections in use on the load-injector box and came across many connections in CLOSE_WAIT, which might explain the behaviour above:
- each and every request seems to leave one socket in CLOSE_WAIT (~5,000 after a short test run)
- the script starts a user and session and only executes a single request, so we create a lot of sessions and virtual users (~5,000)
- comparing the sockets in use with ApacheBench ("ab"), that test run leaves no connections behind, so the problem seems to stem from the Gatling script
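One thing we have not tried yet: ab's workers each reuse a single connection, and the closest Gatling equivalent I'm aware of is the `shareConnections` option on the HTTP protocol, which makes all virtual users draw from one shared connection pool instead of one pool per user. A sketch of that (assuming the 2.2 DSL, placeholder host):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// With shareConnections, all virtual users share one connection pool,
// much like ab's fixed set of workers, instead of each user opening
// (and then abandoning) its own connections.
val httpProtocol = http
  .baseURL("http://target-server:8080") // placeholder
  .shareConnections
```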
We set keepAlive=false in gatling.conf, which IMHO should do the trick, but it did not have the desired effect:
- far fewer connections are left behind, but the number still grows (roughly 80% fewer, though I have no hard numbers yet)
- we still don't see the ab behaviour, where 20 workers used a maximum of 21 sockets
- the test run suddenly had a lot more long-runners
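For reference, this is how we flipped the flag (gatling.conf excerpt; assuming the key sits under the ahc section as in the 2.2 default config):

```
gatling {
  http {
    ahc {
      keepAlive = false  # default is true; disables HTTP keep-alive
    }
  }
}
```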
Having said that, I'm a bit puzzled - what am I doing wrong in terms of HTTP connections and sockets?
Thanks in advance,
PS: My colleague is using Windows (he was given little choice) and Gatling 2.2.3 - not perfect