I think I have come across a possible memory leak. I am accessing a web page and for some reason Gatling gets stuck on it. There are some AJAX calls to google-analytics etc., which I disabled, but that didn't help. As soon as I hit that page, CPU climbs to 60-70% and I can see RAM increasing up to 700k with just a single user.
After turning on logging, I noticed that I received a 200 OK response along with the full response body. I think AHC may be behaving erratically here.
I have attached the logs and console output. I would appreciate it if any of the devs could suggest a workaround.
The script is fairly straightforward; here is a snippet:
```scala
val httpProtocol = http
  .baseURL("http://www.qa.mycompany.org")
  //.inferHtmlResources()
  .acceptHeader("""*/*""")
  .acceptEncodingHeader("""gzip, deflate, sdch""")
  .acceptLanguageHeader("""en-US,en;q=0.8""")
  .connection("""keep-alive""")
  .userAgentHeader("""Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36""")
  .disableResponseChunksDiscarding
  .disableWarmUp

object StaticUrl {
  val headers_rules = Map(
    """Cache-Control"""     -> """max-age=0""",
    """If-Modified-Since""" -> """Thu, 04 Dec 2014 02:25:53 GMT""",
    """If-None-Match"""     -> """1417832753-1""")

  val rulesPage = exec(http("rules")
    .get("""/industry/mycompany-rules""")
    .headers(headers_rules))
}

val randomLUsers = scenario("Links").during(120 seconds) {
  exec(StaticUrl.rulesPage)
}

setUp(randomLUsers.inject(atOnceUsers(1)))
  .protocols(httpProtocol)
```
I was able to suppress the issue with exec(flushHttpCache) or by using constantUsersPerSec.
Whatever cache-control is employed now or in the future, it should not make Gatling spin up CPU and memory. The desired behavior should be: if it reads from the cache, then report the cache numbers. In my case, without flushing the cache, I got a 5600 ms response time and only 1 request/response during the 2-minute loop.
After I flushed the cache, I got a 240 ms response time with 113 requests.
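For reference, the flushing workaround looks roughly like this in the scenario (a sketch against the Gatling 2.x DSL; the pause duration is illustrative, not part of my original script):

```scala
// Flush the per-user HTTP cache before each iteration so every pass
// issues a real request instead of spinning in an empty cache-hit loop.
val randomLUsers = scenario("Links").during(120 seconds) {
  exec(flushHttpCache)
    .exec(StaticUrl.rulesPage)
    .pause(2 seconds) // illustrative pause between iterations
}
```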
Thanks for pointing me in the right direction. But I think that Gatling should behave the same way a browser would, as it does in all the other scenarios. If the user reads it from cache, so be it.
@Alex: this Cache-Control header is something Abhinav set on the REQUEST, which is plain wrong.
@Abhinav: there's no issue on the Gatling side: your response has both an Expires header and a max-age, causing further requests to be served from the cache without any HTTP request, just like a browser would. You didn't even set a pause, so you end up with some kind of empty loop, causing your CPU to spin.
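The freshness logic described above mirrors RFC 7234: when a cached response carries both a `Cache-Control: max-age` directive and an `Expires` header, `max-age` takes precedence, and no request goes on the wire while the entry is fresh. A minimal model of that decision in plain Scala (hypothetical names, not Gatling's actual cache implementation):

```scala
// Minimal model of HTTP response freshness (RFC 7234):
// max-age takes precedence over Expires; a fresh entry is
// served from cache without any network request.
case class CachedResponse(
    storedAtMillis: Long,          // when the response was cached
    maxAgeSeconds: Option[Long],   // from Cache-Control: max-age=N
    expiresAtMillis: Option[Long]  // from the Expires header
)

def isFresh(entry: CachedResponse, nowMillis: Long): Boolean = {
  val ageSeconds = (nowMillis - entry.storedAtMillis) / 1000
  entry.maxAgeSeconds match {
    case Some(maxAge) => ageSeconds < maxAge      // max-age wins
    case None         => entry.expiresAtMillis.exists(nowMillis < _)
  }
}
```

So a response cached with max-age=300 stays fresh for five minutes even if its Expires timestamp has already passed, which is exactly why the looping user never hits the network.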
Not sure about the behavior you’re trying to simulate: one user flushing its cache and hitting the same page over and over again at the speed of light?!
I removed the Cache-Control headers from my request, added a 2-second pause, and ran the tests. As you said, CPU usage was normal, but it didn't give me the appropriate result.
The test reported only one request in 2 minutes, which took 1900 ms to respond. See testFile2.log.
Then I re-ran the test with flushHttpCache and saw about 1800 requests with a 90 ms response time, which is congruent with browser load time.
I am using the closed model here because that is what the team requested.
So, the question is: how can I send requests without flushing the cache and still get the correct behavior?
Ah, I had missed that. I guess the intent is to bypass potential proxy caches.
Thanks for the pointer. Still not sure if we should support this. This really looks like a corner case to me. WDYT?
I am using flushHttpCache to work around the situation. I do not wish to disableCaching globally, as some users are designed to exec other requests.
The problem with some of the static pages is that they usually have some caching enabled. I initially looped through them in a feeder without any headers. I discovered these caching issues when Gatling reported unusually high response times, and I recorded each one of them to find out what's going on. A lengthy process, but I am still a newbie.
This issue only affects the closed model, which is what a lot of existing load tests involve (thanks to LoadRunner and JMeter). I leave it up to you whether to support it or not. If you need my help with verification, I am available.
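A middle ground between flushing on every iteration and disabling caching globally might be to attach a caching-disabled protocol to just this one population, since Gatling 2 lets each population carry its own protocols in setUp. A hedged sketch (`otherUsers` is illustrative, not from my script):

```scala
// Only this population bypasses the HTTP cache; other users keep it.
val cachingOff = httpProtocol.disableCaching

setUp(
  randomLUsers.inject(atOnceUsers(1)).protocols(cachingOff),
  otherUsers.inject(atOnceUsers(5)).protocols(httpProtocol)
)
```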