Memory leak

Sorry, I had to delete the other post.

I think I have come across a possible memory leak. I am accessing a web page and for some reason Gatling gets stuck on it. There are some AJAX calls to google-analytics etc., which I disabled, but that didn’t help. As soon as I hit that page, CPU climbs to 60-70% and I can see RAM increasing up to 700k with only a single user.

After turning on logging, I noticed that I received a 200 OK response along with the full response body. I think AHC may be behaving erratically here.

I have attached the logs and console output. I would appreciate it if any of the devs could suggest a workaround.

The script is fairly straightforward. Here is a snippet:

```scala
val httpProtocol = http
  .baseURL("http://www.qa.mycompany.org")
  //.inferHtmlResources()
  .acceptHeader("""*/*""")
  .acceptEncodingHeader("""gzip, deflate, sdch""")
  .acceptLanguageHeader("""en-US,en;q=0.8""")
  .connection("""keep-alive""")
  .userAgentHeader("""Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36""")
  .disableResponseChunksDiscarding
  .disableWarmUp

object StaticUrl {

  val headers_rules = Map(
    """Cache-Control""" -> """max-age=0""",
    """If-Modified-Since""" -> """Thu, 04 Dec 2014 02:25:53 GMT""",
    """If-None-Match""" -> """1417832753-1""")

  val rulesPage = exec(http("rules")
    .get("""/industry/mycompany-rules""")
    .headers(headers_rules))
}

val randomLUsers = scenario("Links").during(120 seconds) {
  exec(StaticUrl.rulesPage)
}

setUp(randomLUsers.inject(atOnceUsers(1)))
  .protocols(httpProtocol)
```

console_output.log (25.7 KB)

testFile.log (29.1 KB)

I’d say that your page has an Expires header, causing it to be cached.

So, what should I do about it? Disable caching?

Requests with Cache-Control max-age=0 should bypass the local cache, so this would be a Gatling bug if the response is served from cache… I think.

I was able to suppress the issue with exec(flushHttpCache) or by using constantUsersPerSec.
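
For reference, the open-model alternative was an injection along these lines (just a sketch assuming the Gatling 2 DSL; the rate and duration are placeholders, not my actual values):

```scala
// Open-model injection instead of a closed during() loop: each arriving virtual
// user starts with an empty cache, so the request is actually sent every time.
// 1 user per second for 120 seconds is a placeholder rate, not my real setting.
setUp(
  scenario("Links").exec(StaticUrl.rulesPage)
    .inject(constantUsersPerSec(1) during (120 seconds))
).protocols(httpProtocol)
```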

Whatever cache-control is employed now or in the future, it should not make Gatling spin up CPU and memory. The desired behavior should be: if it reads from the cache, then report the cache numbers. In my case, without flushing the cache, I got a 5600 ms response time and only 1 request/response in the 2-minute during loop.
After I flushed the cache, I got a 240 ms response time with 113 requests.

Thanks for pointing me in the right direction. But I think that Gatling should behave the same way a browser would, which Gatling does in all the other scenarios. If the user reads it from cache, so be it.

@Alex: this Cache-Control header is something Abhinav set on the REQUEST, which is plain wrong.

@Abhinav: there’s no issue on the Gatling side: your response has both an Expires header and a max-age, causing further requests to be served from cache without any HTTP request, just like a browser would. You didn’t even set a pause, so you end up with some kind of empty loop, causing your CPU to spin.
Not sure about the behavior you’re trying to simulate: one user flushing its cache and hitting the same page over and over again at the speed of light?!
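
For example, something like this would be enough (a rough sketch based on your snippet; the 2-second pause is just an arbitrary think time):

```scala
// Adding a think time inside the loop: cached responses return immediately,
// so without a pause the during() loop spins as fast as the CPU allows.
val randomLUsers = scenario("Links").during(120 seconds) {
  exec(StaticUrl.rulesPage)
    .pause(2 seconds) // arbitrary 2-second think time
}
```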

Cache-Control can be set on the request:

http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3

That’s all I was saying, not that it was correct.
The closed model is prone to this issue; I agree with you on that!

I removed the cache-control headers from my request, added a 2-second pause, and ran the tests. As you said, the CPU behaved normally, but it didn’t give me the expected result.

The test reported only one request in 2 minutes, which took 1900 ms to respond. See testFile2.log.

Then I re-ran the test with flushHttpCache and saw about 1800 requests with a 90 ms response time, which is consistent with the browser load time.
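
The re-run loop looked roughly like this (a sketch, not the exact script; keeping the 2-second pause alongside the flush is an assumption here):

```scala
// Flushing the virtual user's HTTP cache each iteration forces a real request
// instead of a cache hit, so every loop iteration shows up in the report.
val randomLUsers = scenario("Links").during(120 seconds) {
  exec(StaticUrl.rulesPage)
    .pause(2 seconds) // optional think time, kept from the previous run
    .exec(flushHttpCache)
}
```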

I am using the closed model here because that is what the team requested.

So, the question is: how can I send requests without flushing the cache and still expect the correct behavior?

testFile2.log (25.7 KB)

testFile3.log (25.7 KB)

Ah, I had missed that. I guess the intent is to bypass potential proxy caches.
Thanks for the pointer. Still not sure if we should support this. This really looks like a corner case to me. WDYT?

@Abhinav There’s a search engine in our documentation. Have you tried “flush cache”? You can also disable caching globally: http://gatling.io/docs/2.0.3/http/http_protocol.html#caching
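
For completeness, disabling caching globally is just a protocol-level switch (a sketch against the 2.0.x API referenced above):

```scala
// Protocol-level switch: ignores Expires/Cache-Control/ETag for every request
// made with this protocol, so nothing is ever served from the local cache.
val httpProtocol = http
  .baseURL("http://www.qa.mycompany.org")
  .disableCaching
```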

I am using flushHttpCache to work around the situation. I do not wish to use disableCaching globally, as some virtual users are designed to exec other requests.

The problem with some of the static pages is that they usually have some caching enabled. I was initially looping through them in a feeder without any headers. I discovered these caching issues when Gatling reported unusually high response times, and I recorded each one of them to find out what was going on. A lengthy process, but I am still a newbie.

This issue only affects the closed model, which is what a lot of existing load tests involve (thanks to LoadRunner and JMeter). I leave it up to you whether you want to support it or not. If you need my help with verification, I am available.

Thanks,
Abhinav

Sounds reasonable, thanks.