Out of memory issue

Hi guys,

I am running into out-of-memory issues with my Gatling script (version 2.0.0-SNAPSHOT) when running a high-load endurance test (48 hours). After approximately 12 hours I see throughput diminishing, and eventually Gatling runs out of memory. Based on a post I found on Google Groups, I use the following JVM arguments:


Could the out-of-memory error be caused by these JVM settings? Or is it somehow due to the way my script is set up (a randomSwitch inside a during loop)?
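[Editor's note: the actual JVM arguments did not survive in the post. For illustration only, a commonly used baseline for long Gatling runs looks like the fragment below; the heap sizes are assumptions, not the poster's real values, and the heap-dump flags are standard HotSpot options that make a later OOM easier to diagnose.]

```shell
# Illustrative JAVA_OPTS for a long endurance run (heap sizes are assumptions,
# NOT the poster's actual settings).
# -XX:+HeapDumpOnOutOfMemoryError writes a .hprof file automatically when the
# JVM throws OutOfMemoryError, so the failure can be analyzed afterwards.
export JAVA_OPTS="-Xms2g -Xmx2g \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/gatling-oom.hprof"
```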

My script looks like this:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class Search extends Simulation {

println("Heap: " + (Runtime.getRuntime().maxMemory() / 1024 / 1024))

val httpProtocol = http
  .acceptEncodingHeader("gzip, deflate")
  .userAgentHeader("Mozilla/5.0 (Windows NT 5.2; rv:7.0.1) Gecko/20100101 Firefox/7.0.1")

val headers_1 = Map("Content-Type" -> "application/json; charset=utf-8")

val headers_8 = Map(
  "Content-Type" -> "application/json; charset=utf-8",
  "X-http-Method-Override" -> "PUT")

val headers_12 = Map(
  "Accept-Encoding" -> "gzip,deflate",
  "Content-Type" -> "application/json; charset=utf-8",
  "X-HTTP-Method-Override" -> "PUT")

val step01 =
  // (http request chain elided in the original post)
  .exec(session =>
    session.set("currentTimestamp", System.currentTimeMillis))

val step02 =

val step03 =



val step04 =



val step05 =



val step06 =





val scn = scenario("search")
  .during(172800) {
    randomSwitch(
      // remaining weighted branches elided in the original post
      30.0 -> exec(step01, step02, step03, step04, step05, step06))
  }

Hi Daniel.

That’s weird, your simulation is pretty basic.
Would it be possible for you to share your heap dump, please?



Sure, but the question is how :slight_smile:

Compress, then Dropbox like, maybe?

Yes, I see I have enough room in my Dropbox. What email address should I use to share it with you?

Mine, please: slandelle@excilys.com

Nice work, Daniel :slight_smile:

It should be fine now: https://github.com/excilys/gatling/issues/1997

Things will be even better once I’ve released and integrated AHC 1.9.

Thanks a lot for reporting!

You’re welcome! Love to report issues that are fixed the next day :wink:

Thanks for your swift action!

I will get back to you soon with feedback on your fix.



Crossing fingers :slight_smile:

Hi Stephane,

I did a rerun of the test last night. Gatling is still running out of memory, but the good news is that it is taking longer now. Unfortunately I did not have time to wait for the actual out-of-memory error, so I don't have another heap dump for you. Reading issue 1997 made me realize, though, what might be the root cause of the problem. One of the URLs in the script is unique on every call, because it contains a dynamically generated timestamp that I set every time the call is made:

.exec(session =>
  session.set("currentTimestamp", System.currentTimeMillis))

If there is no limit on Gatling's cache size, this would eventually always result in running out of memory. Could this be the case? I'm currently re-running the test with caching disabled, and I see the script is now using at most 512 MB.
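[Editor's note: if the unbounded response cache is indeed the culprit, Gatling 2 lets you switch it off on the HTTP protocol builder. A minimal sketch of what "caching disabled" looks like, reusing the protocol settings from the script above:]

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Same protocol as in the original script, with the HTTP cache disabled so
// that per-URL cache entries cannot pile up over a 48-hour run.
val httpProtocol = http
  .disableCaching
  .acceptEncodingHeader("gzip, deflate")
```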



Hi Daniel,

FYI, although this issue has been worked out: you don't have to wait for the OOM error to get a useful heap dump if the leak is slow.

There are various ways to get a heap dump from a running JVM that you suspect is using too much memory: jmap, gcore/gdb, etc.

Typically you grab the diagnostics (thread dumps / heap dump) first; if they turn out not to be needed, you can delete them afterwards.
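[Editor's note: for a HotSpot JVM, the usual commands look like the following, where `<pid>` stands for the Gatling process id found with `jps`:]

```shell
# Find the Gatling JVM's process id
jps -l

# Binary heap dump of live objects (forces a full GC first)
jmap -dump:live,format=b,file=gatling-heap.hprof <pid>

# Thread dump taken at the same time
jstack <pid> > gatling-threads.txt
```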




Things should be better now.

@daniel: some JVMs will produce a heap dump if you send them a kill -3 signal. I'm not sure whether the Sun JVM does, but it works for WebSphere at least.