Reports Only Run - java.lang.OutOfMemoryError: Requested array size exceeds VM limit

I recently ran the same simulation on 6 boxes for about 24 hours at a total rate of about 2,000 requests per second. This generated a roughly 4 GB simulation.log file on each box. I would like to merge these into a single report, so I'm gathering the logs into one folder and running Gatling in reports-only mode (-ro {sim-folder}); see the sketch after the stack trace. However, each time I get the following error. Is there a workaround for this, or are we out of luck?

```
Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:569)
    at java.lang.StringBuffer.append(StringBuffer.java:369)
    at java.io.BufferedReader.readLine(BufferedReader.java:370)
    at java.io.BufferedReader.readLine(BufferedReader.java:389)
    at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
    at scala.collection.Iterator$JoinIterator.hasNext(Iterator.scala:211)
    at scala.collection.Iterator$ConcatIterator.hasNext(Iterator.scala:192)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.Iterator$ConcatIterator.foreach(Iterator.scala:168)
    at io.gatling.charts.stats.LogFileReader.firstPass(LogFileReader.scala:86)
    at io.gatling.charts.stats.LogFileReader.io$gatling$charts$stats$LogFileReader$$$anonfun$8(LogFileReader.scala:125)
    at io.gatling.charts.stats.LogFileReader$lambda$$x1$1.apply(LogFileReader.scala:125)
    at io.gatling.charts.stats.LogFileReader$lambda$$x1$1.apply(LogFileReader.scala:125)
    at io.gatling.charts.stats.LogFileReader.parseInputFiles(LogFileReader.scala:63)
    at io.gatling.charts.stats.LogFileReader.<init>(LogFileReader.scala:125)
    at io.gatling.app.LogFileProcessor.initLogFileReader(RunResultProcessor.scala:55)
    at io.gatling.app.LogFileProcessor.processRunResult(RunResultProcessor.scala:37)
    at io.gatling.app.Gatling.start(Gatling.scala:66)
    at io.gatling.app.Gatling$.start(Gatling.scala:57)
    at io.gatling.app.Gatling$.fromArgs(Gatling.scala:49)
    at io.gatling.app.Gatling$.main(Gatling.scala:43)
    at io.gatling.app.Gatling.main(Gatling.scala)
```
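For reference, the "merge" I mention is just gathering each box's simulation.log into the single folder I then pass to -ro. A rough sketch of that step is below; the box paths and the simulation-boxN.log naming are placeholders, and I haven't verified exactly which file names reports-only mode picks up.

```scala
import java.nio.file.{Files, Path, Paths, StandardCopyOption}

// Sketch only: copy each box's simulation.log into one results folder,
// which is then passed to "gatling.sh -ro". Paths and target names are
// placeholders; adjust them to your own layout and Gatling version.
object GatherLogs {
  def main(args: Array[String]): Unit = {
    val target: Path = Paths.get("results/merged-run")
    Files.createDirectories(target)

    // Hypothetical per-box locations of the generated logs.
    val sources = (1 to 6).map(i => Paths.get(s"/data/box$i/results/latest/simulation.log"))

    sources.zipWithIndex.foreach { case (src, i) =>
      // Distinct names so the six files do not overwrite each other.
      val dest = target.resolve(s"simulation-box${i + 1}.log")
      Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING)
    }
  }
}
```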

4 GB * 6 boxes = 24 GB

The standard Gatling reporting engine will have a very hard time parsing that amount of data.
The best workaround is FrontLine, which supports clustering out of the box and whose reporting engine is completely different and optimized for such use cases.
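
That said, if all you need in the meantime is a rough sanity check of the merged data, a single streaming pass that keeps only a few counters in memory can give you basic numbers without building the full report. This is only a sketch, not a Gatling tool: the column layout of simulation.log changes between versions, so the "REQUEST", "OK" and "KO" markers it looks for are assumptions you would have to verify against your own files.

```scala
import java.io.File
import scala.io.Source

// Sketch: stream one or more simulation.log files and tally request outcomes.
// Assumption (version-dependent): records are tab-separated, request records
// start with "REQUEST" and carry an "OK"/"KO" status column.
object LogSummary {
  def main(args: Array[String]): Unit = {
    var requests = 0L
    var ok = 0L
    var ko = 0L

    for (path <- args) {
      val source = Source.fromFile(new File(path))
      try {
        for (line <- source.getLines()) {
          val cols = line.split('\t')
          if (cols.headOption.contains("REQUEST")) {
            requests += 1
            if (cols.contains("OK")) ok += 1
            else if (cols.contains("KO")) ko += 1
          }
        }
      } finally source.close()
    }

    println(s"requests=$requests ok=$ok ko=$ko")
  }
}
```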

Regards,