High RPS for a long simulation run - How to deal with huge "simulation.log" files

Hello,

Using Gatling 2.2.2, I have built a distributed system (with a custom-made web server) that uses the “simulation.log” file from each load instance/machine to aggregate results and build the test reports (the -ro option).
I absolutely love the Gatling reports, and everything works great except that the “simulation.log” files can get seriously HUGE at the end of an overnight / weekend run when you are testing systems at thousands of requests per second.
I have shortened the request and scenario names to write as little as possible to “simulation.log”, but the files are still going to be extremely big.
Also, the gatling -ro command takes a very long time to process simulation.log files that are several tens of GB.
Do you have any suggestion / hint on how to deal with them?
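One generic mitigation, not specific to Gatling itself: since simulation.log is plain text, it compresses very well, so gzipping each machine’s log before transferring it to the aggregator saves most of the bandwidth and storage. The paths below are hypothetical placeholders; you would decompress again before running gatling -ro, which needs the plain file.

```shell
# Hypothetical layout: one results directory per load-generator machine.
mkdir -p results/run-demo
# Stand-in for a real simulation.log line, just so the sketch is runnable.
printf 'REQUEST\tscenario\trequest_1\tOK\n' > results/run-demo/simulation.log

# Compress at maximum level before shipping the file to the aggregator.
gzip -9 results/run-demo/simulation.log

# On the aggregation side, decompress before invoking `gatling -ro`.
gunzip results/run-demo/simulation.log.gz
```

This does not shrink the uncompressed file that -ro ultimately has to parse, but it makes moving tens of GB of logs between machines far cheaper.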

Since the reports are much smaller than the simulation logs, is it possible to aggregate multiple of them (and then maybe delete the simulation log)?

Any help or hint would be really appreciated,
Thanks in advance,
Cheers,
Luca

Since the reports are much smaller than the simulation logs, is it possible to aggregate multiple of them (and then maybe delete the simulation log)?

No, it’s not possible and it doesn’t make sense.

Such a use case is one of the reasons we developed FrontLine, our commercial product.