Merging reports across many simulations

Gatling version: 3.14.9
Gatling flavor: java kotlin scala javascript typescript
Gatling build tool: maven gradle sbt bundle npm

I made sure I’ve updated my Gatling version to the latest release.
I read the guidelines and the how-to-ask-a-question topics.
I provided an SSCCE (or at least, all information to help the community understand my topic).
I copied the output I observe and explained what I think it should be.

Hello,

I would like to know if it’s somehow possible to merge results from many simulations. I’m pretty sure some older Gatling versions had a guide describing a process like:

  • Launch tests without generating reports
  • Launch Gatling with the -ro option to generate reports from the provided simulation.log files

And here is my question: is this still available? Because I ran into an obstacle when trying to generate the report from 2 simulation.log files.

My use case is to distribute load across many hosts running on an EKS cluster. We are potentially facing some issues related to WebSocket connections being closed by the client (Gatling). I’ve already checked some threads about that, and it’s probably not related to the load injector but to the service itself, but I need to prove that.

Step by step reproduction:

  1. Paste all simulation.log files under the target/gatling/results directory:
target/gatling
├── LastRun.log
└── results
    ├── simulation1.log
    └── simulation2.log

  2. Trigger command: mvn gatling:test -Dgatling.reportsOnly=results
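One observation, hedged since it is inferred from the error below rather than confirmed by documentation: Gatling appears to look for a file named exactly simulation.log inside a single run directory, so renamed copies like simulation1.log won’t be found, and only one log per report run is picked up. A sketch of a layout that does regenerate a report from one log (the run folder name here is made up for illustration):

```shell
# Illustrative layout (run folder name is hypothetical): the reportsOnly
# property takes the name of ONE run directory containing simulation.log.
#
# target/gatling
# └── myrun-20260331120000000
#     └── simulation.log
#
mvn gatling:test -Dgatling.reportsOnly=myrun-20260331120000000
```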

Output:

mvn gatling:test -Dgatling.reportsOnly=results
[INFO] Scanning for projects...
[INFO] 
[INFO] ---------------< com.idemia.pivt:PIVT-performance-tests >---------------
[INFO] Building PIVT-performance-tests 1.0.0-SNAPSHOT
[INFO]   from pom.xml
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] >>> gatling:4.20.8:test (default-cli) > test-compile @ PIVT-performance-tests >>>
[INFO] 
[INFO] --- resources:3.3.1:resources (default-resources) @ PIVT-performance-tests ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /Users/mateusla/Work/Repos/pivt-performance-tests/pivt-performance-tests/src/main/resources
[INFO] 
[INFO] --- compiler:3.13.0:compile (default-compile) @ PIVT-performance-tests ---
[INFO] No sources to compile
[INFO] 
[INFO] --- resources:3.3.1:testResources (default-testResources) @ PIVT-performance-tests ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 60 resources from src/test/resources to target/test-classes
[INFO] 
[INFO] --- compiler:3.13.0:testCompile (default-testCompile) @ PIVT-performance-tests ---
[INFO] Nothing to compile - all classes are up to date.
[INFO] 
[INFO] <<< gatling:4.20.8:test (default-cli) < test-compile @ PIVT-performance-tests <<<
[INFO] 
[INFO] 
[INFO] --- gatling:4.20.8:test (default-cli) @ PIVT-performance-tests ---
12:03:17,060 |-INFO in ch.qos.logback.classic.LoggerContext[default] - This is logback-classic version 1.5.20
12:03:17,060 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - No custom configurators were discovered as a service.
12:03:17,060 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - Trying to configure with ch.qos.logback.classic.joran.SerializedModelConfigurator
12:03:17,061 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - Constructed configurator of type class ch.qos.logback.classic.joran.SerializedModelConfigurator
12:03:17,064 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.scmo]
12:03:17,064 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.scmo]
12:03:17,069 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - ch.qos.logback.classic.joran.SerializedModelConfigurator.configure() call lasted 3 milliseconds. ExecutionStatus=INVOKE_NEXT_IF_ANY
12:03:17,069 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - Trying to configure with ch.qos.logback.classic.util.DefaultJoranConfigurator
12:03:17,069 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - Constructed configurator of type class ch.qos.logback.classic.util.DefaultJoranConfigurator
12:03:17,071 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback-test.xml] at [file:/Users/mateusla/Work/Repos/pivt-performance-tests/pivt-performance-tests/target/test-classes/logback-test.xml]
12:03:17,128 |-INFO in ch.qos.logback.classic.model.processor.ConfigurationModelHandlerFull - Main configuration file URL: file:/Users/mateusla/Work/Repos/pivt-performance-tests/pivt-performance-tests/target/test-classes/logback-test.xml
12:03:17,128 |-INFO in ch.qos.logback.classic.model.processor.ConfigurationModelHandlerFull - FileWatchList= {/Users/mateusla/Work/Repos/pivt-performance-tests/pivt-performance-tests/target/test-classes/logback-test.xml}
12:03:17,128 |-INFO in ch.qos.logback.classic.model.processor.ConfigurationModelHandlerFull - URLWatchList= {}
12:03:17,130 |-WARN in ch.qos.logback.core.model.processor.AppenderModelHandler - Appender named [CONSOLE] not referenced. Skipping further processing.
12:03:17,130 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [FILE]
12:03:17,130 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
12:03:17,136 |-INFO in ch.qos.logback.core.model.processor.ImplicitModelHandler - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
12:03:17,159 |-INFO in ch.qos.logback.core.FileAppender[FILE] - File property is set to [target/gatling/LastRun.log]
12:03:17,159 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting level of logger [io.gatling.http.engine.response] to DEBUG
12:03:17,159 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting level of logger [io.gatling.http.action.ws.fsm.WsLogger] to DEBUG
12:03:17,159 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting level of logger [com.idemia.pivt.helpers.DebugHelper] to DEBUG
12:03:17,159 |-INFO in ch.qos.logback.classic.model.processor.RootLoggerModelHandler - Setting level of ROOT logger to DEBUG
12:03:17,160 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [FILE] to Logger[ROOT]
12:03:17,160 |-INFO in ch.qos.logback.core.model.processor.DefaultProcessor@75329a49 - End of configuration.
12:03:17,160 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@161479c6 - Registering current configuration as safe fallback point
12:03:17,160 |-INFO in ch.qos.logback.classic.util.ContextInitializer@6d2a209c - ch.qos.logback.classic.util.DefaultJoranConfigurator.configure() call lasted 91 milliseconds. ExecutionStatus=DO_NOT_INVOKE_NEXT_IF_ANY

Parsing log file(s)...
java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:118)
        at java.base/java.lang.reflect.Method.invoke(Method.java:580)
        at io.gatling.plugin.util.ForkMain.runMain(ForkMain.java:67)
        at io.gatling.plugin.util.ForkMain.main(ForkMain.java:35)
Caused by: java.lang.IllegalArgumentException: requirement failed: Could not locate log file for results.
        at scala.Predef$.require(Predef.scala:337)
        at io.gatling.charts.stats.LogFileReader$.apply(LogFileReader.scala:367)
        at io.gatling.app.RunResultProcessor.initLogFileData(RunResultProcessor.scala:52)
        at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:35)
        at io.gatling.app.Gatling$.start(Gatling.scala:93)
        at io.gatling.app.Gatling$.fromArgs(Gatling.scala:46)
        at io.gatling.app.Gatling$.main(Gatling.scala:40)
        at io.gatling.app.Gatling.main(Gatling.scala)
        at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
        ... 3 more
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  1.336 s
[INFO] Finished at: 2026-03-31T12:03:17+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.gatling:gatling-maven-plugin:4.20.8:test (default-cli) on project PIVT-performance-tests: Gatling failed.: ForkException -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

BR
Mateusz

Hello,

That’s not possible and that’s not the philosophy of Gatling.

In Gatling, you can use multiple concurrent scenarios in the same simulation.
Instead of having multiple simulations, each with 1 scenario, extract your scenarios and create simulations that combine them.
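As a sketch of that suggestion, assuming the Java DSL (the class, scenario, and endpoint names below are made up, not taken from the original post): two extracted scenarios can be wired into one simulation and injected concurrently, so a single run produces a single simulation.log and a single report.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// Hypothetical example: combine two scenarios in ONE simulation so that
// a single run (and a single simulation.log) covers both workloads.
public class CombinedSimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.invalid");

  ScenarioBuilder browse = scenario("Browse").exec(http("home").get("/"));
  ScenarioBuilder buy = scenario("Buy").exec(http("checkout").post("/checkout"));

  {
    setUp(
        // Each scenario gets its own open-model injection profile.
        browse.injectOpen(constantUsersPerSec(25).during(60)),
        buy.injectOpen(constantUsersPerSec(25).during(60))
    ).protocols(httpProtocol);
  }
}
```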

Regards

Thanks for the answer. I know that I can extract scenarios, but my case is different: I’m using the same scenario and the same injection profile on many hosts. My target (for example) is to reach a user arrival rate of 50 per second with an open model. My product is facing some issues, mostly related to premature closes of WebSocket connections. The product team is pushing the hypothesis that the load injector is closing those connections due to an overloaded host/JVM/injector.

To address that, I would like to split the injection profile and run 5 hosts, where all Maven jobs start at the same time and each builds traffic up to 10 users per second.
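The split above (5 hosts × 10 users/s = 50 users/s aggregate) could be parameterized per host instead of hard-coded; a minimal sketch, assuming a hypothetical `ratePerHost` system property that each Maven job would pass on its command line:

```java
// Hypothetical sketch (the property name "ratePerHost" is made up): each
// of the five Maven jobs is started with its share of the target rate, e.g.
//   mvn gatling:test -DratePerHost=10
// so that 5 hosts x 10 users/s add up to the 50 users/s aggregate target.
public class PerHostRate {
  // Rate this injector host should apply; defaults to 10 users/s.
  static int rate() {
    return Integer.getInteger("ratePerHost", 10);
  }

  public static void main(String[] args) {
    // prints "injecting 10 users/s on this host" when -DratePerHost is not set
    System.out.println("injecting " + rate() + " users/s on this host");
  }
}
```

The returned value would then feed the injection profile, e.g. `constantUsersPerSec(rate())`.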

In the end, I would like to see a single report with the details.

Distributed load testing is only featured in Gatling Enterprise.

premature close of websocket connections. Product team is pushing hypothesis that load injector is closing such connections due to overloaded host/jvm/injector itself

Premature close = the server closes the connection before sending a response. That’s a network or server issue.

So the option described in this thread is already deprecated?

Yes, for a long time.

Okay, thank you very much