Best Way to Batch Process and Send Response Data in Gatling

Gatling version: 3.11.5
Gatling flavor: java
Gatling build tool: maven

Hi Gatling Community,

I’m looking for the best way to implement the following workflow with Gatling:

  1. Extract specific data from HTTP responses during the simulation.
  2. Store the extracted data temporarily.
  3. Once a batch of N responses (e.g., 100) is collected, send the aggregated data to InfluxDB for further analysis.
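For context, here is roughly what I have in mind for steps 1 and 2 in the Java DSL: a `jsonPath` check with `saveAs` to pull the value out of the response, then an `exec(Session -> ...)` that pushes it into a shared thread-safe collector. This is only a fragment, not a complete `Simulation`; `$.latencyMs` and `batcher` are placeholder names.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;

ScenarioBuilder scn = scenario("extract-and-batch")
    .exec(
        http("get item").get("/api/item")
            // hypothetical JSON field; save the extracted value into the session
            .check(jsonPath("$.latencyMs").saveAs("latencyMs")))
    .exec(session -> {
        // hand the extracted value to a shared, thread-safe collector
        batcher.add(session.getString("latencyMs"));
        return session;
    });
```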

I’m looking for an efficient and scalable way to handle this while ensuring thread safety and avoiding performance bottlenecks.

Some approaches I’ve considered:

  • Using a ConcurrentLinkedQueue or a similar thread-safe structure to aggregate the data.
  • Writing a custom action or handler to process and send the batch when the threshold is met.
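To make the first idea concrete, here is a minimal, framework-agnostic sketch of the queue-plus-threshold approach (plain Java, no Gatling API): a generic batcher backed by a `ConcurrentLinkedQueue` that hands a full batch to a flush callback, which in my case would be the InfluxDB write. An `AtomicInteger` tracks the count because `ConcurrentLinkedQueue.size()` is O(n). The class and method names are mine, not anything from Gatling.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

public class Main {

    // Thread-safe batcher: virtual users call add(); once `threshold` items
    // have accumulated, a batch is drained and passed to `flushAction`
    // (e.g. a write to InfluxDB).
    static class ResponseBatcher<T> {
        private final Queue<T> queue = new ConcurrentLinkedQueue<>();
        // separate counter, since ConcurrentLinkedQueue.size() is O(n)
        private final AtomicInteger added = new AtomicInteger();
        private final int threshold;
        private final Consumer<List<T>> flushAction;

        ResponseBatcher(int threshold, Consumer<List<T>> flushAction) {
            this.threshold = threshold;
            this.flushAction = flushAction;
        }

        void add(T item) {
            queue.add(item);
            // only the thread that crosses the threshold triggers a flush
            if (added.incrementAndGet() % threshold == 0) {
                drain(threshold);
            }
        }

        // Call once at the end of the run to push any remaining items.
        void flushRemaining() {
            drain(Integer.MAX_VALUE);
        }

        private void drain(int max) {
            List<T> batch = new ArrayList<>();
            T item;
            while (batch.size() < max && (item = queue.poll()) != null) {
                batch.add(item);
            }
            if (!batch.isEmpty()) {
                flushAction.accept(batch); // e.g. the InfluxDB write
            }
        }
    }

    public static void main(String[] args) {
        List<List<String>> batches = new ArrayList<>();
        ResponseBatcher<String> batcher = new ResponseBatcher<>(3, batches::add);
        for (int i = 1; i <= 7; i++) {
            batcher.add("resp-" + i);
        }
        batcher.flushRemaining();
        System.out.println(batches.size());        // number of batches sent
        System.out.println(batches.get(0).size()); // size of the first batch
    }
}
```

The flush callback would be where the actual InfluxDB client call goes; keeping it as a `Consumer<List<T>>` means the batching logic can be unit-tested without a database.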

Questions:

  1. Is there a standard or recommended pattern in Gatling to achieve this?
  2. How can I ensure the solution is performant, especially for high-throughput simulations?
  3. Are there better alternatives for managing this workflow in Gatling without interfering with the simulation’s flow?

Any guidance, examples, or best practices would be greatly appreciated!

Thanks in advance,
Wojciech Gaudnik
