Is there a mechanism for caching large data objects in Gatling rather than in each session?

Is there a means to expose a large data object to many clients without storing it in every session?

As an example, a web application caches 10MB of data in the user’s session cache to reduce network chatter.
This behaviour is not hard to emulate: simply hit the same endpoint and save the deserialized object in the user's session.

However, this creates a problem when simulating at large user scale: the data duplicated across user sessions grows far faster than the load those users generate, and Gatling's ability to simulate users becomes capped by the memory required to store 10MB per session.

So, is there a way to retrieve this data once, before launching a simulation, and expose it to each user via a function stored on the session? Otherwise, the memory footprint of each user's session severely limits Gatling's ability to test at scale.

I'm aware that removing the call that loads this object from the service from the simulated users widens the gap between the test and reality, but I can test that call separately (validate the response and discard it, which is trivial to do at high scale as well).

Thanks

You can write whatever code you want in your Simulation, so, for example, you could perform an HTTP request that fetches your data once, and later add an exec(function) that populates your users' sessions with it.
You just can't use the Gatling DSL for this fetch (it wouldn't be part of the load test).
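For illustration, a minimal sketch of that pattern, assuming the Gatling 2.x Scala DSL; the endpoint URL, scenario name, and the "bigData" session key are placeholders. The object is fetched once when the Simulation class is constructed, and every virtual user only stores a reference to that same object:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class SharedCacheSimulation extends Simulation {

  // Fetched once, outside the Gatling DSL, when the Simulation is instantiated.
  // "http://example.com/big-data" is a placeholder for the real endpoint.
  val sharedData: String = scala.io.Source.fromURL("http://example.com/big-data").mkString

  val httpProtocol = http.baseURL("http://example.com")

  val scn = scenario("shared cached data")
    // Each virtual user stores only a reference to the shared object,
    // so the per-session memory cost is negligible.
    .exec(session => session.set("bigData", sharedData))
    .exec(http("some request").get("/some/endpoint"))

  setUp(scn.inject(atOnceUsers(100))).protocols(httpProtocol)
}
```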

If your request is very basic, you can consider the JDK's URLConnection.
You can also consider AsyncHttpClient, which Gatling uses underneath. If you do so, there's a singleton instance available at io.gatling.http.ahc.HttpEngine.instance
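For the plain-JDK route, a hedged sketch of such a one-off fetch with HttpURLConnection (the `DataLoader` object is just an illustrative helper, not a Gatling API; the AsyncHttpClient alternative isn't shown because its wiring depends on your Gatling version):

```scala
import java.io.ByteArrayOutputStream
import java.net.{HttpURLConnection, URL}

object DataLoader {
  // Reads the whole response body once; intended to be called from the
  // Simulation constructor, before any scenario starts running.
  def fetch(url: String): Array[Byte] = {
    val conn = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("GET")
    val in  = conn.getInputStream
    val out = new ByteArrayOutputStream()
    val buf = new Array[Byte](8192)
    try {
      var read = in.read(buf)
      while (read != -1) {
        out.write(buf, 0, read)
        read = in.read(buf)
      }
      out.toByteArray
    } finally {
      in.close()
      conn.disconnect()
    }
  }
}
```

In the earlier sketch, `val sharedData = DataLoader.fetch("http://example.com/big-data")` would then replace the `scala.io.Source` call.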

Cheers,

Stéphane