Coordinating data across injectors

Hi, we are trying to test some API scenarios in Gatling Enterprise and are looking for suggestions.

The situation is that we have a single privileged user that we can use to log in and get an access token. This token is required by the APIs in the test scenario and will expire many times (e.g. every 5 minutes) during a load test. The load test could last for hours, but the access token cannot be refreshed more than 3 times in any given 5-minute period.

There have been several posts about similar scenarios, and we could achieve it with pseudo-code like this:

@volatile var token = "" // written by the refresh user, read by every test user on this injector

def refresh = scenario("refresh")
  .during(loadTestDuration) {
    exec(
      // calls to get the access token, saving it into the session
    )
    .exec { session =>
      token = // grab the token from the session
      session
    }
    .pause(4.minutes) // requires import scala.concurrent.duration._
  }

def testScenario = scenario("test APIs")
  .feed(…)
  .exec(session => session.set("accessToken", token))
  .exec(…)

setUp(
  refresh.inject(atOnceUsers(1)).noShard,
  testScenario.inject(…)
)

This works fine locally: the token is refreshed every 4 minutes and picked up by the virtual users running the test scenario.

Once we move it to our Gatling Enterprise instance, it fails because it’s part of a larger test suite running on multiple injectors. Each (distributed) injector sends its own refresh requests, which exceeds the limit on refreshing the access token.

Is there a way for the distributed injectors to share data, so that only one instance makes the refresh calls and the token is updated on all the injectors? Is there anything built into Gatling that will help us accomplish that? If not, can we make use of the underlying Akka cluster somehow?

Sorry, but at the moment Gatling Enterprise injectors can’t talk to each other (and it’s not an Akka cluster). Your options:

  • have as many privileged users as you have injectors (or one for every 3 injectors, since each user can refresh 3 times per 5 minutes), so each node can get its own token.
  • if you’re using the self-hosted version, implement the sharing yourself: have the first node grab the token and publish it to a shared store (e.g. an S3 bucket or Redis), and have the other nodes pull it from there (see the sketch below).
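
For the second option, here is a rough sketch (not a built-in Gatling feature), assuming a Redis instance that every injector can reach, the Jedis client on the classpath, and that a single-user injection profile gets sharded onto only one injector; the Redis host, key name, endpoints and injection profiles below are placeholders:

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import redis.clients.jedis.JedisPool

class SharedTokenSimulation extends Simulation {

  val loadTestDuration = 2.hours // placeholder
  val tokenKey = "loadtest:accessToken" // placeholder Redis key
  val redisPool = new JedisPool("redis.internal.example.com", 6379) // placeholder host

  val httpProtocol = http.baseUrl("https://api.example.com") // placeholder

  // Refreshes the token and publishes it to Redis. A single injected user means that,
  // once Enterprise shards the profile, only one injector should be running it.
  val publishToken = scenario("publish token")
    .during(loadTestDuration) {
      exec(
        http("refresh token")
          .post("https://auth.example.com/oauth/token") // placeholder auth endpoint
          .formParam("grant_type", "client_credentials")
          .check(jsonPath("$.access_token").saveAs("newToken"))
      )
      .exec { session =>
        val jedis = redisPool.getResource
        try jedis.set(tokenKey, session("newToken").as[String])
        finally jedis.close()
        session
      }
      .pause(4.minutes)
    }

  // Every injector pulls the current token from Redis before calling the APIs under test.
  val testScenario = scenario("test APIs")
    .exec { session =>
      val jedis = redisPool.getResource
      try session.set("accessToken", jedis.get(tokenKey))
      finally jedis.close()
    }
    .exec(
      http("call API")
        .get("/some/endpoint") // placeholder
        .header("Authorization", "Bearer #{accessToken}") // Gatling EL (3.7+); ${accessToken} on older versions
    )

  setUp(
    publishToken.inject(atOnceUsers(1)),
    testScenario.inject(constantUsersPerSec(5).during(loadTestDuration)) // placeholder profile
  ).protocols(httpProtocol)
}

If hitting Redis at the start of every virtual user is too chatty, the pull side can cache the token in a local @volatile var and only re-read it every minute or so.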

Thank you for the response. We were already looking into your second suggestion and were just hoping there was an option we were missing. Cheers!