Adding/removing service instances during load testing

Hi there

First of all, Gatling is great: I was able to identify a load issue in my service after about an hour of mucking around with the DSL - nice! Ten hours later I had Gatling plus a basic HTTP server wrapped in a Docker container, plus a script that launches X instances of the container in our PaaS, kicks off the same load simulation on each simultaneously (via the HTTP server), polls for completion, downloads the results, concatenates them and generates a report. What’s more, this is extensible: all I need to do is extend the base container and add my simulation, and I’m done - a generic load-testing framework. Beautiful!

But now I am hitting a problem. I have a cluster of service instances that I want to load test; these instances cluster with each other, sharing data via a partitioned cache. I want to understand, from a client’s point of view, the effect of adding and removing cluster members while under load, since the cache will be re-partitioning under the hood as instances come and go.

The way our clients behave is that they hold a list of service instances, like the baseURLs we can set up in a Gatling simulation. If a client gets an error from one service instance, it round-robins to the next one in the list rather than treating it as a comms failure (up to a maximum retry count) - we expect service instances to be unavailable at certain times, to support zero-downtime deployments and moving services to different hosts for maintenance etc. The baseUrls are also refreshed periodically from service discovery so that new service instances can be found.
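To illustrate, here is a rough sketch of that failover behaviour - not our real client code; `call` is a placeholder for whatever actually performs the request:

```scala
// Sketch of the failover described above: try each instance in turn,
// round-robinning through the list, until one succeeds or we hit the
// retry limit. `call` is a placeholder for the real request logic.
def callWithFailover(urls: Vector[String], maxRetries: Int)(call: String => Boolean): Boolean = {
  var attempt = 0
  var ok = false
  while (!ok && attempt < maxRetries) {
    ok = call(urls(attempt % urls.size)) // an error just moves us to the next instance
    attempt += 1
  }
  ok
}
```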

From what I can see, I can’t currently simulate this in Gatling. What I would like is to be able to plug in a “URL provider” that integrates with our service discovery, and to set a maximum number of retries before a request is considered failed - then I could use both for these failover tests.

So, has anyone hit upon this? And if so how did you work around it? Or any suggestions where I could extend Gatling to allow this? Happy to submit my changes back as a PR if/when.



I just answered some of my own question: it looks like I can use tryMax as shown here, which at least lets me stop instances. But I can’t add new ones, because I can’t predict what IP address a new instance will be running on - our PaaS assigns the IP on start-up.



You’d have to craft the URLs with a function, and have a background task that performs the service discovery periodically.
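For what it’s worth, a sketch of that idea might look like the following (Gatling 2-style DSL; `ServiceDiscovery.endpoints()` is a placeholder for your own discovery client, and the exact `Expression` plumbing for the URL function may differ between Gatling versions):

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.{AtomicInteger, AtomicReference}

object Urls {
  // Current view of the cluster, refreshed in the background.
  // ServiceDiscovery is a placeholder for your own discovery client.
  private val urls = new AtomicReference[Vector[String]](ServiceDiscovery.endpoints())
  private val counter = new AtomicInteger(0)

  Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
    new Runnable { def run(): Unit = urls.set(ServiceDiscovery.endpoints()) },
    30, 30, TimeUnit.SECONDS)

  // Round-robin over whatever instances discovery currently knows about.
  def next(): String = {
    val current = urls.get()
    current(counter.getAndIncrement() % current.size)
  }
}

// Build an absolute URL per request instead of relying on baseURLs.
val statusCheck = http("status").get(session => Urls.next() + "/status")
```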

Ah, so simple! I’ll give that a try - time for a Scala tutorial…

Thanks Stéphane!

Actually, it looks like tryMax() doesn’t work as I expected. I have my scenario set up like so:

val apps = tryMax(3) {
  // request chain elided in the original post
}
I have 3 URLs set up in http.baseURLs, and I see from the report that a third of my requests fail. I was hoping that a retry inside a tryMax() loop would round-robin to the next URL in the list, but it looks like each retry goes against the same server. Of course it makes sense that it doesn’t, since you may want the whole chain to run against a single instance if there is state etc.

Anyway, is there a way to get tryMax(), or something else, to round-robin on failure and only report the request as failed once it hits the max retries without succeeding? This would be a cool feature at the HttpProtocol level - at least for my use case.
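In the meantime, one workaround sketch (the session attribute names are made up, and this assumes a Gatling 2-style DSL): keep an attempt counter in the session and use it to pick a different base URL on each pass through the tryMax loop, so every retry hits the next instance.

```scala
// Placeholder URLs - in reality these would come from service discovery.
val urls = Vector("http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080")

val scn = scenario("failover")
  .exec(_.set("attempt", 0))
  .tryMax(3) {
    exec { session =>
      // Rotate to the next base URL on every attempt.
      val i = session("attempt").as[Int]
      session.set("url", urls(i % urls.size)).set("attempt", i + 1)
    }
    .exec(http("app request").get("${url}/some/path"))
  }
```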



I should’ve mentioned that, of the 3 URLs in baseUrls, only 2 are actually running; the 3rd is down - hence “a third of my requests fail”.

Hi Stéphane

Any thoughts on the tryMax question?

Also, I’ve been looking at “crafting the URLs as a function”. From what I can see, I would need to change HttpProtocol and HttpProtocolBuilder to accept a function as baseUrls (currently only String, String* and List[String] are allowed) - is that right, or is there an easier way to do this?



I’m not sure if this can help, but take a look at


Here are the docs:

The parameter you pass here as the HTTP_URL will override the baseURLs, assuming you pass an absolute URL rather than a relative one (e.g. /).
Then you could have a feeder, or a Scala collection (a List of URLs), and iterate over them using either multiple users (the feeder alternative) or a forEach loop (the collection alternative).

The feeder could return the different URLs - and only those that are ‘up’. Here are the docs for the feeders:
I’m sure you can come up with a creative way to use one of the built-in feeders (e.g. RecordSeqFeederBuilder) and make it work for your use case.

The collection could be generated from an environment variable (e.g. split on commas). Then again, you could do the same with a CSV feeder or a JSON feeder and pre-generate the CSV or JSON file before the simulation - or, even better, have the feeder pull the data directly from an HTTP service using the JSON feeder.
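For example, a circular in-memory feeder of absolute URLs (hardcoded here as placeholders; the same Maps could equally be produced from an env var, a CSV file, or the JSON feeder) might look like:

```scala
// An absolute URL in the request overrides baseURLs, so feeding one per
// virtual user spreads the load over the instances that are "up".
val urlFeeder = Array(
  Map("serviceUrl" -> "http://10.0.0.1:8080"),
  Map("serviceUrl" -> "http://10.0.0.2:8080")
).circular

val scn = scenario("via feeder")
  .feed(urlFeeder)
  .exec(http("status").get("${serviceUrl}/status"))
```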

Good Luck,

Hi Carlos

Thanks for the pointers, the json feeder looks like a really good option.