Best practices for running a set of stress tests

I’m looking for some references on best practices for working with Gatling. More specifically, I’m interested in the recommended way to set up a sequence of stress tests. I think my scenario is fairly standard. We are developing a RESTful API and want to check each endpoint (resource + verb) under stress. Assuming a standard SLA of 100 concurrent users each getting a response within 300 ms for each such endpoint, I would like to run an isolated test against each endpoint. That is, I don’t want to test all endpoints concurrently, as that would muddle up the results (not that it’s not interesting, it’s just not what this particular set of tests is for).

On top of this baseline, I have a fairly common case (for REST APIs) of having lots of CRUD involved, which means I would like some temporal dependency between my tests. That is, I’d like to create lots of resources first, then update some of these resources, then search for/get these resources, and at the end delete the resources.
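For what it’s worth, within a single Gatling scenario this kind of temporal ordering comes for free, since exec steps run in sequence per virtual user. A minimal sketch (endpoint paths, JSON bodies, and the 300 ms assertion are illustrative assumptions, not from this thread), using Gatling 2.x syntax:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CrudSimulation extends Simulation {

  val httpConf = http.baseURL("http://localhost:8080") // assumed base URL

  // exec steps run in order per virtual user: create -> update -> get -> delete
  val crud = scenario("crud flow")
    .exec(http("create resource")
      .post("/resources")
      .body(StringBody("""{"name": "x"}""")).asJSON
      .check(jsonPath("$.id").saveAs("id")))          // remember the new resource's id
    .exec(http("update resource")
      .put("/resources/${id}")
      .body(StringBody("""{"name": "y"}""")).asJSON)
    .exec(http("get resource").get("/resources/${id}"))
    .exec(http("delete resource").delete("/resources/${id}"))

  setUp(crud.inject(atOnceUsers(100)))
    .protocols(httpConf)
    // express the SLA from the question as a Gatling assertion
    .assertions(global.responseTime.max.lessThan(300))
}
```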

I’ll add one more parameter that, again, I don’t think is very uncommon. Before this set of tests, I want to set up some preliminary data (users, permissions, etc.), and then use some of the results of the setup in all the subsequent tests (for example, an access token, so I don’t have to log in again for each scenario).

Any references to reading material or examples would be welcome (I haven’t found anything in the Gatling documentation, but if I’ve missed it, I’d be happy for a reference). Sorry if this has all been asked before; I may have searched for the wrong things.

Any suggestions about how I would do this? How do you run separate scenarios today one after the other with different performance targets (concurrency, throughput, etc)?

Hi Jonathan,

I’d probably do some analysis on the access logs to find out the number of users/requests you’d want. Someone told me that the “industry standard” for an endpoint was <= 200 ms, but I haven’t verified that. The user workflow is probably to hit a number of endpoints, so that’s what I’d probably simulate, and as long as you don’t read URLs from a file/collection, you’ll get response times for individual endpoints in the results.

I think Gatling shines with complex tests, as you can write your supporting code in Scala, and as long as you can slot it into the expression language, you can do almost anything. Gatling has before and after hooks for setup and teardown, and checks (e.g. CSS selectors) to extract tokens. I think what makes Gatling stand out here is that it contains the attributes of functional tools, which you wouldn’t normally see in load-test tools.
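To make the hooks concrete, here is a sketch of the setup/teardown-plus-token pattern (MyAuthClient is a hypothetical helper for illustration, not a Gatling API; paths and credentials are made up), again in Gatling 2.x syntax:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class AuthenticatedSimulation extends Simulation {

  var token: String = _

  // `before` runs once, before any virtual user starts: a good place to
  // create preliminary data (users, permissions) and log in a single time.
  before {
    token = MyAuthClient.login("admin", "secret") // assumed helper, not Gatling DSL
  }

  // `after` runs once when the simulation ends: tear the data down again.
  after {
    MyAuthClient.cleanup()
  }

  val scn = scenario("authenticated requests")
    // copy the shared token into each virtual user's session...
    .exec(session => session.set("token", token))
    // ...so the usual ${token} expression-language syntax can pick it up
    .exec(http("list things")
      .get("/api/things")
      .header("Authorization", "Bearer ${token}"))

  setUp(scn.inject(atOnceUsers(10)))
    .protocols(http.baseURL("http://localhost:8080"))
}
```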


I was actually looking for guidance on how to set up multiple scenarios. I already have SLAs for all of my endpoints and all of my scenarios in place. However, they are currently not well separated and cannot be run individually. They currently run sequentially using rendezvous points, which obscures the result statistics and makes it difficult to run scenarios individually.

You can run different scenarios concurrently like this:
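The example seems to have been lost from this post; a sketch of what it presumably looked like (createScn, searchScn, and httpConf are assumed to be defined elsewhere, and the injection profiles are illustrative):

```scala
// Both populations are started together, so the scenarios run concurrently.
setUp(
  createScn.inject(rampUsers(100) over (60 seconds)),
  searchScn.inject(constantUsersPerSec(20) during (60 seconds))
).protocols(httpConf)
```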

If you want to run scenarios sequentially then I’d probably put them in separate tests or add wait states in your scenario injections (nothingFor(10 minutes)).
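The wait-state variant can be sketched like this (createScn, updateScn, and httpConf assumed; the 10-minute window must be long enough for the first scenario to finish):

```scala
// Staggered start: the second population idles while the first runs,
// which gives a roughly sequential execution within a single simulation.
setUp(
  createScn.inject(atOnceUsers(100)),
  updateScn.inject(nothingFor(10 minutes), atOnceUsers(100))
).protocols(httpConf)
```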

Note that the gatling-maven-plugin has a runMultipleSimulations option for running multiple simulations sequentially.
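For reference, the option is enabled in the plugin’s configuration block in pom.xml, something like (version omitted; check your own setup):

```xml
<!-- sketch: run every Simulation found on the classpath, one after another -->
<plugin>
  <groupId>io.gatling</groupId>
  <artifactId>gatling-maven-plugin</artifactId>
  <configuration>
    <runMultipleSimulations>true</runMultipleSimulations>
  </configuration>
</plugin>
```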

I tried running testOnly in SBT with space separated simulations (i.e. testOnly *Sim1 *Sim2), but only one simulation was run. Is this something that should be looked at, Pierre?

testOnly only runs one simulation at a time.
If you want to run several ‘specific’ simulations, you’ll need several testOnly commands.
Otherwise, if you want to run all your simulations at once, you can simply use ‘test’.



It was a user error. ‘testOnly *Sim1 *Sim2’ will run the simulations sequentially!

This seems to be what I was looking for. I’m new to Scala and I wasn’t familiar with SBT. Looks like this will help bring some order to things.

Hi Jonathan,

In my opinion SBT is the way to go with Scala development, though some would disagree.

I’d base my load-test project on the gatling-sbt-plugin demo here: (you will need to bump your Gatling version numbers to 2.1.5 in build.sbt; is it possible to increment these, Pierre?)

$ sbt
> testOnly *Sim *Sim1 *Sim2

Your tests will run sequentially.

I think it would be a good idea to link the gatling-sbt-plugin-demo on the quickstart page!


Hi Pierre,

If you like, I’ll update the gatling-sbt-plugin-demo project and submit a pull request?


I updated it yesterday :)

Hi Aidy, it took me a few days to get Gatling working with SBT. In the end it was actually not difficult, but with the limited knowledge I had, I found the documentation to be a bit obscure. The demo you linked to finally helped me get it working, so thanks.

Now that I have the ability to run tests sequentially, though, I’m struggling with passing data between them. I want to reuse authentication tokens and users between simulations. Any ideas about how to do this?
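One possible approach (an assumption on my part, not something confirmed in this thread): since the simulations run sequentially in the same build, the first simulation could persist the shared values to a file in its after hook, and later simulations could read them back in their before hooks. A minimal plain-Scala sketch of such a helper:

```scala
import java.io.{File, PrintWriter}
import scala.io.Source

// Hypothetical helper for handing a value (e.g. an access token) from one
// simulation to the next via a file in the system temp directory.
object SharedData {
  private val file =
    new File(System.getProperty("java.io.tmpdir"), "gatling-shared-token.txt")

  // Call from the first simulation's `after` hook.
  def save(token: String): Unit = {
    val out = new PrintWriter(file)
    try out.print(token) finally out.close()
  }

  // Call from a later simulation's `before` hook.
  def load(): String = {
    val src = Source.fromFile(file)
    try src.mkString finally src.close()
  }
}
```

A later simulation would then do something like `token = SharedData.load()` in its before block instead of logging in again.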