Use a feeder with realistic data to load-test a site


I have a CSV file containing all product codes that were queried within one hour (extracted from access.log).


import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BasicSimulation extends Simulation {

  val httpConf = http
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0")

  val products = csv("productQueries.txt").records

  val productCall = feed(products)

  val scn = scenario("de.douglas.productservice.stresstest.BasicSimulation")





I’d like to configure how many requests are executed in parallel.
When I use atOnceUsers(100), Gatling only executes 100 requests and is done after that.

I used constantUsersPerSec(100) before, but with that I had to provide a duration, which I don’t know beforehand.
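For reference, the injection setup I was using looked roughly like this (Gatling 2.x DSL; the 60-second duration is exactly the arbitrary value I’d like to avoid having to specify):

```scala
import scala.concurrent.duration._

// Injects 100 new users every second for a fixed duration --
// but the duration has to be guessed up front.
setUp(
  scn.inject(constantUsersPerSec(100) during (60.seconds))
).protocols(httpConf)
```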

What I want is behaviour like this:

  • use every line of the text file to create a request
  • execute 100 requests in parallel (I don’t care about users since my calls are stateless)
  • do that as fast as possible

Any help would be highly appreciated!


Have you tried this?

It looks like what you really want is to include a loop in your simulation: 100 users, each doing as many requests as they can. That means you want an endless loop that exits when the feeder is empty.
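As a sketch, it would look something like this (Gatling 2.x DSL; the request path and the "productCode" column name are my guesses at your feeder file, so adjust to match your CSV header):

```scala
// "queue" strategy: each record is handed out exactly once
val products = csv("productQueries.txt").queue

val scn = scenario("BasicSimulation")
  .forever {                       // loop until the feeder runs dry
    feed(products)                 // pulling from an empty feeder stops the user
      .exec(http("product request").get("/products/${productCode}"))
  }

setUp(scn.inject(atOnceUsers(100)).protocols(httpConf))
```

But see the warning below about what happens when the feeder runs out.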

WARNING: last I remember, when a user gets an empty feeder, it prematurely terminates all users, rather than letting them finish. That may or may not be a problem for you.

Thanks for your help. I tried that, but I don’t know how to specify the level of parallelism here.

Thanks for your reply.

That’s exactly what I want. I ran into exactly the problem you described, with the feeder running out of elements.
If I use atOnceUsers(100), my repeat loop runs 100 times.

I guess Gatling isn’t designed to let you specify the parallelism like this.
I’d welcome any further advice. I really like Gatling’s reports and its ease of use, but unfortunately I’ve lost too much time trying to get Gatling to do what I want, so I’m writing the test myself using Akka.


It would be possible to make it work, but it would be a bit hack-ish. For example, if you have 100 users, you could add 100 records to the end of your feeder that all signal that the user should exit the loop cleanly. Slightly harder, you could write a custom feeder that always returns an “I’m out of work for you” record once the real records are exhausted. Or, most simply, arrange it so that each of your users pulls a fixed number of records, such that together they add up to the number of records in the feeder file.
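The custom-feeder idea can be sketched in plain Scala as an iterator that appends an endless stream of sentinel records after the real ones; the loop condition in the scenario would then check the sentinel flag. The column names ("productCode", "done") are hypothetical placeholders:

```scala
// After the real records are exhausted, keep handing out a sentinel record
// ("done" -> "true") so each user can break out of its loop cleanly instead
// of crashing on an empty feeder.
def withSentinel(records: Seq[Map[String, String]]): Iterator[Map[String, String]] =
  records.iterator.map(_ + ("done" -> "false")) ++
    Iterator.continually(Map("done" -> "true"))

// Example: two real records, then sentinels forever
val feeder = withSentinel(Seq(
  Map("productCode" -> "12345"),
  Map("productCode" -> "67890")
))
```

In the scenario you would replace `forever` with a loop like `asLongAs(session => session("done").as[String] != "true")`, so users finish their in-flight work and exit on the sentinel.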

I would also bug the Gatling folks and ask them if there is any way to refactor things so that an empty feeder stops user injection and terminates users that try to read from that feeder, but allows in-flight users to continue.