I am using the injection below, which fires 900 requests in 5 minutes (3 users per second for 5 minutes). The values are fed from a CSV file containing approximately 2000 records.
constantUsersPerSec(3.00) during (5 minutes)
Even though there are more than 2000 records in the input file, Gatling in some cases reuses the same data when firing requests. Is it possible to fire only unique data rather than repeated values? When the data repeats, caching can come into play and the response might be faster than it should be. How can I avoid this by sending only unique data?
I’m assuming all entries in the CSV file are unique to begin with?
If so, then Gatling shouldn’t recycle values from the CSV file unless you’re using a feeder configuration that looks something like this:
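A minimal sketch of such a feeder in the Scala DSL (the file name, column name, and endpoint here are placeholders, not taken from your simulation):

```scala
// Picks a random record from the CSV file for each virtual user;
// records are never removed, so repeats are possible.
val feeder = csv("data.csv").random

val scn = scenario("My scenario")
  .feed(feeder)
  .exec(http("request").get("/endpoint?value=${value}"))
```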
This will randomly select a record from the CSV file each time a user is injected. “Used” values won’t be removed from the feeder and can be reused any number of times. With “just” 2000 entries, the odds of the same value being selected more than once over 900 picks are very high — this is the classic birthday problem.
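You can quantify that claim with a quick back-of-the-envelope calculation (plain Scala, no Gatling needed): the probability that 900 random picks from 2000 entries contain no repeat is the product of the per-pick “avoid everything already picked” probabilities.

```scala
// Birthday-problem check: chance of at least one repeated pick when
// drawing 900 values uniformly at random from 2000 unique entries.
val entries = 2000.0
val picks = 900
// P(no repeat) = Π_{i=0}^{picks-1} (entries - i) / entries
val pNoRepeat = (0 until picks).map(i => (entries - i) / entries).product
println(1 - pNoRepeat) // prints 1.0: a repeat is all but guaranteed
```

So with `.random`, repeated inputs over a 900-request run are not an edge case; they are a near certainty.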
If you’re using that, try changing it to the following:
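Only the feeder strategy needs to change; the rest of the scenario stays the same (again, `data.csv` is a placeholder name):

```scala
// Shuffles all records once at startup, then feeds them sequentially;
// each record is used at most once per run.
val feeder = csv("data.csv").shuffle
```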
This will shuffle all the entries in the CSV file into a randomized order once at startup, ensuring each injected user gets a unique input within a single run. Gatling then iterates through the list sequentially as users are injected instead of selecting random values.
Keep in mind that while .random can run indefinitely by recycling values, .shuffle on its own cannot. This means you can only inject as many users as there are entries in the CSV file: once you’ve iterated through all 2000 of them in a single simulation, the feeder is exhausted and the run stops. If you need to inject more users than that, you’ll either need to increase the number of unique entries in your CSV file, or write a custom feeder that generates unique data on the fly.
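As a sketch of the last option: Gatling's `feed` accepts any `Iterator[Map[String, T]]`, so a custom feeder can be an ordinary Scala iterator. Here UUIDs stand in for whatever unique values your requests actually need; the attribute name `value` is a placeholder.

```scala
// A custom feeder that generates a fresh unique value per user.
// The iterator never runs out, so it can back any number of users.
val uniqueFeeder = Iterator.continually(
  Map("value" -> java.util.UUID.randomUUID.toString)
)

val scn = scenario("My scenario")
  .feed(uniqueFeeder)
  .exec(http("request").get("/endpoint?value=${value}"))
```

Note that generated values like UUIDs only make sense if your system accepts arbitrary input; if the requests must use real records, growing the CSV file is the safer route.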