Facing an issue with feeders while using multiple injector pods

Hello Team,
I have a use case that requires a unique set of data for each run. I am using multiple pods (~4) as injectors in Gatling Enterprise. During execution, I see that the transactions for the use case requiring unique data fail ~60% of the time.
In the script I am using .shard on the feeder, but I still see the same issue; I tried .queue as well, with the same result.
val FNumber = csv("xyz.csv").shard or val FNumber = csv("xyz.csv").queue

Could you please suggest?
I suspect that, as I am using multiple pods, each pod might be picking the same values from the feeder, resulting in failures, since the application needs unique data.
However, if I use just 1 pod and run the test, it works perfectly fine.

Hello,

I suspect that, as I am using multiple pods, each pod might be picking the same values from the feeder, resulting in failures, since the application needs unique data.

There’s no way using shard can lead to record duplicates across a cluster of load generators. This feature has been stable for years and has an extensive test suite.

val FNumber = csv("xyz.csv").shard or val FNumber = csv("xyz.csv").queue

csv("xyz.csv").queue is wrong for your use case as it’s missing shard.

You must use shard to guarantee that no two load generators use the same records.
You must use the queue strategy to guarantee that no two virtual users pick the same record within a given shard. You can then omit it, as it’s the default strategy.

csv("xyz.csv").shard and csv("xyz.csv").shard.queue do the same thing, as queue is the default strategy.
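For reference, a minimal sketch of the recommended feeder definition as part of a Gatling Scala simulation (the file name and val name are taken from your post; everything else is illustrative):

```scala
import io.gatling.core.Predef._

// shard splits the CSV records across the load generators of the cluster;
// queue (the default strategy) then guarantees that, within each shard,
// no two virtual users pick the same record.
val FNumber = csv("xyz.csv").shard // same as csv("xyz.csv").shard.queue
```

Note that with the queue strategy the feeder will fail the run if a shard runs out of records, so make sure the file has at least as many rows as the total number of virtual users.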

Could you please suggest?

Possible explanations:

  • you’re missing shard, as in the val FNumber = csv("xyz.csv").queue you said you were using
  • there’s a bug in your application, probably related to concurrency
  • you have duplicates in your feeder file
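On that last point, here is a quick way to check the feeder file for duplicate values, sketched in plain Scala (the file name and the assumption of a single-column file with a header row come from your post and are hypothetical):

```scala
import scala.io.Source

// Returns the set of values that appear more than once in the input
def findDuplicates(values: Seq[String]): Set[String] =
  values.groupBy(identity).collect { case (v, occurrences) if occurrences.size > 1 => v }.toSet

// Hypothetical usage against the feeder file from the question:
// val records = Source.fromFile("xyz.csv").getLines().drop(1).toSeq // drop the header row
// println(findDuplicates(records)) // empty output means no duplicates
```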