Process several CSV lines at the same time with a feed


Is it possible to process several CSV lines at the same time with a feeder?

e.g. I need to create an XML file from a template and send it via HTTP POST. For each request, my XML must contain data from 5 lines of my CSV data file.

Currently I do:

// loading template from ssp file
.fileBody("myRequest", Map(
  "name" -> "${name}",
  "address" -> "${address}"))

When I do this, I can process only 1 line of the CSV, and my XML looks like:

$name $address

What should I do if I want something like this, reading CSV entries 5 by 5?

$name1 $address1 $name2 $address2 .. .. .. .. .. ..



Hi Loïc,

Your problem is a bit more complex than just polling 5 by 5, as you want the attribute names not to clash.

Feeder is an alias for Iterator[Map[String, _]], so you can do something like this:

val feeder = tsv("fileName").queue // explicitly call queue to turn the Array into an Iterator

val scn = scenario("foo")

However, it won’t guarantee that the 5 records being polled are sequential, as multiple concurrent users might poll at the same time.
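The idea above can be sketched in plain Scala, without any Gatling API: treat the feeder as an `Iterator[Map[String, _]]`, group it 5 by 5, and suffix the keys so they don't clash. The record contents, helper names, and chunk size here are illustrative assumptions, not Gatling code:

```scala
// Sketch only: a grouped "feeder" built from a plain Scala iterator.
// Keys get a 0-based suffix, e.g. "name" becomes "name0".."name4".
object GroupedFeederSketch {
  type Record = Map[String, Any]

  def groupedFeeder(records: Iterator[Record], size: Int): Iterator[Record] =
    records
      .grouped(size)
      .withPartial(false) // drop a trailing chunk that can't be filled
      .map { chunk =>
        chunk.zipWithIndex
          .map { case (record, i) => record.map { case (k, v) => (k + i) -> v } }
          .reduce(_ ++ _) // merge the suffixed records into one
      }

  def main(args: Array[String]): Unit = {
    val data  = (1 to 10).iterator.map(n => Map("name" -> s"user$n"))
    val first = groupedFeeder(data, 5).next()
    println(first("name0")) // prints "user1"
    println(first("name4")) // prints "user5"
  }
}
```

Each call to `next()` then yields one merged record covering 5 CSV lines, which is exactly the shape a template with `${name0}`..`${name4}` placeholders would need.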

I’ll try to think of a better solution.



Thanks, I will try this!
Non-sequential reads are not a problem for me.


Sorry, I answered too quickly; I'm not sure I understand what I have to put in suffixMapKey to make this work.
In addition, I have to use the circular strategy instead of queue. Is that possible?



Here’s a better solution (just copy this directly into your simulation):

type Record = Map[String, Any] // define an alias so there's less boilerplate

def chunkAndMerge(records: List[Record], chunkSize: Int): List[Record] = {

  // suffix every key in a record, e.g. "name" becomes "name0"
  def suffixMapKey(record: Record, suffix: Int): Record =
    record.map { case (key, value) => (key + suffix) -> value }

  // recursively split a list into chunks of a given size (exceeding records are discarded)
  def chunk(records: List[Record], chunks: List[List[Record]]): List[List[Record]] =
    if (records.size < chunkSize) chunks
    else chunk(records.drop(chunkSize), records.take(chunkSize) :: chunks)

  // merge a list of records into a single one, suffixing each key with the record's index
  def merge(records: List[Record]): Record = records
    .zipWithIndex
    .map { case (record, index) => suffixMapKey(record, index) }
    .foldLeft(Map.empty[String, Any]) { (merged, record) => merged ++ record }

  chunk(records, Nil).map(merge)
}

val feeder = chunkAndMerge(csv("foo").toList, 5).toArray.circular // FIXME toArray shouldn't be necessary, that's Gatling's fault, will fix

You then just have to use it as a regular feeder.
Is this clear for you?
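To make the resulting key layout concrete, here is a tiny standalone demo of the same merging logic, restated with `grouped` instead of the recursive `chunk` (so chunk order is preserved) and run on made-up in-memory records rather than a CSV file:

```scala
// Standalone sketch of chunkAndMerge's merging behaviour on sample data.
// Records and values are invented for illustration; suffixes are 0-based.
object ChunkAndMergeDemo {
  type Record = Map[String, Any]

  def chunkAndMerge(records: List[Record], chunkSize: Int): List[Record] =
    records
      .grouped(chunkSize).toList
      .filter(_.size == chunkSize) // drop a trailing incomplete chunk
      .map(_.zipWithIndex
        .map { case (r, i) => r.map { case (k, v) => (k + i) -> v } }
        .foldLeft(Map.empty[String, Any])(_ ++ _))

  def main(args: Array[String]): Unit = {
    val rows = List(
      Map("name" -> "a", "address" -> "1"),
      Map("name" -> "b", "address" -> "2"),
      Map("name" -> "c", "address" -> "3"))

    val merged = chunkAndMerge(rows, 2)
    println(merged.head("name0"))    // prints "a"
    println(merged.head("address1")) // prints "2"
    println(merged.size)             // prints 1: the third row can't fill a chunk of 2
  }
}
```

So with a chunk size of 5, a template would reference `${name0}`..`${name4}` and `${address0}`..`${address4}`.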

This is working great!!

I just needed to define this :

type Record = Map[String, String]

and to change

.foldLeft(Map.empty[String, Any]) { (merged, record) =>

to

.foldLeft(Map.empty[String, String]) { (merged, record) =>

Thanks a lot for your help!

Ah, sorry, maybe that's something that has changed in 2.0.0_SNAPSHOT/master

You're welcome

I think this method could be useful to a lot of other people. What about adding the possibility to parse CSV in multi-line mode in the Gatling code?

Yep, just have to think of a clean API. :wink:

Great :slight_smile:


I need to read 1000 CSV records, run as many users as I wish, and have that data parsed and put into the body of an HTTP request.
Right now, with my solution, I am able to read only 1 random record.

Any help is appreciated.

Hello, I tried to read user CSV files to get user IDs 5 by 5, for an HTTP GET: get("/user/1234;1235;1236;1237;1238/"). I used chunkAndMerge(csv("foo").toList, 5).toArray.circular, but it fails with an error: "value toList is not a member of io.gatling.core.feeder.RecordSeqFeederBuilder[String]".
email,password,userid

and more

On Friday, January 11, 2013 at 7:08:30 PM UTC+8, Stéphane Landelle wrote:

This chunkAndMerge doesn't come from Gatling; it's custom code on your side, so I can't help with it.