I'm a newbie with Gatling, and I have to test my 'almost-big-data' pet project. I considered JMeter an appropriate choice in the past, but I found out that JMeter can't import a CSV file, split it, and pass each chunk as the appropriate HTTP parameters. Then I found Gatling. It has programmatic configuration and an API, which is awesome. But my question is the same as for JMeter: can I write some code (Scala) to import my own dataset from a local .csv file or an S3 bucket and send, for example, one thousand HTTP requests, each with its own distinct parameters from my dataset?
Hi,
But I found out that JMeter can't import a CSV file, split it, and pass each chunk as the appropriate HTTP parameters.
JMeter can do it: https://guide.blazemeter.com/hc/en-us/articles/206733689-Using-CSV-DATA-SET-CONFIG
And Gatling can do it as well: https://gatling.io/docs/2.3/session/feeder/?highlight=feeder
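For the simple case, Gatling's built-in csv feeder already covers "one request per record". A minimal sketch of what the question asks for (the file name users.csv, the columns userId/token, and the base URL are made up for illustration):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class CsvFeederSimulation extends Simulation {

  // users.csv is assumed to have a header line "userId,token".
  // The queue strategy hands each virtual user its own record and fails
  // the run if the records are exhausted, so all 1000 requests get
  // distinct parameters.
  val userFeeder = csv("users.csv").queue

  val scn = scenario("CSV-driven requests")
    .feed(userFeeder)
    .exec(
      http("request with csv params")
        .get("/api/items")
        .queryParam("userId", "${userId}")
        .queryParam("token", "${token}")
    )

  setUp(scn.inject(atOnceUsers(1000)))
    .protocols(http.baseURL("http://localhost:8080"))
}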
Just note that for Gatling, CSV files are fully loaded into memory, so you may run into an OutOfMemoryError. You can overcome this with a custom CSV feeder that streams the file, e.g. like this:
import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._

// Streams the file line by line instead of loading it all into memory.
// A Gatling feeder is just an Iterator[Map[String, String]], so instances
// of this class can be passed straight to feed().
class CsvStreamFeeder(iterator: Iterator[String]) extends Iterator[Map[String, String]] {

  // Read the header line once, up front, to get the column names
  private val columnNames: List[String] =
    if (iterator.hasNext) iterator.next().split(",").toList else Nil

  override def hasNext: Boolean = this.synchronized { iterator.hasNext }

  override def next(): Map[String, String] = this.synchronized {
    val columnValues = iterator.next().split(",")
    (columnNames zip columnValues).toMap
  }
}

object CsvStreamFeeder {
  def apply(csvFilePath: String): CsvStreamFeeder =
    new CsvStreamFeeder(Files.lines(Paths.get(csvFilePath)).iterator().asScala)
}
@Tomas Lazy feeders are coming in Gatling 3: https://github.com/gatling/gatling/pull/3268