Properly stopping a simulation with concurrent users consuming a feeder with the queue strategy

Hello,

I am running a simulation with 30 concurrent users performing the same action and consuming the same JDBC feeder as input.
The feeder is created using the queue strategy.
I want to consume all the records of my feeder during my simulation and stop properly when the feeder is empty.

Here is my simulation:

```scala
val systemsIdentifier = jdbcFeeder(databaseUrl, databaseUser, databasePassword, sql_systemsIdentifier)

val comScn = scenario("My scenario")
  .asLongAs(true) {
    feed(systemsIdentifier)
      .exec(performActionsChain)
  }

setUp(comScn.inject(rampUsers(30) over (60 seconds))).protocols(httpConf)
```

The problem here is that the simulation stops with the “Feeder is now empty” error as soon as one of my users tries to perform a feed on an empty feeder.
I would like the simulation to end properly, so that the other users still performing actions can finish their scenario.

Is there another way to stop the simulation per user when the feeder is empty?

Thanks in advance for your help and support.

Simon Gaestel

No. You’re supposed to provide enough records.

Ok, thanks anyway.

I worked around the problem by dividing the total number of records by the number of concurrent clients and replacing the asLongAs loop with a repeat one.

```scala
val systemsIdentifier = jdbcFeeder(databaseUrl, databaseUser, databasePassword, sql_systemsIdentifier)

val comScn = scenario("My scenario")
  .repeat(systemsIdentifier.records.size / nbUsers) {
    feed(systemsIdentifier)
      .exec(performActionsChain)
  }

setUp(comScn.inject(rampUsers(nbUsers) over (60 seconds))).protocols(httpConf)
```

This lets me end the simulation properly, even though I may miss the last records (at most nbUsers - 1 records)…
Let me know if you plan to implement a way to properly consume all records from a feeder across multiple clients.

Thanks for your support.

```scala
val count = new AtomicInteger(systemsIdentifier.records.size)

.asLongAs(_ => count.getAndIncrement < systemsIdentifier.records.size)
```

Thanks, that’s exactly what I was looking for.

We should definitely consider including this example in the documentation.

On Wednesday, January 7, 2015 at 14:43:51 UTC+1, Stéphane Landelle wrote:

Contribs welcome :slight_smile:

Is it possible to mix the ‘during’ approach with the ‘asLongAs’ one?

What I want to achieve is to run the test for a fixed duration but skip the scenario if it runs out of data in the feeder. Right now, with ‘during’, the feeder errors out, preventing report generation.

Thanks,
Jebu.

The best you could do is keep track of how many records have been pulled from the feeder with a global AtomicInteger, and protect the feed with some condition, such as doIf. The simulation will still keep injecting new users according to the profile you configured; it’s just that, with the guard in place, those users would no longer make Gatling crash.
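In case it helps, here is a minimal sketch of that idea, assuming a Gatling 2.x DSL and reusing the systemsIdentifier feeder and performActionsChain from earlier in this thread; the 10-minute duration is just a placeholder, and depending on your version you may need to adapt how the condition lambda is written:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.duration._

// total number of records the feeder can serve
val totalRecords = systemsIdentifier.records.size

// global counter shared by all virtual users
val consumed = new AtomicInteger(0)

val comScn = scenario("My scenario")
  .during(10 minutes) {
    // each successful check atomically claims one record, so concurrent
    // users cannot pull more records than the feeder actually contains
    doIf(_ => consumed.getAndIncrement() < totalRecords) {
      feed(systemsIdentifier)
        .exec(performActionsChain)
    }
  }
```

Once the feeder is exhausted, users simply skip the guarded block for the rest of the duration, so the run finishes cleanly and the report is still generated; if the empty loop spins too fast, you could add a pause in the other branch of a doIfOrElse.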

If I am using a CSV file… is this supposed to be something like…

```scala
import java.util.concurrent.atomic.AtomicInteger

val counter = new AtomicInteger(0)

// number of data rows in the CSV file, excluding the header line
val dataSize: Int = io.Source.fromFile(csvFile).getLines.size - 1

.asLongAs(session => counter.getAndIncrement < dataSize)
```

In your example, if count is initialized to the size of the dataset, then won’t getAndIncrement always be less than the size of the dataset?

Sorry, I am new to both Scala and Gatling, and I’m confused.

I meant to write… count.getAndIncrement is never less than “systemsIdentifier.records.size”, because count is initialized to the size of the records.
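For anyone landing on this thread later, here is a small recap sketch of the two working counter variants under discussion, assuming the same Gatling 2.x simulation as above (systemsIdentifier and performActionsChain are the names used earlier in this thread). As pointed out, if the counter starts at records.size and is compared with getAndIncrement against that same size, the condition is false on the very first check; start the counter at zero and count up, or start at the size and count down:

```scala
import java.util.concurrent.atomic.AtomicInteger

val totalRecords = systemsIdentifier.records.size

// Variant 1: count up from zero, stop once every record has been claimed
val consumed = new AtomicInteger(0)
val comScnUp = scenario("My scenario (count up)")
  .asLongAs(_ => consumed.getAndIncrement() < totalRecords) {
    feed(systemsIdentifier)
      .exec(performActionsChain)
  }

// Variant 2: count down from the total, stop once it reaches zero
val remaining = new AtomicInteger(totalRecords)
val comScnDown = scenario("My scenario (count down)")
  .asLongAs(_ => remaining.getAndDecrement() > 0) {
    feed(systemsIdentifier)
      .exec(performActionsChain)
  }
```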