[2.0.0-SNAPSHOT] Looping over a CSV and feeding each row to roundRobinSwitch -> exec

Hello!
I’m new to Gatling, Scala, and English…

My problem is this: I want to loop over a CSV file and have each row reach one of the execs in a roundRobinSwitch.

The CSV file has 10000 rows:

query
1
2
…
10000

Result: the run ends at query=200.

I can’t find a solution…

I want the total number of requests to equal the CSV line count (10000), with query going from 1 to 10000.

Here is my code

val sb1 = http("sb1").get("http://localhost:8000/1.txt")
val sb2 = http("sb2").get("http://localhost:8000/2.txt")
val sb3 = http("sb3").get("http://localhost:8000/3.txt")

val query_sets = csv("sample_1M_query")
val queue_size = query_sets.records.size
val count = new java.util.concurrent.atomic.AtomicInteger(0)

val scn = scenario("Breeze Performance Test")
  .asLongAs(_ => count.getAndIncrement() < queue_size) {
    feed(query_sets)
      .roundRobinSwitch(
        exec(sb1
          .queryParam("q", "${query}")
          .queryParam("sb1", count)
          .check(status.is(200))),
        exec(sb2
          .queryParam("q", "${query}")
          .queryParam("sb2", count)
          .check(status.is(200))),
        exec(sb3
          .queryParam("q", "${query}")
          .queryParam("sb3", count)
          .check(status.is(200)))
      )
  }

setUp(scn.inject(atOnceUsers(5))
  .throttle(reachRps(40) in (0 seconds), holdFor(100 seconds)))

127.0.0.1 - - [13/May/2014 10:59:03] "GET /3.txt?sb3=980&q=198 HTTP/1.1" 200 -
127.0.0.1 - - [13/May/2014 10:59:03] "GET /1.txt?q=199&sb1=985 HTTP/1.1" 200 -
127.0.0.1 - - [13/May/2014 10:59:03] "GET /2.txt?q=200&sb2=990 HTTP/1.1" 200 -
127.0.0.1 - - [13/May/2014 10:59:03] "GET /3.txt?sb3=995&q=201 HTTP/1.1" 200 -
127.0.0.1 - - [13/May/2014 10:59:03] "GET /1.txt?q=202&sb1=1000 HTTP/1.1" 200 -

What do you want exactly?
Do you want to make sure that all entries are globally reached once, or do you want each virtual user to reach all of them?

Thank you for the reply!!

I want each entry reached once globally. Each user pulls from the shared feeder:

user1 = 1
user2 = 2
user3 = 3
…
user3 = 999
user1 = 1000

so the total request count would be the feeder’s queue size.

I just tested my code: Gatling ran 1000 requests :slight_smile: but after that it threw a "Feeder is empty" exception.

My requirements are:

  1. fixed requests per second → throttle is the best fit :slight_smile:
  2. a fixed test feed (CSV) size
  3. round-robin across hosts

import io.gatling.core.Predef._
import io.gatling.core.controller.throttle._
import io.gatling.http.Predef._
import scala.concurrent.duration._
import java.lang._

class DefaultLoadTest extends Simulation {

  val sb1 = http("sb1").get("http://localhost:8000/1.txt")
  val sb2 = http("sb2").get("http://localhost:8000/2.txt")
  val sb3 = http("sb3").get("http://localhost:8000/3.txt")

  val query_sets = csv("sample_1M_query")
  val queue_size = query_sets.records.size

  val scn = scenario("Breeze Performance Test")
    .asLongAs(_ => "${query}" != "", "loop", true) {
      feed(query_sets)
        .roundRobinSwitch(
          exec(sb1
            .queryParam("q", "${query}")
            .check(status.is(200))),
          exec(sb2
            .queryParam("q", "${query}")
            .check(status.is(200))),
          exec(sb3
            .queryParam("q", "${query}")
            .check(status.is(200)))
        )
    }

  setUp(scn.inject(atOnceUsers(2))
    .throttle(reachRps(40) in (0 seconds)))
}


_ => "${query}" != "", "loop" => doesn’t work, see https://github.com/excilys/gatling/wiki/Session#el

What’s the problem with your first solution? It looks good to me.

Yes, I will read the Session EL docs carefully.

The first solution works with no exceptions, but it only does 200 requests; 1000 requests are what I’m hoping for.

I want to understand and fix why count jumps by 5 instead of 1.

The first solution works with no exceptions but only does 200 requests

I can't see how this happens.
queue_size is 1000? Did you print it?

Yes, I printed it like this:

val query_sets = csv("sample_1M_query")
val queue_size = query_sets.records.size
println("queue_size")
println(queue_size)

GATLING_HOME is set to /usr/local/gatling
Simulation DefaultLoadTest started…
queue_size
1000

Ah, OK, I get it!

By default, the asLongAs condition is also evaluated at EVERY step inside the loop, because the default behavior is to exit as soon as the condition becomes false: https://github.com/excilys/gatling/wiki/Gatling%202#core-misc

So your AtomicInteger gets incremented more often than you thought.

You have to force exitASAP to false:

.asLongAs(condition = _ => count.getAndIncrement() < queue_size, exitASAP = false) {
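This effect can be reproduced in plain Scala (this is a sketch, not Gatling code; the figure of 5 evaluations per iteration is an assumption matching the feed plus switch branches in the scenario above). A side-effecting condition that is re-evaluated before every step burns through the counter several times per iteration:

```scala
import java.util.concurrent.atomic.AtomicInteger

// Simulate a loop whose exit condition side-effects (getAndIncrement)
// and is evaluated `evalsPerIteration` times per pass, as exitASAP = true does.
def iterationsWith(evalsPerIteration: Int, queueSize: Int): Int = {
  val count = new AtomicInteger(0)
  var iterations = 0
  while ((1 to evalsPerIteration).forall(_ => count.getAndIncrement() < queueSize))
    iterations += 1
  iterations
}

// exitASAP = false: condition evaluated once per iteration -> 1000 iterations
println(iterationsWith(1, 1000))
// exitASAP = true with ~5 steps per iteration -> only 200 iterations
println(iterationsWith(5, 1000))
```

This matches the observed behavior: a queue of 1000 yields only 200 requests when the condition is evaluated about five times per pass.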

Thank you for the guidance!

After the correction it behaves as below: sb1, sb2, and sb3 together run ~3000 requests, and then the feeder-empty exception occurs.

17:36:52.422 [ERROR] i.g.c.a.SingletonFeed - Feeder is now empty, stopping engine
17:36:52.426 [ERROR] i.g.c.a.SingletonFeed - Feeder is now empty, stopping engine
17:36:52.426 [ERROR] i.g.c.a.SingletonFeed - Feeder is now empty, stopping engine
17:36:52.426 [ERROR] i.g.c.a.SingletonFeed - Feeder is now empty, stopping engine
17:36:52.426 [ERROR] i.g.c.a.SingletonFeed - Feeder is now empty, stopping engine

Mmm, the feeder error is unexpected, will check this out.

I used a magic number to escape the feeder-empty exception:

val queue_size = query_sets.records.size * 5 - 10

Not a nice form… but it works :slight_smile:

That was with .asLongAs(_ => count.getAndIncrement() < queue_size).

I found that .asLongAs(…, exitASAP = false) runs as many extra iterations as there are users, so the code below is the final version and it works well.

Thank you so much~

val httpConf = http.baseURLs( // This is simpler than roundRobinSwitch :slight_smile:
  "http://localhost:8000/v1/search/1.txt",
  "http://localhost:8000/v1/search/2.txt",
  "http://localhost:8000/v1/search/3.txt"
)

val query_sets = csv("sample_1M_query")
val queue_size = query_sets.records.size
val users = 90
println("queue_size")
println(queue_size)

val count = new java.util.concurrent.atomic.AtomicInteger(0)
val scn = scenario("Breeze Performance Test")
  .asLongAs(_ => count.getAndIncrement() < queue_size - users, exitASAP = false) {
    feed(query_sets)
      .exec(http("Breeze Performance Test")
        .get("")
        .queryParam("profile", "true")
        .queryParam("limit", "0")
        .queryParam("q", "${query}")
        .queryParam("count", count.get())
        .check(status.is(200)))
  }

setUp(scn.inject(atOnceUsers(users))
  .throttle(reachRps(600) in (0 seconds), holdFor(48 hours)))
  .protocols(httpConf)
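The baseURLs round-robin above can be sketched conceptually in plain Scala (this is an illustration of the idea, not the actual Gatling implementation):

```scala
import java.util.concurrent.atomic.AtomicLong

// Conceptual sketch: cycling through base URLs with an atomic counter,
// which is what round-robin selection amounts to.
val baseUrls = Vector(
  "http://localhost:8000/v1/search/1.txt",
  "http://localhost:8000/v1/search/2.txt",
  "http://localhost:8000/v1/search/3.txt"
)
val cursor = new AtomicLong(0)
def nextBaseUrl(): String =
  baseUrls((cursor.getAndIncrement() % baseUrls.size).toInt)

// Six picks cycle through the three URLs twice, in order
val picks = (1 to 6).map(_ => nextBaseUrl())
println(picks.mkString(", "))
```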

.queryParam("count", count.get())

This is wrong: here count.get() will only be called once, when the Simulation is instantiated. Use:

.queryParam("count", _ => count.get())
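The difference is plain Scala evaluation semantics: passing count.get() evaluates the read once, at construction time, while wrapping it in a function defers the read to each invocation (the names below are illustrative):

```scala
import java.util.concurrent.atomic.AtomicInteger

val count = new AtomicInteger(0)

// Eager: the read happens once, right here, and the Int 0 is captured
val capturedOnce: Int = count.get()

// Deferred: the function re-reads the counter every time it is called
val readEachTime: () => Int = () => count.get()

count.incrementAndGet()
count.incrementAndGet()

println(capturedOnce)    // 0: the value frozen at construction
println(readEachTime())  // 2: the current value
```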

So there was an issue with the loop: the condition wasn’t evaluated when entering the loop: https://github.com/excilys/gatling/issues/1860

If you upgrade to the latest snapshot, be sure to remove the " - users" part.

Cheers,

Stéphane

Yeah, I will download the latest build.
Thank you.

Hi Stéphane,

First of all, thanks for the good work. I’ve started using Gatling and it works great for most scenarios.
But now I’m facing the same issue as Jong.
Where can I find the latest build? Or could you please share the steps for building a given branch of the project?
I’ve searched the Maven Central repository and the latest version there is 2.0.0-M4-NAP, dated 09/04/2014, so it doesn’t contain this fix.

My use case is
I’m trying to replay an access log (all the queries that happened on my server between 2 dates)
I have a maximum of let’s say 100 concurrent users at all times and I am trying to replay the same queries in the same order as the original log as fast as possible, let’s say I have 100 000 queries to replay. For the sake of simplicity we can use a fixed number of users = 100 at all times.

To take a very simple example, I’ve been using the same code as Jong above, trying to replay a file containing 3 queries passed to a feeder using a default queue strategy, but only using 2 users, and I found out that only 2 queries got executed.
What I was hoping for is that user1 would run query1, user2 would run query2 (row 2) and then user1 would run query3 (or user2 depending which one finishes first) since the feeder queue is not empty at this point and still contains query3 to be executed.
But it actually stops after the second query = number of users used in my simulation.
I’ve tried using this asLongAs condition with exitASAP = false but it didn’t work for me (I’m hoping the change you describe above will fix it).

Could you please explain how this asLongAs condition with exitASAP = false works and why it is needed?
Why does it stop after running 1 query per user even though the queue is not empty?

Thanks for your help
Thierry

Where can I find the latest build?

In Sonatype's repo.
See https://github.com/excilys/gatling/wiki/Continuous-Integration

I've searched on Maven central repository and the latest version there is 2.0.0-M4-NAP dated 09/04/2014 so it doesn't contain this fix.

NAP = net-a-porter. The guys there deployed a flagged version of the snapshot of the upcoming version, as we're very late in releasing. I guess their version is a bit old.


Could you please explain how this asLongAs condition with exitASAP = false works and why it is needed?

All loops (repeat, during and asLongAs) actually share the same implementation.

exitASAP was introduced so that, for example, during loops exit as soon as the duration is reached. Consider a loop that contains 20 requests whose total time is 60 secs, while the loop duration is 90 secs.

With exitASAP = false, the exit condition is only tested once per iteration, so you'll actually loop for 120 secs.
With exitASAP = true, the exit condition is tested before every element inside the loop, so you'll exit the loop after ~90 secs.

Why does it stop after running 1 query for each user even if the queue is not empty?

I don't know what you're doing exactly. Jong's problem is that he's using getAndIncrement in his loop condition, which is bad because it's side-effecting (meaning that it does something other than just returning true/false: it changes some state outside its scope). When you pass a function parameter, you never know when or how often it will be executed, so side effects are very dangerous.
A safer/cleaner solution would be to not use getAndIncrement: perform only get in the loop condition, and do the increment in an exec(function) inside the loop.
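The suggested split can be sketched in plain Scala (the Gatling version would do the increment in an exec(session => …) step inside the loop; this sketch just shows the shape of the idea):

```scala
import java.util.concurrent.atomic.AtomicInteger

val count = new AtomicInteger(0)
val queueSize = 1000

// The condition only reads shared state; no side effects,
// so it is safe to evaluate it any number of times.
def condition: Boolean = count.get() < queueSize

var iterations = 0
while (condition) {
  // ... the feed + request would go here ...
  iterations += 1
  // The increment is an explicit step inside the loop body,
  // executed exactly once per iteration.
  count.incrementAndGet()
}

println(iterations)  // 1000: one iteration per feeder record
```

Because the condition is now pure, re-evaluating it at every step (exitASAP = true) no longer skews the count.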

Get it?

Cheers,

Stéphane

Thanks Stéphane for your quick answer.
I managed to make it work with an asLongAs condition and exitASAP = false, even without the latest build, but I didn’t get your last comment:
“A safer/cleaner solution would be to not use getAndIncrement and perform only get in the loop condition and perform increment in an exec(function) inside the loop.”

Also, I’m not using any duration for my simulation, so maybe that’s what is missing. I expect it to run for as long as it takes, i.e. with no duration limit.

So I’ve put together a very simple example, attached, with 2 simple dummy files each containing 3 requests to replay. You don’t even need the request_bodies, as it will generate a “can’t connect” error as expected; we just want to see what gets executed.
I used version 2.0.0-M3.

If I comment out rows 43, 44, and 74 (the closing brace):
//asLongAs(_ => count.getAndIncrement() < queueSize, exitASAP=false)
//{
//}

Then, using only 1 user, it only executes the first row of each file (with 2 users, it only executes the first 2 rows of each file; you get the idea):

Replay POST request AAAA-0001
Replay POST request BBBB-0001

Otherwise, with asLongAs and exitASAP=false, it executes everything as expected:

Replay POST request AAAA-0001
Replay GET request AAAA-0002
Replay GET request AAAA-0003
Replay POST request BBBB-0001
Replay GET request BBBB-0002
Replay GET request BBBB-0003

Is there a better way to do this, like specifying an unlimited duration or doing the exit test in a different place?

Thanks
Thierry

http_log_replay_test.scala (3.82 KB)

testReplay1.tsv (115 Bytes)

testReplay2.tsv (115 Bytes)