[2.0.0-M3a] Strange behaviour with sending 2 requests instead of one

Hi. I’m using Gatling 2.0.0-M3a.

My scenario looks like this:

val scn = scenario("load-test")
  .exec(
    http("create")
      .put(_ => getURL)
      .body(StringBody(_ => getBody))
      .header("Accept", "text/json")
      .check(status.is(200))
  )
  .pause(2000 milliseconds)
  .exec(
    http("pay")
      .post(xml_url)
      .body(StringBody(_ => getXML))
      .check(status.is(200))
      .check(xpath("//result-code").is("0"))
  )

It works like this:

  1. create bill
  2. wait a little bit
  3. pay bill

I have a situation where the second request is sent twice instead of once. This is clearly visible in the server log. Gatling sends the first request and gets a response with result_code = 0, which means everything is fine. Then, a few milliseconds later, Gatling sends the exact same request again and gets a response with result_code = 1419, which means the bill is already paid. I suppose that in the report Gatling saves only the result of the second request and ignores the first, positive one.

You can see the attached log for details. I deleted some sensitive info from the log, but kept the ids.
https://s3-us-west-2.amazonaws.com/artemnikitin-test/double_request_log.txt

Is this normal behaviour?

From what I can see, the problem is on your side: either your getXML generates duplicates, or your system under stress does.

The first request is sent by user n°4423, while the second one is sent by user n°4425:

Session(load-test,4423,Map(),1387273991914,33,List(),List(OK),List(),List()) HSsZSWd9zj 0
Session(load-test,4425,Map(),1387273991964,13,List(),List(KO),List(),List()) HSsZSWd9zj 1419
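For illustration, here is a minimal sketch of how a shared, mutable getXML could hand the same bill to two virtual users; the object name and field are hypothetical, only the race pattern matters. Gatling runs virtual users concurrently on a small shared thread pool, so unsynchronized mutable state in a helper like this is a classic source of duplicated payloads:

object PaymentPayloads {
  // ASSUMPTION: a plain var shared by all virtual users, not thread-safe
  private var lastBillId: String = _

  def setLastBillId(id: String): Unit = lastBillId = id

  def getXML: String =
    // Two virtual users calling this at (almost) the same time can both
    // read the same lastBillId and send identical payment payloads.
    s"<payment><bill-id>$lastBillId</bill-id></payment>"
}

A safer pattern is to keep the bill id in each virtual user’s Session (for instance, save it from the "create" response with a check and saveAs) and build the payload with StringBody(session => ...), so every user pays only its own bill.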

How many requests can a Core i5 with 8GB RAM handle?
If I need a small load but for a long time, for example 50 rps for 8 hours, that’s about 1.5 million requests in total. How will Gatling behave? Will it use only 50 threads and “reload” each one when a request finishes, or not?

No no no no no, you don’t get it!
Gatling’s virtual users are not threads, but asynchronous messages.

Gatling uses about:

  • 1 main thread
  • 1 timer thread
  • 1*nb cores for AsyncHttpClient
  • 2*nb cores for Netty
  • 3*nb cores for Akka

That’s roughly 50 threads on an 8-core machine.

The number of concurrent users/messages being processed has nothing to do with this.
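To make the 50 rps / 8 hours case concrete, here is a minimal sketch of the injection profile. It uses constantUsersPerSec from the final Gatling 2 injection DSL, so the step names may differ slightly in 2.0.0-M3a; the class name and base URL are placeholders, and scn is the scenario defined earlier in this thread:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class LongLowLoadSimulation extends Simulation {

  val httpConf = http.baseURL("http://localhost:8080") // placeholder base URL

  setUp(
    // 50 new virtual users started every second, for 8 hours (~1.44 million users).
    // Each user here sends 2 requests, so adjust the rate to hit the desired rps.
    scn.inject(constantUsersPerSec(50) during (8 hours))
  ).protocols(httpConf)
}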

Ok, I get it ) Thanks

Fun. A classic concurrency issue.
Looks like server-side threads are polluting each other’s data.
Your developers will probably say this can’t happen; normal functional tests won’t show it.
The best thing to do is to document this carefully so that the proof is unassailable. Then refine your test cases to check for other symptoms: if this happens, it’s likely that it causes other subtle issues as well.
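If it helps, here is a minimal sketch of how the "pay" step could record unexpected result codes for later analysis, assuming the Gatling 2 check DSL (saveAs) and Session API; the attribute name resultCode is purely illustrative and the exact Session accessors may differ in M3a:

.exec(
  http("pay")
    .post(xml_url)
    .body(StringBody(_ => getXML))
    .check(status.is(200))
    // save the code instead of only asserting it, so anomalies can be logged
    .check(xpath("//result-code").saveAs("resultCode"))
)
.exec { session =>
  // anything other than "0" (e.g. 1419 "already paid") gets printed with the user id
  val code = session("resultCode").as[String]
  if (code != "0") println(s"Unexpected result-code $code for user ${session.userId}")
  session
}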