Gatling scenario taking longer than anticipated



Hi,



I have a Gatling scenario, shown below, that is taking far longer to run than I expected.




scenario("user")
  .feed(userFeeder)
  .exec(loginUser("${currentUser}", User.UserPassword))
  .exec(Teams.listTeams)
  .exec(Teams.getTeam)
  .during(10.minutes) {
    exec(Teams.pollTeam).pause(2.seconds)
  }
  .exec(logoutUser("${currentUser}"))



and this setup



setUp(
  scn.inject(rampUsers(userCount).over(rampUpTime.seconds))
).protocols(httpConf)
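For context, this injection profile starts virtual users linearly over the ramp window. Using the figures stated further down in the question (2000 users, a 5-minute ramp), a rough sketch of the arrival rate:

```scala
object ArrivalRateSketch extends App {
  val userCount = 2000                              // from the question
  val rampSecs  = 5 * 60                            // rampUsers(...).over(5.minutes)
  val arrivalRate = userCount.toDouble / rampSecs   // new virtual users per second
  println(f"$arrivalRate%.1f users/s")              // roughly 6.7 users/s
}
```

Each user then runs the scenario independently of the others, so the ramp only shifts when a user starts, not how long it runs.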



Here pollTeam just requests 2 different endpoints.



I am running this scenario with 2000 users, and I would expect that, after logging in, listing/getting the teams, and polling constantly, each user should take around 10 minutes plus a little extra for the other requests.



I therefore thought that the whole scenario should take the ramp-up time (5 minutes) plus around 15 minutes. However, the test has been taking around 2 hours.
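As a sanity check on that expectation, here is a back-of-envelope sketch of the total wall-clock time, using the figures above (2000 users, 5-minute ramp, 10-minute polling window); the one-minute allowance for the login/list/get/logout requests is a guess, not something from the test:

```scala
object ExpectedDurationSketch extends App {
  val rampSecs  = 5 * 60    // rampUsers(userCount).over(5.minutes)
  val pollSecs  = 10 * 60   // the during(10.minutes) polling loop
  val extraSecs = 60        // hypothetical allowance for login/list/get/logout

  // The last user to start begins at rampSecs and should finish
  // roughly pollSecs + extraSecs later:
  val expectedTotalSecs = rampSecs + pollSecs + extraSecs
  println(s"${expectedTotalSecs / 60} minutes")   // 16 minutes, nowhere near 2 hours
}
```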



(I am running gatling 2.2.2 with the gatling-sbt plugin)

Are the users not running concurrently?



Thanks,



Ben


|

What do your response times look like?

Thanks for the response. Looks like the mean response time was around 2 seconds (we had an issue with our db size, which has now been increased).

The 'poll request' is actually 4 requests as we need information from a few sources.

My understanding of the during block was that it would last only 10 minutes, completing the final iteration if one was still in flight. So if the users are concurrent and the mean request time is 2 seconds (x4) plus a 2-second pause, how can it run for so long? Do I need to split the test across multiple JVMs in order to reach 1500 concurrent users?
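For what it's worth, under that reading of during (the loop re-enters only while the 10-minute window is open, finishing its current iteration), the polling arithmetic with the 2-second mean mentioned above would be:

```scala
object PollLoopSketch extends App {
  val meanResponseSecs = 2.0    // the reported mean response time
  val requestsPerPoll  = 4      // the poll is actually 4 requests
  val pauseSecs        = 2.0    // pause(2.seconds) after each poll

  val iterationSecs = requestsPerPoll * meanResponseSecs + pauseSecs  // 10 s
  val iterationsPerUser = (10 * 60 / iterationSecs).toInt
  println(s"~$iterationsPerUser polls per user")   // ~60 polls in the 10-minute window
}
```

On that model, slow responses would reduce the number of iterations rather than stretch the loop much beyond 10 minutes per user, so a 2-hour run would suggest time being spent outside the during block.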

Thanks Ben

First things first: upgrade! We’ve released 2.2.3.
Then, if you can still reproduce your problem, please provide a reproducer. Otherwise, we’re just playing riddles here.

Unless you’re running on very low resources, like 1-2 cores, 1,500 concurrent users shouldn’t be a problem.

I’m on an EC2 instance (c4.2xlarge), which has 8 cores.

How do I create a reproducer?

Ben

A simulation that we can run and that exhibits the problem.
If it can’t hit a public facing website, you’d have to build a sample app.