New here. I searched but didn’t find a solution. I’m using Gatling 2.0.
My question: I set up the test as:
setUp(scn.inject(constantUsersPerSec(requestNumber) during(duration seconds)).protocols(httpConf))
where requestNumber = 3000 and duration = 10.
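For context, the full simulation looks roughly like this (only the setUp line is verbatim from above; the class name, scenario body, and host are placeholders, since the real URL is redacted in this thread):

```scala
// Sketch of the simulation, Gatling 2.0 DSL. Everything except the setUp
// line is a reconstruction; the host and path are placeholders.
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class GetSimulation extends Simulation {
  val requestNumber = 3000
  val duration = 10

  val httpConf = http.baseURL("http://api.example.com") // placeholder host

  val scn = scenario("simple get")
    .exec(http("get json").get("/some.json.gz")) // plain HTTP GET, small response

  setUp(scn.inject(constantUsersPerSec(requestNumber) during(duration seconds)).protocols(httpConf))
}
```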
When I run it, it actually takes around 40-50 seconds to finish, and each second only reaches about 500 concurrent users. I’ve seen other posts where people reach really nice concurrency. (Like here: https://github.com/gatling/gatling/issues/414)
I’m not sure where to check (sorry, I could not find it…). The request is a simple HTTP GET. I configured the system limits as described in that post. Could you tell me how to figure out the possible limiting factors, beyond those covered at http://gatling.io/docs/2.0.0-RC4/general/operations.html?
I’m not getting any java.io exceptions, which may mean the limit is not from sockets/file descriptors.
min 2 ms, max 4430 ms, mean 238 ms
95th percentile: 492 ms
99th percentile: 1658 ms
Mean req/s: 398.4
Peak req/s: 505
I just found something that may be related to this issue.
The number of active users goes up to ~26k in around 10 s, and then declines linearly to zero by the end of the run.
Maybe the active user count is related to the number of requests per second?
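It does look related: if users arrive at 3000/s but only ~500 requests/s complete, the backlog of active users grows at ~2500/s during the 10 s injection and then drains at ~500/s. A quick sanity check, using only the rates reported in this thread:

```scala
// Back-of-the-envelope check using the rates reported in this thread.
object ActiveUsersCheck extends App {
  val arrivalRate    = 3000.0 // injected users per second
  val injectionSecs  = 10.0   // injection window
  val completionRate = 500.0  // observed peak requests completed per second

  // While injecting, users pile up faster than they finish:
  val backlog = (arrivalRate - completionRate) * injectionSecs
  println(s"backlog after injection: ${backlog.toInt} users") // 25000, close to the ~26k observed

  // Draining that backlog at the completion rate:
  val drainSecs = backlog / completionRate
  println(s"time to drain backlog: ${drainSecs.toInt} s")     // 50 s, matching the 40-50 s run time
}
```

So the 26k active-user peak and the 40-50 s run time are both what you'd expect if the system tops out near 500 req/s.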
3000 new users per second is huge if your injection machine or your system under test can’t handle such load, all the more so because you don’t warm the system up and instead hit it cold with a 3000-user front.
What do your CPU and bandwidth usage look like?
Yeah, it is huge, but needed. I measured the response and it seems to be about 280 bytes.
(used: curl -s -w %{size_header} -o /dev/null http://api.XXXX/XXX.json.gz)
That is the average, so 505 × 280 / 1024 ≈ 138 KB, i.e. less than 200 KB/second of inbound traffic (correct me if anything above is wrong).
Just plain text; still struggling to find the bottleneck…
Can you also try different numbers of users per second, and find where it stops scaling? Like a binary search, or git bisect, on usersPerSec: try 15,000; if that works, increase by half the interval to ~22,500; if it doesn’t, halve to 7,500; and so on.
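The probing schedule would look something like this (a sketch only: the bounds are illustrative, and worksAt is a stand-in for "run the simulation at this rate and check the KO count", here assumed to top out near the observed 500 req/s):

```scala
// Illustration of bisecting on usersPerSec to bracket the scaling limit.
object RateBisect extends App {
  // Stand-in for running a real simulation at the given rate; assumes
  // (for illustration) that the system stops keeping up above 500 req/s.
  def worksAt(rate: Double): Boolean = rate <= 500.0

  var lo = 0.0      // highest rate known to work
  var hi = 30000.0  // assumed rate known to fail
  for (_ <- 1 to 6) {
    val mid = (lo + hi) / 2
    if (worksAt(mid)) lo = mid else hi = mid
    println(f"probed $mid%.1f -> limit is in [$lo%.1f, $hi%.1f]")
  }
}
```

After a handful of runs the interval brackets the real ceiling, which tells you what load level to use when hunting for the bottleneck.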
Also, instead of that big-bang load, perhaps try ramping it up over a period of more than 30 seconds. I doubt the JVM can handle creating 30,000 of anything (objects, actors, files, etc.) in such a short interval; the run would be dominated by the work of allocating all those resources on the heap and by the GC activity that causes.
Yeah, as for where the bottleneck is: there’s no difference whether I use 1000 or 2000 users per second:
mean requests/sec 405.0 (OK=359.5 KO=45.49 )
mean requests/sec 429.4 (OK=251.7 KO=177.7 )
It never gets past 500 req/s, let alone 1000.
I don’t know how to push it higher…
The CPU or I/O will be pegged on whichever machine is the bottleneck. Look at the CPU of every machine involved in the simulation: your machine, the web server, the database server. If none of them shows high CPU, then start looking at disk I/O and/or network I/O.
Do a simulation that ramps from 1 user to 1000 users per second, adding 1 user per second. While the scenario is ramping up, monitor all the resources. If you are looking in the right place, you’ll find the bottleneck.
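In the Gatling 2.0 DSL that ramp can be expressed with rampUsersPerSec (a sketch, reusing the scn and httpConf from the original setUp):

```scala
// Ramp the arrival rate from 1 to 1000 new users per second over 1000
// seconds, i.e. roughly +1 user/s every second. While this runs, watch
// CPU, disk and network on every machine; the resource that pegs first
// as throughput flattens is the bottleneck.
setUp(
  scn.inject(
    rampUsersPerSec(1) to (1000) during (1000 seconds)
  ).protocols(httpConf)
)
```

The slow ramp also avoids the big-bang allocation burst mentioned earlier, so the JVM and GC on the injector aren’t themselves the first thing to fall over.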