When I run this load model:
`
setUp(scn.inject(rampUsersPerSec(1) to(100) during (100 minutes))).maxDuration(200 minutes).protocols(httpConf)
`
After a while I get:
`
…
`
Check antivirus, firewall, and anything that would be preventing Java from using the network.
You mean antivirus and firewall on the server I am load testing against? Or could it be something on my Mac that is executing the tests?
I have "tuned" my OS X for Gatling with the tips from gatling.io
Cheers
No, on the Gatling host.
Hello,
I am fairly inexperienced in this field of checking antivirus and firewall settings. Do you have any tips?
I run Mac OS X Yosemite.
Cheers
Are you sure you didn’t simply have a temporary network outage? This error really means that the Network stack is down.
Magnus,
I believe you are starting way too many users from one machine. With your injection profile, about 300K users will be injected in total; that many connections from one computer may bring down its network stack.
I think you should split up your users and run from various machines. 10, at least.
You can combine your results afterwards. http://gatling.io/docs/2.0.3/cookbook/scaling_out.html
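For reference, the ~300K figure falls straight out of the injection profile: `rampUsersPerSec(1) to(100) during(100 minutes)` is a linear ramp, so the total number of users injected is the area of a trapezoid. A quick sanity check in plain Scala (no Gatling required; the numbers are taken from the profile above):

```scala
object TotalUsersCheck {
  def main(args: Array[String]): Unit = {
    val startRate = 1.0             // users/sec at the start of the ramp
    val endRate = 100.0             // users/sec at the end of the ramp
    val durationSeconds = 100 * 60  // 100 minutes

    // Linear ramp => total injected users = area under the rate curve
    val totalUsers = (startRate + endRate) / 2 * durationSeconds
    println(f"total users injected: $totalUsers%.0f") // 303000
  }
}
```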
Also, are you trying to run only 100 users for 200 minutes? If that's the case, try something like this:
`
val scn = scenario("My Scenario").during(3 minutes) {
  pace(5 seconds, 10 seconds)
    .exec(...) // your scenario exec chain
}

// injecting users
setUp(
  scn.inject(rampUsers(100) over (200 seconds))
).protocols(httpProtocol)
`
val scn = scenario("Search").during(3 minutes) {
  pace(5 seconds, 10 seconds)
    .exec(FacetClicks.facetClicks, PageClicks.pageClicks, PreviewClicks.previewClicks)
}
Hello,
What is the rationale behind this? Do you combine time constraints before the .exec chain with a scenario load model?
In this case, what will happen? Every time a user runs the scenario, will it take 3 minutes, and what is meant by "pace"?
Does one user take 3 minutes, plus 5 to 10 seconds, each time it runs?
val scn = scenario("My Scenario").during(3 minutes) {
  pace(5 seconds, 10 seconds)
    .exec(...) // your scenario exec chain
}

// injecting users
setUp(
  scn.inject(rampUsers(100) over (200 seconds))
).protocols(httpProtocol)
What it means is that your 100 users will keep looping over your exec chain for 3 minutes. You can change that to whatever your requirements are (200 minutes, etc.).
Pace means that each iteration will take at least a randomly chosen 5 to 10 seconds: if the exec chain finishes sooner, Gatling pauses the user for the remainder before starting the next iteration, so as to give the server some breathing room. You can change it or remove it.
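To make the pace behaviour concrete, here is a plain-Scala sketch of the timing model (my own illustration, not Gatling internals): each iteration gets a randomly chosen floor between 5 and 10 seconds, and if the exec chain finishes faster than that floor, the virtual user sleeps for the difference.

```scala
import scala.util.Random

object PaceSketch {
  def main(args: Array[String]): Unit = {
    val rng = new Random(42)
    val (paceMin, paceMax) = (5.0, 10.0) // pace(5 seconds, 10 seconds)
    val execChainSeconds = 2.0           // suppose the requests take 2s

    var clock = 0.0
    for (i <- 1 to 3) {
      val floor = paceMin + rng.nextDouble() * (paceMax - paceMin)
      // the iteration cannot end before the pace floor has elapsed
      val iterationSeconds = math.max(execChainSeconds, floor)
      clock += iterationSeconds
      println(f"iteration $i ends at t=$clock%.1fs (pace floor $floor%.1fs)")
    }
  }
}
```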
It seems like your scenario model is creating more than 100 users (not sure if that is intentional). With the approach I provided, it will generate only 100 users, who will keep hitting the server, if that is what you want.
That's it, great. Thanks a lot!
During some time running my simulation:
…
val scn = scenario("Scenario").during(3 minutes) {
pace(5 seconds, 10 seconds)
…
setUp(scn.inject(rampUsers(100) over (200 seconds))).protocols(httpConf)
…
I get:
[--------------------------------------------------------------------------]  0%
          waiting: 52 / active: 48 / done: 0
---- Requests ------------------------------------------------------------------
I might be able to answer some of these to the best of my knowledge
When users are active, unless you put them to pause for a very long time, they should be hitting the server and getting responses. You can verify that by enabling request/response logging in logback.xml and looking at the requests and responses.
Stephane may correct if I am wrong. But this is what I have noticed while running gatling tests.
Hello Magnus,
the number active is 48. What does this mean?
This is the number of users that have started but not finished their scenario. Active in this context means they could be sending/receiving a request, or just pause()ing and applying no load to the system; either one. In the case above you have a closed model where the scenario loops (closed like a closed loop: users start a new scenario once they're done with their current one). So 48 have been injected into the simulation, and the other 52 are waiting to participate (they have not been injected yet, have done no work, and have not interacted with the SUT). 0 are done, as the 48 injected are still looping and none have completed the 3-minute looping duration.
That I have 48 active users repeating the scenario or just hold the connection?
You have 48 users repeating the scenario. I don’t think there is a concept of holding the connection (or at least clarify what this means to get more help).
Is the throughput in terms of users 48 at this point in time?
Yes, those are the only users interacting with the system.
“waiting” users have not yet been injected and are not participating in the simulation yet; they have not started their scenarios or sent any requests.
This could also be simulated by just starting a lot of users without a loop such as 'during', but I guess you get more control over the throughput by defining a loop duration and tweaking it to get the right 'active' count?
… It depends what your real users do, I can’t emphasise that enough.
Currently you can either control the “active” count (closed workload model, looping scenario),
OR (mutually exclusive !),
you can control the throughput (usersPerSec() open workload model, scenario does not loop).
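For readers landing here, a side-by-side sketch of the two models in the Gatling 2 DSL (`myRequests` and the specific numbers are placeholders of mine, not from the thread):

```scala
// Closed workload model: you fix the number of concurrent users.
// 100 users loop over the chain for 10 minutes; throughput is whatever
// those users manage to achieve against the SUT.
val closed = scenario("Closed").during(10 minutes) {
  pace(5 seconds, 10 seconds).exec(myRequests)
}
// setUp(closed.inject(rampUsers(100) over (60 seconds)))

// Open workload model: you fix the arrival rate (users/sec).
// Each user runs the scenario exactly once; the number of concurrent
// users is whatever the system's response times let it settle at.
val open = scenario("Open").exec(myRequests)
// setUp(open.inject(constantUsersPerSec(10) during (10 minutes)))
```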
For completeness, there is a ticket to achieve both: https://github.com/gatling/gatling/issues/1647. One use case for that might be testing an internet-facing service that sits behind a service with a finite connection pool, for example.
I’ll look at the simulation timings with a known test workload.
Thanks,
Alex
During some time running my simulation:
…val scn = scenario("Scenario").during(3 minutes) {
pace(5 seconds, 10 seconds)
…
setUp(scn.inject(rampUsers(100) over (200 seconds))).protocols(httpConf)
Um… 3 minutes = 3 × 60 = 180 seconds. You ramp over 200 seconds, so your first users will be done before the last ones start. If you ever want to see 100 active users, you need a longer duration. I suggest setting the duration to 500 seconds (5 minutes plus the 200-second ramp) so that once all of the users have been started, the scenario will continue to run at full load for 5 minutes.
In reality, 5 minutes is too little for a real-world test. But to prove that Gatling is working correctly, start there.
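The arithmetic behind that suggestion can be sketched in a few lines (plain Scala, numbers taken from the messages above):

```scala
object DurationCheck {
  def main(args: Array[String]): Unit = {
    val rampSeconds = 200        // rampUsers(100) over (200 seconds)
    val fullLoadSeconds = 5 * 60 // desired time with all 100 users active

    // during() must cover the ramp plus the sustained-load window
    val duringSeconds = rampSeconds + fullLoadSeconds // 500

    // All 100 users are active from t = 200s (last user injected)
    // until t = 500s (first user's during() window expires).
    val fullLoadWindow = duringSeconds - rampSeconds // 300

    // The run ends when the last-injected user finishes its loop.
    val totalRunSeconds = rampSeconds + duringSeconds // 700
    println(s"during=${duringSeconds}s fullLoad=${fullLoadWindow}s total=${totalRunSeconds}s")
  }
}
```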
…
I get:
--------------------------------- ] 0%
waiting: 52 / active: 48 / done:0
---- Requests ------------------------------------------------------------------
Global                                                   (OK=634    KO=0     )
getProfile                                               (OK=326    KO=0     )
getCouponList                                            (OK=308    KO=0     )
================================================================================
the number active is 48. What does this mean? That I have 48 active users repeating the scenario, or do they just hold the connection?
It means that there are 48 users that have been started up and are running.
Is the throughput in terms of users 48 at this point in time?
Yes. Transactions per second at that moment in time would be for the users that were active at that moment in time.
This could also be simulated just starting a lot of users without a loop as 'during'
Not if you are ramping slowly. You want the during() in order to ensure that you manage to get your total simultaneous user count to the target level.
but I guess you get more control over the throughput defining a loop duration and tweaking it to get the right 'active' count?
There should be no tweaks required. Just set the duration to be ramp time plus sustained load time and it should work perfectly. Always has for me, anyway.
After 180 seconds the number done starts to increase from 0
Because during() is 180 seconds… exactly what you would expect
, and after 320s this is the situation.
Looks about right. Users are injected at one every 2 seconds (100 over 200 seconds), and each loops for 180 seconds, so the first user finishes at t = 180s and thereafter roughly one finishes every 2 seconds. At 320 seconds, everyone injected before t = 320 − 180 = 140s has finished: that is 70 users done, leaving 30 still active, which is exactly what the snapshot shows. Variable pace durations can smear this a little, so the user-count graph may not be a perfectly clean line during ramp-down.
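That bookkeeping can be checked with a small model (plain Scala; it assumes perfectly uniform injection of 100 users over 200 seconds and a fixed 180-second looping window, ignoring pace jitter):

```scala
object UserCounts {
  def main(args: Array[String]): Unit = {
    val totalUsers = 100
    val rampSeconds = 200.0
    val loopSeconds = 180.0              // during(3 minutes)
    val rate = totalUsers / rampSeconds  // 0.5 users injected per second

    // waiting / active / done at time t, assuming a uniform ramp
    def counts(t: Double): (Int, Int, Int) = {
      val started = math.min(totalUsers, (rate * t).toInt)
      val done = math.min(totalUsers, math.max(0, (rate * (t - loopSeconds)).toInt))
      (totalUsers - started, started - done, done)
    }

    println(counts(96.0))  // (52,48,0)  -- matches waiting: 52 / active: 48 / done: 0
    println(counts(320.0)) // (0,30,70)  -- matches waiting: 0 / active: 30 / done: 70
  }
}
```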
================================================================================
2014-11-24 20:32:22 320s elapsed
---- Scenario ------------------------------------------------------------------
[###################################################-----------------------] 70%
          waiting: 0 / active: 30 / done: 70
---- Requests ------------------------------------------------------------------
Global                                                   (OK=4629   KO=0     )
getProfile                                               (OK=2321   KO=0     )
getCouponList                                            (OK=2308   KO=0     )
================================================================================
I said the duration would be 3 minutes (180 seconds), and I also said that I wanted the scenario to ramp 100 users during 200 seconds. But none of the above is correct, as it takes 318 seconds to accomplish the test.
Total test completion time is ramp time plus during() time: 200 + 180 = 380 seconds. You just said that at 320 seconds it still had 30 users working. Clearly, the total test is not completed in 318 seconds.
When I have, e.g., 50 active users, do they repeat the scenario or just hold the connection?
They do what is described in the during() loop. So what they do depends on what you write. Why? Because a tool that doesn’t do what you tell it to do wouldn’t be much use, would it?
so I tested with the known workload…
To test the tool itself you need to apply it to a system that you know works within certain bounds, like calibrating scales.
I said the duration would be 3 minutes (180 seconds), and I also said that I wanted the scenario to ramp 100 users during 200 seconds.
But none of the above is correct as it takes 318 seconds to accomplish the test.
I did something slightly shorter and simpler.
I expect at 120 seconds all the users to have been injected, zero waiting, which is what we see below.
then at 180 seconds, or just over, all are done (the last user, injected at the 120th second, has completed its 1 minute's worth of looping). tick.
When I have, e.g., 50 active users, do they repeat the scenario or just hold the connection?
Why
The reason the above simulation peaks at 50 active users from second 60 to second 120 is that we are injecting users at a rate of 100/120 per second and the looping duration is 60 seconds, so active users = 100/120 × 60 = 50.
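In other words, it is Little's law: average concurrency = arrival rate × time each user spends in the system. A quick check in plain Scala, using the numbers from this simulation:

```scala
object SteadyStateActive {
  def main(args: Array[String]): Unit = {
    val totalUsers = 100.0
    val rampSeconds = 120.0     // rampUsers(100) over (120 seconds)
    val residenceSeconds = 60.0 // during(1 minute): each user loops for 60s

    // Little's law: active = arrival rate * residence time
    val activeUsers = totalUsers * residenceSeconds / rampSeconds
    println(activeUsers) // 50.0
  }
}
```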
val scn_pace = scenario("My Scenario").during(1 minute) {
  pace(5 seconds, 10 seconds).exec(req_sleep)
}
…
scn_pace.inject(rampUsers(100) over (120 seconds))