```scala
setUp(scn.inject(constantUsersPerSec(30) during (1 minutes))).protocols(httpConf)
```
I assume that it simulates 30 new users calling and finishing the call to the one web service call in my .exec per second?
Since I want my web service call to handle 30 users per second with a response time of 1 second per user, I would assume the scenario takes 1 minute, and anything beyond that in the scenario's duration indicates longer response times on my WS call, right? That is, given the load of the scenario (30 calls per second), any time beyond 1 minute is the delay (the delta) I get from putting on 30 users per second, since one call in the real world takes 1 second.
```scala
setUp(scn.inject(constantUsersPerSec(30) during (1 minutes))).protocols(httpConf)
```
> I assume that it simulates 30 new users calling and finishing the call to the one web service call in my .exec per second?
It makes no assumptions about finishing time. Think of it like this: 1 second = 1,000 milliseconds, so to get 30 users per second, a new user is started about every 33 milliseconds. Over the course of the 1 minute it would fire up 1,800 users.
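The arithmetic above can be sketched as a quick back-of-envelope (plain Python, not Gatling code; the constants mirror the injection profile in the question):

```python
# Open workload model: constantUsersPerSec(30) during (1 minutes).
RATE = 30        # new users started per second
WINDOW_S = 60    # injection window, in seconds

interval_ms = 1000 / RATE       # gap between consecutive user starts
total_users = RATE * WINDOW_S   # users injected over the whole window

print(f"new user every ~{interval_ms:.1f} ms")    # ~33.3 ms
print(f"users injected in total: {total_users}")  # 1800
```

Note that this is an open model: users are injected on a fixed schedule regardless of whether earlier users have finished.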
> Wanting my web service call to deal with 30 users per second with the response time of 1 second per user, I would assume the scenario takes 1 minute, and anything above that in the duration of the scenario indicates longer response times on my WS call, right?
Close. The last user starts right around the 60-second mark, and that user still takes about a second to return, so the total duration will be around 61,000 milliseconds, give or take.
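Under the same idealized assumption of a constant 1-second response time, the expected wall-clock duration works out like this (hypothetical arithmetic, not Gatling output):

```python
# Last user starts at the end of the 60-second injection window
# and still needs its full response time to complete.
WINDOW_S = 60      # injection window, in seconds
RESPONSE_S = 1.0   # assumed constant response time per call

total_duration_s = WINDOW_S + RESPONSE_S

print(f"expected duration: ~{total_duration_s * 1000:.0f} ms")  # ~61000 ms
```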
> As in given the load of the scenario (30 calls per second), is anything beyond 1 minute the delay (the delta) I get from putting on 30 users per second, since one call in the real world takes 1 second?
In essence, yes. But it is easier to just look at the average response time in the reports.