default report output

Hi,

I have been testing out the open workload model support in the 2.0.0 snapshot.
Great stuff overall; I have a couple of thoughts:

  1. meanNumberOfRequestsPerSecond is confusing (or could cause confusion)

I can make this metric change just by changing the test duration, because it includes the time taken to complete all outstanding requests. This is most visible when response times are long, e.g. 10 seconds, which could easily happen if we break a system we are only just starting to tune.

for example: scn.inject(constantUsersPerSec(1).during(10 seconds))
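
To illustrate with hypothetical numbers: that profile sends 10 requests over 10 seconds of injection, so the intended rate is 1 req/s. But if each response takes 10 seconds, the last one only completes around t=20, and a mean taken over the whole run comes out at roughly 10 / 20 = 0.5 req/s, half the injected rate, purely because of the response-time tail.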

That’s a bug.
Investigating.

Thanks

Damn, actually, what would your definition of constantRate(1) during(5 seconds) be?

0 → start 1 user
1 → start 1 user
2 → start 1 user
3 → start 1 user
4 → start 1 user

so you’d start a total of 5 users, and nothing at t=5sec?

assuming each user only sends 1 request without pausing

None completed at t=5.
5 requests were sent in 5 seconds, so a sent/arrival rate of 1, which matches the constantRate(1) input: we asked for an arrival rate of 1 and got it.

Arrival/sent here means Gatling applied that rate to the SUT. There could be a bottleneck between Gatling and the SUT resulting in none or only some of the requests making it through. Typically we can only measure the true arrival rate from within the SUT or at the SUT boundary.

The completion rate would be undefined at t=5 and remain so until the first request completes. Because the injection rate is constant, from t=10 the completion rate will also be 1, assuming the SUT can sustain the load.
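
To make that concrete with the 5-user example and a (hypothetical) flat 10-second response time: the users start at t=0..4 and their responses complete at t=10..14, so the completion rate is undefined before t=10 and then runs at 1 per second from t=10 to t=14, mirroring the injection rate with a lag of one response time.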

Anything else doesn’t mean a lot as it mixes up different measurements.

If the user injection rate is changing/ramping up, then it gets more involved. For example, a ramp over 100 seconds up to 100 users per second, followed by a constant 100 users per second for another 100 seconds. If we include the ramp in the final throughput value, then it will be less than 100 even if the SUT was able to sustain 100 users per second for 100 seconds.
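
For reference, a minimal sketch of that profile with the injection DSL, assuming the current snapshot's names (the base URL and request are made up for illustration):

  import io.gatling.core.Predef._
  import io.gatling.http.Predef._
  import scala.concurrent.duration._

  class RampThenConstantSimulation extends Simulation {

    val httpConf = http.baseURL("http://localhost:8080") // hypothetical SUT

    val scn = scenario("open model").exec(http("request").get("/"))

    setUp(
      scn.inject(
        rampUsersPerSec(1) to 100 during (100 seconds),  // ramp phase
        constantUsersPerSec(100) during (100 seconds)    // steady phase
      )
    ).protocols(httpConf)
  }

Reporting throughput separately for the two phases would avoid the ramp dragging the steady-state number below 100.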

We can use one summary stat object for each part of the load profile, since both t-digest and HdrHistogram support add/union. Reporting the above should then be possible.
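
As a rough sketch of the per-phase stats idea with HdrHistogram (the bounds and recorded values below are made up; t-digest has an equivalent merge operation):

  import org.HdrHistogram.Histogram

  // one histogram per injection phase, tracking response times in ms
  val ramp   = new Histogram(3600000L, 3)
  val steady = new Histogram(3600000L, 3)

  // record each response time into the histogram of its phase
  ramp.recordValue(250)
  steady.recordValue(120)

  // report per phase, and merge when an overall view is wanted
  val overall = new Histogram(3600000L, 3)
  overall.add(ramp)
  overall.add(steady)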

I think inject should be fine now: https://github.com/excilys/gatling/pull/1885

Thanks. Basic question: how do you run Gatling from sbt/Eclipse, to have a look at those changes?

Running Gatling directly from sbt: Pierre’s new sbt plugin: https://github.com/gatling/gatling-sbt (see the sketch after the list below)
Integrating an sbt project into Eclipse:

  1. https://github.com/typesafehub/sbteclipse (declare the plugin in the global file)

  2. Or… still use the gatling maven archetype (requires installing m2e-scala)
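
If it helps, a minimal sketch of the plugin declarations (the versions are placeholders; check each project's README for the current one):

  // project/plugins.sbt of the load-test project, for gatling-sbt
  addSbtPlugin("io.gatling" % "gatling-sbt" % "<latest>")

  // the global file (e.g. ~/.sbt/0.13/plugins/plugins.sbt), for sbteclipse
  addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "<latest>")

Then running "sbt eclipse" should generate the Eclipse project files, and gatling-sbt adds a gatling:test task for running simulations.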

Apologies, I meant from source, not the snapshot; more like running it in Eclipse with the debugger, etc.

Thanks. With the snapshot from the 26th I retried the earlier test of the reported request rate and got the same numbers, so I have opened an issue for it.