Gatling 1.5.x reporting not representative of reality

Hey there,

I have noticed that Gatling creates charts that don’t really align with what it’s actually doing, but instead graphs Gatling itself. Graphs start when a scenario starts, as opposed to showing when requests actually start going out the door. Similarly, active sessions are based on when a scenario starts, as opposed to when the first request is made inside of that scenario. As you might imagine, when I shared the graphs with people, we started scratching our heads as to how all these graphs added up.

Here’s an example simulation (scenario itself redacted) and its associated simulation.log:
And here’s a (small) screenshot of the associated report:

There’s 17 seconds of graphing where Gatling is sitting there and no requests have begun, but the graph makes it look as if we’re actually interacting with the server during that time - which doesn’t align at all with the 2 seconds where we’re actually communicating back and forth.

Is this the intended behavior? In large tests, we haven’t noticed this - probably because we’re ramping users - but I imagine this is just as much of a problem with thousands of users being ramped over time.

Thanks, appreciate your insight!

Looks like a bug, let me investigate.

That’s actually not a charting bug: the first request was indeed sent about 17 seconds after the first user started.

What does ProductScenario.quotesForPricingSubject look like exactly? Does it start with some pause or custom computation before sending the first request?

There’s nothing special going on that I know of.

object ProductScenario {

  // …

  val pricingSubjectParameters = csv("services/pricing-subject-parameters.csv")

  // …

  val quotesForPricingSubject = scenario("Pricing Subject Quote")
    Map("effectiveDate" -> "${effective_date}", "zipCode" -> "55105", "planYearDetailsBloomId" -> "${bloom_id}")).asJSON


FWIW, the standardHttpConf method just sets up some global settings and request/response info extractors.


I think I get it.

Gatling’s http module first performs an HTTP request to our website in order to warm up the HTTP stack, so that results on the first measured request aren’t crappy. By default (see gatling.conf), the request targets our website.

You’re probably running behind a proxy that won’t let you access the internet directly, so the request times out (you would see a log entry if you lowered the logging level to INFO).

You can either:

  • disable the warm-up (but then, first request results might not be good)
  • change the warm-up URL to one you can reach
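For reference, here’s a sketch of what the second option could look like in gatling.conf. The exact key names are an assumption on my part - check the reference configuration file shipped with your 1.5.x version - and the URL below is just a placeholder for something reachable from your network:

```
# gatling.conf (hypothetical excerpt - verify key names against your version's reference file)
gatling {
  http {
    # Point the warm-up request at a URL your machine can actually reach,
    # e.g. an internal host, instead of the default public website.
    warmUpUrl = "http://intranet.example.com/"
  }
}
```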

Anyway, we’re not performing this request at the right time (it should happen before injecting users), and the timeout should be logged as a warning.

Could you try what I propose and tell me if it did fix your problem, please?



I was wrong: warm up is done before starting users.

Something took a very long time between starting the user and building the request.
It could be:

  • feeding, but I don’t see how this could be a problem
  • building the request: my big suspect
    In Gatling 1, we were supporting the Scalate templating engine, which compiles (and caches) text templates into scala classes, then into bytecode. You can imagine how fast starting an embedded Scala compiler and compiling Scala classes on the fly can be. Not!

What I suspect is that, as you don’t ramp your users, they all try to compile the template at the same time, and s**t happens.

We’ve dropped Scalate in Gatling 2 and made building bodies much, much easier.
If you’re using body templates, I strongly advise you to migrate to Gatling 2:

You can either use text files that use the Gatling EL syntax, or directly use Scala 2.10 string interpolation in your Scala code.
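As a minimal sketch of the second option (plain Scala string interpolation - no template compilation involved), assuming field names borrowed from the snippet earlier in the thread and hypothetical values in place of feeder data:

```scala
// Build a JSON body with Scala 2.10+ string interpolation: s"..." substitutes
// $name / ${expr} at runtime, so there is no on-the-fly compiler like Scalate's.
def quoteBody(effectiveDate: String, zipCode: String): String =
  s"""{"effectiveDate": "$effectiveDate", "zipCode": "$zipCode"}"""

// Example values are made up for illustration.
println(quoteBody("2013-09-01", "55105"))
```

In a real simulation these values would come from your feeder via the Session rather than being hardcoded.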


That would certainly do it, thanks for the eyes!

Looking forward, I’m super interested in Gatling 2, but we’ve built an entire framework on top of Gatling 1. How stable is the API now? I’d have to refactor our simulation.log stuff - which should be pretty easy - but what about the rest? Would you say it’s stable enough for production use (saying no is fine, I’ll probably make a fork anyway)? :slight_smile:

As always, I appreciate the help and rapid feedback.

Impact of migrating from Gatling 1 to 2? Well, it depends on how much custom stuff you’ve written. If you’ve developed your own protocol support (extended our private APIs) or wrote tons of Session functions, you might have a fun time…

Regarding stability, there’s some stuff in the DSL that might still change. For example, we’ve added a new injection DSL, but it might change as we’re considering implementing throughput control/throttling. Then, there’s clustering, which we haven’t really thought about yet.
But well, it works pretty well and I personally like it a lot better than Gatling 1.

Sounds good, thanks! Speaking of clustering, is this something pegged for Gatling 2 still, or are we looking at this being further down the road?


Honestly, I fear we won’t be able to work on this in 2013 - so much to do.