Response checks failing when upgrading to Gatling 3 because of HTTP 100 responses

I’m in the process of updating our existing suite of stress tests to use Gatling 3. During this process I’ve noticed several existing checks that now fail with Gatling 3, e.g.

build 16-Jan-2019 15:20:24 10:20:24.507 [gatling-http-1-2] WARN i.g.h.e.r.DefaultStatsProcessor - Request 'PUT release jar' failed for user 1:, but actually found 100

Is there some configuration I need to change to make the existing check compatible with Gatling 3?


What does your request look like? Does it have an Expect: 100-Continue header?
Could you please provide a reproducer we can use on our side?

The request looks something like this

http("PUT release jar")
.basicAuth("${username}", "${password}")
http("PUT release jar.sha1")
.basicAuth("${username}", "${password}")
http("PUT release jar.md5")
.basicAuth("${username}", "${password}")
http("PUT release pom")
.basicAuth("${username}", "${password}")
http("PUT release pom.sha1")
.basicAuth("${username}", "${password}")
http("PUT release pom.md5")
.basicAuth("${username}", "${password}")


val expect = Map(
  "Cache-control" -> "no-cache",
  "Cache-store" -> "no-store",
  "Expect" -> "100-continue",
  "Expires" -> "0",
  "Pragma" -> "no-cache")
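For context, the fragments above would typically come together in a single Gatling 3 request definition along these lines. This is only a minimal sketch: the repository path, the body file, and the `status.is(201)` check are illustrative assumptions, not taken from the original simulation.

```scala
// Hypothetical sketch of one of the PUT requests, assembled for Gatling 3.
// The URL path, body file, and expected status are placeholders for illustration.
import io.gatling.core.Predef._
import io.gatling.http.Predef._

val expect = Map(
  "Cache-control" -> "no-cache",
  "Cache-store" -> "no-store",
  "Expect" -> "100-continue", // the header suspected of triggering the failure
  "Expires" -> "0",
  "Pragma" -> "no-cache")

val putJar = http("PUT release jar")
  .put("/repository/releases/com/example/app/1.0/app-1.0.jar") // placeholder path
  .basicAuth("${username}", "${password}")
  .headers(expect)
  .body(RawFileBody("app-1.0.jar")) // placeholder body file
  .check(status.is(201)) // fails with "found 100" on Gatling 3 per the log above
```

With `Expect: 100-continue` set, the server may reply with an interim `100 Continue` before the final status, which appears to be what the check is seeing here.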

LGTM, so it’s probably a regression in the Expect: 100-continue handling.

Sorry, but I’m way too swamped to build a sample app at the moment.
If you could share a private access to a Nexus repo and provide a sample simulation, it would be great!

Another piece of the puzzle is that the issue appears to occur only when the application under test is behind a load balancer (specifically an AWS ELB).

I’ll see if I can put together a simplified reproduction case.

Using nginx as a reverse proxy in front of the application under test also appears to trigger the issue.

I’ve put together an example project that I believe reliably exhibits the behavior.

NOW, this is a reproducer, thanks!

Stéphane Landelle

GatlingCorp CTO