HTTP/2 exception handling


I'm seeing some strange behaviour in long-running scenarios using the HTTP/2 protocol with Gatling 3.0.2.

The scenario uses a forever loop, repeating the same action many times. As this is all one iteration, Gatling keeps using the same HTTP/2 session the whole time (which is fine for me in this case).

After some time, I receive such errors from the scenario:

16:56:12.918 [WARN ] i.g.h.e.r.DefaultStatsProcessor - Request ‘xxx’ failed for user 1: i.n.h.c.h.Http2Exception$StreamException: Cannot create stream 2003 greater than Last-Stream-ID 2001 from GOAWAY.

What I suspect is that the first proxy in the infrastructure (which I don't have access to) only accepts around 2000 requests per HTTP/2 connection and then sends a GOAWAY to tell the client to establish a new connection, which a browser probably does transparently.
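The Last-Stream-ID arithmetic supports this reading: in HTTP/2, client-initiated streams use odd IDs (1, 3, 5, …), so the Nth request on a connection gets stream ID 2N − 1, and Last-Stream-ID 2001 corresponds to roughly a thousand accepted requests. A minimal sketch of that arithmetic (plain Scala; the function names are mine, not from Gatling or Netty):

```scala
// Client-initiated HTTP/2 streams use odd IDs: request N -> stream 2N - 1.
def streamIdForRequest(n: Int): Int = 2 * n - 1

// Inverse: how many client streams a GOAWAY's Last-Stream-ID covers.
def requestsCoveredBy(lastStreamId: Int): Int = (lastStreamId + 1) / 2

// A GOAWAY with Last-Stream-ID 2001 means the server only processed
// streams up to ID 2001 (request 1001), so attempting stream 2003
// (request 1002) on that connection must fail.
```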

I was wondering if there is an option in Gatling to automatically re-establish a new connection in this case, as a browser would?

I also noticed that without a proper exitHereIfFailed, the scenario keeps trying to use the HTTP/2 session, which is by then definitively unusable.



PS: I was able to implement a simple workaround: replace the forever loop with a repeat and limit the number of requests performed in each iteration.
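For reference, a minimal sketch of that workaround in the Gatling Scala DSL (the request name, URL, and loop bound are placeholders; the bound assumes the suspected ~2000-request cap):

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Bound each iteration below the proxy's per-connection request cap,
// instead of looping forever on the same HTTP/2 connection.
val scn = scenario("Bounded loop")
  .repeat(1900) { // placeholder bound, below the suspected 2000-request cap
    exec(http("request").get("/")) // placeholder request
      .pause(100.milliseconds)
  }
```

With an injection profile such as constantConcurrentUsers, each finished iteration is replaced by a new virtual user, which opens its own connection, so the GOAWAY limit is never reached.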


Thanks for reporting.
Would you be able to provide us with a reproducer we can use on our side so we can investigate?


Hello Stéphane,

The problem can be reproduced with a standard nginx install with SSL and HTTP/2 enabled, since by default nginx recycles each HTTP/2 connection after 1000 requests.

I've just modified the standard nginx config (from the standard Ubuntu package) with http2 and ssl:

server {
    listen 443 ssl http2 default_server;
    include snippets/snakeoil.conf;

    root /var/www/html;
    server_name localhost;
}
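For completeness, the 1000-request recycling mentioned above comes from nginx's per-connection request cap, which can be tuned; a hedged sketch (the applicable directive depends on the nginx version):

```
# nginx < 1.19.7: HTTP/2 connections have a dedicated cap
http2_max_requests 1000;   # default: 1000

# nginx >= 1.19.7: keepalive_requests applies to HTTP/2 as well
keepalive_requests 1000;   # default: 1000
```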

To test, I used the Gatling BasicSimulation, slightly modified (caching disabled, 10 ms pause):

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BasicSimulation extends Simulation {

  val httpProtocol = http
    .baseUrl("https://localhost") // Here is the root for all relative URLs
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8") // Here are the common headers
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0")
    .disableCaching // caching disabled to reproduce the problem with static resources

  val scn = scenario("Scenario Name") // A scenario is a chain of requests and pauses
    .exec(http("request_1").get("/")) // restored from the stock BasicSimulation
    .pause(10.milliseconds)

  setUp(scn.inject(constantConcurrentUsers(1) during (5.minutes)).protocols(httpProtocol))
}


Reproducing the problem depends on the pause duration, the latency between client and server, and probably the load on the different systems:

  • with a remote system and a slow network, the problem can sometimes occur with a 100 ms pause
  • with a local/fast server, I had to reduce the pause to 10 ms
  • in my real test I use pace, so when the system starts to slow down, I believe the pause time shrinks a lot, which explains why the problem occurs
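As context for the pace point above: pace targets a fixed iteration duration rather than a fixed pause, so when responses slow down, the effective idle time shrinks toward zero. A minimal sketch (Gatling Scala DSL; the request is a placeholder):

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// pace targets one iteration per 100 ms: if the request takes 90 ms,
// only ~10 ms of idle time remains; if it takes 100 ms or more, none does,
// and requests fire back to back on the same HTTP/2 connection.
val scn = scenario("Paced loop")
  .forever {
    pace(100.milliseconds)
      .exec(http("request").get("/")) // placeholder request
  }
```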

When the error occurs, the following messages can be seen:

---- Errors --------------------------------------------------------------------

i.n.h.c.h.Http2Exception$StreamException: Cannot create stream 2003 greater than Last-Stream-ID 2001 from GOAWAY.    1 (100,0%)

and then:

i.g.h.c.i.RequestTimeoutException: Request timeout to localhost/ after 60000 ms    1 (20,00%)

What I suspect is a race condition between Gatling starting to use a connection (from the Netty pool, as I understand it) and Netty processing a GOAWAY message just afterwards.

In addition:

  • I think this kind of problem may also occur if, by bad luck, a full GC happens in Gatling (I know, we should try to avoid full GCs at any cost …)

  • The problem disappears if you switch back to HTTP/1

Thanks for your help, and tell me if you need more info on the problem.



Thanks for the information.
I’ve opened an issue:


Dear colleagues, I still receive messages like the one in the original post (i.n.h.c.h.Http2Exception$StreamException: Cannot create stream 203 greater than Last-Stream-ID 201 from GOAWAY) using the latest version of Gatling. As I understand it, after the fix these messages shouldn't appear, because Gatling should retry by establishing a new connection. Could you shed some light on this question, please?

Not sure, but I noticed that if I have a bunch of requests performed one after another without pauses, all of them fail with the message above, and in the request headers I can see x-http2-stream-id increasing for these failed requests. If I have a pause after these requests, the requests that follow it work fine, with x-http2-stream-id starting again at 3, and so on.

Wednesday, March 20, 2019 at 15:25:50 UTC+3, Stéphane Landelle:

Please make sure to use the latest version of Gatling.
If you still experience an issue, please provide a reproducer we can run on our side.

Unfortunately, at the moment I don't know how to give you a way to reproduce this error, as my test environment needs a VPN, credentials, and so on. But I can confirm that in my case, if I make a series of 100 requests one by one without pauses, I get the errors, whereas if I add even a small pause after each request, everything works fine.
I'm using the latest version (3.7.2).

Saturday, December 4, 2021 at 10:48:35 UTC+3, Stéphane Landelle:

Instead of granting access to your platform, you also have the possibility of building a standalone sample.
Without a reproducer, there’s really nothing we can do.

I’ve tried to reproduce your issue against nginx to no avail. Gatling 3.7.2 properly creates a new connection on GoAway.