j.l.IllegalArgumentException: The URI contain illegal characters

Gatling version: 3.14.9
Gatling flavor: java
Gatling build tool: gradle

I made sure I’ve updated my Gatling version to the latest release.
I read the guidelines and the how-to-ask-a-question topics.
I provided an SSCCE (or at least, all the information to help the community understand my topic).
I copied the output I observed and explained what I think it should be.

=================================================================

Sorry, I see now there is an acknowledged Netty bug that causes my exact issue.

It sounds like the fix is to upgrade to Gatling 3.15.0, but I can’t, since my environment closes connections in ways that 3.14.9 tolerated but 3.15.0 does not. I don’t have the clout to force changes to the system under test.

==================================================================

The following shows an IllegalArgumentException I get from recent runs:

> j.l.IllegalArgumentException: The URI contain illegal characters: /api/v3/assistant-service/care-options?query=Melanoma&latitude=44.977753&longitude=-93.2650108&care-option-types=condition&care-option

The source of the exception is io.netty.handler.codec.http.HttpUtil; specifically, it is thrown from isEncodingSafeStartLineToken() in that class. Extensive logging and analysis found no illegal characters being passed into Gatling’s httpRequest.queryParamSeq(…), which is the boundary where the argument strings I pass in are handed over to Gatling/Netty. A stack trace is occasionally thrown, which is how I identified and then verified the above Netty class as the source of the exception. I will work to get a current stack trace, but none are available right now.
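To double-check my inputs, I scanned them with a rough approximation of an RFC 3986 character check. This is my own sketch for sanity-checking the strings I pass in, not Netty’s actual isEncodingSafeStartLineToken() logic:

```java
public class UriCheck {
    // Characters allowed in a URI per RFC 3986 (unreserved + reserved + '%').
    // A rough approximation of the kind of request-line validation Netty
    // performs, not Netty's actual implementation.
    private static final String ALLOWED =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
        + "-._~:/?#[]@!$&'()*+,;=%";

    // Returns the index of the first character outside the allowed set,
    // or -1 when every character is legal.
    public static int firstIllegalChar(String uri) {
        for (int i = 0; i < uri.length(); i++) {
            if (ALLOWED.indexOf(uri.charAt(i)) < 0) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String uri = "/api/v3/assistant-service/care-options?query=Melanoma"
            + "&latitude=44.977753&longitude=-93.2650108";
        System.out.println(firstIllegalChar(uri));       // prints -1: nothing illegal
        System.out.println(firstIllegalChar("/api?q=a b")); // prints 8: the space
    }
}
```

Running this over the failing URI above finds nothing illegal, which is why the exception surprises me: uppercase `M` is perfectly legal in a URI.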

Uppercase M and uppercase J in the parameter values seemed to be particularly problematic.
The issue went away when I forced all parameter values to lower case, but I’m not sure how much changing the case of these strings changes the backend processing we want to test.

Has anyone else seen this issue?

I can’t run the Gatling Gradle plugin 3.15.0 because its increased sensitivity to closed connections breaks my run in a different way. I don’t have the clout to force our apps to ensure connections aren’t closed.

It doesn’t seem right to have to send only lower-case query string values, so I’m looking for ideas for a root cause analysis.

A recent run shows issues not just with query parameters, but also with URI paths:

> j.l.IllegalArgumentException: The URI contain illegal characters: /api/v1/employers/Maaabb/login/context

I will see if changing to a lower-case “m” makes the exception go away. It certainly changes the path, so I expect to get a “not found”.

Hi,

j.l.IllegalArgumentException: The URI contain illegal characters:

This is a well-known regression in Netty that has since been fixed. Consider Gatling 3.14.9 as burnt.

I can’t run Gatling gradle plugin 3.15.0 because its increased sensitivity to closed connections breaks my run in a different way.

I don’t get it. Could you please elaborate and possibly provide a reproducer?
Upgrading Gatling is the proper way to get rid of the above issue.

When I run with

plugins {
    id "java"
    id "io.gatling.gradle" version "3.15.0.1"
}
I get a premature close error. If I go back to 3.14.9, requests run normally, except for requests that hit that Netty bug.

Here is the error as seen in the run output on screen.

========================================================================================================================

2026-04-09 13:18:17 GMT 180s elapsed

---- Requests ---------------------------------------------------------------------|---Total---|-----OK----|----KO----

> Global | 1 | 0 | 1

> member user / member POST /api/v4/members/login | 1 | 0 | 1

---- Errors -----------------------------------------------------------------------------------------------------------

> i.g.h.c.i.PrematureCloseException: Premature close 1 (100%)

---- member-gateway ---------------------------------------------------------------------------------------------------

      active:         1  / done:         0

========================================================================================================================

Here is the error from a Gatling HTTP trace log:

POST /api/v4/members/login HTTP/1.1
Content-Type: application/json
Accept: application/json
Bind-Session-Id: c119380d-d91c-49ee-9e38-38b1bbd142cd
host: member-test.dev-bind.com
content-length: 47, content=null}
08:18:17.733 [DEBUG] i.g.h.e.GatlingHttpListener$ - Request 'member POST /api/v4/members/login' failed for user 1
io.gatling.http.client.impl.PrematureCloseException: Premature close
08:18:17.734 [DEBUG] i.g.h.e.r.DefaultStatsProcessor - Request 'member POST /api/v4/members/login' failed for user 1: i.g.h.c.i.PrematureCloseException: Premature close
08:18:17.734 [TRACE] i.g.h.e.r.DefaultStatsProcessor -

I’m sorry I can’t provide an endpoint for you to hit. My test environment is all secured.

If there is something else I can provide please let me know.

I don’t see any change in our own code that could cause a behavior change there. The only possible explanation would be a change in Netty itself.
In this case, the only way to investigate is to compare TCP traffic captures, e.g. with Wireshark. Is this something you could share, possibly privately?

Note: you’re the first person to face this problem, while Gatling 3.15.0 is widely used.

I’ll try to get wireshark data.
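For the record, here is roughly how I plan to capture the traffic (a sketch: the host is the one from this thread, while the interface, file names, and filter are placeholders I’d adjust for my environment):

```shell
# Capture everything to/from the test host while the Gatling run executes
# (requires root; stop with Ctrl-C once the failing request has happened).
sudo tcpdump -i any -w gatling-3.15.0.pcap host member-test.dev-bind.com and port 443

# Dump the TLS handshake packets (Client Hello is handshake type 1) as text,
# so the 3.14.9 and 3.15.0.1 runs can be diffed side by side.
tshark -r gatling-3.15.0.pcap -Y "tls.handshake.type == 1" -V > client-hello-3.15.0.txt
```

I’d repeat the same capture against a 3.14.9 run and diff the two text files.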

I don’t know if the following provides any useful new information, but here is http debug logging with a stack trace that results from the premature close:

10:03:07.432 [DEBUG] i.g.h.c.p.ChannelPool - No channel in the pool for key ChannelPoolKey{clientId=1, remoteKey=RemoteKey{targetHostBaseUrl='https://member-test.dev-bind.com', proxyHost='null', proxyPort=0}}
10:03:07.497 [DEBUG] i.g.h.c.i.DefaultHttpClient - Opening new channel to remote=[member-test.dev-bind.com/3.209.199.19:443] from local=null
10:03:07.506 [DEBUG] i.g.h.c.i.DefaultHttpClient - Connected to remoteAddress=member-test.dev-bind.com/3.209.199.19:443 from localAddress=/10.94.157.174:54179
10:03:07.506 [DEBUG] i.g.h.c.i.DefaultHttpClient - Installing SslHandler for member-test.dev-bind.com:443
10:03:07.751 [DEBUG] i.g.h.e.GatlingHttpListener$ - Request 'member POST /api/v4/members/login' failed for user 1
java.nio.channels.ClosedChannelException
    at io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1185)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:251)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1424)
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:876)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$6.run(AbstractChannel.java:676)
    at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:148)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:141)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:535)
    at io.netty.channel.SingleThreadIoEventLoop.run(SingleThreadIoEventLoop.java:201)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:1195)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: io.netty.handler.ssl.StacklessSSLHandshakeException: Connection closed while SSL/TLS handshake was in progress
        at io.netty.handler.ssl.SslHandler.channelInactive(Unknown Source)
10:03:07.751 [DEBUG] i.g.h.e.r.DefaultStatsProcessor - Request 'member POST /api/v4/members/login' failed for user 1: j.n.c.ClosedChannelException
10:03:07.752 [TRACE] i.g.h.e.r.DefaultStatsProcessor -

This confirms that what happens really is a premature close: the socket is getting closed in the middle of the TLS handshake, probably because your server crashes.

In this case, it’s likely that the change that triggered the issue is the netty-tcnative-boringssl-static upgrade from 2.0.74.Final to 2.0.75.Final, i.e. an upgrade of the shipped BoringSSL binaries. As BoringSSL is Chromium’s TLS stack, I don’t think it is the culprit; a bug in the TLS layer of your server is more likely.

In the Wireshark logs, please check for any difference in the Client Hello packet.

Hi. I have two Wireshark “Client Hello” packets in text form: one from 3.14.9, where requests succeed (as long as I avoid the Netty bug), and one from 3.15.0.1, which fails with the premature close error.

Is there a way for me to get the two text files to you? I would rather not publish them here, even though I don’t think they reveal any secrets.

Copilot tells me there are significant differences, but I’d rather have your opinion. I would be happy if there is something I can change so the premature close does not happen.

Thanks

You can send by email to slandelle at gatling dot io.

But please provide the whole dumps, not just the Client Hello.

Note that curl and Swagger were able to successfully send the request and receive a reply.
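For completeness, this is the shape of the curl call that succeeds where 3.15.0.1 gets a premature close (host and path are from this thread; the body shown is a placeholder, not our real payload):

```shell
# -v prints the TLS handshake steps, so its Client Hello behaviour can be
# compared against what Gatling's BoringSSL-backed client sends.
curl -v -X POST 'https://member-test.dev-bind.com/api/v4/members/login' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{"placeholder": true}'
```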