When running my tests (a bunch of users hammering a server without pause), after a while (usually about 150 seconds), I get this error: “SSLEngine closed already”. The errors follow a pattern:
- none before a precise moment,
- a bunch at a given moment,
- then some every 5 seconds.
Until now, I thought my server was having a problem, but I’m not so sure anymore.
If I look at the Netty code (https://searchcode.com/codesearch/view/10525295/), it looks more like the server decides to close the connection, and the client should then just reopen one. I’m guessing a bit here.
So it shouldn’t be a real KO. It’s some kind of normal behavior.
Strangely, when using Apache Benchmark, I also have some similar errors on stdout but then Apache Benchmark doesn’t consider it to be a failure in the final report.
Am I making any sense? Can someone explain the error?
But in fact, I’m pretty sure I won’t be able to reproduce it outside of my environment. If it can be of any use: the LB manages keep-alive correctly, does SSL session caching, but does not support SSL tickets.
So question is: “How is the Netty piece of code below handled by Gatling?”
Right now, I feel that if the connection is closed, the client should recreate it without complaining, because it’s normal to get a closed connection after a while (I kinda think/guess). WDYT?
SSLEngineResult result = wrap(engine, buf, out);
if (!buf.isReadable()) {
    buf.release();
    promise = (ChannelPromise) pending.recycleAndGet();
    pendingUnencryptedWrites.remove();
} else {
    promise = null;
}

if (result.getStatus() == Status.CLOSED) {
    // SSLEngine has been closed already.
    // Any further write attempts should be denied.
    for (;;) {
        PendingWrite w = pendingUnencryptedWrites.poll();
        if (w == null) {
            break;
        }
        w.failAndRecycle(SSLENGINE_CLOSED);
    }
    return;
}
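For what it’s worth, the behavior I’d expect from the client can be sketched like this, with plain JDK types standing in for the real stack. `ConnectionFactory`, `Connection` and `sendWithReconnect` are hypothetical names, not Gatling’s or Netty’s actual API — just an illustration of "open a fresh connection and replay once instead of reporting a KO":

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.net.ssl.SSLException;

// Hypothetical sketch of "reconnect instead of complaining": if a request
// fails because the TLS engine was already closed, open a fresh connection
// and replay the request once. Names are illustrative, not Gatling internals.
public class ReconnectSketch {

    interface Connection {
        String send(String request) throws IOException;
    }

    interface ConnectionFactory {
        Connection open() throws IOException;
    }

    static String sendWithReconnect(ConnectionFactory factory, String request)
            throws IOException {
        Connection conn = factory.open();
        try {
            return conn.send(request);
        } catch (SSLException engineClosed) {
            // The server (or an appliance in between) closed the TLS session:
            // not a real KO, just open a new connection and retry once.
            return factory.open().send(request);
        }
    }

    public static void main(String[] args) throws IOException {
        AtomicInteger opened = new AtomicInteger();
        // Fake factory: the first connection behaves like a closed engine,
        // the second one works.
        ConnectionFactory factory = () -> {
            int n = opened.incrementAndGet();
            return request -> {
                if (n == 1) {
                    throw new SSLException("SSLEngine closed already");
                }
                return "OK:" + request;
            };
        };
        System.out.println(sendWithReconnect(factory, "GET /")
                + " (connections opened: " + opened.get() + ")");
    }
}
```

In this sketch the first send fails like a closed engine, a second connection is opened transparently, and the caller only ever sees the successful response.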
There’s a retry mechanism implemented for this kind of failure, but it might not work in this case.
I’d need to be able to reproduce, and will only be able to investigate when I’m back to the office in 2 weeks.
Sadly no. I never managed to get a fully reproducible use case after the fact (not my fault, I wasn’t there anymore). The root cause seemed to be the IDS that was closing connections, but they should then have been recreated by Gatling.
I am also getting the same error and raised it on the Gatling forum, but got no reply, so I thought to check with you as well: did you manage to resolve it or not? If yes, then how? It looks like an issue on Gatling’s side?
No. As I said, something was closing the connection. But it was allowed to. We reconfigured the appliance. But Gatling wasn’t recreating the connection for some reason.
Again, there isn’t much we can do without being able to reproduce.
The only time we saw this error, the frequency was 1/1,000,000 in a customer environment, so not something we could debug.
Yes Stephane, exactly the same thing is happening in my environment. Out of 34,000 transactions we got 1 SSLException error, but because the script is running in continuous mode, other transactions are failing too.
Is there any way in Gatling to stop the current iteration and start a new one if a failure occurs?
If I recall correctly, a way to reproduce would be to establish an SSL connection, then have the server cut the connection because of an SSL session timeout, and check whether the client reconnects every time.
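A minimal stand-in for that scenario, using plain TCP instead of TLS (so no certificates needed): the server drops every connection straight away, the way an IDS or an SSL session timeout would, and the client checks that it can simply reconnect each time instead of counting the cut as a failure. `DropAndReconnect` and `runScenario` are made-up names for illustration.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Stand-in for the repro: a server that cuts every connection right away
// (as an IDS or an SSL session timeout would), and a client that treats
// the cut as "reconnect", not as a failure. Plain TCP instead of TLS.
public class DropAndReconnect {

    static int runScenario(int attempts) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread dropper = new Thread(() -> {
            try {
                for (int i = 0; i < attempts; i++) {
                    server.accept().close(); // cut the connection immediately
                }
            } catch (IOException ignored) {
            }
        });
        dropper.start();

        int reconnects = 0;
        for (int i = 0; i < attempts; i++) {
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                // read() returns -1 once the server has closed its side:
                // the client simply opens a new connection on the next loop.
                if (client.getInputStream().read() == -1) {
                    reconnects++;
                }
            }
        }
        dropper.join();
        server.close();
        return reconnects;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("reconnected " + runScenario(3) + " times");
    }
}
```

A real repro would swap `ServerSocket`/`Socket` for their SSL counterparts with a short session timeout on the server side, but the reconnect logic under test is the same.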