Gatling collapsing on a low number of requests (with file uploads)

Hello.

I am benchmarking a backend service that specializes in file uploads.

I just started with Gatling and wrote a very simple upload scenario like this one:

import io.gatling.javaapi.core.*
import io.gatling.javaapi.core.CoreDsl.*
import io.gatling.javaapi.http.HttpDsl.*
import java.time.Duration

const val DEFAULT_BASE_URL: String = "http://localhost:8989"
const val DEFAULT_PAYLOAD: String = "10MB.payload"

const val DIRECT_UPLOAD = "/upload"

const val PUBLIC_TOKEN_OK: String = "validx"
const val PUBLIC_TOKEN_KO: String = "boom"


val baseUrl = System.getProperty("baseurl", DEFAULT_BASE_URL)
val payload = System.getProperty("payload", DEFAULT_PAYLOAD)

class BasicSimulation : Simulation() {
    val httpProtocol = http
        .baseUrl(baseUrl)
        .acceptEncodingHeader("gzip, deflate")
        .userAgentHeader("Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0 Gatling")

    val publicUploadRequest = http("upload")
        .post(DIRECT_UPLOAD)
        .basicAuth(PUBLIC_TOKEN_OK, "")
        .formUpload("file", payload)

    // named `scn` to avoid shadowing the `scenario` DSL function
    val scn = scenario("file upload")
        .exec(publicUploadRequest)

    init {
        setUp(
            scn.injectOpen(constantUsersPerSec(50.0).during(Duration.ofSeconds(10)))
            //scn.injectOpen(atOnceUsers(2))
        ).protocols(httpProtocol)
            .assertions(global().failedRequests().count().shouldBe(0L))
            .throttle(
                reachRps(20).during(1),
                holdFor(Duration.ofMinutes(1)),
                jumpToRps(50),
                holdFor(Duration.ofHours(2))
            )
    }
}

With a 10MB file it works fine, but with a 100MB file, for example, I get a lot of errors.
Gatling reports that my server closed the connection on every request (j.i.IOException: Premature close), while my server reports that the client closed the connection (Upload aborted by client: error="context canceled").

After checking with tcpdump to see who was actually closing the connection (sudo tcpdump -i lo0 -n '(tcp[13] & 1 != 0) or (tcp[13] & 4 != 0)', which captures FIN and RST packets), I see that the connections were in fact closed by Gatling (the TCP FIN and TCP RST come from Gatling's side).

I understand that benchmarking uploads is a bit of a niche case. Is there some tuning I have to do on Gatling or on the JVM to support it? Should I consider running the test on multiple instances even with values this small? (I am running OpenJDK 64-Bit Server VM Zulu18.32+13-CA (build 18.0.2.1+1, mixed mode, sharing) on a MacBook M1 Pro; I will try on a large VM to see if I have the same issue.)

I'm expecting that uploading a 100MB file will take longer, and that adding 50 new users per second will add up to a lot of concurrent users.

So, I'm expecting a large number of open files as well.
Did you tweak your OS?
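Beyond OS limits, one way to keep the number of concurrent users (and thus open files and sockets) bounded is a closed injection model instead of the open one. A sketch against the scenario above (assuming the same scenario and protocol definitions; the numbers are illustrative, not a recommendation):

```kotlin
// Closed model: Gatling caps the number of concurrent users, so a slow
// 100MB upload delays the next virtual user instead of letting users pile up.
setUp(
    scn.injectClosed(
        constantConcurrentUsers(10).during(Duration.ofMinutes(1))
    )
).protocols(httpProtocol)
```

With the open model, users keep arriving at 50/s regardless of how long each upload takes, so slow uploads accumulate without bound.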

What’s your Gatling version?

My version: id 'io.gatling.gradle' version '3.8.4'

So far it's running locally on the MacBook Pro M1; I will try to set up a larger load-testing infrastructure (I did not expect to be blocked this early when testing locally).

I did not expect to be blocked that early when testing locally

my 2 cents: you're saturating your bandwidth
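A quick back-of-the-envelope calculation supports that (using the figures from the simulation above; the injection rate and payload size are taken from the post, the rest is arithmetic):

```kotlin
fun main() {
    val newUsersPerSec = 50   // constantUsersPerSec(50.0)
    val payloadMB = 100       // the 100MB payload that fails

    // If each user uploads its whole file, the steady-state demand is:
    val mbPerSec = newUsersPerSec * payloadMB
    val gbitPerSec = mbPerSec * 8 / 1000.0

    println("steady-state demand: $mbPerSec MB/s = $gbitPerSec Gbit/s")
}
```

That is roughly 40 Gbit/s, far beyond any typical NIC and in the same ballpark as what a loopback interface can sustain, so uploads back up and connections eventually get torn down.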

j.i.IOException: Premature close

I suspect a race condition, where the request times out but that’s another event that’s being reported instead. Would it be possible for you to provide a reproducer as described here, please?
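If it is indeed a timeout being misreported, one thing worth checking: Gatling's per-request timeout defaults to 60 seconds, and a 100 MB upload through a saturated or throttled link can easily take longer than that. Raising it in gatling.conf would be a way to test that hypothesis (the value below is illustrative):

```hocon
# gatling.conf (excerpt) -- raise the per-request timeout
gatling {
  http {
    requestTimeout = 300000   # in ms; default is 60000 (60 s)
  }
}
```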

Everything was running locally, so if that's the cause, it would be a limitation of the loopback interface.

Honestly, can’t say without being able to reproduce.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.