Not able to get more than 240 RPS

Hi All,

I am trying to get 300 RPS but am not able to get more than 240 RPS, whereas with JMeter I get 325 RPS on the same system configuration. Could you please suggest how I could achieve the same with Gatling?

setUp(
  constantUsersLoadTestScenario.inject(
    nothingFor(5 seconds),
    constantUsersPerSec(300) during (300 seconds)))
  .assertions(global.successfulRequests.percent.gt(ZERO))
  .protocols(HTTP_PROTOCOL)

Thanks & Regards,
Ratnesh.

Hi,

At such a small load, the most likely explanation is that you’re doing different things with the 2 tools.

Impossible to tell for sure if you don’t share a full sample, as required in this group’s terms:

Provide a Short, Self Contained, Correct (Compilable), Example (see http://sscce.org/)

My best guess is that with JMeter you have a fixed number of virtual users performing a loop, hence opening connections only once per virtual user, while with Gatling you're using an open workload model where new virtual users are constantly spawned, each opening its own new connections.

Again, that's just a wild guess until you provide a full sample that anyone else can run on their side.
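To illustrate the difference between the two workload models, here is a sketch using Gatling's standard injection DSL (`scn` stands in for any scenario; the numbers are placeholders, not a recommendation):

```scala
// Closed model: a fixed pool of users looping, like a JMeter thread group.
// Each user opens its connections once and reuses them across iterations.
setUp(
  scn.inject(constantConcurrentUsers(50) during (300 seconds))
)

// Open model: new users arrive at a fixed rate, regardless of completions.
// Each new user opens fresh connections unless shareConnections is enabled.
setUp(
  scn.inject(constantUsersPerSec(300) during (300 seconds))
)
```

With the open model, connection setup cost is paid for every arriving user, which can explain a throughput gap against a looping JMeter thread group.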

Hello,
With the help of throttling, you can adjust the RPS as needed:
https://gatling.io/docs/current/general/simulation_setup/#throttling

setUp(
  constantUsersPerSceScenario.inject(
    nothingFor(5 seconds),
    constantUsersPerSec(300) during (300 seconds)))
  .throttle(reachRps(325) in (10 seconds), holdFor(290 seconds))
  .assertions(global.successfulRequests.percent.gt(ZERO))
  .protocols(HTTP_PROTOCOL)

Also make sure you have enough users to achieve this.

Thanks
Sujin Sam

Nope.
Throttling can only set an upper limit on the throughput you would generate otherwise. It can’t increase throughput.

In addition to the above simulation script, I have the scenario script below, where I am passing a security token and a 256 KB JSON payload.

My system configuration: dual-core CPU, 12 GB RAM.
def tracePost(scenarioRequest: String) = {
  exec(http(scenarioRequest)
    .post(TRACE_ENDPOINT)
    .headers(HttpProtocol.CUSTOM_HEADERS)
    .header("Authorization", s"Bearer ${tokenId}")
    .body(ElFileBody("trace-user-files/data/cls-static-payload-256kb.json")).asJson
    // .body(StringBody(s"""${clsPayload}""")).asJson
    .processRequestBody(gzipBody)
    .header(HttpHeaderNames.ContentEncoding, "gzip")
    .check(status.in(STATUS_CODE_204)))
    .pause(reqPauseTimeInSec)
}

Thanks,
Ratnesh.

Please let me know if more information is required.

Honestly, it's impossible to say without you providing an actionable sample as described above (http://sscce.org/).
Please consider either providing a simulation that hits a public application, or building a sample app (Spring Boot, nodejs, etc…) that provides the same feature (upload w/ gzip).

I'm not able to provide you with the complete script as the resource is not public, but the script below should give you an idea of what I am trying to do.
Using it, I am trying to achieve 300 RPS, but the maximum I can reach is 240 RPS, and I am looking for ways to improve that.

I am just trying to understand what could be limiting the RPS: either the system configuration I am using to run the load test is not sufficient, or I need to change something on the Gatling side, such as the Gatling configuration, to increase the RPS.

val HTTP_PROTOCOL = http
  .inferHtmlResources(BlackList(), WhiteList())
  .baseUrl("https://100.92.121.196")
  .userAgentHeader("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36")
  .disableWarmUp
  .disableFollowRedirect
  .shareConnections
  .disableCaching

val constantUsersPerSceScenario =
  scenario("load test scenario").repeat(1) {
    exec(http("Load Test scenario")
      .post("/trace")
      // .header("Authorization", s"Bearer ${tokenId}")
      .body(StringBody(s"""{
        "spans": [
          {
            "msg": "Root span started",
            "sentiment": "like",
            "trace_id": "50e44c6c59c948bbb7b13199827ac7ea",
            "component": "abc",
            "level": "trace",
            "span_id": "0000017297b85dfc",
            "session_id": "0000017297b85dfa",
            "message_type": "span/start/root/browser",
            "message_id": "loadtestmsg123",
            "error": "false",
            "ts": "1591683341819"
          },
          {
            "msg": "Root span finished",
            "sentiment": "like",
            "trace_id": "b247021853e948ebbe2fd0c45bda714c",
            "level": "trace",
            "span_id": "1234",
            "end_time": "1591683343351",
            "session_id": "6789",
            "message_type": "span/end/root/browser",
            "message_id": "loadtestmsg123",
            "error": "false",
            "duration": 2,
            "screen.height": 914,
            "component": "abc",
            "screen.width": 1920,
            "screen.orientationType": "landscape-primary",
            "ts": "1591683342351"
          }
        ]
      }""".stripMargin)).asJson
      .processRequestBody(gzipBody)
      .header(HttpHeaderNames.ContentEncoding, "gzip")
      .check(status.in(200)))
  }

setUp(
  constantUsersPerSceScenario.inject(
    nothingFor(5 seconds),
    constantUsersPerSec(300) during (300 seconds)))
  .assertions(global.successfulRequests.percent.gt(0))
  .protocols(HTTP_PROTOCOL)

Thanks,
Ratnesh.

I'm not able to provide you with the complete script as the resource is not public, but the script below should give you an idea of what I am trying to do.

At some point, giving a rough idea is not enough. In order to properly investigate, we would need to be able to actually run a sample, hence the second option I suggested: “build a sample app (Spring Boot, nodejs, etc) that provides the same feature (upload w/ gzip)”.

Short follow up.

I don’t know why your Gatling test looks slower than your JMeter one. I really would need a reproducer I can run on my side.

What I can say is that, in your test, whatever the technology/tool, performance is largely dominated by the on-the-fly gzip compression. Gzip compression is simply super expensive, even more so than decompression.
For example, in my own test, when I switched from on-the-fly compression to preflight compression (sending payloads that have already been compressed before the test), throughput jumped from 500 rps to 20,000 rps.
So, what's for sure is that your current test is broken and the results you get are meaningless. Both with JMeter and Gatling, your bottleneck is your load injector, not your application. You most likely run with 100% CPU usage on your poor dual cores.
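For reference, pre-compressing once is straightforward on the JVM. The sketch below is plain Scala with no Gatling dependency; the payload string is a placeholder. The resulting bytes could then be reused for every request, for example via Gatling's `ByteArrayBody`, or via `RawFileBody` pointing at a file you gzipped beforehand, while keeping the `Content-Encoding: gzip` header:

```scala
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

object PreflightGzip {
  // Compress the payload once, before the run, instead of per request.
  def gzip(bytes: Array[Byte]): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(baos)
    try gz.write(bytes) finally gz.close()
    baos.toByteArray
  }

  def main(args: Array[String]): Unit = {
    val payload = """{"spans": []}""".getBytes("UTF-8") // placeholder payload
    val compressed = gzip(payload) // reuse these bytes for every request
    println(s"${payload.length} bytes -> ${compressed.length} bytes gzipped")
  }
}
```

This moves the CPU cost of compression out of the measurement window entirely, so the injector's cores are spent sending requests rather than deflating payloads.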

Hi Stéphane,

Thank you for your useful suggestion. After sending a pre-zipped payload, I now see that the RPS has doubled, and Gatling is performing far better than the JMeter load test.

Thanks,
Ratnesh.

Nice. If you can’t go beyond that, it means you’re now hitting the limits of your system under load, which is the desired result.

Correct, now I can see that an actual load test is happening: when I increase the RPS, the performance of my services and resources degrades, and from that I can find the breaking point of my application.

I am grateful for your support.

Thanks,
Ratnesh.