Gatling tests - want to limit the number of requests instead of time (max duration)

Hello,

I’m looking to have my test scenario limited by the number of requests instead of by time.

Currently, I have - setUp(myCustomerScenario.inject(atOnceUsers(10))).maxDuration(3 minutes)

I’m looking for something like this - setUp(myCustomerScenario.inject(atOnceUsers(10))).maxRequests(1000).

How do I go about achieving something like that?

Thank you.

Regards,
Vivek.

I think the throttle function would fit your use case.

https://gatling.io/docs/3.0/general/simulation_setup/#throttling
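
For reference, a throttle in the Gatling 3 Scala DSL looks roughly like this (the rates and durations here are placeholders, not recommendations):

import scala.concurrent.duration._
import io.gatling.core.Predef._

setUp(myCustomerScenario.inject(atOnceUsers(10)))
  .throttle(
    reachRps(100) in (10 seconds), // ramp up to 100 requests/sec over 10 s
    holdFor(1 minute)              // then hold that rate
  )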

regards,
Aju

Hello Aju,

I see that the rps needs to be defined. Basically, what I want to achieve is
to create requests as fast as possible until I hit, say, a million
requests, while pushing the client to its max.

So basically, I want to send "n" requests to my server as fast as
possible with, say, 5 clients. How do I achieve that without knowing the
high end of the rps?

Thank you.

Hey Vivek,

I’m slightly confused here. You’ve mentioned that you want to “push the client to the max”, but aren’t you using Gatling to stress the server and not the client?

So basically, I want to send “n” requests to my server as fast as possible with, say, 5 clients.

What exactly do you mean by 5 clients? If you’re running the test from 1 injector machine, there is only 1 client simulating the load.
“atOnceUsers(10)” does not mean there are “10 clients”. It just means there will be 10 parallel executions of your scenario, all of which will start at the same time from within the same client.

If you want to keep increasing the load on the server till you reach a hard limit on performance, you’d be better off using one of the “ramp” functions for injection along with assertions.

rampUsersPerSec(n1) to n2 during (10 minutes) - Open Model
rampConcurrentUsers(n1) to (n2) during (10 seconds) - Closed Model

Here n1 could be 1 and n2 could be set to some arbitrarily high value.
You could use assertions along with these ramp functions, so you can gracefully stop the test when it breaches certain performance thresholds (latency, error rate, etc.).
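
To make that concrete, here is a rough sketch of an open-model ramp with assertions in the Gatling 3 Scala DSL (the base URL, endpoint and thresholds are placeholders):

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class RampStressSimulation extends Simulation {

  val httpProtocol = http.baseUrl("http://my-server:8080") // placeholder target

  val scn = scenario("Ramp stress")
    .exec(http("request").get("/endpoint")) // placeholder request

  setUp(
    // open model: arrival rate climbs from 1 to 1000 new users per second
    scn.inject(rampUsersPerSec(1) to 1000 during (10 minutes))
  ).protocols(httpProtocol)
    .assertions(
      global.responseTime.percentile3.lt(500), // fail the run if p95 latency reaches 500 ms
      global.failedRequests.percent.lt(1)      // or if the error rate reaches 1%
    )
}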

regards,
Aju

Hello Aju,

Thank you once again for the detailed explanation.

Sorry for the confusion. The scenario here is: I have 3 clients (users, in Gatling terms) and a server. The only way to load the server is by having the clients issue requests as fast as they can. Maybe the next question is intertwined with this one? Maybe I’m unclear on the definition of what a user is?

atOnceUsers(10) - I see you’ve mentioned that I don’t get 10 users, just 10 parallel executions. This may be deviating from the question, but can you please shed some light on 10 users vs 10 parallel executions? How are they different? Doesn’t 10 users mean 10 instances (each independent of the other) of my client?

What I meant is that by specifying atOnceUsers(10), what you get is 10 parallel scenarios all being fired from within the same “client” machine (the host injecting the load). From the server’s perspective, 10 parallel connections are opened from the same “client” machine, not from 10 different clients.
Both terms could probably be used interchangeably in this context.

But the point is that if you’re trying to stress the server, what you should care about is the arrival rate of requests at the server, not how many “clients” are injecting load (considering it’s an open system and not a closed one).
So perhaps you should define your scenario just in terms of the number of requests on the server instead of specifying “x requests from y clients”. Is there a specific reason that you’ve mentioned a fixed number of clients in addition to requests?

Hello Aju,

Thank you for your response.

The logic I’m following is that more clients create more requests. I’m testing a ZooKeeper cluster where you have multiple clients hitting multiple servers (an n-to-n relation). Instead of having one client perform the same scenario (using multiple threads), I want each client to enact one scenario as fast as possible (or as many times as possible). Please correct me if I’m wrong.

Here is a more detailed explanation (and some code) of what I’m trying to achieve - https://groups.google.com/forum/#!topic/gatling/tODwr7pwpvw

I asked that as a separate question because it is essentially different from this one (though intertwined with it), but it will give you a peek into what I’m looking to achieve.

“Instead of having one client perform the same scenario (using multiple threads), I want each client to enact one scenario as fast as possible (or as many times as possible)”

https://gatling.io/docs/3.0/general/concepts/#virtual-user

In Gatling, each user is a “message” and not a separate thread.

From what I understand of your code, you want each user to execute a unique action. Once each user is allocated a specific action, it should execute the same action repeatedly, forever. Is my understanding correct?

I still get the feeling you’re confused about what constitutes a user or a client.
There are multiple injection models to choose from apart from atOnceUsers(), and feeders can be used to assign unique payloads to your requests.

Hello Aju,

Thanks a lot.

Replies are inline.

“Instead of having one client perform the same scenario (using multiple threads), I want each client to enact one scenario as fast as possible (or as many times as possible)”

https://gatling.io/docs/3.0/general/concepts/#virtual-user

In Gatling, each user is a “message” and not a separate thread.

I see now. Could you expand on (or provide a link about) the difference between a message and a thread?

From what I understand of your code, you want each user to execute a unique action. Once each user is allocated a specific action, it should execute the same action repeatedly, forever. Is my understanding correct?

Exactly. My actions are very similar, with minute differences, but essentially you understood it absolutely right.

I still get the feeling you’re confused about what constitutes a user or a client.
There are multiple injection models to choose from apart from atOnceUsers(), and feeders can be used to assign unique payloads to your requests.

I’m not quite sure about this, actually. For now, atOnceUsers() serves my purpose, but maybe it’ll be clearer once I understand Q1 and Q2 above.

Any updates?

Hey,

My apologies, I lost track of this thread.

A lot of the other traditional load testing tools (JMeter, LoadRunner) are implemented in a way where each virtual user runs as a separate thread, with each thread being allocated a certain amount of memory; thus, the more virtual users being simulated, the more memory is required from the load injector machine.

Gatling works in a different way: it leverages the Akka framework to simulate virtual users. Within the Akka framework, “actors” and “messages” are used to achieve concurrency instead of multiple threads. So in essence you have a small number of threads doing the work, achieving concurrency through actors and messages.

The long and short of it is that Gatling is much less “memory hungry”, and a much higher load can be simulated from a given injector machine than would be possible with other tools.
But from the point of view of designing simulations using the Gatling DSL, these implementation details aren’t relevant and are abstracted away from us, so we don’t really need to worry about them at all.
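
If it helps, here is a minimal Akka sketch (plain Akka, not Gatling’s actual internals) illustrating the idea: many lightweight actors share a small thread pool and communicate only via asynchronous messages, so a “virtual user” is cheap to create.

import akka.actor.{Actor, ActorSystem, Props}

class VirtualUser extends Actor {
  def receive = {
    // each "user" reacts to messages; thousands of these actors
    // can be scheduled onto a handful of threads
    case action: String => println(s"${self.path.name} performing $action")
  }
}

object Demo extends App {
  val system = ActorSystem("demo")
  // creating 10,000 "users" does not spawn 10,000 threads
  val users = (1 to 10000).map(i => system.actorOf(Props[VirtualUser], s"user-$i"))
  users.foreach(_ ! "GET /endpoint")
  Thread.sleep(2000) // give the actors time to drain their mailboxes
  system.terminate()
}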

Now, with regards to your scenario: if your aim is to stress out the server, meaning that the requests hitting the server should keep increasing till the server’s capacity is reached, “atOnceUsers(3)” within a “forever” block won’t help you achieve that. What will happen is that 3 “clients” will perform the actions you define for them in a loop, forever; at no point will the number of requests at the server in a given instant exceed 3 (since a client starts the next loop only after the previous one is over).
You would probably need to use ramp functions that gradually increase the load.
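
For illustration, the pattern being described looks roughly like this (a sketch; the endpoint, base URL and duration are placeholders):

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class LoopingSimulation extends Simulation {

  val scn = scenario("Looping user")
    .forever(
      exec(http("action").get("/op")) // each user repeats its action back to back
    )

  setUp(scn.inject(atOnceUsers(3))) // concurrency never exceeds 3 users
    .protocols(http.baseUrl("http://my-server:8080"))
    .maxDuration(10 minutes)        // forever loops need an external stop condition
}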

As for dynamically assigning requests to clients, how about using feeders themselves? You could use a CSV feeder, with all the requests placed in a CSV file. The default behavior of the feeder is to act as a queue, so a value once popped out won’t be used again.
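
A rough sketch of that idea, assuming a hypothetical requests.csv with a header row “path” and one request path per record:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// default strategy is queue: each record is consumed at most once
val requestFeeder = csv("requests.csv")

val scn = scenario("Unique action per user")
  .feed(requestFeeder)                   // each user pops the next record once
  .forever(
    exec(http("request").get("${path}")) // then repeats its own action forever
  )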

regards,
Aju