Strange jump in the graphs

Gatling version: 3.10.5
Gatling flavor: java kotlin scala javascript typescript
Gatling build tool: maven gradle sbt bundle npm

Hello everyone, I have some strange jumps in the graphs and cannot understand why.
I have a ‘main’ scenario with 4 ‘sub-scenarios’ added in the setUp method with the open injection model.
In all of them I also use the rampUsersPerSec and constantUsersPerSec methods.
So I want a constant number of users arriving in the system, with some of them arriving during a certain ramp-up time.
So here is the example:
RAMPUPTIMESECONDS = 800 (roughly 13 minutes; used for rampUsersPerSec)
EXECUTE_FOR_MINUTES = 5 (used for constantUsersPerSec)

NUMBER_OF_CLIENTS_PER_MIN is set to 10
Percentages are set to different values per scenario (e.g. 30, 50, 20).

// Calculates each scenario's per-second arrival rate from the total clients per minute
public static double getPercentageOfClientsPerSec(double clientsCountPerMin, double percentage) {
    double percentageOfClientsPerSec = clientsCountPerMin * percentage / 100 / 60;
    System.out.println("Percentage of clients per second: " + percentageOfClientsPerSec);
    return percentageOfClientsPerSec;
}
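For the values above (10 clients/min split 30/50/20 across scenarios), the helper produces the per-second rates below. A quick sanity check that the split still sums to the total arrival rate; the class and constant names here are illustrative, not from the attached simulation:

```java
public class RateCheck {
    static final double CLIENTS_PER_MIN = 10.0;

    // Same arithmetic as getPercentageOfClientsPerSec, without the logging
    static double ratePerSec(double clientsPerMin, double percentage) {
        return clientsPerMin * percentage / 100 / 60;
    }

    public static void main(String[] args) {
        double a = ratePerSec(CLIENTS_PER_MIN, 30); // 0.05 users/s
        double b = ratePerSec(CLIENTS_PER_MIN, 50); // ~0.0833 users/s
        double c = ratePerSec(CLIENTS_PER_MIN, 20); // ~0.0333 users/s
        // The three open injections together still arrive at 10 users/min (~0.1667 users/s)
        System.out.printf("%.4f %.4f %.4f -> %.4f users/s%n", a, b, c, a + b + c);
    }
}
```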

I will attach a few graphs from different runs. Also attached is the simulation.

Any idea why I have those big jumps?

This started happening a few weeks ago.



Update:
I want to have it like this (or similar):

@slandelle any thoughts on this? PS: Updated to the latest version

I suspect you don’t really have an issue.
What you have is active users, not concurrent users, cf Open-Source
What you’re seeing is probably merely user lifetimes spanning a 1 s bucket boundary or not.
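A toy illustration of that bucket effect (my own sketch, not Gatling's actual reporting code): a user alive on both sides of a 1-second boundary is counted as active in both buckets, so the per-bucket "active users" series can momentarily exceed what the arrival rate alone would suggest.

```java
import java.util.List;

public class ActiveUsersBuckets {
    // A user is "active" in a bucket if its [start, end) lifetime overlaps the bucket
    static int activeInBucket(List<double[]> users, double bucketStart, double bucketEnd) {
        int count = 0;
        for (double[] u : users) {
            if (u[0] < bucketEnd && u[1] > bucketStart) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Three users over two seconds; the second one's lifetime spans the 1 s boundary
        List<double[]> users = List.of(
            new double[]{0.1, 0.6},  // lives entirely in bucket [0, 1)
            new double[]{0.8, 1.3},  // spans the boundary -> counted in both buckets
            new double[]{1.4, 1.9}   // lives entirely in bucket [1, 2)
        );
        // 3 users total, but the bucketed series reports 2 and 2: a "jump" with no extra arrivals
        System.out.println(activeInBucket(users, 0, 1)); // 2
        System.out.println(activeInBucket(users, 1, 2)); // 2
    }
}
```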

If you want more help, you have to provide a reproducer as requested.

If that is the case, why don’t I have it more often in the graph, but only once?

The strange thing is that the jump always happens at the end of the ramp-up time.
Providing two graphs from two different runs.


Again:

If you want more help, you have to provide a reproducer as requested.

It is a bit hard to provide everything since there is a lot of code, logic and dependencies.
I wanted to hear some ideas and thoughts.

Did you check your response time during that window?
If the response time is long, some users may stay alive longer than expected.
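That idea can be put in numbers with Little's law (a back-of-the-envelope sketch, nothing Gatling-specific): in an open model the arrival rate is fixed, so average concurrency ≈ arrival rate × average user lifetime. If response times spike at the end of the ramp-up, lifetimes stretch and users pile up even though arrivals never changed:

```java
public class LittleLaw {
    // Little's law: average concurrent users = arrival rate (users/s) * average lifetime (s)
    static double concurrentUsers(double arrivalRatePerSec, double avgLifetimeSec) {
        return arrivalRatePerSec * avgLifetimeSec;
    }

    public static void main(String[] args) {
        double rate = 10.0 / 60.0; // 10 users/min, as in the simulation above
        System.out.println(concurrentUsers(rate, 3));  // fast responses: ~0.5 users in flight
        System.out.println(concurrentUsers(rate, 30)); // slow responses: ~5 users in flight
    }
}
```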

Good idea, will check it now.
Thanks!

Once you figure it out, please share so that people know what to check if they stumble on the same behavior. :slight_smile:
I’m also curious about the root cause of this.

Of course I will share.
I think I’m close to understanding what’s happening.

Btw, thank you for your idea, but the problem is not there. :confused: