Misunderstood Active User Execution

Hello,

I am a beginner in load/performance testing.
I've had the opportunity to try several load testing tools (Locust, JMeter, Gatling), and I find Gatling the most understandable and easiest to use.
Nevertheless, I have a question: compared to the other tools, and with the same scenario in each, the number of requests Gatling sends is 2 or 3 times higher!

Example of results:
→ Locust: 20 000 requests
→ JMeter: 21 000 requests
→ Gatling: 60 000 requests!
My question is: how are virtual users executed in Gatling?

Thanks for your answer!
AG

Fine, why? :smiley:

Seriously, we use an Actor model for the scenario itself, and virtual users are only small pieces of data that flow through this Actor model.

Note that the values you gave don't prove anything on their own. They lack units (per second? per scenario?), the details needed to reproduce them (machine spec? network?), the injection profile, etc.

Cheers!


Thank you for your answer :slight_smile:

You are right, I did not give enough details…
Here is the information about my scenario:
→ number of users: 50 (injectOpen(atOnceUsers(50)))
→ imposed duration: 60 s (during(60).on(…))
→ target: a relatively heavy page including all its JavaScript, CSS, images, etc. About 50 GET requests in total.
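For context, this is roughly what my simulation looks like in the Gatling Java DSL (a sketch only; the base URL and request names below are placeholders, not my real ones):

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

public class HeavyPageSimulation extends Simulation {

  // placeholder base URL
  HttpProtocolBuilder httpProtocol = http.baseUrl("http://my-target.example");

  // each virtual user loops on the heavy page for 60 seconds
  ScenarioBuilder scn = scenario("Heavy page").during(60).on(
      exec(http("home").get("/"))
      // ... plus the ~50 GETs for the JavaScript, CSS and image resources
  );

  {
    setUp(scn.injectOpen(atOnceUsers(50))).protocols(httpProtocol);
  }
}
```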

Result information (Gatling):
→ 47 530 requests executed in total
→ average response time: 160 ms
→ 766 req/s on average

Locust comparison (for example):
→ 22 759 requests executed in total
→ average response time: 126 ms
→ 378 req/s on average

How does this Actor model explain the higher number of requests with Gatling?

Thanks in advance
AG

For such a small number of users (50), the user model shouldn't make any difference.

I don't know how the other tools work internally, so I can't say for sure where the differences come from.

Our HTTP client is Netty, which uses non-blocking IO.
There is no maximum number of simultaneous connections in Gatling (although your OS may impose one).
There is no default pause between action steps (perhaps our competitors add one by default?).
Each virtual user uses its own connection pool (no connections are shared between users, just as independent users wouldn't share them in real life).

Perhaps our competitors use one native thread per virtual user (and context switching has a cost), or they do some default processing of the responses (apart from the status code, we don't check anything unless you add checks to your scenario).
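If you want to test those hypotheses, both knobs are easy to turn on the Gatling side. A quick sketch in the Java DSL (imports omitted; the pause duration and base URL are arbitrary placeholders):

```java
// add an explicit pause between steps, like tools that insert think time by default
ScenarioBuilder withPauses = scenario("Heavy page with pauses").during(60).on(
    exec(http("home").get("/"))
        .pause(1) // 1 second of think time, an arbitrary value for the comparison
);

// or make all virtual users share a single connection pool
HttpProtocolBuilder sharedPool = http
    .baseUrl("http://my-target.example")
    .shareConnections();
```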

Does that help?

Cheers!

Maybe your scenario is not the same in the 3 tools.
Gatling has caching enabled by default, so maybe all the responses except the first 50 are served from the cache or come back as 304 Not Modified. You can check whether disableCaching changes anything, or run in debug mode with just 1 user.
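For instance, disabling the cache is just one extra call on the protocol (a sketch, the base URL is a placeholder):

```java
HttpProtocolBuilder httpProtocol = http
    .baseUrl("http://my-target.example")
    .disableCaching(); // every loop iteration re-downloads resources instead of hitting the cache
```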

Hello,

Thank you for your answer.

Honestly, I still don’t get it.
What bothers me is that this difference shows up against every other load testing tool: with JMeter, Locust and ApacheBench I get very similar, very close results, but with Gatling it's quite different. I still prefer Gatling, but I want to understand the why behind the how, not just interpret results that could easily be misread…

I can’t understand how the Actor model works with Akka: how does it make Gatling perform better?

Thanks for your attention to my questions.

AG

I tried but I got the same results… :confused:

As @slandelle wrote:

SSL caching can be a lever too, because caching may be a huge factor here since you loop on the same requests for a while.

Instead of having a lot of native threads (which are quite heavy, and switching between them can be costly), there are only a few native threads on which the Actors run. This makes better use of the CPU cores (see the current push for lightweight/green threads in the Java world, for instance).

Using non-blocking IO (Netty) is another performance gain.
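As a toy illustration of the idea (this is not Gatling's actual code, just the general pattern): instead of one OS thread per virtual user, many small user tasks are multiplexed onto a handful of threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadModelDemo {
  public static void main(String[] args) {
    // thread-per-user model: 10 000 users would mean 10 000 heavy OS threads
    // for (int i = 0; i < 10_000; i++) { new Thread(...).start(); }

    // actor-like model: 10 000 users multiplexed onto a few worker threads
    ExecutorService workers = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors());
    for (int i = 0; i < 10_000; i++) {
      int user = i;
      workers.submit(() -> System.out.println("user " + user + " executes one step"));
    }
    workers.shutdown();
  }
}
```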

Cheers!

Okay, I understand better, but how do the Actors run on threads?
Thank you for your attention !

AG

See the Akka Actor model documentation.
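If you just want an intuition for it, here is a minimal Akka (classic API) sketch, not Gatling's actual internals: many lightweight actors are scheduled onto the ActorSystem's small dispatcher thread pool.

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class ActorDemo {

  static class VirtualUser extends AbstractActor {
    @Override
    public Receive createReceive() {
      return receiveBuilder()
          .match(String.class,
              msg -> System.out.println(getSelf().path().name() + " handles: " + msg))
          .build();
    }
  }

  public static void main(String[] args) {
    // one ActorSystem backed by a small dispatcher thread pool
    ActorSystem system = ActorSystem.create("demo");
    // a thousand lightweight actors share those few threads
    for (int i = 0; i < 1000; i++) {
      ActorRef user = system.actorOf(Props.create(VirtualUser.class), "user-" + i);
      user.tell("run one step", ActorRef.noSender());
    }
    system.terminate();
  }
}
```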

Cheers!

You probably want to make sure that your tests with the different tools really do the same thing, in particular how many connections they open.

→ number of users: 50 (injectOpen(atOnceUsers(50)))

This means you’re going to open a total of 50 connections (assuming your server supports keep-alive).
Make sure your other tests do that.

Regarding Locust, as I explained, a single Locust process is only going to use one of your cores. If you have an 8-core machine, it's only going to use 1/8 of your CPU.

→ target: a relatively heavy page including all its JavaScript, CSS, images, etc. About 50 GET requests in total.

None of the tools you are testing automate real web browsers (too CPU and memory intensive, it doesn't scale). Are those resources explicitly declared in your tests?
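In Gatling, for example, the page's static resources are only fetched if you declare them explicitly or let the returned HTML be parsed for them. A sketch in the Java DSL (imports omitted; resource paths and base URL are placeholders):

```java
// explicitly declare the resources, fetched concurrently like a browser would
ScenarioBuilder scn = scenario("Heavy page").during(60).on(
    exec(http("home").get("/")
        .resources(
            http("app.js").get("/static/app.js"),
            http("style.css").get("/static/style.css")))
);

// or let Gatling parse the returned HTML and fetch embedded resources automatically
HttpProtocolBuilder httpProtocol = http
    .baseUrl("http://my-target.example")
    .inferHtmlResources();
```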

Okay, thank you so much, I understand much better now!

AG