Throughput drops when using a computation function in a feeder and the number of users is increased


My problem:

I’ve noticed that in a test scenario where a feeder calls a function to do some calculations to obtain a value, throughput drops heavily once you start testing with a certain number of users.

The scenario:

I’ve created a test scenario where I use a dummy function with some arithmetic operations (the sine of 500,000 numbers). I have run a set of tests twice: one set where the feeder calls the function executing the calculations, and another set where the feeder does not call this function. When calling the function, the feeding process itself takes something like 30 ms on my machine; in the other case, it’s effectively 0 ms. The scenario calls a dummy REST service which takes 250 ms to respond.
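For reference, I didn’t post the dummy function itself, but a minimal Java stand-in matching that description might look like the following (in the actual test it runs inside the feeder, so it executes once per feed; the class and method names are just placeholders):

```java
// Hypothetical stand-in for the feeder's computation (the real code isn't
// shown in the post): sum the sine of 500,000 numbers.
public class FeederComputation {
    static double heavyComputation() {
        double sum = 0.0;
        for (int i = 1; i <= 500_000; i++) {
            sum += Math.sin(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        double result = heavyComputation();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Elapsed time is machine-dependent; the post reports roughly 30 ms.
        System.out.println("result=" + result + ", elapsed=" + elapsedMs + " ms");
    }
}
```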

Now for the results:

feeder not calling the function

1 user: throughput 3.5 req/s (mean 252 ms)
10 users: throughput 36 req/s (mean 252 ms)
50 users: throughput 198 req/s (mean 252 ms)

This all seems reasonable and to be expected.

Now, the same tests where the feeder does call the function:

1 user: throughput 3 req/s (mean 252 ms)
10 users: throughput 32 req/s (mean 251 ms)
50 users: throughput 32 req/s (mean 251 ms) !!

For the first two cases, the results seem logical. The slightly lower throughput is to be expected, since the feeder takes some extra time. However, with 50 users the throughput is as if we were testing with 10 users. It seems something is blocking when sending the requests.

These tests were executed using version 2.2.2. When rerunning the test with 50 users on version 2.2.3, I still get the same result (32 req/s).

I have some ideas as to why this is happening, but I’d like the opinion of an experienced user (I’m a novice myself).
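One of those ideas, as a back-of-the-envelope check (this is my own guess, not something I’ve confirmed): if each ~30 ms feed were serialized, i.e. only one virtual user could feed at a time, the request rate could never exceed one feed per 30 ms, no matter how many users are injected:

```java
// Back-of-the-envelope cap if feeds were fully serialized: one feed per
// 30 ms means at most 1000/30 requests per second in total.
public class FeedCap {
    public static void main(String[] args) {
        double feedMs = 30.0;                  // measured feed duration
        double maxReqPerSec = 1000.0 / feedMs; // upper bound on throughput
        System.out.println(maxReqPerSec);      // about 33, close to the observed 32 req/s
    }
}
```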

Thx in advance,


Build a JMH benchmark of your computation (which you didn’t provide) and figure out why it doesn’t scale; it’s most likely your bottleneck.
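JMH is the right tool for a proper benchmark; as a rough, dependency-free first pass, you can also just time N runs of the computation on one thread versus N threads and see whether the parallel time actually drops. The computation below is a hypothetical stand-in, since the original wasn’t posted:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough scalability check: compare N sequential runs of the computation
// against N concurrent runs. If the computation scales, the parallel time
// should be well below the sequential time on a multi-core machine.
public class ScalabilityCheck {
    // Hypothetical stand-in for the feeder's computation.
    static double heavyComputation() {
        double sum = 0.0;
        for (int i = 1; i <= 500_000; i++) sum += Math.sin(i);
        return sum;
    }

    // Time `runs` executions of the computation on a pool of `threads` threads.
    static long timeRunsMs(int threads, int runs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Callable<Double>> tasks = new ArrayList<>();
        for (int i = 0; i < runs; i++) tasks.add(ScalabilityCheck::heavyComputation);
        long start = System.nanoTime();
        pool.invokeAll(tasks); // blocks until all tasks have completed
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        long sequential = timeRunsMs(1, 8);
        long parallel = timeRunsMs(8, 8);
        System.out.println("sequential=" + sequential + " ms, parallel=" + parallel + " ms");
    }
}
```

This won’t give you JMH-quality numbers (no warmup, no dead-code elimination guards), but it’s enough to spot a computation that serializes instead of scaling.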