maximum number of virtual users

Hi all,

I know this is a hot topic and I’ve done the research on why you prefer open- to closed-loop testing, and I agree with you.

However, I feel our use-case justifies the use of a maximum user limit (what you call closed loop) for single machine testing, but not for fleet testing.

Our service can be scaled by throwing more nodes behind the load balancer. The load balancer has very simple rules:

  • prefer sticky sessions
  • have no more than X active connections to a node
  • if no nodes are available, return a default response

So we definitely want the open model when testing at the load balancer level.

But from the perspective of performance testing a single node (e.g. on a local machine to profile it and gain insight into what is happening), prod-like behaviour is best simulated with a limit on active virtual user connections. This allows us to test extended periods at the maximum limit.
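To make the closed model concrete: a fixed pool of virtual users, where each user only starts its next request after the previous one completes, so concurrency can never exceed the pool size. A minimal stdlib-only sketch (the pool size, request count, and simulated request time are made up for illustration, not anything from Gatling):

```scala
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{Executors, TimeUnit}

object ClosedLoopSketch {
  // Runs `requests` simulated requests through a closed pool of
  // `maxUsers` virtual users and returns the peak observed concurrency.
  def run(maxUsers: Int, requests: Int): Int = {
    val inFlight  = new AtomicInteger(0)
    val peak      = new AtomicInteger(0)
    val remaining = new AtomicInteger(requests)
    val pool      = Executors.newFixedThreadPool(maxUsers)

    for (_ <- 1 to maxUsers) pool.execute { () =>
      // Each virtual user loops: the next request starts only after
      // the previous one finished, so concurrency stays <= maxUsers.
      while (remaining.getAndDecrement() > 0) {
        val now = inFlight.incrementAndGet()
        peak.getAndUpdate(p => math.max(p, now))
        Thread.sleep(2) // stand-in for request/response time
        inFlight.decrementAndGet()
      }
    }

    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    peak.get()
  }

  def main(args: Array[String]): Unit =
    println(s"peak concurrency = ${run(5, 50)} (cap = 5)")
}
```

The point of the sketch is the invariant: however long the test runs, the node under test never sees more than the cap, which is exactly the prod-like behaviour the load balancer enforces.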

I can’t find any way to impose a maximum number of active virtual user connections. Is there really no way to do this, even at the HTTP client level? e.g. AsyncHttpClient has a config option to set the maximum, and I could share connections between users in order to get the limits.
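Absent a built-in cap, the connection-sharing idea could be sketched with a plain semaphore gating a shared client: each virtual user must acquire a permit before issuing a request, and blocks while all permits are taken. `BoundedClient` and its parameters are illustrative names, not Gatling or AHC API:

```scala
import java.util.concurrent.Semaphore
import java.util.concurrent.atomic.AtomicInteger

// Illustrative wrapper: gates access to a shared client so that no
// more than `maxConnections` requests are ever in flight at once.
final class BoundedClient(maxConnections: Int) {
  private val permits  = new Semaphore(maxConnections)
  private val inFlight = new AtomicInteger(0)
  val peak             = new AtomicInteger(0) // exposed for inspection

  // Runs `call` under a permit; the caller blocks when the cap is hit.
  def request[A](call: => A): A = {
    permits.acquire()
    try {
      val now = inFlight.incrementAndGet()
      peak.getAndUpdate(p => math.max(p, now))
      call
    } finally {
      inFlight.decrementAndGet()
      permits.release()
    }
  }
}
```

Blocking at the permit is what turns an open workload into a closed one: excess virtual users wait at the client rather than piling connections onto the node.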

Best regards,
Sam

I see in https://github.com/gatling/gatling/blob/master/gatling-http/src/main/scala/io/gatling/http/ahc/AhcFactory.scala#L83 that you do not support changing the

int maxConnections,
int maxConnectionsPerHost,

so I will have to monkey-patch Gatling to do this.

There is a very simple solution: I create an ahc.properties file containing

org.asynchttpclient.maxConnections=20000

and that means if I go over the limit I get red responses, but I don’t take the computer down with me.

Nope. maxConnections and maxConnectionsPerHost are only safety limits that throw exceptions when the limit is reached.

Typical thread regarding this feature in AHC:
user: “I would expect AHC to buffer requests when there’s no connection available”

me: “ok, so which kind of queue?”
user: “unbounded”
me: “so OOME when your peer can’t handle your load?”
user: “bounded then?”
me: “so what when queue is full?”
user: “block calling thread?”
me: “no way, what you want is back pressure, so implement back pressure, or use AHC reactive streams”

Yes, safety limits are exactly what I want :wink:

I’m wanting to profile the server at its limits without forcing new kinds of failure modes, so this is ideal.

Also, hopefully no more machine freezes and data corruption on startup to deal with.