We are facing an issue at the load balancer level with an excessive number of TCP connections per user request. In production we typically see 6 user requests per TCP connection (6:1), but with our current configuration Gatling produces a 1:1 ratio — one TCP connection per request. As a result, the load balancer gets overloaded.
Here is our current configuration:
val scn = scenario("BasicSimulation")
  .exec(http("request_1").get("/computers"))

setUp(
  scn.inject(
    constantConcurrentUsers(1) during (60 seconds)
  ).protocols(httpProtocol)
)
We tried to use throttling:
setUp(scn.inject(constantUsersPerSec(100) during (30 minutes))).throttle(
  reachRps(100) in (10 seconds),
  holdFor(1 minute),
  jumpToRps(50),
  holdFor(2 hours)
)
But throttling does not let us control the number of users, and therefore the number of new TCP connections.

As I understand it, Gatling creates one virtual user per scenario execution, kills it once the scenario finishes, and creates a new one for the next execution. To control the ratio of users to HTTP requests, the requests need to be issued within a single virtual user, for example in parallel via the .resources() functionality:
http("Getting issues")
  .get("https://www.github.com/gatling/gatling/issues")
  .resources(
    http("api.js").get("https://collector-cdn.github.com/assets/api.js"),
    http("ga.js").get("https://ssl.google-analytics.com/ga.js")
  )
So we could issue 6 HTTP requests within one virtual user (one TCP connection), but how do we control the duration of the scenario execution? Or does Gatling have a more transparent way to control the ratio of TCP connections to HTTP requests?
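One idea we are considering (a sketch only, assuming Gatling's during loop DSL and that each virtual user reuses its keep-alive connection across iterations — the scenario name, pause duration, and user count below are placeholders):

  // Sketch: each virtual user loops for 60 seconds, reusing its own
  // keep-alive connection, so several requests should share one TCP connection.
  val scn = scenario("LoopingSimulation")
    .during(60 seconds) {
      exec(http("request_1").get("/computers"))
        .pause(1 second) // hypothetical pacing between requests
    }

  setUp(
    scn.inject(constantConcurrentUsers(10) during (60 seconds))
  ).protocols(httpProtocol)

Would something like this be the recommended approach, or is there a dedicated protocol setting for connection reuse?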