Hi All,
I’m wondering if there is a possibility of introducing a new way of ramping up load in Gatling. Before Gatling, my background was at Ticketmaster (I’m a .NET software developer) during a time when they needed to performance test their software for an Olympics event. We used another company’s product that had a very rich dashboard experience for running load/performance tests.
The key thing I took from that experience, coming to Gatling, is the way load is introduced. I see there is a way to inject users over time and to throttle individual HTTP requests, but in my experience the driver for throttling is ‘users’. Whenever I was running a performance test, or talking to the business about load, the conversation was always driven by the number of concurrent users being put on the server farm. I was surprised that Gatling doesn’t let me throttle based on concurrent users, because once a system starts to come under stress, requests begin to queue on a server, slowly or quickly, eventually causing a crash etc., and at that point you lose track of exactly how many concurrent users your infrastructure can actually manage. Even after establishing that you can only handle a certain number of users, you might introduce a queuing product that gives website visitors a ticket, a time in a queue, etc., and again the thinking is all about concurrent users in/on the web servers.
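To make the contrast concrete, here is a minimal sketch of the current approach I mean, using Gatling’s standard DSL (`rampUsers` for injection over time, `throttle` for capping request rate). The base URL and scenario are placeholders, and the exact syntax may vary between Gatling versions — the point is that both knobs are arrival rate and requests per second, not concurrency:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class OpenModelSimulation extends Simulation {
  // Placeholder target system
  val httpProtocol = http.baseUrl("http://example.com")

  val scn = scenario("Browse")
    .exec(http("home").get("/"))

  setUp(
    // Injects 100 users over 5 minutes -- this controls the *arrival rate*,
    // not how many users are concurrently active on the servers.
    scn.inject(rampUsers(100).during(5.minutes))
  ).throttle(
    // Throttling is expressed in requests per second, again not in users.
    reachRps(50).in(30.seconds),
    holdFor(10.minutes)
  ).protocols(httpProtocol)
}
```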
I would love the ability to say “always keep concurrent users at 100”, and then add a ramping injection with a new setting of, say, 200 max concurrent users. It would be even nicer if I could drive the ramp itself by concurrent users, stating the following:
Ramp up to 100 concurrent users over 5 minutes.
Ramp up to 200 concurrent users immediately.
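In code, the profile above might look something like this. To be clear, this is purely hypothetical syntax — the method names `rampConcurrentUsers` and `jumpToConcurrentUsers` are ones I am inventing here to illustrate the idea, not part of the existing Gatling DSL:

```scala
// Hypothetical concurrency-driven injection profile (invented names):
setUp(
  scn.inject(
    // Gatling would add/remove virtual users as needed to *hold* the
    // target concurrency, rather than just controlling arrival rate.
    rampConcurrentUsers(0).to(100).during(5.minutes),
    jumpToConcurrentUsers(200) // step to 200 concurrent users immediately
  )
).protocols(httpProtocol)
```

The key difference from today’s model is that the tool, not the tester, would own the bookkeeping of keeping the live user count at the stated level.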
The second feature I would like is the ability to dynamically adjust this ramp-up ‘during’ a test. Once I’ve established a ramp-up in code, I find I’m constantly stopping and rerunning with a new ramp-up to find my server’s ‘sweet spot’. I would love to kick the test off and control the load in real time, so that I can slowly increase it by X amount and monitor server metrics, let things stabilise, maybe even tweak something, then slowly turn the load up again, or decrease it if something happened environmentally.
Has this been considered before? I might be outside the bounds of what the product is intended for. I’d love to hear your feedback.
Thanks.