I just saw an email discussion of this on the JMeter list. I wonder if Gatling has an equivalent, and what it is?
That’s kinda lame though.
We’re currently doing a few things with Linux’s built-in queue discipline support. You can do a lot with it, quite a lot more than just limiting bandwidth.
For the record, bandwidth is nowhere near as important to response times as latency and/or packet loss. Limit bandwidth and you will still have a fairly fast site, provided it doesn’t use overly large files. Add a bit of latency, however, and you’ll quickly find that whatever amount of time you add gets multiplied by a fair amount, depending on your site and on whether you are using SPDY or not.
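A rough back-of-the-envelope model makes the point (the formula and every number below are illustrative assumptions, not measurements): each sequential request pays a full round trip, so added latency gets multiplied by the number of round trips, while bandwidth only affects the transfer term.

```python
def page_load_time(round_trips, total_bytes, one_way_latency_s, bandwidth_bps):
    # Crude model (assumption): each sequential round trip costs one RTT
    # (2x one-way latency), plus transfer time for the payload.
    return round_trips * 2 * one_way_latency_s + total_bytes * 8 / bandwidth_bps

# Hypothetical 500 KB page needing 20 sequential round trips:
fast = page_load_time(20, 500_000, 0.020, 50e6)       # ~0.88 s
slow_pipe = page_load_time(20, 500_000, 0.020, 5e6)   # 10x less bandwidth: ~1.6 s
laggy = page_load_time(20, 500_000, 0.120, 50e6)      # +100 ms latency: ~4.88 s
```

In this toy model, cutting bandwidth tenfold adds under a second, while an extra 100 ms of latency adds four seconds, because it is paid on every round trip.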
+10. This is something that’s squarely not in the domain of the tool. You can simulate world+dog conditions with HTB, with random perturbations such as packet loss, delays etc. It’s absolutely lovely.
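For anyone who hasn’t touched this: netem (used alongside HTB) is driven entirely from the `tc` command line. A minimal sketch of what the commands look like, wrapped so they can be built and inspected before running; the interface name, delay, and loss figures are illustrative, and actually applying them requires root.

```python
def netem_setup(iface, delay_ms=0, jitter_ms=0, loss_pct=0.0):
    # Build (not run) the tc command that attaches a netem qdisc
    # with the given delay, jitter, and packet-loss impairments.
    cmd = ["tc", "qdisc", "add", "dev", iface, "root", "netem"]
    if delay_ms:
        cmd += ["delay", f"{delay_ms}ms"]
        if jitter_ms:
            cmd.append(f"{jitter_ms}ms")
    if loss_pct:
        cmd += ["loss", f"{loss_pct}%"]
    return cmd

def netem_teardown(iface):
    # Remove the root qdisc, restoring normal network behaviour.
    return ["tc", "qdisc", "del", "dev", iface, "root"]

# e.g. 100 ms +/- 20 ms delay with 1% packet loss on eth0:
# subprocess.run(netem_setup("eth0", 100, 20, 1.0), check=True)  # needs root
```

Pairing the setup with a guaranteed teardown (e.g. in a try/finally around the test run) keeps the shaped state from leaking between runs.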
I wouldn’t go that far. It doesn’t hurt to offer some kind of interface or at least a help page for it. Being able to control everything that happens in a test from a single interface is really nice, including this kind of stuff.
I think it’s a completely separate concern and pollutes what’s a load testing tool, but then again that’s my software dev brain talking.
Definitely. A ‘load’ test can actually have a lot of different measurement goals and requirements. This isn’t even that far off the beaten path.
Performance has a lot of different aspects. When the requirement to be tested is that 99% of all user interactions must complete within a second, even on the busiest day of the year, on a laggy 3G network, how will you test such a requirement? How will you make such a test predictable and repeatable without somehow tying the ‘load’ test tool to the tools used to configure your network?
I know that most shops aren’t that stringent, but as the technology and the profession mature, expect a lot more of this sort of thing. A tool like Gatling or LR is expected to do more than just put load onto something: there have to be meaningful results that can be related to the real world.
Look, I agree with you; my salient point is that there doesn’t need to be a shiny button or property that will tweak HTB or whatever QoS buckets your OS comes with. There’s a clear separation of concerns here: the load tool pushes load and measures results. The results are then interpreted according to the condition of the constituent components of the architecture. If you don’t have enough intelligent people along the chain to set up the system to mimic your requirements, then we are steering into the previous discussion about “smart” tools not only doing the thinking but doing the doing!
There is a difference between tools that try to do the thinking for you and tools that let you automate away the dumb stuff.
Yes, we have intelligent people. Doing it the way you propose would work very well as a one-off. But long term, you don’t want to need a team of specialists for something that should be easy to automate away.
+1 to Floris’s comments. I can understand both sides of the argument, but I side against having facilities to automate away the dumb/repetitive/boilerplate stuff.
Even a wiki would be helpful, if nothing gets integrated into the tool. And a wiki (or whatever else) doesn’t have to be part of Gatling; it could be a separate project. It would still be nice to have some support for this, one way or another, in Gatling.
I’ve not worked in this area, so I could be ignorant, but I would imagine that QoS, bandwidth, and other networking configurations for testing can’t possibly be so customized per organization that a general guide to configuring this stuff on the different platforms (*nix, Windows, routers, switches, access points, wifi signal extenders, iOS, Android, etc.) couldn’t be written. That would help those without a superbly intelligent team on hand; not all organizations have the time, luck, and $ to attract such people. I’m sure there are customizations specific to each organization, but there must be some amount of it that’s generic. Sharing how to do (and perhaps automating) that generic dumb/repetitive/boilerplate stuff is what’s of value.