Gatling scaling

Is a scaling feature on the Gatling development roadmap?

I love Gatling; it has been my load testing tool of choice for quite a while. I have also introduced it to many organizations, and they were all impressed by it.

Yes, we can follow the “scaling out” strategy in the documentation and write our own scripts, but that requires more work than necessary. It would be nicer if Gatling provided a scaling feature out of the box. What I am envisioning is a master node that sends commands to slave nodes, which do the actual load generation. At the end of the load test, the master would automatically aggregate the report. Another nice feature would be for the master to report the live status of each slave during the test (so that I don’t have to open 10 SSH windows to watch each slave). While a load test is running, I would only have to interact with the master node.
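For what it’s worth, the manual “scaling out” procedure can already be scripted. Below is a minimal sketch, assuming slave hosts reachable over SSH with `gatling.sh` on each slave’s PATH; the host names are placeholders, and the flag names (`-s`, `-nr`, and the `-ro` mentioned in the comment) should be checked against your Gatling version:

```python
import shutil
import subprocess
from pathlib import Path

# Placeholder slave hosts -- replace with your own machines.
SLAVES = ["slave1.example.com", "slave2.example.com"]

def run_on_slaves(simulation: str) -> None:
    """Kick off the same simulation on every slave over SSH and wait.
    Assumes gatling.sh is on each slave's PATH; -nr skips per-slave reports."""
    procs = [
        subprocess.Popen(["ssh", host, f"gatling.sh -s {simulation} -nr"])
        for host in SLAVES
    ]
    for proc in procs:
        proc.wait()

def merge_logs(log_files, merged_dir):
    """Gather each slave's simulation.log into one directory, as the
    "scaling out" docs describe; a single consolidated report can then
    be rebuilt from that directory with `gatling.sh -ro <merged_dir>`."""
    merged_dir = Path(merged_dir)
    merged_dir.mkdir(parents=True, exist_ok=True)
    for i, log_file in enumerate(log_files):
        shutil.copy(log_file, merged_dir / f"simulation-{i}.log")
    return merged_dir
```

The live-status part is the genuinely new work; the rest is mostly fetching files and shelling out.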

I understand Gatling is still at a very young stage, and there are probably more important things to do than a scaling feature (like nailing the DSL), but I feel we really need to put scaling on the development roadmap so Gatling can pull even further ahead of its competitors. I would really like to see Gatling crush competitors like LoadUI or JMeter :slight_smile:

Agreed?

Yep. You really need to implement what LoadRunner has had for nigh on 15 years now :slight_smile:

Not to say it’s a small job or anything, just that the open source tools really have a lot of catching up to do… and I for one would welcome the existence of actual competition. :slight_smile:

Yep, absolutely agree.
So much to do, so little time, but that will be for 2014.

Agreed, but then again it is always easy when you have the HP behemoth behind you since 2006. We need to see more involvement from the community, I assume the main devs do have day jobs to contend with so their bandwidth can narrow considerably. It’s a shame my Scala skills wouldn’t fill the back of a postage stamp, I’d love to get stuck in with something like this.

Yes, we definitely need more community involvement. I really would like to contribute to the scaling feature, so I have been reading up on Scala and Gatling’s source :slight_smile:

I actually find that contributing to Gatling a few times a week after work takes the stress from my day job away :wink:

> Agreed, but then again it is always easy when you have the HP behemoth behind you since 2006. We need to see more involvement from the community, I assume the main devs do have day jobs to contend with so their bandwidth can narrow considerably. It’s a shame my Scala skills wouldn’t fill the back of a postage stamp, I’d love to get stuck in with something like this.

Yep, that’s basically it: so much to do, so little time. But there’s many ways to contribute: lend a hand on the mailing list, report bugs, give the snapshots a try, help improve the documentation (we’re working on the Sphinx template for the new documentation, then we’ll “just” have to write it down), front end skills would be really appreciated, etc. Any help welcome!

> Yes we definitely need more community involvement. I really would like to contribute to the scaling feature, so I have been reading on Scala and Gatling’s source :slight_smile:

That’s really appreciated!

> I actually find contributing to Gatling few times a week after work takes the stress from my day job away :wink:

You mean that our stress tool is also an anti stress tool?

I tell you, LoadRunner is a totally inefficient tool - it requires way more resources than Gatling - that’s why they had to implement scaling 15 years ago…
A few months ago, I was faced with a situation where a LoadRunner installation with 4 load generators couldn’t generate as much load as Gatling. Performance-wise, LoadRunner just sucks.

Hmm. That doesn’t match my experience. Basic web vusers are fairly lightweight: I can have a single generator put out around 1,000 TPS without too much effort before the CPU caps out, and I really haven’t tried tuning that at all. In fact, my scripts tend to be quite feature-heavy :wink:
Don’t forget that this stuff was built in the late ’90s, when CPU and memory resources were quite a bit scarcer. The later additions invariably suck performance-wise, but the core web protocol really doesn’t.
So don’t try to use the fancy Ajax stuff - that protocol is fairly horrible. :wink:

But maybe I just lack comparison material here. :wink:

With Gatling, we already reached 80k requests/sec on an i7 MacBook Pro…

Hey Nicolas. Floris mentioned TPS, so requests/sec is not an equivalent metric.

What does the T stand for?

Transactions.

Alright, I guessed that. But what is a “transaction” in this context?

And oops, I meant 8k requests/sec :wink: During the simulation, the number of concurrent active sessions was also about 8k.

Exactly :slight_smile: Since we don’t know what a transaction does in each system (could be a login procedure, could be placing an order, etc.), it’s hard to compare.

In this case each transaction was a single HTTP request with possibly a redirect response followed by a second request to the redirected resource.

But yes, it’s quite possible for a “transaction” to actually be a collection of HTTP requests. So it doesn’t necessarily equate to RPS, though in practice it often will.
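To make that distinction concrete, a trivial illustration (the numbers are made up):

```python
def rps_from_tps(tps, requests_per_transaction):
    """Requests/sec implied by a transaction rate, given the average
    number of HTTP requests a single transaction fires."""
    return tps * requests_per_transaction

# One request plus a redirect follow-up means 2 requests per
# transaction, so 1000 TPS is really 2000 requests/sec.
print(rps_from_tps(1000, 2))  # 2000
```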

Anyway, that was a fairly untuned file upload script (sending files in the 1-2 MB range on average), and when I look back at that test it wasn’t CPU, it was actually memory that we were capping out on. (And that definitely wasn’t normal - I had some problems with that script which I won’t bore you with.)

I don’t quite know what hardware those generators have, since they’re actually virtual machines that share physical hardware with other generators. Hardware that isn’t really all that new, I might add :wink:

If you really insist on comparing them I can probably dig up something from the archives.

The base point, though, is that I rarely, if ever, have problems with generator performance, and we do have a fairly decent-sized site to simulate, with 1.5 million unique customers logging in every day. We can simulate the load on the dual-server LST environment with one generator sitting there mostly idle, and even though that’s just 1/12th of prod, it serves our needs quite easily.

So no. I don’t believe “performance” is really a LR problem. That tool has many, many issues, but web protocol script performance isn’t one of them.

So why bother with scaling Gatling, then?

IMHO, the main situation that would require scaling out is when you saturate the NIC.
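A rough sketch of where that ceiling sits, assuming a 1 Gbit/s interface and an illustrative payload size:

```python
GIGABIT_BYTES_PER_SEC = 1_000_000_000 / 8  # 1 Gbit/s is 125 MB/s

def nic_saturation_rps(avg_bytes_per_request):
    """Requests/sec at which a 1 Gbit/s NIC saturates, given the
    average bytes on the wire per request (headers plus body)."""
    return GIGABIT_BYTES_PER_SEC / avg_bytes_per_request

# At roughly 15 KB transferred per request, the NIC caps out
# around 8.3k requests/sec.
print(round(nic_saturation_rps(15_000)))  # 8333
```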

+10

As in input/output queue length?