I’m porting my framework’s Groovy code that wraps Gatling over to Scala and noticed that M3a is still on scopt2, whereas master is on scopt3 (yay). I need support for scopt3 in order to continue and was curious about the projected timeline for release of M4.
(Also: 2.x is looking great!)
My current best guess for M4 is the end of November.
- finish HTML embedded resources fetching support
- polish SBT build
- SBT plugin
- properly document Gatling 2 and move all documentation to new website (Sphinx infrastructure is ready, “just” have to write everything down)
- move everything Gatling related from Excilys Github organization to the Gatling one
- build and launch new gatling.io website
This isn’t strictly related to 2.0.0-M4, but I do have a question with the snapshot releases.
My tests heavily rely on the ability to have range pause durations, e.g.: pause(2 seconds, 3 seconds)
It looks like this has been completely removed. I was going to recreate this functionality with a custom PauseType, but the only available generator signatures accept a single argument, and the PauseType trait is sealed. We build our load tests using min/max times for users to complete certain requests so that our load tests are more representative of the real world.
Further, some pauses would look like this:
.pauseCustom(devPause(30 seconds, 1 minute))
Where devPause would let our engineers send a flag into the framework to cut the times to a constant pause of 1 second so they could iterate on test changes faster (some of our load tests simulate scenarios that take users 30 minutes to complete). This also doesn’t seem possible anymore. From what I can see in the commit history, this was deliberately removed entirely (https://github.com/excilys/gatling/commit/e71ddf2341e2b8e2565fa2e3e96002edcfed6ed1).
Do you have an example of how I can still meet my two use cases?
We felt like it didn’t really make sense to define pause distribution per pause element (have a range here, a constant there, another range elsewhere), but rather globally at Simulation level:
Regarding range pause durations, you can set up “uniform” pauses, where you configure the half-width; the pause value is used as the median.
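For illustration, a hedged sketch of what simulation-level uniform pauses look like (method names are from the 2.0 line and may differ slightly in the master snapshot discussed here; `scn` stands for your scenario):

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._

// Every .pause(d) in the scenario is drawn uniformly from [d - 2s, d + 2s]
setUp(scn.inject(atOnceUsers(10)))
  .pauses(uniformPausesPlusOrMinusDuration(2.seconds))

// Or a relative half-width: uniform in [d * 0.7, d * 1.3]
// .pauses(uniformPausesPlusOrMinusPercentage(30))
```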
Regarding having a function for switching from dev to prod, you can still do that; only the signature changed.
So devPause would look like:
def devPause(dev: Duration, prod: Duration): io.gatling.core.session.Expression[Duration] = _ => if (devMode) dev else prod
(where devMode is whatever flag you use to switch between dev and prod)
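A minimal self-contained sketch of that idea, assuming a devMode flag driven by a hypothetical gatling.dev system property (the Expression signature follows the reply above, as in the milestone snapshots):

```scala
import scala.concurrent.duration._
import io.gatling.core.session.Expression

// Hypothetical flag: run with -Dgatling.dev=true to shorten all pauses
val devMode: Boolean = sys.props.get("gatling.dev").contains("true")

def devPause(dev: Duration, prod: Duration): Expression[Duration] =
  _ => if (devMode) dev else prod

// Usage in a scenario: a long production think time collapses to 1s in dev mode
// .pause(devPause(1.second, 45.seconds))
```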
Does this meet your needs? Makes sense?
Any feedback welcome.
Your example of devPause will do nicely - thanks.
As for the global-level pauses, I’m just going to present our use case and see where this leads.
The pause change won’t really have any effect on our REST API tests, but our primary use of Gatling is to simulate load against our middleware, effectively simulating users using our rich JS application. In our scenarios, some requests need to pause for a long duration, such as when someone is filling out a form or reading information we’ve presented them, and some requests are fast, like back-to-back communication from our frontend to the middleware.
From what I can tell, with this change we would need to use broad strokes across all requests for deviation, which is not representative of what we’ll see in a production environment. Small example:
This is a chain that is part of shopping, and we enter it from another chain based on a click action in the JS application, which kicks off a few requests right off the bat to the service that populates the page. Then the user sits there and contemplates which products they’re going to choose, then finally selects one - this is where the longer wait times come into play. We link together quite a few chains, where some requests have ranges of 22 seconds - 3 minutes, 10 - 20 seconds, 5 - 8 minutes, and so on - so the spread isn’t consistent.
Being able to tune our pauses in between requests independently is critical to our continued use of Gatling. I can get along with not having exacting numbers as seen in my example (say, deviate a percentage from a given median), but defining the deviation percentage at a global level won’t suffice.
Well, I guess bringing back pause-level ranges would probably be feasible, but just to be sure: are you sure that the pauses you see on your live system are uniformly distributed? Usually, exponential pauses match reality better.
For the UI tests (where we use ranges), we take our min/max numbers from Google Analytics over a rather large block of time (a month or two) and then cut the outliers. Randomly selecting from the range for each user is “good enough” since we don’t really see people staying on the site longer than it takes to get the job done (which doesn’t change with the number of users concurrently using the application). Does that answer your question?
A new snapshot should be available soon on Sonatype.
Thank you! Fantastic as always.
Is this the HTML mode of operation we discussed earlier?
Regarding devPause(), I have two comments:
- Long term, it may be worth creating a ‘development mode’ for Gatling, where you can run a scenario with a different set of settings depending on whether you’re doing scenario development or actual performance tests.
If the mode Gatling is running in can be discovered from within the scenario, it should then be possible for the test engineer to rearrange the clickflow - to do the user actions in a completely different order depending on the mode.
This can be useful for verifying that the scenario won’t hit functional errors during the test.
You see, if you heavily use conditionals with percentages, you can’t tell how the scenario will run during the test just by executing a single iteration, so you can’t guarantee you won’t have functional errors during the performance test itself. Some performance tests are very labour-intensive to set up and require a lot of planning, so having a test fail because the simulation ended up with the ‘user’ clicking non-existent buttons or doing things that are not possible in the real world is a rather big thing to avoid. To prevent that, we have debug clickflows that hit every possible way the scenario can end up going through the clicks. These clickflows coincidentally also disable all think time and do not contain any pauses in the debug path - just like Rob’s scenario is doing. (Except we use 0 seconds, not 1.)
As an aside, to make this even possible I have to create a ‘library’ of clicks for each application: function calls that wrap the URL and all verifications and correlations required for each click, so that all knowledge about how a specific user-level action is technically achieved is wrapped inside it.
(But not the pause time - on a very abstract level, pause time is something that belongs with scenario mode, not with debug mode, and is not intrinsically part of the click. Unless you’re debugging the pause time itself, perhaps.)
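Such a click-library entry might look like this in Gatling’s DSL (the action name, URL, parameters and checks are all made up, purely illustrative; method names are from the 2.0 line):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// One "click": wraps the request, its checks and its correlations -
// but deliberately not the pause time, which stays at the scenario level
def addToCart(productId: String) =
  exec(
    http("AddToCart")
      .post("/cart/add")
      .formParam("productId", productId)
      .check(
        status.is(200),
        css("#cart-count").saveAs("cartCount") // correlation for later clicks
      )
  )
```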
I haven’t yet tried to do this with Gatling, but I expect I will once I get serious about test-driving it.
- If pause time is a global thing, does this mean it can’t change during the scenario run? Or can it?
For simulating internal connection pools we use a bit of code that manipulates pause time on the fly. If you know that an application server will start exactly 12 HTTP connections during startup and send all requests to the backend across those 12 connections, and you have to simulate that behaviour, then ramp-up looks rather different from what you’re used to. Specifically, we ramp up by starting 12 threads simultaneously and use a non-linear function to manipulate the think time, gradually decreasing it so that we still get a linear increase in load even though the number of concurrent threads doesn’t change.
This means that something like .pauseCustom is going to be fed numbers from a function call. Eventually, very small numbers - we’re talking fractions of a millisecond in the final stages of some tests.
I sort of presumed this was easily possible with gatling, but maybe I shouldn’t.
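A sketch of that kind of on-the-fly think-time function in plain Scala (the decay curve and all constants are invented for illustration; the result would be fed to the custom pause hook, wrapped in an Expression as needed by the snapshot’s API):

```scala
import scala.concurrent.duration._

// Fixed pool of 12 connections: load grows by shrinking think time,
// not by adding users. Hypothetical curve: think time halves every halfLife.
val testStart = System.currentTimeMillis()
val initialThink = 10.seconds
val halfLife = 5.minutes

def rampThinkTime(): FiniteDuration = {
  val elapsedMillis = (System.currentTimeMillis() - testStart).toDouble
  val factor = math.pow(0.5, elapsedMillis / halfLife.toMillis)
  // floor at 0.1 ms - "fractions of a millisecond" in the final stages
  val nanos = math.max(100000L, (initialThink.toNanos * factor).toLong)
  nanos.nanos
}
```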
Is this the HTML mode of operation we discussed earlier?
planning to work on this, but need some features in async-http-client first. will take some time.
already possible in master (see below)
build libraries of requests, checks, chains or whatever
that’s the whole point of scenario as code approach
pause time is not a global thing, pause strategy is
current strategies (well, in master) are:
- Constant (default): use the value configured in .pause()
Glad to help.
Regarding the pause strategies: I just got a question from a colleague yesterday about how to make the first 30 virtual users (threads) use the think-time-based ramp-up strategy (code implementation of the pause function here: https://github.com/randakar/y-lib/blob/master/y_loadrunner_utils.c#L942), and have the rest use regular (fixed range, uniform distribution) think times.
Apparently a portion of the load comes in through one channel (a fixed pool) and another portion comes in through a different channel (MQ?), but they use the same request interface for each channel. That’s definitely a new one, but it’s valid. And it goes to show that trying to predict what people will do with pause times globally is a tad hard.
Nonetheless you still -need- global pause strategies on a scenario level, simply because varying the numbers from test to test is a valid thing to do and you don’t want to have to change all the call sites…
There’s many ways to achieve that with Gatling.
The simplest way is to reuse the same scenario with different set ups (users, protocols, etc).
val scn = …
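A sketch of reusing one scenario with different setups (injection and pause-strategy method names are from the 2.0 line and may differ in the snapshot; the system property name is invented):

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

val httpConf = http.baseURL("http://example.com") // hypothetical protocol config
val scn = scenario("Checkout") // shared chain definitions go here

// Pick the pause strategy per run, e.g. -Dpauses=constant for a smoke test,
// without touching any call site inside the scenario
val pauseStrategy =
  if (sys.props.get("pauses").contains("constant")) constantPauses
  else uniformPausesPlusOrMinusPercentage(30)

setUp(scn.inject(rampUsers(100) over (10.minutes)))
  .protocols(httpConf)
  .pauses(pauseStrategy)
```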
First of all I’d like to say how great the Gatling tool is. I’ve been using it a lot, and I really enjoy it.
And as this thread is about the timeline for the 2.0.0-M4 release, do you have any update on when it is going to be released?
I know it was supposed to be the end of November, but I just wanted to get an update as we’re now mid-December.
Thanks for all.
FYI, personally, I’ve been using 2.0.0-SNAPSHOT for the last few months, and except for a few glitches now and then, it’s been really stable, and nice new features have been added constantly. The gatling team rocks!
Yeah, we’re late once again, sorry…