I’ve defined a relatively simple scenario with assertions on the number of failed requests, but those assertions are only evaluated at the end of the test run.
What I’d like to do is create a test that stops running once it starts seeing a certain error rate (i.e. it should ramp up users over time until requests start failing at a certain rate/percentage, then finish).
Is there a way to do this? I can add assertions to the whole run for evaluation at the end, but the test still runs all the way through even when requests start failing.
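For context, a minimal sketch of the kind of setup I mean, assuming Gatling’s Scala DSL (2.x style); the target URL, user counts, and threshold are placeholders:

    import scala.concurrent.duration._
    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class RampSimulation extends Simulation {

      // Placeholder protocol; swap in the real base URL
      val httpProtocol = http.baseURL("http://example.com")

      val scn = scenario("ramp")
        .exec(http("home").get("/"))

      setUp(scn.inject(rampUsers(500) over (10 minutes)))
        .protocols(httpProtocol)
        .assertions(
          // Only evaluated once the whole run has finished -- which is the problem
          global.failedRequests.percent.lessThan(5)
        )
    }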
-Sam
This is a feature people have asked for before. Others have wanted to throttle the test based on error rates.
The best you can do at this point (unless something has changed recently) is to add the logic you need to the scenario, or babysit your test while monitoring it with Graphite.
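To be concrete: there’s no built-in “abort the run at N% errors”, so scenario-level logic ends up being crude. One sketch, assuming Gatling 2.x, is to let each failed virtual user bail out with exitHereIfFailed and keep a shared failure counter that brings the whole JVM down once a budget is spent (the budget and the sys.exit are my own hack, not a Gatling feature):

    import java.util.concurrent.atomic.AtomicLong
    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    object ErrorBudget {
      val failures    = new AtomicLong(0)
      val maxFailures = 100L // hypothetical budget before aborting the run
    }

    val scn = scenario("ramp until errors")
      .exec(http("home").get("/").check(status.is(200)))
      .doIf(session => session.isFailed) {
        exec { session =>
          // Not a Gatling feature: brute-force the whole run down past the budget
          if (ErrorBudget.failures.incrementAndGet() >= ErrorBudget.maxFailures)
            sys.exit(1)
          session
        }
      }
      .exitHereIfFailed // each failed virtual user stops here instead of continuing

Dropped into a Simulation, this kills the process outright, so Gatling won’t generate its end-of-run reports.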
Could you parse the Graphite data and kill the Gatling process when errors hit some kind of threshold?
Yes. You could build a monitoring process that launches Gatling, polls the Graphite API, and kills Gatling if it doesn’t like what it sees. But it would be less work to watch the results folder, tail the simulation.log, and calculate the error rate yourself before killing the process.
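For instance, a minimal watcher along those lines: it launches Gatling as a child process, rescans the newest simulation.log every few seconds (a rescan rather than a true tail, to keep the sketch short), and destroys the process past an error-rate threshold. The launch command, the paths, and the assumption that request lines start with "REQUEST" and carry a tab-separated OK/KO status all depend on your Gatling version, so check them against your own log:

    import java.io.File
    import scala.io.Source

    object ErrorRateWatcher extends App {
      val resultsDir   = new File("results") // default Gatling results folder
      val maxErrorRate = 0.05                // hypothetical 5% kill threshold

      // Launch Gatling as a child process so we can terminate it ourselves
      val gatling = new ProcessBuilder("bin/gatling.sh", "-s", "MySimulation")
        .inheritIO()
        .start()

      // Newest run directory's simulation.log, if it exists yet
      def latestLog: Option[File] =
        Option(resultsDir.listFiles)
          .flatMap(_.sortBy(_.lastModified).lastOption)
          .map(dir => new File(dir, "simulation.log"))
          .filter(_.exists)

      while (gatling.isAlive) {
        Thread.sleep(5000)
        latestLog.foreach { log =>
          val src = Source.fromFile(log)
          // Assumes request records start with "REQUEST" and carry a \tKO field on failure
          val requests = src.getLines().filter(_.startsWith("REQUEST")).toVector
          src.close()
          if (requests.nonEmpty) {
            val errorRate = requests.count(_.contains("\tKO")).toDouble / requests.size
            if (errorRate >= maxErrorRate) {
              println(f"Error rate $errorRate%.2f over threshold, stopping Gatling")
              gatling.destroy()
            }
          }
        }
      }
    }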
That being said: what motivates you to want to kill the test at a certain point in the run?