Hey
I am currently using Gatling assertions to mark the Jenkins build as failed. I read in the Gatling docs that “All the assertions are evaluated after running the simulation.”
I wanted to know: is there any way to kill the Jenkins job at runtime instead of waiting for the complete simulation to run?
For example, if all the requests are just failing (maybe 5XX errors), there is no point in continuing with the test, and there should be some mechanism to kill the build/test automatically.
Thanks
Salil Gupta
To the best of my knowledge, Gatling does not support assertions that let you bail out mid-run when certain conditions are met. You could achieve it if you are willing to spend the time developing the logic yourself, but I warn you: it won’t be easy, and it will obscure the real intent of your test. I do not recommend it.
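For illustration only, here is a rough sketch of what such a hack might look like: a shared counter incremented for failed virtual users, with a hard System.exit once a made-up threshold is crossed. The URL, request, and threshold are all placeholders, and killing the JVM this way skips Gatling’s report generation entirely, which is part of why I don’t recommend it:

import java.util.concurrent.atomic.AtomicLong
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BailOutSimulation extends Simulation {

  // Made-up threshold: kill the whole run after 100 failures
  private val maxFailures = 100L
  private val failureCount = new AtomicLong(0L)

  val httpProtocol = http.baseUrl("https://staging.example.com") // placeholder URL

  val scn = scenario("bail-out hack")
    .exec(http("home").get("/")) // placeholder request
    .exec { session =>
      // If this virtual user has hit a failure, bump the shared counter;
      // once the threshold is crossed, kill the JVM. Brutal: no report is
      // generated, Jenkins just sees a dead process with a non-zero exit code.
      if (session.isFailed && failureCount.incrementAndGet() >= maxFailures) {
        System.exit(1)
      }
      session
    }

  setUp(scn.inject(constantUsersPerSec(50).during(10.minutes)))
    .protocols(httpProtocol)
}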
What you might want to do is run the test twice in a row: the first with a short duration and relatively low volume, just to prove that connectivity is established, then the full simulation only if the first one passed. Or even create a separate functional test to run first, which does only the bare minimum to establish that the simulation is runnable.
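As a sketch of that first check, assuming a placeholder URL and request: a one-minute, one-user-per-second smoke simulation asserted with zero-failure tolerance. Have Jenkins run this first and launch the full simulation only if it passes:

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class SmokeSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://staging.example.com") // placeholder URL

  val scn = scenario("smoke check")
    .exec(http("home").get("/")) // placeholder: the bare minimum to prove connectivity

  setUp(scn.inject(constantUsersPerSec(1).during(1.minute)))
    .protocols(httpProtocol)
    .assertions(global.failedRequests.count.is(0L)) // any failure fails the Jenkins build
}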
Hey John,
Thanks a lot for such a prompt response.
Actually, I want to run these tests as an automated nightly build. If the test is stressing the system too much, it should just back off, since the test runs directly against the production system, which I don’t intend to bring down.
Also, regarding running two tests: the system might only start choking in the second load test, when the actual load is put on the system, so that might not solve my problem.
Rule #1: Do not load test in production.
Rule #2: When doing load/performance/stress testing, the test is only meaningful if your test is the only thing generating traffic against the server.
So I suggest you stop, take a deep breath, and re-evaluate what you are trying to accomplish. WHY are you running Gatling against production?
I am trying to stress the system to see how it behaves under 10X load.
We can even park that aside for now. My question stands even if I am stress testing staging: is there any way, rather than waiting for the whole simulation to complete, to exit the script when our test criteria (assertions) fail? For example:
global.failedRequests.percent.lte(1)
Here I want at most 1% of my requests to fail. After a certain point, once 1% of the total requests have already failed, shouldn’t the simulation just fail right there rather than keep on running and only display a failure message at the end, like:
Global: percentage of failed requests is less than or equal to 1.0 : false
I am just looking to terminate the test in this case.
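For context, this is roughly how my nightly simulation is wired up (base URL, request, and injection rate are placeholders); as I understand the docs, the assertion is only checked once the run has finished:

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class NightlySimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://staging.example.com") // placeholder URL

  val scn = scenario("nightly load")
    .exec(http("home").get("/")) // placeholder request

  setUp(scn.inject(constantUsersPerSec(50).during(30.minutes)))
    .protocols(httpProtocol)
    // Evaluated only after the simulation completes; nothing stops the run early
    .assertions(global.failedRequests.percent.lte(1))
}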
Here’s the right way to do this, at least in my opinion:
- Set up a dedicated “stress” environment. The only traffic on that environment will be generated by your tests, period. Do not share this environment with anyone else.
- Run your test with a long, slow ramp-up, from 0 to N users per second over at least 30-60 minutes, where N is a number you know is more than the system can handle.
- Analyze the results and find the injection rate at which the system reaches its saturation point. An easy way to do that is to drill down to the details, pick one of your requests, and look at the bottom graph, “response time vs. global requests per second”. When the graph stops being flat and starts to fan out, that’s the approximate saturation point. Divide that RPS by the average requests per user, and you have an approximate value of N that represents the saturation point.
- Run another test ramping from N-10% to N+10%, where N is the saturation point.
- Analyze these results, and confirm that your chosen value of N is indeed the saturation point, or just below it.
- Run a test with a fast ramp (5 minutes or so) to N, then sustain it for half an hour to an hour.
- Analyze these results and confirm that the system was able to sustain that load level for the full test.
- Select 90% of N as your target load level.
- Run that test, analyze the results, and set your assertions based on them (see the sketch after this list).
- Now, set up your nightly job to run in the stress environment, using your target load level.
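As a sketch in Gatling’s DSL, with made-up numbers (suppose the first ramp reveals a saturation point around N = 100 users per second; URL and request are placeholders):

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StressRampSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://stress.example.com") // placeholder: dedicated stress env

  val scn = scenario("stress ramp")
    .exec(http("home").get("/")) // placeholder request

  // First run: long, slow ramp from 0 to past the expected saturation point.
  // For the later runs, swap in one of these profiles (one per run):
  //   N +/- 10% confirmation:  rampUsersPerSec(90).to(110).during(30.minutes)
  //   fast ramp, then sustain: rampUsersPerSec(0).to(100).during(5.minutes),
  //                            constantUsersPerSec(100).during(45.minutes)
  //   nightly job at 90% of N: constantUsersPerSec(90).during(45.minutes)
  setUp(scn.inject(rampUsersPerSec(0).to(150).during(45.minutes)))
    .protocols(httpProtocol)
    .assertions(global.failedRequests.percent.lte(1)) // example threshold; tune from your analyzed runs
}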
In the event that a change deployed to the stress environment negatively impacts performance, even by a relatively small amount, your nightly job will fail.
And if your value of N (the saturation point) is less than 10x peak production load, then you already know what will happen when you test at that level; there is no point in running that test.
If the team ever deploys a change which is intended to improve performance, then you would re-do the ramp tests to figure out the new saturation point, and adjust your nightly job’s target rate accordingly.
Hope that helps.