I’ve been getting into Gatling recently for load testing at my company, and I’m really happy with the functionality and power. It’s been terrific so far. But recently I’ve been trying to implement some functional specs, and I’m not quite getting it.
This feels like a clueless question, but how do I programmatically set a session value before a functional spec executes?
With a Simulation I understand that I can use exec() or a feeder, but I’m scratching my head over how to do it with a spec.
For example, say I use a JSON file with a parameterized value “${docId}” that I want to generate before the spec executes. It gets inserted into the request body and echoed back in the response, and I then want to validate it:
spec {
  http("some name")
    .post("some endpoint")
    .header("my-header-name", "my-header-value")
    .body(ElFileBody("my-file.json")) // this file has the parameterized ${docId} value
    .check(
      // compare the docId echoed back in the response against the value in the session
      jsonPath("$.docId").is("${docId}")
    )
}
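To make it concrete, here's roughly what I mean by the Simulation approach. This is just a sketch: the my-file.json contents, the endpoint names and the UUID-based docId are made-up placeholders, not my real setup.

// my-file.json might look something like this (hypothetical template, using the Gatling EL placeholder):
// { "docId": "${docId}" }

// In a plain Simulation (with the usual io.gatling.core.Predef._ and
// io.gatling.http.Predef._ imports) I could seed the session before the request,
// e.g. with a session function (a feeder would work too):
val scn = scenario("doc round-trip")
  .exec(session => session.set("docId", java.util.UUID.randomUUID.toString))
  .exec(
    http("some name")
      .post("some endpoint")
      .body(ElFileBody("my-file.json"))
      .check(jsonPath("$.docId").is("${docId}"))
  )

With a spec, though, I can't see where the equivalent of that exec(session => ...) step would go.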
The functional spec is more of a toy/PoC.
It doesn’t cover all the features in the standard DSL and I’m not even sure/convinced of its value.
Feedback/contributions welcome!
I think it was a good idea, but they're separate use cases. For acceptance tests you'd want to run them in-browser using Playwright or Puppeteer or something, because of the frontend JS, and run them headless and, hopefully, in parallel. Probably a good idea to run them test-first, too.
I'd want eyeballs on the large load tests, and I'm not sure I'd put them in the pipeline. Low-level load tests I certainly would, though.
Gatling shines above *all* other performance test tools, and I would be uncomfortable running anything else and then pronouncing on a release. The same goes for its built-in metrics and distribution mechanism. I wouldn't put my head on the block with anything else, and I have run tests against some of the biggest sites in the UK.
I don’t see it as a toy - the check() syntax actually makes it a powerful functional test tool.
For example, in one test “suite” I have a series of specs in an HttpFunSpec. The first spec takes an input JSON file (with some parameterized values) and submits it to a REST endpoint, which processes and persists the data and returns the constructed object; the checks take values from that response and save them to the session. The following specs hit the various other endpoints in the service, which return objects built from that first request, and use a comprehensive series of jsonPath checks to compare the response data against the values stored in the session. The tests are all driven from the input data, with clean separation between test data and test logic, so I just have a set of input files that run against the same functional test to fully verify the functionality.
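In outline, the pattern looks something like this - a simplified sketch, where the endpoints, file names and fields are invented and not our actual service:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.http.funspec.GatlingHttpFunSpec

class DocumentFunSpec extends GatlingHttpFunSpec {

  val baseUrl = "http://localhost:8080"

  // First spec: submit the parameterized input file and save values
  // from the persisted object in the response into the session.
  spec {
    http("create document")
      .post("/documents")
      .body(ElFileBody("create-document.json"))
      .check(
        jsonPath("$.docId").saveAs("docId"),
        jsonPath("$.title").saveAs("title")
      )
  }

  // Later specs: hit the other endpoints and compare their responses
  // against the values saved in the session by the first spec.
  spec {
    http("fetch document")
      .get("/documents/${docId}")
      .check(
        jsonPath("$.docId").is("${docId}"),
        jsonPath("$.title").is("${title}")
      )
  }
}

The specs run in sequence as a single scenario, so whatever the first spec saves with saveAs is available to the EL expressions in the later ones.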
I have no idea if this is how it was envisaged to be used, but it’s actually quite elegant and clean. But there are some niggles currently, and I would love to see this developed further.
I wish I could post some of the test code to better show what I mean, but I’m afraid my company prohibits it.