Integration Testing Gist

So, I’ve written a small library that makes my life loads easier when scripting integration tests.

I’ve stripped it down to the core and posted the gist here: https://gist.github.com/sechastain/9766912

It basically allows you to block out your tests pretty nicely.

I had been feeling like gatling caused a bunch of white space and boilerplate in my code.

When passing checks, be sure to declare your list type, e.g.

step("Some description", false, "/my/service", checks = List[HttpCheck](...))
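Why the annotation helps: without it, Scala infers the least upper bound of the list's elements, which for unrelated case classes is an awkward compound type rather than the check supertype you meant. A self-contained sketch of the effect (the check types below are illustrative stand-ins, not Gatling's actual classes):

```scala
// Illustrative stand-ins; these are NOT Gatling's actual check types.
trait Check
case class StatusCheck(expected: Int) extends Check
case class JsonCheck(path: String) extends Check

object ChecksDemo {
  // A step-like function whose checks parameter defaults to empty.
  def step(description: String, checks: List[Check] = Nil): Int = checks.size

  def main(args: Array[String]): Unit = {
    // Writing List[Check](...) pins the element type up front instead of
    // letting inference compute a least upper bound (for two unrelated
    // case classes that's Check with Product with Serializable).
    val n = step("mixed checks",
      checks = List[Check](StatusCheck(200), JsonCheck("$.username")))
    println(n) // 2
  }
}
```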

Anyhow - thought some folks might find it useful. I don’t know how useful it will be without modification, though - e.g. my jsonp implementation and assumptions in the default checks probably differ from yours; I also have a load of other utility functions in my IntegrationUtils to do some routine session creation / dumping, structure checking, etc.

Gatling has proven to be a very useful tool for integration testing, in addition to being an all-around good load testing tool!

–Spencer

That's fun!

I had been feeling like gatling caused a bunch of white space and boilerplate in my code.

Such as? Feedback welcome.

Well, when I started out with gatling, I just adopted the style presented
here:

val scn = scenario("My Scenario")
  .exec(
    http("My Request")
      .get("/my_path") // Will actually make a request on "http://my.website.tld/my_path"
  )
  .exec(
    http("My Other Request")
      .get("http://other.website.tld") // Will make a request on "http://other.website.tld"
  )
  ...

Query params, checks, all that would get their own line. execs were essentially one block of multi-line boilerplate. There's what I call a high "shift count" (how many times I have to reach for the shift key) with that style. And it doesn't scan well because of how long the blocks would get and all the parens.

Since I was doing the same kind of thing over and over (naming, uri-ing, methoding, making http vs https distinctions, etc.), I figured it made a great case for dropping all the parens by providing a one-size-fits-all parameterized way of doing all the things with a good set of defaults. E.g., I've got a thing I want to do (let's call it blah), where I'll be getting a URL and checking that it's 200.

When I need to do more than that, the function was structured such that the
things I was going to change up the most were in order (e.g. change the
method, provide body, change the checked status). And after a while, there
were some params I would just always name because I felt it read better
(specifically, headers, params, and checks).

So my blocks became very easy to write and read as single lines. When they were more than one line, it was mostly checks blocks and occasionally param blocks.
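The shape described above can be sketched in plain Scala. This is not the gist's actual code, and the meaning of the second positional flag is a guess from the examples in this thread; it just shows the "most-changed params first, everything else defaulted and named" idea:

```scala
// Illustrative stand-in for a check; not Gatling's type.
case class Check(description: String)

case class Step(
  description: String,
  pauseAfter:  Boolean, // guessed meaning of the second positional arg
  path:        String,
  method:      String,
  body:        Option[String],
  stat:        Int,
  headers:     Map[String, String],
  params:      Map[String, String],
  checks:      List[Check])

object StepDsl {
  // The things changed most often come first, positionally; everything
  // else gets a default and is named at the call site when needed.
  def step(description: String,
           pauseAfter:  Boolean,
           path:        String,
           method:      String = "get",
           body:        Option[String] = None,
           stat:        Int = 200,
           headers:     Map[String, String] = Map.empty,
           params:      Map[String, String] = Map.empty,
           checks:      List[Check] = Nil): Step =
    Step(description, pauseAfter, path, method, body, stat,
      headers, params, checks)
}
```

With a signature like that, the common case stays a one-liner, `step("Logout", true, "/logout", "post")`, while the occasional multi-line block is a named `checks` or `params` argument.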

Also, the nomenclature I used (createTest with a list of steps) has proven to be more natural and easier for my teammates and other co-workers to understand.

Just one question, why don't you use assertions?

What would be your proposed use-case for assertions?

In general, I only care about testing each step and its validation checks.
If all those pass, then we're good.

--Spencer

More than boilerplate, I think that’s a powerful feature of Gatling :)

We need this modular, immutable DSL, so that people can write prototypes: e.g. a request val with all common features, and other requests that would extend it by simply chaining other method calls.

Also, IMHO having a builder-style DSL makes it easier to learn, instead of trying to find out what parameter n is.

This way, everyone is free to build his own API on top of our DSL that suits his coding style (I discovered the /: reversed foldLeft with your code, for me, this is… surprising) and his application under test.

One special mention about url/queryParams:

  • you can pass a query directly in the url, so you don’t necessarily have to use queryParam
  • we might change things here in 2M5. The thing is underlying AHC tries to URLEncode the query and the queryParams, and usually does a poor job. And this parsing comes with a performance penalty. I’m working on introducing different strategies.
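To illustrate why the encoding strategy matters (a stdlib-only sketch, not AHC's actual code): `java.net.URLEncoder` applies form encoding, which is not the same as URI encoding — spaces become `+`, for instance — which is one way a naive encoder can do a poor job on a query:

```scala
import java.net.URLEncoder

object QueryDemo {
  // Build a query string from readable key/value pairs,
  // form-encoding each key and value.
  def query(params: List[(String, String)]): String =
    params
      .map { case (k, v) =>
        URLEncoder.encode(k, "UTF-8") + "=" + URLEncoder.encode(v, "UTF-8")
      }
      .mkString("&")

  def main(args: Array[String]): Unit =
    println(query(List("q" -> "a b", "path" -> "/x/y")))
    // q=a+b&path=%2Fx%2Fy  ("+" for space: form encoding, not URI encoding)
}
```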

I don’t want you to think I’ve replaced all my scenarios with this library. Many of my load tests are still written in the exec block style because of the different things I need to accomplish.

But for integration tests where you define a test as a series of simple steps, this little library has made things much easier for my team to read and understand.

So I think your point about the modular DSL works here.

Everybody always remarks on my use of /:; it’s how Odersky introduces folding in the Scala book :).
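For anyone puzzled by the operator: `(z /: xs)(op)` is exactly `xs.foldLeft(z)(op)` (the symbolic alias was later deprecated in Scala 2.13). A toy builder, not Gatling's actual `Scenario`, shows why it reads naturally for chaining steps onto a scenario:

```scala
object FoldDemo {
  // A tiny builder standing in for a scenario under construction.
  case class Scenario(steps: List[String]) {
    def exec(step: String): Scenario = Scenario(steps :+ step)
  }

  def build(names: List[String]): Scenario =
    // Threads the empty scenario through exec once per step name;
    // identical to names.foldLeft(Scenario(Nil))((scn, n) => scn.exec(n)).
    (Scenario(Nil) /: names)((scn, name) => scn.exec(name))

  def main(args: Array[String]): Unit =
    println(build(List("login", "browse", "logout")).steps.mkString(","))
    // login,browse,logout
}
```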

Sorry for the typos - I was on my phone and accidentally hit send.

I’m not looking at my code at the moment, but I think I leaned towards queryParam because of URLEncoding. Plus, the query params list is just loads easier to read than a long url.

I wanted to add a few more notes on integration tests vs load testing.

So load testing is, obviously, about bringing load against a web app. To do that most effectively, you want to bring in a lot of different requests. And you’d like to mimic user behavior. This is why Feeders and options beyond just exec-ing an http request (e.g. random switch, pause, if-else, etc.) are so good.

With integration testing, you define a scenario ahead of time that you want to be executed more-or-less as-is with as little variability as possible.

For example, there should be no (or very little) variability in the steps taken. Thus random switching, if-else’s, etc aren’t really useful.

  • I have been thinking about introducing groups, though, for readability.
  • I place pauses as a matter of course after anything not a get, though I can foresee the possibility of needing to override that in the future.
      • Right now, 200 ms is ample time for back-end replication to occur so that secondary reads produce the correct results.

Variability, if there is to be any, should be data-variability. Feeders are a bit overkill here because simply defining your data including variability prior to test creation is sufficient and more readable. Also, the variability is more easily reflected in the documentation (e.g. scenario name and test step names) so that if there are failures, some of the key features of the failure are immediately available.
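That data-first shape can be sketched like this (names here are illustrative, not from the gist): the variability lives in a plain list declared up front, and it flows into the step descriptions, so a failing step already names the datum that broke:

```scala
object DataDemo {
  case class Step(description: String, path: String, stat: Int)

  // The data, variability included, written out before any test is created.
  val users: List[(String, Int)] = List("scott" -> 1, "tiger" -> 2)

  // Each datum becomes a step whose name documents the variability.
  val steps: List[Step] = users.map { case (user, themeId) =>
    Step(s"Check theme $themeId for $user", "/profile/preferences", 200)
  }

  def main(args: Array[String]): Unit =
    steps.foreach(s => println(s.description))
}
```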

And, I mean, come on. This is just sexy:

createTest("Gimme a test",
  step("Login user", true, "/login", "post", new StringBody("""{"user":"scott","password":"tiger"}""")),
  step("Check preferences", false, "/profile/preferences", checks = List[HttpCheck](
    jsonPath("$.username").is("scott"),
    jsonPath("$.theme_id").ofType[Int].is(1)
  )),
  step("Logout", true, "/logout", "post"),
  step("Check logout", false, "/profile/preferences", stat = 401)
)

It’s just tight code.

My current integration suite is about 400 test steps in its most rigorous test case. When everything is right, that entire suite runs in less than a minute (even accounting for dependency download and compilation). And the nice thing is, as I add more tests, I don’t anticipate the time to run to increase very much. Whereas with other suites that my team had used previously, integration tests would always take minutes to run because nothing was parallelized.

I also have a sub-set of tests from it that we use for smoke testing which I find to be a nice perk.

I'm not looking at my code at the moment, but I think I leaned towards queryParam because of URLEncoding.

One of the things I want to fix in AHC. Plus, java.net.URI is dead slow.

Plus, the query params list is just loads easier to read than a long url.

+1


Must... resist... investigating... priority to 2M4... later...

But yeah, that's cool!