Setup Scenario before load test scenario

Hello,

In my test case scenario I have some setup data required before actually beginning the load test.

So I have something like this

```scala
setUp(
    setupScenario.inject(atOnceUsers(1)).protocols(HttpConfig()),
    theScenario.inject(rampUsers(100) over (10 seconds)).protocols(HttpConfig())
).assertions(
    global.successfulRequests.percent.is(100)
)
```

However, I want to make sure the setup is complete before the actual test scenario begins, as the server side will not behave as expected unless the setup scenario has run and its data has been created.

It seems that, the way this currently works, both scenarios are eligible for execution immediately and begin executing concurrently!

So I came up with something.

```scala
import java.util.concurrent.atomic.AtomicBoolean

import io.gatling.core.Predef._
import io.gatling.core.structure.ChainBuilder

import scala.concurrent.duration._

object SetupLock {

    private val setupComplete: AtomicBoolean = new AtomicBoolean(false)

    /**
      * Unlock the setup lock, allowing those waiting for setup to finish to proceed.
      * Append this as the last step of the setup scenario.
      */
    def unlock(): ChainBuilder = {
        exec((session: Session) => {
            setupComplete.set(true)
            session
        })
    }

    /**
      * Call this early in a scenario to wait until the setup lock is unlocked.
      * Once it returns, you know that setup has finished.
      */
    def waitForSetup(): ChainBuilder = {
        asLongAs((_: Session) => !setupComplete.get()) {
            pause(1 second)
        }
    }
}
```
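Outside the Gatling DSL, the handshake above boils down to an `AtomicBoolean` plus a polling loop. Here is a minimal plain-Scala sketch of that same pattern (the names `SetupLockDemo` and `raceDemo` are mine, purely illustrative), showing that the waiting side cannot proceed before the setup side unlocks:

```scala
import java.util.concurrent.atomic.AtomicBoolean
import scala.collection.mutable.ListBuffer

// Plain-Scala sketch of the polling handshake, stripped of the Gatling
// DSL so the ordering guarantee is easy to see. One thread plays the
// "setup" user, the other the load user that waits for it.
object SetupLockDemo {
  private val setupComplete = new AtomicBoolean(false)

  /** Called by the "setup" side once its work is finished. */
  def unlock(): Unit = setupComplete.set(true)

  /** Called by the "load" side; polls until setup has unlocked. */
  def waitForSetup(pollMillis: Long = 10): Unit =
    while (!setupComplete.get()) Thread.sleep(pollMillis)

  /** Runs both sides on separate threads and returns the event order. */
  def raceDemo(): List[String] = {
    val events = ListBuffer.empty[String]
    def log(s: String): Unit = events.synchronized { events += s }

    val loadUser = new Thread(() => { waitForSetup(); log("load-start") })
    loadUser.start()
    Thread.sleep(50)   // pretend the setup work takes a while
    log("setup-done")
    unlock()           // release the waiting load user
    loadUser.join()
    events.toList
  }
}
```

Whatever the thread interleaving, `"load-start"` can never be recorded before `"setup-done"`, because the waiter only proceeds after the flag is set.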

So, to sum up:

  1. Did I misread the docs? Is there a built-in way to make this kind of dependency between scenarios work?
  2. Are there any holes/issues you can see in my proposed solution? Did I miss something?

How long does your setup scenario take to run? To keep things simple, why don’t you simply delay the other scenario?

e.g.

```scala
setUp(
    setupScenario.inject(atOnceUsers(1)).protocols(HttpConfig()),
    theScenario.inject(nothingFor(60 seconds), rampUsers(100) over (10 seconds)).protocols(HttpConfig())
).assertions(
    global.successfulRequests.percent.is(100)
)
```

Thanks,
Barry

Yes, I had thought about that. Personally, I have had bad experiences with arbitrary delays in tests in the past. One problem is determining the right delay: if you don't make it long enough, the tests become fragile and fail intermittently, so you tend to be conservative and go for the worst-case time, which means that most of the time the test is waiting for nothing.

Apologies for the necromancy, but I’m currently struggling through the exact same issue of how to chain calls that run sequentially without using a timed wait.
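For anyone who finds this thread later: if you can upgrade, Gatling 3.4 introduced `andThen`, which starts a child injection only after every user of the parent scenario has terminated; that is exactly the sequential dependency asked about here, with no lock and no timed wait. A sketch, assuming the same `setupScenario`, `theScenario`, and `HttpConfig()` definitions as in the snippets above (note that Gatling 3.x replaced `over` with `during`; check the docs for your version):

```scala
// Sketch using Gatling's andThen (Gatling 3.4+): theScenario is
// injected only once all setupScenario users have finished.
setUp(
  setupScenario.inject(atOnceUsers(1))
    .andThen(
      theScenario.inject(rampUsers(100) during (10.seconds))
    )
).protocols(HttpConfig())
 .assertions(global.successfulRequests.percent.is(100))
```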