Asynchronous execution

Hi,

At some point in my simulation, I need to wait for a minute or so for a message to be processed.

I’m currently sleeping for a minute and then looping in a while loop until the value I’m looking for appears.

The problem is that, of course, this freezes everything because the Actor is sleeping.

I can see two solutions to that:
1- Having as many actor instances as I have users. Dumb, but it would work.

2- Being able to sleep asynchronously to give a chance to other users to use the actor. It’s a bit like being able to do:

.doWhile("${myValue}" == "") {
  exec(
    pollingForMyValue()
  )
  .pause(1 second)
}

How can I do that?

Thanks

Henri

I had a slightly different idea along the same lines last night.

We ran into a problem yesterday where, in production, our clients are making asynchronous requests at faster rates/closer intervals than we had tested, and that created some errors we hadn’t seen before.

I was able to hack together a gatling test, utilizing two different scenarios, to recreate the problem.

But I think the better solution would be to have one scenario that utilized an async version of exec:

asyncExec(
  exec( /* async req 1 */ ),
  exec( /* async req 2 */ ),
  exec( /* async req 3 */ )
)

My thought is that when the async block executes, all three execs are launched at the same time, and asyncExec will not continue execution until it has joined all three (or however many) spawned chains.

I think there’s probably some cross-over in the innards that would have to happen to make either of our feature requests happen.

The tricky stuff about asynchronous flow branches is state reconciliation:
where and how?

@Spencer What I have in mind looks like:
exec(
  asyncHttp(
    http("request1")...,
    http("request2")...,
    http("request3")
  )
)

This construction would allow checks (and saves) to be resolved all at
once at the gather point. It would also allow us to cap the number of
concurrent async requests, just like a browser does.

@Henri
Your problem is slightly different, and it would be very difficult to
implement (no static gather point). As a workaround, you could send the
lookup request every time you send a request from the main flow branch
(write a def to make this easier).
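
A minimal sketch of such a def, assuming Gatling 2's HTTP DSL (the names `withValueLookup`, `pollForValue`, the `/my-value` endpoint, and the regex are all made up; exact method signatures may differ between versions):

```scala
// Hypothetical helper: piggy-back the lookup request onto every
// main-flow request, skipping it once the session already holds the value.
def withValueLookup(mainRequest: ChainBuilder): ChainBuilder =
  exec(mainRequest)
    .doIf(session => !session.contains("myValue")) {
      exec(
        http("pollForValue")            // hypothetical request name
          .get("/my-value")             // hypothetical endpoint
          .check(
            regex("""value=(\w+)""")    // hypothetical extraction
              .optional
              .saveAs("myValue")
          )
      )
    }
```

The main flow then wraps each of its requests in `withValueLookup(...)` instead of calling `exec(...)` directly, so the poll rides along without blocking anyone.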

I think that would meet my needs!

–Spencer

Bumping an old thread.

Started doing some stress testing that models some mobile app behavior, which basically has numerous async chains.

I wasn’t sure if something had slipped into the 2.0 mainline that would allow this to be modeled - either way, I think this is still a good idea.

If it hasn’t been done yet, I don’t mind rolling up my sleeves on this in a bit and giving it a shot.

–Spencer

We can already do scatter-gather (like we do for page resources).
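
For reference, a sketch of that existing scatter-gather style via the resources mechanism, assuming Gatling 2's HTTP DSL (request names and URLs are made up):

```scala
// The resources are fetched concurrently, and the flow only
// continues once all of them have completed.
exec(
  http("main page")
    .get("/index.html")
    .resources(
      http("request1").get("/async/1"),
      http("request2").get("/async/2"),
      http("request3").get("/async/3")
    )
)
```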

I’m in the process of implementing WebSocket support and damn, that’s hard once you want to break out of the request-response model. But I’m making good progress; I “just” have the checks left to design and implement.

What do you want exactly? I don’t think we’ll go further than simple request scatter-gather for now; otherwise it would bring way too much complexity.

Scatter-gather may be sufficient.

How do I do it?

–Spencer

It’s very simple for now: https://github.com/excilys/gatling/blob/master/gatling-http/src/test/scala/io/gatling/http/HttpCompileTest.scala#L95-L100