Asynchronous REST service

We are trying to load test an asynchronous REST service. We'd like to measure not only HTTP response times, but also processing times, derived from events generated as the original request is processed in the background. These events are written to database tables with timestamps.

What we currently have is pretty crude (it's our first attempt at using Gatling). After executing the HTTP request and verifying that it returned successfully, we loop (using asLongAs) until we observe the expected events or reach a maximum number of attempts. We store the various event times in Gatling's session and dump the results to a CSV file at the end (we're fine with just recording data points and doing our own reporting).
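
To make this concrete, here is a stripped-down sketch of the scenario (endpoint, JSON path, SQL and connection details are placeholders, not our real ones):

import java.sql.{DriverManager, Timestamp}

import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class AsyncServiceLoadTest extends Simulation {

  val maxAttempts = 20

  // blocking JDBC lookup of the event timestamps recorded so far for one request
  def lookUpEventTimes(requestId: String): Map[String, Timestamp] = {
    val conn = DriverManager.getConnection("jdbc:postgresql://dbhost/app", "user", "secret")
    try {
      val stmt = conn.prepareStatement(
        "SELECT event_type, event_time FROM events WHERE request_id = ?")
      stmt.setString(1, requestId)
      val rs = stmt.executeQuery()
      val events = Map.newBuilder[String, Timestamp]
      while (rs.next()) events += rs.getString(1) -> rs.getTimestamp(2)
      events.result()
    } finally conn.close()
  }

  val scn = scenario("async request")
    .exec(
      http("submit")
        .post("/api/jobs")
        .check(status.is(200))
        .check(jsonPath("$.requestId").saveAs("requestId")))
    .exec { session => session.set("attempts", 0).set("done", false) }
    // poll the database until the expected events show up or we give up
    .asLongAs(session =>
        !session("done").as[Boolean] && session("attempts").as[Int] < maxAttempts) {
      pause(500.milliseconds)
        .exec { session =>
          val events = lookUpEventTimes(session("requestId").as[String])
          session
            .set("eventTimes", events)
            .set("done", events.nonEmpty)
            .set("attempts", session("attempts").as[Int] + 1)
        }
    }
    // ...a final exec appends session("eventTimes") to our CSV file

  setUp(scn.inject(constantConcurrentUsers(10).during(5.minutes)))
    .protocols(http.baseUrl("http://service.example.com"))
}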

I'd be interested in hearing suggestions for doing this the 'right' way. Aside from not being particularly elegant, this approach makes it difficult to simulate our real-life use case, where a fixed number of clients make requests as fast as they can (meaning they issue a new request within milliseconds of the previous one returning). The problem with the way we have things rigged up is that a Gatling virtual user waits until the request has been fully processed before issuing a new one, instead of issuing it as soon as the request has returned. It seems like recording event times shouldn't be part of the scenario at all, but we need an identifier from the response to match up the associated events.

Thanks,
Alex

Can’t you store the request ids in a concurrent queue, and resolve the events when the simulation is done?
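
Roughly, inside your Simulation (endpoint and JSON path are just illustrative):

import java.util.concurrent.ConcurrentLinkedQueue

// shared by all virtual users; only the id is collected while the run is going on
val requestIds = new ConcurrentLinkedQueue[String]()

val scn = scenario("submit")
  .exec(
    http("submit")
      .post("/api/jobs")
      .check(status.is(200))
      .check(jsonPath("$.requestId").saveAs("requestId")))
  // no blocking work here: remember the id and move on to the next request
  .exec { session =>
    requestIds.offer(session("requestId").as[String])
    session
  }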

When I thought of that solution, it wasn’t clear to me where I would check for the events, if not as part of the simulation. Looking a bit deeper, an ‘after’ block seems like a good fit. Is that what you’re suggesting?

For example. Or even outside Gatling. How do you fetch the events? Web API? JDBC?

JDBC. I’d like to keep it all inside Gatling.

OK, so DON'T do it the way you've been doing it: JDBC is a blocking protocol (unless you're using some fancy asynchronous driver, and even then you have to use it properly), so making those calls inside the scenario harms the load test.

Populating a concurrent queue (or dumping the ids elsewhere: a file, Redis) during the run, then fetching the events in an "after" block once everything is done, LGTM.
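
Something along these lines, reusing the requestIds queue populated during the run (table, column and connection details are made up):

// runs once, after all virtual users are done, so blocking JDBC no longer hurts
after {
  val conn = java.sql.DriverManager.getConnection("jdbc:postgresql://dbhost/app", "user", "secret")
  val writer = new java.io.PrintWriter("event-times.csv")
  try {
    val stmt = conn.prepareStatement(
      "SELECT event_type, event_time FROM events WHERE request_id = ?")
    var id = requestIds.poll()
    while (id != null) {
      stmt.setString(1, id)
      val rs = stmt.executeQuery()
      while (rs.next()) {
        writer.println(s"$id,${rs.getString(1)},${rs.getTimestamp(2)}")
      }
      rs.close()
      id = requestIds.poll()
    }
  } finally {
    writer.close()
    conn.close()
  }
}

That way the only thing a virtual user pays for during the load phase is an offer() on the queue; all the expensive lookups happen once the clocks have stopped.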

I see. Thanks.

I’m in the same situation.
Could you describe how to measure the background processing time?