We are trying to load test an asynchronous REST service. We’d like to measure not only HTTP response times but also the processing times recorded by events generated while the original request is processed in the background. These events are written to database tables and have timestamps.
What we currently have is pretty crude (it’s our first attempt at using Gatling). After executing the HTTP request and verifying that it returned successfully, we loop (using asLongAs) until we observe the expected events, or until we reach the maximum number of attempts. We store the various event times in Gatling’s session and dump the results to a CSV file at the end (we’re fine with just recording data points and doing our own reporting).
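To make the current approach concrete, the polling logic looks roughly like the sketch below, written as plain Scala rather than the Gatling DSL so it stands alone. The lookup function standing in for our DB query, the event names, and `pollForEvents` itself are illustrative assumptions, not part of any Gatling API.

```scala
object EventPolling {
  // Stand-in for the DB query: returns the timestamps of the events
  // observed so far for a given request id (hypothetical signature).
  type EventLookup = String => Map[String, Long]

  // Poll until every expected event has been seen, or until maxAttempts
  // is exhausted; returns whatever was observed (possibly incomplete).
  def pollForEvents(requestId: String,
                    expected: Set[String],
                    lookup: EventLookup,
                    maxAttempts: Int): Map[String, Long] = {
    var attempts = 0
    var seen = Map.empty[String, Long]
    while (attempts < maxAttempts && !expected.subsetOf(seen.keySet)) {
      seen = lookup(requestId) // in the real scenario, a DB round-trip
      attempts += 1
    }
    seen
  }
}
```

In the actual simulation this loop runs inside asLongAs, which is exactly what keeps the virtual user occupied until background processing finishes.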
I’d be interested in hearing suggestions for doing this the ‘right’ way. Aside from not being particularly elegant, this approach makes it difficult to simulate our real-life use case: a fixed number of clients making requests as fast as they can, meaning each one issues a new request within milliseconds of the previous request returning. The problem with our current setup is that a Gatling client waits until the request has been fully processed before issuing a new one, rather than issuing it as soon as the response returns. It seems like recording event times should not be part of the scenario at all, but we need an identifier from the response to match up the associated events.
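One shape the decoupling could take: the virtual user only hands the response identifier to a background collector (a non-blocking enqueue) and immediately proceeds to its next request, while a separate thread drains the queue, polls the database for event times, and writes the CSV. This is a minimal sketch under those assumptions; `EventCollector`, the lookup function, and the record callback are hypothetical names, not Gatling features.

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

// Background collector: scenario threads enqueue request ids; a worker
// thread polls for the associated event times and records them.
final class EventCollector(lookup: String => Map[String, Long],
                           record: (String, Map[String, Long]) => Unit) {
  private val pending = new LinkedBlockingQueue[String]()
  @volatile private var running = true

  // Called from the scenario right after the HTTP response arrives;
  // non-blocking, so the user can fire its next request immediately.
  def submit(requestId: String): Unit = pending.put(requestId)

  private val worker = new Thread(() => {
    // Keep draining until shutdown AND the queue is empty.
    while (running || !pending.isEmpty) {
      val id = pending.poll(100, TimeUnit.MILLISECONDS)
      if (id != null) record(id, lookup(id)) // e.g. append a CSV row
    }
  })
  worker.setDaemon(true)
  worker.start()

  def shutdown(): Unit = { running = false; worker.join() }
}
```

In a Gatling scenario the `submit` call would live in an exec block that reads the identifier out of the session; the collector would be shut down in the simulation’s after hook so any still-pending ids are drained before the CSV is finalized.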