Long-running async request timing - anyone have any thoughts?

Hi, All.

I searched the forum for this topic and didn’t find anything that really addressed my issue. I’d be happy to hear any thoughts on this.
Some background: I’ve been using Gatling for a couple of years and am currently upgrading to 3.0.3. Before that I used JMeter, and before that, LoadRunner and Performance Center for years. This isn’t my first rodeo.

Here’s my issue:

I am building a modular framework in Gatling to test our web-based application. The app is used for large-scale data analytics and can crunch reports from data stores that are terabytes in size. We have customers in many different vertical markets.

Most of the UI consists of typical user transactions: navigation and the like. These, of course, are easy to time, since that’s the basic functionality of tools like Gatling.

Report generation is another story. A report can take hours to complete, and I need to assess the effect of user load on report completion time. The report jobs are kicked off asynchronously: the request that starts a job typically returns in 5-9 seconds, but that tells me nothing about how long the job itself actually runs.
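For context, the kick-off today is timed roughly like this (Gatling 3.x Scala DSL; the endpoint, payload, and JSON path below are made-up placeholders, not our real API):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// How the kick-off is timed today. Gatling records the 5-9 s "Start report job"
// response time, but nothing about how long the report job itself runs.
val startReport = exec(
  http("Start report job")
    .post("/api/reports")
    .body(StringBody("""{"report": "quarterly-rollup"}""")).asJson
    .check(jsonPath("$.jobId").saveAs("jobId")) // keep the job id for correlation later
)
```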

If I were using a test automation tool such as Selenium, I could just send request after request to the UI, checking the text on the page until the report shows up. But that’s not a viable option for performance testing.
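For reference, the closest Gatling equivalent to that Selenium approach I can come up with is a polling loop wrapped in a group, since (as I understand it) a group’s duration includes pauses and would therefore span the whole job, while each poll is still reported as its own short request. This is only a sketch: the status endpoint, the "status"/"DONE" field and value, the 30-second interval, and the 4-hour cap are all invented for illustration, and I’m not convinced polling like this for hours under load is sensible, which is really the heart of my question.

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Selenium-style polling translated to Gatling (sketch only).
// The group's duration would cover first poll through the "DONE" response.
val waitForReport = group("Report job") {
  doWhileDuring(
    session => session("reportStatus").asOption[String].forall(_ != "DONE"), // loop until DONE
    4.hours                                                                  // safety cap
  ) {
    pause(30.seconds)
      .exec(
        http("Poll report status")
          .get("/api/reports/${jobId}") // jobId saved by the kick-off request
          .check(jsonPath("$.status").saveAs("reportStatus"))
      )
  }
}
```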

Does anyone out there have recommendations for measuring the end-to-end completion time of long-running asynchronous user requests like these?

Thanks!
Tom