Including latency/response times in console output

Is there any way to increase the amount of data in the console output to include response time/latency? Even a simple average would suffice.

Our test cases can run for quite some time. Seeing the average response time in real time through the console, as opposed to waiting for it to be collected and parsed by our reporting tool, would make early diagnosis of errors and automatic validation of results possible.

JMeter, for example, provides comprehensive output around data throughput, including latency. It would be very useful if we could do the same in Gatling.


Have you looked into Graphite? It works great! No code changes necessary.

There are some Docker images and sample conf here.
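For reference, pointing Gatling at Graphite is purely a configuration change. A minimal gatling.conf fragment might look like the following (keys as they appear in Gatling 2.x; the host name is an assumption for your environment):

```
gatling {
  data {
    writers = [console, file, graphite]  # add the graphite writer to the defaults
    graphite {
      host = "graphite.example.org"  # assumption: your Graphite/Carbon host
      port = 2003                    # default Carbon plaintext port
      writePeriod = 1                # push metrics every second
    }
  }
}
```

Gatling then streams response-time metrics to Graphite while the test runs, which gives a real-time view without touching the console writer.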

Contributions welcome.
Note that the average value is a completely useless metric; only percentiles make sense.

The problem I have is that we use an existing monitoring tool that does not parse epoch timestamps. I have raised a similar topic about defining the timestamp format as a datetime, but apparently what I am trying to do is wrong(?). Another reporting tool is not possible, as the machine this runs on has no direct access or desktop for security purposes.

I have to disagree there. Many alternatives offer reporting on an average in real time; it is more an indication of problems than true reporting. Why is everyone on here saying I am doing things wrong when our existing toolset provides exactly that? Also, percentiles are less meaningful in that respect: if you have one transaction that takes 1000 seconds to respond, you want to see that and investigate while it is happening, yet it will have no impact whatsoever on a percentile.

As mentioned above, alternative reporting is not an option. Please confirm whether it is possible in native Gatling or not. These responses of "you're doing it wrong" or "use this tool instead", when our existing solution provides exactly this and existing processes dictate it, move me further and further away from Gatling as a suitable alternative.

I guess it would be possible to add an option for alternatively supporting a date, but then, as Pierre said, there are impacts in several places.
Please understand that you're the first one to ever ask for this in 4 years, so your request is very specific to your context and as such is not a priority.
As it’s open source, you can contribute. Otherwise, you can contract with us to have it implemented for you.

Regarding mean/average, I suggest you have a look at recorded conference talks by Gil Tene. The average value doesn't give you proper information about your distribution. For example, half of your values could be very good, giving you a decent average, while you miss the point that the other half are bad.
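To make that concrete, here is a tiny illustration (hypothetical numbers) of a bimodal distribution where the average looks fine while a high percentile exposes the slow half:

```python
# Hypothetical response times (ms): half fast, half slow.
samples = [50] * 50 + [950] * 50

mean = sum(samples) / len(samples)                   # 500.0 -- looks "decent"
p90 = sorted(samples)[int(0.9 * len(samples)) - 1]   # simple nearest-rank 90th percentile

print(mean)  # 500.0
print(p90)   # 950 -- reveals the slow half the average hides
```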

Let me expand a little more. In order to access my injectors, which run on Linux, I have to ssh through two other machines to reach them: one is a security layer, the other is the environment gateway. This ensures nothing on the injectors can affect the corporate network or the system under test. The logs are pushed up to a reporting tool using dedicated routes and ports (standard network stuff), but this cannot parse the epoch timestamps present in simulation.log. The injector does not run a desktop, so any access to another web-based reporting tool viewed locally is off the table.

The console shows the number of passes and failures but provides no indication of actual application performance. Some of our tests can run for a couple of hours, and at present in Gatling we have to wait for the test to finish before identifying any end-user impact.

In all honesty, I have worked for a number of large or security-conscious organisations where very similar restrictions and complexity exist, and I have been able to overcome similar problems using JMeter. So either Gatling is not used in such environments as a rule, or internal development has worked around them, something not available to me.

I am also aware of the importance of percentiles from a reporting perspective. However, for real-time monitoring, a percentile will not be affected by a small subset of poor results that could point to areas of concern, especially in a test environment: if you are looking at the 90th percentile and one of your ten load balancers is playing up, you are not really going to see it.

This is easily compared to a baseline: if your 90th percentile is 100 ms and your average is 150 ms, that run is looking good; but your next run could have a 90th percentile of 100 ms and an average of 300 ms, because the top 10% have increased substantially. I am not debunking percentiles completely; each has its own uses in terms of reporting, and I have uses for both.
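The converse case can be illustrated the same way (again, made-up numbers): a single extreme outlier shifts the average dramatically while leaving the 90th percentile untouched:

```python
# Hypothetical response times (ms): 99 healthy responses plus one 100-second outlier.
samples = [100] * 99 + [100_000]

mean = sum(samples) / len(samples)                   # 1099.0 -- the outlier is visible
p90 = sorted(samples)[int(0.9 * len(samples)) - 1]   # 100 -- the outlier is invisible

print(mean)  # 1099.0
print(p90)   # 100
```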

To be honest, I only wanted a "yes, it can be done in the native tool" or a "no, it can't" from these questions, not a discussion about whether what I am doing is right, because it is how we do it now, and for Gatling to work it has to fit in with our existing infrastructure.

We have existing policies and restrictions in place that demand certain data formats and restrict our capabilities, so telling me that something I am forced to do is wrong is not helping (and, based on experience, what I am asking is completely reasonable). I am asking whether something is possible in the native tool, not what other tools I need to deliver it. At present I have one product that fills our needs but makes development difficult; Gatling is easier to develop with but supports none of our reporting needs.

I understand this may seem quite specific, but a lot of large and governmental organisations have similar restrictions, both in security and infrastructure, so if Gatling is looking to stand up to the likes of LoadRunner and JMeter, it needs to be able to address similar needs. Questioning the requirement does not dilute its validity.

If your Gatling box is a Linux box, Perl is available to you. If you script the launching of your tests (a good practice, in my opinion), then you could add to that script a post-processing step to convert the simulation.log to a suitable format prior to pushing the logs up to the reporting infrastructure. It will be less work than trying to modify Gatling to add a switch. The Perl script will take about 20 minutes (tops) to write; you just need to inject it into your workflow. Let me know if you get stuck.
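For what it's worth, the same post-processing step is just as quick in Python, which is equally likely to be present on a Linux injector. A minimal sketch, assuming the epoch timestamps in the log are 13-digit millisecond values (the sample log line below is hypothetical; adjust the output format to whatever your monitoring tool expects):

```python
import re
from datetime import datetime, timezone

EPOCH_MS = re.compile(r"\b\d{13}\b")  # match 13-digit epoch-millisecond fields

def to_datetime(match):
    """Convert an epoch-millisecond field to a human-readable UTC datetime."""
    seconds = int(match.group()) / 1000.0
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]  # trim microseconds to milliseconds

def convert_line(line):
    """Rewrite every epoch-ms timestamp on a log line in place."""
    return EPOCH_MS.sub(to_datetime, line)

# Hypothetical simulation.log-style line with two epoch-ms timestamps:
print(convert_line("REQUEST\thome\t1500000000000\t1500000000123\tOK"))
# REQUEST	home	2017-07-14 02:40:00.000	2017-07-14 02:40:00.123	OK
```

Wrapping that in a loop over the file's lines inside your launch script gives you the converted log before it is pushed upstream.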