Gatling report response time is not correct

I use gatling-maven-plugin to run performance tests of HTTP requests sent to a web app deployed in Azure. I noticed that the response times in the graphical report are not correct. For example, consider this output:
---- Global Information --------------------------------------------------------

request count 6423 (OK=6423 KO=0 )
min response time 62 (OK=62 KO=- )
max response time 7455 (OK=7455 KO=- )
mean response time 696 (OK=696 KO=- )
std deviation 1351 (OK=1351 KO=- )
response time 50th percentile 90 (OK=90 KO=- )
response time 75th percentile 255 (OK=255 KO=- )
response time 95th percentile 3265 (OK=3264 KO=- )
response time 99th percentile 6775 (OK=6775 KO=- )
mean requests/sec 52.22 (OK=52.22 KO=- )
---- Response Time Distribution ------------------------------------------------
t < 800 ms 5086 ( 79%)
800 ms < t < 1200 ms 262 ( 4%)
t > 1200 ms 1075 ( 17%)
failed 0 ( 0%)
================================================================================
I assume the report is laid out like this:

  1. The total request count is 6423, and the counts at the bottom match: 5086 + 262 + 1075.
  2. For the response time line items, I expect them to show the response time and the actual request count for successful and failed requests, but if you look at “max response time 7455 (OK=7455 KO=- )”, the OK value is larger than the total request count; it seems to be showing the response time instead.
    This makes the report inaccurate: the 99th percentile response time is above 6 seconds, while the response time distribution graph shows almost 70% of responses are under 99 ms.
    Please advise.

BTW: I ran the same script on my local Windows machine as well as over SSH on a Linux Azure web app. The local run looks correct to me, but all the reports run on Linux show the same issue.

it should show the response time and the actual request count

Why?

“Response time” means the time to get a response…
So 3265 is your response time across all responses.
3264 is your response time for OK responses only (maybe a rounding error here).
And you have no KO, so Gatling cannot compute a KO response time.

Perhaps we should add the unit to make it clearer?
Same line, e.g.:

response time 95th percentile 3265ms (OK=3264ms KO=- )

A pull request is welcome (if it does not break our parser, of course)

OK, so it is my misinterpretation of the data.
But I still don’t think it is correct:

  1. 95% is 3265 ms and 99% is 6775 ms; with just a 4% difference, how can the number be twice as big?
  2. And here is the response time distribution graph from the same report:

The graph shows that almost 70% of the response times are around 99 ms, and only a small amount of data is in the 3000 ms area. For 99% of all responses to be around 6 seconds, wouldn’t the whole graph have to be at the far right, near the 6013 mark?
Unless “99th percentile” means something else.

I looked it up: the “99th percentile” is the highest percentile you can get, so it is not the same as a 99% average as I thought.
But there is already a max, which is 7455, so why do we need a “99th percentile” that only represents the high end of the response times?

  1. 95% is 3265 ms and 99% is 6775 ms; with just a 4% difference, how can the number be twice as big?

I think you don’t understand what percentiles are.
Percentiles are thresholds, not counts.
p99 means 99% of your values are equal to or below this value.

Here, it means 4% of your requests are between 3265 ms and 6775 ms. Maybe just a few are close to 6775 ms, maybe most of them; we can’t say.
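
To make the “threshold, not a count” idea concrete, here is a minimal sketch in plain Scala (not Gatling’s actual implementation; the response times are made up to resemble the shape of the report above) of a nearest-rank percentile:

// Minimal sketch, assuming nothing about Gatling's internals: nearest-rank percentiles.
object PercentileDemo {
  // p-th percentile: the smallest sample such that at least p% of all samples are <= it.
  def percentile(samples: Seq[Int], p: Double): Int = {
    val sorted = samples.sorted
    val rank = math.ceil(p / 100.0 * sorted.size).toInt
    sorted(math.max(rank - 1, 0))
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical response times in ms: most requests are fast, a few are very slow.
    val responseTimes = Seq.fill(90)(90) ++ Seq.fill(6)(3000) ++ Seq.fill(4)(6800)
    println(percentile(responseTimes, 50)) // 90   -> half of the requests took 90 ms or less
    println(percentile(responseTimes, 95)) // 3000 -> 95% of the requests took 3000 ms or less
    println(percentile(responseTimes, 99)) // 6800 -> 99% took 6800 ms or less, even though the median is 90 ms
  }
}

With a distribution this skewed, a low median and a high 99th percentile are perfectly consistent, which is exactly what your report shows.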

I misunderstood what the “99th percentile” is; I took it as “the average of 99% of response times”, which is more useful info to me. Let us take a look at some use cases of this info:

  1. If I am a student, my exam score is 90, and the “99th percentile” score of the class is 90, then I should be in the top 1% of students, right? Not exactly: more than half of the students’ scores could all be 90, so this info is not really useful. But if I know that the average score of 99% of the class is 60, that confirms I am in the top half of the class.
  2. As a performance test tool, I want to know more about the “average of the best 99% (or 95%) of response times”. As in the example I provided, if I know the “99% response time” is 2 seconds, I know my overall performance is good; do I really care whether 1% of responses are as slow as 6 seconds (besides, I already know the max response time)? Since the 99th or 95th percentile only reflects the slowest 1% or 5% of response times, this number by itself does not give me any idea what the rest of the response times look like; I have to reference other graphs to get a better picture.
    My point is that people always want to look at the data from different angles; the more data that can paint different pictures, the better.

if I know the “99% response time” is 2 seconds, I know my overall performance is good

No you really don’t.

  • 50 times 1 second
  • 49 times 3 seconds
  • 1 time 10 seconds

Average of the best 99% = (50 * 1 + 49 * 3) / 99 = 1.99

This is below 2 seconds, yet 50% of your values are at or above 150% of your target.
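
A minimal sketch of that counterexample in Scala (the numbers are the hypothetical ones from the list above, not from your report), comparing the “average of the best 99%” with a plain count of slow requests:

// Sketch of the counterexample: the trimmed mean hides half of the slow requests.
object TrimmedMeanVsTarget {
  def main(args: Array[String]): Unit = {
    // 50 times 1 s, 49 times 3 s, 1 time 10 s (hypothetical values).
    val samples = (Seq.fill(50)(1.0) ++ Seq.fill(49)(3.0) ++ Seq(10.0)).sorted

    // "Average of the best 99%": drop the single worst sample, then average the rest.
    val best99 = samples.dropRight(1)
    println(f"average of the best 99%% = ${best99.sum / best99.size}%.2f s") // 1.99 s, under a 2 s target

    // Yet half of the samples sit at or above 3 s, 150% of that target.
    println(s"samples above 2 s: ${samples.count(_ > 2.0)} of ${samples.size}") // 50 of 100
  }
}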

In your case, the 99th percentile is 10 seconds, which is not helpful and totally useless.
Even the 1.99 as the 99% average may also be misleading, but it still paints a better picture than the single 10-second value.
I rest my case then.

Let’s agree to disagree here and stop this conversation.