Percentile analysis


In my dissertation I am comparing two web technology stacks. I am currently trying to assess which one is more scalable using the graphs generated by Gatling. I know that percentiles and averages are two different things, but can you tell me which percentile (50th, 80th, 100th, …) is best to use to draw conclusions about which stack scales better as load increases?

Thank you.

Jamie Tabone

On the applications I test, I am most interested in the 90th percentile; that's the trend I have noticed on most of the applications I have worked on.
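To make the percentile discussion concrete, here is a minimal sketch of how different percentiles summarize the same latency data differently. This is not Gatling output; the sample values and the nearest-rank method are illustrative assumptions:

```python
# Minimal sketch: nearest-rank percentiles over raw latency samples.
# The sample data is invented for illustration.

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list (p in 0..100)."""
    ordered = sorted(samples)
    # Nearest rank: ceil(p/100 * n), converted to a 0-based index.
    k = max(0, min(len(ordered) - 1, -(-p * len(ordered) // 100) - 1))
    return ordered[k]

latencies_ms = [120, 135, 150, 160, 180, 210, 240, 300, 450, 900]

for p in (50, 90, 99):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
# → p50: 180 ms, p90: 450 ms, p99: 900 ms
```

Note how the median (180 ms) hides the slow tail: the 90th percentile is 2.5x worse and the 99th is 5x worse, which is exactly why a single number can mislead.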

Thanks :slight_smile: I will take that into consideration.

Honestly, if you try to point to a single number and use that as justification for making a value judgment, you are doing yourself and your stated purpose a disservice. You want to look at everything. You want to MEASURE everything. And you need to be able to tell the story of what that particular measurement means. For example:

  • number of requests per second that can be serviced at a particular resource-utilization level, say 75% of peak. If one server can process twice as many requests in the same time period at the same resource level, it can clearly scale higher
  • ability of the server to recover after being maxed out for an extended period
  • error rates while at peak load
  • ability of the technology to scale by adding hardware resources, such as in the cloud - not every technology can
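The first bullet above can be sketched as a small comparison: given (CPU utilization, requests/sec) measurements for each stack, look at the throughput each sustains near the same utilization level. The stack names and all numbers here are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: compare two stacks by the throughput each
# sustains near a fixed resource-utilization level (~75% CPU).
# All measurements below are invented for illustration.

# (cpu_utilization_percent, requests_per_second) samples per stack
measurements = {
    "stack_a": [(25, 400), (50, 780), (75, 1100), (90, 1200)],
    "stack_b": [(25, 300), (50, 610), (75, 950), (90, 1300)],
}

def throughput_at(samples, target_cpu):
    """Requests/sec at the sample closest to the target utilization."""
    cpu, rps = min(samples, key=lambda s: abs(s[0] - target_cpu))
    return rps

for stack, samples in measurements.items():
    print(stack, throughput_at(samples, 75), "req/s at ~75% CPU")
```

In this made-up data, stack_a serves more traffic at 75% utilization even though stack_b edges ahead at 90%, which is the kind of nuance a single percentile number would never show.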

Thank you for your response.

I really appreciate it.

You need to look at the requirements, which can be difficult if you are doing a project or benchmarking one system against another without a specific boss, client, or customer guiding what is important.

Other than that, latency-percentile measurement is an active area in performance engineering.

Good luck!