Your thread is very similar to Srujana's - it may be worth keeping track of that one as well.
- we started with:
“After running the test, the home page load time shown as 5872 ms, higher than normal browser loading time which is around 400-500 ms (Used Chrome - Developer Tools).”
You provided a screenshot where it said “60 resources 4.8KB”.
The count there includes requests served from cache (the "Size/Content" column says "from cache" when that is the case), which is evident from the small number of downloaded bytes. So that measurement is from a partially or fully cached page.
The Gatling report above it shows that Gatling made its requests starting from an empty cache: all the resources are requested, so, unsurprisingly (given it's a fresh look!), the response time is accordingly longer.
It is therefore not a valid comparison.
Your original comment was that Gatling overstated the page response time compared with Chrome dev tools.
The most recent multi-user reports you attached show a response time in that ballpark (~500 ms). The comparison there is valid, as both measurements are from full caches. It is valid as a comparison only, though, as the user injection is likely wrong (something for you to determine yourself).
“Sorry, I meant to say Response Time (not page load time) shown in Gatling report is incorrect when trying to simulate launching page with all its resources using exec and resources . Response time in report is correct only when using inferhtmlResources , without any further static resources.”
→ see https://github.com/gatling/gatling/issues/2090
Looks like there is a defect here as you have identified.
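For reference, here is a minimal sketch of the two approaches being compared, assuming the Gatling 2-era Scala DSL (the base URL and resource paths are made up):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Browser-like fetching: Gatling parses the returned HTML and fetches
// the embedded resources itself, per virtual user.
val inferredProtocol = http
  .baseURL("http://example.com") // hypothetical base URL
  .inferHtmlResources()

// Explicit fetching: the static resources are listed by hand on the request.
val explicitScn = scenario("explicit resources")
  .exec(
    http("home").get("/")
      .resources(
        http("site css").get("/css/site.css"), // hypothetical resource paths
        http("app js").get("/js/app.js")
      )
  )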
From the reports you attached it looks like, as Pierre suspected in the other thread, page resources are being cached after a certain time.
What to do depends on your users (your workload model) and how they access the site:
a) If there is a fixed number of identifiable users (i.e. "Bob", "Jane", "employee number 12") who sit in front of a computer and run these scenarios over and over again, then
leave the simulation as is. The response time will be correct (ignoring the outstanding reporting issues), as the resources will be cached quickly, and those users ("Bob" and colleagues) will enjoy good performance from their browser caches as they repeatedly do the same thing. Maybe the site is providing some admin or call-center features. A sketch of this closed workload follows below.
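A minimal sketch of that closed workload, assuming the Gatling 2-era Scala DSL (the scenario name, feeder file, user count, and URLs are all made up):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

val httpProtocol = http.baseURL("http://example.com") // hypothetical base URL

// A small, fixed population of known users looping over the same pages:
// each virtual user's cache fills on the first pass and stays warm after it.
val staffScn = scenario("fixed staff users")
  .feed(csv("staff_users.csv").circular) // hypothetical feeder of named users
  .forever {
    exec(http("home page").get("/")).pause(5)
  }

setUp(
  staffScn.inject(atOnceUsers(12)) // e.g. the 12 known employees
).protocols(httpProtocol)
 .maxDuration(15 minutes)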
b) Else, if your users come from the general population (typically called "the general public": a set of independent people) who want to access your site's features, are all independent of each other, typically access the site only once, and do their business on the site and then leave to get on with their busy lives, then...
you leave caching on,
but remove any looping in the scenarios and change the user injection as follows.
You need to use injection steps such as
constantUsersPerSec(20) during(15 seconds),
constantUsersPerSec(20) during(15 seconds) randomized,
rampUsersPerSec(10) to(20) during(10 minutes),
rampUsersPerSec(10) to(20) during(10 minutes) randomized,
to inject independent users into the test, and apply them to the SUT.
As they are all independent, each new user will start with an empty cache, so the caching issue should not be so strong, if present at all.
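Putting that together, a minimal sketch of one of those injection profiles applied to a single-visit scenario (names and URLs are made up, Gatling 2-era Scala DSL assumed):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

val httpProtocol = http.baseURL("http://example.com") // hypothetical base URL

// One visit per user and no looping: each virtual user arrives with an
// empty cache, does its business once, and leaves.
val publicScn = scenario("general public")
  .exec(http("home page").get("/"))

setUp(
  publicScn.inject(
    rampUsersPerSec(10) to(20) during(10 minutes) randomized
  )
).protocols(httpProtocol)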
As a side note, when measuring page load time in a tool like Firebug, Chrome Dev Tools, or WebPagetest, you need to take three measurements (the tools may provide these measurements too, though likely not the third):
- first hit, with an empty cache
- repeat hit of exactly the same page, which will serve as much as possible from cache
- if applicable, a repeat hit on a second, different page of the same type: the cache will be partially full (some common files are already cached), but the page may make other HTTP calls that differ, possibly due to different query parameters, and those won't be in the cache.
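Should you want to reproduce those three measurements inside Gatling itself, a rough sketch (the page paths are made up) could look like:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

val cacheStatesScn = scenario("cache states")
  .exec(http("1 first hit, empty cache").get("/products/1"))         // cold cache
  .exec(http("2 repeat hit, same page").get("/products/1"))          // served mostly from cache
  .exec(http("3 same page type, partial cache").get("/products/2"))  // only the shared assets are cached
  // flushHttpCache empties this virtual user's cache again, should you
  // want to repeat the cycle from a cold start:
  .exec(flushHttpCache)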