Gatling 2 - fetchHtmlResources

Hi there,

just had a go at the new fetchHtmlResources and maxConnectionsPerHostLikeFirefox etc. Great stuff! Way to go!

Do you plan to implement this stuff on a per request level as well?

Reporting-wise, I believe it would be nice to have something like:

  • The first html request (statistics for html only, and statistics for html + resources) - collapsible or something
    — resource1 (statistics)
    — …
    — resourceN

  • The next html request (statistics for html only, and statistics for html + resources) - collapsible or something
    — resource1 (statistics)
    — …
    — resourceN


Btw, is there support for conditional GETs?

Thanks again for your great efforts!

Cheers

Stefan

just had a go at the new fetchHtmlResources and
maxConnectionsPerHostLikeFirefox etc. Great stuff! Way to go!

Thanks!
That's just a WIP though. There's still a lot to do and we'll refine step
by step.

Do you plan to implement this stuff on a per request level as well?

WDYM?

Reporting-wise, I believe it would be nice to have something like:

  • The first html request (statistics for html only, and statistics for html + resources) - collapsible or something
    — resource1 (statistics)
    — …
    — resourceN

  • The next html request (statistics for html only, and statistics for html + resources) - collapsible or something
    — resource1 (statistics)
    — …
    — resourceN

  • …

I agree that would be nice, but making those reports evolve is A LOT of
work.
I'm currently more inclined to have this shipped in a commercial product.

Btw, is there support for conditional GETs?

No, not yet. But we plan to.

Thanks again for your great efforts!

Thanks! Stay tuned, more cool stuff to come.

Do you plan to implement this stuff on a per request level as well?

WDYM?

exec(http("In this case I want to include all the resources").get("/whatever").fetchHtmlResources)
It would make it possible to target exactly the stuff you want.
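Something like this, just to sketch what I mean (the per-request fetchHtmlResources call is only my suggestion, not the current Gatling 2 API, and the request names and paths are made up):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Sketch only: fetchHtmlResources as a per-request switch is the suggestion
// being discussed here, not an existing method.
val scn = scenario("Per-request resource fetching")
  .exec(
    http("Landing page, with embedded resources")
      .get("/whatever")
      .fetchHtmlResources // hypothetical per-request switch
  )
  .exec(
    http("Plain request, html only")
      .get("/some-other-page")
  )

That way a scenario could mix pages where the embedded resources matter with requests where they don't.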

Reporting-wise, I believe it would be nice to have something like:

  • The first html request (statistics for html only, and statistics for html + resources) - collapsible or something
    — resource1 (statistics)
    — …
    — resourceN

  • The next html request (statistics for html only, and statistics for html + resources) - collapsible or something
    — resource1 (statistics)
    — …
    — resourceN


I agree that would be nice, but making those reports evolve is A LOT of work.
I’m currently more inclined to have this shipped in a commercial product.

Makes sense. I mean, you guys are investing A LOT of time in this product after all, and it would make sense to have a bunch of commercial add-ons. Any plans for this kind of thing in the near future?

Btw, is there support for conditional GETs?

No, not yet. But we plan to.

Great!

Thanks! Stay tuned, more cool stuff to come.

Yes, I know! You guys rock!

maxConnectionsPerHostLikeFirefox sounds interesting. At the very least it hints that something similar to the pause functionality is possible here.

What we do for browser emulation is a tad more… complex? Complete? Possibly something to borrow, at any rate.

What we currently do is use a CSV input file containing a list of user agents with weights attached, based on production access logs:

Browser|Percentage|Count|Connections per host|Max connections|User Agent String
Chrome/14|0.05%|169|6|35|"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.790.0 Safari/535.1"
Chrome/15|0.15%|532|6|60|"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.860.0 Safari/535.2"
Chrome/16|8.93%|32732|6|40|"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.36 Safari/535.7"

Firefox/1|0.01%|19|2|24|Mozilla/5.0 (Windows; U; WinNT4.0; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0
Firefox/10|0.06%|221|6|32|Mozilla/6.0 (Macintosh; I; Intel Mac OS X 11_7_9; de-LI; rv:1.9b4) Gecko/2012010317 Firefox/10.0a4
Firefox/2|0.03%|96|2|24|Mozilla/5.0 (Windows NT 6.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0
Firefox/3|1.21%|4437|6|30|Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.0

MSIE/8|22.39%|82077|6|35|Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 5.0; Trident/4.0; InfoPath.1; SV1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 3.0.04506.30)
MSIE/9|5.19%|19023|6|35|Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)

Opera/9|0.16%|590|4|20|Opera/9.00 (Windows NT 5.2; U; en)
Safari/533|0.79%|2878|6|35|"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; it-it) AppleWebKit/533.16 (KHTML, like Gecko) Version/5.0 Safari/533.16"
Safari/534|2.16%|7934|6|35|"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/534.30 (KHTML, like Gecko) Iron/12.0.750.0 Chrome/12.0.750.0 Safari/534.30"
END>>>>>

This (partial) list is actually split with tabs, but I have replaced those with the | character for clarity.

And this list is actually quite old, so don't borrow it as-is - use your own counts / percentages please. (An update is in the works; we need to find time to redo the analysis… :wink:)

The fields:

Browser name / version number - speaks for itself. We actually had to reverse engineer that information, since the access logs don't contain it.

Percentage - for informational purposes only; the code doesn't look at this field at all.

Count - how often this particular user agent string is seen in the access log.

Connections per host | Max connections | User Agent String - speak for themselves, hopefully.

The counts are added up and a random number is generated between 0 and that total to pick the entry.

So the counts determine the chance that a particular browser type is emulated: if you have four entries, each with a count of 1, each individual entry has a 1 in 4 chance of being chosen. If the counts add up to 20,000 and a given entry has a count of 200, the odds of picking it are 200/20,000, or 1%. Of course, the real counts vary quite a bit (as you can see). And the list itself is actually trimmed: anything that makes up less than 1% of the total is ignored, to avoid ending up with a list that is thousands of entries long.
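To make the pick concrete, roughly something like this (in Scala rather than our LoadRunner C, and the type and function names are purely for illustration):

import scala.io.Source
import scala.util.Random

// One row of the tab-separated list described above:
// Browser | Percentage | Count | Connections per host | Max connections | User Agent String
case class BrowserEntry(name: String, count: Long, connectionsPerHost: Int, maxConnections: Int, userAgent: String)

def loadEntries(path: String): Vector[BrowserEntry] =
  Source.fromFile(path).getLines()
    .drop(1)                            // skip the header line
    .takeWhile(!_.startsWith("END"))    // stop at the END marker
    .filter(_.trim.nonEmpty)            // ignore blank lines between groups
    .map(_.split('\t'))
    .collect { case Array(name, _, count, perHost, max, ua) =>
      BrowserEntry(name, count.toLong, perHost.toInt, max.toInt, ua)
    }
    .toVector

// Add the counts up, draw a random number between 0 and that total,
// and walk the list until the running total passes the draw.
def pickEntry(entries: Vector[BrowserEntry], rng: Random = new Random): BrowserEntry = {
  val total = entries.map(_.count).sum
  val draw  = (rng.nextDouble() * total).toLong
  var acc   = 0L
  entries.find { e => acc += e.count; acc > draw }.getOrElse(entries.last)
}

So an entry with a count of 200 out of a total of 20,000 ends up being picked about 1% of the time, matching its share of the access log.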

Coming back to the subject at hand: in theory your code could do better, simply because browser behaviour is more complex these days than mere differences in the number of connections a browser sets up. There are things like SPDY support, timeout values, and other details that differ from version to version, even within the various iterations of a single browser.

The question is what the right model is for letting your performance test tool emulate them as closely as possible. "maxConnectionsPerHostLikeFirefox" immediately begs the question: which version of Firefox? Et cetera.

(And in fact, our code is even more complex than what is described here, because we also do things like clearing caches and cookies based on a random percentage chance, to emulate people returning to the site later within the same browser session… something to keep in mind.)
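For completeness, that last idea in a few lines (the percentage and the clearCaches / clearCookies hooks are placeholders for whatever the tool provides, not our real configuration):

import scala.util.Random

// Before an iteration, roll a percentage and decide whether this virtual user
// starts "cold" (empty cache and cookie jar), emulating a returning visitor.
val coldStartChance = 30 // placeholder value

def maybeStartCold(rng: Random, clearCaches: () => Unit, clearCookies: () => Unit): Unit =
  if (rng.nextInt(100) < coldStartChance) {
    clearCaches()
    clearCookies()
  }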

As an aside: I’m sure I’ve mentioned it before - our LR implementation of this can be found here: https://github.com/randakar/y-lib/blob/master/y_browseremulation.c