Starting with Gatling from a JMeter background

Hello,
First, thanks for your product, which seems really promising from what I have read about it.

I am starting to look at Gatling coming from a JMeter background and have some questions that maybe you can answer:

  • In JMeter, when you want to debug a scenario you are writing (for correlation, for example, you want to look at the previous response bodies to see where the data comes from and how it looks), I use View Results Tree and the RegExp Tester to check the regular expressions I use. What is the equivalent in Gatling? It seems there is nothing like this, as it has no GUI. What is the recommended approach in Gatling? More globally, how do you debug a script in Gatling?
  • Reading the Gatling 2 release notes (https://github.com/excilys/gatling/wiki/Gatling-2), I saw this: "In Gatling 1, connections are shared amongst users. This behavior does not match real browsers, and doesn’t support SSL session tracking." What does this exactly mean? Are tests done with Gatling realistic? Does it only concern HTTPS, or is HTTP also concerned? In JMeter, except for the Java implementation, every user has its own connection.
  • Regarding the performance of Gatling, there is something I don’t really understand. It is said that because you don’t follow the 1 user == 1 thread model, performance is better. But what I don’t understand is how, in this case, you mimic really parallel users. Looking at the Gatling code, I understand it uses the Akka scheduler, which will trigger the next action for a simulated user, but will this model work fine once multiple threads start processing response data and extracting content? Will this really mimic parallel users? Also, from some tests I made using JSONPath, CPU was rather high with 300 users. I compared with JMeter, and CPU was nearly at the same level.
  • Is Gatling really made for complex scenario testing, or more for basic tests? The examples that are shown rarely extract data from responses.
  • Reading the release notes, I see a lot of changes in method names and other elements, so I am a bit worried about the tests I would start writing. Can we consider Gatling to be in a stable state? If not, when will it be?
  • Regarding the graphs generated at the end of a test, what is their precision? I mean, how many data points do you take, and can this be configured?

Thanks
Regards

Hi

Thanks for the quick responses, Stéphane.

My answers below.
Regards

Hi

OK. I find this a bit hard and low level, but I will try to get used to it.
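For what it's worth, one way to inspect what a check actually captured is to dump it from the session in an exec block; a minimal sketch in the Gatling 2 style DSL (the target URL, request name and regex are made up):

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class DebugSketch extends Simulation {

      val httpProtocol = http.baseURL("http://localhost:8080") // made-up target

      val scn = scenario("DebugExample")
        .exec(
          http("home")
            .get("/")
            // save whatever the regex captures into the session
            .check(regex("""token=(\w+)""").saveAs("token"))
        )
        // dump the captured value (and the whole session, if needed) to the console
        .exec { session =>
          println("token = " + session("token").as[String])
          println(session)
          session
        }

      setUp(scn.inject(atOnceUsers(1)).protocols(httpProtocol))
    }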

If you use fewer connections, then in what way is it close to a real-world scenario? Take N browsers: they don’t share connections.

OK, it will be better then :slight_smile:

Can you give a longer answer or give me some pointers about this?
I have read the Akka docs on the scheduler used in Gatling, and it seems to me that once massive processing occurs, delays could occur in the Akka scheduler.

You are right, I mixed things up in my tests.
I was speaking about the Jodd extractor you have in Gatling for the CSS extractor.
In JMeter I used the CSS/JQuery Extractor with the JSoup implementation for my test. I tried it with a response of around 200 KB.

Ok good to know.

Ok.

OK

You are right, I mixed things up in my tests.
I was speaking about the Jodd extractor you have in Gatling for the CSS extractor.
In JMeter I used the CSS/JQuery Extractor with the JSoup implementation for my test. I tried it with a response of around 200 KB.

There are basically two CSS selector implementations around in Java: JSoup and
Jodd. Gatling has been using Jodd for about a year now, while CSS selectors
only just made it into JMeter. We did benchmark both implementations back then,
and Jodd was performing slightly better.

Now, here's what might happen:

   - we messed up
   - JSoup considerably improved this last year
   - you messed up :wink:

In order to properly investigate the first 2 possibilities, could you
provide us with a dump of the HTML page and the expression you're using,
please?

I must say that I'm a bit sceptical about JMeter being CPU flat on this.
Whatever the implementation, you have to first build a DOM-like tree of the
page, and then traverse it. It's unlikely that doing so for 200 KB pages
would be without CPU impact (parsing, traversing, GC) under realistic load.
Are you sure that your JSoup assertions were enabled? How many
transactions/sec did you produce with both tools?
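For what it's worth, a rough way to get a feel for that cost is to time the parse-and-traverse step on its own, outside any load tool; a minimal JSoup sketch (the page size, markup and selector are arbitrary):

    import org.jsoup.Jsoup

    object CssCostSketch {
      def main(args: Array[String]): Unit = {
        // Build a roughly 200 KB synthetic page; this is the work that has to
        // happen once per response, whatever the load tool.
        val html = "<html><body>" +
          (1 to 5000).map(i => s"""<a href="/item/$i" class="item">item $i</a>""").mkString +
          "</body></html>"

        val start = System.nanoTime()
        val doc = Jsoup.parse(html)                // build the DOM-like tree
        val matched = doc.select("a.item").size()  // traverse it with a CSS selector
        val elapsedMs = (System.nanoTime() - start) / 1000000

        println(s"matched $matched elements in $elapsedMs ms")
      }
    }

Run it a few times so the JIT warms up, then multiply by your transactions/sec to get an idea of the CPU budget it eats.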

Cheers,

Stéphane

Although I’m pretty new to Gatling and have developed only a few complex scripts, so I’m not in a position to give a full review, I can say it has been the best tool for me.

I do not come from a JMeter background, but I had to start with some load testing. I’m a Java guy, and still I chose Gatling over JMeter because of its DSL and maintainability. I also dropped the idea of using Selenium on Grid or Neustar.

The best thing I noticed is:

JMeter’s XML comes nowhere near Gatling’s Scala DSL.

Also, I noticed that for my scenarios JMeter used to hang at higher loads, while Gatling did not. I did not measure CPU usage or anything like that, but I was more satisfied with Gatling than with JMeter. JMeter has existed for a long time and seems reliable, but on the other hand, Gatling is cool and innovative. I gave Gatling a try for the reasons stated above, and I have never regretted it.

Special thanks to the Gatling team, they respond pretty quickly here…

Thanks a lot for your kind words, much appreciated!

Maybe, but in JMeter you never manipulate the XML directly, so I don’t think this argument is a valid one.
A DSL is a good thing from a developer’s point of view, I agree, as I have this background.
For a pure tester, I don’t know.
I have seen that DSL development is in progress for JMeter:

   - https://github.com/timkoopmans/gridinit-jmeter (moved to ruby-jmeter)

Well, in my experience I didn’t face issues, but I hear people do. On this part I would really be interested in Stéphane’s clarifications of my questions below.

Proof of these pretty quick answers is right here :slight_smile:

Maybe, but in JMeter you never manipulate the XML directly, so I don't think this argument is a valid one.

Not entirely true, some people do. But you almost never do because that's
not convenient at all, that's more of a consequence.

A DSL is a good thing from a developer's point of view, I agree, as I have this background.
For a pure tester, I don't know.

What's a "pure tester"? IMHO, just another lie: "Hey, let's get the tests
done by someone who doesn't know the app, who doesn't know where to dig,
and moreover, who doesn't have a clue about how to interpret the result!
What's important is that we can underpay him. Maybe get someone offshore."
Let's face reality: load testing has to be done by devs and ops.

I have seen that DSL development is in progress for JMeter:

   - https://github.com/timkoopmans/gridinit-jmeter (moved to ruby-jmeter)

Yep, a Ruby DSL by the GridInit guys. They started supporting Gatling too,
then felt like JMeter might benefit from a DSL too.
Haven't tried it; might be cool, and it might attract Ruby people.

Also, I noticed that for my scenarios JMeter used to hang at higher loads, while Gatling did not. I did not measure CPU usage or anything like that, but I was more satisfied with Gatling than with JMeter. JMeter has existed for a long time and seems reliable, but on the other hand, Gatling is cool and innovative. I gave Gatling a try for the reasons stated above, and I have never regretted it.

Well, in my experience I didn't face issues, but I hear people do. On this part I would really be interested in Stéphane's clarifications of my questions below.

Which questions do you still have unanswered? Did you get my answer asking
you for your HTML and CSS selector expression?

Cheers,

Stéphane

Hello Stéphane,
Thanks again for your answers, but I am still interested in answers to these two:

If you use fewer connections, then in what way is it close to a real-world scenario? Take N browsers: they don’t share connections.
If I am to adopt Gatling, I would like to be sure I am simulating accurately, and not base this on a pure benchmark or on an approach that gets higher hits/s with the drawback of not simulating the load realistically, which seems to have been the case for Gatling 1.

Can you give a longer answer or give me some pointers about this?
I have read the Akka docs on the scheduler used in Gatling, and it seems to me that once massive processing occurs, delays could occur in the Akka scheduler and you might end up not injecting the load the way real users would.

But maybe I am just not smart enough to understand.

I never said I had flat CPU with JMeter; I said Gatling and JMeter were behaving the same, with a slight advantage to JMeter.
I know about the DOM-like tree.

I used the JSoup implementation of JMeter's CSS/JQuery Extractor (JMeter also has a Jodd option) and the equivalent in Gatling, which uses Jodd. I will try to send this in the upcoming weeks.

If you use fewer connections, then in what way is it close to a real-world scenario? Take N browsers: they don't share connections.

If I am to adopt Gatling, I would like to be sure I am simulating accurately, and not base this on a pure benchmark or on an approach that gets higher hits/s with the drawback of not simulating the load realistically, which seems to have been the case for Gatling 1.

If you want to be 100% sure that you open as many connections as users, use
Gatling 2.
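As a rough illustration, this is what a Gatling 2 protocol looks like with the default per-user connections (the class name and base URL are made up, and shareConnections is, as far as I recall, the option that opts back into the old shared behaviour):

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class PerUserConnectionsSketch extends Simulation {

      // Gatling 2: each virtual user gets its own connections by default,
      // like a real browser would. The commented line would opt back into sharing.
      val httpProtocol = http
        .baseURL("http://example.com") // made-up base URL
        // .shareConnections

      val scn = scenario("ConnectionsPerUser")
        .exec(http("home").get("/"))

      setUp(scn.inject(atOnceUsers(300)).protocols(httpProtocol))
    }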

In short: yes. Threads are just a technical resource for processing
messages. You'd have to understand NIO and Akka properly.

Can you give a longer answer or give me some pointers about this?
I have read the Akka docs on the scheduler used in Gatling, and it seems to me that once massive processing occurs, delays could occur in the Akka scheduler and you might end up not injecting the load the way real users would.

   - Classic Scheduler • Akka Documentation
   - http://code.alibabatech.com/blog/dev_related_1119/explore-the-scheduling-of-scala-actors.html
   - (third link, now a dead page: 404 - Page Not Found)

But maybe I am just not smart enough to understand.

Akka is not a scheduler, Akka is an actor implementation.
The third link explains that scheduling is based on a hashed wheel timer,
meaning that events are not scheduled with 1-nanosecond accuracy (which
doesn't make sense at all once you know what to expect from time
measurement on a computer), but with the configured resolution (basically,
the bucket width).
Gatling's default value is 50ms, but this can be overridden if needed:
https://github.com/excilys/gatling/blob/master/gatling-core/src/main/resources/akka-defaults.conf

Honestly a 50ms accuracy is plenty enough.
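A quick way to see what that bucket width means in practice is to schedule a task with a delay that is not a multiple of the tick and check when it actually fires; a minimal sketch against a recent Akka (the delay and names are made up, and the exact rounding depends on the configured tick):

    import akka.actor.ActorSystem
    import scala.concurrent.duration._

    object SchedulerAccuracySketch {
      def main(args: Array[String]): Unit = {
        val system = ActorSystem("sketch")
        import system.dispatcher // execution context for the scheduled task

        // With a 50 ms bucket width the task can only fire on a tick boundary,
        // so the observed delay is typically rounded up to the next tick.
        val requested = 130.millis
        val start = System.nanoTime()
        system.scheduler.scheduleOnce(requested) {
          val actualMs = (System.nanoTime() - start) / 1000000
          println(s"requested ${requested.toMillis} ms, fired after ~$actualMs ms")
          system.terminate()
        }
      }
    }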

The 2nd and 3rd links are irrelevant: they're about Scala actors. This
is another implementation, which is much less efficient than Akka and has
been deprecated since Akka became part of the Typesafe stack.

Making it short: actors are all about messages. In Gatling, the messages are
the users, which walk down a scenario (see the sketch after this list):

   - when a user reaches a request action, it sends the request asynchronously, a callback listens for the response, and once it is complete, the user performs the checks and then moves on to the next action in the scenario
   - when a user reaches a pause, it is simply re-scheduled, with the pause duration, to the next action in the scenario
   - note that if some drift occurs (like long check processing or Akka scheduling drift), it will be subtracted from the next pauses
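As a sketch of what such a scenario looks like in the Gatling 2 style DSL (the URLs, request names and regex are made up):

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class UserAsMessageSketch extends Simulation {

      val scn = scenario("UserAsMessage")
        .exec(
          http("page")                                  // request action: sent asynchronously,
            .get("/page")                               // a callback handles the response,
            .check(regex("""id=(\d+)""").saveAs("id"))  // then checks run and the user moves on
        )
        .pause(2)                                       // pause action: the user is re-scheduled
        .exec(
          http("detail")
            .get("/detail?id=${id}")                    // uses the value extracted above
        )

      setUp(scn.inject(atOnceUsers(1)).protocols(http.baseURL("http://example.com")))
    }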

Get it?

Nearly got it.
Thanks very much for the clarifications.

I saw you switched from Jodd to JSoup :wink:
Nice work on the 1.5.0 and 2.0.0-M2 releases (not talking about this one).

Maybe you should offer the choice, because by doing this you break existing test plans, no?
Jodd and JSoup have different syntaxes.

Could you share your experience with the syntax differences, please?
The Spec didn’t need to be changed when we migrated: https://github.com/excilys/gatling/blob/master/gatling-core/src/test/scala/io/gatling/core/check/extractor/css/CssExtractorsSpec.scala

For example, I remember I had an issue with:

a[href="api"]

Jodd worked with the quotes, JSoup without them.
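One quick way to check how a given implementation treats the two forms is to run both selectors against a tiny document; a minimal JSoup sketch (the HTML and object name are made up, and Jodd would be exercised the same way through its own API):

    import org.jsoup.Jsoup
    import scala.util.Try

    object SelectorQuotesSketch {
      def main(args: Array[String]): Unit = {
        val doc = Jsoup.parse("""<html><body><a href="api">API</a></body></html>""")

        // Per the CSS spec both forms should select the same element; running
        // them shows whether quotes matter to this particular implementation.
        for (selector <- Seq("""a[href="api"]""", "a[href=api]")) {
          val matched = Try(doc.select(selector).size()).getOrElse(-1) // -1 if the parser rejects it
          println(s"$selector -> $matched")
        }
      }
    }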

Which is strange, because in the CSS spec, both should work.

Well, it was a few months ago; I ran into this using JMeter, switching from one implementation to the other (don’t say it was JMeter :slight_smile: ). I am reporting it because I think it might help you; do what you want with it.

Also, I may be wrong, but isn’t there also a risk of breaking existing plans if users have used some CSSelly extensions to the syntax which do not exist in JSoup?

Just a word of notice: Jodd’s developer, Igor Spasic, just got in touch to let me know that he had made huge performance improvements in Jodd. My benchmark did confirm this.

As a consequence, we’ll be reviving Jodd support in Gatling 1.5.1 and making it the default CSS selector engine.

Regards,

Stéphane

Great discussion; thought I’d comment on two items:

Maybe, but in JMeter you never manipulate the XML directly, so I don’t think this argument is a valid one.

Not entirely true, some people do. But you almost never do because that’s not convenient at all, that’s more of a consequence.

Interesting; I wish some people blogged or wrote about their experiences working directly with the JMeter XML. Is the JMeter XML schema that horrible, or is it just XML in general that’s horrible? I imagine that if someone built a good parser for the JMeter XML, you could have alternate options to manipulate it (CLI interpreter shell, web GUI, etc.), but I assume that kind of thing is not likely to happen.

A DSL is a good thing from a developer’s point of view, I agree, as I have this background.
For a pure tester, I don’t know.

What’s a “pure tester”? IMHO, just another lie: “Hey, let’s get the tests done by someone who doesn’t know the app, who doesn’t know where to dig, and moreover, who doesn’t have a clue about how to interpret the result! What’s important is that we can underpay him. Maybe get someone offshore.”
Let’s face reality: load testing has to be done by devs and ops.

It probably is ideal to have devs and/or ops, or DevOps, do load testing. But from my work experience, QA engineers are also involved in (or sometimes the only ones to do) load testing. And depending on the organization (and individuals), those QA engineers either have the dev know-how to program a load test in a DSL/code/script, or they are novices who would be more at home with the JMeter GUI or a commercial tool like LoadRunner, etc.

We’ll probably head more towards the ideal as the industry shifts towards Software Development Engineer in Test (SDET) type QAs rather than novices less skilled in software development and automation. I would point out that in the industry there likely exist some QAs who don’t have a whole lot of solid coding experience (and hence are not so comfortable using Gatling) but who are knowledgeable in all aspects of load and performance testing. Software development or programming skill doesn’t necessarily equate to knowing load/performance analysis either, and having both is a real gem.