setUp() for Gatling

Use case:

  1. User logs in and requests a token from an authentication server

  2. User calls a REST service with that token (I correlate it) and asks for a list of products

  3. User calls a REST service to save one of the products

  4. User logs out.

I have estimated the total number of concurrent users at 50 per second as a high-water mark / peak. The typical scenario is that users act on offers and so on at the same time, so Gatling seems to be a good fit, since it can produce a lot of parallel connections/requests.

My question:

How do I best model this simulation? I do not want to ramp up over a long period of time, but on the other hand I don’t want to run into hardware problems.

The duration of the test is less important; what matters is that the system can hold a steady state with 50 users running the scenario above in parallel, within the response times defined by the project.

Please give me an example in this form, and explain why it is the best setUp for my case:

`setUp(scn.inject(constantUsersPerSec(???) during (??))).protocols(httpConf)`

Other models are appreciated as well.

Magnus

I think you need to be more specific, as strange as that may sound.

Your scenario is a series of 4 actions. How long do those actions take, beginning to end? I mean in the real world.

Are you estimating 50 users BEGINNING the process per second, or at most 50 users IN the process at any one time? Or at most 50 requests per second?

Another way to think about this: Instead of trying to define it all up front, explore to see what makes sense. Like so:

```scala
val scn =
  forever {
    exec( login ).pause( … )
      .exec( getProductList ).pause( … )
      .exec( saveOneProduct ).pause( … )
      .exec( logout )
  }
```

Then set up your simulation to ramp from 0 to 100 users over 100 minutes (or more) so that there is an appreciable period of time at each load level. Set a maxDuration so the run eventually stops on its own, since the loop never exits. Monitor not only the stats that Gatling is gathering, but also capture stats on the system under test.
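As a sketch, that could look like this in Gatling 2.x syntax (assuming `scn` is the looping scenario above and `httpConf` is your protocol configuration; the exact numbers are placeholders to tune):

```scala
setUp(
  // Ramp from 0 to 100 looping users over 100 minutes, so each
  // load level gets an appreciable amount of time.
  scn.inject(rampUsers(100) over (100 minutes))
).protocols(httpConf)
  // Hard stop: the forever loop never exits on its own.
  .maxDuration(110 minutes)
```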

When the test is complete, look at the graph. If your simulation has come anywhere near saturating the system under test, the graph will level out at a certain number of transactions per second. The more users you throw at it, the longer the response times, but the number of transactions per second stays relatively fixed.

That test run will not only tell you the average response time when you have 50 users, it will also tell you how many users your system can actually sustain.

Armed with that, you should be able to figure out how many concurrent users the system can handle before response times exceed your target. Then you can use those parameters to run another test, a long, flat one, to prove that under sustained load the system continues to perform as expected.

Does that make sense?

Hello John, thanks for answering. It makes sense, but I do have some questions.
To be more specific: our system must have response times where 95% of the calls to each service complete within 1 second; the remaining 5% may take between 1 and 3 seconds.

In the real world the scenario takes about 1 second to log in, then I want to pause for 1 second. Then I do the list call (which takes about 3 seconds), pause for one second, and then do the save call, which takes 3 seconds. The total duration is about 9 seconds.

I am not very familiar with the ramp-up function and how to use it. Let’s say I want to (as you suggest) ramp up from 0 to 100 users over 100 minutes. How do I define that in Scala, together with the maxDuration?

When you say “If your simulation has come anywhere near saturating the system under test”, do you mean the point where the system no longer responds, or the point where the system has higher response times than 1 second for login, 3 seconds for get-list, and so forth?

Yes, I am equipped with some tools to monitor the SUT (a stack of different technologies), and I have some idea of what to look for with regard to tuning.

So given:

```scala
val scn =
  forever {
    exec( login ).pause( … )
      .exec( getProductList ).pause( … )
      .exec( saveOneProduct ).pause( … )
      .exec( logout )
  }
```

How do I define the scenario if I want to ramp up from 0 to 100 users over 100 minutes?

Thanks again,
Magnus

And I mean “at most 50 users IN the process at any one time”, as in a constant 50 users running the scenario for the duration I set in a long, flat test.

Looking at the 'active' count in my console output in IntelliJ, this is the number that should be 50 if I want a constant 50 users calling my services at all times, i.e. constant active connections making requests.
If that's correct, I could e.g. ramp up until I see that number under 'active', but given the short duration of my scenario I would need a high admission rate (constantUsersPerSec) to keep it there. Right? Sorry if I dwell on this question; it might be a good idea to experiment as you suggested (ramping up).

Look back at the code I sent before. Notice the loop. Because of that loop, your virtual user will never exit (unless forced to do so by maxDuration). Alternatively, you can replace the “forever” with “during( time )”, whatever works for you.

So first you start up one user. That one user will step through your scenario. When it finishes, it will loop back and do it again. There will never be more than one active user. Then you ramp up a second user. Now there are two that are doing the action at the same time, but each will be in a different part of the process. Repeat until you have 50 concurrent users.

Let’s do some math. With 50 concurrent users, a 4-transaction scenario that takes 9-10 seconds per loop (including pauses) means roughly 20-22 transactions per second. Unless your system is badly misconfigured, it ought to be able to do that without even breathing heavily. If the system can’t do at least 100-200 TPS, then something is probably not optimized.
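That arithmetic can be sanity-checked with a few lines of plain Scala (the user count, transaction count, and loop times are the figures discussed above, not measurements):

```scala
// Back-of-the-envelope throughput estimate for the looping scenario:
// each user completes transactionsPerLoop requests every loopSeconds.
object ThroughputEstimate {
  def tps(users: Int, transactionsPerLoop: Int, loopSeconds: Double): Double =
    users * transactionsPerLoop / loopSeconds

  def main(args: Array[String]): Unit = {
    val low  = tps(50, 4, 10.0) // slower loop -> lower throughput
    val high = tps(50, 4, 9.0)  // faster loop -> higher throughput
    println(f"Expected steady-state load: $low%.1f to $high%.1f transactions/second")
  }
}
```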

What typically happens with resource-constrained systems is, as you give it more work to do, it takes longer to get the work done, but the overall throughput remains relatively constant. So if you have 100 users trying to do 100 things per second, and that happens to be the limit of what the system can do, then if you give it 200 users trying to do 200 things, it will happily do it, but instead of being able to process each request in a second, it will take 2 seconds each. End result is still 100 transactions per second.
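This is Little’s law in disguise: requests in flight = throughput × response time, so at a fixed throughput ceiling, doubling the load doubles the response time. A tiny illustration in plain Scala, using the hypothetical 100 TPS saturation point from the paragraph above:

```scala
// Little's law: if throughput is capped at maxTps, response time
// grows linearly with the number of requests in flight.
object LittlesLaw {
  def responseTimeSeconds(inFlight: Int, maxTps: Double): Double =
    inFlight / maxTps

  def main(args: Array[String]): Unit = {
    println(responseTimeSeconds(100, 100.0)) // 100 users at the 100 TPS limit
    println(responseTimeSeconds(200, 100.0)) // 200 users: still 100 TPS, but slower
  }
}
```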

My suggestion is, ramp up your scenario (with the “forever” loop) from 1 user to, say, 1000 users, at a rate of 1 user per minute. That will take about 1000 minutes (roughly 16.7 hours), so set maxDuration to 17 hours. Maybe do it over the weekend. Then look at the graph that Gatling produces. Unless your system is insanely performant, you should see responses per second level off at some point. At that point, look at how many users were active at once. That is your maximum concurrent user load before response times start increasing. Do this exercise, and you will see what I mean.
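Wired up in Gatling 2.x syntax, that exploratory run might look like this (a sketch; `scn` and `httpConf` stand for the looping scenario and protocol configuration from earlier, and 1000 users over 1000 minutes works out to 1 user per minute):

```scala
setUp(
  scn.inject(rampUsers(1000) over (1000 minutes))
).protocols(httpConf)
  .maxDuration(17 hours) // ~16.7 h of ramp, plus a little slack
```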

Armed with that, you can do a long-running scenario where you ramp from 1 to X users over X-1 seconds, and then let it run for a few hours. This gives it time to “bake” and get through a GC run or two (if that applies to your application).

Before you do that run, I suggest tweaking the Gatling configuration so that the first report bucket boundary is 1 second and the second is 3 seconds; that way you will be able to tell whether 95% were under 1 second, and how many took more than 3 seconds.
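In Gatling 2.x those report buckets are controlled in `gatling.conf`; a sketch (values are in milliseconds):

```
gatling {
  charting {
    indicators {
      lowerBound = 1000   # responses under 1 s fall in the first bucket
      higherBound = 3000  # responses over 3 s fall in the last bucket
    }
  }
}
```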

Here is a code snippet where I am ramping up users and running them for a minute.

```scala
val s4users = scenario("S4Search").during(1 minute) {
  pace(3 seconds, 5 seconds)
    .exec(S4Search.s4Search)
}

setUp(
  s4users.inject(rampUsers(3) over (10 seconds))
).protocols(httpProtocol)
```

You can change the scenario by increasing the duration, and increase the number of users via the ramp-up.

Just a quick question before I move on:

given my scenario:

```scala
package no.psz

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class testSimulation extends Simulation {

  val httpConf = http
    .baseURL("https://dnd.psz.no")
    .acceptEncodingHeader("gzip,deflate")
    .headers(Map("Content-Type" -> "application/json", "charset" -> "UTF-8", "User-Agent" -> "Android(4.4)/Coop(0.4)/0.1"))

  val scn = scenario("Scenario")

    .feed(csv("memberInfo.csv").random)

  forever(

    exec(_.set("TOKEN", "B67CE0xxxxx"))

      .exec(
        http("getProfile")
          .get("/user/profile")
          .header("X-Token", "${TOKEN}")
          .check(status.is(200))
          .check(jsonPath("$.resultCode").is("SUCCESS"))

      ).pause(1)

  )

  setUp(scn.inject(rampUsersPerSec(1) to (10) during (1 minutes)).protocols(httpConf)
  )
}
```

Without ‘forever’, this scenario generates a report.

With ‘forever’, I get this error (when all users have finished):

```
Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
  at io.gatling.charts.report.ReportsGenerator$.generateFor(ReportsGenerator.scala:42)
  at io.gatling.app.Gatling.generateReports$1(Gatling.scala:175)
  at io.gatling.app.Gatling.start(Gatling.scala:247)
  at io.gatling.app.Gatling$.fromMap(Gatling.scala:55)
  at Engine$delayedInit$body.apply(Engine.scala:13)
  at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
  at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
  at scala.App$$anonfun$main$1.apply(App.scala:71)
  at scala.App$$anonfun$main$1.apply(App.scala:71)
  at scala.collection.immutable.List.foreach(List.scala:318)
  at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
  at scala.App$class.main(App.scala:71)
  at Engine$.main(Engine.scala:4)
  at Engine.main(Engine.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:483)
  at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
```

When using the ‘forever’ function, do I have to stop it manually to generate the report?

Magnus

If you use forever, you have to add maxDuration, or use exitHereIfFailed and then manually trigger a failure, or something along those lines. Alternatively, you can use “during”.
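Applied to the simulation posted above, the maxDuration option could look like this (a sketch; only the setUp changes, and the cap of 5 minutes is an arbitrary choice):

```scala
setUp(scn.inject(rampUsersPerSec(1) to (10) during (1 minutes)))
  .protocols(httpConf)
  .maxDuration(5 minutes) // forces the forever loop to stop so reports get generated
```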

ok, like this:

```scala
package no.psz

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class testSimulation extends Simulation {

  val httpConf = http
    .baseURL("https://dnd.psz.no")
    .acceptEncodingHeader("gzip,deflate")
    .headers(Map("Content-Type" -> "application/json", "charset" -> "UTF-8", "User-Agent" -> "Android(4.4)/Coop(0.4)/0.1"))

  val scn = scenario("Scenario")

    .feed(csv("memberInfo.csv").random)

  during(3 minutes)(

    exec(_.set("TOKEN", "B67CE0xxxxx"))

      .exec(
        http("getProfile")
          .get("/user/profile")
          .header("X-Token", "${TOKEN}")
          .check(status.is(200))
          .check(jsonPath("$.resultCode").is("SUCCESS"))

      )

  )

  setUp(scn.inject(rampUsersPerSec(1) to (10) during (1 minutes)).protocols(httpConf)
  )
}
```

Where do I put the during? Where I put the forever, like I did above?

Whether you should have the (during) or (forever) before or after the (feed) depends on whether you want each iteration to use a different user or not. You probably want the feeder inside the loop, I would think.

Otherwise, yes, looks good. Except the chaining is not right. I will leave finding the mistake as an exercise for you. :slight_smile:

Ok, thanks.

I’m a beginner to Gatling and would like to seek some help from you.

Usecase:

  1. User logs in

  2. User calls a REST service and asks for a list of menu items

  3. User logs out.

I want to execute this scenario repeatedly for an hour (with about 100 users hitting the server over a minute) to apply a sustained load, and then see in the report (graph) how performance varies as the same load is executed repeatedly, i.e. the first 100 users executing the scenario for the first time, the next 100 users executing it for the second time, and so on.

Could someone please send a code snippet for this?

Thanks!

Just go through the tutorial once. You should be able to sort this out quickly.

http://gatling.io/docs/2.1.7/quickstart.html

I am doing it like this:

```scala
val scn = myScenario
  .asLongAs(true) {
    exec( login ).pause( … )
      .exec( getProductList ).pause( … )
      .exec( saveOneProduct ).pause( … )
      .exec( logout )
  } // end asLongAs

setUp(scn.inject(rampUsers(10) over (50 seconds))).protocols(httpProtocol).maxDuration(5 minutes)
```

-Jay

Hi all,

Is there any way to run a Gatling test with a constant number of users for a specific amount of time? I am not interested in the ramp-up option.

Currently it is:


```scala
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
```

In JMeter we have this option, which is more convenient.
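For what it’s worth, one way to approximate this in Gatling 2.x is to start a fixed number of users at once and keep each of them looping for the desired time (a sketch; `myChain` and the numbers are placeholders for your own request chain and load target):

```scala
// Each user loops through the request chain for 10 minutes.
val scn = scenario("ConstantLoad").during(10 minutes) {
  exec(myChain) // placeholder for your actual requests
}

// 50 users start together and stay busy until the during() loop expires.
setUp(scn.inject(atOnceUsers(50))).protocols(httpConf)
```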