Recommended gatling.conf settings for REST api testing (2.0.2)

I just downloaded version 2.0.2 of Gatling (congratulations!) and see there are a few more settings available than there were in previous versions (specifically the ahc section).

Assuming the values in there are the current defaults, I’d like to find out more about them.

Here is my situation:
I’m using Gatling to test a REST api hosted on Amazon.

From what I’ve learned of the SUT and my requirements, I need to be able to send a high volume of short-lived requests.

No browser-type behavior, authentication, or cookies are used, and virtual users/actors will each make only one request. Most of my scripts will need to target throughputs upwards of 400 RPS. I’d really like to be able to do this from a single machine, because the scripts will be used in a Continuous Delivery pipeline. I’ve been following some of the discussion threads on this group about “open” versus “closed” models, and I think my system fits the “open” model better. I have full control over the Gatling settings and scripts, but less control over the build server because it is shared with other users.
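For reference, here is a minimal sketch of what an open-model injection profile targeting ~400 RPS could look like in the Gatling 2 DSL (the class name, endpoint URL, and 60-second duration are made-up placeholders, not from this thread):

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class SingleRequestSimulation extends Simulation {

  // Hypothetical endpoint; replace with the actual REST resource.
  val httpConf = http.baseURL("http://my-api.example.com")

  // Each virtual user makes exactly one request, as described above.
  val scn = scenario("single REST call")
    .exec(http("get resource").get("/resource"))

  // Open model: inject 400 new virtual users per second,
  // independent of how fast the SUT responds.
  setUp(scn.inject(constantUsersPerSec(400) during (60 seconds)))
    .protocols(httpConf)
}
```

Since each user issues a single request, 400 new users per second translates directly into roughly 400 RPS offered load.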

What settings should I modify to get best tuning of Gatling for this kind of use case?

I found a section in the User Docs that addresses OS and TCP/IP tuning, but nothing about the gatling.conf settings.

http://gatling.io/docs/2.0.0-RC2/general/operations.html?highlight=tcp

If the gatling.conf settings are covered elsewhere, please advise.

It’s not a matter of RESTfulness, it’s a matter of expected load profile. Load profile is about 2 things:

  • request per second profile
  • connection profile: how many concurrent connections, whether your clients use keep-alive (and your SUT allows it, and for how long), and how many connections are opened and closed per second

The second point really depends on what your clients look like:

  • browsers: one connection pool per user
  • other APIs: one connection pool per API (if it uses keep-alive).
    Opening and closing tons of connections puts a heavy burden on the OS, because port recycling may not be fast enough.

If you’re willing to drop the per user connection pool and only focus on rps (which can be realistic, or not, depending on your use case), you can have one global connection pool: http://gatling.io/docs/2.0.2/http/http_protocol.html?highlight=shareconnections#connection-sharing
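Following the link above, enabling the global connection pool is a one-liner on the HTTP protocol configuration. A sketch (the base URL is a placeholder):

```scala
// shareConnections drops the per-virtual-user connection pool:
// all users draw keep-alive connections from one shared pool,
// which suits a pure RPS-driven profile.
val httpConf = http
  .baseURL("http://my-api.example.com")
  .shareConnections
```

This only makes sense when per-user connection behavior doesn’t need to be realistic, which seems to match the single-request-per-user scenario described earlier in the thread.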

Hm, ok. The main reason I mentioned REST is that all the scenarios in my requirements are single-resource calls.
From what I’ve seen, the emphasis is heavily on supporting RPS rather than a specific connection profile.

We are using ‘keep-alive’ , but I have not seen any requirements for connection profiles or concurrent users. The shared connection link might help in this situation. How about other settings in gatling.conf?

You could disable browser-like features, such as caching, cookies, etc., but I suspect it wouldn’t help much.
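For completeness, a couple of those browser-like features can be switched off on the HTTP protocol builder. A sketch (the base URL is a placeholder; I’m only showing switches I know exist in the 2.x DSL):

```scala
val httpConf = http
  .baseURL("http://my-api.example.com")
  .disableCaching  // ignore Expires/ETag-style response caching
  .disableWarmUp   // skip the initial warm-up request
```

As noted above, these are unlikely to move the needle much for a simple single-request-per-user workload.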