Soak tests and large/long runs - any tips?


I am trying to run some soak tests and scenarios that exercise auto-scaling behaviour.

Ideally I would like to test at scale for periods of 24 hours or more. I also have some tests that run for a few hours at up to 100,000 requests per second.

I did a run of a few hours at 100,000 requests per second; the simulation.log files were coming out at over 24 GB, and parsing them took ages.
Tsung, for example, logs much less; at least it is not logging every request.

Is there any way to tune or optimize for this, say to produce smaller simulation.log files or to support long runs at scale?
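One knob worth checking is the list of data writers in gatling.conf: the simulation.log file is produced by the file writer, so removing it from the list stops that file being written at all. Note that the HTML report is built from simulation.log, so you lose report generation if you drop it. This is a sketch; verify the exact key names against the gatling-defaults.conf shipped with your Gatling version:

```hocon
gatling {
  data {
    # Default is [console, file]. Dropping "file" disables simulation.log,
    # but the HTML report can then no longer be generated from that run.
    writers = [console]
  }
}
```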




This is typically a use case that FrontLine is designed for.


Thanks Stephane, that is not unreasonable. The product indeed looks interesting.

Is there any documentation/information on how to port an existing Gatling simulation written in Scala so that it executes via FrontLine? What is involved here?

I see there is an AWS Marketplace appliance available, so I am considering using that; I am just not sure how to take what we have and get it running under FrontLine.

Also, we currently use the Maven plugin to trigger our simulations, and there are a few things going on in the Gatling Scala code, like downloading a feeder file from S3.
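For context, the Maven setup being described typically looks like the following gatling-maven-plugin configuration. This is only a sketch; the version number, package name, and simulation class are placeholders, not values from this thread:

```xml
<plugin>
  <groupId>io.gatling</groupId>
  <artifactId>gatling-maven-plugin</artifactId>
  <!-- placeholder: pin to the plugin version matching your Gatling release -->
  <version>REPLACE_ME</version>
  <executions>
    <execution>
      <goals>
        <goal>test</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- placeholder: fully qualified name of your simulation class -->
    <simulationClass>com.example.MySimulation</simulationClass>
  </configuration>
</plugin>
```

Anything done in the simulation's own Scala code (such as fetching a feeder file from S3 before the run) travels with the simulation class itself, independently of how it is launched.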


Hi Alan, did you manage to find a way to ship things to AWS? I see the documentation for FrontLine on the Gatling website, but it does not really have detailed installation and migration details for the available AWS AMI.


Regarding Gatling FrontLine on the AWS Marketplace, you can find comprehensive documentation on the AWS page:
Look for "Additional Resources".