Stress a server with big file uploads over a slow TCP connection

Hi

I am trying to stress a server with big uploads (a 10MB file per scenario) over slow connections (TCP delay set to 100 ms and TCP buffer size set to 4KB) with Gatling, using the following setup:

setUp(
  scn.inject(
    constantUsersPerSec(1) during(30),
    constantUsersPerSec(2) during(30),
    constantUsersPerSec(3) during(30),
    constantUsersPerSec(4) during(30),
    constantUsersPerSec(5) during(30),
    nothingFor(30)
  )
)
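(That profile injects 30 × (1 + 2 + 3 + 4 + 5) = 450 users over 150 seconds, so with each upload taking tens of seconds on the throttled link, many 10MB transfers are in flight at once.)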

Because of the slow TCP connections, each 10MB file takes several tens of seconds to send completely, and the test runs out of memory (Java heap space) every time. Is there any way in Gatling to make the test hold in memory only the data it is sending at any given moment, as opposed to the whole file?

Thanks!

Leire

Have you tried increasing the heap size?

You have to realize that you don’t provide any kind of information that would help with this.
First, in the Group’s rules, we ask that you at least provide the Gatling version (amongst other things to check).

Then, you should share your simulation so we know how you perform your uploads.
And finally, as you have an OOM, providing a heap dump would also help.

Sorry about the missing information, I thought the use case itself might bring some ideas to mind.

Gatling version is 2.1.3 and this is how I am performing the uploads:

import java.io.InputStream
import scala.util.Random

val fileSize = 10000000

val scn = scenario("Upload 10M Benchmark")
  .exec(http("Upload all")
    .put("${URI}")
    // Generate the 10MB body one random byte per read(), so the payload
    // never has to exist in memory as a whole.
    .body(InputStreamBody(_ => new InputStream() {
      var byteNumber = 0

      override def read(): Int = {
        byteNumber = byteNumber + 1
        if (byteNumber < (fileSize + 1)) Random.nextInt(255) else -1
      }
    }))
    .header("Content-Range", s"bytes 0-$fileSize/$fileSize")
    .check(status.is(204)))

I have tried increasing the heap size up to 3G, which is as high as it can go on that box, but it still runs out of memory. And even if it had worked, it would only hide the problem: as soon as the TCP delay or the number of users was increased, the test would fall over again.

One other thing to note is that this works with smaller file sizes (and fails earlier with larger ones). I assume Gatling is building up each file upload entirely in memory, and (due to the really low transmission speeds we are forcing) these buffered uploads accumulate over time. The initial reason we went with an InputStream generator was that we were hoping Gatling would stream the content on demand rather than prebuild the body, but from what we can see that does not seem to be happening.
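One way to check that suspicion (a hypothetical diagnostic sketch, not something we have run) would be to wrap the generated stream and log how fast it is consumed; if the whole 10MB is read within the first moments of the request, the body is being buffered ahead of the socket rather than streamed:

import java.io.InputStream

// Hypothetical wrapper: counts bytes and timestamps reads, so the rate at
// which Gatling drains the body can be compared with the actual wire speed.
class LoggingInputStream(underlying: InputStream, label: String) extends InputStream {
  private val start = System.currentTimeMillis()
  private var bytesRead = 0L

  override def read(): Int = {
    val b = underlying.read()
    if (b >= 0) {
      bytesRead += 1
      if (bytesRead % 1000000 == 0) // log every megabyte consumed
        println(s"$label: $bytesRead bytes read after ${System.currentTimeMillis() - start} ms")
    }
    b
  }
}

Plugging it in via InputStreamBody(_ => new LoggingInputStream(generatedStream, "upload")) — where generatedStream stands for the random-byte stream above — would show whether the stream is drained in milliseconds while the socket takes tens of seconds.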

We analysed the heap dump and it shows a huge number of byte arrays of 8216 bytes. Attached is an example of the path to root for one of these arrays.

Thanks again!
Leire

path-to-root.txt (536 KB)

I’m not sure we can do anything about this.
The underlying Netty component (ChunkedInput/ChunkedWriteHandler) doesn’t seem to have any sort of back-pressure mechanism.
Asking on the Netty IRC.
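For reference, a minimal sketch of the Netty component in question (hypothetical wiring, assuming Netty 4.x, not Gatling’s actual internals): ChunkedWriteHandler pulls fixed-size chunks from a ChunkedInput such as ChunkedStream and queues them as writes, and per the above there is no back-pressure tying that read rate to how fast the peer drains the socket. ChunkedStream’s default chunk size is 8KB, which would be consistent with the 8216-byte arrays seen in the heap dump.

import java.io.InputStream
import io.netty.channel.ChannelInitializer
import io.netty.channel.socket.SocketChannel
import io.netty.handler.stream.{ChunkedStream, ChunkedWriteHandler}

// ChunkedWriteHandler must sit in the pipeline for ChunkedInput bodies
// to be written chunk by chunk.
class UploadPipeline extends ChannelInitializer[SocketChannel] {
  override def initChannel(ch: SocketChannel): Unit =
    ch.pipeline().addLast(new ChunkedWriteHandler())
}

// Writing the body streams it in 8192-byte chunks; the handler keeps
// pulling from the InputStream without feedback from the remote side,
// so on a slow connection the queued chunks accumulate on the heap.
def writeBody(ch: SocketChannel, body: InputStream): Unit =
  ch.writeAndFlush(new ChunkedStream(body, 8192))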

@Leire: FYI, I opened an issue on Netty: https://github.com/netty/netty/issues/3413. Don’t expect anything fast though.

This use case is very similar to what I am trying to do now.
Was anyone able to implement it?
For the InputStream it makes sense to use this implementation:
https://commons.apache.org/proper/commons-io/javadocs/api-2.5/org/apache/commons/io/input/NullInputStream.html
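For what it’s worth, a minimal sketch of how that could plug into the simulation above (assuming the same Gatling 2.x DSL): NullInputStream serves the requested number of bytes without ever allocating the payload, so generating the body costs almost no heap.

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import org.apache.commons.io.input.NullInputStream

val fileSize = 10000000

val scn = scenario("Upload 10M Benchmark")
  .exec(http("Upload all")
    .put("${URI}")
    // NullInputStream yields fileSize zero bytes, one read() at a time.
    .body(InputStreamBody(_ => new NullInputStream(fileSize)))
    .header("Content-Range", s"bytes 0-$fileSize/$fileSize")
    .check(status.is(204)))

Note this only removes the cost of generating the bytes; if the HTTP client still buffers chunks faster than the slow socket drains them, the heap pressure described above remains.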