After upgrading to Gatling 2, we're seeing some request mix-ups. We had to introduce a pause between the two requests below in order to prevent 505 responses from the server. BTW, we're not sharing connections.
11:08:36.956 [WARN ] i.g.h.a.AsyncHandlerActor - Request 'Download' failed : status().in(Range(200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210)) didn't match: expected Range(200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210) but found 505
Is the response flushed properly?
.foreach("${ids}", "id") {
  exec(
    http("Upload")
      .post(UPD_URL)
      .headers(mtomContentType)
      .elFileBodyPart("dummy-file-name.xml", "upload.ssp", """application/xop+xml; type="text/xml"""", "<root.message@cxf.apache.org>")
      .rawFileBodyPart("dummy-file-name.pdf", "small.pdf", "application/octet-stream", "<my-content-id@cxf.apache.org>")
      .check(regex("""(\d.)""").saveAs("id"))
  ).exitHereIfFailed
    .pause(12)
    .exec(
      http("Download")
        .post(INFO_URL)
        .elFileBody("download.ssp")
        .check(regex("xop"))
    )
}
Gatling 1.5.1 uses AHC 1.7.16 and Netty 3.6.6
Gatling master uses AHC master and Netty 3.6.6
The only thing in AHC that has changed since is the fix for the CRLF bug you reported: https://github.com/AsyncHttpClient/async-http-client/commit/e4c2e0e058285188eaaaaf7d412bf4bdcc460dbd
At this point, I have no idea why it suddenly fails. The 505 "HTTP Version Not Supported" status is very weird. Is there anything unusual on the server side that might help explain it?
The only funky thing here is rawFileBodyPart, which performs zero-copy (but that's something that hasn't changed in AHC). Could you try using an elFileBodyPart instead (it will perform useless parsing, so it's less performant, but it will use an intermediate in-memory byte array), please?
Instead of rawFileBodyPart, could you try the following:
import io.gatling.http.request.{ ByteArrayBodyPart, RawFileBodies }
.bodyPart(ByteArrayBodyPart("small.pdf", RawFileBodies.asBytes("dummy-file-name.pdf"), "application/octet-stream", Some("<my-content-id@cxf.apache.org>")))
Running through a proxy like Charles seems to shape traffic such that the requests are valid when they arrive at JBoss.
I also tried the code below; that also works. Probably some kind of flushing issue or something.
BTW, another issue I'm facing is that I'm running out of memory in Gatling when I submit lots of large files in parallel. Are the file contents being read into memory?
Stefan
> Running through a proxy like Charles seems to shape traffic such that the requests are valid when they arrive at JBoss.
> I also tried the code below; that also works. Probably some kind of flushing issue or something.
Looks so. I'll have to dig in; I'm not very familiar with this part of the AHC/Netty code.
> BTW, another issue I'm facing is that I'm running out of memory in Gatling when I submit lots of large files in parallel. Are the file contents being read into memory?
If you use a raw file body, no: the contents are fed directly from the filesystem into the socket.
But if you use a ByteArrayBodyPart, yes, the whole byte array is loaded into memory.
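The contrast can be sketched with the two body-part styles already shown in this thread (signatures taken from the snippets above; file names are the same placeholders):

```scala
// Streamed: the file is fed from the filesystem straight into the socket,
// so heap usage stays flat regardless of file size.
.rawFileBodyPart("dummy-file-name.pdf", "small.pdf",
  "application/octet-stream", "<my-content-id@cxf.apache.org>")

// In-memory: the whole file is materialized as a byte array first, so large
// files (and many concurrent ones) land on the heap before being written out.
.bodyPart(ByteArrayBodyPart("small.pdf", RawFileBodies.asBytes("dummy-file-name.pdf"),
  "application/octet-stream", Some("<my-content-id@cxf.apache.org>")))
```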
Note that these byte arrays are cached (I realize we have to introduce a parameter to disable this).
So if you're sending tons of different files, this is indeed harmful. Is that the case?
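As a rough illustration of the caching behavior described above (a hypothetical sketch of my own; `cacheEnabled` stands in for the parameter mentioned, and the real Gatling/AHC internals may look quite different):

```scala
import scala.collection.concurrent.TrieMap

// Hypothetical sketch: a byte-array cache with an off switch.
// With caching on, every distinct path keeps its bytes on the heap forever;
// with caching off, the file is re-read on each request and nothing is retained.
object FileBytesCache {
  private val cache = TrieMap.empty[String, Array[Byte]]

  def get(path: String, cacheEnabled: Boolean)(load: String => Array[Byte]): Array[Byte] =
    if (cacheEnabled) cache.getOrElseUpdate(path, load(path))
    else load(path)
}
```

With tons of *different* large files, the cached variant accumulates one byte array per path, which matches the OOM symptom described above.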
Until then, could you try the following code:
import io.gatling.http.request.ByteArrayBodyPart
import io.gatling.core.config.GatlingFiles
import io.gatling.core.session.{ Expression, Session }
import org.apache.commons.io.FileUtils

def bytes(path: String) = (session: Session) =>
  GatlingFiles.requestBodyResource(path).map(file =>
    FileUtils.readFileToByteArray(file.jfile))

.bodyPart(ByteArrayBodyPart("small.pdf", bytes("dummy-file-name.pdf"),
  "application/octet-stream", Some("<my-con...@cxf.apache.org>")))
This bypasses the cache.
I just realized that zero-copy isn’t used for multiparts, so AHC uses a ByteBuffer through a ByteArrayOutputStream.
Could you try the solution I proposed earlier, and please tell me if:
- it solves things
- it slows down the engine, or not
Thanks,
Stéphane
Sorry it took me so long. I've been struggling with … well … some performance issues …
It crashed instantly with the following error:
[ERROR] [06/14/2013 07:28:26.520] [GatlingSystem-akka.actor.default-dispatcher-3] [akka://GatlingSystem/user/$g] Stream closed
java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
at java.io.BufferedInputStream.read(BufferedInputStream.java:325)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
at org.apache.commons.io.FileUtils.copyInputStreamToFile(FileUtils.java:1512)
at io.gatling.core.config.ClassPathResource.jfile(Resource.scala:37)
at no.company.scenarios.MyScenario$$anonfun$bytes$1$$anonfun$apply$2.apply(MyScenario.scala:42)
MyScenario line 42:
def bytes(path: String) = (session: Session) => GatlingFiles.requestBodyResource(path).map(file => FileUtils.readFileToByteArray(file.jfile))
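The trace suggests ClassPathResource.jfile tries to copy an already-consumed classpath stream to a file on the second read. A possible workaround (an editorial sketch, not Gatling API: the `classpathBytes` helper is made up here, and it assumes the files are on the classpath and commons-io is available, as it already is in this setup):

```scala
import java.io.InputStream
import org.apache.commons.io.IOUtils

// Hypothetical workaround: open a fresh classpath stream on every call
// instead of going through ClassPathResource.jfile, so there is no
// already-closed stream to trip over.
def classpathBytes(path: String): Array[Byte] = {
  val is: InputStream = getClass.getClassLoader.getResourceAsStream(path)
  require(is != null, s"resource not found on classpath: $path")
  try IOUtils.toByteArray(is)
  finally is.close()
}
```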
If you create a new temporary release, I'll be able to battle-test it again before M3.
I'm travelling and won't be able to do it today, sorry. Maybe later this weekend.
Hi Stefan,
I haven't been able to investigate your body/request mix-up problem yet. If there's a problem there, it's not in Gatling, but in Async Http Client.
Until then, you can keep using the hack I gave you to turn the file into a byte array (it should be simpler now, as I added an option to disable caching, so that you don't get OOMs).
I'm thinking of producing a new timestamped snapshot later this week, then testing it for about a week before releasing M3.
Then we'll be focusing on documentation and tests (something we've wanted to do for a very long time), and I'll also try to clean up multipart handling in AHC. If I can come up with something, I'll deploy a custom AHC version in our Maven repo and will probably ask you to test it (you'd just override the version with a dependencyManagement entry on your side).
Cheers,
Stéphane