Active users and rps

While running a set of tests, I noticed that although I can ramp up the number of users, the number of calls going out stays at the same rate. I’m wondering how exactly the number of active users correlates to rps. You can refer to my two screenshots below to see the difference in requests made.

Hi Jason,

Of course, if you use rampUsers, you’ll have a constant arrival rate.
You’re probably looking for rampUsersPerSec.
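For illustration, here is a minimal sketch contrasting the two injection steps (Gatling 2.x DSL, matching the syntax used elsewhere in this thread; `scn` and `myProtocol` are placeholders for your own scenario and protocol):

`
import scala.concurrent.duration._
import io.gatling.core.Predef._

// Sketch only: scn and myProtocol are placeholders for your own definitions.
setUp(
  scn.inject(
    // Fixed total: 2000 users spread over 30 s (~67 new users/s), then no further arrivals.
    rampUsers(2000) over (30 seconds),
    // Arrival rate: new users injected at a rate that grows from 100/s to 1300/s over 30 s.
    rampUsersPerSec(100) to (1300) during (30 seconds)
  )
).protocols(myProtocol)
`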

Regards,

Stéphane,

I have the same problem. For example, I use:

`
rampUsers(2000) over (30 seconds)

`

So I would eventually get 2000 users. I tried:

`
rampUsers(2000) over (30 seconds),
rampUsersPerSec(100) to (1300) during(30 seconds)

`

to get 1300 users per second. I also tried:

`
scn.inject(
  nothingFor(1 seconds),
  rampUsers(2000) over (30 seconds)
).throttle(
  reachRps(1500) in (20 seconds),
  holdFor(20 seconds)
)

`

to try to enforce a high requests-per-second rate.

Everything has failed. Even if I have 15,000 active users, I only get as far as 1000 requests per second.

Stéphane, do you have any idea what I might be doing wrong?
I’m using the jms module.

On Tuesday, 26 May 2015 at 10:32:31 UTC+2, Stéphane Landelle wrote:

Hi,

JMS doesn’t support throttling for now.

Do you get 1,500 requests per second, or 1,500 responses per second? Are you sure that your system can withstand such a load, and that the messages are not piling up in the queue?

Can you provide a reproducer?

Cheers,

Hi,

Gatling manages to produce around 1000 requests per second (usually 850–1040), no matter how many users I give it (even with 3000 users). On the other end there is a simple responder that doesn’t do any processing - it just replies. When I look at my queue, I observe that there isn’t much of a build-up there. Messages are processed almost as fast as they arrive. JMS and the responder (consumer + producer) don’t seem to be the bottleneck.
Since I only have 1000 requests per second, I also only get the same number of responses per second. It’s not possible for the rest of the system (JMS + responder) to generate replies faster than that.

I am doing these tests with the following setup:

  • Gatling runs on Windows 8,
  • JMS and the responder are installed on Fedora Linux in a VirtualBox VM

Below are attached relevant pieces of code:

Scenario:

`
def jmsProtocol = jms
  .connectionFactoryName("ConnectionFactory")
  .url(mqHost)
  .credentials(mqLogin, mqPwd)
  .contextFactory(classOf[ActiveMQInitialContextFactory].getName)
  .listenerCount(listenersNumber)
  .matchByCorrelationID

val scn = scenario("JMS DSL test")
  // .during(testTimeInSeconds) {
  .repeat(200) {
    exec(jms("req reply testing").reqreply
      .queue(queueName)
      .replyDestination(queue(queueReply))
      .textMessage(msgBody)
      .check(checkBodyTextCorrect)
    )
  }

setUp(scn.inject(
  atOnceUsers(20),
  nothingFor(1 seconds),
  rampUsers(3000) over (20 seconds)
))
  .protocols(jmsProtocol)
  .pauses(disabledPauses)
  .assertions(
    global.responseTime.max.lessThan(maxWaitTimeForResponse),
    global.responseTime.percentile1.lessThan(avgWaitTimeForResponse), // percentile1 == 50th percentile == median
    global.successfulRequests.percent.is(100)
  )
  .maxDuration(60 seconds)

def checkBodyTextCorrect = simpleCheck {
  case tm: TextMessage => tm.getText == msgCompareBody
  case _ => false
}
`
Declarations of some fields were omitted for brevity.

And the responder (consumer + producer) that waits for Gatling’s messages (likewise, some fields were omitted for brevity):

`
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;

class QueueSystem {

    class Listener {
        final Destination dest;
        final MessageConsumer consumer;
        final Shouter shouter;

        public Listener(final Destination dest, final Session session, final Shouter shouter) throws JMSException {
            this.dest = dest;
            consumer = session.createConsumer(dest);
            this.shouter = shouter;
        }

        public void listen() throws JMSException {
            long start = System.currentTimeMillis();
            long count = 0;

            System.out.println("Waiting for messages…");

            while (true) {
                final Message msg = consumer.receive();

                if (msg instanceof TextMessage) {
                    String body = ((TextMessage) msg).getText();

                    if (count == 0) {
                        start = System.currentTimeMillis();
                    }
                    if (count % 400 == 0) {
                        System.out.println(String.format("Received %d messages.", count));
                    }
                    count++;

                    shouter.replyTo(msg);
                } else {
                    System.out.println("Unexpected message type: " + msg.getClass());
                }
            }
        }
    }

    class Shouter {
        final Destination destReply;
        final MessageProducer producer;
        final Session session;
        final String replyMsgText = "OK";

        public Shouter(final Destination destReply, final Session session) throws JMSException {
            this.destReply = destReply;
            this.session = session;
            producer = session.createProducer(destReply);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        }

        public void replyTo(final Message msg) throws JMSException {
            final TextMessage msgReply = session.createTextMessage(replyMsgText);
            msgReply.setJMSReplyTo(destReply);
            msgReply.setJMSCorrelationID(msg.getJMSCorrelationID());

            producer.send(msgReply);
        }
    }

    public void runOnQueue() throws JMSException {
        final ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://" + host + ":" + port);

        final Connection connection = factory.createConnection(user, password);
        connection.start();
        final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        final Destination dest = new ActiveMQQueue(destination);
        final Destination destReply = new ActiveMQQueue(reply);

        final Shouter shouter = new Shouter(destReply, session);
        final Listener listener = new Listener(dest, session, shouter);
        listener.listen();
    }

    public static void main(String[] args) throws JMSException {
        QueueSystem system = new QueueSystem();
        system.runOnQueue();
    }
}
`

Regards

On Wednesday, 27 May 2015 at 09:39:28 UTC+2, Stéphane Landelle wrote:

I don’t think I’ll have time to investigate JMS support any time soon, maybe some other community member can.

Ron could produce ~21,000 rps with Gatling over JMS: https://groups.google.com/forum/#!topic/gatling/EBA_vjCk8aA
Can you reproduce his numbers on your setup?

I have investigated the matter further. It seems that the major bottleneck in my test setup was Windows. Go figure…

When the tests are run on Linux in a VM, I get a greater number of requests/second, up to about 1900. That is a huge increase.
So the hardware and the operating system must be taken into consideration when running tests.

Of course I couldn’t replicate Ron’s numbers as he had a much more powerful machine.

I can mark this case as closed / resolved.

On Thursday, 28 May 2015 at 11:14:26 UTC+2, Stéphane Landelle wrote:

Cool :)

Antivirus? Firewall?

While I now understand how to increase my rps, I’m still not sure about the interaction between the number of requests going out and the number of active users. I’m still confused about how they relate to each other.

rps depends on the number of active users, pause durations and response times.
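As a rough back-of-the-envelope sketch of that relationship (the numbers below are hypothetical, not measured): in a closed workload each user only issues its next request once the previous response has arrived and any pause has elapsed, so adding users stops increasing rps once response times grow with load.

`
// Rough estimate only; assumes a closed model and hypothetical numbers.
val activeUsers     = 3000
val responseTimeSec = 3.0   // assumed average response time under load
val pauseSec        = 0.0   // pauses are disabled in the scenario above
val estimatedRps    = activeUsers / (responseTimeSec + pauseSec)  // ≈ 1000 requests/s
`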

Okay, so in my case, I increase the number of active users on the application, but my RPS doesn’t increase. Does that mean my response time might be my bottleneck? Are there any other causes that would lower my requests per second?

Okay, so in my case, I increase the number of active users on the application, but my RPS doesn't increase. Does that mean my response time might be my bottleneck?

That's the most likely explanation: your application can't handle more concurrent requests and connections.

Are there any other causes that would lower my requests per second?

Only if you force this behavior with throttling.
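For completeness, a minimal sketch of what forced throttling looks like (Gatling 2.x DSL; as noted earlier in this thread, the JMS module does not support it, so this only applies to protocols such as HTTP):

`
import scala.concurrent.duration._
import io.gatling.core.Predef._

// Sketch only: throttling caps the outgoing request rate no matter how many
// users are active; requests above the cap are delayed until it allows them.
setUp(
  scn.inject(rampUsers(2000) over (30 seconds))
).throttle(
  reachRps(500) in (10 seconds),  // ramp the ceiling up to 500 requests/s
  holdFor(1 minute)               // then hold that ceiling
)
`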

Unfortunately I am unable to check whether the low rps was due to the firewall or antivirus, because I cannot disable them on that machine, even for tests.

On Thursday, 28 May 2015 at 16:08:23 UTC+2, Stéphane Landelle wrote: