Azure Kubernetes node pool

Hi all,

I’m using the Gatling Enterprise self-hosted version, and I’m trying to configure a Kubernetes pool in our AKS cluster (Azure) based on node pool settings like:

I’ve followed this hint from the documentation:

  • If using the NodePort mode, firewall rules must be added so that Gatling Enterprise can reach Kubernetes nodes on the configured Kubernetes NodePort range (by default, 30000-32767).

in terms of firewall settings on the dedicated VMSS, but without any success.
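
For reference, this is the kind of inbound rule I mean on the node pool’s NSG (the resource names and source address below are placeholders):

az network nsg rule create \
  --resource-group <node-pool-resource-group> \
  --nsg-name <node-pool-nsg> \
  --name allow-gatling-nodeports \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes <gatling-enterprise-ip> \
  --destination-port-ranges 30000-32767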

I’m still getting the following error message:

[15:43:56,717] Couldn’t connect over HTTP to k8s-gatling-runner/10.240.0.200:30808 on port 30808: j.n.SocketTimeoutException: connect timed out. 3 remaining tries, please wait.
[15:44:07,743] Couldn’t connect over HTTP to k8s-gatling-runner/10.240.0.200:30808 on port 30808: j.n.SocketTimeoutException: connect timed out. 2 remaining tries, please wait.
[15:44:18,766] Couldn’t connect over HTTP to k8s-gatling-runner/10.240.0.200:30808 on port 30808: j.n.SocketTimeoutException: connect timed out. 1 remaining tries, please wait.

Hi,

We’re going to drop the NodePort mode in the upcoming release. Doing things this way is typically forbidden by security teams. I recommend that you go with the Service Ingress mode.

Do you mean the Ingress / Route mode?

Yes, sorry. Ingress is for k8s, Route is for OpenShift. Go with Ingress.

What would be a valid ingress route?
Should I create the Ingress beforehand?

I assume that’s because of an API version mismatch:

expected: v1beta1
provided: v1

Would you agree with my assumption?

We’ve completely revamped our k8s support in 1.18.0 (publishing to the Azure Marketplace is in progress): Gatling Enterprise Self-Hosted - Gatling Enterprise 1.18 Highlights

I recommend that you upgrade, see Gatling Enterprise Self-Hosted - Installation on a Marketplace

Hi,

We’ve revamped our k8s support in Gatling Enterprise 1.18.0, see Gatling Enterprise Self-Hosted - Gatling Enterprise 1.18 Highlights.

You should probably upgrade; I think it would help a lot.

… I’ve tried that out, but without success:


I’m really upset since I went to the trouble of reinstalling!

The Azure AKS settings are as follows:

Are you sure that your virtual machine can connect to https://toil-test-cluster-dns-42288bca.hcp.westeurope.azmk8s.io, and that this hostname is not only valid inside your AKS cluster? Have you tried doing a curl on this URL from your Gatling Enterprise instance?
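
For example, something like this, run from the Gatling Enterprise host (even a 401/403 response would at least prove connectivity):

curl -vk https://toil-test-cluster-dns-42288bca.hcp.westeurope.azmk8s.io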

… as far as I can see, the currently installed version of Gatling Enterprise on Azure is:

but the advertised feature(s) only become available starting with 1.18.x?

… authorization issues should not be assumed, because the ‘not found’ hint says the path doesn’t exist.

Hello @reschrei,

Indeed… Our latest version 1.18.0 is not available on the Azure marketplace at the moment.
We’re in the process of releasing it.

In version 1.18.0, we target the non-deprecated Ingress API under networking.k8s.io/v1, which seems to be the issue here (as you stated before).
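
For reference, this is roughly the shape the networking.k8s.io/v1 API expects (all names, the host, the class and the port below are just placeholders, not what Gatling Enterprise generates); older releases still targeted the v1beta1 schema, which recent Kubernetes/AKS versions no longer serve:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gatling-injector-example
spec:
  ingressClassName: traefik
  rules:
    - host: injector.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gatling-injector-example
                port:
                  number: 9999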

I’ll get back to you when the version is available. We’re very sorry for the inconvenience with this first upgrade.

thx for the clarifications

FYI, the validation on Azure just failed after a 6-day wait (quite often it randomly fails for some unknown reason). We’re doing what we can…

Thanks for the update. In the meantime I’ll go with the latest version and the on-premises VM-based option!

After multiple retries due to Azure marketplace bugs, Gatling Enterprise 1.18.0 is finally available there.

OK, I got it working now, including the k8s injector based on the Traefik ingress controller.

The report output during simulations shows some exceptions regarding an invalid ActorName:

Any suggestions?

If I run the same simulation on the Local (Demo purpose Only) pool, I don’t get any issues!

I suppose the k8s injector image might be a little bit buggy!

No, this is unrelated. The failure happens in Gatling and has nothing to do with the deployment kind.

Which version of Gatling are you using?
And which protocol are you using? WebSocket? JMS? MQTT?
Would it be possible for you to share a reproducer, maybe privately?

package computerdatabase

import ComputerDatabase.{deviceMessageToByteArray, fromFile}
import com.tsystems.toil.device.toil.DeviceMessage
import io.circe._
import io.circe.optics.JsonPath.root
import io.circe.parser._
import io.gatling.core.Predef._
import io.gatling.core.structure._
import io.gatling.mqtt.Predef._
import scalapb.json4s.JsonFormat

import scala.concurrent.duration._
import scala.util.Random

class ToilMqttSimulation extends Simulation {

  before {
    println("Simulation is about to start!")
  }
  after {
    println("Simulation is finished!")
  }

//  manualVerification()


  // MQTT protocol configuration: broker host/port taken from environment variables, TLS enabled,
  // clean session, and credentials resolved per virtual user from the feeder
  val mqttProtocol_test_ram = mqtt
    .broker(System.getenv("MQTT_HOST").toString, System.getenv("MQTT_PORT").toInt)
    .useTls(true)
    .cleanSession(true)
    .credentials("#{user}", "#{password}")

  // Random feeder: one device record (user, password, topic, deviceId, version, fName) per virtual user,
  // read from an environment- and product-specific SSV file
  val feeder = ssv(s"toil-messages-${System.getenv("TARGET_ENV").toString}-${System.getenv("PRODUCT").toString}.ssv").random


  // Connect, subscribe to the device's "out" topic, publish a message with a randomized version
  // to the "in" topic, then await the correlated reply and check the returned version;
  // the whole chain is attempted up to 2 times (tryMax)
  val chain_pub_sub: ChainBuilder =
    tryMax(2, "chain-retries 2") {
      exec(mqtt("Connecting").connect)
        .exec(mqtt("Subscribing")
          .subscribe("TOIL/#{topic}/1/out/#{deviceId}")
          .qosAtLeastOnce
        )
        .pause(10.milliseconds, 70.milliseconds)
        .exec(
          mqtt("Publishing (correlation/check)")
            .publish("TOIL/#{topic}/1/in/#{deviceId}")
            .message(
              ByteArrayBody(session => {
                val body = fromFile(s"/messages/${session("topic").as[String]}/${session("fName").as[String]}")
                val json: Json = parse(body).getOrElse(Json.Null)
                val randomVersion: Json => Json = System.getenv("PRODUCT") match {
                  case "levelmeter" | "performance" => root.levelMeterEvent.version.int.modify(_ + Random.nextInt(1000))
                  case "dpdhl" | "lct" => root.trackerEvent.version.int.modify(_ + Random.nextInt(1000))
                  case "sb" => root.buttonPressedEvent.version.int.modify(_ + Random.nextInt(1000))
                }
                val modifiedJson = randomVersion(json).toString()
                //            println(s"modified String: ${modifiedJson}")
                deviceMessageToByteArray(JsonFormat.fromJsonString[DeviceMessage](modifiedJson))
              }))
            .qosAtLeastOnce
            .await(FiniteDuration(System.getenv("EXPECT_TIMEOUT").toLong, SECONDS), "TOIL/#{topic}/1/out/#{deviceId}")
            .check(
              bodyBytes.transform {
                (is) => {
                  val version = DeviceMessage.parseFrom(is).deviceConfiguration.orNull.version
                  println(s"version read: ${version}")
                  version
                }
              }.is(session => session("version").as[Int])
            )
        )
        .pause(100.milliseconds)
    }.exitHereIfFailed


  // Scenario: each virtual user feeds one record and runs the publish/subscribe chain inside a group,
  // exiting the block on failure
  val scn_test_ram: ScenarioBuilder = scenario(s"MQTT Test -> ${System.getenv("TARGET_ENV").toString} devices")
    .feed(feeder)
    .group( s"subscribe-group-${System.getenv("TARGET_ENV").toString}") {
      exitBlockOnFail(chain_pub_sub)
    }

  // Injection profile: a constant number of concurrent users for INTERVAL seconds, throttled
  // (ramp to 10 rps, hold, then jump to 20 rps), not sharded across injectors;
  // the run fails if more than 5% of requests fail
  setUp(
    scn_test_ram.inject(constantConcurrentUsers(System.getenv("CONCURRENT_USER").toInt)
      .during(FiniteDuration(System.getenv("INTERVAL").toLong, SECONDS))).noShard
      .throttle(
        reachRps(10).in(120),
        holdFor(5.minute),
        jumpToRps(20),
        holdFor(6.hours)
      ).noShard
      .protocols(mqttProtocol_test_ram))
    .assertions(forAll.failedRequests.percent.lte(5))
}