Scalable Services
Performance tests are a key source of information for developers. This post covers how to set up a performance test suite with Gatling, as well as how to write a Simulation from scratch to test your microservice.
Why Gatling?
Gatling is an open-source load-testing framework written in Scala. It is efficient and effective at measuring the performance of an API, and one of its great strengths is that you can write performance test scenarios much like you would write integration tests. Keep in mind that Gatling Simulations can also be integrated with continuous delivery tools such as Jenkins or GitLab. The benefit to the team is quick feedback on the changes we make to our API, so we can make sure it stays highly available before we promote it to production.
Setup
Let’s start by creating a plugins.sbt file in the project/ directory and adding the Gatling sbt plugin there. Here’s an example:
addSbtPlugin("io.gatling" % "gatling-sbt" % "3.0.0")
Once you’re done with that, enable the Gatling plugin in your build.sbt file (with the plugin enabled, you can later run your simulations with sbt gatling:test):
lazy val projectName = (project in file("."))
  .enablePlugins(GatlingPlugin)
  .settings(
    libraryDependencies ++= Seq(
      "io.gatling.highcharts" % "gatling-charts-highcharts" % "v" % Test,
      "io.gatling"            % "gatling-test-framework"    % "v" % Test
    )
  )
Simulation
Let’s demystify the Gatling simulation. The first thing you’re going to want to do is write a class that extends Simulation: extending Simulation is what turns our class into a Gatling performance test script. A Gatling simulation defines three things:
1.) HTTP protocol definition: We start off with the HTTP protocol definition, which is just defining the baseUrl. This will be prepended to the relative path in our scenario definition.
2.) Headers definition: The protocol also carries common headers, which will be added to each request you make as you ramp up your users.
3.) Scenario definition: A scenario consists of a series of actions the user will execute during the Gatling Simulation.
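Put together, the protocol and headers definitions might look like the following sketch (the URL, port, and header values here are placeholders, not values from a real service):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Hypothetical values — substitute your service's base URL and headers
val httpProtocol = http
  .baseUrl("https://localhost:8080/name-of-service") // prepended to every relative path
  .acceptHeader("application/json")                  // added to every request
  .authorizationHeader("Bearer <token>")             // placeholder auth header
```

Every request in every scenario that uses this protocol will inherit the base URL and these headers, so you only define them once.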
Request & Response Transformation
Once a scenario starts running, Gatling sends requests to the server and transforms the responses it receives.
How do I make a Scenario?
- A scenario is a series of actions that a single user will take.
- Scenarios consist of a series of execs, which define actions that should be taken.
- Multiple users will take these actions in parallel.
- Each user will have a session, which will hold all state for that user (timing, results, state, etc…).
- Each session has an attributes map, which is used to hold values needed by the scenario.
Checks
We can check a few things, such as the HTTP status code, response headers, and any data from the server response.
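Those three kinds of checks could look like the sketch below; the endpoint, header value, and JSON path are placeholders:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

val checkedRequest = exec(
  http("checkedRequest")
    .get("/unique-identifier")
    .check(
      status.is(200),                                // HTTP status
      header("Content-Type").is("application/json"), // a required header
      jsonPath("$.id").saveAs("entityId")            // data pulled from the response body
    )
)
```

A failed check marks the request as KO in the report, and saveAs stores the extracted value in the session for later requests to use.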
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class SimpleRequestSimulation extends Simulation {

  private val baseUrl = "https://localhost:PORT/name-of-service"

  // Placeholder payload; this could also be loaded from a file in the resources dir
  private val jsonPayload = """{"key": "value"}"""

  val httpConf = http
    .baseUrl(baseUrl)
    .acceptHeader("application/json")

  val putRequest = exec(
    http("requestName")
      .put("/unique-identifier")        // will make a PUT request to the endpoint
      .header("Authorization", "value") // pass in headers
      .body(StringBody(jsonPayload)).asJson
      .check(status.is(200))            // assert the HTTP status response code
  ).pause(100.milliseconds, 500.milliseconds)

  val simpleRequestScenario =
    scenario(getClass.getName).repeat(50) { putRequest }

  // The setUp below ramps 100 users over 300 seconds (5 minutes),
  // each repeating the request 50 times with a short pause in between
  setUp(
    simpleRequestScenario.inject(rampUsers(100).during(300.seconds))
  ).protocols(httpConf)
}
Load testing can monitor the system’s response times for each transaction over a set period of time. This kind of monitoring provides a lot of useful information, especially for the product’s stakeholders. It also brings attention to any problems in the software, allowing engineers to fix bottlenecks before they become more problematic.