
Distributed load testing using Kubernetes

By Tracy Black, 2015-12-19

    Load testing is one of the most important aspects of backend infrastructure development. It not only demonstrates how a system performs under realistic demand, but also, by simulating user and device behavior, helps you identify and understand any potential system bottlenecks before the application is deployed to production.

    Dedicated test infrastructure, however, can be expensive and difficult to maintain. Such equipment is typically a one-time investment sized for a specific level of performance, which makes it hard to expand load testing beyond that initial investment. This can limit experimentation, lower the development team's productivity, and leave applications insufficiently tested before they are deployed to production.

    The solution

    Distributed load testing using cloud computing is an attractive option for a variety of test scenarios. Cloud infrastructure platforms are elastic and highly scalable, so it is easy to generate large volumes of simulated client traffic against an application or service. In addition, the pricing model of cloud computing fits very well with the elastic nature of load testing.

    Containers, which offer a lightweight alternative to running full virtual machine instances, are a perfect match for rapidly scaling out simulated clients. Because containers are lightweight, simple to deploy, immediately available, and well suited to single tasks, they are an excellent choice for running load test clients.

    Google Cloud Platform is an excellent environment for container-based distributed load testing. The platform provides first-class container support through Google Container Engine, which is built on the open source container cluster manager Kubernetes. Container Engine lets you quickly provision the infrastructure and manage the deployment of the application and its resources.

    This solution illustrates how to use Container Engine to deploy a distributed load testing framework. The framework uses multiple containers to generate load against a simple web application that exposes a REST-based API. Although the solution tests a simple web application, the same pattern can be used to create more complex load testing scenarios, such as gaming or Internet-of-Things (IoT) applications. This solution discusses the general architecture of a container-based load testing framework; for a step-by-step walkthrough of the sample architecture, see the tutorial linked at the end of this article.

    This solution uses Container Engine to create the load testing framework. The system under test is a simple web application that exposes a REST API. An existing load testing framework is used to model the API interactions described below. After the system under test has been deployed, Container Engine is used to deploy the distributed load testing tasks.

    The system under test

    In software testing terminology, the system under test is the system that the tests are designed to evaluate. In this solution, the system under test is a small web application deployed to Google App Engine. The application exposes basic REST-style endpoints that capture incoming HTTP POST requests (the incoming data is not persisted). In a real-world scenario, web applications can be far more complex and include many additional components and services, such as caching, messaging, and persistence; this solution does not consider those complexities. For more information about building scalable web applications on Google Cloud Platform, see Building Scalable and Resilient Web Applications.

    The source code for the sample application is available in the tutorial linked at the end of this article.

    Sample workload

    The sample application is modeled on the backend service component found in many Internet-of-Things deployments: devices first register with the service, then begin reporting metrics or sensor readings, while periodically re-registering with the service.

    The figure below shows a common interaction with such a backend service component.

    This interaction can be modeled using Locust, a distributed, Python-based load testing tool. Locust can distribute requests across multiple target paths; for example, it can send requests to the /login and /metrics target paths. There are also many other load generation packages to choose from, depending on the needs of the project, including JMeter, Gatling, and Tsung.

    The workload is based on the interaction described above and is modeled as a set of tasks in Locust. To approximate real-world clients, where for example thousands of clients make requests at the same time, each Locust task is weighted.

    Container-based computing

    From an architectural perspective, the distributed load testing deployment has two main components: the Locust container image, and the container orchestration and management mechanism.

    The Locust container image is a Docker image that contains the Locust software. The Dockerfile can be found in the associated Git repository (see the tutorial); it uses a Python base image and includes scripts to start the Locust service and execute the tasks.

    This solution uses Google Container Engine for container orchestration and management. Container Engine, which is based on the open source framework Kubernetes, incorporates Google's many years of experience running, orchestrating, and managing container deployments. Container-based computing lets developers focus on their applications instead of spending effort on tedious hosting-environment deployment and integration. Containers also make load tests more portable: once containerized, the application can run in multiple cloud environments. Container Engine and Kubernetes introduce several concepts for container orchestration and management.

    The container cluster

    A container cluster consists of a group of Compute Engine instances that provide the foundation for your application. In the Container Engine and Kubernetes documentation, these instances are called nodes. A cluster comprises a single master node and one or more worker nodes. The master and the workers all run on Kubernetes, which is why container clusters are sometimes called Kubernetes clusters. For more information about clusters, see the Container Engine documentation.
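
    For reference, a container cluster like the one used in this solution can be created with the Cloud SDK. The cluster name below is illustrative; the node count and machine type match the four n1-standard-1 nodes described later in this solution:

    $ gcloud container clusters create locust-cluster \
        --num-nodes=4 \
        --machine-type=n1-standard-1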

    Pods

    A pod is a group of tightly coupled containers that should be deployed together. Some pods contain only a single container; in this solution, for example, each Locust container runs in its own pod. Often, however, a pod contains multiple containers that are executed together; in this solution, Kubernetes uses a pod with three containers to provide DNS services.

    In the first container, SkyDNS provides DNS service functionality. SkyDNS relies on a key-value store named etcd, which is encapsulated in a second container. In the pod's third container, kube2sky acts as a bridge between Kubernetes and SkyDNS.

    Replication controllers

    A replication controller ensures that a specified number of pod "replicas" are running at any given time. If there are too many, the replication controller shuts some of them down; if there are too few, it starts new ones. This solution has three replication controllers: one ensures that a single DNS server pod stays alive; another maintains the single Locust master pod; and the third ensures that exactly 10 Locust worker pods are running at the same time.

    Services

    A specific pod can disappear for a variety of reasons, including node failure or deliberate node disruption for updates and maintenance. In other words, the IP address of a pod does not provide a reliable interface to it. A more effective approach is to use an abstract representation of that interface that never changes, even if the underlying pod disappears and is replaced by a new pod with a different IP address. A Container Engine service provides this kind of abstract interface by defining a logical set of pods and a policy for accessing them. In this solution, several services represent pods or sets of pods: one service represents the DNS server pod, another represents the Locust master pod, and a third represents the 10 Locust worker pods.

    The figure below shows the contents of the master and worker nodes:

    Deploying the system under test

    This solution uses Google App Engine to run the system under test. To deploy it, register for a Google Cloud Platform account, install the Google Cloud Platform SDK, and then deploy the sample web application with a single command. The source code can be found in the tutorial linked at the end of this article.
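
    The exact deployment command depends on how the sample application is laid out, but with a current Cloud SDK it generally looks something like the following (the app.yaml path is a placeholder, not taken from the tutorial):

    $ gcloud app deploy app.yaml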

    Deploying the load testing tasks

    To deploy the load testing tasks, first deploy the load test master and then a group of 10 load testing workers. With that many workers, you can generate a substantial amount of traffic for testing purposes. Keep in mind, however, that generating excessive amounts of traffic against external systems can resemble a denial-of-service attack; be sure to review the Google Cloud Platform Terms of Service and the Google Cloud Platform user agreement.

    Load test master

    The first component to deploy is the Locust master, which orchestrates the execution of the load testing tasks. It is deployed as a replication controller with a single replica because we need only one master. A replication controller is useful even when deploying a single pod, because it ensures high availability.

    The replication controller configuration specifies several elements, including the name of the controller (locust-master), labels (name: locust, role: master), and the ports the container needs to expose (8089 for the web interface, 5557 and 5558 for communicating with the workers). This information is later used to configure the Locust worker controller. The following snippet contains the port configuration:

    ...
    ports:
      - name: locust-master-web
        containerPort: 8089
        protocol: TCP
      - name: locust-master-port-1
        containerPort: 5557
        protocol: TCP
      - name: locust-master-port-2
        containerPort: 5558
        protocol: TCP
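
    The port list above sits inside the container spec of the locust-master replication controller. As a rough sketch of how the elements described earlier fit together (the container image path is a placeholder, not the tutorial's actual image):

    kind: ReplicationController
    apiVersion: v1
    metadata:
      name: locust-master
      labels:
        name: locust
        role: master
    spec:
      replicas: 1
      selector:
        name: locust
        role: master
      template:
        metadata:
          labels:
            name: locust
            role: master
        spec:
          containers:
            - name: locust
              image: gcr.io/PROJECT/locust-tasks  # placeholder image path
              ports:
                - name: locust-master-web
                  containerPort: 8089
                  protocol: TCP
                ...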

    Next, deploy a service to ensure that the exposed ports are accessible to other pods via hostname:port within the cluster, and referenceable via a descriptive port name. The service allows the Locust workers to easily discover and reliably communicate with the master, even if the replication controller replaces the master pod with a new one after a failure. The Locust master service also includes a directive to create an external forwarding rule at the cluster level, which gives external traffic the ability to access the cluster resource. Note that you still need to create a firewall rule to allow complete external access to the exposed ports.
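
    A minimal sketch of such a service, assuming the labels and ports shown above; the type: LoadBalancer field is what asks Container Engine to create the external forwarding rule:

    kind: Service
    apiVersion: v1
    metadata:
      name: locust-master
      labels:
        name: locust
        role: master
    spec:
      type: LoadBalancer
      selector:
        name: locust
        role: master
      ports:
        - name: locust-master-web
          port: 8089
          targetPort: 8089
          protocol: TCP
        - name: locust-master-port-1
          port: 5557
          targetPort: 5557
          protocol: TCP
        - name: locust-master-port-2
          port: 5558
          targetPort: 5558
          protocol: TCP

    A firewall rule allowing inbound traffic to port 8089 on the cluster's nodes can then be added separately, for example with gcloud compute firewall-rules create.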

    After the Locust master is deployed, you can access the web interface using the public IP address of the external forwarding rule. After the Locust workers are deployed, you can start the simulation and view aggregate statistics through the Locust web interface.
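
    The external IP address assigned to the forwarding rule can be looked up from the command line, for example:

    $ kubectl get services locust-master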

    Load testing workers

    The next component to deploy is the Locust workers, which execute the load testing tasks. The Locust workers are deployed by a single replication controller that creates 10 pods, spread out across the Kubernetes cluster. Each pod uses environment variables to control important configuration information, such as the hostname of the system under test and the hostname of the Locust master. How the worker replication controller is configured is described in the tutorial below. The configuration contains the name of the controller (locust-worker), labels (name: locust, role: worker), and the environment variables described above. The following snippet contains the configuration for the name, labels, and number of replicas:

    kind: ReplicationController
    apiVersion: v1
    metadata:
      name: locust-worker
      labels:
        name: locust
        role: worker
    spec:
      replicas: 10
      selector:
        name: locust
        role: worker
      ...
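
    The environment variables mentioned above are set on the worker container inside the controller's pod template, which continues the configuration shown above. The variable names and the container image path below are illustrative placeholders; their purpose is to tell each worker where to find the Locust master and the hostname of the system under test:

      template:
        metadata:
          labels:
            name: locust
            role: worker
        spec:
          containers:
            - name: locust
              image: gcr.io/PROJECT/locust-tasks  # placeholder image path
              env:
                - name: LOCUST_MODE      # illustrative variable names
                  value: worker
                - name: LOCUST_MASTER
                  value: locust-master
                - name: TARGET_HOST
                  value: https://PROJECT.appspot.com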

    No additional service needs to be deployed for the Locust workers, because the worker pods do not need to support any inbound communication; they connect directly to the Locust master pod.

    The figure below shows the relationship between the Locust master and the Locust workers.

    After the replication controller deploys the Locust workers, you can return to the Locust master web interface and check that the number of slaves corresponds to the number of deployed workers.
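
    You can also verify the worker pods directly from the command line by filtering on the labels defined in the replication controller:

    $ kubectl get pods -l name=locust,role=worker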

    Executing the load testing tasks

    Starting a load test

    The Locust master web interface lets you run load testing tasks against the system under test, as shown below:

    To begin, specify the number of users to simulate and the rate at which the users should be spawned. Next, click Start to begin the simulation. As time progresses and users continue to spawn, you can watch statistics begin to aggregate for simulation metrics such as the number of requests and requests per second, as shown below:

    To stop the simulation, simply click Stop and the test will terminate. The complete results can be downloaded in tabular form.

    Scaling up the clients

    Scaling up the number of simulated users requires an increase in the number of Locust worker pods. As specified in the Locust worker controller, the replication controller deploys 10 Locust worker pods. To increase the number of pods, Kubernetes offers the ability to resize replication controllers without redeploying them. For example, you can change the number of worker pods with the kubectl command-line tool. The following command scales the pool of Locust worker pods to 20:

    $ kubectl scale --replicas=20 replicationcontrollers locust-worker

    After you issue the scale command, wait a few minutes for all of the pods to be deployed and started. After all the pods have started, return to the Locust master web interface and restart the load test.

    Resources and cost

    This solution uses four Container Engine nodes, each backed by a Compute Engine VM of the standard n1-standard-1 type. You can use the Google Cloud Platform pricing calculator to estimate the monthly cost of running the container cluster. As mentioned above, you can customize the size of the container cluster to fit your needs; the pricing calculator can help you customize the cluster characteristics and estimate the resulting increase or decrease in cost.

    Next steps

    You have now seen how to use Container Engine to create a simple load testing framework for a web application. Container Engine lets you specify the number of nodes on which to build the load testing framework's containers. It also lets you organize the load testing workers into pods and specify how many pods Container Engine should keep running.

    You can use this same pattern to create load testing frameworks for a variety of environments and applications. For example, you can use it to create load testing frameworks for messaging systems, data stream management systems, and database systems. You can create new Locust tasks or even switch to a different load testing framework.

    Another way to extend this framework is to customize the metrics that are collected. For example, you might want to measure requests per second, monitor response latency as the load increases, or examine the response failure rate and the types of errors. Several monitoring options are available, including Google Cloud Monitoring.
