Koding Books

Professional, free coding tutorials

Performance testing on Kubernetes using Kubemark

In the world of container orchestration, Kubernetes has emerged as a leading platform, providing robust solutions for deploying, scaling, and managing containerised applications. However, as with any system, understanding its performance under varying loads is crucial. This is where performance testing comes into play.

Performance testing determines a system’s responsiveness, throughput, reliability, and scalability under a given workload. It’s essential for identifying bottlenecks, understanding system capacity, and ensuring reliable and consistent performance.

In the Kubernetes ecosystem, one tool stands out for performance testing: Kubemark. Kubemark is a pseudo cluster that Kubernetes provides for scalability testing. It simulates the behaviour of a cluster by creating ‘hollow’ nodes and pods, essentially shells that mimic the behaviour of actual nodes and pods. This allows users to test the performance of the Kubernetes master components under high-density scenarios without the need for a large number of actual nodes and pods.

Kubemark is particularly useful for testing the control plane’s performance, which includes components like the API server, scheduler, and controller manager. By simulating many nodes and pods, Kubemark can provide valuable insights into how these components will behave under stress.

In the following sections, we will delve deeper into how Kubemark works, how to set it up, and how to interpret the results from a Kubemark test run. Whether you’re a Kubernetes administrator looking to optimise your clusters or a developer aiming to understand the performance implications of your deployments, this guide will provide you with the knowledge you need to leverage Kubemark effectively.

Setting up Kubemark

Setting up and running performance tests using Kubemark involves several steps. Here’s a general guide:

  1. Set up a Kubernetes cluster: Kubemark runs on a real Kubernetes cluster, so the first step is to set up a Kubernetes cluster. You can use any Kubernetes setup guide to do this.
  2. Check out the Kubernetes repository: Kubemark is part of the Kubernetes project, so you must check out the Kubernetes repository on your local machine.

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes

  3. Create a Kubemark cluster: The next step is to create a Kubemark cluster. This is done using the start-kubemark.sh script located in the cluster/kubemark/ directory of the repository. You need to specify the number of nodes you want to simulate using the NUM_NODES environment variable.
export KUBERNETES_PROVIDER=gce
export KUBE_GCE_ZONE=us-central1-b
export MASTER_SIZE=n1-standard-4
export NODE_SIZE=n1-standard-8
export NUM_NODES=500
export KUBE_ENABLE_CLUSTER_MONITORING=none
export KUBE_ENABLE_CLUSTER_LOGGING=false
export KUBE_ENABLE_NODE_LOGGING=false
export TEST_CLUSTER_LOG_LEVEL=--v=1
export TEST_CLUSTER_RESYNC_PERIOD=12h
./cluster/kubemark/start-kubemark.sh

  4. Run performance tests: Once the Kubemark cluster is up and running, you can run performance tests using the ginkgo-e2e.sh script located in the hack/ directory. You need to specify the tests you want to run using the --ginkgo.focus option.

./hack/ginkgo-e2e.sh --ginkgo.focus="performance-related test"

  5. Analyse the results: The results of the performance tests are output to the console. You can analyse these results to understand the performance of your Kubernetes cluster.
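The latency summaries printed by the tests can be saved to a file and mined with standard tools. The JSON below is a simplified, hypothetical stand-in for the summary format a real run prints; adjust the file name and pattern to match your actual output.

```shell
# Hedged sketch: extract the 99th-percentile latency of each API call type
# from a saved test summary. The JSON layout here is an illustrative
# simplification, not the exact format emitted by the e2e tests.
cat > /tmp/latency_summary.json <<'EOF'
{"dataItems":[
  {"labels":{"Resource":"pods","Verb":"LIST"},"data":{"Perc50":12.5,"Perc90":40.2,"Perc99":250.0}},
  {"labels":{"Resource":"nodes","Verb":"GET"},"data":{"Perc50":1.1,"Perc90":3.4,"Perc99":9.8}}
]}
EOF

# Print just the Perc99 values, one per API call type.
grep -o '"Perc99":[0-9.]*' /tmp/latency_summary.json | cut -d: -f2
```

Piping the values into sort or awk makes it easy to spot the slowest call types at a glance.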

Understanding the results

Interpreting the results of a Kubemark performance test involves understanding several key metrics. Here are some of the most important ones:

  1. API call latencies: One of the critical metrics that Kubemark provides is the latency of various API calls. This includes the 50th, 90th, and 99th percentile latencies for API calls. Lower latencies are generally better, indicating that the Kubernetes master can handle API requests more quickly.
  2. Pod startup time: The time it takes for a pod to go from being scheduled to running, including pulling the container image and starting the container. Lower pod startup times are generally better.
  3. Resource usage: Kubemark also provides information about the resource usage of the Kubernetes master components. This includes CPU and memory usage. Lower resource usage is generally better because the Kubernetes master can handle more nodes and pods with fewer resources.
  4. Throughput: The number of operations the Kubernetes master can handle per second. Higher throughput is generally better, as it means the master can sustain a heavier operational load.
  5. Errors: Kubemark also reports any errors that occurred during the test. These can include API call failures, pod startup failures, and other errors. Ideally, there should be no errors during the test.
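To make the percentile metrics concrete, here is a toy nearest-rank percentile calculation over ten sample request latencies; a real run aggregates thousands of samples per API verb and resource, but the interpretation of p50/p90/p99 is the same.

```shell
# Toy example: compute p50/p90/p99 latencies (ms) by nearest rank.
# The sample values are invented purely for illustration.
echo "5 7 9 12 15 20 28 40 80 250" | tr ' ' '\n' | sort -n | awk '
  { a[NR] = $1 }
  END {
    n = NR
    # nearest-rank percentile: smallest index r with r >= n*p
    r50 = int(n*0.50); if (r50 < n*0.50) r50++
    r90 = int(n*0.90); if (r90 < n*0.90) r90++
    r99 = int(n*0.99); if (r99 < n*0.99) r99++
    printf "p50=%s p90=%s p99=%s\n", a[r50], a[r90], a[r99]
  }'
```

Note how a single slow outlier dominates the p99 figure while leaving p50 untouched; that is why tail latencies are watched so closely.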

Optimising your cluster

Optimising the performance of your Kubernetes cluster based on Kubemark performance test results involves identifying bottlenecks and making appropriate adjustments. Here are some general strategies:

  1. Optimise API call latencies: If the latencies for specific API calls are high, it could indicate a bottleneck in the Kubernetes master. You might need to scale up the master (use a machine with more CPU/memory), scale out the master (add more master nodes), or optimise the master’s configuration.
  2. Reduce pod startup time: If pod startup time is high, it could be due to slow image pulls or container startup. You might need to optimise your container images (make them smaller, use a more efficient base image), use a faster image registry, or optimise your node’s configuration.
  3. Manage resource usage: If the resource usage of the Kubernetes master components is high, it could indicate that they are under-provisioned. You might need to allocate more resources or optimise their configuration to use resources more efficiently.
  4. Increase throughput: If the throughput is low, it could indicate a bottleneck in the Kubernetes master or the network. You might need to scale up/out the master, optimise the network configuration, or optimise the master’s configuration.
  5. Handle errors: If there are errors during the test, you need to investigate and fix them. The specific steps depend on the nature of the errors.

Scaling up the Kubernetes master

Scaling up the Kubernetes master involves increasing the available resources (CPU, memory, etc.). Here’s a general guide on how to do it:

  1. Identify the master node(s): First, you need to identify the master node(s) in your cluster. You can do this using the kubectl get nodes command and looking for nodes with the control-plane (or, on older clusters, master) role.
  2. Increase the machine type: If you’re using a cloud provider like Google Cloud, AWS, or Azure, you can increase the machine type of the master node(s) to one with more CPU and memory. This is typically done through the cloud provider’s console or CLI. For example, in Google Cloud, you can use the gcloud compute instances set-machine-type command to change the machine type of an instance.
  3. Resize the master node(s): If you’re not using a cloud provider, you can resize the master node(s) by adding more CPU and memory. This is typically done through the hypervisor or operating system. For example, in Linux, you can use the lscpu and free -m commands to check the current CPU and memory and then add more resources as needed.
  4. Update the Kubernetes configuration: After resizing the master node(s), you need to update the Kubernetes configuration to reflect the new resources. This is typically done by editing the kube-apiserver, kube-controller-manager, and kube-scheduler manifests on the master node(s) and increasing the CPU and memory values under resources.requests.
  5. Restart the master components: Finally, you need to restart the master components for the changes to take effect. This is typically done by restarting the kubelet service on the master node(s).
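As a sketch of step 4, a static-pod manifest can have its resource requests raised after the machine is resized. The path and values below are illustrative and vary by installation.

```yaml
# Illustrative excerpt from a kube-apiserver static-pod manifest
# (commonly /etc/kubernetes/manifests/kube-apiserver.yaml).
# Raise the requests to match the resized machine; values are examples only.
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        cpu: "2"       # previously e.g. "250m"
        memory: 8Gi    # previously e.g. 1Gi
```

Because static-pod manifests are watched by the kubelet, saving the file triggers a restart of the component with the new requests.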

Improving master resource usage

Optimising the resource usage of your Kubernetes master involves adjusting its configuration to use resources more efficiently. Here are some general strategies:

  1. Adjust resource requests and limits: Kubernetes allows you to specify resource requests and limits for Pods. You can adjust these values for the Pods running the Kubernetes master components (apiserver, controller-manager, scheduler) to ensure they have enough resources to operate efficiently but not so much that they’re wasting resources.
  2. Enable Horizontal Pod Autoscaling (HPA): HPA automatically scales the number of Pods in a deployment, replica set, or stateful set based on observed CPU utilisation. Note that this only applies where control-plane components run as ordinary workloads (a self-hosted control plane); components running as static pods cannot be scaled this way.
  3. Tune garbage collection: Kubernetes uses garbage collection to clean up unused resources. You can adjust the garbage collection settings to make it more aggressive, which can help reduce resource usage.
  4. Optimise etcd: The etcd database is a critical component of the Kubernetes master. Optimising etcd, such as by adjusting its memory and CPU usage, tuning its configuration, or running it on dedicated hardware, can help reduce the resource usage of the Kubernetes master.
  5. Use Vertical Pod Autoscaler (VPA): VPA automatically adjusts Pods’ CPU and memory requests based on usage. This can be used to change the resource usage of the Kubernetes master components automatically.
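For example, where the controller manager runs as a regular Deployment (a self-hosted control plane), a VPA object might look like the following. This assumes the VPA operator is installed in the cluster, and the object and Deployment names are illustrative.

```yaml
# Hypothetical VerticalPodAutoscaler for a self-hosted controller manager.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: controller-manager-vpa
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kube-controller-manager
  updatePolicy:
    updateMode: "Auto"   # VPA applies its recommendations automatically
```

Setting updateMode to "Off" instead makes VPA purely advisory, which is a safer first step for control-plane components.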

Remember, optimising resource usage is a balance between performance and cost. Constantly monitor your cluster’s performance to ensure your optimisations have the desired effect.

Optimising the Kubernetes network

Optimising the network configuration of your Kubernetes cluster can help improve its performance. Here are some general strategies:

  1. Choose the right network plugin: Kubernetes supports a variety of network plugins, each with its own strengths and weaknesses. Some plugins are optimised for performance, while others are optimised for features or ease of use. Make sure you’re using a network plugin that fits your needs.
  2. Keep network policies simple and sparing: Network policies add overhead to network communications, so write them as simply and efficiently as possible and apply them only where genuinely needed; overuse can degrade performance.
  3. Use Service Topology for traffic shaping: Service Topology is a feature that allows you to control traffic flow based on the cluster’s node topology. This can be used to ensure that traffic stays within certain boundaries, reducing latency.
  4. Enable jumbo frames: If your network supports it, enabling jumbo frames can improve network performance by allowing more data to be sent in each network packet.
  5. Tune kernel parameters: Several kernel parameters, such as the TCP buffer sizes and the connection backlog, can affect network performance. Tuning these parameters can help optimise network performance.
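As an illustration of the kernel-tuning point, a sysctl drop-in file on each node might adjust TCP buffer sizes and connection backlogs. The values below are starting points for experimentation, not recommendations; benchmark before and after applying them.

```
# Illustrative /etc/sysctl.d/90-k8s-net.conf -- example values only
net.core.somaxconn = 4096                 # larger accept backlog for busy services
net.core.netdev_max_backlog = 8192        # queue more packets during bursts
net.ipv4.tcp_rmem = 4096 87380 16777216   # min/default/max receive buffer (bytes)
net.ipv4.tcp_wmem = 4096 65536 16777216   # min/default/max send buffer (bytes)
```

Apply the file with `sudo sysctl --system` and re-run your benchmarks to confirm the change actually helps.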

Kubemark is a performance testing tool designed to simulate large Kubernetes clusters. It provides valuable metrics such as API call latencies, pod startup time, resource usage, throughput, and error rates. These metrics can be used to identify bottlenecks and optimise the performance of a Kubernetes cluster.

Optimisation strategies can include scaling up the Kubernetes master, reducing pod startup time, managing resource usage, increasing throughput, and handling errors. Network configuration and resource usage of the Kubernetes master can also be optimised based on Kubemark test results. For more detailed information and instructions, refer to the Kubemark section in the Kubernetes repository.

Ali Kayani

https://www.linkedin.com/in/ali-kayani-silvercoder007/
