Core metrics pipeline

Starting from Kubernetes 1.8, resource usage metrics, such as container CPU and memory usage, are available in Kubernetes through the Metrics API. These metrics can either be accessed directly by the user, for example with the kubectl top command, or used by a controller in the cluster, such as the Horizontal Pod Autoscaler, to make decisions.
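
For example, once the pipeline is running you can query current usage from the command line. The namespace below is only an illustration:

```shell
# Show current CPU and memory usage for every node in the cluster.
kubectl top nodes

# Show current usage for pods in a specific namespace
# (kube-system is just an example).
kubectl top pods --namespace kube-system
```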

The Metrics API

Through the Metrics API you can get the amount of resources currently used by a given node or a given pod. This API does not store the metric values, so it is not possible, for example, to get the amount of resources used by a given node 10 minutes ago.

The API is no different from any other Kubernetes API: it is defined in the k8s.io/metrics repository, where you can find more information about it.
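
A quick way to see what the API returns is to query it directly through the API server. The v1beta1 version used below is an assumption; check which version your cluster actually serves first:

```shell
# List the versions of the metrics.k8s.io group that are being served.
kubectl api-versions | grep metrics.k8s.io

# Fetch the current resource usage of all nodes
# (assumes the v1beta1 version of the metrics API is available).
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# Fetch the current usage of a single pod in the default namespace
# (replace my-pod with a real pod name).
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/my-pod"
```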

Note: The API requires the metrics server to be deployed in the cluster. Otherwise it will not be available.

Metrics Server

Metrics Server is a cluster-wide aggregator of resource usage data. Starting from Kubernetes 1.8, it is deployed by default as a Deployment object in clusters created by the kube-up.sh script. If you use a different Kubernetes setup mechanism, you can deploy it using the provided deployment YAMLs. It is supported in Kubernetes 1.7+ (see details below).
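
If your cluster was not created with kube-up.sh, a typical approach is to apply the manifests shipped with the metrics-server project. The repository location, directory name, and object names below reflect the kubernetes-incubator project at the time of writing and are assumptions; verify them against the repository you actually use:

```shell
# Clone the metrics-server repository (location is an assumption; verify before use).
git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server

# Apply the provided deployment manifests for Kubernetes 1.8 and later
# (directory name is an assumption based on the repository layout at the time).
kubectl create -f deploy/1.8+/

# Verify that the Deployment is running; the name and namespace assume the
# defaults used by the provided manifests.
kubectl get deployment metrics-server --namespace kube-system
```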

Metrics Server collects metrics from the Summary API, exposed by the Kubelet on each node.
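
To inspect the raw data that Metrics Server consumes, you can ask the API server to proxy a request to a node's Kubelet Summary API. This is only a way to look at the source data; the pipeline does not require you to do it, and it assumes the API server is allowed to reach the Kubelet:

```shell
# Pick a node name from the cluster.
kubectl get nodes

# Read the Kubelet's Summary API for that node via the API server proxy
# (replace <node-name> with one of the names listed above).
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"
```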

Metrics Server is registered in the main API server through the Kubernetes aggregator, which was introduced in Kubernetes 1.7.
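
Registration through the aggregator means the metrics API shows up as an APIService object. A quick sanity check, assuming the commonly used v1beta1.metrics.k8s.io name:

```shell
# List all aggregated APIs and look for the metrics group.
kubectl get apiservices | grep metrics

# Inspect the registration created by Metrics Server
# (the object name assumes the v1beta1 version of the metrics.k8s.io group).
kubectl describe apiservice v1beta1.metrics.k8s.io
```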

Learn more about the metrics server in the design doc.
