Using Admission Controllers

What are they?

An admission control plug-in is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. The plug-in code is in the API server process and must be compiled into the binary in order to be used at this time.

Each admission control plug-in is run in sequence before a request is accepted into the cluster. If any of the plug-ins in the sequence reject the request, the entire request is rejected immediately and an error is returned to the end-user.

Admission control plug-ins may mutate the incoming object in some cases to apply system configured defaults. In addition, admission control plug-ins may mutate related resources as part of request processing to do things like increment quota usage.

Why do I need them?

Many advanced features in Kubernetes require an admission control plug-in to be enabled in order to properly support the feature. As a result, a Kubernetes API server that is not properly configured with the right set of admission control plug-ins is an incomplete server and will not support all the features you expect.

How do I turn on an admission control plug-in?

The Kubernetes API server supports a flag, admission-control, that takes a comma-delimited, ordered list of admission control plug-ins to invoke prior to modifying objects in the cluster.
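
For example (the plug-in list here is illustrative; see the recommended sets at the end of this document), the API server might be started with:

kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota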

What does each plug-in do?

AlwaysAdmit

Use this plug-in by itself to pass through all requests without applying any checks.

AlwaysPullImages

This plug-in modifies every new Pod to force the image pull policy to Always. This is useful in a multitenant cluster so that users can be assured that their private images can only be used by those who have the credentials to pull them. Without this plug-in, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image’s name (assuming the Pod is scheduled onto the right node), without any authorization check against the image. When this plug-in is enabled, images are always pulled prior to starting containers, which means valid credentials are required.
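
For illustration, with this plug-in enabled every container in a newly admitted Pod has its image pull policy forced to Always, roughly as if it had been declared like this (the container and image names are made up):

containers:
- name: app
  image: myrepo/private-image:v1   # illustrative image
  imagePullPolicy: Always          # forced by AlwaysPullImages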

AlwaysDeny

Rejects all requests. Used for testing.

DefaultStorageClass

This plug-in observes creation of PersistentVolumeClaim objects that do not request any specific storage class and automatically adds the default storage class to them. This way, users that do not request any special storage class do not need to care about storage classes at all; they simply get the default one.

This plug-in does nothing when no default storage class is configured. When more than one storage class is marked as default, it rejects creation of any PersistentVolumeClaim with an error, and the administrator must revisit the StorageClass objects and mark only one as default. This plug-in ignores any PersistentVolumeClaim updates; it acts only on creation.

See persistent volume documentation about persistent volume claims and storage classes and how to mark a storage class as default.
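
As a sketch of how a default is designated (the class name and provisioner are illustrative), an administrator marks a StorageClass as default with the is-default-class annotation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                   # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd  # illustrative provisioner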

DefaultTolerationSeconds

This plug-in sets the default forgiveness toleration for pods, so that they tolerate the taints notready:NoExecute and unreachable:NoExecute for 5 minutes if they do not already have a toleration for the taints node.kubernetes.io/not-ready:NoExecute or node.alpha.kubernetes.io/unreachable:NoExecute.
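
The added tolerations are roughly equivalent to the following snippet in the pod spec (a sketch; 300 seconds corresponds to the 5 minute default, and the Exists operator is assumed):

tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.alpha.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300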

DenyExecOnPrivileged (deprecated)

This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container.

If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec commands in those containers, we strongly encourage enabling this plug-in.

This functionality has been merged into DenyEscalatingExec.

DenyEscalatingExec

This plug-in will deny exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace.
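
For example, exec and attach requests would be denied for a pod whose spec escalates privileges in any of the following ways (all three are shown together only to illustrate which settings trigger the plug-in; names are made up):

spec:
  hostIPC: true        # access to the host IPC namespace
  hostPID: true        # access to the host PID namespace
  containers:
  - name: app
    image: myrepo/myimage:v1
    securityContext:
      privileged: true # runs as privileged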

If your cluster supports containers that run with escalated privileges, and you want to restrict the ability of end-users to exec commands in those containers, we strongly encourage enabling this plug-in.

EventRateLimit (alpha)

This plug-in is introduced in v1.9 to mitigate the problem where the API server gets flooded by event requests. The cluster admin can specify event rate limits by enabling the EventRateLimit admission plug-in and referencing an EventRateLimit configuration file from the file provided to the API server's --admission-control-config-file flag.

There are four types of limits that can be specified in the configuration: Server, Namespace, User, and SourceAndObject.

Below is a sample snippet for such a configuration:

EventRateLimit:
  limits:
  - type: Namespace
    qps: 50
    burst: 100
    cacheSize: 2000
  - type: User
    qps: 10
    burst: 50

See the EventRateLimit proposal for more details.

GenericAdmissionWebhook (alpha)

This plug-in is related to the Dynamic Admission Control introduced in v1.7. The plug-in calls the webhooks configured via ExternalAdmissionHookConfiguration, and only admits the operation if all the webhooks admit it. Currently, the plug-in always fails open. In other words, it ignores the failed calls to a webhook.
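
As a rough sketch of the registration object this plug-in consumes (the hook name, service, and rules are illustrative, and the exact alpha schema may differ), an ExternalAdmissionHookConfiguration looks approximately like:

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ExternalAdmissionHookConfiguration
metadata:
  name: example-config              # illustrative name
externalAdmissionHooks:
- name: pod-policy.example.com      # illustrative hook name
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  failurePolicy: Ignore             # fail open, matching the behavior described above
  clientConfig:
    caBundle: <pem encoded ca cert>
    service:
      name: webhook-service         # illustrative service
      namespace: default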

ImagePolicyWebhook

The ImagePolicyWebhook plug-in allows a backend webhook to make admission decisions. You enable this plug-in by setting the admission-control option as follows:

--admission-control=ImagePolicyWebhook

Configuration File Format

ImagePolicyWebhook uses the admission config file --admission-control-config-file to set configuration options for the behavior of the backend. This file may be json or yaml and has the following format:

{
  "imagePolicy": {
     "kubeConfigFile": "path/to/kubeconfig/for/backend",
     "allowTTL": 50,           // time in s to cache approval
     "denyTTL": 50,            // time in s to cache denial
     "retryBackoff": 500,      // time in ms to wait between retries
     "defaultAllow": true      // determines behavior if the webhook backend fails
  }
}

The config file must reference a kubeconfig formatted file which sets up the connection to the backend. It is required that the backend communicate over TLS.

The kubeconfig file’s cluster field must point to the remote service, and the user field must contain the returned authorizer.

# clusters refers to the remote service.
clusters:
- name: name-of-remote-imagepolicy-service
  cluster:
    certificate-authority: /path/to/ca.pem    # CA for verifying the remote service.
    server: https://images.example.com/policy # URL of remote service to query. Must use 'https'.

# users refers to the API server's webhook configuration.
users:
- name: name-of-api-server
  user:
    client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
    client-key: /path/to/key.pem          # key matching the cert

For additional HTTP configuration, refer to the kubeconfig documentation.

Request Payloads

When faced with an admission decision, the API Server POSTs a JSON serialized api.imagepolicy.v1alpha1.ImageReview object describing the action. This object contains fields describing the containers being admitted, as well as any pod annotations that match *.image-policy.k8s.io/*.

Note that webhook API objects are subject to the same versioning compatibility rules as other Kubernetes API objects. Implementers should be aware of looser compatibility promises for alpha objects and check the “apiVersion” field of the request to ensure correct deserialization. Additionally, the API Server must enable the imagepolicy.k8s.io/v1alpha1 API extensions group (--runtime-config=imagepolicy.k8s.io/v1alpha1=true).

An example request body:

{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "spec": {
    "containers": [
      {
        "image": "myrepo/myimage:v1"
      },
      {
        "image": "myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed"
      }
    ],
    "annotations": {
      "mycluster.image-policy.k8s.io/ticket-1234": "break-glass"
    },
    "namespace": "mynamespace"
  }
}

The remote service is expected to fill the ImageReviewStatus field of the request and respond, either allowing or disallowing access. The response body’s “spec” field is ignored and may be omitted. A permissive response would return:

{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "status": {
    "allowed": true
  }
}

To disallow access, the service would return:

{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "status": {
    "allowed": false,
    "reason": "image currently blacklisted"
  }
}

For further documentation refer to the imagepolicy.v1alpha1 API objects and plugin/pkg/admission/imagepolicy/admission.go.

Extending with Annotations

All annotations on a Pod that match *.image-policy.k8s.io/* are sent to the webhook. Sending annotations allows users who are aware of the image policy backend to send extra information to it, and allows different backend implementations to accept different information.

Examples of information you might put here are a request to “break glass” to override a policy in case of emergency, a ticket number from a ticket system that documents the break-glass request, or a hint to the policy server about the imageID of the image being provided, to save it a lookup.

In any case, the annotations are provided by the user and are not validated by Kubernetes in any way. In the future, if an annotation is determined to be widely useful, it may be promoted to a named field of ImageReviewSpec.
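
For illustration, reusing the annotation from the request example above, a user could attach such information to a Pod like this (the pod and container names are made up):

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespace
  annotations:
    mycluster.image-policy.k8s.io/ticket-1234: break-glass
spec:
  containers:
  - name: app
    image: myrepo/myimage:v1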

Initializers (alpha)

This plug-in is introduced in v1.7. The plug-in determines the initializers of a resource based on the existing InitializerConfigurations. It sets the pending initializers by modifying the metadata of the resource to be created. For more information, please check Dynamic Admission Control.
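
As a rough sketch (the initializer name and rules are illustrative, and the alpha schema may differ), an InitializerConfiguration that this plug-in consults looks approximately like:

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-initializer        # illustrative name
initializers:
- name: podimage.example.com       # illustrative initializer
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]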

InitialResources (experimental)

This plug-in observes pod creation requests. If a container omits compute resource requests and limits, then the plug-in auto-populates a compute resource request based on historical usage of containers running the same image. If there is not enough data to make a decision, the request is left unchanged. When the plug-in sets a compute resource request, it does this by annotating the pod spec rather than mutating the container.resources fields. The annotations added contain the information on what compute resources were auto-populated.

See the InitialResources proposal for more details.

LimitPodHardAntiAffinityTopology

This plug-in denies any pod that defines an AntiAffinity topology key other than kubernetes.io/hostname in requiredDuringSchedulingRequiredDuringExecution.

LimitRanger

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the LimitRange object in a Namespace. If you are using LimitRange objects in your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also be used to apply default resource requests to Pods that don’t specify any; currently, the default LimitRanger applies a 0.1 CPU requirement to all Pods in the default namespace.
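
For illustration (the name and values are made up), a LimitRange like the following would cause LimitRanger to apply default CPU requests and limits to containers created in its namespace that omit them:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults        # illustrative name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m             # default request applied when none is specified
    default:
      cpu: 200m             # default limit applied when none is specified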

See the limitRange design doc and the example of Limit Range for more details.

NamespaceAutoProvision

This plug-in examines all incoming requests on namespaced resources and checks whether the referenced namespace exists. It creates the namespace if it cannot be found. This plug-in is useful in deployments that do not want to require creation of a namespace prior to its usage.

NamespaceExists

This plug-in checks all requests on namespaced resources other than Namespace itself. If the namespace referenced from a request doesn’t exist, the request is rejected.

NamespaceLifecycle

This plug-in enforces that a Namespace that is undergoing termination cannot have new objects created in it, and ensures that requests in a non-existent Namespace are rejected. This plug-in also prevents deletion of the three system-reserved namespaces: default, kube-system, and kube-public.

A Namespace deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.

NodeRestriction

This plug-in limits the Node and Pod objects a kubelet can modify. In order to be limited by this admission plugin, kubelets must use credentials in the system:nodes group, with a username in the form system:node:<nodeName>. Such kubelets will only be allowed to modify their own Node API object, and only modify Pod API objects that are bound to their node. Future versions may add additional restrictions to ensure kubelets have the minimal set of permissions required to operate correctly.

OwnerReferencesPermissionEnforcement

This plug-in protects the access to the metadata.ownerReferences of an object so that only users with “delete” permission to the object can change it. This plug-in also protects the access to metadata.ownerReferences[x].blockOwnerDeletion of an object, so that only users with “update” permission to the finalizers subresource of the referenced owner can change it.

PersistentVolumeLabel

This plug-in automatically attaches region or zone labels to PersistentVolumes as defined by the cloud provider, e.g. GCE and AWS. It helps ensure that Pods and the PersistentVolumes they mount are in the same region and/or zone. If the plug-in doesn’t support automatic labelling of your PersistentVolumes, you may need to add the labels manually to prevent pods from mounting volumes from a different zone.
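
For example, a labelled PersistentVolume ends up with metadata roughly like the following (a sketch assuming the beta failure-domain label keys; the values are illustrative):

metadata:
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a
    failure-domain.beta.kubernetes.io/region: us-central1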

PodNodeSelector

This plug-in defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration.

Configuration File Format

PodNodeSelector uses the admission config file --admission-control-config-file to set configuration options for the behavior of the backend.

Note that the configuration file format will move to a versioned file in a future release.

This file may be json or yaml and has the following format:

podNodeSelectorPluginConfig:
 clusterDefaultNodeSelector: <node-selectors-labels>
 namespace1: <node-selectors-labels>
 namespace2: <node-selectors-labels>

Configuration Annotation Format

PodNodeSelector uses the annotation key scheduler.alpha.kubernetes.io/node-selector to assign node selectors to namespaces.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: <node-selectors-labels>
  name: namespace3

PersistentVolumeClaimResize

This plug-in implements additional validations for checking incoming PersistentVolumeClaim resize requests. Note: Support for volume resizing is available as an alpha feature. Admins must set the feature gate ExpandPersistentVolumes to true to enable resizing.

After enabling the ExpandPersistentVolumes feature gate, enabling the PersistentVolumeClaimResize admission plug-in is recommended, too. This plug-in prevents resizing of all claims by default unless a claim’s StorageClass explicitly enables resizing by setting allowVolumeExpansion to true.
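
A sketch of the relevant API server flags (the plug-in list is illustrative and other flags are omitted):

kube-apiserver --feature-gates=ExpandPersistentVolumes=true --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,PersistentVolumeClaimResize,ResourceQuota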

For example: all PersistentVolumeClaims created from the following StorageClass support volume expansion:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-vol-default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.10.100:8080"
  restuser: ""
  secretNamespace: ""
  secretName: ""
allowVolumeExpansion: true

For more information about persistent volume claims, see “PersistentVolumeClaims”.

PodPreset

This plug-in injects a pod with the fields specified in a matching PodPreset. See also PodPreset concept and Inject Information into Pods Using a PodPreset for more information.
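
As a brief sketch (the name, selector, and environment variable are illustrative), a PodPreset that injects an environment variable into matching pods looks roughly like:

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database      # illustrative name
spec:
  selector:
    matchLabels:
      role: frontend        # pods with this label get the injected fields
  env:
  - name: DB_PORT
    value: "6379"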

PodSecurityPolicy

This plug-in acts on creation and modification of the pod and determines if it should be admitted based on the requested security context and the available Pod Security Policies.

For Kubernetes < 1.6.0, the API Server must enable the extensions/v1beta1/podsecuritypolicy API extensions group (--runtime-config=extensions/v1beta1/podsecuritypolicy=true).
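
As a minimal sketch (the policy name is illustrative; consult the Pod Security Policy documentation for the authoritative schema), a permissive policy that still disallows privileged containers might look like:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-privileged  # illustrative name
spec:
  privileged: false          # disallow privileged containers
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'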

See also Pod Security Policy documentation for more information.

PodTolerationRestriction

This plug-in first verifies any conflict between a pod’s tolerations and its namespace’s tolerations, and rejects the pod request if there is a conflict. It then merges the namespace’s tolerations into the pod’s tolerations. The resulting tolerations are checked against the namespace’s whitelist of tolerations. If the check succeeds, the pod request is admitted; otherwise it is rejected.

If the pod’s namespace does not have any associated default or whitelist of tolerations, then the cluster-level default or whitelist of tolerations are used instead if specified.

Tolerations are assigned to a namespace via the scheduler.alpha.kubernetes.io/defaultTolerations and scheduler.alpha.kubernetes.io/tolerationsWhitelist annotation keys.
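
A sketch of such a namespace, assuming the annotation values are JSON-encoded lists of tolerations (the namespace name and the toleration keys and values are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # illustrative namespace
  annotations:
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"key": "dedicated", "operator": "Equal", "value": "team-a", "effect": "NoSchedule"}]'
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"key": "dedicated", "operator": "Equal", "value": "team-a", "effect": "NoSchedule"}]'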

Priority

This plug-in resolves a Pod’s priorityClassName field and populates the integer value of the priority. If the priority class is not found, the Pod is rejected.
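
For illustration (the name and value are made up, and the priority API was alpha at this time), a Pod referencing priorityClassName: high-priority would be resolved against a PriorityClass such as:

apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: high-priority        # illustrative name
value: 1000000               # integer priority populated into matching Pods
globalDefault: false
description: "Use for important workloads only."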

ResourceQuota

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ResourceQuota object in a Namespace. If you are using ResourceQuota objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
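
For illustration (the name, namespace, and amounts are made up), a ResourceQuota that this plug-in would enforce might look like:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # illustrative name
  namespace: myspace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi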

See the resourceQuota design doc and the example of Resource Quota for more details.

It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is so that quota is not prematurely incremented only for the request to be rejected later in admission control.

SecurityContextDeny

This plug-in will deny any pod that attempts to set certain escalating SecurityContext fields. This should be enabled if a cluster doesn’t utilize pod security policies to restrict the set of values a security context can take.

ServiceAccount

This plug-in implements automation for serviceAccounts. We strongly recommend using this plug-in if you intend to make use of Kubernetes ServiceAccount objects.

Is there a recommended set of plug-ins to use?

Yes. For Kubernetes >= 1.6.0, we strongly recommend running the following set of admission control plug-ins (order matters):

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds

For Kubernetes >= 1.4.0, we strongly recommend running the following set of admission control plug-ins (order matters):

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota

For Kubernetes >= 1.2.0, we strongly recommend running the following set of admission control plug-ins (order matters):

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota

For Kubernetes >= 1.0.0, we strongly recommend running the following set of admission control plug-ins (order matters):

--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,PersistentVolumeLabel,ResourceQuota
