Kubernetes version 1.5 introduces support for Windows Server Containers. In version 1.5, the Kubernetes control plane (API Server, Scheduler, Controller Manager, etc.) continues to run on Linux, while the kubelet and kube-proxy can run on Windows Server.
Note: Windows Server Containers on Kubernetes is an Alpha feature in Kubernetes 1.5.
In Kubernetes version 1.5, Windows Server Containers for Kubernetes are supported using the networking approach described below.

Networking is achieved using L3 routing. Because third-party networking plugins (e.g. flannel, calico) don't natively work on Windows Server, the setup relies on routing features built into the Windows and Linux operating systems. In this L3 networking approach, a /16 subnet is chosen for the cluster nodes, and a /24 subnet is assigned to each worker node. All pods on a given worker node are connected to that node's /24 subnet, which allows pods on the same node to communicate with each other. To enable networking between pods running on different nodes, routing features built into Windows Server 2016 and Linux are used.
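As a concrete illustration of this addressing scheme, each node's /24 pod subnet can be carved out of the cluster /16. This is only a sketch; the node names and the three-node count are illustrative, matching the example cluster used later in this guide:

```shell
# Sketch: derive one /24 pod subnet per node from the cluster /16.
CLUSTER_CIDR="192.168.0.0/16"
base="${CLUSTER_CIDR%.*.*}"        # keep the first two octets: "192.168"

nodes=(Lin01 Win01 Win02)
subnets=()
for i in "${!nodes[@]}"; do
  subnets+=("${base}.${i}.0/24")   # node i gets 192.168.<i>.0/24
  echo "${nodes[$i]} -> pod CIDR ${base}.${i}.0/24"
done
```

Any scheme that assigns each node a distinct /24 inside the cluster /16 works; the key point is that the per-node subnets must not overlap.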
The above networking approach is already supported on Linux using a bridge interface, which essentially creates a private network local to the node. As on the Windows nodes, routes to all other pod CIDRs must be created in order to send packets via the "public" NIC.
Each Windows Server node should have the following configuration:
The following diagram illustrates the Windows Server networking setup for Kubernetes:
To run Windows Server Containers on Kubernetes, you'll need to set up both your host machines and the Kubernetes node components for Windows, and set up routes for pod communication across nodes.
Windows Host Setup
- Pull the `apprenda/pause` image from https://hub.docker.com/r/apprenda/pause.
- Create a VMSwitch of type `Internal` by running `New-VMSwitch -Name KubeProxySwitch -SwitchType Internal` in a PowerShell window. This creates a new network interface named `vEthernet (KubeProxySwitch)`, which kube-proxy will use to add Service IPs.

Linux Host Setup
Requirements
kubelet
To build the kubelet, run:
```
cd $GOPATH/src/k8s.io/kubernetes
# Cross-compile the kubelet for Windows via the Kubernetes build system:
KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kubelet
# Or build directly with the Go toolchain (set GOOS=windows if building on Linux):
go build cmd/kubelet/kubelet.go
```
kube-proxy
To build kube-proxy, run:
```
cd $GOPATH/src/k8s.io/kubernetes
# Cross-compile kube-proxy for Windows via the Kubernetes build system:
KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kube-proxy
# Or build directly with the Go toolchain (set GOOS=windows if building on Linux):
go build cmd/kube-proxy/proxy.go
```
The example setup below assumes one Linux node and two Windows Server 2016 nodes, with a cluster CIDR of 192.168.0.0/16.
Hostname | Routable IP address | Pod CIDR |
---|---|---|
Lin01 | `<IP of Lin01 host>` | 192.168.0.0/24 |
Win01 | `<IP of Win01 host>` | 192.168.1.0/24 |
Win02 | `<IP of Win02 host>` | 192.168.2.0/24 |
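The routing rule implied by this table is that every node needs a static route to every *other* node's pod CIDR, with that node's routable host IP as the next hop. The following sketch simply prints the required routes; the hostnames and `<IP of …>` placeholders are taken from the table above:

```shell
# Sketch: print, for each node, the routes it needs to the other
# nodes' pod CIDRs. Values come from the example table above.
declare -A pod_cidr=(
  [Lin01]="192.168.0.0/24"
  [Win01]="192.168.1.0/24"
  [Win02]="192.168.2.0/24"
)
declare -A host_ip=(
  [Lin01]="<IP of Lin01 host>"
  [Win01]="<IP of Win01 host>"
  [Win02]="<IP of Win02 host>"
)

n=0
for node in Lin01 Win01 Win02; do
  for peer in Lin01 Win01 Win02; do
    [ "$node" = "$peer" ] && continue   # a node needs no route to its own pod subnet
    line="$node: route to ${pod_cidr[$peer]} via ${host_ip[$peer]}"
    echo "$line"
    n=$((n+1))
  done
done
```

With three nodes this yields six routes (two per node); the concrete `ip route` and `route add` commands in the sections below apply exactly these routes on each host.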
Lin01

```
ip route add 192.168.1.0/24 via <IP of Win01 host>
ip route add 192.168.2.0/24 via <IP of Win02 host>
```
Win01

```
docker network create -d transparent --gateway 192.168.1.1 --subnet 192.168.1.0/24 <network name>
# A bridge is created with adapter name "vEthernet (HNSTransparent)". Set its IP address to the transparent network gateway
netsh interface ipv4 set address "vEthernet (HNSTransparent)" addr=192.168.1.1
# Add persistent routes to the other nodes' pod CIDRs via their routable host IPs
route add 192.168.0.0 mask 255.255.255.0 <IP of Lin01 host> if <Interface Id of the Routable Ethernet Adapter> -p
route add 192.168.2.0 mask 255.255.255.0 <IP of Win02 host> if <Interface Id of the Routable Ethernet Adapter> -p
```
Win02

```
docker network create -d transparent --gateway 192.168.2.1 --subnet 192.168.2.0/24 <network name>
# A bridge is created with adapter name "vEthernet (HNSTransparent)". Set its IP address to the transparent network gateway
netsh interface ipv4 set address "vEthernet (HNSTransparent)" addr=192.168.2.1
# Add persistent routes to the other nodes' pod CIDRs via their routable host IPs
route add 192.168.0.0 mask 255.255.255.0 <IP of Lin01 host> if <Interface Id of the Routable Ethernet Adapter> -p
route add 192.168.1.0 mask 255.255.255.0 <IP of Win01 host> if <Interface Id of the Routable Ethernet Adapter> -p
```
To start your cluster, you’ll need to start both the Linux-based Kubernetes control plane, and the Windows Server-based Kubernetes node components.
Use your preferred method to start the Kubernetes cluster on Linux. Note that the cluster CIDR might need to be updated to match the subnets chosen above (192.168.0.0/16 in this example).
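For example, if the control plane is started by hand, the cluster CIDR is typically passed to the controller manager. This is a fragment only, not a complete invocation; all other required flags are omitted:

```shell
# Fragment: match the cluster CIDR to the /16 chosen for the example cluster.
kube-controller-manager --cluster-cidr=192.168.0.0/16 ...
```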
To start the kubelet on your Windows node, run the following in a PowerShell window. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart the kubelet.

Set the environment variable CONTAINER_NETWORK to the Docker container network to use:

```
$env:CONTAINER_NETWORK = "<docker network>"
```

Run the kubelet executable:

```
kubelet.exe --hostname-override=<ip address/hostname of the windows node> --pod-infra-container-image="apprenda/pause" --resolv-conf="" --api_servers=<api server location>
```
To start kube-proxy on your Windows node, run the following in a PowerShell window with administrative privileges. Be aware that if the node reboots or the process exits, you will have to rerun the commands below to restart kube-proxy.

Set the environment variable INTERFACE_TO_ADD_SERVICE_IP to `vEthernet (KubeProxySwitch)`, the interface created in the Windows Host Setup above:

```
$env:INTERFACE_TO_ADD_SERVICE_IP = "vEthernet (KubeProxySwitch)"
```

Run the kube-proxy executable:

```
.\proxy.exe --v=3 --proxy-mode=userspace --hostname-override=<ip address/hostname of the windows node> --master=<api server location> --bind-address=<ip address of the windows node>
```
Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule pods to Windows nodes: set nodeSelector with the label beta.kubernetes.io/os to the value windows, as in the following example:
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "iis",
    "labels": {
      "name": "iis"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "iis",
        "image": "microsoft/iis",
        "ports": [
          {
            "containerPort": 80
          }
        ]
      }
    ],
    "nodeSelector": {
      "beta.kubernetes.io/os": "windows"
    }
  }
}
```
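Assuming the manifest above is saved as iis.json (the filename is illustrative), the pod can be created and its placement verified against a running cluster with:

```shell
# Create the pod and check which node it was scheduled on.
kubectl create -f iis.json
kubectl get pods -o wide
```

The `-o wide` output includes the node column, which should show one of the Windows nodes if the nodeSelector was honored.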
The kube-proxy implementation uses netsh portproxy, which supports only TCP. As a result, DNS currently works only if the client retries the DNS query using TCP.