This page shows how to configure and deploy CoreDNS to be used as the DNS provider for Cluster Federation.
The ability to create LoadBalancer services in member clusters of the federation is mandatory to enable CoreDNS for service discovery across federated clusters.
CoreDNS can be deployed in various configurations. Explained below is a reference configuration, which can be tweaked to suit the needs of the platform and the cluster federation.
To deploy CoreDNS, we shall make use of Helm charts. CoreDNS will be deployed with etcd as the backend, and etcd should be pre-installed. etcd can also be deployed using Helm charts. Shown below are the instructions to deploy etcd.
helm install --namespace my-namespace --name etcd-operator stable/etcd-operator
helm upgrade --namespace my-namespace --set cluster.enabled=true etcd-operator stable/etcd-operator
Note: The default etcd deployment configuration can be overridden to suit the host cluster.
After deployment succeeds, etcd can be accessed with the http://etcd-cluster.my-namespace:2379 endpoint within the host cluster.
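To confirm that etcd is reachable at that endpoint, one option is to query its health endpoint from a temporary pod inside the host cluster. This is only a quick sketch; the curlimages/curl image stands in for any image that ships curl.
kubectl run --rm -i etcd-health-check --image=curlimages/curl --restart=Never -- curl -s http://etcd-cluster.my-namespace:2379/health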
The CoreDNS default configuration should be customized to suit the federation. Shown below is the Values.yaml, which overrides the default configuration parameters on the CoreDNS chart.
Values.yaml
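A minimal Values.yaml consistent with the parameters explained below could look like the following sketch. The zone example.com. and the etcd endpoint are the ones used elsewhere on this page; adjust them for your federation.
isClusterService: false
serviceType: "LoadBalancer"
middleware:
  kubernetes:
    # disable the kubernetes middleware, which is on by default
    enabled: false
  etcd:
    # enable the etcd middleware and point it at the etcd cluster deployed above
    enabled: true
    zones:
    - "example.com."
    endpoint: "http://etcd-cluster.my-namespace:2379"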
The above configuration file needs some explanation:
- isClusterService specifies whether CoreDNS should be deployed as a cluster-service, which is the default. You need to set it to false, so that CoreDNS is deployed as a Kubernetes application service.
- serviceType specifies the type of Kubernetes service to be created for CoreDNS. You need to choose either “LoadBalancer” or “NodePort” to make the CoreDNS service accessible outside the Kubernetes cluster.
- Disable middleware.kubernetes, which is enabled by default, by setting middleware.kubernetes.enabled to false.
- Enable middleware.etcd by setting middleware.etcd.enabled to true.
- Configure the DNS zone (federation domain) for which CoreDNS is authoritative by setting middleware.etcd.zones as shown above.
- Set middleware.etcd.endpoint to the endpoint of the etcd cluster deployed earlier.
Now deploy CoreDNS by running
helm install --namespace my-namespace --name coredns -f Values.yaml stable/coredns
Verify that both etcd and CoreDNS pods are running as expected.
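For example, a quick way to check is to list the pods and services in the namespace used above; the exact resource names depend on the chart release names chosen earlier.
kubectl get pods,svc --namespace my-namespace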
The Federation control plane can be deployed using kubefed init. CoreDNS can be chosen as the DNS provider by specifying two additional parameters:
--dns-provider=coredns
--dns-provider-config=coredns-provider.conf
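For example, a kubefed init invocation with these parameters might look like the following. The federation name fellowship, the host cluster context rivendell, and the zone example.com. are placeholders; adjust them for your setup.
kubefed init fellowship \
    --host-cluster-context=rivendell \
    --dns-provider=coredns \
    --dns-zone-name="example.com." \
    --dns-provider-config=coredns-provider.conf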
coredns-provider.conf has the following format:
[Global]
etcd-endpoints = http://etcd-cluster.my-namespace:2379
zones = example.com.
coredns-endpoints = <coredns-server-ip>:<port>
- etcd-endpoints is the endpoint to access etcd.
- zones is the federation domain for which CoreDNS is authoritative and is the same as the --dns-zone-name flag of kubefed init.
- coredns-endpoints is the endpoint to access the CoreDNS server. This is an optional parameter introduced from v1.7 onwards.
Note: middleware.etcd.zones in the CoreDNS configuration and the --dns-zone-name flag to kubefed init should match.
Note: The following section applies only to versions prior to v1.7 and will be automatically taken care of if the coredns-endpoints parameter is configured in coredns-provider.conf as described in the section above.
Once the federation control plane is deployed and federated clusters are joined to the federation, you need to add the CoreDNS server to the pod’s nameserver resolv.conf chain in all the federated clusters, because this self-hosted CoreDNS server is not discoverable publicly. This can be achieved by adding the line below to the dnsmasq container’s args in the kube-dns deployment.
--server=/example.com./<CoreDNS endpoint>
Replace example.com above with the federation domain.
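This can be done, for instance, with kubectl edit in each federated cluster; for illustration only, the relevant fragment of the dnsmasq container might then look like the excerpt below. The existing arguments (which vary across Kubernetes versions) stay unchanged, and 10.20.30.40#53 is a placeholder for the actual CoreDNS endpoint, with dnsmasq separating IP and port by #.
kubectl edit deployment kube-dns --namespace kube-system
      - name: dnsmasq
        args:
        # ... existing dnsmasq arguments remain unchanged ...
        - --server=/example.com./10.20.30.40#53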
Now the federated cluster is ready for cross-cluster service discovery!
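As a quick check, a federated service DNS name can be resolved directly against the CoreDNS server. Here nginx, mynamespace, and myfederation are placeholder service, namespace, and federation names; replace <coredns-server-ip> with the CoreDNS endpoint.
nslookup nginx.mynamespace.myfederation.svc.example.com. <coredns-server-ip>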