How to deploy Kubernetes Dashboard?

2022-02-21 | By Artur Spychalla | Code

If you are working with K8s, you have probably faced the problem of monitoring the entire cluster: pods, services, and so on. There are a lot of resources to keep an eye on. Kubernetes Dashboard is the official web-based UI with which you can troubleshoot your containers and take a look at the applications running on your cluster.


Moreover, using Dashboard you can easily restart a pod, scale a deployment or deploy a new application using the deploy wizard - and it can also be deployed on cloud-based clusters: AWS (EKS), Azure (AKS), GCP (GKE).

Dashboard has a lot of information about your whole cluster.


A quick rundown of Dashboard categories

Workloads

In this section, you can see all applications running in the selected namespace. This view lists applications by workload kind (for example: Deployments, ReplicaSets, StatefulSets). Those lists summarize information about workloads, such as the number of ready pods for a ReplicaSet, or current memory usage for a Pod. Additionally, detailed views for workloads show status, specification information and relationships between objects (e.g. pods controlled by ReplicaSet or HorizontalPodAutoscaler for Deployments).

Service

Services and Ingresses show the pods they target, internal endpoints for in-cluster connections, and external endpoints for external users.

Config and Storage

The Config section lists resources used for application configuration, such as ConfigMaps and Secrets, while the Storage section shows PersistentVolumeClaim resources which are used by applications for storing data.

Cluster

This section describes cluster-level resources - roles and policies, namespaces, nodes, persistent volumes and more.

How to deploy Kubernetes Dashboard UI?

While this might be a surprise, the Kubernetes Dashboard is not deployed by default. To deploy Dashboard, first ensure that you have installed kubectl on your machine and configured it to work with your Kubernetes cluster. kubectl is a command-line tool that allows you to manage Kubernetes objects and interact with the cluster's inner workings.


For this article, we will use minikube - a miniature, yet very capable version of a Kubernetes cluster, which you can install and run locally on your macOS, Windows or Linux machine.
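If you don't have a local cluster running yet, spinning one up is a single command (assuming minikube is already installed):


minikube start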


So, first things first - let's check if kubectl is configured properly and has access to our cluster. Type in:


kubectl cluster-info

The output should look very similar to this:
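For illustration, on a local minikube cluster it prints something roughly like this (the address and port will differ for your cluster):


Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy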

If it doesn't look similar - check the official documentation for support.


Deployment using manifests

Now that you can reach the cluster, it's time to install Kubernetes Dashboard. Download the required manifest using curl. The version included in the URL (v2.4.0) might be obsolete by the time you read this - remember to check for updates before executing the command.


curl https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml --output k8s.yaml

Install the application with this single kubectl command:


kubectl apply -f k8s.yaml

And just enjoy the magic.

Kubernetes will automatically create all required resources.

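To verify that everything came up correctly, you can list the resources in the newly created kubernetes-dashboard namespace:


kubectl get pods,services -n kubernetes-dashboard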

Great, you have just installed the Kubernetes Dashboard UI. But it's not over - there's one more thing to do before you can access your new K8s Dashboard.


Let's take a look at Kubernetes Dashboard authentication. Dashboard ships with a minimal RBAC configuration by default, which means you need to create a ServiceAccount to log in. First, create a new manifest for the ServiceAccount (e.g. sa-dashboard.yml):


apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

This manifest creates a ServiceAccount named admin-user in the kubernetes-dashboard namespace. To apply it, run the command below:


kubectl apply -f sa-dashboard.yml

After that, check if the ClusterRole cluster-admin exists. Usually it already does, but in some cases it might not - you can easily verify this with the following command:


kubectl get clusterroles cluster-admin

The output should be similar.

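For illustration, it should print something roughly like this (the timestamp will differ on your cluster):


NAME            CREATED AT
cluster-admin   2022-02-21T08:00:00Z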

In case that ClusterRole does not exist, you will need to add it - for this purpose, I have prepared a manifest that creates the cluster-admin role, granting admin privileges for all resources:

BEWARE OF DOG
This ClusterRole might pose a significant security threat if exploited. Please be aware that using overly permissive roles is very dangerous and should only be done for testing purposes, never in real scenarios!

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

To apply this manifest, copy the content above to a file (for this example, we'll name it cr-dashboard.yml) and execute the following command:


kubectl apply -f cr-dashboard.yml

If you already have the ServiceAccount and ClusterRole, the last step is to create a manifest for the ClusterRoleBinding (e.g. crb-dashboard.yml):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

In this case, I will use a ClusterRoleBinding, which means the role is granted across all namespaces in the cluster. Of course, if you want, you can bind this role only to your dedicated namespace using a RoleBinding - just remember to define the namespace name in your manifest (see the sketch after the apply command below).

To apply this manifest, do:


kubectl apply -f crb-dashboard.yml
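For reference, a namespace-scoped variant might look roughly like this (just a sketch, using a hypothetical namespace called my-namespace - the role would then only apply within that namespace):


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-user
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard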

Every ServiceAccount object has a Secret with a valid Bearer Token (created automatically) - you can use it to log in to Dashboard. It's time to get that token. To do so, execute the following command (use your own Namespace and ServiceAccount names):


kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

The result should look somewhat like this:


eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXY1N253Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwMzAzMjQzYy00MDQwLTRhNTgtOGE0Ny04NDllZTliYTc5YzEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Z2JrQlitASVwWbc-s6deLRFVk5DWD3P_vjUFXsqVSY10pbjFLG4njoZwh8p3tLxnX_VBsr7_6bwxhWSYChp9hwxznemD5x5HLtjb16kI9Z7yFWLtohzkTwuFbqmQaMoget_nYcQBUC5fDmBHRfFvNKePh_vSSb2h_aYXa8GV5AcfPQpY7r461itme1EXHQJqv-SN-zUnguDguCTjD80pFZ_CmnSE1z9QdMHPB8hoB4V68gtswR1VLa6mSYdgPwCHauuOobojALSaMc3RH7MmFUumAgguhqAkX3Omqd3rJbYOMRuMjhANqd08piDC3aIabINX6gP5-Tuuw2svnV6NYQ

If you know the name of the secret that holds your Bearer Token, you can use another method. You can also check the existing secrets in your cluster. Additional tip: using the -A flag allows you to search across all namespaces:


kubectl get secrets -A

Now, if you know the name of the secret, you can use the following command to describe it:


kubectl describe secret kubernetes-dashboard-token-lxq6q -n kubernetes-dashboard

In this case, the output will be a bit different:

The output should be similar to this


Write the token down, it will come in handy later.

Deployment using Helm Chart

Another way to deploy Kubernetes Dashboard is using Helm. But what is Helm? It is a very popular tool that helps you manage K8s applications - you can easily create, distribute, install and upgrade them. Helm makes complex Kubernetes installs much, much easier.


Let's try to install the K8s Dashboard using a Helm chart. First, you need to install Helm on your machine. Depending on your platform of choice, the installation process will differ. For your convenience, I've described a few options below.


From script:


curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

From Homebrew (macOS):


brew install helm

From Chocolatey (Windows):


choco install kubernetes-helm

If those methods don't cut it for you, check the official Helm website. For now, let's move on with the installation. First, take a look at what's inside the K8s Dashboard chart:

File tree of Kubernetes Dashboard Helm chart.


You may notice that this chart has another chart inside, named "metrics-server". Moreover, this package contains every manifest necessary for the K8s Dashboard. Of course, not everything has to be used during chart installation - you can decide which resources should be installed and which should not. To select what you wish to install, toggle the appropriate values in values.yaml - for example, if you don't want to install an Ingress, set its enabled property to false:


ingress:
  enabled: false

Now, the time has come to install the Kubernetes Dashboard chart. First, add the kubernetes-dashboard repository to Helm:


helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

Then, deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart:


helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard

The output should be similar to this:

The output should be similar to this

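By the way, if you prefer not to edit values.yaml inside the chart, overrides like the ingress setting discussed earlier can be passed at install time with Helm's standard flags - for example:


helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --set ingress.enabled=false

or, keeping your overrides in a separate file (here a hypothetical my-values.yaml):


helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -f my-values.yaml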

Kubernetes will create a secret token for your ServiceAccount automatically. Remember to use your own namespace and service account name, and ensure that a proper Bearer Token has been generated. To check the token for your ServiceAccount, type the following command:


kubectl -n default get secret $(kubectl -n default get sa/kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

The output should be similar to this:


eyJhbGciOiJSUzI1NiIsImtpZCI6IlNVa2QwNVZlQ3NLbzdwSXE1VFRBQVdheHRVMDh0QzdDdHl2OWtvNldZMFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Ims4cy1kYXNoYm9hcmQta3ViZXJuZXRlcy1kYXNoYm9hcmQtdG9rZW4tY2pqNGsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiazhzLWRhc2hib2FyZC1rdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM0MmQ4M2I2LTAxNWQtNDNlYS04NjA4LWE1YjIxYmY3OTk2ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Oms4cy1kYXNoYm9hcmQta3ViZXJuZXRlcy1kYXNoYm9hcmQifQ.J7w9Kux94UT61QGsstXmJRFrh661-UADJgew58IZCfDcSqf-yX8sP7TVTVTYD4FhEJQl2nCuJrV1hAHNk1xr5am1E74g2HCDvj_Sxabi4xagIjcR2RRBuYIqmu92FUY6FH3jPWuZNw5vwZoCzfO-Y6m4CwH3RuZNA6NNSciVCRRW12UX-PdEKpCY5tr7EJH_iteewMDDYiImSM8DNbngxgwYCASoTOSeZHAl6eStkgQRAikmAQenjhs6yVs8-_EhLiefUFBWAl16RFVTYlsf5GZErcQGAMdZJYHg2V8lBk6qL8Q_-znZRwcm3IMrekIMXsnFwwKo4M92lsxbyKlhJQ

If you know your secret name, you can use another method. Just like with the manual installation described above, you can check the existing secrets in your cluster using this command:


kubectl get secrets -A

Now that you know the name of the secret, you can use the following command to describe it:


kubectl describe secrets k8s-dashboard-kubernetes-dashboard-token-b4f7q

The output should be similar to this


Write the token down - we will need it later.

How to access Kubernetes Dashboard remotely?

In this section, you can find two ways to access K8s Dashboard: internally and externally.

BEWARE OF DOG
Exposing anything from your cluster to external traffic should be well thought out and planned. There are a lot of endpoints that could pose immense security risks if exposed externally, up to and including a takeover of the entire cluster. Be careful.

Internal access to the Kubernetes Dashboard

Let's try the local connection first. Of course, there are a lot of ways to access the Kubernetes Dashboard, but in this article we'll use kubectl with the bearer token that you wrote down in the previous steps. First, enable access to the Dashboard by running the following command:


kubectl proxy

This command allows access to the Kubernetes web interface only from the local machine. The UI will be available at localhost:8001. The full Kubernetes Dashboard URL is:


http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/


If you go to this link, you should see the Kubernetes Dashboard login page. Take note of the URL details - if you are using the K8s Dashboard from the default Helm chart, you won't be able to use the above URL as-is.
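For the Helm chart installation, the path depends on the namespace and service name of your release - assuming it was installed into the default namespace and the service kept the default name kubernetes-dashboard, the URL would look like this:


http://localhost:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:/proxy/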

Choose "token" from the dropdown, paste the token you wrote down, and just click "Sign In".

External Access to the Kubernetes Dashboard

Now you know how to configure and locally access the Kubernetes Dashboard. But you might also be interested in accessing it remotely from outside the cluster. The best way to expose Dashboard externally is using an Ingress resource.

BEWARE OF DOG
Although you can also use HTTP, HTTPS offers additional security without much additional hassle. HTTPS is the recommended way to go here!

So, we'll focus on the HTTPS endpoint. First, you should install cert-manager. Cert-manager adds certificate issuers as resource types in K8s clusters and simplifies the process of obtaining, renewing and using certificates: it ensures that they are up-to-date and valid, and attempts to renew them before they expire. To install it, we will use kubectl once again. The version included in the URL (v1.7.1) might be obsolete by the time you read this - remember to check for the latest stable version before executing this command:


kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.yaml

If you wish to learn more about Cert-manager, check the official website at https://cert-manager.io/docs/

This is most probably how it will look.
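Before moving on, you can also make sure the cert-manager pods are up and running (by default they are deployed into their own cert-manager namespace):


kubectl get pods -n cert-manager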

As you can see, a lot of resources have been created, but we're not done yet. The next step is to create a ClusterIssuer. We will use the ACME (Automated Certificate Management Environment) type. In a few words, when you create a new ACME ClusterIssuer, cert-manager will generate a private key. This key will be used to identify you with the ACME server. Additionally, certificates issued by an ACME server are trusted by clients' computers by default, and ACME certificates are typically free.


The ACME CA server verifies that the client owns the domain by checking whether the client successfully completes so-called "challenges". Let's take a look at the solvers section. Cert-manager offers two challenge validations:

  • HTTP01 challenge validation - challenges are completed by presenting a computed key at an HTTP URL endpoint that is routable over the internet. Once the ACME server is able to get this key from the URL, it will validate that you are the owner of the domain.

  • DNS01 challenge validation - challenges are completed by providing a computed key in a DNS TXT record. Once this TXT record has been propagated across the internet, the ACME server will be able to retrieve the key via a DNS lookup.

Take a look at the ClusterIssuer manifest below. In the email section, please type your own email address - Let's Encrypt will use it to contact you about expiring certificates and issues related to your account:


apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dashboard
spec:
  acme:
    email: [email protected]
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: dashboard-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: nginx

To install ClusterIssuer from this manifest, all you have to do is:


kubectl apply -f cluster-issuer.yaml
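To confirm that cert-manager has registered the issuer and it is ready to sign certificates, check its status - the READY column should show True:


kubectl get clusterissuer letsencrypt-dashboard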

The next step is to create an Ingress, which manages external access to the cluster services. This will be accomplished by the manifest below:


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-dashboard
  namespace: kubernetes-dashboard
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-dashboard
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - dashboard.example.com
    secretName: ingress-dashboard-cert
  rules:
  - host: dashboard.example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

If you are using the K8s Dashboard from the default Helm chart, keep in mind that you should change the namespace definition. This manifest declares that you want to use the ClusterIssuer named letsencrypt-dashboard (which you created earlier) to obtain the certificate. Moreover, your host is dashboard.example.com, and every request to that URL should be routed to the kubernetes-dashboard service.


Ready? Then let's install this Ingress:


kubectl apply -f ingress.yaml

After that, you can verify that the Ingress resource exists by using the following command:


kubectl get ingress ingress-dashboard -n kubernetes-dashboard

The output should be similar to this.

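You can also check whether cert-manager has created and issued the certificate referenced by the Ingress - it may take a minute or two, and the READY column should eventually show True:


kubectl get certificate -n kubernetes-dashboard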

External Access to the Kubernetes Dashboard hosted on Minikube

If you are testing Kubernetes Dashboard external access on minikube, there is an additional required step you should remember - enabling the NGINX Ingress controller. Run:


minikube addons enable ingress

The installation process is quite quick.


As you might have noticed from the installation tooltip, the minikube tunnel command is needed to create a network route on the host.
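So, open a separate terminal window and run:


minikube tunnel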

The tunnel will be created after confirmation.


Additional tip: when testing Ingress on minikube, remember to update your /etc/hosts file so that your domain name resolves to 127.0.0.1.


127.0.0.1       dashboard.example.com

Now, you should be able to log in to your K8s Dashboard at https://dashboard.example.com/

Same as previously, use your token to log in.


Conclusion

The Kubernetes Dashboard is a really powerful web-based application: easy to install and very simple to manage. Of course, this application is not perfect, but it provides very valuable data visualization, much clearer than raw kubectl output. A great dashboard from which you can manage your whole cluster - everything in one place. Very convenient.


Please remember that you should not grant admin privileges to the ServiceAccount dedicated to the K8s Dashboard on real, production-grade clusters. Moreover, be careful when exposing your Dashboard to external traffic. Safety first!


I hope this article will prove useful. Thank you for your time, and enjoy your new Kubernetes Dashboard :)
