Debugging

Check if the Kubermatic components are running

  1. Check the Kubermatic pods by running kubectl get pods -n kubermatic
  2. If any of them is not running, execute kubectl logs -n kubermatic $PODNAME to find the cause
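The two steps above can be sketched as a short shell session; the pod name in the last command is a hypothetical example, and the --field-selector filter is an optional convenience for spotting non-running pods:

```shell
NAMESPACE=kubermatic

# Step 1: list all Kubermatic pods and their status.
kubectl get pods -n "$NAMESPACE"

# Optionally narrow the listing to pods that are not in the Running phase.
kubectl get pods -n "$NAMESPACE" --field-selector=status.phase!=Running

# Step 2: fetch the logs of a failing pod (pod name is hypothetical).
kubectl logs -n "$NAMESPACE" kubermatic-api-7d9f8b4c6-x2x9p
```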

The individual components and their purpose are:

  • kubermatic-ui: Provides the UI
  • kubermatic-api: Provides the API
  • master-controller: Sets up access for users to projects and clusters
  • controller-manager: Creates all the components required for a cluster control plane

Check for problems with an individual user cluster

  1. Find the cluster ID by going to the details view of your cluster in the UI. The cluster ID is the last part of the URL, which looks something like this: `https://kubermatic/projects/project-id/dc/dc-name/clusters/cluster-id`
  2. Get the kubeconfig for your seed cluster
  3. Check if there are any errors in the events for the cluster in question by running kubectl describe cluster cluster-id
  4. Check if all pods for the cluster are running by executing kubectl get pods -n cluster-$CLUSTER_ID
  5. If that is not the case, check the log of the pod in question by running kubectl logs -n cluster-$CLUSTER_ID $PODNAME
  6. If you want to play around with flags or other settings for a pod, you can make Kubermatic stop managing the cluster by running kubectl edit cluster $CLUSTER_ID and setting .spec.pause to true
  7. If you want more detailed logs from Kubermatic, you can edit one of its deployments, e.g. kubectl edit deployment kubermatic-controller-manager-v1 -n kubermatic, and set the verbosity by adjusting the default of -v=2 to e.g. -v=4
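Steps 1, 3, 4, and 6 can be combined into a minimal sketch. All values are placeholders taken from the example URL above; kubectl patch is used here as a non-interactive alternative to kubectl edit for setting .spec.pause:

```shell
# Hypothetical cluster URL copied from the UI; the cluster ID is the
# last path segment.
URL="https://kubermatic/projects/project-id/dc/dc-name/clusters/cluster-id"
CLUSTER_ID="${URL##*/}"

# Events for the cluster object (run against the seed cluster).
kubectl describe cluster "$CLUSTER_ID"

# The control plane of the user cluster lives in a dedicated namespace.
kubectl get pods -n "cluster-$CLUSTER_ID"

# Stop Kubermatic from reconciling the cluster before experimenting
# with flags: set .spec.pause to true.
kubectl patch cluster "$CLUSTER_ID" --type=merge -p '{"spec":{"pause":true}}'
```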

Check for problems with machines for an individual user cluster

  1. Get the kubeconfig to your cluster via the UI
  2. Configure kubectl to use it by running export KUBECONFIG=$DOWNLOADED_KUBECONFIG_FILE
  3. Get the machines via kubectl get machine -n kube-system
  4. Check the events for the machines by running kubectl describe machine -n kube-system $MACHINE_NAME
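As a sketch of the steps above, assuming the kubeconfig downloaded from the UI was saved as ./kubeconfig (a hypothetical path) and using a hypothetical machine name:

```shell
# Point kubectl at the user cluster's kubeconfig.
export KUBECONFIG=./kubeconfig

# List all machines of the cluster.
kubectl get machine -n kube-system

# Inspect the events of a single machine (name is hypothetical).
kubectl describe machine -n kube-system machine-example-abcde
```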