Installation guide for KubeLB

Installation

Prerequisites for KubeLB

Consumer cluster

  • KubeLB manager cluster API access.
  • Registered as a tenant in the KubeLB manager cluster.

Load balancer cluster

  • Service type LoadBalancer implementation. This can be a cloud solution or a self-managed implementation like MetalLB.
  • Network access to the consumer cluster nodes on the node port range (default: 30000-32767). This is required for the Envoy proxy to be able to connect to the consumer cluster nodes.
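The node-port connectivity requirement above can be sanity-checked from the load balancer cluster with a plain TCP probe. The node IP and port below are placeholders (the IP is from the reserved TEST-NET-3 range); substitute a real consumer cluster node address and a port from your configured node port range.

```shell
# Placeholder values: replace with a real consumer cluster node address and a
# port from the configured node port range (default: 30000-32767).
NODE_IP="203.0.113.10"
NODE_PORT=31000

# Attempt a TCP connection with a short timeout using bash's /dev/tcp.
if timeout 3 bash -c "exec 3<>/dev/tcp/${NODE_IP}/${NODE_PORT}" 2>/dev/null; then
  echo "node port reachable"
else
  echo "node port NOT reachable"
fi
```

A successful probe only shows that one node/port pair is reachable; repeat for other nodes if your network policies differ per node.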

Installation for KubeLB manager

The KubeLB manager is deployed as a Kubernetes application. It can be installed using the KubeLB manager Helm chart as follows:

Prerequisites

  • Create a namespace named kubelb for the KubeLB manager to be deployed in.

Install helm chart for KubeLB manager

Now we can install the Helm chart:

helm pull oci://quay.io/kubermatic/helm-charts/kubelb-manager --version=v1.0.0 --untardir "kubelb-manager" --untar
## Create and update values.yaml with the required values.
helm install kubelb-manager kubelb-manager/kubelb-manager --namespace kubelb -f values.yaml

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| autoscaling.enabled | bool | `false` | |
| autoscaling.maxReplicas | int | `10` | |
| autoscaling.minReplicas | int | `1` | |
| autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| autoscaling.targetMemoryUtilizationPercentage | int | `80` | |
| fullnameOverride | string | `""` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.repository | string | `"quay.io/kubermatic/kubelb-manager"` | |
| image.tag | string | `"v1.0.0"` | |
| imagePullSecrets | list | `[]` | |
| kubelb.debug | bool | `false` | |
| kubelb.enableLeaderElection | bool | `true` | |
| kubelb.envoyProxy.affinity | object | `{}` | |
| kubelb.envoyProxy.nodeSelector | object | `{}` | |
| kubelb.envoyProxy.replicas | int | `3` | The number of replicas for the Envoy Proxy deployment. |
| kubelb.envoyProxy.resources | object | `{}` | |
| kubelb.envoyProxy.singlePodPerNode | bool | `true` | Deploy a single pod per node. |
| kubelb.envoyProxy.tolerations | list | `[]` | |
| kubelb.envoyProxy.topology | string | `"shared"` | Topology defines the deployment topology for Envoy Proxy. Valid values are: shared, dedicated, and global. |
| kubelb.envoyProxy.useDaemonset | bool | `false` | Use a DaemonSet for the Envoy Proxy deployment instead of a Deployment. |
| kubelb.propagateAllAnnotations | bool | `false` | Propagate all annotations from the LB resource to the LB service. |
| kubelb.propagatedAnnotations | object | `{}` | Allowed annotations that will be propagated from the LB resource to the LB service. |
| kubelb.skipConfigGeneration | bool | `false` | Set to true to skip the generation of the Config CR. Useful when the Config CR needs to be managed manually. |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| podAnnotations | object | `{}` | |
| podLabels | object | `{}` | |
| podSecurityContext.runAsNonRoot | bool | `true` | |
| podSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | |
| rbac.allowLeaderElectionRole | bool | `true` | |
| rbac.allowMetricsReaderRole | bool | `true` | |
| rbac.allowProxyRole | bool | `true` | |
| rbac.enabled | bool | `true` | |
| replicaCount | int | `1` | |
| resources.limits.cpu | string | `"100m"` | |
| resources.limits.memory | string | `"128Mi"` | |
| resources.requests.cpu | string | `"100m"` | |
| resources.requests.memory | string | `"128Mi"` | |
| securityContext.allowPrivilegeEscalation | bool | `false` | |
| securityContext.capabilities.drop[0] | string | `"ALL"` | |
| securityContext.runAsUser | int | `65532` | |
| service.port | int | `8001` | |
| service.protocol | string | `"TCP"` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `""` | |
| serviceMonitor.enabled | bool | `false` | |
| tolerations | list | `[]` | |
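As an illustration, a values.yaml that overrides a few of the defaults above might look like the following. The chosen values are examples, not recommendations; all keys are taken from the table.

```yaml
# Example overrides for the kubelb-manager chart (illustrative values only).
replicaCount: 2

kubelb:
  envoyProxy:
    # Valid topologies: shared, dedicated, global.
    topology: "shared"
    replicas: 3

serviceMonitor:
  # Requires the Prometheus Operator CRDs to be present in the cluster.
  enabled: true
```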

Installation for KubeLB CCM

Prerequisites

  • Create a namespace named kubelb for the CCM to be deployed in.
  • The CCM expects a Secret named kubelb-cluster that contains a kubeconfig file (stored under the key kubelb) for accessing the load balancer cluster. To create it, run: kubectl --namespace kubelb create secret generic kubelb-cluster --from-file=kubelb=<path to kubelb kubeconfig file>. The name of the secret can be overridden using .Values.kubelb.clusterSecretName.
  • Update the tenantName in the values.yaml to a unique identifier for the tenant. This is used to identify the tenant in the manager cluster. It can be any unique string that conforms to lowercase RFC 1123.
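The RFC 1123 requirement on tenantName can be checked locally before installing. The regular expression below is a sketch of the lowercase DNS-label rule (alphanumerics and hyphens, starting and ending with an alphanumeric, at most 63 characters); the tenant name is a placeholder.

```shell
# Pre-flight check for a tenant name against the lowercase RFC 1123 label rule.
TENANT_NAME="my-tenant-01"  # placeholder: substitute your tenant identifier

if printf '%s' "$TENANT_NAME" | grep -Eq '^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$'; then
  echo "valid tenant name"
else
  echo "invalid tenant name"
fi
```

Names containing uppercase letters, underscores, or leading/trailing hyphens will be rejected by this check.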

At this point, a minimal values.yaml should look like this:

kubelb:
    clusterSecretName: kubelb-cluster
    tenantName: <unique-identifier-for-tenant>

Install helm chart for KubeLB CCM

Now we can install the Helm chart:

helm pull oci://quay.io/kubermatic/helm-charts/kubelb-ccm --version=v1.0.0 --untardir "kubelb-ccm" --untar
## Create and update values.yaml with the required values.
helm install kubelb-ccm kubelb-ccm/kubelb-ccm --namespace kubelb -f values.yaml

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | |
| autoscaling.enabled | bool | `false` | |
| autoscaling.maxReplicas | int | `10` | |
| autoscaling.minReplicas | int | `1` | |
| autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| autoscaling.targetMemoryUtilizationPercentage | int | `80` | |
| extraVolumeMounts | list | `[]` | |
| extraVolumes | list | `[]` | |
| fullnameOverride | string | `""` | |
| image.pullPolicy | string | `"IfNotPresent"` | |
| image.repository | string | `"quay.io/kubermatic/kubelb-ccm"` | |
| image.tag | string | `"v1.0.0"` | |
| imagePullSecrets | list | `[]` | |
| kubelb.clusterSecretName | string | `"kubelb-cluster"` | |
| kubelb.enableLeaderElection | bool | `true` | |
| kubelb.nodeAddressType | string | `"InternalIP"` | |
| kubelb.tenantName | string | `nil` | |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | |
| podAnnotations | object | `{}` | |
| podLabels | object | `{}` | |
| podSecurityContext.runAsNonRoot | bool | `true` | |
| podSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | |
| rbac.allowLeaderElectionRole | bool | `true` | |
| rbac.allowMetricsReaderRole | bool | `true` | |
| rbac.allowProxyRole | bool | `true` | |
| rbac.enabled | bool | `true` | |
| replicaCount | int | `1` | |
| resources.limits.cpu | string | `"100m"` | |
| resources.limits.memory | string | `"128Mi"` | |
| resources.requests.cpu | string | `"100m"` | |
| resources.requests.memory | string | `"128Mi"` | |
| securityContext.allowPrivilegeEscalation | bool | `false` | |
| securityContext.capabilities.drop[0] | string | `"ALL"` | |
| securityContext.runAsUser | int | `65532` | |
| service.port | int | `8443` | |
| service.protocol | string | `"TCP"` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `""` | |
| serviceMonitor.enabled | bool | `false` | |
| tolerations | list | `[]` | |
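For reference, a fuller CCM values.yaml that overrides some of the defaults above could look like this. The values are illustrative; the keys (clusterSecretName, tenantName, nodeAddressType) come from the table, and the tenant name is a placeholder.

```yaml
# Example overrides for the kubelb-ccm chart (illustrative values only).
kubelb:
  clusterSecretName: kubelb-cluster
  tenantName: my-tenant-01  # placeholder: substitute your tenant identifier
  # Use node external IPs if the load balancer cluster reaches the consumer
  # cluster nodes over their public addresses; the default is InternalIP.
  nodeAddressType: "ExternalIP"
```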