Ingress and Load Balancing in Karbon Platform Services

Applications deployed as containers in Kubernetes often have to be exposed to external clients. Just think of a web application accessed by browsers in a private company network or from the entire internet. By default, however, a container inside a Kubernetes pod is only reachable from within the pod network. In fact, exposing all microservices to external networks is highly undesirable; only services with proper authentication in place, such as an API server, should be made accessible externally.

Kubernetes offers multiple constructs to facilitate external access to containers:

  • Services of type LoadBalancer for access via external IP
  • Services of type NodePort for access via node IP and port
  • HostPort of pods for access via node IP and port
  • Ingress for managing external access typically via HTTP(S)

LoadBalancers make services available through one or more virtual IPs in an external network. They are typically provided as services by cloud vendors, or provisioned on-prem as hardware, as virtual appliances, or deployed within the Kubernetes cluster itself.
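
As a rough sketch, a service of type LoadBalancer differs from a regular service only in its type field. The names and ports below (my-api, 8080) are placeholders for illustration:

apiVersion: v1
kind: Service
metadata:
  name: my-api            # placeholder name
spec:
  type: LoadBalancer      # ask the platform/cloud for an external virtual IP
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 80            # port exposed on the external IP
      targetPort: 8080    # port the container listens on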

NodePort exposes a service on each node's IP at a static port, typically in the range 30000–32767. The port can be dynamically allocated by Kubernetes.
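
For illustration, the same placeholder service could instead be exposed as a NodePort; nodePort can be omitted to let Kubernetes allocate a port from the default range:

apiVersion: v1
kind: Service
metadata:
  name: my-api            # placeholder name
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 80            # cluster-internal service port
      targetPort: 8080    # container port
      nodePort: 30080     # omit to have Kubernetes pick a port from 30000-32767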

HostPort, unlike NodePort, is neither dynamically allocated nor available on all nodes. A hostPort is associated with a pod, not a service, and is only opened on the hosts on which the pod has been scheduled. The onus is also on the Kubernetes user to avoid port conflicts. It is not a recommended way to access containers in Kubernetes.
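
A hostPort, in contrast, is declared on the container itself rather than on a service. Again a sketch with placeholder names and image:

apiVersion: v1
kind: Pod
metadata:
  name: my-api            # placeholder name
spec:
  containers:
    - name: my-api
      image: nginx        # placeholder image
      ports:
        - containerPort: 80
          hostPort: 8080  # opened only on the node this pod is scheduled on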

Ingress controllers typically leverage one of the previous mechanisms and provide an additional layer of HTTP-centric routing and load balancing based on HTTP host names and paths. Features like TLS termination can be provided by either a load balancer or an ingress controller.

This article demonstrates how to externally expose applications running on a KPS service domain using F5’s BigIP platform in conjunction with an ingress controller service. We chose F5 BigIP as a popular load balancer option, but any external load balancer can be used.

Ingress, ingress and ingress

Ingress is a rather confusing term. In Kubernetes, “ingress” usually refers to an API resource of kind Ingress that manages external access to the services in a cluster, typically over HTTP. In networking, ingress is any traffic originating from an external network. The Kubernetes ingress API resource is heavily focused on HTTP and may provide load balancing, SSL termination and name-based virtual hosting. Adding to the confusion is the fact that Kubernetes does not implement the ingress API resource itself: it only defines the API, and an actual ingress controller implementing it has to be provided separately.

Karbon Platform Services, as a curated Kubernetes platform, provides two managed ingress controllers, namely Traefik and ingress-nginx. Both implement the ingress API resource but go beyond ingress as defined by Kubernetes.

Enabling an Ingress Controller

KPS offers two managed ingress controllers:

  • Traefik
  • ingress-nginx

For basic ingress capabilities either one can be chosen; both implement the Kubernetes ingress API resource. However, due to limitations of the ingress API resource, ingress controllers offer numerous extensions, either as annotations or as additional custom resources: ingress-nginx uses additional annotations, whereas Traefik chose the custom resource option.
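
To give a flavor of the two extension styles, the sketch below shows an ingress-nginx annotation and a Traefik IngressRoute custom resource side by side. The service name and path (my-api, /api) are placeholders, and the exact annotation keys and Traefik API group/version depend on the controller versions deployed on the service domain:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-api
  annotations:
    # ingress-nginx extension via annotation: rewrite the matched path before forwarding
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /api
            backend:
              serviceName: my-api
              servicePort: 80
---
# Traefik expresses the same routing through its own custom resource
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-api
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/api`)
      kind: Rule
      services:
        - name: my-api
          port: 80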

An ingress controller is enabled per project, and the ingress controller type must not conflict across projects on the same service domain. The ingress controller is deployed to all KPS nodes and listens on host ports 80 and 443.

Services are managed under “Project Home” in the KPS UI:

Ingress controllers are deployed per service domain. “Traefik” will show up in the sidebar once enabled, and “Deployments” shows all service domains on which Traefik has been provisioned.

Testing Ingress Controller

Let’s deploy a simple app for testing the ingress controller. The following Kubernetes manifest can be deployed directly as a “Kubernetes Application” in a KPS project:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  rules:
    - http:
        paths:
          - path: /whoami
            backend:
              serviceName: whoami
              servicePort: web

This app, although very basic, demonstrates how any application serving HTTP requests can be exposed using an ingress API resource. In this case we expose the service “whoami” on the HTTP path “/whoami”. Ingress-related entities are reported in the UI next to the ingress controller deployments; in the case of ingress, “Rules” is the primary entity of interest.

Different service domains can report different rules for the same ingress in an app. The UI takes this into account by displaying multiple entries if they differ, since KPS allows templating in app YAMLs, which can render the same resource differently across service domains. In this simple example we see a path-based rule with a single service as destination.

On KPS, ingress controllers are configured to listen on host ports 80 and 443 for HTTP and HTTPS traffic respectively. That is, any KPS node in the service domain can serve HTTP requests. For instance, I have three nodes in my service domain:

Service domain nodes in the KPS UI

I will pick the first node for accessing the demo app.

C02S60NCFVH7:~ heiko.koehler$ curl 10.45.60.153/whoami
Hostname: whoami-6957d8fb47-c4rcb
IP: 127.0.0.1
IP: 10.32.64.12
RemoteAddr: 10.32.0.5:39444
GET /whoami HTTP/1.1
Host: 10.45.60.153
User-Agent: curl/7.64.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.40.38.242
X-Forwarded-Host: 10.45.60.153
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-qbg6t
X-Real-Ip: 10.40.38.242

All nodes are equal with respect to HTTP(S) access. Any external load balancer can hide all nodes behind one or more virtual IPs. This allows clients to access services on service domains by DNS names which map to one or more virtual IPs.
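
With a stable virtual IP in front of the nodes, host-based routing becomes practical. As a sketch, the demo ingress could additionally match on a DNS name that resolves to the virtual IP (whoami.example.com is a placeholder):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whoami
spec:
  rules:
    - host: whoami.example.com   # placeholder DNS name resolving to the virtual IP(s)
      http:
        paths:
          - path: /whoami
            backend:
              serviceName: whoami
              servicePort: web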

Creating a BigIP Virtual Server in front of a Service Domain

F5 can be deployed in multiple ways. For this demo I’ve used BigIP Virtual Edition at 10.45.60.195. Instructions on how to set up a standalone BigIP can be found at the end of this article.

F5, like most load balancers, has a concept of node pools: a set of addresses which are treated equally for accessing a service. First we have to configure all nodes of the service domain in BigIP.

Next we group the nodes into a pool for the load balancer to forward traffic to. Under “Local Traffic”/Pools a new pool can be created from the nodes. We must specify a service port for each node; this must be 80 since the ingress controller listens on port 80 for HTTP traffic. HTTPS can be set up similarly by pointing to port 443.

Last but not least a “Virtual Server” is added. I chose a virtual server of type “Performance (HTTP)” listening on port 80. Since we are using a standalone BigIP, we choose the address of the BigIP itself as “Destination Address”.

Each virtual server has a default pool, which I set to the previously created pool with the “Default Persistence Profile” set to “cookie”.

We can now access any service exposed by the ingress controller through BigIP’s virtual IP.

C02S60NCFVH7:~ heiko.koehler$ curl 10.45.60.195/whoami
Hostname: whoami-6957d8fb47-c4rcb
IP: 127.0.0.1
IP: 10.32.64.12
RemoteAddr: 10.32.192.50:51148
GET /whoami HTTP/1.1
Host: 10.45.60.195
User-Agent: curl/7.64.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.45.60.195
X-Forwarded-Host: 10.45.60.195
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-7f

Setting up F5 BigIP Virtual Edition VM

The F5 BigIP platform is a popular choice for load balancing and serves as an example of how to use an external load balancer in front of a service domain. For this step we’ve chosen to deploy a virtual BigIP appliance on an AHV cluster.

The QCOW2 image for BigIP VE can be downloaded and installed in a similar way to KVM; to that end we can follow the KVM instructions for BigIP Virtual Edition. After the QCOW2 image has been downloaded it can be published to the AHV cluster under Settings/“Image Configuration”.

I’ve configured 4 vCPUs with 2 threads each and dedicated 16GB of memory to my BigIP VM:

The boot disk has been cloned from BigIP’s QCOW2 image:

Clone AHV VM from BigIP QCOW2 image

For testing purposes we run a standalone BigIP with a single NIC:

After the BigIP VM has booted, you can log into the VM via the console as user “root” with password “default”.

You are required to change the password on first successful login. The “config” utility lets you configure web access.

We configure the current IP as the management IP for the web configuration utility:

The web configuration utility comes up on port 8443 rather than 443 on a standalone BigIP VM. You can log into the configuration utility as “admin” using the same password as “root”. SSH access is enabled for the “root” user as well, but not for “admin”. In our case we point the browser to “https://10.45.60.195:8443” (mind the https, not http).
A license is needed for any BigIP. After entering the license key via the configuration utility, the BigIP is ready for use.

Where to go from here

The combination of an ingress controller and a load balancer can facilitate access to any service in Kubernetes, as long as it serves HTTP requests. Ingress controllers can perform advanced HTTP request routing based on path, headers or host names. We didn’t go into TLS-based server authentication, as this deserves its own post.