Automating Karbon Deployments via API

You might be wondering – why is the featured image for this article a violin? Primarily it’s because I love the image itself (lol) but also because it’s a really nice, abstract way of looking at orchestration in detail. Ok, so that was bad – with that out of the way, let’s jump into today’s subject – Deploying Kubernetes clusters with the new Nutanix Karbon API.

This article is reasonably long, so grab your favourite beverage, sit back and see how the Karbon APIs can be used to deploy Kubernetes.

Quick Karbon Intro

The Nutanix Karbon homepage describes Karbon as follows:

Fast-track your way to production-ready cloud native infrastructure with Karbon, an enterprise Kubernetes management solution. Dramatically simplify provisioning, operations, and lifecycle management.

https://www.nutanix.com/products/karbon

What does that mean for us as developers? Kubernetes is such an enormous topic that going into detail about it is beyond the scope of this particular article. That said, Kubernetes (or K8s) can be a fairly daunting prospect for new players. This can include things like understanding why K8s exists in the first place, what the requirements are and how to get a K8s cluster up and running. Karbon dramatically simplifies what can be quite a complex process, especially if that process is completed manually.

Like the developers that decided to refer to Kubernetes as K8s, I’m going to be lazy and assume you’ve got a script or third-party integration that needs to programmatically deploy a K8s cluster. To be clear, we are deploying a K8s cluster but we’re going to use Nutanix Karbon to do it. Even easier!

Karbon APIs

The Nutanix Karbon APIs are a relatively new release, only being made available in July 2020. Those of you with an interest in the Nutanix APIs will have noticed a new resource on the API Reference page – this gives quick access to the Karbon API documentation. Everything covered in today’s article is available in official form there, including other API endpoints not covered here.

GA vs alpha/beta

As you go through the various endpoints that are exposed by the Karbon APIs, you’ll see various endpoint prefixes:

  • /karbon/v1
  • /karbon/v1-alpha.1
  • /karbon/v1-beta.1

As the prefixes suggest, some of the available endpoints are Generally Available (GA) and are prefixed with /karbon/v1, whereas other endpoints are still considered alpha or beta and are prefixed with /karbon/v1-beta.1 or /karbon/v1-alpha.1. The appropriate caution should be taken before using the alpha or beta API endpoints in a production environment.

It’s worth noting that the main API endpoints we’ll use today are prefixed with /karbon/v1 and are considered GA. There’s a single exception to that, but I’ll mention it at the appropriate time.

Test Environment

As with all my API testing and development, I’m using Postman collections to organise my requests. If you are new to Postman, please consider reading the article titled “So many variables! How I test Nutanix APIs with Postman”. It covers how I use Postman to do exactly what we’ll be doing today.

Versions

Here are the software versions I’m using on my development cluster:

  • AOS 5.17.1
  • Prism Central 5.17.0.3
  • Karbon 2.1.0
  • Nutanix Karbon OS image ntnx-0.6

Assumptions

Karbon does have prerequisites, so here are some high-level assumptions I’m making for anyone following this article in their own environment.

  • Your environment meets the requirements and has Karbon enabled. Please see the Requirements section of the Nutanix Karbon Guide for more information as this article won’t cover getting Karbon running in your environment.
  • Nutanix Karbon OS image ntnx-0.6 has already been downloaded and is available for deployment. Please see Downloading Images in the Nutanix Karbon Guide for more information.

List Existing Clusters

Before creating new clusters, let’s take a quick look at what clusters may already be running in our environment.

The request URI for this is as follows (this is the single exception mentioned earlier – the only request in today’s article that is accessed via a beta endpoint):

https://[prism_central_ip_address]:9440/karbon/v1-beta.1/k8s/clusters

By sending this GET request to Prism Central, the JSON response will contain details on any existing K8s clusters that are being managed by Karbon. This request, when run against my test environment, shows there are three existing K8s clusters being managed by Karbon. The entire response is quite long – please note the screenshot below only shows the first cluster in the response.

JSON array containing all 3 Kubernetes clusters being managed by Karbon

As you can see in the screenshot above, we now have details about the visible cluster’s etcd configuration, the API server’s IP address, the type of master deployment (single master) as well as information about the workers and Kubernetes version.
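
If you’d rather script this than use Postman, here’s a minimal Python sketch of the same request using the popular requests library. The Prism Central address and credentials are placeholders, and I’m assuming the list response is a JSON array of cluster objects carrying the same name and status fields we’ll see in the single-cluster response later in the article:

import requests

# placeholder connection details; replace with your own environment's values
prism_central = "10.0.0.1"
url = f"https://{prism_central}:9440/karbon/v1-beta.1/k8s/clusters"

# verify=False suits lab environments with self-signed certificates only
response = requests.get(url, auth=("admin", "password"), verify=False)
response.raise_for_status()

# assumes the response body is a JSON array of cluster objects
for cluster in response.json():
    print(f"{cluster['name']}: {cluster['status']}")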

Building the API Requests

Since we are going to deploy Kubernetes clusters using Karbon and the Karbon APIs, we can now get started on building out the request. Here is the request URI we’ll be using to deploy K8s clusters using Karbon:

https://[prism_central_ip_address]:9440/karbon/v1/k8s/clusters

This is a POST request and therefore requires a corresponding JSON payload that tells the Karbon APIs what we actually want to do. As a small point of interest, let’s look at what happens if we submit a request to that URI without any JSON payload at all:

{
    "code": 602,
    "message": "body in body is required"
}

As you can see, we are politely told that a JSON body is missing and is required before we can go any further. Along the way, you’ll see similar messages if you accidentally omit a required parameter or field within the payload. It’s not overly relevant but I often interchange the words “payload” and “body” when talking about HTTP POST requests. In the context of these articles, they’re the same thing.
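
If you’d like to see that error for yourself, here’s a quick Python sketch (again with placeholder connection details) that sends the POST with no body at all:

import requests

# placeholder connection details; replace with your own environment's values
url = "https://10.0.0.1:9440/karbon/v1/k8s/clusters"

# deliberately sending no JSON body to trigger the 602 error shown above;
# verify=False suits lab environments with self-signed certificates only
response = requests.post(url, auth=("admin", "password"), verify=False)
print(response.status_code, response.json())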

Single Master Kubernetes Cluster

If you have some exposure to Kubernetes already, you’ll be aware that the Master Node configuration can be single-master or multi-master. This is an oversimplification, but single-master essentially matches the “Development” deployment in Nutanix Karbon. A single-master development cluster will be made up of the following components:

  • 1x Master
  • 1x Worker
  • 1x etcd

Please see Kubernetes Components for information about the various Kubernetes components.

For this initial deployment, it’s important to note this configuration is not considered highly available. Any one of the component virtual machines could fail, resulting in cluster unavailability.

Let’s first look at the JSON payload that can be used to deploy a single-master Kubernetes cluster managed by Nutanix Karbon.

{
    "cni_config": {
        "flannel_config": {},
        "pod_ipv4_cidr": "172.20.0.0/16",
        "service_ipv4_cidr": "172.19.0.0/16"
    },
    "etcd_config": {
        "node_pools": [
            {
                "ahv_config": {
                    "cpu": 1,
                    "disk_mib": 122880,
                    "memory_mib": 8192,
                    "network_uuid": "{{subnet_uuid}}",
                    "prism_element_cluster_uuid": "{{cluster_uuid}}"
                },
                "name": "dev_etcd_node_pool",
                "node_os_version": "ntnx-0.6",
                "num_instances": 1
            }
        ]
    },
    "masters_config": {
        "node_pools": [
            {
                "ahv_config": {
                    "cpu": 1,
                    "disk_mib": 122880,
                    "memory_mib": 8192,
                    "network_uuid": "{{subnet_uuid}}",
                    "prism_element_cluster_uuid": "{{cluster_uuid}}"
                },
                "name": "dev_master_node_pool",
                "node_os_version": "ntnx-0.6",
                "num_instances": 1
            }
        ],
        "single_master_config": {}
    },
    "metadata": {
        "api_version": "v1.0.0"
    },
    "storage_class_config": {
        "default_storage_class": true,
        "name": "default-storageclass",
        "reclaim_policy": "Delete",
        "volumes_config": {
            "file_system": "ext4",
            "password": "{{pc_password}}",
            "prism_element_cluster_uuid": "{{cluster_uuid}}",
            "storage_container": "{{container_name}}",
            "username": "{{username}}"
        }
    },
    "workers_config": {
        "node_pools": [
            {
                "ahv_config": {
                    "cpu": 1,
                    "disk_mib": 122880,
                    "memory_mib": 8192,
                    "network_uuid": "{{subnet_uuid}}",
                    "prism_element_cluster_uuid": "{{cluster_uuid}}"
                },
                "name": "dev_worker_node_pool",
                "node_os_version": "ntnx-0.6",
                "num_instances": 1
            }
        ]
    },
    "name": "single01",
    "version": "1.16.10-0"
}

There’s a lot of information in that JSON payload, so let’s check out the highlights of what it will do.

  • Cluster named single01
  • Components (as mentioned previously):
    • 1x Master
    • 1x Worker
    • 1x etcd
  • The master, worker and etcd nodes are all configured as follows:
    • 1x vCPU
    • 122880 MiB (120 GiB) storage
    • 8 GiB vRAM
    • Connected to the same VM network, indicated by {{subnet_uuid}}
    • Deployed to the same Prism Element cluster, indicated by {{cluster_uuid}}
  • Kubernetes version 1.16.10-0
  • Nutanix Host OS version ntnx-0.6
  • CIDR ranges
    • Service: 172.19.0.0/16
    • Pod: 172.20.0.0/16
  • Storage class named default-storageclass
    • ext4 filesystem
    • Reclaim policy set to Delete
    • Storage container indicated by {{container_name}}
  • Karbon API version v1.0.0

It’s worth noting that configuration details such as ahv_config for the master/worker/etcd nodes can of course be different for each node type. I’m just using the same configuration for all so that the payload is a little simpler.

Those who have gone through the deployment of a Kubernetes cluster using the Karbon UI will notice each parameter matches 1:1 with what you see there. In other words, the options you’d select while using the Karbon UI have all been specified in the JSON payload, too.

Sending the request

Sending the request to Prism Central will return a JSON response similar to what is shown below:

{
    "cluster_name": "single01",
    "cluster_uuid": "30efbfd8-5544-4a58-53c4-2545658cc981",
    "task_uuid": "21da123c-8911-41e8-acef-7653cbdc2aaa"
}

The cluster name as per the JSON payload is clearly visible, as is the UUID of our new cluster as well as the UUID of the task that was created to handle the process. For the purposes of this article we can ignore the task_uuid and cluster_uuid – the Karbon API can be used to get info about what’s currently happening. We’ll do that shortly.
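
For those following along outside of Postman, here’s a minimal Python sketch that sends the deployment request. It assumes you’ve saved the payload above to a hypothetical local file named single01.json, with the {{...}} Postman variables replaced by real values:

import json
import requests

# placeholder connection details; replace with your own environment's values
prism_central = "10.0.0.1"
url = f"https://{prism_central}:9440/karbon/v1/k8s/clusters"

# load the single-master payload shown earlier in this article
with open("single01.json") as payload_file:
    payload = json.load(payload_file)

# verify=False suits lab environments with self-signed certificates only
response = requests.post(url, json=payload, auth=("admin", "password"), verify=False)
response.raise_for_status()

result = response.json()
print(f"Cluster {result['cluster_name']} deployment started")
print(f"Task UUID: {result['task_uuid']}")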

Multi Master Kubernetes Cluster

The JSON payload for a multi-master Kubernetes cluster is slightly different. In addition to the information specified in the single-master payload, we can also specify a few additional properties. In a production environment you’d be more likely to use something similar to this, with the various parameters tuned to provide a highly available configuration.

  • The external IPv4 address of the master node cluster, if an active/passive configuration is being used (this is what will be demonstrated with this payload).
  • The number of master node instances. This is set to 1 in the single-master configuration but is set to 2 here.
  • Note the single_master_config object has been removed from the payload – it doesn’t apply to multi master configurations.

Here’s what the complete JSON payload looks like:

{
    "cni_config": {
        "flannel_config": {},
        "pod_ipv4_cidr": "172.20.0.0/16",
        "service_ipv4_cidr": "172.19.0.0/16"
    },
    "etcd_config": {
        "node_pools": [
            {
                "ahv_config": {
                    "cpu": 1,
                    "disk_mib": 122880,
                    "memory_mib": 8192,
                    "network_uuid": "{{subnet_uuid}}",
                    "prism_element_cluster_uuid": "{{cluster_uuid}}"
                },
                "name": "dev_etcd_node_pool",
                "node_os_version": "ntnx-0.6",
                "num_instances": 1
            }
        ]
    },
    "masters_config": {
        "active_passive_config": {
            "external_ipv4_address": "10.42.250.49"
        },
        "node_pools": [
            {
                "ahv_config": {
                    "cpu": 1,
                    "disk_mib": 122880,
                    "memory_mib": 8192,
                    "network_uuid": "{{subnet_uuid}}",
                    "prism_element_cluster_uuid": "{{cluster_uuid}}"
                },
                "name": "dev_master_node_pool",
                "node_os_version": "ntnx-0.6",
                "num_instances": 2
            }
        ]
    },
    "metadata": {
        "api_version": "v1.0.0"
    },
    "storage_class_config": {
        "default_storage_class": true,
        "name": "default-storageclass",
        "reclaim_policy": "Delete",
        "volumes_config": {
            "file_system": "ext4",
            "password": "{{pc_password}}",
            "prism_element_cluster_uuid": "{{cluster_uuid}}",
            "storage_container": "{{container_name}}",
            "username": "{{username}}"
        }
    },
    "workers_config": {
        "node_pools": [
            {
                "ahv_config": {
                    "cpu": 1,
                    "disk_mib": 122880,
                    "memory_mib": 8192,
                    "network_uuid": "{{subnet_uuid}}",
                    "prism_element_cluster_uuid": "{{cluster_uuid}}"
                },
                "name": "dev_worker_node_pool",
                "node_os_version": "ntnx-0.6",
                "num_instances": 1
            }
        ]
    },
    "name": "multi01",
    "version": "1.16.10-0"
}

Sending the request

Because we’re using the same API endpoint that we used when creating the single-master cluster, the response follows exactly the same format as the previous request’s response – cluster_uuid, task_uuid and cluster_name.

Getting Cluster Information

With our clusters now running or still being deployed, it helps to be able to query them.

The Karbon APIs also expose an endpoint for getting information about a specific cluster. This is a GET request and must be made to the following URI:

https://[prism_central_ip_address]:9440/karbon/v1/k8s/clusters/[k8s_cluster_name]

For this part we’ll use the name of our multi-master cluster – multi01. Replacing [k8s_cluster_name] above with the name of our cluster will return a response similar to the one shown below:

{
    "etcd_config": {
        "node_pools": []
    },
    "kubeapi_server_ipv4_address": "10.42.250.49",
    "master_config": {
        "deployment_type": "multi-master-active-passive",
        "node_pools": []
    },
    "name": "multi01",
    "status": "kDeploying",
    "uuid": "90ac2d10-a83c-42d0-69d4-c0382531b247",
    "version": "v1.16.10",
    "worker_config": {
        "node_pools": []
    }
}

There’s quite a lot of useful information shown here:

  • The Kubernetes API server IP address as specified in our payload, i.e. 10.42.250.49
  • The deployment type, i.e. multi-master-active-passive
  • Most importantly for this section, the status, i.e. kDeploying – this cluster is currently being deployed and isn’t yet ready to be used
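
Because the status field tells us whether the cluster is ready, a script can simply poll this endpoint until the status changes. Here’s a minimal sketch of that idea; note I’m only checking for kDeploying, the value we saw above, so please check the Karbon API reference for the authoritative list of possible status values:

import time
import requests

# placeholder connection details; replace with your own environment's values
prism_central = "10.0.0.1"
cluster_name = "multi01"
url = f"https://{prism_central}:9440/karbon/v1/k8s/clusters/{cluster_name}"

# poll until the cluster leaves the kDeploying status; deployments take a
# while, so there's no need to check too frequently
while True:
    response = requests.get(url, auth=("admin", "password"), verify=False)
    response.raise_for_status()
    status = response.json()["status"]
    print(f"{cluster_name}: {status}")
    if status != "kDeploying":
        break
    time.sleep(30)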

Once the deployments have completed, both will show as “Healthy” in the Karbon UI, just as they would if you had created them using the UI itself. Below we can see both the demo clusters from this article are deployed and healthy, along with another cluster owned by one of my colleagues.

Both single-master and multi-master clusters deployed and Healthy

Downloading Kubeconfig

The K8s clusters created using the Karbon API don’t differ at all from the K8s clusters created using the Karbon UI. For this reason, we can still download the Karbon-generated “kubeconfig” file. This file is used to run kubectl commands against the deployed K8s cluster. Please see Downloading The Kubeconfig in the Nutanix Karbon Guide for official documentation.

For what we’re doing today, though, the kubeconfig file can also be downloaded using the Karbon API. The GET request URI is as follows:

https://[prism_central_ip_address]:9440/karbon/v1/k8s/clusters/[cluster_name]/kubeconfig

Sending this request to Prism Central and substituting [cluster_name] with a valid cluster name will return a JSON response containing the kubeconfig. An example is shown below – the certificate authority data and token have been removed to make the example a little easier to read:

{
    "kube_config": "# -*- mode: yaml; -*-\n# vim: syntax=yaml\n#\napiVersion: v1\nkind: Config\nclusters:\n- name: single01\n  cluster:\n    server: https://10.42.250.85:443\n    certificate-authority-data: CERT_AUTHORITY_DATA_HERE\nusers:\n- name: default-user-single01\n  user:\n    token: TOKEN_HERE\ncontexts:\n- context:\n    cluster: single01\n    user: default-user-single01\n  name: single01-context\ncurrent-context: single01-context\n\n"
}
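
To round things out, here’s a short Python sketch that grabs the kubeconfig and writes it to a local file for kubectl to use; the filename and connection details are placeholders:

import requests

# placeholder connection details; replace with your own environment's values
prism_central = "10.0.0.1"
cluster_name = "single01"
url = f"https://{prism_central}:9440/karbon/v1/k8s/clusters/{cluster_name}/kubeconfig"

response = requests.get(url, auth=("admin", "password"), verify=False)
response.raise_for_status()

# the kubeconfig YAML is returned as a string in the kube_config field
with open(f"{cluster_name}-kubeconfig.yaml", "w") as kubeconfig_file:
    kubeconfig_file.write(response.json()["kube_config"])

From there it’s the usual kubectl workflow, e.g. kubectl --kubeconfig single01-kubeconfig.yaml get nodes.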

Wrapping Up

This article went into quite a lot of detail about the JSON payloads and Karbon API endpoints that can be used to programmatically deploy K8s clusters, as well as get information about clusters that are already managed by Karbon. While this is not a difficult process in terms of API concepts, the payloads themselves can take time to figure out at first.

Hopefully the examples above have demonstrated how you can integrate the Karbon APIs and K8s deployment automation into your own workflows.

Thanks for reading and have a great day! 🙂
