From there to here, from here to there, Containers are everywhere!

I quoted Dr. Seuss with a twist to set up context for this blog post. Ah! Yes, I’m going to talk about (Linux) Containers, or containerized Apps. Containers are anything but funny things, and they are now everywhere. Every enterprise IT team has either been thinking about, or has already formulated, a strategy to welcome Containers into their environment. Depending on when they started, they are at different stages of their own Container journey. Teams are investing in learning more about Containers and the new tooling required to make a successful transition.

Containers fundamentally changed how applications are designed, packaged, and operated. Along came a set of problems to be solved: managing the compute, network, storage, observability, and security of Containers, and integrating new tools into traditional operating environments built on virtual machines.

Thanks to the popularity of Kubernetes, the container orchestrator, we have seen significant growth in active investment in building a new kind of tooling for running Containers with the Kubernetes framework at the center. With strong interest from the public clouds and enterprise infrastructure software companies, along with the strong community around open source cloud native projects, Kubernetes and Containers have become the buzzwords of our times.

A Gartner article reports: “Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production, up from less than 30% today.”

That’s a very interesting prediction, and quite realistic given the active community support that has fueled the rapid expansion of the portfolio of tools and technical skill-building resources around Containers.

… but will it be an easy transition?

Kubernetes is playing a vital role in speeding up Container adoption. It is currently far ahead of the other orchestration engines in the pack when it comes to adoption and community support; it is fair to say that most of us use the terms Containers and Kubernetes together most of the time. Kubernetes provides a wonderful environment for running Containers, and its system design is built with modularity at its center. With its extensible design for the compute, storage, network, observability, and security subsystems, innovators across the software industry are building their own value-added implementations. The pace of innovation is very high, leading to extremely fast evolution of the entire operating environment.

For instance, check out the CNCF landscape page. (Click on the picture below to get the most recent version of the landscape.)

It’s a lot of work to keep up with all the innovation and to figure out exactly what you need from the myriad of options implemented under the compute, storage, network, security, and observability subsystems of Kubernetes. Once a Kubernetes cluster is deployed, you will spend a good amount of time researching best-of-breed tools under each of these subsystems, then installing and managing them on your Kubernetes cluster forever. You’ll not only manage your Apps but also all the components necessary to support them.

I think the real complexity in this new environment lies in the recurring choices you need to make as you begin your journey: constantly keeping up with newer implementations of these components, different versions of those implementations, the security of these new components, inter-component version dependencies, functional compatibilities, and so on.

Wouldn’t it be nice if there were a way to get started quickly with your Container journey that bundles everything you need in a neat package, but also provides a gradual path to gaining more control over how you customize your environment?

Taming the complexity with Karbon Platform Services

We launched Karbon Platform Services (KPS) in September 2020. KPS delivers on the promise of a Kubernetes-based multi-cloud platform for running Containers, as a service, which can be deployed in a customer’s private, public, or edge datacenters. KPS bundles everything you’ll need to deploy your Containers on the infrastructure of your choice. Just bring your Apps!


To quickly explain KPS’s value proposition, I like to use an analogy provided by my colleague, Amit Jain. When you’re traveling, you have the option of renting a car or calling an Uber. When you rent a car, you get to choose its make and model, map the exact route to your destination, and make every effort not to miss any vista points along your journey. Alternatively, you can just call an Uber. Both are valid choices, and you make the choice based on where you want to focus: enjoying the long drive, or enjoying as many vistas as possible.

Image Credit: Amit Jain

KPS respects both choices. You can start managing your existing Kubernetes clusters (on any cloud, including Karbon clusters on Nutanix AOS, EKS on AWS, AKS on Azure, GKE on Google Cloud, or Kubernetes clusters on bare-metal servers) with KPS’s SaaS Management Plane, and retain full control of every aspect of your Kubernetes cluster. You can choose components (under Network, Storage, Security, and so on) to your liking.

On the other hand, you can deploy a KPS Service Domain, where you’ll get a carefully crafted Kubernetes cluster with everything you need to run your Cloud Native Apps.

If you have a reasonably complex application designed with modularity in mind, you already know how many moving parts you have to deal with just to make the application run. If you’re going to run such an App on Kubernetes, the list of moving parts gets longer. In addition, micro-services-based App design yields many interesting scenarios for operating the App and its components, e.g. Blue/Green deployments for individual micro-services, non-disruptive rolling upgrades, and so on. Each of these constructs is very useful but can be hard to implement in your environment, depending upon how complex your App blueprint is.
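To make the Blue/Green idea concrete, here is a minimal sketch: two versions of a micro-service run side by side as separate Deployments, and cutover is just repointing a Kubernetes Service selector. All names, labels, and ports below are hypothetical, illustrative values, not anything prescribed by KPS.

```python
# Minimal Blue/Green sketch: two Deployments run side by side, and the Service
# selector decides which "color" receives traffic. Names/labels are hypothetical.
def make_service(active_color: str) -> dict:
    """Build a standard v1 Service manifest routing traffic to the active color."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "checkout"},
        "spec": {
            # Only Pods labeled with the active color receive traffic.
            "selector": {"app": "checkout", "color": active_color},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }

blue = make_service("blue")    # current production traffic goes to "blue" Pods
green = make_service("green")  # cutover: re-apply the Service with the new color

# Rolling back is simply applying the "blue" manifest again; no Pods are redeployed.
```

The appeal of this pattern is that the switch (and the rollback) is a single declarative change to one resource, while both versions stay warm.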

To prove my point, here is an architectural blueprint of a reasonably complex eCommerce store, WoodKraft, which shows its many components: processes that hold business logic, supporting open source components like SQL databases, and system services for network traffic management, logging, and performance monitoring. The operator of this eCommerce store has a lot to think about when it comes to deploying the App in production. Besides the core business logic processes, the operator has to ensure all the supporting services are up and running even before the App starts. You would have to manage the life cycle and availability of every service listed in this picture. As your App matures, the number of services it depends upon will continue to increase, resulting in major overhead to manage them.

A KPS ‘Service Domain’ provides everything you need to run your App on a Kubernetes cluster. The picture below says it all. These are “Managed Platform Services”: KPS takes responsibility for provisioning them and managing their life cycle (with your consent). The KPS team picks the versions of the underlying open source components and provides simple ways to upgrade them, taking one big chunk of management overhead out of your way.

The benefit of deploying Apps with KPS – The Simplicity Aspect

The Application Blueprint becomes simpler. Here is the same eCommerce App blueprint, which looks much simpler when deployed with a KPS Service Domain. The “platform” provides all the necessary services (the dark blue boxes) and lets you focus on the App’s business logic (the gray and light blue boxes).

The “platform” will naturally evolve and gain newer useful services, allowing you to simplify your App’s blueprint even further.

Hopefully I’ve managed to convince you how KPS could make your journey with Containers a little less daunting. Now, let’s spend a few minutes exploring various aspects of KPS.

The PaaS Aspect

KPS brings its service-rich environment to life on the infrastructure of your choice, whether it’s Nutanix AOS (with AHV or ESXi), VMware vSphere, AWS, Azure, GCP, or bare-metal servers. The KPS Service Domain provides a clean, consistent view of your development and operating environment by hiding the disparities of the infrastructure underneath each cloud.

For example, take a look at this picture showing what a KPS Service Domain is actually composed of. In the absence of such an abstraction, you would have to select individual components in the underlying layers and manage their life cycles yourself.

The KPS Service Domain exposes a very simple abstraction, or interface, which allows you to deploy Apps and consume platform services; but if you look under the hood, it is built on top of standard Kubernetes and its components. This abstraction makes usage of the Service Domain future-proof.

How so? The underlying technology stack will keep evolving, and very fast at that, but the abstraction provided won’t change. The abstraction makes you a promise: you’ll continue to deploy Apps and consume the platform services the same way even as the underlying components change. That’s a big deal!

The SaaS Aspect

With its SaaS Management Plane, KPS gives you a consistent model for the development and operations of containerized Apps across multiple clouds. The SaaS Management Plane is one less thing for you to manage, and one less thing to worry about in terms of high availability and security. It is a one-stop shop for managing and observing your multi-cloud infrastructure and Apps, providing a clean GUI and API for day-to-day management tasks.

No matter where you have deployed Service Domains across the globe, you can manage them from the convenience of the SaaS Management Plane.

The Services Aspect

PaaS, or Platform as a Service, derives its value from the services it provides. KPS delivers that value by providing the 1-click services needed to quickly get your App fully running in production.

With one click, you can enable a service like Kafka, Prometheus, or the Istio service mesh (or any other included service) on one or more Service Domains deployed across multiple clouds, ready to be consumed by your Apps. It’s that simple!

The Openness Aspect

KPS doesn’t put up any barriers around how platform services are consumed. For instance, Apps can use any standard Kafka client with the provided Kafka-as-a-Service; the KPS system ensures the availability of Kafka and the access control around it. Any other tool built on a standard Kafka client, e.g. Kafdrop, can be used as-is by deploying it in the same KPS Project.
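As a sketch of what “any standard Kafka client” means in practice, here is a minimal producer setup using the open source kafka-python library. The broker endpoint and topic name are hypothetical placeholders; your platform would expose its own in-cluster address.

```python
import json

# Hypothetical in-cluster broker endpoint and topic; substitute whatever the
# platform presents to your App.
BOOTSTRAP = "kafka-svc:9092"
TOPIC = "orders"

def serialize_order(order: dict) -> bytes:
    """JSON-encode an order event for the Kafka message value."""
    return json.dumps(order, sort_keys=True).encode("utf-8")

def make_producer_config(bootstrap: str) -> dict:
    """Keyword arguments accepted by kafka-python's KafkaProducer(**config)."""
    return {
        "bootstrap_servers": bootstrap,
        "acks": "all",                    # wait for full acknowledgement
        "value_serializer": serialize_order,
    }

# With the kafka-python package installed and a reachable broker:
#   from kafka import KafkaProducer
#   producer = KafkaProducer(**make_producer_config(BOOTSTRAP))
#   producer.send(TOPIC, {"order_id": 42, "sku": "WK-1001"})
#   producer.flush()
```

Nothing here is KPS-specific: the same client code runs against any Kafka broker, which is the point of the openness claim.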

As another example, you can publish your App’s performance metrics to Prometheus-as-a-Service by adding the standard spec that defines your App’s metrics endpoint (to be scraped by the Prometheus instance presented to the App).

Standard tools like Grafana can then be configured with this Prometheus instance as their metrics data source.
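To illustrate what a scrapeable metrics endpoint serves, here is a hand-rolled sketch of the Prometheus text exposition format. The metric name is illustrative; a real App would normally use the prometheus_client library rather than formatting this by hand.

```python
# Sketch of the Prometheus text exposition format an App would serve at its
# /metrics endpoint for the platform's Prometheus instance to scrape.
# Metric names are hypothetical examples.
def render_metrics(counters: dict) -> str:
    """Render counter metrics in Prometheus's plain-text scrape format."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")   # metadata line
        lines.append(f"{name} {value}")          # sample line
    return "\n".join(lines) + "\n"

counters = {"woodkraft_orders_total": 2.0}
exposition = render_metrics(counters)
# Serve this string as text/plain from a /metrics HTTP handler; Grafana then
# reads the scraped series from Prometheus as a data source.
```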

Other KPS platform services, like the ingress controllers (NGINX and Traefik) and Istio, are consumed exactly the same way you would consume them on any other standard Kubernetes distribution: by creating the respective Kubernetes resources.
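For instance, routing traffic through an ingress controller means creating a standard `networking.k8s.io/v1` Ingress resource. The sketch below builds one as a plain dict; the host, service name, and port are hypothetical.

```python
# Sketch of the standard Kubernetes Ingress resource you would create to route
# traffic through the platform's ingress controller. Host and service names
# are hypothetical.
def make_ingress(name: str, host: str, service: str, port: int) -> dict:
    """Build a networking.k8s.io/v1 Ingress manifest as a plain dict."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {
            "rules": [{
                "host": host,
                "http": {"paths": [{
                    "path": "/",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": service,
                                            "port": {"number": port}}},
                }]},
            }],
        },
    }

ingress = make_ingress("storefront", "shop.example.com", "storefront-svc", 80)
# Dump to YAML/JSON and apply with kubectl, exactly as on any Kubernetes cluster.
```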

You can rest assured that no proprietary code is needed to run your Apps on KPS Service Domains. KPS respects and honors Kubernetes’s fundamental philosophy of App portability.

The Multi-cloud Aspect

Provisioning a PaaS stack on any cloud is simple. The KPS product team makes Service Domain (node) OS images available for deploying Service Domains on Nutanix AOS (AHV or ESXi), AWS, GCP, Azure, and bare-metal servers. You can access the images from here (requires an account).

We also released Terraform support for quickly deploying a KPS Service Domain onto your Nutanix AOS cluster or AWS, and we will continue to add support for additional cloud infrastructure (e.g. VMware, Azure, GCP, bare-metal servers). In addition, we released a Nutanix Calm Blueprint for Calm users, and we will keep bringing improved ways to make it extremely easy to deploy Service Domains on any cloud infrastructure.

The Security Aspect

Disparate cloud infrastructure can result in security holes. KPS plugs these holes with a clean security model for Apps and data. KPS Service Domains can be divided into multiple Projects (think of them as resource groups), with developers and operators given specific roles in each Project. Projects can be mapped to DEV, STAGE, or PROD environments, or to teams; it’s up to you how you organize your development and operating environment. Projects make it easy to group Apps, data, and users across all of your Service Domains, deployed across multiple clouds. That’s a very powerful construct.

Looking at the Kubernetes cluster underneath a Service Domain, KPS ensures that Apps and data from one Project cannot be accessed by users of other Projects. It accomplishes this by deploying all Apps in a given Project into one Kubernetes namespace (chosen by KPS for that Project), and by setting up access control for data services so that only the Containers running in that namespace can read or write data to the respective data services, such as Kafka or NATS. KPS uses Kubernetes network policies to restrict access to anything outside the Project’s namespace; Apps simply cannot reach beyond the namespace they are deployed into. (Apps cannot dictate which namespace they are deployed to; KPS creates a namespace for each Project and deploys the Project’s Apps into it.)
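As a rough illustration of namespace-level isolation, here is the kind of standard Kubernetes NetworkPolicy that confines traffic to a single namespace. This is a generic Kubernetes sketch, not KPS’s actual policy; the policy and namespace names are hypothetical.

```python
# Sketch of a per-namespace isolation policy using the standard
# networking.k8s.io/v1 NetworkPolicy schema. Names are hypothetical, and this
# is not necessarily the exact policy KPS applies.
def make_project_isolation_policy(namespace: str) -> dict:
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "project-isolation", "namespace": namespace},
        "spec": {
            "podSelector": {},            # empty selector = every Pod in the namespace
            "policyTypes": ["Ingress"],
            "ingress": [{
                # An empty podSelector in a 'from' clause admits only Pods
                # from the same namespace; cross-namespace traffic is denied.
                "from": [{"podSelector": {}}],
            }],
        },
    }

policy = make_project_isolation_policy("project-woodkraft")
```

Because the policy selects every Pod and only whitelists same-namespace peers, any Pod in another Project’s namespace is blocked by default.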

Similarly, services like Prometheus are isolated such that monitoring and performance metric data stays separated from Project to Project.

You can, of course, change this security setting by choosing to expose data services outside a Project’s boundary.

As a platform administrator, you can be assured that Apps deployed on the platform will not compromise security by accessing the underlying layers (host OS, physical ports, hardware, etc.). By default, the KPS platform blocks any App that requires more privileged access, such as a ClusterRole and ClusterRoleBinding, or that tries to create new namespaces in the Kubernetes cluster. The platform administrator gets a knob to turn on Privileged mode so that they can deploy Apps that genuinely need to access other namespaces in the Kubernetes cluster or host-specific features.

The CI/CD Aspect

With the emergence of a new breed of continuous integration and delivery (CI/CD) tools, especially those based on the GitOps model (i.e. raise a Git pull request and git commit to deploy code), it is important that you choose a tool that works great for your development team, and it is important for platforms like KPS to support those tools. Developer teams are very opinionated (for all the right reasons), and it doesn’t make sense to force them to change the tools of their choice just to accommodate a new operating environment.

For example, here is what the GitLab integration with KPS looks like. (Credit: Sharvil Kekre)


The KPS Product Team has been working to bring out new CI/CD integrations with the KPS platform. Check out the following integrations with KPS:

We will continue to add more integrations in the near future.

The Automation aspect

Any enterprise-grade platform must provide a way to automate day-to-day tasks. KPS provides a Python SDK to automate everything you can do using the platform GUI. You can also generate bindings for any other programming language by using this Swagger JSON file.
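To give a feel for scripting against a Swagger-described REST management API, here is a generic, hypothetical sketch of building an authenticated request. The base URL, path, and token are illustrative placeholders, not the documented KPS endpoints; for real automation you would use the Python SDK or the bindings generated from the Swagger file.

```python
# Hypothetical sketch of an authenticated GET against a Swagger-described REST
# API. The base URL and path are placeholders, not real KPS endpoints.
API_BASE = "https://example.invalid/v1.0"

def build_request(token: str, path: str):
    """Return the (url, headers) pair for a bearer-token GET request."""
    url = f"{API_BASE}/{path.lstrip('/')}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_request("my-api-token", "/servicedomains")
# With the requests package installed:
#   requests.get(url, headers=headers).json()
```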

The Developer aspect

We have been adding more code samples, Apps, reference implementations, and automation around KPS to our public GitHub repository. I invite you to contribute to this repository, whether you’re a Nutanix partner, an ISV, or an end customer.

What’s next?

The KPS product team will stay focused on ensuring this brand-new platform fulfills its promise to simplify customers’ journeys with Kubernetes and Containers. We will continue to act on the product feedback, improvement suggestions, and feature requests we have received. We will also look into various aspects of seamless integration with the vast Kubernetes ecosystem of open source components, so that customers get a balanced mix of an opinionated and customizable platform. We’re also working to build better ways to standardize how “data” flows in and out of the system, from and to disparate sources and targets.

Wrapping up year 2020

As I wrap up this blog post, I want to take a moment to reflect on the year 2020. It has been a special year on all accounts. It threw down the gauntlet of personal and professional challenges. Every single person has been impacted in one way or another. It changed the way we live, work, and play. We adjusted to the new normal, found positivity amidst all the negativity we were subjected to, and tried our best to stay optimistic about what’s next to come.

It impacted how teams work together while remote. It surfaced challenges in making things work in ways we thought couldn’t be possible in normal times.

We, Team Sherlock @ Nutanix, continued to do what we do best: keep building great products! For me personally, it’s been a real pleasure to support an awesome team of professionals who have given their best to bring the best of technology to our customers.

If you haven’t experienced KPS yet, I recommend signing up for a free trial. You can experience its simplicity by running through this workshop. We would love to hear your feedback. Drop us a note at

I’m looking forward to working closely with our field teams, partners, and customers to build amazing solutions on top of the core KPS platform.

Happy holidays! I wish you all a very happy and healthy new year ahead!