Kubernetes in Production—Finding the Best Platform for Containerized Applications

For orchestrating containerized microservices, open-source Kubernetes (K8s) solutions now fuel most new digital businesses. Each vendor that offers a K8s solution typically provides a platform that integrates other open-source and proprietary tools to help users deploy digital services quickly and securely. The best platforms meet the needs of most organizations, no matter their level of experience with K8s orchestration or enterprise containerization for production workloads. 

At Nutanix, we offer a virtualized infrastructure platform that runs any distribution of K8s to orchestrate any containerized, cloud-native application. In this post, we’ll examine the various aspects of running containerization in public-cloud, bare-metal, and virtualized environments. Then, we’ll explain how Nutanix provides a unique platform to meet the needs of any containerized application.

Why Now?

Most IT leaders would agree that, to remain competitive, organizations must deliver containerized software faster and more securely. However, containerization for cloud-native workloads has been difficult because of the associated complexity and cost challenges, as reported by many surveys on K8s adoption. Choosing the right platform to run K8s is more critical now than ever, and the decision involves line-of-business owners, cloud architects, IT operations managers, and developers.

Containerization in Production

Containerization in production must scale to meet new performance demands while reducing downtime and increasing availability and security. At first, organizations envisioned containerized apps as stateless, without the need to consider the scale and cost of persistent storage. The goal was to share reusable code across many environments. Today, as organizations shift their mission-critical apps to containerized architectures, stateful containerized apps must access persistent volumes and data services from dedicated data sources. The cost to run these apps on a K8s platform is also a huge factor.
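To make the stateful requirement concrete, here's a minimal sketch (not from the original post) that uses the official Kubernetes Python client to request a persistent volume claim (PVC) that a CSI driver would then satisfy; the claim name, namespace, and storage class name are hypothetical placeholders.

```python
# Minimal sketch: requesting persistent storage for a stateful app via a PVC.
# Assumes a kubeconfig for the target cluster and a CSI-backed StorageClass
# named "example-csi-storage" (hypothetical placeholder).
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="example-csi-storage",
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC orders-db-data requested; the CSI driver provisions the backing volume.")
```

The platform's job is to make the volume behind a claim like this fast, resilient, and affordable at scale, which is the trade-off the rest of this post examines.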

Recently, K8s topped this DevOps list of 11 Open Source DevOps Tools We Love For 2021. K8s will continue to be a driving force in how businesses get secure digital services to market rapidly.

K8s on Public Clouds

By running containerized applications in the public cloud, you don’t have to worry about deploying, managing, and securing your own virtualized or bare-metal infrastructure. For a monthly subscription fee, you can use all of the underlying technologies and compute power you need to run your apps on demand. 

However, the main drawback of using public cloud environments is that you can easily rack up an expensive IT bill. You can end up paying for unused resources due to overprovisioning or poor application architecture design. Without cost controls in place, spikes in demand can cause resources to scale dynamically, running up your cloud consumption costs.
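One common guardrail, sketched below with the Kubernetes Python client, is to cap how far a workload can autoscale so that a demand spike can't translate into unbounded consumption. The deployment name, namespace, and thresholds are hypothetical, and real cost control usually also involves budgets and alerts on the cloud provider side.

```python
# Minimal sketch: capping autoscaling as a simple cost guardrail.
# Assumes a kubeconfig and an existing Deployment named "web-frontend"
# (hypothetical placeholder).
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-frontend-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-frontend"
        ),
        min_replicas=2,
        max_replicas=10,  # hard ceiling so a traffic spike can't scale costs without limit
        target_cpu_utilization_percentage=70,
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```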

This lack of cost control can cause unplanned expenses, and it’s why many organizations are keen to bring some workloads back on-premises. As a result, organizations have to decide whether to run their K8s workloads on bare metal—either in a public cloud or in an on-premises environment—or through virtualization on-premises.

K8s on Bare Metal

Some public clouds provide bare-metal options for running specialized workloads that are memory-intensive or need ultra-low latency. It's possible to bring your own K8s distribution and run containerized workloads in this environment, independent of the cloud providers' managed K8s distributions.

On the other hand, some organizations invest in on-premises bare metal, depending on their strategy for capital expenditures (CapEx) and operating expenses (OpEx) and on the specific needs of the digital initiative. These organizations also want to avoid lock-in with cloud providers for their mission-critical applications. The goal is to develop and test in a small-scale environment and then deploy to whichever production environment is optimal in the long run.

At first glance, a bare-metal system for running containers might seem more efficient than virtualization. Bare metal doesn't have the added layer of a hypervisor, and the containerization platform runs directly on the physical server host's operating system (OS).

When deciding between bare metal versus virtualization, consider factors like CPU overhead, pod density, and latency for storage access. Depending on the workload, a bare-metal system could be more favorable when low latency is critical or if you need direct access to specific hardware. 
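If it helps to make pod density and sizing concrete, the short sketch below (again using the Kubernetes Python client, with a kubeconfig assumed) prints each node's capacity and allocatable CPU, memory, and pod count, which is the kind of data those sizing decisions rest on.

```python
# Minimal sketch: comparing per-node capacity and allocatable resources to
# reason about pod density and cluster sizing. Assumes a kubeconfig.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    capacity = node.status.capacity        # what the hardware reports
    allocatable = node.status.allocatable  # what K8s can actually schedule
    print(
        f"{node.metadata.name}: "
        f"cpu={allocatable.get('cpu')}/{capacity.get('cpu')} "
        f"memory={allocatable.get('memory')}/{capacity.get('memory')} "
        f"pods={allocatable.get('pods')}"
    )
```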

Bare-metal systems aren't optimized for any particular workload. When setting up K8s clusters on bare metal, consider the CPU and bandwidth requirements for the physical resources. Sizing bare-metal systems is not a trivial undertaking, especially when planning to run certain workloads in production environments. Storage admins and application architects need the right expertise to size a bare-metal environment properly. Once it's sized, the storage solution itself can add complexity to the environment.

Performance on bare metal varies based on the size of the K8s clusters and the workloads they run. If the application's reads and writes don't need to traverse a complex network, and the clusters have the capacity to meet the demands of the service calls, then performance is likely to be very good. On the other hand, there's no easy way to address issues like containerized workloads that need to scale beyond the available physical resources or physical resource failures. As the environment scales, hardware changes (and the associated configuration changes) become more likely and require further expertise to manage.

In addition to upholding performance when running K8s in production on bare metal, you also have to maintain a source of truth with etcd, the distributed key-value datastore that holds K8s cluster state and configuration. The additional nodes and architecture needed to cover failure scenarios can add more complexity and potentially create a technology silo that few other teams know how to manage.
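As a rough illustration of that operational burden, the sketch below checks that the etcd members backing the control plane are running and ready. It assumes a kubeadm-style cluster where the etcd pods carry the component=etcd label in the kube-system namespace, which other distributions may not use, and a real production setup would also snapshot etcd regularly.

```python
# Minimal sketch: verifying that the etcd pods behind the control plane are ready.
# Assumes a kubeadm-style cluster labeling etcd pods with "component=etcd" in
# kube-system; other K8s distributions may label or host etcd differently.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

etcd_pods = core.list_namespaced_pod("kube-system", label_selector="component=etcd")
for pod in etcd_pods.items:
    ready = any(
        cond.type == "Ready" and cond.status == "True"
        for cond in (pod.status.conditions or [])
    )
    print(f"{pod.metadata.name}: phase={pod.status.phase} ready={ready}")
```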

A particularly good use case for running K8s on bare metal is in an edge environment that might have apps for processing data from Internet-of-Things devices in real time. Many telco apps that need to leverage high-bandwidth, low-latency networks benefit from bare metal for isolated workloads in specific use cases. The workloads in these environments can avoid fluctuations in demand that would require the scale-out capability of hypervisor virtualization.

The following figure provides a quick summary of the differences between bare-metal and virtualized architectures for cloud-native workloads that use K8s orchestration:

Bare metal is the right choice for specific use cases. However, a virtualization platform that can be easily managed, scaled, and upgraded is a better solution for enterprises running containerized, mission-critical applications.

K8s on Virtualization

A virtualization platform uses a hypervisor to abstract the physical, bare-metal servers into a software-defined virtualization platform. The platform presents a standardized OS to the pods running containers and hides the underlying infrastructure's storage and networking.

Virtual machines (VMs) run their own operating systems and can dynamically provision more memory and CPU based on the raw available pool of resources, which the hypervisor can share between other VMs running in the environment. On a virtualization platform, you can easily resize and add worker nodes to help cloud-native applications react to changing conditions more rapidly.
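As a small example of that elasticity from the K8s side, once the hypervisor has resized or added worker nodes, the workload itself can be scaled out to use the new capacity. The sketch below patches a deployment's replica count with the Kubernetes Python client; the deployment name and replica count are hypothetical.

```python
# Minimal sketch: scaling a workload out after worker capacity has grown.
# Assumes a kubeconfig and an existing Deployment named "orders-api"
# (hypothetical placeholder).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Patch only the replica count via the Deployment's scale subresource.
apps.patch_namespaced_deployment_scale(
    name="orders-api",
    namespace="default",
    body={"spec": {"replicas": 8}},
)
```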

Bare-metal advocates might argue that the management layer of a hypervisor and VM control can add complexity to a system. They might also make the case that a hypervisor is less efficient than bare metal because it consumes server CPU and other system resources. In truth, it depends.

Regarding the complexity of virtualization, most IT organizations have been using virtualization technologies for quite some time. They likely have existing tools that extend easily to cloud-native workloads. For example, infrastructure automation tools like Ansible and Terraform integrate seamlessly with virtualization.

Furthermore, virtualization provides for simpler management of the multiple upstream environments that most organizations use for the development, testing, and production of continuous integration/continuous deployment (CI/CD) pipelines of code. You can also address security concerns more easily by isolating VMs within the virtualized environment.
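Whether those environments are separated as whole K8s clusters on dedicated VMs or only at the namespace level, the K8s side of that separation is easy to script. The sketch below shows the lighter-weight variant, namespaces plus resource quotas for dev, test, and prod pipelines, using the Kubernetes Python client; the namespace names and quota values are hypothetical.

```python
# Minimal sketch: carving out dev/test/prod pipeline environments with
# namespaces and resource quotas. Names and quota values are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for env in ["dev", "test", "prod"]:
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=f"shop-{env}"))
    )
    core.create_namespaced_resource_quota(
        namespace=f"shop-{env}",
        body=client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name="env-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={"requests.cpu": "8", "requests.memory": "16Gi", "pods": "50"}
            ),
        ),
    )
```

Full VM-level isolation, running separate clusters per environment, remains the stronger boundary, as the comparison table later in this post notes.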

The benefits of running containers on a virtualization platform far outweigh those of using a bare-metal infrastructure. Virtualization provides scheduling, resource management, load balancing, and migration capabilities. While these features are possible on bare metal, they're less readily available and harder to implement. Virtualization also provides better support for day-2 operations, such as life cycle management (LCM) and the management of consistent, reusable templates for cloning.

Using a hypervisor technology typically means running on infrastructure that’s part of an organization’s private cloud. Many of the existing tools for IT operations (such as backup, disaster recovery, and security) are already in place. A key consideration in this scenario is how developers already access and consume resources to deliver application code from test and development environments to production. 

K8s on a Hybrid Cloud Platform 

What if you could have a best-in-class, scalable, and resilient persistent storage platform? What if that platform were coupled with the industry's leading container orchestration platforms right in your datacenter, with the ease of use that public cloud hyperscalers enjoy?

The reality is that organizations typically have a number of different teams at various stages of adopting K8s and a containerization strategy for new digital services. Many DevOps teams have started out in the public cloud for testing and development, but the goal is to deploy to a scalable production environment that delivers the best business value. What if those same workloads developed in the public cloud were better suited to a private-cloud environment because of cost and access to data? Enter Nutanix.

Consider a single platform that supports every stage of K8s adoption maturity for running and managing containerized applications, with the flexibility and options to meet the demands of any level of containerization adoption.

Using the Nutanix Cloud Platform (NCP), users can start developing and testing containerized applications by deploying Nutanix Kubernetes Engine (NKE) on-premises and can scale to production as needed. NKE provides the easiest, fastest path to K8s adoption using a fully compliant K8s distribution. With the NCP automation platform, K8s clusters can be deployed and managed in any multicloud environment.

While able to run any of the major K8s distributions (like Rancher and Amazon EKS), Nutanix has a formal partnership with Red Hat. OpenShift, Red Hat’s enterprise K8s platform, is considered to be the leading multicloud containerization platform by Forrester Research. NCP is now a preferred hyperconverged infrastructure (HCI) choice for Red Hat OpenShift, and Red Hat OpenShift is the preferred enterprise full-stack K8s solution on NCP with AHV. 

Red Hat and Nutanix Press Release Highlights

Red Hat’s Summary

“Because of its distributed architecture, Nutanix Cloud Platform delivers an IT environment that is highly scalable and resilient, and well-suited for enterprise deployments of Red Hat OpenShift at scale. The platform also includes fully integrated unified storage, addressing many tough challenges operators routinely face in configuring and managing storage for stateful containers.”

Kohl’s Quote

“As we manage the complexities of hybrid cloud, we believe this relationship will unlock new hosting and deployment options for VM- and container-based workloads. These new options will support our goals of being fast, efficient, and friction-free as we deliver new experiences to our customers.”

Ritch Houdek, Senior Vice President, Technology, Kohl’s

For more information, see Red Hat and Nutanix Announce Strategic Partnership to Deliver Open Hybrid Multicloud Solutions and Build Best-in-class Hybrid Cloud Infrastructure With Red Hat And Nutanix.

Comparison Table of K8s Capabilities in Production

| Capabilities for Production K8s | Bare Metal | Public Cloud | Nutanix |
| --- | --- | --- | --- |
| Easy to get started and manage | No. K8s on bare metal requires manual setup, configuration, and tuning. | Yes. Multiple options with completely managed infrastructure, including security and performance features. | Yes. Deploy clusters to any cloud, get started with NKE, and leverage OpenShift and other K8s platforms. |
| Scaling | No. K8s doesn't provide a mechanism to scale clusters on bare metal; scaling requires manual intervention. | Yes. Public cloud environments are meant to scale by design. | Yes. Worker nodes can easily scale up/down and out/in. Scaling hypervisor clusters is also easier to operate. |
| Resilience | No. Resilience for worker and etcd nodes is a design challenge. Resilience for persistent data requires additional storage services that add complexity to set up and manage. | Yes. Public cloud environments provide options for resilience with backup and failover workflows. | Yes. Worker and etcd nodes still need good design, but they're backed by hypervisor cluster features. Data resilience is natively handled by AOS. |
| Persistent storage (PVs, CSI, CNS) | Needs additional storage services for persistent storage and would require custom CSI integration. | Yes, with the potential to be very expensive. | Yes. Nutanix provides an operator to integrate OpenShift with AOS. |
| Monitoring | Hardware monitoring isn't natively integrated and varies by hardware vendor. It requires additional effort and might create a fragmented monitoring approach. | Yes. Public clouds provide monitoring capabilities natively and with third-party solutions. | Yes. Nutanix natively integrates monitoring for the hardware and virtualization layers. |
| Cost | Starting K8s on bare metal costs less from a CapEx perspective by avoiding premium hypervisor costs. However, bare-metal production environments require additional redundant management nodes, which increases costs. Multiple environments that can't be consolidated require more hosts and have lower usage efficiency, lowering the cost benefits of bare metal. | Can be very expensive if a cost management system is not in place. | Beyond the CapEx consideration, the ease of management and responsiveness provided by virtualization deliver better OpEx. Especially when running K8s at scale, the TCO favors virtualization. |
| Performance | Running containers on bare metal might perform better without virtualization overhead, but results can differ under heavy load (many containers). Pods have direct access to hardware (like GPUs), and latency would also be better. | | Hypervisor overhead is negligible (< 10%), and virtualization offers mechanisms to perform better under heavy loads (such as balanced workload usage, live migration, and memory oversubscription). Pods have access to GPU processing through virtualization. For most use cases, the raw bare-metal performance advantage isn't needed. |
| Pods on worker nodes | Limited to 110 pods per physical bare-metal worker node. | Unlimited, based on 110 pods per provisioned worker node (on demand). | Not limited by physical nodes (multiple worker nodes, each with up to 110 pods, can be provisioned on the hypervisors). |
| Isolation between environments | Partial isolation based on namespaces. | | Full isolation by running different K8s clusters on hypervisors. Namespace isolation is available, too. |

Summary

IT leaders typically make decisions based on three basic factors: cost, time, and productivity.

Cost

With the evolution of microservices and app modernization, containerized application development started in the public clouds even before K8s orchestration. At first, public clouds seemed like the obvious platform for running containers because it was simple to get started with no infrastructure to manage. However, cost and data location constraints soon revealed that public clouds are not always the best option.

Time

With the introduction of K8s to orchestrate and scale containerization, organizations are looking for the best platform to quickly set up and run K8s. This need for rapid K8s deployment spans from test and development environments to production on-premises, along with some workloads in the public cloud. Organizations want a faster time to value to carry out lucrative digital transformation initiatives for line-of-business owners. A unified K8s offering that supports both public and private cloud workloads is where the industry is headed. No organization wants to be left behind without a hybrid cloud strategy for containerized apps orchestrated by K8s.

Productivity

After starting containerized workloads on-premises, datacenter personnel might find bare-metal infrastructure to be convenient, capable, and cost-effective. In this post, we've identified a few use cases where bare metal is the proper choice. But to run this platform in production and manage it at scale in a secure, simple, flexible, consolidated, and hybrid-ready way, one quickly reaches bare metal's limitations. Physical and virtual server advocates debate similar tradeoffs for various types of enterprise workloads. A common theme of these discussions is that virtualization can solve many infrastructure challenges (such as flexibility, innovation, and effective data management) but not all of them.

Nutanix addresses all three major IT concerns without compromise. We provide a cloud-like solution that's uncomplicated and scalable, all while making infrastructure invisible, without the cost of a premium hypervisor.

© 2024 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product, feature and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. Other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). This post may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such a site. Certain information contained in this post may relate to or be based on studies, publications, surveys and other data obtained from third-party sources and our own internal estimates and research. While we believe these third-party studies, publications, surveys and other data are reliable as of the date of this post, they have not been independently verified, and we make no representation as to the adequacy, fairness, accuracy, or completeness of any information obtained from third-party sources.