Compare the functionalities of Helm and Kustomize and learn the best practices for using each tool for its intended use case

Kustomize vs Helm: Comparison & Tutorial

Kubernetes continues to be the most widely adopted container orchestration platform in the cloud-native world, but managing production-grade clusters and their copious configuration manifests can be enormously complex. Configuration management tools reduce this complexity by handling the packaging and customization of the manifest files that Kubernetes uses.

This article compares two popular configuration tools, Kustomize and Helm, diving into their features, benefits, and use cases. For quick reference, the table below summarizes their main characteristics.

|  | Kustomize | Helm |
| --- | --- | --- |
| Method of operation | Overlays | Templating |
| Ease of use | Simple | Complex |
| Support for packaging | No | Yes |
| Native kubectl integration | Yes | No |
| Declarative / imperative | Declarative | Imperative |

What is Kustomize?

Kustomize is a configuration customization tool for Kubernetes clusters. It allows administrators to make declarative changes using untemplated files, leaving original manifests untouched. All customization specifications are contained within a kustomization.yaml file, which superimposes specifications on top of existing manifests to generate custom versions of resources.

Kustomize also ships with resource generators (secretGenerator and configMapGenerator) that use environment files or key-value pairs to create secrets and ConfigMaps. To inject these secrets and ConfigMaps into Kubernetes infrastructure, you define them within the customization file using secretGenerator and configMapGenerator fields, with attributes that specify source files or key-value pairs.
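As a minimal sketch, a kustomization.yaml using both generators might look like the following (the resource names, keys, and file names are illustrative, not taken from any particular project):

# kustomization.yaml (illustrative generator example)
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=debug
    files:
      - config.properties
secretGenerator:
  - name: app-credentials
    envs:
      - credentials.env

By default, Kustomize appends a hash of the generated content to each generated resource name, so dependent workloads are updated whenever the underlying data changes.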

Kustomize project structure

Kustomize uses shared base resources and overlays to provide reusability and quick config generation. The typical directory structure of a Kustomize project configuration will look something like this:

├── base
│   ├── shared-manifest-file-1.yaml
│   ├── kustomization.yaml
│   └── shared-manifest-file-2.yaml
└── overlays
    ├── env-1
    │   ├── unique-manifest-file-1.yaml
    │   └── kustomization.yaml
    ├── env-2
    │   ├── unique-manifest-file-1.yaml
    │   ├── kustomization.yaml
    │   ├── unique-manifest-file-2.yaml
    │   └── unique-manifest-file-3.yaml
    └── env-3
        ├── unique-manifest-file-1.yaml
        ├── kustomization.yaml
        └── unique-manifest-file-3.yaml

A Kustomize project structure typically comprises a base and overlays directory. In our sample specification above, the base directory contains a file named kustomization.yaml and manifest files for shared resources.

The base/kustomization.yaml file declares the resources that Kustomize will include in all environments, while the shared manifest files define specific configurations for these resources.
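For the structure above, the base customization file could be as simple as the following sketch (the file names match the placeholders in the tree):

# base/kustomization.yaml (illustrative)
resources:
  - shared-manifest-file-1.yaml
  - shared-manifest-file-2.yaml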

The overlays directories include customization files (also named kustomization.yaml) that reference configurations within the shared manifests of the base folder and apply defined patches to build custom resources. The overlays directory also includes individual manifest files, which Kustomize uses to create resources specific to the environment where the files reside.
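A minimal overlay sketch for env-1 might then pull in the base, add a name prefix, and apply a patch; the names below mirror the placeholders above, and the exact patch fields vary slightly between Kustomize versions:

# overlays/env-1/kustomization.yaml (illustrative)
resources:
  - ../../base
namePrefix: env-1-
patches:
  - path: unique-manifest-file-1.yaml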

Kustomize deployment example

The example below demonstrates how to use Kustomize for a minimal Kubernetes deployment that creates resources in both a development and a production environment.

Prerequisites

You will need an existing Kubernetes cluster (version 1.14+) with the kubectl CLI installed.

Use the command below to clone the example Git repository and download the required manifests into your working environment:

$ git clone https://github.com/ssengupta3/kustomize-demo

A successful clone operation will display the response below:

The expected response to our clone command

> Note: The repository we’re using already contains the base and overlays folders, including the required resource manifests and customization files.

Once the resource manifests and customization files are cloned, navigate to the project folder using the command below:

$ cd kustomize-demo

Then to the base folder:

$ cd base

Apply the configurations:

$ kubectl apply -k .

You should then see that the resources were successfully created via the response below:

service/darwin created
deployment.apps/darwin created

> Note: The `-k` or `--kustomize` flag tells kubectl to process Kustomize resources. The base folder contains a deployment.yaml and service.yaml file that Kustomize uses to create the shared resources.

Next, navigate to the `overlays/dev` folder and apply the config, as shown:

$ kubectl apply -k .
service/dev-darwin created
deployment.apps/dev-darwin created

Repeat the same step in the `overlays/prod` folder to apply the configuration:

$ kubectl apply -k .
service/prod-darwin created
deployment.apps/prod-darwin created

> Note: This builds different resources in the production and development environments. Kustomize prefixes each resource name with the value provided in the namePrefix specification of the corresponding kustomization.yaml file.

Each overlay can also layer a small environment-specific change on top of the shared base manifests; in our example, the dev and prod overlays could specify different numbers of replicas.
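As a hedged sketch of such a change (the patch file below is hypothetical, although the deployment name darwin matches the resources created earlier), a strategic-merge patch that raises the production replica count could look like this and be listed under the patches (or legacy patchesStrategicMerge) field of the prod overlay's kustomization.yaml:

# overlays/prod/replica-patch.yaml (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: darwin
spec:
  replicas: 3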

Verify the creation of resources by checking for new cluster services and deployments, like so:

$ kubectl get deployments
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
darwin        1/2     2            1           11m
dev-darwin    1/2     2            1           47s
prod-darwin   1/2     2            1           18s
$ kubectl get services
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
darwin        ClusterIP   10.105.74.59    <none>        80/TCP    11m
dev-darwin    ClusterIP   10.103.2.12     <none>        80/TCP    65s
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   9d
prod-darwin   ClusterIP   10.110.53.253   <none>        80/TCP    36s

What is Helm?

Helm is a deployment tool that simplifies the installation, packaging, and management of Kubernetes workloads using chart-based templates. A Helm chart is a set of files used to describe related Kubernetes resources that can be deployed as a single unit. Helm simplifies deploying resources to different environments and allows the installation of production-ready workloads into a Kubernetes cluster using a single command.

Since version 3, Helm no longer needs a server-side component: the Helm CLI renders chart templates into Kubernetes manifests and submits them directly to the Kubernetes API server, which brings the cluster state in line with the desired state described in the chart. By packaging YAML manifests into charts and releasing them into the Kubernetes cluster, Helm helps automate the management of resource configurations.

Helm project structure

The directory structure for a typical Helm project looks like this:

project-folder
|-- Chart.yaml
|-- charts
|-- templates
|   |-- NOTES.txt
|   |-- _helpers.tpl
|   |-- cluster-resource1.yaml
|   |-- cluster-resource2.yaml
|   `-- cluster-resource3.yaml
`-- values.yaml
  • The Chart.yaml file contains information about the application being packaged. This information could include the version number, chart name, or keywords used by the resource configurations
  • The charts folder references all other charts on which the current chart depends
  • The templates folder includes the manifests deployed by the Helm chart
  • values.yaml defines any configuration values injected into the templates, as sketched below
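As a minimal sketch of that injection (the replicaCount value and template excerpt below are illustrative and not taken from the demo chart):

# values.yaml (illustrative)
replicaCount: 2

# templates/cluster-resource1.yaml (illustrative excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}

At install or upgrade time, Helm renders the template with the value from values.yaml, or with any override supplied on the command line.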

Helm demonstration

Our demonstration below will show how a simple web application can be packaged and deployed by Helm.

Prerequisites

You will need a functional Kubernetes cluster, which can be either production-grade or a playground environment (Minikube, Killercoda, Play with Kubernetes, etc.), along with the Helm CLI installed.

First, clone the required project files into the working directory using the command below. We’re using a demo Git repo for this tutorial, but you could choose any other project.

$ git clone https://github.com/ssengupta3/helm-demo

> Note: Cloning the repo above creates a helm-demo folder containing a templates folder, a Chart.yaml file, and a values.yaml file. If you choose a project other than the one used here, the directory structure may differ. In our case, the templates folder includes the manifests for our deployment and service objects (yet to be created), each pre-defined in a separate file.

To verify that the chart has been cloned successfully and that Helm recognizes it, run the command below:

$ helm lint helm-demo

A valid chart will return a response like the following:

==> Linting helm-demo
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

Package the charts for production by running this command:

$ helm package helm-demo --debug

This creates a helm-demo-0.1.0.tgz file in the working directory.

> Note: Helm packages the chart into a compressed archive whose name combines the chart name (helm-demo), the chart version (0.1.0), and the .tgz extension.

Once Helm has created the helm-demo-0.1.0.tgz package, install it using the following command:

$ helm install helloworld helm-demo-0.1.0.tgz

Upon successful installation, Helm will return the following response:

NAME: helloworld
LAST DEPLOYED: Thu Sep 8 10:57:23 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1

Key differences

Method of operation

Kustomize relies on directory-specific kustomization.yaml files to build and make changes to individual resources. These files apply patches and overlays to resources declared within the shared base folder to provide automated multi-environment configuration.

Helm uses templates to generate valid Kubernetes configurations by referencing a values.yaml file as a source of variables. The templates directory hosts the files that the Helm chart uses to create resources during deployment.

Ease of use

Starting with Kubernetes version 1.14, Kustomize comes bundled with the kubectl CLI, and as a result, operations teams will not need to master any additional tools. Kustomize supports declarative deployments and uses plain YAML for each artifact, making it easy to adopt for teams that already run Kubernetes clusters.

Helm adds additional abstraction layers to Kubernetes package management tasks, steepening the learning curve for teams looking to simplify cluster configuration and release automation. Helm charts are also complex by nature and can be prone to misconfiguration.

Packaging

Kustomize lacks any innate packaging capability, and each resource has to be declared within the base folder, with variations stated separately in the overlay kustomization.yaml file.

Conversely, Helm packages all of the required Kubernetes resources into a single folder which can be reused as often as needed. Helm also allows cluster administrators to set application defaults, which can be injected into individual resources using the values.yaml file.
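For example, defaults in values.yaml can be overridden per environment at install time without modifying the chart itself; the release, chart, and value names below are illustrative:

# Install with the chart's defaults from values.yaml
$ helm install my-release ./my-chart

# Override a single default for this environment
$ helm install my-release ./my-chart --set replicaCount=5

# Or supply a file of environment-specific overrides
$ helm install my-release ./my-chart -f values-prod.yaml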

Native kubectl integration

Kustomize has been bundled with kubectl since Kubernetes version 1.14, so no separate installation is required.
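This means rendering or applying an overlay requires nothing beyond kubectl itself; the paths below follow the example project from earlier in this article:

# Render the manifests for an overlay without applying them
$ kubectl kustomize overlays/dev

# Build and apply the overlay in one step
$ kubectl apply -k overlays/dev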

Helm does not come pre-integrated with Kubernetes, so developers must install the Helm CLI separately before they can package or deploy charts.

Declarative vs. imperative

Kustomize uses a declarative mechanism to deploy cluster resources, which aligns with the Kubernetes philosophy of consistency and simple version control. To change the configuration of deployed resources, you define updated values within the kustomization.yaml file, and these are then applied seamlessly to the respective environment.

Helm takes a more imperative approach, injecting configuration values into resource templates at deployment time. As a result, a template or value change can produce unexpected configuration at release time and potentially disrupt the running application.

Kustomize vs. Helm - when to use

Kustomize and Helm may have the same purpose, but due to their contrasting methods of operation, each can be better applied to specific scenarios.

When to use Kustomize

Kustomize adds layers to existing manifest files, allowing for precise changes without altering the originals. Kustomize also allows for complete control and maintainability of resource manifests, making it ideal for teams with a good understanding of YAML and in-house developed applications.

When to use Helm

Helm encapsulates all Kubernetes objects into a single unit, reducing the time and effort required to interact with individual manifests. In addition, many third-party vendors offer pre-built Helm charts to simplify the deployment of their products into Kubernetes. As a result, Helm is often the preferred choice for installing off-the-shelf solutions like monitoring agents, databases, and security applications.

Using Kustomize with Helm

While Kustomize allows teams to perform quick customizations, Helm reduces the time spent writing YAML manifests. Even though the two tools are different, there is no reason that they cannot be used together. For instance, you could use Helm to download and install third-party applications and Kustomize to tailor them for a specific environment.
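One common pattern, sketched below with illustrative chart, release, and file names, is to render a chart with helm template and then layer Kustomize on top of the output:

# Render the chart into plain Kubernetes manifests without installing it
$ helm template my-release ./my-chart > rendered.yaml

# kustomization.yaml (illustrative)
resources:
  - rendered.yaml
namePrefix: prod-

# Build and apply the customized result
$ kubectl apply -k .

Newer Kustomize releases also include a built-in helmCharts generator (enabled with kustomize build --enable-helm) that inflates a chart directly from within a kustomization.yaml.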

Conclusion

Managing configurations in a large-scale distributed Kubernetes cluster is often more complex than initially assumed. Helm and Kustomize are two Kubernetes tools that simplify resource management by automating the deployment of objects into clusters.

While Helm abstracts away the complexity of managing manifests, Kustomize provides a configuration veneer over existing manifests to allow for precise changes and customizations.

Because the tools are used for different purposes, they can complement each other when used side by side, making up for features that each lacks.

Before deciding to use either tool, careful analysis of your technical requirements and business objectives should be the first step. Your configuration tool (or tools) should reduce overheads and streamline operations rather than add needless complexity.
