
Helmfile


Deploy Kubernetes Helm Charts

Status

May 2025 Update

  • Helmfile v1.0 and v1.1 have been released. If you are still using v0.x, we recommend upgrading directly to v1.1.
  • If you haven't already upgraded, please review the v1 proposal for the small list of breaking changes.

About

Helmfile is a declarative spec for deploying helm charts. It lets you...

  • Keep a directory of chart value files and maintain changes in version control.
  • Apply CI/CD to configuration changes.
  • Periodically sync to avoid skew in environments.

To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.

NOTE: Helmfile supports both Helm 3.x and Helm 4.x.

Highlights

Declarative: Write, version-control, apply the desired state file for visibility and reproducibility.

Modules: Modularize common patterns of your infrastructure and distribute them via Git, S3, etc. to be reused across the entire company (See #648)

Versatility: Manage your cluster consisting of charts, kustomizations, and directories of Kubernetes resources, turning everything into Helm releases (See #673)

Patch: JSON/Strategic-Merge Patch Kubernetes resources before helm-installing, without forking upstream charts (See #673)

Installation

  • download one of the releases
  • run as a container
  • Archlinux: install via pacman -S helmfile
  • openSUSE: install via zypper in helmfile on Tumbleweed; on Leap, first add the kubic repo for your distribution version (once), e.g. zypper ar https://download.opensuse.org/repositories/devel:/kubic/openSUSE_Leap_\$releasever kubic
  • Windows (using scoop): scoop install helmfile
  • macOS (using homebrew): brew install helmfile

Running as a container

The Helmfile Docker images are available in GHCR. Make sure you pick the right tag for your version. Example:

$ docker run --rm --net=host -v "${HOME}/.kube:/helm/.kube" -v "${HOME}/.config/helm:/helm/.config/helm" -v "${PWD}:/wd" --workdir /wd ghcr.io/helmfile/helmfile:v1.1.0 helmfile sync

You can also use a shim to make calling the binary easier:

$ printf '%s\n' '#!/bin/sh' 'docker run --rm --net=host -v "${HOME}/.kube:/helm/.kube" -v "${HOME}/.config/helm:/helm/.config/helm" -v "${PWD}:/wd" --workdir /wd ghcr.io/helmfile/helmfile:v1.1.0 helmfile "$@"' |
    tee helmfile
$ chmod +x helmfile
$ ./helmfile sync

Getting Started

Prerequisites

  • A Kubernetes cluster (e.g., minikube, kind, or a cloud provider)
  • Helm 3+ installed (helm version to verify)

Step 1: Install Helmfile

Choose one of the following:

# macOS
brew install helmfile

# Linux - download the release tarball for your platform from GitHub releases,
# then place the binary on your PATH (adjust the version and architecture as needed)
curl -LO https://github.com/helmfile/helmfile/releases/download/v1.1.0/helmfile_1.1.0_linux_amd64.tar.gz
tar -xzf helmfile_1.1.0_linux_amd64.tar.gz helmfile
sudo install helmfile /usr/local/bin/helmfile

# Windows
scoop install helmfile

Verify: helmfile version

Step 2: Create your first helmfile.yaml

Use helmfile create to generate a project scaffold with a best-practice directory structure:

# Scaffold a new project
helmfile create my-project
cd my-project

Or create a helmfile.yaml manually:

repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: my-prometheus
    namespace: monitoring
    createNamespace: true
    chart: prometheus-community/prometheus
    values:
      - server:
          persistentVolume:
            enabled: false

What does this do?

  • repositories — tells Helm where to find charts (like a package registry)
  • releases — each entry is a Helm release to deploy
    • name — a unique name for this deployment
    • namespace — which Kubernetes namespace to deploy into
    • chart — which Helm chart to use (repository-name/chart-name)
    • values — customize the chart's default settings

Step 3: Initialize and deploy

# Initialize - checks helm and installs required plugins
helmfile init

# See what would be deployed (dry-run)
helmfile diff

# Deploy to your cluster
helmfile apply

Congratulations! You now have Prometheus running in your cluster.

Step 4: Make changes and re-apply

Edit helmfile.yaml to add another release:

repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts
  - name: grafana
    url: https://grafana.github.io/helm-charts

releases:
  - name: my-prometheus
    namespace: monitoring
    createNamespace: true
    chart: prometheus-community/prometheus
    values:
      - server:
          persistentVolume:
            enabled: false

  - name: my-grafana
    namespace: monitoring
    chart: grafana/grafana

Run helmfile apply again — Helmfile will detect the new release and only deploy what changed.

Step 5: Clean up

# Remove everything
helmfile destroy

Next Steps

Now that you have the basics, explore these topics:

Core concepts (read in order):

  1. Writing Helmfile — patterns and best practices for structuring helmfiles
  2. Values and Merging — how Helmfile merges values from multiple sources
  3. Environments — manage dev/staging/production with a single helmfile

Reference material (look up as needed):

Labels Overview

A selector can be used to only target a subset of releases when running Helmfile. This is useful for large helmfiles with releases that are logically grouped together.

Labels are simple key-value pairs that form an optional field of the release spec. When selecting by label, the search can be inverted: tier!=backend would match all releases that do NOT have the tier: backend label, while tier=frontend would only match releases with the tier: frontend label.

Multiple labels can be specified using , as a separator. A release must match all selectors in order to be selected for the final helm command.

The selector parameter can be specified multiple times. Each parameter is resolved independently so a release that matches any parameter will be used.

--selector tier=frontend --selector tier=backend will select all the charts.

In addition to user-supplied labels, the name, the namespace, and the chart are available to be used as selectors. The chart is just the chart name excluding the repository (e.g., stable/filebeat would be selected using --selector chart=filebeat).
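As a sketch of how labels attach to releases, consider the following helmfile (the release names, charts, and label values are illustrative):

```yaml
releases:
  - name: frontend-app
    namespace: web
    chart: stable/nginx-ingress   # illustrative chart
    labels:
      tier: frontend
      customer: acme
  - name: backend-api
    namespace: api
    chart: stable/filebeat        # illustrative chart
    labels:
      tier: backend
```

With this file, helmfile --selector tier=frontend sync targets only frontend-app, helmfile --selector tier!=backend sync matches every release without the tier: backend label, and helmfile --selector tier=frontend,customer=acme sync requires both labels to match.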

commonLabels can be used when you want to apply the same label to all releases and use templating based on it. For instance, you install the same set of charts for every customer but need to provide a different values file per customer.

templates/common.yaml:

templates:
  nginx: &nginx
    name: nginx
    chart: stable/nginx-ingress
    values:
    - ../values/common/{{ .Release.Name }}.yaml
    - ../values/{{ .Release.Labels.customer }}/{{ .Release.Name }}.yaml

  cert-manager: &cert-manager
    name: cert-manager
    chart: jetstack/cert-manager
    values:
    - ../values/common/{{ .Release.Name }}.yaml
    - ../values/{{ .Release.Labels.customer }}/{{ .Release.Name }}.yaml

helmfile.yaml:

{{ readFile "templates/common.yaml" }}

commonLabels:
  customer: company

releases:
- <<: *nginx
- <<: *cert-manager
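Building on the pattern above, each additional customer would get its own helmfile that reuses the same templates with a different commonLabels value (the customers/ file layout and customer name are hypothetical):

```yaml
# customers/acme/helmfile.yaml (hypothetical path)
{{ readFile "../../templates/common.yaml" }}

commonLabels:
  customer: acme

releases:
- <<: *nginx
- <<: *cert-manager
```

Because the values paths in templates/common.yaml interpolate {{ .Release.Labels.customer }}, this file would resolve to per-customer values files such as ../values/acme/nginx.yaml.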

Advanced

Community

Join our friendly Slack community in the #helmfile channel to ask questions and get help. Check out our Slack archive for good examples of how others are using it.

Experimental Features

Some experimental features may be available for testing before a decision is made on including them in a future release. These features are enabled via the environment variable HELMFILE_EXPERIMENTAL. The current experimental feature is:

  • explicit-selector-inheritance: removes today's implicit CLI selector inheritance for composed helmfiles; see composition selector

To enable all experimental features, set the environment variable to HELMFILE_EXPERIMENTAL=true.
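For example, to enable the single feature named above for the current shell session (a minimal sketch; any helmfile command run afterwards in the same session would see the variable):

```shell
# Enable one experimental feature for subsequent helmfile runs
export HELMFILE_EXPERIMENTAL=explicit-selector-inheritance
echo "Enabled: ${HELMFILE_EXPERIMENTAL}"

# To enable every experimental feature instead:
# export HELMFILE_EXPERIMENTAL=true
```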

Examples

For more examples, see the examples/README.md or the helmfile distribution by Cloud Posse.