# Integrations

* [renovate](https://github.com/renovatebot/renovate) automates chart version updates. See [this PR](https://github.com/renovatebot/renovate/pull/5257) for more information.
* For updating container image tags and git tags embedded within helmfile.yaml and values, you can use [renovate's regexManager](https://docs.renovatebot.com/modules/manager/regex/). See [this comment in the renovate repository](https://github.com/renovatebot/renovate/issues/6130#issuecomment-624061289) for more information.
* [ArgoCD Integration](#argocd-integration)
* [Azure ACR Integration](#azure-acr-integration)

### ArgoCD Integration

Use [ArgoCD](https://argoproj.github.io/argo-cd/) with `helmfile template` for GitOps.

ArgoCD supports kustomize, plain manifests, and Helm charts by itself. Why bother with Helmfile? The reasons may vary:

1. You want to manage applications with ArgoCD, while letting Helmfile manage infrastructure-related components like Calico/Cilium/WeaveNet, Linkerd/Istio, and ArgoCD itself.
    * This way, any application deployed by ArgoCD has access to all the infrastructure.
    * Of course, you can use ArgoCD's [Sync Waves and Phases](https://argoproj.github.io/argo-cd/user-guide/sync-waves/) to order the infrastructure and application installations. But it may be difficult to separate the concerns between infrastructure and apps, and to annotate K8s resources consistently, when you have different teams managing infra and apps.
2. You want to review the exact K8s manifests being applied at pull-request time, before ArgoCD syncs.
    * This is often better than using a `HelmRelease`-style custom resource that obfuscates exactly which manifests are being applied, which makes reviewing harder.
3. You want Helmfile as the single pane of glass for all the K8s resources deployed to your cluster(s).
    * Helmfile can reduce repetition in K8s manifests across ArgoCD applications.

For 1, you run `helmfile apply` on CI to deploy ArgoCD and the infrastructure components.

> The helmfile config for this phase often resides within the same directory as your Terraform project, so connecting the two with [terraform-provider-helmfile](https://github.com/mumoshu/terraform-provider-helmfile) may be helpful.

For 2, another app-centric CI or bot should render and commit manifests by running:

```
helmfile template --output-dir-template $(pwd)/gitops//{{.Release.Name}}
cd gitops
git add .
git commit -m 'some message'
git push origin $BRANCH
```

> Note that `$(pwd)` is necessary when `helmfile.yaml` has one or more sub-helmfiles in nested directories,
> because setting a relative file path in `--output-dir` or `--output-dir-template` results in each sub-helmfile rendering
> to a directory relative to the specified path.

so that they can be deployed by ArgoCD as usual.

The CI or bot can optionally submit a PR to be reviewed by a human, running:

```
hub pull-request -b main -l gitops -m 'some description'
```

Recommendations:

* Do create an ArgoCD `Application` custom resource per Helm/Helmfile release, each pointing to the respective sub-directory generated by `helmfile template --output-dir-template`.
* If you don't push directly to the main Git branch and instead go through a pull request, do lint the rendered manifests on your CI, so that you can catch easy mistakes earlier, before ArgoCD finally deploys them.
* See [this ArgoCD issue](https://github.com/argoproj/argo-cd/issues/2143#issuecomment-570478329) for why you may want this, and [this helmfile issue](https://github.com/roboll/helmfile/pull/1357) for how `--output-dir-template` works.

### Azure ACR Integration

Azure offers helm repository [support for Azure Container Registry](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-helm-repos) as a preview feature.
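The `helm`-side flow described next can be sketched end to end as follows; `myregistry` is a placeholder name, and `az acr helm repo add` is a preview (since deprecated) Azure CLI subcommand:

```shell
# Authenticate to Azure, then register the ACR helm repository locally.
# "myregistry" is a placeholder for your ACR name.
az login
az acr helm repo add -n myregistry

# The token is now configured for helm, so plain repo operations work:
helm repo update
```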
To use this, you must first `az login` and then `az acr helm repo add -n <registry name>`. This extracts a token for the given ACR and configures `helm` to use it, e.g. `helm repo update` should work straight away.

To use `helmfile` with ACR, on the other hand, you must either include a username/password in the repository definition for the ACR in your `helmfile.yaml`, or use the `--skip-deps` switch, e.g. `helmfile template --skip-deps`.

An ACR repository definition in `helmfile.yaml` looks like this:

```yaml
repositories:
  - name: <registry name>
    url: https://<registry name>.azurecr.io/helm/v1/repo
```

## OCI Registries

In order to use OCI chart registries, they must first be marked as OCI-enabled in the repository list, e.g.

```yaml
repositories:
  - name: myOCIRegistry
    url: myregistry.azurecr.io
    oci: true
```

It is important not to include a scheme in the URL, as helm requires that schemes are not present for OCI registries.

Secondly, the credentials for the OCI registry can either be specified within `helmfile.yaml`, similar to

```yaml
repositories:
  - name: myOCIRegistry
    url: myregistry.azurecr.io
    oci: true
    username: spongebob
    password: squarepants
```

or, for CI scenarios, sourced from the environment using variables named after the uppercased registry name, `<REGISTRY NAME>_USERNAME` and `<REGISTRY NAME>_PASSWORD`, e.g.

```shell
export MYOCIREGISTRY_USERNAME=spongebob
export MYOCIREGISTRY_PASSWORD=squarepants
```

If the registry name contains hyphens, each hyphen is replaced by an underscore in the environment variable name, e.g.

```yaml
repositories:
  - name: my-oci-registry
    url: myregistry.azurecr.io
    oci: true
```

```shell
export MY_OCI_REGISTRY_USERNAME=spongebob
export MY_OCI_REGISTRY_PASSWORD=squarepants
```

### OCI Chart Caching

OCI charts are automatically cached in the shared cache directory (`~/.cache/helmfile` by default, or the directory specified by `HELMFILE_CACHE_HOME`). This improves performance by avoiding redundant downloads.
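For example, to relocate the cache (say, onto a volume dedicated to CI jobs), set `HELMFILE_CACHE_HOME` before invoking helmfile; the path below is only an illustration:

```shell
# Override the default cache location (~/.cache/helmfile).
# The path is an example; any writable directory works.
export HELMFILE_CACHE_HOME="$HOME/.cache/helmfile-ci"
echo "$HELMFILE_CACHE_HOME"
```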
**Multi-Process Safety:** When running multiple helmfile processes in parallel (e.g., as an ArgoCD plugin), charts in the shared cache are not deleted or refreshed, to prevent race conditions where one process might delete a chart that another is using. To force a cache refresh, run `helmfile cache cleanup` first.

See the [cache](cli.md#cache) section for more details on cache management.

## Attribution

We use:

* [semtag](https://github.com/pnikosis/semtag) for automated semver tagging. I greatly appreciate the author's (pnikosis) effort in creating it and their kindness in sharing it!