GrepMyMind

Feel free to grep & grok your way through my thoughts on Kubernetes, programming, tech & other…


Argo CD’s ApplicationSet: Dynamic Deployments Across The Fleet


A series of square cubes representing a deployment across multiple Kubernetes clusters. Generated with https://firefly.adobe.com/

Argo CD provides a wide variety of methods for deploying your application(s) to Kubernetes clusters. An Application defines the source of a deployment and the cluster to deploy it to, but an ApplicationSet lets you deploy an Application across multiple clusters. Let’s start simple and build up each piece until we have a full-fledged, dynamic deployment system.

We’ll start with the following Application that deploys the k8s-pvc-tagger using Helm.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  destination:
    namespace: k8s-pvc-tagger
    server: https://my-clusters-cluster.example.com
  project: default
  source:
    chart: k8s-pvc-tagger
    repoURL: https://mtougeron.github.io/helm-charts/
    targetRevision: 2.0.8
    helm:
      releaseName: k8s-pvc-tagger

This is great if you’re deploying to a single cluster, but what if you have more than one? You could create multiple Application resources, but that would be tedious and doesn’t scale. This is where Argo CD’s ApplicationSet comes into play.

An ApplicationSet (docs) automatically generates a list of Application resources from a template. For example, you can use an ApplicationSet to deploy the same Application to multiple clusters, or to deploy your application based on pull requests to a repository. In this post, I’ll show you how to use the ApplicationSet generators (docs) to build several powerful deployment patterns that ease the way you use Argo CD.

Before we get into specific examples, it’s important to understand how an ApplicationSet works. You can use a variety of generators to determine what the deployment should look like. These generators can be driven by files or directories in a repo, open pull requests, or labels on the Kubernetes clusters registered in Argo CD. Generators can also be combined with the matrix or merge generators to build more complex selection criteria. In addition, you can reference values from the files read by a git generator. I’ll go through these different generators in the sections below.
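As a quick taste of combining generators, here is a minimal sketch of a matrix generator that pairs every matched cluster with every app in a hardcoded list. The app names and the chart path layout are hypothetical, purely for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: fleet-apps
  namespace: argocd
spec:
  goTemplate: true
  generators:
    # The matrix generator takes the cross product of its two child generators:
    # every production cluster x every app in the list below.
    - matrix:
        generators:
          - clusters:
              selector:
                matchLabels:
                  environment: production
          - list:
              elements:
                - app: k8s-pvc-tagger
                - app: guestbook
  template:
    metadata:
      # One Application per (cluster, app) pair.
      name: '{{.app}}-{{.name}}'
    spec:
      destination:
        namespace: '{{.app}}'
        server: '{{.server}}'
      project: default
      source:
        # Hypothetical repo layout: each app's chart lives under charts/<app>.
        repoURL: https://github.com/mtougeron/my-deploy-repo
        path: 'charts/{{.app}}'
        targetRevision: HEAD
```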

In my opinion, the simplest generator is the clusters generator (docs), which lets you deploy to multiple clusters based on the labels assigned when each Kubernetes cluster was added to Argo CD. The generator filters the available clusters by those labels and creates an Application resource for each cluster it finds. Here’s an example of that approach.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - clusters:
        selector:
          matchLabels:
            environment: production
  template:
    metadata:
      name: 'k8s-pvc-tagger-{{.name}}'
    spec:
      destination:
        namespace: k8s-pvc-tagger
        server: '{{.server}}'
      project: default
      source:
        chart: k8s-pvc-tagger
        repoURL: https://mtougeron.github.io/helm-charts/
        targetRevision: 2.0.8
        helm:
          releaseName: k8s-pvc-tagger

It’s important to use the {{.name}} variable (or similar) so that each generated Application resource has a unique name. Otherwise you will have conflicts, and that’s never a good thing. Second, you’ll see the {{.server}} variable, which holds the server URL of the cluster the Application is being deployed to. The rest looks just like a standard Application.
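The labels the generator matches on come from the Secret that registers the cluster in Argo CD. As a sketch (the cluster name and server here are placeholders), a declaratively registered cluster might look like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-production-cluster
  namespace: argocd
  labels:
    # This label marks the Secret as an Argo CD cluster registration.
    argocd.argoproj.io/secret-type: cluster
    # Custom label that the clusters generator's selector can match.
    environment: production
type: Opaque
stringData:
  name: my-production-cluster
  server: https://my-clusters-cluster.example.com
```

The `{{.name}}` and `{{.server}}` variables in the ApplicationSet template resolve to the `name` and `server` fields of this Secret.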

But what if you wanted to deploy different versions of k8s-pvc-tagger based on the environment? After all, it’s always good to test in non-prod first, right? An ApplicationSet allows for this as well. In this example, we define that the stage environment should run version 2.0.8 while production is still running 2.0.7, using the templating options to dynamically decide which version to run.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - clusters:
        selector:
          matchLabels:
            environment: stage
        values:
          version: 2.0.8
    - clusters:
        selector:
          matchLabels:
            environment: production
        values:
          version: 2.0.7
  template:
    metadata:
      name: 'k8s-pvc-tagger-{{.name}}'
    spec:
      destination:
        namespace: k8s-pvc-tagger
        server: '{{.server}}'
      project: default
      source:
        chart: k8s-pvc-tagger
        repoURL: https://mtougeron.github.io/helm-charts/
        targetRevision: '{{.values.version}}'
        helm:
          releaseName: k8s-pvc-tagger

Let’s take this a step further. What if we wanted dev to use a pre-release version, stage to run 2.0.8, and production to run 2.0.7? This gets a little more complicated because Helm with Argo CD doesn’t allow you to install a chart without defining a version. This means we need to get a bit tricky and toggle between installing the Helm chart from git or from the chart repository.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - clusters:
        selector:
          matchLabels:
            environment: dev
        values:
          version: HEAD
    - clusters:
        selector:
          matchLabels:
            environment: stage
        values:
          version: 2.0.8
    - clusters:
        selector:
          matchLabels:
            environment: production
        values:
          version: 2.0.7
  template:
    metadata:
      name: 'k8s-pvc-tagger-{{.name}}'
    spec:
      destination:
        namespace: k8s-pvc-tagger
        server: '{{.server}}'
      project: default
      source:
        chart: '{{if ne .values.version "HEAD"}}k8s-pvc-tagger{{end}}'
        path: '{{if eq .values.version "HEAD"}}charts/k8s-pvc-tagger{{end}}'
        repoURL: '{{if ne .values.version "HEAD"}}https://mtougeron.github.io/helm-charts/{{else}}https://github.com/mtougeron/k8s-pvc-tagger{{end}}'
        targetRevision: '{{.values.version}}'
        helm:
          releaseName: k8s-pvc-tagger

In that example, if .values.version is HEAD we set an empty value for chart and instead set path to the chart’s location in git. Similarly, we toggle between the chart repository and the git repo in the repoURL field. This is handy for automatically pre-releasing to the dev clusters whenever main is updated while staggering the deployments to the stage and production environments.
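For illustration, here is roughly what the rendered source block of a generated Application would look like for a dev cluster, where .values.version is HEAD:

```yaml
# Rendered source for a dev cluster (.values.version == "HEAD"):
source:
  chart: ''                      # empty: the chart field is effectively unused
  path: charts/k8s-pvc-tagger    # the Helm chart is pulled from git instead
  repoURL: https://github.com/mtougeron/k8s-pvc-tagger
  targetRevision: HEAD
  helm:
    releaseName: k8s-pvc-tagger
```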

If we want to alter the values for the Helm chart based on the cluster’s environment, we can go one step further with the templating and set the valuesObject in the Application's template. In this example, the dev environment runs with debug: true so that we can see more details in the logs. We also adjust the amount of CPU requested because we run a larger cluster in production than in the other environments.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  ...
  template:
    spec:
      source:
        helm:
          valuesObject:
            debug: '{{if eq .values.environment "dev"}}true{{end}}'
            resources:
              requests:
                cpu: '{{if eq .values.environment "production"}}100m{{else}}50m{{end}}'

Taking the idea to the next level, let’s run a version of a deployment based on a PR. This is helpful for testing changes before they are merged. In this scenario we’ll use the helm-guestbook chart that Argo CD provides.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - pullRequest:
        github:
          owner: argoproj
          repo: argocd-example-apps
          labels:
            - ok-to-test
  template:
    metadata:
      name: 'guestbook-{{.branch_slug}}-{{.number}}'
      labels:
        branch: 'guestbook-{{.branch}}'
    spec:
      destination:
        namespace: 'guestbook-{{.branch_slug}}-{{.number}}'
        server: https://kubernetes.default.svc
      project: default
      source:
        path: helm-guestbook
        repoURL: https://github.com/argoproj/argocd-example-apps
        targetRevision: '{{.head_sha}}'
        helm:
          releaseName: guestbook
          valuesObject:
            ingress:
              hosts:
                - 'https://guestbook-{{.branch_slug}}-{{.number}}.example.com'
      syncPolicy:
        syncOptions:
          - CreateNamespace=true

Breaking down the changes by section, you’ll see the generators section now uses pullRequest (docs). In our case we’re using GitHub as the source, but options like GitLab, Bitbucket, and others are supported as well. It defines the repo that the PRs come from and only creates an Application for a PR if it carries the ok-to-test label. This helps prevent tests from running unless you’ve authorized them.

In the next section you’ll see that it uses .branch_slug and .number to make the name unique per PR. You might have also noticed that we added labels to the metadata so that, in the Argo CD UI, we can filter to all the Application resources created for a branch of the guestbook repo. Most importantly, targetRevision is set to .head_sha so that the code from the PR’s revision is used.

In the valuesObject we dynamically assign the hosts so that each PR has its own URL to test against. Other values can be customized as well so that the deployment for the PR best represents the changes being made.

Lastly, spec.destination.namespace is unique per branch and PR as well, which allows each PR to be deployed into its own Kubernetes Namespace for isolation. For this to work, the CreateNamespace=true sync option must also be set.
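To make this concrete, for a hypothetical PR #42 from a branch named feature-login that carries the ok-to-test label, the template above would render roughly as:

```yaml
# Hypothetical rendered Application for PR #42 on branch feature-login:
metadata:
  name: guestbook-feature-login-42
  labels:
    branch: guestbook-feature-login
spec:
  destination:
    namespace: guestbook-feature-login-42
    server: https://kubernetes.default.svc
```

When the PR is closed or the label is removed, the generated Application disappears along with it.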

The merge generator is pretty cool IMO because it can filter the clusters found by the clusters generator based on the values found by the git generator. Take an example where you have 100 clusters but want to install the k8s-pvc-tagger Helm chart into only 10 of them. You could label each cluster with a flag that defines which clusters run that app, but then adding or removing the app on a cluster means relabeling the cluster, which is generally a more operations-focused task. Wouldn’t it be easier to just drop a values file into a directory of a git repo and have the app automatically installed? Or have a single file that defines which version of a Helm chart to install?

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - merge:
        mergeKeys:
          - name
        generators:
          - clusters:
              selector:
                matchLabels:
                  argocd.argoproj.io/secret-type: cluster
          - git:
              repoURL: https://github.com/mtougeron/my-deploy-repo
              revision: HEAD
              files:
                - path: "clusters/*.yaml"
        selector:
          matchExpressions:
            - key: k8s-pvc-tagger
              operator: Exists
  template:
    metadata:
      name: 'k8s-pvc-tagger-{{.name}}'
    spec:
      destination:
        namespace: k8s-pvc-tagger
        server: '{{.server}}'
      project: default
      source:
        chart: k8s-pvc-tagger
        repoURL: https://mtougeron.github.io/helm-charts/
        targetRevision: '{{index . "k8s-pvc-tagger"}}'
        helm:
          releaseName: k8s-pvc-tagger

In the mtougeron/my-deploy-repo repository, the clusters directory contains a set of YAML files, each naming a cluster along with the charts and versions to install on it.

name: my-cluster-name
k8s-pvc-tagger: 2.0.8
guestbook: HEAD
some-other-app: 1.2.3

Argo CD first gets the list of clusters and merges it with the files found in that directory, joined on the name key. It then filters the merged list down to the entries that have a variable called k8s-pvc-tagger. Lastly, it uses the value of that variable to set the targetRevision to install.
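Concretely, for the my-cluster-name file above, the merge generator produces a single parameter set per cluster along these lines (the server URL is illustrative):

```yaml
# Illustrative merged parameters for one cluster:
name: my-cluster-name                          # the merge key, present on both sides
server: https://my-cluster-name.example.com    # from the clusters generator
k8s-pvc-tagger: 2.0.8                          # from the git file; its presence
guestbook: HEAD                                #   also satisfies the Exists selector
some-other-app: 1.2.3
```

A cluster with no matching file in clusters/ never gains the k8s-pvc-tagger key, so the selector drops it and no Application is generated for it.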

While not specific to an ApplicationSet, a feature I really like in Argo CD is the ability to use sources instead of source for an Application. This allows you to use more than one repository in your deployment. Why would you want this? A common practice is to use an open source Helm chart together with your own configuration repository. Let’s say I have a configuration repository that contains my values file(s) for the Helm chart, laid out like this:

├── guestbook
├── k8s-pvc-tagger
│ ├── dev.yaml
│ ├── production.yaml
│ └── stage.yaml
└── some-other-app

Now I want to use these Helm values files when rendering the chart via Argo CD. I set up two sources (instead of using source): one for the Helm chart and one that references my-config-repo, where the values file(s) live. The values files are stored in the values directory and broken down by chart. The my-config-repo repository is aliased as $values so that the first source can pull files from it.

spec:
  template:
    spec:
      sources:
        - repoURL: https://mtougeron.github.io/helm-charts/
          chart: k8s-pvc-tagger
          targetRevision: 2.0.8
          helm:
            releaseName: k8s-pvc-tagger
            valueFiles:
              - values.yaml
              # $values resolves to the root of the my-config-repo repository.
              - $values/values/k8s-pvc-tagger/{{.metadata.labels.environment}}.yaml
        - repoURL: https://github.com/mtougeron/my-config-repo
          targetRevision: HEAD
          ref: values

As you can see in that example, it also dynamically points to the values file matching the environment label set on the cluster in Argo CD.

When you sum it all together, as seen below, you have a powerful way to dynamically filter and set the version of the charts you want to install on each cluster.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8s-pvc-tagger
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - clusters:
        selector:
          matchLabels:
            environment: dev
        values:
          version: HEAD
    - merge:
        mergeKeys:
          - name
        generators:
          - clusters:
              selector:
                matchLabels:
                  argocd.argoproj.io/secret-type: cluster
                  environment: stage
              values:
                # "pinned" is a sentinel meaning "use the version from the git file"
                version: pinned
          - git:
              repoURL: https://github.com/mtougeron/my-deploy-repo
              revision: HEAD
              files:
                - path: "clusters/*.yaml"
        selector:
          matchExpressions:
            - key: k8s-pvc-tagger
              operator: Exists
    - merge:
        mergeKeys:
          - name
        generators:
          - clusters:
              selector:
                matchLabels:
                  argocd.argoproj.io/secret-type: cluster
                  environment: production
              values:
                version: pinned
          - git:
              repoURL: https://github.com/mtougeron/my-deploy-repo
              revision: HEAD
              files:
                - path: "clusters/*.yaml"
        selector:
          matchExpressions:
            - key: k8s-pvc-tagger
              operator: Exists
  template:
    metadata:
      name: 'k8s-pvc-tagger-{{.name}}'
    spec:
      destination:
        namespace: k8s-pvc-tagger
        server: '{{.server}}'
      project: default
      sources:
        - repoURL: '{{if ne .values.version "HEAD"}}https://mtougeron.github.io/helm-charts/{{else}}https://github.com/mtougeron/k8s-pvc-tagger{{end}}'
          chart: '{{if ne .values.version "HEAD"}}k8s-pvc-tagger{{end}}'
          path: '{{if eq .values.version "HEAD"}}charts/k8s-pvc-tagger{{end}}'
          targetRevision: '{{if eq .values.version "HEAD"}}HEAD{{else}}{{index . "k8s-pvc-tagger"}}{{end}}'
          helm:
            releaseName: k8s-pvc-tagger
            valueFiles:
              - values.yaml
              - $values/values/k8s-pvc-tagger/{{.metadata.labels.environment}}.yaml
        - repoURL: https://github.com/mtougeron/my-config-repo
          targetRevision: HEAD
          ref: values

Hopefully you’ve found these examples helpful and agree that using an ApplicationSet is a powerful way to do deployments. If you have any questions, I’m available on the CNCF slack and I’d be happy to provide more details. You can also watch some of my talks (GitOps Me Some of That! Managing Hundreds of Clusters with Argo CD or Hundreds of Clusters Sitting in a Tree with Argo CD) on the same subject as well.


Written by Mike Tougeron

Lead SRE @Adobe , #kubernetes fan & gamer (board & video). he/him. Remember, reality is all in your head…
