The Carvel Suite is a set of composable tools to help deploy applications to Kubernetes. While other solutions try to solve all problems in one package, Carvel provides tools and leaves it up to you to glue the components together. This enables a lot of flexibility!
Updated (April 25, 2021):
- Use ytt v0.32.0 instead of v0.31.0
This post will cover using:
- vendir to fetch dependencies such as YAML files from a Git repository and Helm charts
- ytt to patch retrieved YAML files using a Starlark-based templating language
- kapp to deploy to Kubernetes and provide release lifecycle management
We’ll use these tools to deploy an nginx deployment and the Loki-Stack Helm chart.
Note: This post's example code can be found at carvel-suite-example.
Install required tools
We'll need a few tools to try out the Carvel suite: vendir, ytt, kapp, and Helm, plus something to run a local Kubernetes cluster. Download each of them before continuing.
Once done, go ahead and create a Kubernetes cluster by running:
|
|
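The exact command isn't preserved above, and the post doesn't depend on any particular cluster tool. As one option (my choice for illustration, not necessarily what the author used), kind works fine:

kind create cluster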
Use vendir to download raw YAML files from Git repo
Our first goal is to consume Kubernetes’ example nginx deployment.
We can use vendir to fetch this YAML file. Start by creating a file named vendir.yml
with the following content:
|
|
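The original snippet isn't preserved here, but based on the surrounding text a vendir.yml for this step would look roughly like the sketch below. The branch name, include path, and newRootPath are my assumptions about where the example deployment lives in kubernetes/website:

apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: deploy/synced
  contents:
  - path: nginx
    git:
      # assumed: pinning to the master branch the post refers to
      url: https://github.com/kubernetes/website
      ref: origin/master
    # assumed location of the example nginx deployment within the repo
    includePaths:
    - content/en/examples/application/deployment.yaml
    newRootPath: content/en/examples/application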
Note: Consult the vendir.yml spec for more info on what can exist in vendir.yml.
Navigate to the directory where you created the above vendir.yml
file and run:
|
|
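The command in question is vendir's sync subcommand, run from the directory containing vendir.yml:

vendir sync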
This might take a while as the kubernetes/website
repository is quite large and leverages Git submodules. Fortunately, vendir handles
Git submodules just fine! Afterward, we’ll have a structure that looks like this:
.
├── deploy
│   └── synced
│       └── nginx
│           └── deployment.yaml
├── vendir.lock.yml
└── vendir.yml
We're using the deploy/synced directory to hold files downloaded by vendir. Later, we'll create other directories under deploy to patch our retrieved files.
Notice the vendir.lock.yml file that was created. At the time of writing, mine looks like this:
|
|
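The original lock file contents aren't reproduced above, but a vendir.lock.yml for this setup has roughly this shape (the SHA and commit title below are placeholders, not values from the post):

apiVersion: vendir.k14s.io/v1alpha1
kind: LockConfig
directories:
- path: deploy/synced
  contents:
  - path: nginx
    git:
      commitTitle: "<title of the pinned commit>"
      sha: <exact commit SHA on the pinned branch>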
This is excellent: we can specify a branch name in our vendir.yml file, and vendir will create a lock file that pins the reference to an exact commit SHA. Even though the master branch will continue to change, we'll always get the same result because of this lock file.
To instruct vendir to use a lock file, we have to run:
|
|
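vendir's --locked flag tells it to resolve references from the lock file instead of the branch:

vendir sync --locked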
Use ytt to modify nginx deployment
If we look at the contents of deploy/synced/nginx/deployment.yaml
we’ll see:
|
|
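The file contents aren't reproduced above. At the time, the Kubernetes docs example looked roughly like this (reconstructed from memory, so treat it as approximate):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2   # the post notes this starts at 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80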
Our next goal is to change the replica count to 3. We can create a template file for ytt to change the replica count from 2 to 3.
Create a new YAML file named deploy/overlays/nginx/nginx-deployment-replica-count.yaml
by running:
|
|
Note: All of these names are a convention I'm using; they're not required.
The content of deploy/overlays/nginx/nginx-deployment-replica-count.yaml
should be:
|
|
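A minimal ytt overlay that accomplishes this looks like the following sketch; it matches the Deployment by name and bumps spec.replicas:

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "nginx-deployment"}})
---
spec:
  replicas: 3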
We can then see the impact of our overlay by running ytt:
|
|
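The command passes both the synced files and the overlay directory to ytt, something like:

ytt -f deploy/synced/nginx -f deploy/overlays/nginx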
ytt will print the modified nginx deployment with 3 replicas to stdout!
Use kapp to deploy
Now that we can create the raw YAML files with ytt, we can deploy them using kapp.
Run the following command:
|
|
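Assuming the paths used so far, the pipeline looks roughly like this (kapp reads the rendered YAML from stdin via -f -):

ytt -f deploy/synced/nginx -f deploy/overlays/nginx \
  | kapp deploy -a dev-nginx -f - --diff-changes --yes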
This pipes the output from ytt to kapp. kapp then deploys the resources as an app named dev-nginx. The --diff-changes option displays the difference between the cluster's version of each resource and the version in the provided YAML, which is super nifty for making sure the desired change is what actually gets deployed. Finally, we pass --yes to automatically confirm the deploy.
Note: kapp has pretty reasonable defaults for the order in which it submits resources to Kubernetes, similar to Helm. kapp also supports changing that order and configuring how to wait on different resources. Check out kapp's documentation. We won't tackle any of this in this post.
We can get a list of deployed applications in the cluster by running:
|
|
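kapp provides a list subcommand for this:

kapp list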
and we can inspect our dev-nginx application by running:
|
|
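That's kapp's inspect subcommand, pointed at the app name:

kapp inspect -a dev-nginx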
Use vendir to download the Loki-Stack Helm chart
We’ve deployed a relatively simple YAML file. Now let’s try deploying something more complex, like the Loki-Stack Helm chart.
First, we'll append a helmChart entry to the contents in our vendir.yml. Update vendir.yml to match:
|
|
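The updated file isn't reproduced above, but the helmChart addition would look roughly like this, keeping the earlier (assumed) nginx entry as-is; the chart version shown is a hypothetical pin, not one taken from the post:

apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: deploy/synced
  contents:
  - path: nginx
    git:
      url: https://github.com/kubernetes/website
      ref: origin/master
    includePaths:
    - content/en/examples/application/deployment.yaml
    newRootPath: content/en/examples/application
  - path: loki-stack
    helmChart:
      name: loki-stack
      version: "2.3.1"   # hypothetical version pin
      repository:
        url: https://grafana.github.io/helm-charts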
Once again, run:
|
|
vendir will retrieve our nginx deployment file and the loki-stack Helm chart. It'll also update the lock file. Feel free to browse the deploy/synced/loki-stack directory to see that its templates and its dependencies (the loki Helm chart, the promtail Helm chart, etc.) are included!
Use ytt to set the namespace for loki-stack resources
Our ytt workflow will be a bit different this time. ytt isn’t aware of Helm templates, so we’ll need to use Helm to convert templates to raw YAML files, and then ytt can handle the rest.
We can run:
|
|
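A likely shape for that command: render the chart with helm template and feed the result to ytt on stdin (the release name loki-stack is my assumption):

helm template loki-stack deploy/synced/loki-stack | ytt -f -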
At this point, there isn't anything for ytt to handle. If you look at the output of the above command, you'll notice no namespace is set on any resource. Let's create a ytt template so each resource is created in the loki namespace.
Create a new file named deploy/overlays/loki-stack/all-namespace.yaml
by running:
|
|
Next, make deploy/overlays/loki-stack/all-namespace.yaml
look like:
|
|
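A minimal version of that overlay might look like this; it stamps metadata.namespace onto every document (the author's actual overlay could be more selective about which resources it touches):

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.all, expects="1+"
---
metadata:
  #@overlay/match missing_ok=True
  namespace: loki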
In Helm, it's common to pass the --create-namespace option so Helm creates the release's namespace if it's missing. kapp doesn't have this feature, so we'll need another template to add a Namespace resource. Without it, kapp will return an error later saying the loki namespace doesn't exist.
Create a file named deploy/overlays/loki-stack/loki-namespace.yaml with the following:
|
|
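The file holds a plain Namespace resource along the lines of:

apiVersion: v1
kind: Namespace
metadata:
  name: loki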
We can re-run the following command:
|
|
to see the resources printed to stdout again. This time the namespace is set, and the loki namespace resource exists.
Use ytt to handle Helm test pods
Another great feature of Helm is the concept of test pods. This isn’t something kapp is aware of either. Fortunately, we can do some slight modifications via ytt to effectively use the same test pod spec to validate that an application is deployed correctly.
Let’s create another ytt template file named deploy/overlays/loki-stack/loki-stack-test-pod.yaml
with the following:
|
|
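The exact overlay isn't reproduced above. Based on the description that follows, a sketch might match the chart's test pod, let it restart until it succeeds, and use kapp annotations to force it to be recreated on every deploy; the pod name loki-stack-test is an assumption on my part:

#@ load("@ytt:overlay", "overlay")

#! The test pod's exact name depends on the chart/release; "loki-stack-test" is assumed here.
#@overlay/match by=overlay.subset({"kind": "Pod", "metadata": {"name": "loki-stack-test"}})
---
metadata:
  #@overlay/match missing_ok=True
  annotations:
    #! kapp replaces the nonce with a unique value on each deploy, so the pod always "changes"...
    #@overlay/match missing_ok=True
    kapp.k14s.io/nonce: ""
    #! ...and this tells kapp to delete and recreate the pod rather than patch it in place.
    #@overlay/match missing_ok=True
    kapp.k14s.io/update-strategy: always-replace
spec:
  #! Keep restarting the test container until it eventually succeeds.
  #@overlay/match missing_ok=True
  restartPolicy: OnFailure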
The above YAML file’s comments explain how we can leverage the Loki-Stack’s test pod to validate the Loki-Stack deployment. Effectively, we modify the pod to restart until finally successful, and we instruct kapp to replace the test pod on deployments to validate that any changes don’t break Loki-Stack.
Use kapp to deploy our group of applications
When we deployed our dev-nginx application, we used kapp deploy
. The same command can be used for loki-stack, but kapp has
another command, kapp app-group deploy
, that is useful for deploying multiple applications at once. We provide a directory,
and kapp deploys each application.
This is where the composability of the Carvel suite really shines. We’ll want some glue to manage fetching dependencies, rendering a Helm template (if needed), applying ytt templates, and lastly, deploying to Kubernetes via kapp. Our glue will be a Bash script.
Create a script named deploy.sh
with:
|
|
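The script itself isn't reproduced above. Here's a sketch that matches the description that follows, using the directory layout and app names from earlier in the post; the author's actual script may differ in details:

#!/usr/bin/env bash
set -euo pipefail

# Fetch dependencies, honoring the lock file.
vendir sync --locked

# Start from a clean slate for rendered output.
rm -rf deploy/rendered
mkdir -p deploy/rendered

# Render each synced app: run it through Helm if it's a chart,
# then apply the matching ytt overlays.
find deploy/synced -mindepth 1 -maxdepth 1 -type d | while read -r synced_dir; do
  app="$(basename "${synced_dir}")"
  rendered_dir="deploy/rendered/${app}"
  mkdir -p "${rendered_dir}"

  if [[ -f "${synced_dir}/Chart.yaml" ]]; then
    # Helm chart: render templates first, then let ytt patch the output.
    helm template "${app}" "${synced_dir}" \
      | ytt -f - -f "deploy/overlays/${app}" \
      > "${rendered_dir}/manifests.yaml"
  else
    # Plain YAML: ytt handles it directly.
    ytt -f "${synced_dir}" -f "deploy/overlays/${app}" \
      > "${rendered_dir}/manifests.yaml"
  fi
done

# Each directory under deploy/rendered becomes an app named "<group>-<dir>",
# e.g. dev-nginx and dev-loki-stack.
kapp app-group deploy --group dev --directory deploy/rendered --yes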
There's a bit to take in there, though most of it is similar to what we've done already. The bulk is the while loop, which handles the case where an app is a Helm chart; this is also where we could add support for kustomize and other package formats. We create a directory for each rendered application under deploy/rendered/. Then we can simply instruct kapp to deploy the entire deploy/rendered directory. The --group option prefixes each application name, which is the directory name under deploy/rendered, such as nginx and loki-stack.
If we run:
|
|
kapp will detect no changes to dev-nginx, and it’ll deploy our loki-stack application.
Closing thoughts
I’m really excited about the Carvel suite. Having the fully rendered YAML files available opens several avenues for static analysis before even attempting to deploy to a real cluster, and I’m all about that fast feedback loop.
There’s a tradeoff with Carvel compared to other tools, given the amount of glue needed. For me, this composability is what I’m after. We’ve created a workflow that handles plain YAML files and Helm charts. It wouldn’t be much effort to support kustomize, for example. The flexibility here is empowering.
Are you using Carvel for anything? What’s working out for you? Let me know on Twitter, LinkedIn, or GitHub.