What is K3s and how does Relay use it

This article aims to teach readers about K3s and its relation to Kubernetes, and delves into the ways in which Relay uses these technologies. Throughout, we address you, the reader, as a developer learning about k3s, Kubernetes, and other cloud-native tech. Enjoy!

What is K3s?

Also pronounced “keys”, k3s is a lightweight Kubernetes distribution developed by Rancher Labs. It combines all the components of the larger mainline Kubernetes distribution into a single binary and strips out a large number of unused APIs, which helps the nodes run a bit leaner. All of this makes running a Kubernetes cluster much easier, especially on a laptop, Raspberry Pi, or other low-powered edge device.

Rancher K3S diagram

The image above is a diagram from Rancher Labs showing how the server and agent function in K3s. For an in-depth introduction and demo, check out this video.

Why Did Relay Use K3s?

K3s is also part of a broad ecosystem of tools built by Rancher Labs around slimming down Kubernetes. This ecosystem was important when we evaluated tools to help us build our workflow engine into the Relay command line interface (CLI). We won’t cover them all here, but the one we used to meet our needs on Relay is k3d.

I spent some time evaluating a few tools built for running a Kubernetes cluster on a workstation host, and the two that stood out were kind and k3d. Both have very good cross-platform support because they run their nodes as Docker containers, so they work easily on Windows, Linux, and macOS. Ultimately, we chose k3d because its codebase is easier to use as a library, allowing us to embed cluster management directly into the Relay CLI (from here on referred to as “the CLI”).

How we use Relay + K3s

Developing the components of Relay can be cumbersome. Relay relies heavily on Kubernetes APIs, and the core itself is a Kubernetes controller that reacts to changes to our own custom resources, defined by Custom Resource Definitions (CRDs). In order for our API to kick off a workflow run, it must craft one of these custom resources and submit it to a cluster running the Relay core. There are also security-related features, like token signing and policy creation, so workflows have access to their secrets and runtime data.
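
To make that controller pattern concrete, here is a minimal sketch using the controller-runtime library to watch a custom resource. The `WorkflowRun` kind and `example.relay.sh` API group are hypothetical stand-ins, not Relay’s actual CRD names, and the sketch assumes a recent controller-runtime version:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// workflowRunGVK is a hypothetical group/version/kind standing in for
// Relay's real CRD; the actual names live in the relay-core codebase.
var workflowRunGVK = schema.GroupVersionKind{
	Group:   "example.relay.sh",
	Version: "v1",
	Kind:    "WorkflowRun",
}

// reconciler reacts to changes to WorkflowRun resources, following the
// same controller pattern the Relay core uses.
type reconciler struct {
	client client.Client
}

func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(workflowRunGVK)
	if err := r.client.Get(ctx, req.NamespacedName, obj); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// A real controller would schedule the workflow's steps here.
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	obj := &unstructured.Unstructured{}
	obj.SetGroupVersionKind(workflowRunGVK)
	if err := ctrl.NewControllerManagedBy(mgr).
		For(obj).
		Complete(&reconciler{client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```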

This means that Kubernetes is a hard requirement. Unless you are running a development cluster in the cloud, or a cluster you built yourself on your laptop, it’s nearly impossible to try out changes to our codebases without committing the code, letting CI build it, and then pushing that build to a remote cluster.

We also just open-sourced the relay-core codebase and want to take advantage of this by enabling our users to run a local cluster that makes workflow authoring easier. We think that being able to write and run workflows for local testing will speed up the authoring process and allow our users to experiment with workflows a lot more.

The requirements for a development environment

We had a few requirements for running a relay-core cluster from the CLI:

  1. Much like k3s, it has to be a single binary. Outside of needing Docker installed on the host, starting a cluster from the CLI should only require the CLI itself to be installed.
  2. It must be opinionated. We don’t want the CLI to become yet another Kubernetes cluster manager. There are a lot of these and most of them are excellent tools. This is why we embedded k3d. It does a very good job at this.
  3. We wanted to isolate calls to the cluster by embedding the kubectl command. If you need to use kubectl against the cluster, you can invoke it as a sub-command of the CLI, which automatically loads the correct kubeconfig and credentials. Many cluster administrators use kubectl on their workstations to manage other Kubernetes clusters, and we want to prevent destructive actions from being executed against the wrong one (see the sketch after this list).
  4. We wanted the CLI to bootstrap a fully functional standalone workflow execution engine when the cluster is created. This means that you can run Relay workflows, written in YAML using our workflow schema, in the local cluster. We wanted to utilize this standalone engine so that our developers could run a single workflow that bootstraps further into a fully functional version of our SaaS offering, essentially upgrading it to an instance of the multi-tenant environment. This gives us a fully functional development environment with as few as two commands.

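As an illustration of the third requirement, here is a minimal sketch of wrapping kubectl as a sub-command with the kubeconfig pinned. The `clusterKubeconfigPath` helper and its path are hypothetical; the real CLI’s implementation differs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// clusterKubeconfigPath is a hypothetical helper returning the kubeconfig
// the CLI wrote when it created the local cluster.
func clusterKubeconfigPath() string {
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".config", "relay", "kubeconfig")
}

// runKubectl shells out to kubectl with KUBECONFIG pinned to the local
// cluster, so commands can't accidentally hit another cluster.
func runKubectl(args []string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+clusterKubeconfigPath())
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// e.g. `relay kubectl get pods` would forward `get pods` to kubectl.
	if err := runKubectl(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
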
How to Embed k3d and Run k3s via the CLI

We pull in k3d as a set of Go packages and use its cluster management code to configure a multi-node Kubernetes cluster. The configuration is very opinionated, but there are limits to this: we have to allow the user to change some of the default ports (HTTP ingress and the container image registry), so we provide flags for those. The CLI also treats the cluster as a singleton, which avoids the unnecessary complexity of managing multiple clusters and storing cluster metadata state. This means that the “start” command will create and/or start the cluster nodes.
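
The k3d Go API has changed across major versions, so rather than reproduce its exact signatures, here is a hedged sketch of the singleton pattern: a “start” that creates the cluster if it doesn’t exist and starts it otherwise. The `ClusterManager` interface and `fakeManager` are hypothetical stand-ins for calls into k3d’s cluster packages:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// ClusterManager is a hypothetical interface over k3d's cluster
// management packages; the real CLI calls into k3d's Go API directly.
type ClusterManager interface {
	Exists(ctx context.Context, name string) (bool, error)
	Create(ctx context.Context, name string) error
	Start(ctx context.Context, name string) error
}

// The CLI manages exactly one cluster, so the name is fixed.
const singletonName = "relay-dev" // illustrative name

// startCluster implements a "start" command: create the cluster if it
// is missing, then start its nodes.
func startCluster(ctx context.Context, mgr ClusterManager) error {
	exists, err := mgr.Exists(ctx, singletonName)
	if err != nil {
		return fmt.Errorf("checking for cluster: %w", err)
	}
	if !exists {
		if err := mgr.Create(ctx, singletonName); err != nil {
			return fmt.Errorf("creating cluster: %w", err)
		}
	}
	return mgr.Start(ctx, singletonName)
}

// fakeManager is a trivial in-memory implementation for demonstration.
type fakeManager struct{ clusters map[string]bool }

func (f *fakeManager) Exists(_ context.Context, name string) (bool, error) {
	return f.clusters[name], nil
}

func (f *fakeManager) Create(_ context.Context, name string) error {
	f.clusters[name] = true
	return nil
}

func (f *fakeManager) Start(_ context.Context, name string) error {
	if !f.clusters[name] {
		return errors.New("cluster does not exist")
	}
	fmt.Println("started", name)
	return nil
}

func main() {
	mgr := &fakeManager{clusters: map[string]bool{}}
	if err := startCluster(context.Background(), mgr); err != nil {
		panic(err)
	}
}
```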

Once the cluster has started and we can talk to the Kubernetes API server, the CLI then applies the dependency resources. These include cert-manager, Tekton, and Knative components. The bootstrapping process could be its own lengthy article, so I won’t dive in too heavily here.
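
As one small piece of that process, here is a sketch of how a CLI might wait for the API server to become reachable before applying anything, using client-go. This illustrates the general approach, not Relay’s actual code, and the kubeconfig path is illustrative:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForAPIServer polls the API server's version endpoint until it
// responds or the retry budget is exhausted.
func waitForAPIServer(kubeconfig string, attempts int) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if _, err = clientset.Discovery().ServerVersion(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("API server not ready: %w", err)
}

func main() {
	// Path is illustrative; the CLI would use its own kubeconfig location.
	if err := waitForAPIServer("/tmp/relay-kubeconfig", 30); err != nil {
		panic(err)
	}
}
```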

This is actually similar to how k3s itself works. By pulling in the important bits from the upstream Kubernetes packages, k3s essentially embeds the entire Kubernetes stack into a single binary that acts as both the kubelet and the API server, then bootstraps some useful components like the network fabric, service load balancing, and an ingress controller.

Putting k3s and k3d components to work

Because both k3s and k3d set up networking, load balancing, and ingress for us, we can use them to expose services by creating regular Kubernetes resources and CRDs. For instance, we run a Docker registry inside the cluster that you can push images to. To expose it to the host, we create a Kubernetes Service of type LoadBalancer on port 5000, then use the k3d node configuration to map port 5000 from the host to one of the cluster nodes.
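
Here is a sketch of what that Service might look like when constructed with client-go’s typed API. Names like `docker-registry` and the namespace are illustrative, not necessarily what Relay uses:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// registryService builds a LoadBalancer Service exposing an in-cluster
// Docker registry on port 5000; k3s's service load balancer binds the
// port on a cluster node, and k3d maps that node port to the host.
func registryService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "docker-registry", // illustrative name
			Namespace: "default",
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "docker-registry"},
			Ports: []corev1.ServicePort{{
				Port:       5000,
				TargetPort: intstr.FromInt(5000),
			}},
		},
	}
}

func main() { _ = registryService() }
```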

Managing resources is made even easier by the client and utilities from controller-runtime, a Kubernetes SIG project. We use them to process and submit all resources to the cluster, whether they come from YAML files or are defined as Go types.
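
A minimal sketch of that pattern follows: decode a YAML manifest into an unstructured object and submit it with the controller-runtime client, which also accepts typed Go objects. This is an illustration of the approach, not Relay’s actual code, and it omits details like server-side apply and update-on-conflict handling:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/yaml"
)

// applyYAML decodes a single YAML manifest into an unstructured object
// and submits it with the controller-runtime client.
func applyYAML(ctx context.Context, c client.Client, manifest []byte) error {
	obj := &unstructured.Unstructured{}
	if err := yaml.Unmarshal(manifest, &obj.Object); err != nil {
		return err
	}
	return c.Create(ctx, obj)
}

func main() {
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: relay-demo\n")
	if err := applyYAML(context.Background(), c, manifest); err != nil {
		panic(err)
	}
}
```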

Sign Up for Relay and Try Out our Latest CLI

Rancher created k3s as a lightweight Kubernetes distribution that’s easy to start, configure, and run, and I think they pulled it off. The extensive configuration surface and simple defaults make it easy to roll out clusters using shell scripting, configuration management, or by embedding it into another CLI program.

There’s still a lot of work to be done, and speed improvements to be made, but we think our standalone workflow engine will help users experiment with workflows and come up with some awesome new step plugins for Relay.

Sign up for Relay here and check out the latest release of our CLI if you want to try running some workflows yourself.