User-defined Webhooks in Puppet Relay with Knative and Ambassador API Gateway


Editor’s note: This article is a technical deep dive originally published on the Ambassador blog.

What is Puppet Relay?

Relay is an event-driven automation platform designed to make wrangling diverse operational environments easy. Relay executes workflows, which consist of multiple related steps, to perform actions like opening Jira tickets, merging pull requests, or even deploying an application to a Kubernetes cluster.

Relay is built on containerization and leverages Tekton to execute workflows. Each step in a Relay workflow runs an OCI-compatible container image. Unlike conventional workflow automation tools, this gives you the ability to make your own completely custom steps; you’re not restricted to our curated steps or even to a particular programming language.

When we set out to implement triggers that automatically run workflows from event data, we wanted you to have the same flexibility for receiving external data that you already have within a workflow. We decided to provide three initial options: schedule triggers, which run your workflows on a specified interval; push triggers, which let your services send data directly to Relay; and webhook triggers, which integrate with the many external services that can push event data to arbitrary HTTP endpoints.
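As a rough, illustrative sketch (the field names here are approximations, not Relay's authoritative schema; see the Relay documentation for the real syntax), a workflow might declare a schedule trigger and a webhook trigger like this:

```yaml
# Illustrative only -- not Relay's exact schema.
triggers:
  - name: nightly
    source:
      type: schedule
      schedule: '0 2 * * *'   # run every night at 02:00
  - name: pagerduty-incident
    source:
      type: webhook
      # A container you provide that runs a web server to receive payloads
      image: relaysh/pagerduty-trigger-incident-triggered
```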

Webhook triggers presented the biggest technical challenge to implement, as every service provides slightly different payloads representing their events. Keeping with our container-based approach, we let you define webhook triggers by running your own web server in a container you provide to us. In this post, we’ll walk through how we implemented our webhook trigger handling using Knative Serving and the Ambassador API Gateway.

You can see an example of a complete webhook trigger implementation in our PagerDuty integration.
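To make the container-based model concrete, here is a minimal, hypothetical sketch of what a webhook trigger's web server could look like. The payload fields, the normalized event shape, and the port are all invented for illustration; they don't reflect any particular service's schema or Relay's internal API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def normalize(payload: dict) -> dict:
    """Map a hypothetical external payload onto the event shape a workflow expects."""
    return {
        "source": payload.get("service", "unknown"),
        "action": payload.get("event_type", "unknown"),
    }

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the raw webhook payload from the external service.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        event = normalize(payload)
        print(json.dumps(event))  # a real trigger would hand this event to Relay
        self.send_response(202)   # accepted for asynchronous processing
        self.end_headers()

# Inside the trigger container, you would serve with something like:
#   HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Because every external service formats its events differently, the interesting part is the normalization step; the rest is a plain HTTP server in whatever language you prefer.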

Installing Knative Serving

We chose Knative Serving as a reverse proxy for webhook triggers. We could have configured a Kubernetes deployment for each webhook trigger, but we felt the resource burden on our cluster would be too high to have every customer pod running all the time.

Knative Serving’s unique model uses an activator and autoscaler to dynamically provision pods when it receives requests. A rough state machine for a Knative service looks like:

  1. When a service is created or updated, a revision is created initially in an inactive state. This corresponds to a Kubernetes deployment scaled to zero pods. In this inactive state, HTTP requests are routed to the Knative activator.
  2. When an HTTP request is received for the service, the Knative activator queues the request and switches the current revision to an active state. Knative then scales the deployment up and switches the service to route to the deployment’s pods. Once the service is pointing at the deployment, the queued HTTP request is dispatched.
  3. The Knative autoscaler monitors inbound request rate and scales the deployment as needed.
  4. If the service doesn’t receive any HTTP requests after a configured timeout, the deployment is scaled back to zero, the revision is marked as inactive, and the service is switched back to the activator.
  5. Repeat from the beginning!

For a cached container image, the whole activation process generally takes less than 2 seconds, quickly enough for our webhook handling use case. And for webhook triggers that only receive events every few minutes or less frequently, it saves us considerable cluster resources.
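The inactivity timeout from step 4 is tunable through Knative's config-autoscaler ConfigMap; for example (the value shown is illustrative, not a recommendation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  # How long to wait after the last request before scaling a revision to zero
  scale-to-zero-grace-period: "30s"
```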

Installing Knative Serving is straightforward: you need its CRDs and its core components, such as the activator. Here we'll use version 0.13.0, but check the Knative installation instructions for the latest version.

$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.13.0/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.13.0/serving-core.yaml

Then you need to pick a networking layer, or gateway, to route requests.

Choosing a Knative Serving Gateway

Like most Knative Serving users, we started by evaluating Istio, a popular service mesh and ingress gateway offering for Kubernetes. However, Istio’s focus on connecting microservices didn’t really support our use case:

  1. We didn’t need the service mesh component of Istio at all, only its Envoy gateway. A complete Istio installation on our cluster consumed a lot of resources, most of which were ultimately going to waste.
  2. Because we’re configuring webhook triggers dynamically from our database, we put our own reverse proxy in front of Knative Serving. It is difficult (although not impossible) to change the behavior of Istio to run as an internal-facing service instead of a public gateway.

Most other networking layer options for Knative Serving were positioned more strongly as ingress gateways, focusing mainly on exposing services directly to the internet.

Ultimately we settled on Ambassador because its lightweight single-container model made deploying it for our internal use case easy.

Installing Ambassador and Configuring Knative to Use It

For Relay, we use a custom Helm chart to set up the Ambassador API Gateway. We have a single deployment, an optional horizontal pod autoscaler, a service account with corresponding role bindings, and finally a ClusterIP service to use as the target networking layer for Knative.

Our deployment YAML is largely the same as the one from the Ambassador installation instructions in ambassador-rbac.yaml. However, we must explicitly enable Knative support by setting the AMBASSADOR_KNATIVE_SUPPORT environment variable:

# ...
      containers:
      - image: datawire/ambassador:1.5.2
        env:
        - name: AMBASSADOR_KNATIVE_SUPPORT
          value: 'true'
        # ...

Likewise, our service account and role bindings are similar to those in ambassador-rbac.yaml, but use {{ .Release.Namespace }} instead of default.

For the gateway service, note especially:

  1. The app.kubernetes.io/component label must be present and set exactly to the value ambassador-service, or Ambassador won’t pick up the service as the Knative networking target.
  2. We use type: ClusterIP to make the service cluster-local. This instance of Ambassador won’t be reachable from the internet.
{{- $name := include "ambassador.name" . -}}
{{- $fullname := include "ambassador.fullname" . -}}
{{- $namespace := .Release.Namespace -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ $fullname }}
  namespace: {{ $namespace }}
  labels:
    app.kubernetes.io/name: {{ $name }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: "{{ .Values.image.tag }}"
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "ambassador.chart" . }}
    # Per the Ambassador source code, this must be specified explicitly exactly
    # like this.
    app.kubernetes.io/component: ambassador-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: {{ $name }}
    app.kubernetes.io/instance: {{ .Release.Name }}

As a reference, you can find our entire Ambassador Helm chart on GitHub.

Finally, we need to configure Knative to use cluster-local routing and Ambassador as its default gateway. Simply apply this manifest to your cluster using kubectl apply:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Use the cluster-local domain so services are not exposed externally
  svc.cluster.local: ''
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Route requests through Ambassador instead of the default gateway
  ingress.class: ambassador.ingress.networking.knative.dev

Deploying, Testing, and Managing Internal-Only Knative Services

Now you can create a cluster-local Knative service. Use kubectl apply on a manifest like this:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-test-service
  namespace: default
  labels:
    # This label makes the service reachable only from inside the cluster
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        name: helloworld-go

Within a few seconds, Ambassador will process the service and set up a mapping for it. Assuming you installed Ambassador to the ambassador namespace, you’ll see this service when you run kubectl get svc:

NAME              TYPE           CLUSTER-IP   EXTERNAL-IP                               PORT(S)   AGE
my-test-service   ExternalName   <none>       ambassador.ambassador.svc.cluster.local   <none>    112s

If you don’t see the service pointing at your Ambassador deployment, inspect the Knative service and its ingress using kubectl describe ksvc my-test-service and kubectl describe king my-test-service (king is the short name for Knative’s internal ingress resource). The status conditions and events should provide useful hints for remediating any problems.

Now we can try sending a request to the service by running a one-off pod:

$ kubectl run \
    --generator=run-pod/v1 internal-requester \
    --image=alpine --rm -ti --attach --restart=Never \
    -- wget -q -O - http://my-test-service.default.svc.cluster.local
Hello World!
pod "internal-requester" deleted

Yay! Your service works. You’ll also see a pod running to handle the request you just made. If you don’t make any more requests, within a few minutes, that pod will be automatically terminated.

NAME                                                READY   STATUS    RESTARTS   AGE
my-test-service-cw9v2-deployment-59c889f74b-74rwk   2/2     Running   0          8s

You can view all the mappings Ambassador has configured for your Knative services by forwarding the admin endpoint of your deployment:

$ kubectl port-forward -n ambassador deployment/ambassador 8877
Forwarding from 127.0.0.1:8877 -> 8877
Forwarding from [::1]:8877 -> 8877

Then navigate to http://localhost:8877/ambassador/v0/diag/.

For Relay, we manage our Knative services by writing out a higher-level CRD that our operator processes. This lets us perform lifecycle management operations more efficiently. For example, we create and clean up webhook triggers and workflow runs in batches we call tenants. We get a ton of value from the combination of Tekton, Knative, Ambassador, and our own operator, with relatively little cluster resource overhead.
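Our tenant CRD is internal to Relay, but a hypothetical sketch of the shape such a resource could take (the group, kind, and fields below are invented for illustration, not Relay's actual schema) might be:

```yaml
# Hypothetical higher-level resource; not Relay's actual schema.
apiVersion: example.relay.sh/v1
kind: Tenant
metadata:
  name: customer-a
spec:
  # Webhook triggers and workflow runs are created and cleaned up as a batch
  triggers:
    - name: pagerduty-incidents
      image: relaysh/pagerduty-trigger-incident-triggered
  workflowRuns:
    - workflow: notify-on-incident
```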

Deployment Scenarios for Ambassador and Knative

The developer use cases for Knative fall into three categories:

  1. Replace glue/aggregation functions with Knative

    Function as a Service (FaaS) offerings have become popular as a way to deploy and run services that “glue” functionality together. The main challenge for development teams is that the workflow for deploying cloud-based FaaS differs from that for Kubernetes. If you’ve already invested in training engineers to work with Kubernetes, the added time and money required to train them on a separate FaaS workflow is hard to justify.

  2. Build smaller microservices as functions

    Some simple functions are event-driven and provisioning (or running) an entire microservice/app framework seems unnecessary. Knative provides “just enough” framework to deploy and manage the lifecycle of a very simple microservice or “nanoservice” using the primitives provided within modern language stacks.

  3. Deploy high-volume functions, cost-effectively

    Pay-as-you-go serverless offerings can be very cost-effective for certain use cases, but for longer-running or high-volume functions they become less practical. Running Knative on your own hardware, or on Kubernetes deployed on cloud VMs, makes execution costs easier to predict when you know a service will receive high-volume traffic.


Running on-demand microservices in your infrastructure is easier now than ever before. With the help of Knative and Ambassador, you can drastically reduce costs and resource utilization while maintaining clean separations of APIs across your environment.

At the frontier of cloud-native experience, Knative also unlocks some very exciting opportunities that haven’t been practical in conventional deployment environments. In this post, we explored low-trust user-defined workloads using custom containers as one example, but there are many more!

Next Steps

  1. Install the Ambassador Edge Stack.
  2. Install Knative with Ambassador.
  3. Read the documentation for using Knative and Ambassador.
  4. Contact the Ambassador team to learn more about using Ambassador with Knative.
  5. Sign up for Relay to try webhook triggers yourself.