Relay Security Overview

Introduction

We’re committed to the security of our products at Puppet. We frequently review our security program to address rapidly changing technologies and the constantly evolving cyber threat landscape. Secure development of our products is the first step in securing customer data. Our application security program includes threat modeling, manual code review, dynamic code testing, regular vulnerability scans, and web penetration testing. Relay stores your data securely in Google Cloud Platform (GCP).

Please keep in mind as you review the security information below that the most effective way to minimize security exposure is to avoid storing unnecessary sensitive data in the first place.

Contents

  1. Physical Security
  2. System Security
  3. Operational Security
  4. Application Security
  5. Incident Reporting and Ongoing Improvements

Physical Security

Relay production data is processed and stored within world-renowned third-party data centers, primarily Google Cloud Platform (GCP), which uses a layered security model, including safeguards like:

  • Custom-designed electronic access cards
  • Alarms
  • Vehicle access barriers
  • Perimeter fencing
  • Metal detectors
  • Biometrics
  • Laser beam intrusion detection

The data centers are monitored 24/7 by high-resolution interior and exterior cameras that can detect and track intruders. Access logs, activity records, and camera footage are available in case an incident occurs. Data centers are also routinely patrolled by experienced security guards who have undergone rigorous background checks and training. As you get closer to the data center floor, security measures also increase. Access to the data center floor is only possible via a security corridor which implements multi-factor access control using security badges and biometrics. Only approved employees with specific roles may enter.

All details for GCP can be found here: https://cloud.google.com/security/overview/whitepaper. A full list of the cloud providers used to maintain security and provide services within Relay is available on request.

System Security

Servers and Networking

Relay's production software runs in a Google Kubernetes Engine (GKE) cluster in GCP. We ensure the cluster remains up to date and observe all recommended actions posted in GKE security bulletins. Additional hosted services that we use, such as Google Cloud Storage, are comprehensively hardened Google infrastructure-as-a-service (IaaS) platforms.

Our web servers encrypt data in transit using the strongest grade of HTTPS (TLS 1.2 or higher) so that requests are protected from eavesdroppers and man-in-the-middle attacks. Our TLS certificates are 2048-bit RSA, signed with SHA-256.
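
As an illustration, you can verify from a client that only TLS 1.2 or newer is negotiated and inspect the server certificate. The sketch below uses only Python's standard library; the hostname is a placeholder for whichever Relay endpoint you connect to, not a documented API address.

```python
import socket
import ssl

HOST = "app.relay.sh"  # placeholder endpoint; substitute the host you actually connect to

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # Print the negotiated protocol version and the certificate subject
        print("protocol:", tls.version())
        print("subject:", dict(pair[0] for pair in tls.getpeercert()["subject"]))
```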

The nodes our internal services run on are never exposed to the public internet. Direct access to internal Kubernetes nodes is only possible through an audited bastion host.

Storage

All persistent data is encrypted at rest using AES-128 or similarly strong standards. While Relay by Puppet has not yet undergone a third-party audit for SOC 2 or ISO 27001, our security program aligns with the controls present in those frameworks, and we use cloud hosting providers that have successfully completed third-party audits or certifications such as ISO 27001, SSAE-16, SOC 1, SOC 2, and SOC 3.

Operational Security

Employee Equipment

Employee computers have strong passwords, encrypted disks, and mobile device management, and their inbound and outbound network traffic is monitored 24/7. Sophos is used to defend against malware.

Employee Access

We follow the principle of least privilege both in how we write software and in the level of access employees use when diagnosing and resolving problems in our software or responding to customer support requests.

We use Google account infrastructure to verify employee identity and require physical two-factor authentication for all internal applications, without exception. Access to administrative interfaces additionally enforces administrator permissions where applicable, and all administrative access is logged.

Code Reviews and Production Sign Off

All changes to source code destined for production systems are subject to pre-commit code review by a qualified engineering peer, covering security, performance, and potential for abuse.

Prior to updating production services, all contributors to the updated software version are required to verify that their changes work as intended on staging servers.

Application Security

Client and Server Hardening

Exposed API endpoints are tested at specified intervals as part of our security review process.

Request-handling code paths include frequent user re-authorization checks, payload size restrictions, rate limiting where appropriate, and other request verification and validation techniques. All requests are logged and made searchable to operations staff.
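
To make these request-verification techniques concrete, here is a minimal, illustrative sketch of a payload-size cap combined with a per-client token-bucket rate limiter. It is not Relay's actual middleware, and the limits shown are assumptions chosen for the example.

```python
import time
from collections import defaultdict

MAX_PAYLOAD_BYTES = 1 << 20   # hypothetical 1 MiB request body cap
RATE_PER_SECOND = 5           # hypothetical sustained request rate per client
BURST = 10                    # hypothetical burst allowance

_buckets = defaultdict(lambda: {"tokens": BURST, "updated": time.monotonic()})

def allow_request(client_id: str, payload: bytes) -> bool:
    """Return True if the request passes both the size cap and the rate limit."""
    if len(payload) > MAX_PAYLOAD_BYTES:
        return False

    bucket = _buckets[client_id]
    now = time.monotonic()
    # Refill tokens based on elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["updated"]) * RATE_PER_SECOND)
    bucket["updated"] = now

    if bucket["tokens"] < 1:
        return False
    bucket["tokens"] -= 1
    return True

print(allow_request("client-123", b"{}"))  # True until the bucket is drained
```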

Client code is vetted using several testing methods to ensure that best practices and industry standards are observed and implemented. Some of the standards considered in testing are:

  • OWASP Top 10
  • OWASP Application Security Verification Standard (ASVS) 4.0
  • Common Weakness Enumeration (CWE)
  • Cloud Controls Matrix (CCM)

API and Connections

All secrets are stored in a hosted Vault instance. To prevent unnecessary exposure, secret values are never displayed again after they have been saved.
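
The write-only pattern can be illustrated with the hvac Python client against a Vault KV v2 mount: user-facing code writes secrets and can confirm that they exist, but never reads values back. The URL, environment variables, and path scheme below are assumptions for the example, not Relay's actual configuration.

```python
import os
import hvac

# Assumed connection details; Relay's actual Vault configuration is not shown here.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
    token=os.environ.get("VAULT_TOKEN"),
)

def store_connection_secret(account_id: str, name: str, value: str) -> None:
    """Write a secret; nothing in this module ever reads the value back."""
    client.secrets.kv.v2.create_or_update_secret(
        path=f"accounts/{account_id}/{name}",  # hypothetical path scheme
        secret={"value": value},
    )

def secret_exists(account_id: str, name: str) -> bool:
    """Confirm existence for the UI without ever returning the stored value."""
    try:
        client.secrets.kv.v2.read_secret_metadata(path=f"accounts/{account_id}/{name}")
        return True
    except hvac.exceptions.InvalidPath:
        return False
```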

Connections with other applications are all opt-in and authenticate via the mechanisms required by the third-party application. Connections can be disabled at any time.

We take extra measures to ensure that no secret data passes through payloads that are logged by the cloud provider.
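
One common way to enforce this kind of guarantee is a redaction filter applied before log records leave the process. The sketch below is a generic Python logging filter, not Relay's implementation; the patterns it masks are assumptions for the example.

```python
import logging
import re

# Hypothetical patterns for values that must never reach a log sink.
REDACT_PATTERNS = [
    re.compile(r"(?i)(authorization:\s*).+"),
    re.compile(r"(?i)(token=)\S+"),
]

class RedactSecrets(logging.Filter):
    """Mask anything that looks like a credential before the record is emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in REDACT_PATTERNS:
            message = pattern.sub(r"\1[REDACTED]", message)
        record.msg, record.args = message, None
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("relay.example")
logger.addFilter(RedactSecrets())
logger.info("calling webhook with Authorization: Bearer abc123")
# Logged as: calling webhook with Authorization: [REDACTED]
```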

Multi-tenancy and Sandboxing

We run Relay workflows on a set of Kubernetes nodes physically separate from the rest of our infrastructure. However, workflows from multiple Relay accounts may run concurrently on the same infrastructure. To keep account data secure and the nodes safe when running untrusted code, we use a layered approach (an illustrative sketch of the per-namespace isolation objects follows this list):

  • Each workflow has a dedicated Kubernetes namespace that contains all of the information needed to run the workflow and handle events. Kubernetes objects used by a workflow are never shared with other workflows or accounts.
  • A workflow namespace is configured using Linux cgroups to prevent one workflow from consuming too many resources on any given node.
  • A workflow namespace is configured with a restrictive firewall to prevent inter-pod and inter-namespace communication. When we run a container as part of a workflow, it only has network access to the internet, never other internal infrastructure.
  • Every customer container is run in a gVisor sandbox. gVisor intercepts system calls and reimplements or restricts access to potentially vulnerable host kernel APIs and devices, greatly reducing the threat surface area for local privilege escalation attacks.
  • Access to configured secret data is on an as-needed basis. When a workflow component executes in a container, credentials to access secrets are encrypted and attached to its pod. Requests to decrypt secrets are made through our metadata service on demand, and secret data is never persisted or cached outside of Vault.
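
The sketch below uses the official Kubernetes Python client to show the kind of per-namespace objects described above: a default-deny ingress NetworkPolicy and a LimitRange that caps container resources (the mechanism Kubernetes enforces via Linux cgroups). The namespace name and the limits are assumptions for illustration, not Relay's actual manifests.

```python
from kubernetes import client, config

NAMESPACE = "workflow-run-example"  # hypothetical per-workflow namespace name

config.load_kube_config()

# Deny all inbound pod-to-pod traffic within the workflow namespace.
deny_ingress = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # selects every pod in the namespace
        policy_types=["Ingress"],
        ingress=[],                             # no rules: all ingress is denied
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(NAMESPACE, deny_ingress)

# Cap per-container CPU and memory so one workflow cannot starve a node.
limits = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="workflow-container-limits"),
    spec=client.V1LimitRangeSpec(
        limits=[
            client.V1LimitRangeItem(
                type="Container",
                default={"cpu": "500m", "memory": "256Mi"},  # hypothetical defaults
                max={"cpu": "1", "memory": "1Gi"},           # hypothetical hard caps
            )
        ]
    ),
)
client.CoreV1Api().create_namespaced_limit_range(NAMESPACE, limits)
```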

Log Retention

We encrypt all logs produced by workflow components, including webhook triggers and workflow steps. Generally, for auditing purposes and customer convenience, we store logs for the duration of your use of Relay. However, upon request, we will delete the cryptographic keys used to encrypt log data on a per-account, per-workflow, or per-workflow-run basis.
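
The deletion guarantee works because the logs are encrypted: once the key for a given scope is destroyed, the remaining ciphertext is unrecoverable. A minimal sketch of that pattern (often called crypto-shredding) is shown below using AES-GCM from the `cryptography` package; it is illustrative only and does not reflect Relay's key-management service.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical in-memory key store, one key per workflow run.
_keys: dict[str, bytes] = {}

def encrypt_log(run_id: str, line: bytes) -> bytes:
    key = _keys.setdefault(run_id, AESGCM.generate_key(bit_length=128))
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, line, None)

def decrypt_log(run_id: str, blob: bytes) -> bytes:
    key = _keys[run_id]  # raises KeyError if the key has been shredded
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def shred(run_id: str) -> None:
    """Destroy the key; every log line encrypted under it becomes unreadable."""
    _keys.pop(run_id, None)

blob = encrypt_log("run-42", b"step completed")
print(decrypt_log("run-42", blob))  # b'step completed'
shred("run-42")
# decrypt_log("run-42", blob) would now fail: the key no longer exists.
```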

Incident Reporting and Ongoing Improvements

Puppet monitors for malicious activity such as attempted intrusions, excessive login attempts, and malicious code injection, and is on call 24/7 to respond to security alerts and incidents.

Puppet has a vulnerability submission policy. You can read more details about our program and the rules of engagement at https://puppet.com/security.

If you have a security concern or are aware of an incident, please send an email to security@puppet.com, a carefully controlled and monitored email account.