Running the agent on Kubernetes

Deploy a self-hosted Airplane agent on Kubernetes.
The Airplane agent can be quickly installed on Kubernetes using Helm. When installed, the agent will use Kubernetes APIs to run Airplane tasks and runbooks.

Installation guide

Install Helm

First, install Helm if you haven't yet. On macOS, you can install it with brew install helm. For other operating systems, see the Helm installation docs.
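For example, on macOS with Homebrew:

```bash
# Install Helm via Homebrew (macOS)
brew install helm

# Confirm the installation
helm version
```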

Configure

You'll need the following values:
  • YOUR_API_TOKEN: generate a new token by running airplane apikeys create <token name> from the Airplane CLI.
  • YOUR_TEAM_ID: get your team ID via airplane auth info or visit the Team Settings page.
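For example, using the Airplane CLI (the token name below is just a placeholder):

```bash
# Generate an API token for the agent
airplane apikeys create k8s-agent-token

# Print your team ID along with other auth details
airplane auth info
```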
Create a values.yaml configuration file:
```yaml
airplane:
  apiToken: YOUR_API_TOKEN
  teamID: YOUR_TEAM_ID

  # Change this to run pods in a different namespace
  runNamespace: default

  # Optional: Attach labels to agents for constraints
  agentLabels:
    orchestration: kubernetes

  # Optional: If set, only allows agents to execute runs
  # from the environment with the provided slug
  envSlug: ""
```
Keep track of your values.yaml file. You'll need it for subsequent upgrades. We recommend you store this in version control.
Setting the agentLabels key will add the provided labels to the agent for use in run constraints. For details on using labels, see Execute rules & constraints.
Setting the envSlug key will only permit the agent to execute runs from the given environment. For details on using environments, see Environments.
This example writes the API token inline into the values.yaml file. If you would like to avoid committing it directly into the file, see the Using a secret for the API token section below.

Install

Add the Airplane chart repo:
```bash
helm repo add airplane https://airplanedev.github.io/charts
```
Update your chart repos:
```bash
helm repo update
```
Install the Helm chart:
```bash
helm install -f values.yaml airplane airplane/airplane-agent
```
That's it! Visit app.airplane.dev/agents to check that the agents are up and running.
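You can also check from the cluster side that the agent started. The deployment name below is an assumption based on the chart name; adjust it to match your release:

```bash
# List the pods created by the chart in the install namespace
kubectl get pods

# Tail the agent logs to confirm it connected to the Airplane API
kubectl logs deploy/airplane-agent --tail=50
```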

Upgrading

To update your chart in the future (e.g. when a newer version of the agent is released), you can run helm upgrade:
```bash
helm upgrade -f values.yaml airplane airplane/airplane-agent
```

Using a secret for the API token

If you would prefer to use a Kubernetes secret for the API token and avoid writing it into a file, you can create a secret:
```bash
kubectl create secret generic airplane-secrets \
  --from-literal=token=YOUR_API_TOKEN
```
And reference it from the values.yaml file:
```yaml
airplane:
  apiTokenSecret:
    name: airplane-secrets
    key: token

  teamID: YOUR_TEAM_ID

  # Change this to run pods in a different namespace
  runNamespace: default
```
If you're applying this after the agent has already been installed, you can run an upgrade to apply the changes (see Upgrading).
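This is the same command shown in the Upgrading section:

```bash
helm upgrade -f values.yaml airplane airplane/airplane-agent
```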

Runner pod customizations

Once running and connected to the Airplane API, the agent spins up runner pods to execute tasks. The default configurations for these pods should be sufficient for the majority of use cases, but in some situations it may be necessary to modify the defaults, e.g. for security or performance reasons.

Setting a custom service account

By default, the agent runs all Airplane task pods with the namespace's default service account. You can change this for all tasks by setting the taskServiceAccountName parameter in the airplane section of the values.yaml file. This allows these pods to call the Kubernetes API with custom permissions, among other use cases.
Alternatively, to set a custom service account on a task-by-task basis, you can set the AIRPLANE_TASK_K8S_SERVICE_ACCOUNT_NAME environment variable in the associated task.
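As a minimal sketch, you could create a dedicated service account and grant it just the permissions your tasks need (the account name and RBAC rules below are illustrative only):

```bash
# Create a service account for Airplane task pods in the run namespace
kubectl create serviceaccount airplane-tasks --namespace default

# Example permission: allow task pods to read pods in that namespace
kubectl create role pod-reader --namespace default \
  --verb=get,list,watch --resource=pods
kubectl create rolebinding airplane-tasks-pod-reader --namespace default \
  --role=pod-reader --serviceaccount=default:airplane-tasks
```

You would then set taskServiceAccountName to airplane-tasks in the airplane section of values.yaml, or set AIRPLANE_TASK_K8S_SERVICE_ACCOUNT_NAME on an individual task.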

Configuring task CPU and memory

By default, each task runs in a container that requests 0.25 vCPU and 256MB of memory, with limits of 1.0 vCPU and 2GB respectively. To change the resources allocated for all task containers, update the defaultTaskResources parameter in the chart's values.yaml file.
Alternatively, to adjust these resources on a task-by-task basis, you can set the AIRPLANE_TASK_CPU and AIRPLANE_TASK_MEMORY environment variables in the associated tasks. The agent will set the container resource requests to these values and then adjust the default limits, if needed, so they're at least as high.
Note that these resource values, whether set in the agent configs or the environment, are interpreted as strings in Kubernetes quantity format. The latter supports several different units, but it's usually easiest to express CPU values in millicores (e.g., 500m for 0.5 cores, 2000m for 2 cores, etc.) and memory values in mebibytes (e.g., 256Mi) or gibibytes (e.g., 2Gi).

Custom pod patches

To allow changes beyond the service account and resource options described above, Airplane supports applying arbitrary patches to the agent runner pods. These patches are applied to the associated pod specs via the strategic merge algorithm described in the Kubernetes docs.
To apply a patch to all runner pods managed by an agent, set the airplane.runnerPodSpecPatch field in the chart values:
```yaml
airplane:
  runnerPodSpecPatch:
    # Example of adding a sidecar container
    containers:
      - name: sidecar
        image: "ubuntu:latest"
        command:
          - /bin/bash
          - "-c"
          - trap "echo exiting; exit 1" SIGHUP SIGINT SIGTERM; while true; do sleep 20; done
```
To apply a patch for runners of a specific task, you can set the runnerConfig field in the associated task definition:
```typescript
export default airplane.task(
  {
    slug: "my_task",
    runnerConfig: {
      k8sPodSpecPatch: {
        patch: {
          // Mount a volume in the main task container
          volumes: [
            {
              name: "cache-vol",
              emptyDir: {
                sizeLimit: "100Mi",
              },
            },
          ],
          containers: [
            {
              name: "task-container",
              volumeMounts: [
                {
                  mountPath: "/cache",
                  name: "cache-vol",
                },
              ],
            },
          ],
        },
      },
    },
  },
  async () => {...}
);
```

Using Terraform and Helm

You can optionally manage the Helm chart from Terraform using the helm_release resource.
You'll need to first add the Helm provider, if you haven't already:
```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
```
Then, add the helm_release resource:
```hcl
resource "helm_release" "airplane" {
  name       = "airplane-agent"
  repository = "https://airplanedev.github.io/charts"
  chart      = "airplane-agent"

  # Change namespace if desired:
  # namespace        = "default"
  # create_namespace = true

  timeout = "600"

  set {
    name  = "airplane.teamID"
    value = var.airplane_team_id
  }

  set {
    name  = "airplane.apiToken"
    value = var.airplane_api_token
  }

  # Alternatively, to use an existing secret for the
  # API token:
  #
  # set {
  #   name  = "airplane.apiTokenSecret.name"
  #   value = "airplane-secrets"
  # }
  # set {
  #   name  = "airplane.apiTokenSecret.key"
  #   value = "token"
  # }
}
```
See the docs on helm_release for more details.

Self-hosted storage in Kubernetes

You can enable self-hosted storage for Kubernetes-hosted agents with some additional steps. The procedure to follow depends on the environment where your Kubernetes cluster is running.

AWS EKS

If you're running in an EKS cluster, you can configure self-hosted storage by using a small Terraform module followed by the existing Airplane agents Helm chart.
First, ensure that your cluster has the AWS Load Balancer Controller installed. This is needed to bind the agent instances to the ALB target group created in Terraform.
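If the controller was installed with its standard Helm chart, you can sanity-check that it's running; the deployment name and namespace below assume the default installation:

```bash
kubectl get deployment -n kube-system aws-load-balancer-controller
```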
Then, configure the AWS storage Terraform module for your environment. At a minimum, the following inputs must be set:
```hcl
module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/aws//modules/storage"

  api_token         = "YOUR_API_TOKEN"
  team_id           = "YOUR_TEAM_ID"
  kube_cluster_name = "YOUR_CLUSTER_NAME"
  kube_namespace    = "YOUR_CLUSTER_NAMESPACE"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"

  agent_security_group_id = "YOUR_AGENT_SECURITY_GROUP_ID"

  # List of public subnet IDs for the agent server external
  # load balancer
  subnet_ids = ["PUBLIC_SUBNET_ID1", "PUBLIC_SUBNET_ID2"]
}
```
The security group ID is needed so that external requests can be routed to your agent instances. If you've set up your EKS cluster with the default settings, this is generally the same as the security group for the cluster nodes.
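If you're not sure which security group that is, one way to look up the EKS-managed cluster security group is via the AWS CLI:

```bash
aws eks describe-cluster --name YOUR_CLUSTER_NAME \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
  --output text
```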
Apply the module above and note down the following outputs:
  1. Storage bucket name
  2. Storage Redis address
  3. External target group ARN
  4. Agent IAM role ARN
Finally, configure the Helm chart according to the directions above, with the following extra parameters set in the values.yaml file:
```yaml
airplane:
  # ... parameters from above ... #

  storage:
    enabled: "true"
    mode: eks
    s3BucketName: "BUCKET_NAME_FROM_TERRAFORM"
    redisHost: "REDIS_ADDRESS_FROM_TERRAFORM"
    awsTargetGroupARN: "TARGET_GROUP_ARN_FROM_TERRAFORM"
    zoneSlug: "YOUR_ZONE_SLUG"

    # Optional: Enable self-hosted inputs in addition to logs and outputs
    acceptInputs: "true"

serviceAccount:
  annotations:
    "eks.amazonaws.com/role-arn": "AGENT_IAM_ROLE_ARN_FROM_TERRAFORM"
```
If you're managing your Helm chart via Terraform, then you can automatically feed the outputs of the storage module to the chart values as shown in the following example:
```hcl
data "aws_eks_cluster" "cluster" {
  name = "YOUR_CLUSTER_NAME"
}

provider "helm" {
  kubernetes {
    host = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(
      data.aws_eks_cluster.cluster.certificate_authority[0].data,
    )
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", "YOUR_CLUSTER_NAME"]
      command     = "aws"
    }
  }
}

module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/aws//modules/storage"

  api_token         = "YOUR_API_TOKEN"
  team_id           = "YOUR_TEAM_ID"
  kube_cluster_name = "YOUR_CLUSTER_NAME"
  kube_namespace    = "YOUR_CLUSTER_NAMESPACE"

  agent_security_group_id = "YOUR_AGENT_SECURITY_GROUP_ID"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"

  # List of public subnet IDs for the agent server external
  # load balancer
  subnet_ids = ["PUBLIC_SUBNET_ID1", "PUBLIC_SUBNET_ID2"]
}

resource "helm_release" "airplane_agent" {
  name             = "airplane-agent"
  namespace        = "YOUR_CLUSTER_NAMESPACE"
  create_namespace = true

  repository = "https://airplanedev.github.io/charts"
  chart      = "airplane-agent"

  set {
    name  = "airplane.apiToken"
    value = "YOUR_API_TOKEN"
  }

  set {
    name  = "airplane.runNamespace"
    value = "YOUR_CLUSTER_NAMESPACE"
  }

  set {
    name  = "airplane.teamID"
    value = "YOUR_TEAM_ID"
  }

  set {
    name  = "airplane.storage.enabled"
    value = "true"
  }

  set {
    name  = "airplane.storage.mode"
    value = "eks"
  }

  set {
    name  = "airplane.storage.s3BucketName"
    value = module.airplane_agent_storage.storage_bucket_name
  }

  set {
    name  = "airplane.storage.awsTargetGroupARN"
    value = module.airplane_agent_storage.target_group_arn
  }

  set {
    name  = "airplane.storage.redisHost"
    value = module.airplane_agent_storage.storage_redis_addr
  }

  set {
    name  = "airplane.storage.zoneSlug"
    value = "YOUR_ZONE_SLUG"
  }

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.airplane_agent_storage.agent_iam_role_arn
  }
}
```
Once you've applied the Terraform and Helm configs, follow the verification procedure described on the self-hosted storage page.

GCP GKE

If you're running in a GKE cluster, you can configure self-hosted storage by using a small Terraform module followed by the existing Airplane agents Helm chart.
First, configure the GKE storage Terraform module for your environment. At a minimum, the following inputs must be set:
```hcl
module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/google//modules/storage"

  project        = "YOUR_GCP_PROJECT_NAME"
  region         = "YOUR_GCP_REGION"
  api_token      = "YOUR_API_TOKEN"
  team_id        = "YOUR_TEAM_ID"
  kube_namespace = "YOUR_CLUSTER_NAMESPACE"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"
}
```
Apply the module above and note down the following outputs:
  1. Storage bucket name
  2. Storage Redis address
  3. Agent service account email
  4. Agent service IP address name
Finally, configure the Helm chart according to the directions above, with the following extra parameters set in the values.yaml file:
```yaml
airplane:
  # ... settings from above ... #

  storage:
    enabled: "true"
    mode: gke
    gcsBucketName: "BUCKET_NAME_FROM_TERRAFORM"
    redisHost: "REDIS_ADDRESS_FROM_TERRAFORM"
    gcpIPAddressName: "IP_ADDRESS_NAME_FROM_TERRAFORM"
    zoneSlug: "YOUR_ZONE_SLUG"

    # Optional: Enable self-hosted inputs in addition to logs and outputs
    acceptInputs: "true"

serviceAccount:
  annotations:
    "iam.gke.io/gcp-service-account": "AGENT_SERVICE_ACCOUNT_EMAIL_FROM_TERRAFORM"
```
If you're managing your Helm chart via Terraform, then you can automatically feed the outputs of the storage module to the chart values as shown in the following example:
```hcl
data "google_client_config" "default" {}

module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/google//modules/storage"

  project        = "YOUR_GCP_PROJECT_NAME"
  region         = "YOUR_GCP_REGION"
  api_token      = "YOUR_API_TOKEN"
  team_id        = "YOUR_TEAM_ID"
  kube_namespace = "YOUR_CLUSTER_NAMESPACE"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"
}

data "google_container_cluster" "cluster" {
  name     = "YOUR_CLUSTER_NAME"
  location = "YOUR_GCP_REGION"
}

provider "helm" {
  kubernetes {
    host  = "https://${data.google_container_cluster.cluster.endpoint}"
    token = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(
      data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
    )
  }
}

resource "helm_release" "airplane_agent" {
  name             = "airplane-agent"
  namespace        = "YOUR_CLUSTER_NAMESPACE"
  create_namespace = true

  repository = "https://airplanedev.github.io/charts"
  chart      = "airplane-agent"

  set {
    name  = "airplane.apiToken"
    value = "YOUR_API_TOKEN"
  }

  set {
    name  = "airplane.runNamespace"
    value = "YOUR_CLUSTER_NAMESPACE"
  }

  set {
    name  = "airplane.teamID"
    value = "YOUR_TEAM_ID"
  }

  set {
    name  = "airplane.storage.enabled"
    value = "true"
  }

  set {
    name  = "airplane.storage.mode"
    value = "gke"
  }

  set {
    name  = "airplane.storage.gcsBucketName"
    value = module.airplane_agent_storage.storage_bucket_name
  }

  set {
    name  = "airplane.storage.redisHost"
    value = module.airplane_agent_storage.storage_redis_addr
  }

  set {
    name  = "airplane.storage.gcpIPAddressName"
    value = module.airplane_agent_storage.agent_server_addr_name
  }

  set {
    name  = "airplane.storage.zoneSlug"
    value = "YOUR_ZONE_SLUG"
  }

  set {
    name  = "serviceAccount.annotations.iam\\.gke\\.io/gcp-service-account"
    value = module.airplane_agent_storage.service_account_email
  }
}
```

The Helm chart will create a ManagedCertificate resource for the external load balancer in your Kubernetes cluster. This certificate can take up to 30 minutes to fully provision. You can check on its status with kubectl:
```
$ kubectl get managedcertificates
NAME                    AGE   STATUS
airplane-managed-cert   10d   Active
```
Once the status is Active, you can continue to the verification procedure described on the self-hosted storage page.

Other Kubernetes setups

The Airplane agents Helm chart supports a "base" mode that generates an agent Kubernetes service but does not configure the associated ingress, DNS, or certificates required to run the full self-hosted storage product.
To use this, you need to:
  1. Create an AWS S3 bucket, a GCP GCS bucket, or an Azure Blob Storage account and container for long-term storage
  2. Provision a Redis instance that's accessible to the agents in your cluster; this can either be managed by your cloud provider or installed manually into your cluster, e.g. using Helm (see the sketch after this list)
  3. Pick a domain that your organization controls and that you can add DNS rules for (e.g., example.com).
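As one way to handle the Redis prerequisite, here's a sketch of installing Redis into the cluster with the Bitnami chart; treat it as a starting point and tune auth, persistence, and networking to your own standards:

```bash
# Add the Bitnami repo and install a basic Redis instance
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install airplane-redis bitnami/redis --namespace default
```

Note that the Bitnami chart enables authentication by default, so you'd typically point the agent at it using the redisURI or redisURISecret options shown below rather than a bare redisHost.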
Then, configure the Helm chart according to the directions above, with the following extra parameters set in the values.yaml file:
```yaml
airplane:
  # ... settings from above ... #

  storage:
    enabled: "true"
    mode: base

    # Note: Zone slugs should consist of lowercase letters and numbers only,
    # e.g. 'myzone'
    zoneSlug: "YOUR_ZONE_SLUG"

    # Configuration for connecting from the agent to redis. Must set exactly one of these three.
    #
    # (1) Redis host with port. The agent will communicate with the redis without AUTH or TLS.
    redisHost: "YOUR_REDIS_HOST:YOUR_REDIS_PORT"
    # (2) OR a full redis connection URI in the format documented in
    # https://github.com/lettuce-io/lettuce-core/wiki/Redis-URI-and-connection-details#uri-syntax.
    # This allows for AUTH and/or TLS.
    redisURI: "rediss://YOUR_REDIS_USERNAME:YOUR_REDIS_PASSWORD@YOUR_REDIS_HOST:YOUR_REDIS_TLS_PORT"
    # (3) OR a reference to a Kubernetes secret that contains the full redis URI.
    redisURISecret:
      name: "YOUR_REDIS_SECRET_NAME"
      key: "YOUR_REDIS_SECRET_KEY"

    externalServerURL: "https://YOUR_ZONE_SLUG.YOUR_DOMAIN"

    # Optional: Enable self-hosted inputs in addition to logs and outputs
    acceptInputs: "true"

    # Either (for AWS)
    s3BucketName: "YOUR_S3_BUCKET_NAME"

    # or (for GCP)
    gcsBucketName: "YOUR_GCS_BUCKET_NAME"

    # or (for Azure)
    absAccountName: "YOUR_ABS_ACCOUNT_NAME"
    absContainerName: "YOUR_ABS_CONTAINER_NAME"
```
How you configure your cloud credentials will vary based on your setup. Credentials can be:
  1. Automatically pulled from the cloud provider (if using EKS, GKE, or AKS with the appropriate identity provider settings)
  2. Set in the agent environment via the chart's airplane.extraEnvVars value
  3. Set via the airplane.storage.gcpAppCredentialsSecret parameter (for static Google credentials only); see the sketch after this list
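For the third option, a minimal sketch of creating such a secret; the secret name, key, and file path are placeholders, and the exact fields gcpAppCredentialsSecret expects should be checked against the chart's values reference:

```bash
# Store a static GCP service account key as a Kubernetes secret
kubectl create secret generic airplane-gcp-credentials \
  --from-file=credentials.json=/path/to/service-account-key.json
```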
Once the chart is applied, it will generate a Kubernetes service called airplane-agent-external in the target namespace. You then need to configure an ingress so that external requests to https://YOUR_ZONE_SLUG.YOUR_DOMAIN are routed to port 2190 on that service.
The server URL format is just a suggested convention; you can use a subdomain other than YOUR_ZONE_SLUG if that's more convenient for you. Just be sure to update the externalServerURL chart value accordingly.
The setup will vary, but if you're using cert-manager plus an ingress controller in your cluster, the configuration will look something like:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airplane-server-ingress
  annotations:
    cert-manager.io/cluster-issuer: "YOUR_CERT_ISSUER"
    kubernetes.io/ingress.class: "YOUR_INGRESS_CLASS"
spec:
  tls:
    - hosts:
        - YOUR_ZONE_SLUG.YOUR_DOMAIN
  rules:
    - host: YOUR_ZONE_SLUG.YOUR_DOMAIN
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: airplane-agent-external
                port:
                  number: 2190
```
Once you've applied the Kubernetes changes, follow the verification procedure described on the self-hosted storage page.