Running the agent on Kubernetes
Deploy a self-hosted Airplane agent on Kubernetes.
The Airplane agent can be quickly installed on Kubernetes using Helm. When installed, the agent will
use Kubernetes APIs to run Airplane tasks and runbooks.
Installation guide
Install Helm
First, install Helm if you haven't yet. On macOS, you can `brew install helm`. For other operating systems, see the Helm installation docs.

Configure
You'll need the following values:

- `YOUR_API_TOKEN`: generate a new token by running `airplane apikeys create <token name>` from the Airplane CLI.
- `YOUR_TEAM_ID`: get your team ID via `airplane auth info` or visit the Team Settings page.
Create a `values.yaml` configuration file:

```yaml
airplane:
  apiToken: YOUR_API_TOKEN
  teamID: YOUR_TEAM_ID

  # Change this to run pods in a different namespace
  runNamespace: default

  # Optional: Attach labels to agents for constraints
  agentLabels:
    orchestration: kubernetes

  # Optional: If set, only allows agents to execute runs
  # from the environment with the provided slug
  envSlug: ""
```
Keep track of your `values.yaml` file. You'll need it for subsequent upgrades. We recommend you store this in version control.

Setting the `agentLabels` key will add the provided labels to the agent for use in run constraints. For details on using labels, see Execute rules & constraints.

Setting the `envSlug` key will only permit the agent to execute runs from the given environment. For details on using environments, see Environments.

This example writes the API token inline into the `values.yaml` file. If you would like to avoid committing it directly into the file, see the Using secrets section below.
Install
Add the Airplane chart repo:

```bash
helm repo add airplane https://airplanedev.github.io/charts
```

Update your chart repos:

```bash
helm repo update
```

Install the Helm chart:

```bash
helm install -f values.yaml airplane airplane/airplane-agent
```
That's it! Visit app.airplane.dev/agents to check that the agents are up and running.
Upgrading
To update your chart in the future (e.g. when there are newer updates to the agent), you can run `helm upgrade`:

```bash
helm upgrade -f values.yaml airplane airplane/airplane-agent
```
Using secrets
If you would prefer to use Kubernetes secrets for the token and avoid writing it into a file, you can create a secret:

```bash
kubectl create secret generic airplane-secrets \
  --from-literal=token=YOUR_API_TOKEN
```
And reference it from the `values.yaml` file:

```yaml
airplane:
  apiTokenSecret:
    name: airplane-secrets
    key: token

  teamID: YOUR_TEAM_ID

  # Change this to run pods in a different namespace
  runNamespace: default
```
If you're applying this after having already created the agent, you can run an upgrade to apply the
changes—see Upgrading.
Setting a custom service account
By default, the agent runs all Airplane task pods with the namespace's default service account. You can change this by setting the `taskServiceAccountName` parameter in the `airplane` section of the `values.yaml` file. This allows these pods to hit the Kubernetes API with custom permissions, among other use cases.
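For example, a minimal `values.yaml` sketch (the `airplane-tasks` service account name is hypothetical; the account must already exist in the run namespace):

```yaml
airplane:
  apiToken: YOUR_API_TOKEN
  teamID: YOUR_TEAM_ID
  runNamespace: default

  # Run task pods under a custom service account instead of the
  # namespace's default one; "airplane-tasks" is a hypothetical name
  taskServiceAccountName: airplane-tasks
```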
Configuring task CPU and memory
By default, each task runs in a container that requests 0.25 vCPU and 256MB of memory, with limits of 1.0 vCPU and 2GB respectively. To change the resources allocated for all task containers, update the `defaultTaskResources` parameter in the `values.yaml` chart file.

Alternatively, to adjust these resources on a task-by-task basis, set the `AIRPLANE_TASK_CPU` and `AIRPLANE_TASK_MEMORY` environment variables in the associated tasks. The agent will set the container resource requests to these values and then adjust the default limits, if needed, so they're at least as high.

Note that these resource values, whether set in the agent configs or the environment, are interpreted as strings in Kubernetes quantity format. That format supports several different units, but it's usually easiest to express CPU values in millicores (e.g., `500m` for 0.5 cores, `2000m` for 2 cores) and memory values in either mebibytes (e.g., `256Mi`) or gibibytes (e.g., `2Gi`).
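As an illustration, a hedged `values.yaml` sketch of the agent-wide defaults, assuming `defaultTaskResources` sits under the `airplane` section alongside the other agent settings and accepts `cpu`/`memory` quantity strings (check the chart's values reference for the exact schema):

```yaml
airplane:
  # ... other settings from above ... #

  # Assumed shape: Kubernetes quantity strings used as the default
  # requests for every task container; limits are raised if needed
  # so they're at least as high
  defaultTaskResources:
    cpu: 500m
    memory: 512Mi
```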
Using Terraform and Helm
You can optionally use the Terraform `helm_release` resource to manage the Helm chart from Terraform. You'll first need to add the Helm provider, if you haven't already:

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
```
Then, add the `helm_release` resource:

```hcl
resource "helm_release" "airplane" {
  name       = "airplane-agent"
  repository = "https://airplanedev.github.io/charts"
  chart      = "airplane-agent"

  # Change namespace if desired:
  # namespace        = "default"
  # create_namespace = true

  timeout = "600"

  set {
    name  = "airplane.teamID"
    value = var.airplane_team_id
  }

  set {
    name  = "airplane.apiToken"
    value = var.airplane_api_token
  }

  # Alternatively, to use an existing secret for the
  # API token:
  #
  # set {
  #   name  = "airplane.apiTokenSecret.name"
  #   value = "airplane-secrets"
  # }
  # set {
  #   name  = "airplane.apiTokenSecret.key"
  #   value = "token"
  # }
}
```
See the docs on `helm_release` for more details.
Self-hosted storage in Kubernetes
You can enable self-hosted storage for Kubernetes-hosted agents with some
additional steps. The procedure to follow depends on the environment where your Kubernetes cluster
is running.
AWS EKS
If you're running in an EKS Kubernetes cluster, you can configure self-hosted storage by using a small Terraform module followed by the existing Airplane agents Helm chart.
First, ensure that your cluster has the
AWS Load Balancer Controller
installed. This is needed to bind the agent instances to the ALB target group created in Terraform.
Then, configure the
AWS storage Terraform module
for your environment. At a minimum, the following inputs must be set:
```hcl
module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/aws//modules/storage"

  api_token         = "YOUR_API_TOKEN"
  team_id           = "YOUR_TEAM_ID"
  kube_cluster_name = "YOUR_CLUSTER_NAME"
  kube_namespace    = "YOUR_CLUSTER_NAMESPACE"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"

  agent_security_group_id = "YOUR_AGENT_SECURITY_GROUP_ID"

  # List of public subnet IDs for the agent server external
  # load balancer
  subnet_ids = ["PUBLIC_SUBNET_ID1", "PUBLIC_SUBNET_ID2"]
}
```
The security group ID is needed so that external requests can be routed to your agent instances. If
you've set up your EKS cluster with the default settings, this is generally the same as the security
group for the cluster nodes.
Apply the module above and note down the following outputs:
- Storage bucket name
- Storage redis address
- External target group ARN
- Agent IAM role ARN
Finally, configure the Helm chart according to the directions above, with the following extra parameters set in the `values.yaml` file:

```yaml
airplane:
  # ... parameters from above ... #

  storage:
    enabled: "true"
    mode: eks
    s3BucketName: "BUCKET_NAME_FROM_TERRAFORM"
    redisHost: "REDIS_ADDRESS_FROM_TERRAFORM"
    awsTargetGroupARN: "TARGET_GROUP_ARN_FROM_TERRAFORM"
    zoneSlug: "YOUR_ZONE_SLUG"

serviceAccount:
  annotations:
    "eks.amazonaws.com/role-arn": "AGENT_IAM_ROLE_ARN_FROM_TERRAFORM"
```
If you're managing your Helm chart via Terraform, then you can automatically feed the outputs of the
storage module to the chart values as shown in the following example:
```hcl
data "aws_eks_cluster" "cluster" {
  name = "YOUR_CLUSTER_NAME"
}

provider "helm" {
  kubernetes {
    host = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(
      data.aws_eks_cluster.cluster.certificate_authority[0].data,
    )
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", "YOUR_CLUSTER_NAME"]
      command     = "aws"
    }
  }
}

module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/aws//modules/storage"

  api_token         = "YOUR_API_TOKEN"
  team_id           = "YOUR_TEAM_ID"
  kube_cluster_name = "YOUR_CLUSTER_NAME"
  kube_namespace    = "YOUR_CLUSTER_NAMESPACE"

  agent_security_group_id = "YOUR_AGENT_SECURITY_GROUP_ID"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"

  # List of public subnet IDs for the agent server external
  # load balancer
  subnet_ids = ["PUBLIC_SUBNET_ID1", "PUBLIC_SUBNET_ID2"]
}

resource "helm_release" "airplane_agent" {
  name             = "airplane-agent"
  namespace        = "YOUR_CLUSTER_NAMESPACE"
  create_namespace = true

  repository = "https://airplanedev.github.io/charts"
  chart      = "airplane-agent"

  set {
    name  = "airplane.apiToken"
    value = "YOUR_API_TOKEN"
  }

  set {
    name  = "airplane.runNamespace"
    value = "YOUR_CLUSTER_NAMESPACE"
  }

  set {
    name  = "airplane.teamID"
    value = "YOUR_TEAM_ID"
  }

  set {
    name  = "airplane.storage.enabled"
    value = "true"
  }

  set {
    name  = "airplane.storage.mode"
    value = "eks"
  }

  set {
    name  = "airplane.storage.s3BucketName"
    value = module.airplane_agent_storage.storage_bucket_name
  }

  set {
    name  = "airplane.storage.awsTargetGroupARN"
    value = module.airplane_agent_storage.target_group_arn
  }

  set {
    name  = "airplane.storage.redisHost"
    value = module.airplane_agent_storage.storage_redis_addr
  }

  set {
    name  = "airplane.storage.zoneSlug"
    value = "YOUR_ZONE_SLUG"
  }

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.airplane_agent_storage.agent_iam_role_arn
  }
}
```
Once you've applied the Terraform and Helm configs, follow the verification procedure described on
the self-hosted storage page.
GCP GKE
If you're running in a GKE Kubernetes cluster, you can
configure self-hosted storage by using a small Terraform module followed by the existing Airplane
agents Helm chart.
First, configure the
GKE storage Terraform module
for your environment. At a minimum, the following inputs must be set:
```hcl
module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/google//modules/storage"

  project        = "YOUR_GCP_PROJECT_NAME"
  region         = "YOUR_GCP_REGION"
  api_token      = "YOUR_API_TOKEN"
  team_id        = "YOUR_TEAM_ID"
  kube_namespace = "YOUR_CLUSTER_NAMESPACE"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"
}
```
Apply the module above and note down the following outputs:
- Storage bucket name
- Storage redis address
- Agent service account email
- Agent service IP address name
Finally, configure the Helm chart according to the directions above, with the following extra parameters set in the `values.yaml` file:

```yaml
airplane:
  # ... settings from above ... #

  storage:
    enabled: "true"
    mode: gke
    gcsBucketName: "BUCKET_NAME_FROM_TERRAFORM"
    redisHost: "REDIS_ADDRESS_FROM_TERRAFORM"
    gcpIPAddressName: "IP_ADDRESS_NAME_FROM_TERRAFORM"
    zoneSlug: "YOUR_ZONE_SLUG"

serviceAccount:
  annotations:
    "iam.gke.io/gcp-service-account": "AGENT_SERVICE_ACCOUNT_EMAIL_FROM_TERRAFORM"
```
If you're managing your Helm chart via Terraform, then you can automatically feed the outputs of the
storage module to the chart values as shown in the following example:
```hcl
data "google_client_config" "default" {}

module "airplane_agent_storage" {
  source = "airplanedev/airplane-agents/google//modules/storage"

  project        = "YOUR_GCP_PROJECT_NAME"
  region         = "YOUR_GCP_REGION"
  api_token      = "YOUR_API_TOKEN"
  team_id        = "YOUR_TEAM_ID"
  kube_namespace = "YOUR_CLUSTER_NAMESPACE"

  # Slug for your zone (lowercase letters and numbers only,
  # e.g. 'myzone')
  agent_storage_zone_slug = "YOUR_ZONE_SLUG"
}

data "google_container_cluster" "cluster" {
  name     = "YOUR_CLUSTER_NAME"
  location = "YOUR_GCP_REGION"
}

provider "helm" {
  kubernetes {
    host  = "https://${data.google_container_cluster.cluster.endpoint}"
    token = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(
      data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
    )
  }
}

resource "helm_release" "airplane_agent" {
  name             = "airplane-agent"
  namespace        = "YOUR_CLUSTER_NAMESPACE"
  create_namespace = true

  repository = "https://airplanedev.github.io/charts"
  chart      = "airplane-agent"

  set {
    name  = "airplane.apiToken"
    value = "YOUR_API_TOKEN"
  }

  set {
    name  = "airplane.runNamespace"
    value = "YOUR_CLUSTER_NAMESPACE"
  }

  set {
    name  = "airplane.teamID"
    value = "YOUR_TEAM_ID"
  }

  set {
    name  = "airplane.storage.enabled"
    value = "true"
  }

  set {
    name  = "airplane.storage.mode"
    value = "gke"
  }

  set {
    name  = "airplane.storage.gcsBucketName"
    value = module.airplane_agent_storage.storage_bucket_name
  }

  set {
    name  = "airplane.storage.redisHost"
    value = module.airplane_agent_storage.storage_redis_addr
  }

  set {
    name  = "airplane.storage.gcpIPAddressName"
    value = module.airplane_agent_storage.agent_server_addr_name
  }

  set {
    name  = "airplane.storage.zoneSlug"
    value = "YOUR_ZONE_SLUG"
  }

  set {
    name  = "serviceAccount.annotations.iam\\.gke\\.io/gcp-service-account"
    value = module.airplane_agent_storage.service_account_email
  }
}
```
The Helm chart will create a `ManagedCertificate` resource for the external load balancer in your Kubernetes cluster. This certificate can take up to 30 minutes to fully provision. You can check on the status by using the `kubectl` command line:

```
$ kubectl get managedcertificates
NAME                    AGE   STATUS
airplane-managed-cert   10d   Active
```
Once the status is `Active`, you can continue to the verification procedure described on the self-hosted storage page.
Other Kubernetes setups
The Airplane agents Helm chart supports a "base" mode that will generate
an agent Kubernetes service, but not configure the associated ingress, DNS, or certificates required
to run the full self-hosted storage product.
To use this, you need to:
- Create either an AWS S3 bucket, GCP GCS bucket, or Azure Blob Store account and container for long-term storage
- Provision a Redis instance that's accessible to the agents in your cluster; this can either be managed by your cloud provider or installed manually into your cluster, e.g. using Helm
- Pick a domain that your organization controls and that you can add DNS rules for (e.g., `example.com`)
Then, configure the Helm chart according to the directions above, with the following extra parameters set in the `values.yaml` file:

```yaml
airplane:
  # ... settings from above ... #

  storage:
    enabled: "true"
    mode: base

    redisHost: "YOUR_REDIS_ADDRESS"
    domain: "YOUR_DOMAIN"

    # Note: Zone slugs should consist of lowercase letters and numbers only,
    # e.g. 'myzone'
    zoneSlug: "YOUR_ZONE_SLUG"

    # Either (for AWS)
    s3BucketName: "YOUR_S3_BUCKET_NAME"

    # or (for GCP)
    gcsBucketName: "YOUR_GCS_BUCKET_NAME"

    # or (for Azure)
    absAccountName: "YOUR_ABS_ACCOUNT_NAME"
    absContainerName: "YOUR_ABS_CONTAINER_NAME"
```
Configuration of your cloud credentials will vary based on your setup. These can either be:
- Automatically pulled from the cloud provider (if using EKS, GKE, or AKS with the appropriate identity provider settings)
- Set in the agent environment via the chart `airplane.extraEnvVars` value, as sketched below
- Set via the `airplane.storage.gcpAppCredentialsSecret` parameter (for static Google credentials only)
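For example, if you're supplying AWS credentials through the agent environment, a hypothetical `values.yaml` sketch could look like the following. The list-of-env-vars shape of `airplane.extraEnvVars` and the `airplane-aws-credentials` secret name are assumptions; check the chart's values reference, and prefer referencing a Kubernetes secret over inlining credentials:

```yaml
airplane:
  # ... settings from above ... #

  # Assumed shape: a list of Kubernetes-style environment variable entries
  extraEnvVars:
    - name: AWS_ACCESS_KEY_ID
      value: "YOUR_ACCESS_KEY_ID"
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: airplane-aws-credentials # hypothetical secret name
          key: secret-access-key
```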
Once the chart is applied, it will generate a Kubernetes service called `airplane-agent-external` in the target namespace. You then need to configure an ingress so that external requests to `https://YOUR_ZONE_SLUG.YOUR_TEAM_ID.YOUR_DOMAIN` are routed to port `2190` on that service.

The setup will vary, but if you're using cert-manager plus an ingress controller in your cluster, the configuration will look something like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airplane-server-ingress
  annotations:
    cert-manager.io/cluster-issuer: "YOUR_CERT_ISSUER"
    kubernetes.io/ingress.class: "YOUR_INGRESS_CLASS"
spec:
  tls:
    - hosts:
        - YOUR_ZONE_SLUG.YOUR_TEAM_ID.YOUR_DOMAIN
  rules:
    - host: YOUR_ZONE_SLUG.YOUR_TEAM_ID.YOUR_DOMAIN
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: airplane-agent-external
                port:
                  number: 2190
```
Once you've applied the Kubernetes changes, follow the verification procedure described on the
self-hosted storage page.