Building AI tasks
Integrate generative AI into your applications. Compatible with OpenAI and Anthropic.
Airplane's AI built-ins let you integrate with LLMs (large language models) with minimal configuration. The Airplane SDKs provide a simple interface to send a message to an LLM, maintain a conversation with an AI assistant, or define an AI function that converts inputs to outputs by following a set of instructions.
Airplane AI built-ins provide utilities to support a large number of use cases:
- `Chat`: Send a single message to an LLM and receive a response. Useful for obtaining information, generating text, answering questions, and more.
- `ChatBot`: Have a conversation with an LLM. Useful for building AI assistants or conversational experiences that maintain history and can build context over time.
- `Func`: Create a reusable AI function that converts inputs to outputs by following a set of instructions. Useful for summarization, translation, classification, sentiment analysis, and more.
Using Chat to talk to AI
The `Chat` function can be used to generate a response from an LLM. In this example, we use it to provide feedback on a SQL query: we define a task that takes in a SQL parameter and returns feedback about the query. Note that we set the `OPENAI_API_KEY` environment variable to enable and configure use of the OpenAI API.

```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "critique_sql",
    parameters: { query: "sql" },
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async (params): Promise<string> => {
    const critique = await airplane.ai.chat(
      `You are a highly trained software engineer.
Give feedback on the SQL query for clarity and correctness.
SQL query: ${params.query}`,
    );

    return `Here's a critique of the provided SQL query: ${critique}`;
  },
);
```
```python
import airplane

@airplane.task(
    env_vars=[
        airplane.EnvVar(name="OPENAI_API_KEY", config_var_name="OPENAI_API_KEY"),
    ],
)
def critique_sql(query: airplane.SQL) -> str:
    critique = airplane.ai.chat(
        f"""You are a highly trained software engineer.
Give feedback on the SQL query for clarity and correctness.
SQL query: {query}""",
    )
    return f"Here's a critique of the provided SQL query: {critique}"
```
Using Func to create an AI function
The `Func` utility can be used to create a reusable AI function that converts inputs to outputs by following a set of instructions. This example creates an AI function that takes in a list of feedback texts and returns the top request in the feedback. It defines a task that takes in a file parameter for a CSV file, parses the file, and returns the most requested feedback item.

```typescript
import airplane from "airplane";

const summarizeFeedback = await airplane.ai.func("Summarize the top request in the feedback", [
  {
    input: [
      // Provide an example input and output to help "train" the LLM.
      "I want more color options",
      "I don't like the shape of the chair",
      "I think we could use a larger variety of colors",
    ],
    output: "more colors", // The function will generate an output of the same native type.
  },
]);

export const summarize = airplane.task(
  {
    slug: "summarize",
    parameters: { feedback: "upload" },
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async (params): Promise<string> => {
    // Parse a CSV in the format of "customer_name,feedback" and strip the header.
    // Create an array of feedback texts.
    const file = await params.feedback.text();
    const lines = file.split("\n");
    lines.shift();
    const feedback = lines.map((line) => line.split(",")[1]);

    const { output, confidence } = await summarizeFeedback(feedback);
    return `The most requested feedback item is: ${output} (confidence: ${confidence})`;
  },
);
```
```python
import requests

import airplane

summary = airplane.ai.Func(
    "Summarize the top request in the feedback",
    [(
        [  # Provide an example input and output to help "train" the LLM.
            "I want more color options",
            "I don't like the shape of the chair",
            "I think we could use a larger variety of colors",
        ],
        "more colors",  # The function will generate an output of the same native type.
    )],
    model="gpt-4",
)

@airplane.task(
    env_vars=[
        airplane.EnvVar(name="OPENAI_API_KEY", config_var_name="OPENAI_API_KEY"),
    ],
)
def summarize(feedback: airplane.File) -> str:
    # Parse a CSV in the format of "customer_name,feedback" and strip the header.
    # Create a list of feedback texts.
    response = requests.get(feedback.url)
    lines = response.text.split("\n")
    feedback_texts = [line.split(",")[1] for line in lines[1:] if line]

    output, confidence = summary(feedback_texts)
    return f"The most requested feedback item is: {output} (confidence: {confidence})"
```
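One caveat with the parsing in the examples above: splitting each line on `,` breaks if a feedback text itself contains a comma inside a quoted field. A sketch of a more robust parser using Python's standard `csv` module, assuming the same `customer_name,feedback` layout with a header row (the `parse_feedback` helper name is our own, not part of the SDK):

```python
import csv
import io

def parse_feedback(csv_text: str) -> list[str]:
    """Extract the feedback column from "customer_name,feedback" CSV text.

    Unlike a naive line.split(","), csv.reader handles quoted fields that
    contain commas. Assumes a header row, which is skipped.
    """
    reader = csv.reader(io.StringIO(csv_text))
    next(reader, None)  # skip the header row
    return [row[1] for row in reader if len(row) > 1]
```

In the tasks above, this helper could replace the manual `split`-based parsing without changing the rest of the logic.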
Configuration
To use AI built-ins, you must add an environment variable to your task with an OpenAI or Anthropic API key. All AI built-ins will error if neither of these environment variables is set:
- To use OpenAI: add an environment variable named `OPENAI_API_KEY`
- To use Anthropic: add an environment variable named `ANTHROPIC_API_KEY`
While not required, we recommend creating a secret config variable and referencing it in the task's environment variables, instead of adding your API key directly to the task configuration. This keeps your API key hidden and lets you update it without having to update all of your tasks.
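Because the built-ins error when neither key is set, it can be useful to fail fast in your own task code before making any AI calls. A minimal sketch of such a pre-flight check (our own helper, not the SDK's actual implementation):

```python
def pick_ai_key(env: dict) -> str:
    """Return the name of whichever provider key is set in env.

    Mirrors the rule above: one of OPENAI_API_KEY or ANTHROPIC_API_KEY must
    be present and non-empty, otherwise the AI built-ins will error.
    """
    for name in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
        if env.get(name):
            return name
    raise RuntimeError(
        "AI built-ins require OPENAI_API_KEY or ANTHROPIC_API_KEY to be set"
    )
```

In a task you might call `pick_ai_key(dict(os.environ))` at the top of the handler to surface a clear error message early.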
Logging
By default, AI built-ins will log the entire prompt and raw LLM response every time they make an API request. The prompt will be prefixed with `AI Prompt:` and the response will be prefixed with `AI Response:`.

The prompt will include all of the individual messages being sent to the LLM, each with `content` and `role` fields. The `content` is the raw string message content. The `role` describes the purpose of the specific message in the prompt and will be one of the following:

- `user` for a user-created message (input to any of the AI SDK functions)
- `assistant` for a message that was generated by the LLM and is being used as input to another call (relevant for `ChatBot`)
- `system` for a message that was generated by the SDK functions as a preamble to the user's input
The response (prefixed with `AI Response:`) will be the raw string response from the LLM.

Logging is turned on by default, but can be turned off:
```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "ai",
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async () => {
    airplane.ai.setLogging(false); // Turn off logging
  },
);
```
```python
import airplane

@airplane.task(
    env_vars=[
        airplane.EnvVar(name="OPENAI_API_KEY", config_var_name="OPENAI_API_KEY"),
    ],
)
def ai():
    airplane.ai.logging = False  # Turn off logging
```
SDK reference
See AI SDK reference for more information.