Building AI tasks

Integrate generative AI into your applications. Compatible with OpenAI and Anthropic.
Airplane's AI built-ins let you integrate LLMs (large language models) into your tasks quickly and with minimal configuration. The Airplane SDKs provide a simple interface to send a message to an LLM, maintain a conversation with an AI assistant, or define an AI function that converts inputs to outputs by following a set of instructions.
Airplane AI built-ins provide utilities that support a wide range of use cases:
  • Chat: Send a single message to an LLM and receive a response. Useful for obtaining information, generating text, answering questions, and more.
  • ChatBot: Have a conversation with an LLM. Useful for building AI assistants or conversational experiences that maintain history and can build context over time.
  • Func: Create a reusable AI function that converts inputs to outputs by following a set of instructions. Useful for summarization, translation, classification, sentiment analysis, and more.
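The Chat and Func utilities each get a worked example below, but ChatBot does not, so here is an illustrative sketch only: the `airplane.ai.ChatBot` constructor and its `chat` method are assumptions in this snippet, not confirmed signatures — check the AI SDK reference for the actual interface.

```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "travel_assistant",
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async () => {
    // Assumed interface: a ChatBot object that keeps conversation history
    // between calls. Verify the constructor and method names against the
    // AI SDK reference before relying on this.
    const bot = new airplane.ai.ChatBot();
    const first = await bot.chat("When is the best season to visit Tokyo?");
    // Because the bot retains history, the follow-up can say "that season"
    // and rely on the earlier exchange for context.
    const followUp = await bot.chat("What should I pack for that season?");
    return { first, followUp };
  },
);
```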

Using Chat to talk to AI

The Chat function generates a response from an LLM. In this example, we use it to provide feedback on a SQL query: we define a task that takes a SQL parameter and returns feedback about the query. Note that we set the OPENAI_API_KEY environment variable to enable and configure use of the OpenAI API.
```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "critique_sql",
    parameters: { query: "sql" },
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async (params): Promise<string> => {
    const critique = await airplane.ai.chat(
      `You are a highly trained software engineer.
      Give feedback on the SQL query for clarity and correctness.
      SQL query: ${params.query}`,
    );

    return `Here's a critique of the provided SQL query: ${critique}`;
  },
);
```

Using Func to create an AI function

The Func utility creates a reusable AI function that converts inputs to outputs by following a set of instructions. This example creates an AI function that takes a list of feedback texts and returns the top request among them. It defines a task that accepts a CSV file as an upload parameter, parses the file, and returns the most requested feedback item.
```typescript
import airplane from "airplane";

const topRequest = await airplane.ai.func("Summarize the top request in the feedback", [
  {
    input: [
      // Provide an example input and output to help "train" the LLM.
      "I want more color options",
      "I don't like the shape of the chair",
      "I think we could use a larger variety of colors",
    ],
    output: "more colors", // The function will generate an output of the same native type.
  },
]);

export const summarize = airplane.task(
  {
    slug: "summarize",
    parameters: { feedback: "upload" },
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async (params): Promise<string> => {
    // Parse a CSV in the format of "customer_name,feedback" and strip the header.
    // Create an array of feedback texts.
    const file = await params.feedback.text();
    const lines = file.split("\n");
    lines.shift();
    const feedback = lines.map((line) => line.split(",")[1]);

    const { output, confidence } = await topRequest(feedback);
    return `The most requested feedback item is: ${output} (confidence: ${confidence})`;
  },
);
```
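The CSV-handling lines in the task above can be pulled out into a standalone helper. One caveat, added here rather than taken from the example: `split("\n")` produces an empty trailing element when the file ends with a newline, so filtering blank lines avoids `undefined` entries in the feedback array:

```typescript
// Extract the feedback column from a "customer_name,feedback" CSV,
// dropping the header row and any blank lines.
function feedbackTexts(csv: string): string[] {
  const lines = csv.split("\n");
  lines.shift(); // drop the "customer_name,feedback" header
  return lines
    .filter((line) => line.trim().length > 0)
    .map((line) => line.split(",")[1]);
}
```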

Configuration

To use AI built-ins, you must add an environment variable to your task with an OpenAI or Anthropic API key. All AI built-ins will error if neither of these environment variables is set:
  • To use OpenAI: Add an environment variable named OPENAI_API_KEY
  • To use Anthropic: Add an environment variable named ANTHROPIC_API_KEY
While not required, we recommend creating a secret config variable and referencing it in the task's environment variables instead of adding your API key directly to the task configuration. This lets you keep your API key hidden and update it in one place without editing all of your tasks.
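Concretely, the tasks above already follow this recommendation: their envVars entries reference a config variable by name. For contrast, here is the same entry with the key inlined (the `value` form follows the same envVars shorthand used in the examples above; treat the exact shape as something to confirm against the task configuration docs):

```typescript
// Recommended: reference a secret config variable by name.
envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },

// Avoid: inlining the key in the task definition, where it is visible
// to anyone who can read the task's code or configuration.
envVars: { OPENAI_API_KEY: { value: "sk-..." } },
```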

Logging

By default, AI built-ins will log the entire prompt and raw LLM response every time they make an API request. The prompt will be prefixed with AI Prompt: and the response will be prefixed with AI Response:.
The prompt will include all of the individual messages being sent to the LLM, with fields content and role. The content will be the raw string message content. The role describes the purpose of the specific message in the prompt and will be one of the following:
  • user for a user-created message (input to any of the AI SDK functions)
  • assistant for a message that was generated by the LLM, and is being used as input to another call (relevant for ChatBot)
  • system for a message that was generated by the SDK functions as a preamble to the user's input
The response will be the raw string returned by the LLM.
Logging is turned on by default, but can be turned off:
```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "ai",
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async () => {
    airplane.ai.setLogging(false); // Turn off logging
  },
);

SDK reference

See AI SDK reference for more information.