AI
Chat
```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "company_tagline",
    parameters: { description: "shorttext" },
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async (params): Promise<string> => {
    const tagline = await airplane.ai.chat(
      `Write a company tagline given this description: ${params.description}`,
    );
    return `Here's a tagline: ${tagline}`;
  },
);
```
Parameters:
- message: The message to send to the LLM.
- model: The model to use.
- temperature: The temperature setting for the LLM. Defaults to 0. Lower temperatures result in less variable output. Valid values are between 0 and 1.

Returns: The response from the LLM.
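The temperature constraint described above can be sketched as a small validation helper; validate_temperature is a hypothetical function for illustration, not part of the SDK:

```python
def validate_temperature(temperature: float = 0.0) -> float:
    """Validate an LLM temperature setting as described above.

    Defaults to 0. Values must fall between 0 and 1, where lower
    values produce less variable output.
    """
    if not 0.0 <= temperature <= 1.0:
        raise ValueError(f"temperature must be between 0 and 1, got {temperature}")
    return temperature
```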
```python
import airplane

@airplane.task(
    env_vars=[
        airplane.EnvVar(name="OPENAI_API_KEY", config_var_name="OPENAI_API_KEY"),
    ],
)
def chat(message: str) -> str:
    tagline = airplane.ai.chat(
        f"Write a company tagline given this description: {message}",
    )
    return f"Here's a tagline: {tagline}"
```
Parameters:
- message: The message to send to the LLM.
- model: The model to use.
- temperature: The temperature setting for the LLM. Defaults to 0. Lower temperatures result in less variable output. Valid values are between 0 and 1.

Returns: The response from the LLM.

Raises: If neither OPENAI_API_KEY nor ANTHROPIC_API_KEY is set.
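The error condition above amounts to a pre-flight check on the environment. A minimal sketch of the idea, assuming a hypothetical helper (require_llm_api_key is not part of the SDK):

```python
import os

def require_llm_api_key() -> str:
    """Return the name of the first configured LLM API key.

    Mirrors the documented behavior: an error is raised if neither
    OPENAI_API_KEY nor ANTHROPIC_API_KEY is set.
    """
    for name in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
        if os.environ.get(name):
            return name
    raise RuntimeError("Neither OPENAI_API_KEY nor ANTHROPIC_API_KEY is set")
```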
ChatBot

Create a ChatBot and send multiple messages — the conversation history is automatically included between messages. This allows you to have a conversation that builds context and lets you refer to any previous messages. You can also provide instructions to guide the bot's behavior, for example:

- Provide concise answers with less than 10 words
- Be polite and always assume the customer is correct
- Do not guess answers if you are unsure
- If the customer doesn't ask a question, thank them and end the conversation
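To illustrate how history threading works, here is a minimal sketch of the idea; MiniChatBot and send_to_llm are illustrative stand-ins, not the SDK's actual internals:

```python
from typing import Callable, List, Tuple

class MiniChatBot:
    """Toy chat bot that replays the full conversation on every call."""

    def __init__(self, send_to_llm: Callable[[str], str], instructions: str = ""):
        self.send_to_llm = send_to_llm  # stand-in for the real LLM call
        self.instructions = instructions
        self.history: List[Tuple[str, str]] = []

    def chat(self, message: str) -> str:
        # Build a prompt containing the instructions plus every prior
        # exchange, so the model can refer back to earlier messages.
        lines = [self.instructions] if self.instructions else []
        for user, assistant in self.history:
            lines += [f"User: {user}", f"Assistant: {assistant}"]
        lines.append(f"User: {message}")
        reply = self.send_to_llm("\n".join(lines))
        self.history.append((message, reply))
        return reply
```

Because each call replays the accumulated history, a follow-up like "Which one is the farthest away?" is answered in the context of the earlier question about the planets.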
```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "chat_bot",
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async () => {
    const bot = new airplane.ai.ChatBot();
    console.log(await bot.chat("Name all of the planets in the solar system."));
    console.log(await bot.chat("Which one is the farthest away?"));
  },
);
```
Parameters:
- instructions: Optional instructions to include for the LLM. Useful for providing additional context, guiding the tone of the response, shaping the output, and more.
- model: The model to use.
- temperature: The temperature setting for the LLM. Defaults to 0. Lower temperatures result in less variable output. Valid values are between 0 and 1.
- message: The message to send to the LLM.

Returns: The response from the LLM.
```python
import airplane

@airplane.task(
    env_vars=[
        airplane.EnvVar(name="OPENAI_API_KEY", config_var_name="OPENAI_API_KEY"),
    ],
)
def chat_bot():
    bot = airplane.ai.ChatBot()
    print(bot.chat("Name all of the planets in the solar system."))
    print(bot.chat("Which one is the farthest away?"))
```
Parameters:
- instructions: Optional instructions to include for the LLM. Useful for providing additional context, guiding the tone of the response, shaping the output, and more.
- model: The model to use.
- temperature: The temperature setting for the LLM. Defaults to 0. Lower temperatures result in less variable output. Valid values are between 0 and 1.
- message: The message to send to the LLM.

Returns: The response from the LLM.

Raises: If neither OPENAI_API_KEY nor ANTHROPIC_API_KEY is set.
Func

Create a reusable AI-powered function by providing instructions and a set of example inputs and outputs. If the function's output cannot be parsed into the expected type, the raw string output is returned with a confidence of 0.

```typescript
import airplane from "airplane";

export default airplane.task(
  {
    slug: "ai_func",
    parameters: { word: "shorttext" },
    envVars: { OPENAI_API_KEY: { config: "OPENAI_API_KEY" } },
  },
  async (params): Promise<string[]> => {
    const genSynonyms = airplane.ai.func("Provide synonyms for the provided input", [
      { input: "happy", output: ["content", "cheerful", "joyful"] },
      { input: "sad", output: ["unhappy", "downcast", "despondent"] },
    ]);

    // Since the example outputs are an array of strings, the output will be an array of strings.
    // Confidence will be a number between 0 and 1.
    const { output, confidence } = await genSynonyms(params.word);
    console.log(`Confidence score: ${confidence}`);
    return output;
  },
);
```
Parameters:
- instructions: Instructions on how to transform the function input into output.
- examples: Examples that describe the appropriate output for a potential function input. Examples help the LLM understand the expected behavior and inform it of the expected output format. All example inputs must share a single type, which must match the type of the input passed when calling the function; all example outputs must likewise share a single type, which determines the type of the function's output.
- model: The model to use.
- temperature: The temperature setting for the LLM. Defaults to 0. Lower temperatures result in less variable output. Valid values are between 0 and 1.
- input: The input to pass into the function.

Returns:
- output: The output of the function. If the output cannot be parsed into the correct type, the function returns the raw string output with a confidence of 0.
- confidence: The LLM's confidence score for the output, a value between 0 and 1.
```python
from typing import List

import airplane

@airplane.task(
    env_vars=[
        airplane.EnvVar(name="OPENAI_API_KEY", config_var_name="OPENAI_API_KEY"),
    ],
)
def ai_func(word: str) -> List[str]:
    gen_synonyms = airplane.ai.Func(
        "Provide synonyms for the provided input",
        [
            ("happy", ["content", "cheerful", "joyful"]),
            ("sad", ["unhappy", "downcast", "despondent"]),
        ],
    )
    # Since the example outputs are an array of strings, the output will be an array of strings.
    # Confidence will be a number between 0 and 1.
    output, confidence = gen_synonyms(word)
    print(f"Confidence score: {confidence}")
    return output
```
Parameters:
- instructions: Instructions on how to transform the function input into output.
- examples: Examples that describe the appropriate output for a potential function input. Examples help the LLM understand the expected behavior and inform it of the expected output format. All example inputs must share a single type, which must match the type of the input passed when calling the function; all example outputs must likewise share a single type, which determines the type of the function's output.
- model: The model to use.
- temperature: The temperature setting for the LLM. Defaults to 0. Lower temperatures result in less variable output. Valid values are between 0 and 1.

Returns: A tuple of the parsed response from the LLM and a confidence score. The response will be any JSON-compatible value of the same type as the example outputs. The confidence score will be between 0 and 1 and represents how confident the LLM is in its answer.

Raises: If neither OPENAI_API_KEY nor ANTHROPIC_API_KEY is set.
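The fallback behavior described above — returning the raw string with a confidence of 0 when the output cannot be parsed into the expected type — can be sketched as follows; parse_llm_output is a hypothetical helper, not the SDK's implementation:

```python
import json
from typing import Any, Tuple

def parse_llm_output(raw: str, example_output: Any, confidence: float) -> Tuple[Any, float]:
    """Parse a raw LLM response against the type of an example output.

    If the response cannot be parsed as JSON of the same type as the
    example output, return the raw string with a confidence of 0.
    """
    try:
        value = json.loads(raw)
    except json.JSONDecodeError:
        return raw, 0.0
    if type(value) is not type(example_output):
        return raw, 0.0
    return value, confidence
```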