ChatCompletion
Given a prompt, get a response from an LLM using OpenAI's Chat Completions API.
For more information, refer to the Chat Completions API docs.
type: "io.kestra.plugin.openai.ChatCompletion"
Based on a prompt input, generate a completion response and pass it to a downstream task.
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: What is data orchestration?

tasks:
  - id: completion
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    prompt: "{{ inputs.prompt }}"

  - id: response
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.completion.choices[0].message.content }}"
Send a prompt to OpenAI's Chat Completions API.

id: openai
namespace: company.team

tasks:
  - id: prompt
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "{{ secret('OPENAI_API_KEY') }}"
    model: gpt-4
    prompt: Explain in one sentence why data engineers build data pipelines

  - id: use_output
    type: io.kestra.plugin.core.log.Log
    message: "{{ outputs.prompt.choices | jq('.[].message.content') | first }}"
Based on a prompt input, ask OpenAI to call a function that determines whether you need to respond to a customer's review immediately or wait until later, and then comes up with a suggested response.
id: openai
namespace: company.team

inputs:
  - id: prompt
    type: STRING
    defaults: I love your product and would purchase it again!

tasks:
  - id: prioritize_response
    type: io.kestra.plugin.openai.ChatCompletion
    apiKey: "yourOpenAIapiKey"
    model: gpt-4o
    messages:
      - role: user
        content: "{{ inputs.prompt }}"
    functions:
      - name: respond_to_review
        description: Given the customer product review provided as input, determines how urgently a reply is required and then provides suggested response text.
        parameters:
          - name: response_urgency
            type: string
            description: How urgently this customer review needs a reply. Bad reviews must be addressed immediately before anyone sees them. Good reviews can wait until later.
            required: true
            enumValues:
              - reply_immediately
              - reply_later
          - name: response_text
            type: string
            description: The text to post online in response to this review.
            required: true

  - id: response_urgency
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_urgency }}"

  - id: response_text
    type: io.kestra.plugin.core.debug.Return
    format: "{{ outputs.prioritize_response.choices[0].message.function_call.arguments.response_text }}"
apiKey (Dynamic: YES)
The OpenAI API key.

model (Dynamic: YES)
ID of the model to use, e.g. 'gpt-4'. See the OpenAI models documentation page for more details.

clientTimeout (Dynamic: NO, default: 10)
The maximum number of seconds to wait for a response.

frequencyPenalty (Dynamic: YES)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. Defaults to 0.

functionCall (Dynamic: YES)
The name of the function OpenAI should generate a call for. Enter a specific function name, or 'auto' to let the model decide. Defaults to 'auto'.

logitBias (Dynamic: YES)
Modify the likelihood of specified tokens appearing in the completion. Defaults to null.

maxTokens (Dynamic: YES)
The maximum number of tokens to generate in the chat completion. No limit is set by default.

n (Dynamic: YES)
How many chat completion choices to generate for each input message. Defaults to 1.

presencePenalty (Dynamic: YES)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. Defaults to 0.

prompt (Dynamic: YES)
The prompt(s) to generate completions for. By default, this prompt is sent with the user role. If not provided, make sure to set the messages property instead.

stop (Dynamic: YES)
Up to 4 sequences where the API will stop generating further tokens. Defaults to null.

temperature (Dynamic: YES)
What sampling temperature to use, between 0 and 2. Defaults to 1.

topP (Dynamic: YES)
An alternative to sampling with temperature, where the model considers the results of the tokens with top_p probability mass. Defaults to 1.

user (Dynamic: YES)
A unique identifier representing your end-user.
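Several of the optional generation properties can be combined on a single task. A minimal sketch (the camelCase property names such as maxTokens and the values shown are illustrative, not recommendations, and `inputs.text` is a hypothetical flow input):

```yaml
- id: completion
  type: io.kestra.plugin.openai.ChatCompletion
  apiKey: "{{ secret('OPENAI_API_KEY') }}"
  model: gpt-4o
  prompt: "Summarize the following text in one sentence: {{ inputs.text }}"
  temperature: 0.2   # lower temperature for more deterministic output
  maxTokens: 256     # cap the length of the completion
  n: 1               # generate a single completion choice
  user: "kestra-{{ flow.id }}"  # end-user identifier passed through to OpenAI
```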
description (Dynamic: YES)
A description of the function parameter. Provide as many details as possible to ensure the model returns an accurate parameter.

name (Dynamic: YES)
The name of the function parameter.

enumValues (Dynamic: YES)
A list of values that the model must choose from when setting this parameter. Optional, but useful for classification problems.

required (Dynamic: YES)
Whether or not the model is required to provide this parameter. Defaults to false.