Execute one or more Python scripts from a Command Line Interface.
type: "io.kestra.plugin.scripts.python.Commands"
Execute a Python script in a Conda virtual environment. First, add the following script in the embedded Code Editor and name it etl_script.py:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--num", type=int, default=42, help="Enter an integer")
args = parser.parse_args()
result = args.num * 2
print(result)
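Outside Kestra, the same parsing logic can be exercised directly by handing parse_args an explicit argument list (the --num value below is illustrative):

```python
import argparse

# Same argument handling as etl_script.py, but with an explicit
# argument list instead of the real command line:
parser = argparse.ArgumentParser()
parser.add_argument("--num", type=int, default=42, help="Enter an integer")
args = parser.parse_args(["--num", "21"])
result = args.num * 2
print(result)  # 21 * 2 = 42
```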
Then, make sure to set the enabled flag of the namespaceFiles property to true to enable namespace files. By default, setting it to true injects all namespace files; here we include only the etl_script.py file, as that is the only file we require from namespace files.
This flow uses the io.kestra.plugin.core.runner.Process task runner and a Conda virtual environment for process isolation and dependency management. Note, however, that by default Kestra runs tasks in a Docker container (i.e. with a Docker task runner); you can use the taskRunner property to customize many options, and containerImage to choose the Docker image to use.
id: python_venv
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    namespaceFiles:
      enabled: true
      include:
        - etl_script.py
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
    beforeCommands:
      - conda activate myCondaEnv
    commands:
      - python etl_script.py
Execute a Python script from Git in a Docker container and output a file
id: python_commands_example
namespace: company.team

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: git_python_scripts
        type: io.kestra.plugin.scripts.python.Commands
        warningOnStdErr: false
        containerImage: ghcr.io/kestra-io/pydata:latest
        beforeCommands:
          - pip install faker > /dev/null
        commands:
          - python examples/scripts/etl_script.py
          - python examples/scripts/generate_orders.py
        outputFiles:
          - orders.csv

      - id: load_csv_to_s3
        type: io.kestra.plugin.aws.s3.Upload
        accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
        secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
        region: eu-central-1
        bucket: kestraio
        key: stage/orders.csv
        from: "{{ outputs.git_python_scripts.outputFiles['orders.csv'] }}"
Execute a Python script on a remote worker with a GPU
id: gpu_task
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
    commands:
      - python ml_on_gpu.py
    workerGroup:
      key: gpu
Run a Python command that takes an input via an environment variable
id: python_input_as_env_variable
namespace: company.team

inputs:
  - id: uri
    type: URI
    defaults: https://www.google.com/

tasks:
  - id: code
    type: io.kestra.plugin.scripts.python.Commands
    taskRunner:
      type: io.kestra.plugin.scripts.runner.docker.Docker
    containerImage: ghcr.io/kestra-io/pydata:latest
    inputFiles:
      main.py: |
        import requests
        import os

        # Perform the GET request
        response = requests.get(os.environ['URI'])

        # Check if the request was successful
        if response.status_code == 200:
            # Print the content of the page
            print(response.text)
        else:
            print(f"Failed to retrieve the webpage. Status code: {response.status_code}")
    env:
      URI: "{{ inputs.uri }}"
    commands:
      - python main.py
Pass detected S3 objects from the event trigger to a Python script
id: s3_trigger_commands
namespace: company.team
description: process CSV file from S3 trigger

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: python
        type: io.kestra.plugin.scripts.python.Commands
        inputFiles:
          data.csv: "{{ trigger.objects | jq('.[].uri') | first }}"
        description: this script reads a file `data.csv` from S3 trigger
        containerImage: ghcr.io/kestra-io/pydata:latest
        warningOnStdErr: false
        commands:
          - python examples/scripts/clean_messy_dataset.py
        outputFiles:
          - "*.csv"
          - "*.parquet"

triggers:
  - id: wait_for_s3_object
    type: io.kestra.plugin.aws.s3.Trigger
    bucket: declarative-orchestration
    maxKeys: 1
    interval: PT1S
    filter: FILES
    action: MOVE
    prefix: raw/
    moveTo:
      key: archive/raw/
    accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
    secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
    region: "{{ secret('AWS_DEFAULT_REGION') }}"
Execute a Python script from Git using a private Docker container image
id: python_in_container
namespace: company.team

tasks:
  - id: wdir
    type: io.kestra.plugin.core.flow.WorkingDirectory
    tasks:
      - id: clone_repository
        type: io.kestra.plugin.git.Clone
        url: https://github.com/kestra-io/examples
        branch: main

      - id: git_python_scripts
        type: io.kestra.plugin.scripts.python.Commands
        warningOnStdErr: false
        commands:
          - python examples/scripts/etl_script.py
        outputFiles:
          - "*.csv"
          - "*.parquet"
        containerImage: annageller/kestra:latest
        taskRunner:
          type: io.kestra.plugin.scripts.runner.docker.Docker
          config: |
            {
              "auths": {
                "https://index.docker.io/v1/": {
                  "username": "annageller",
                  "password": "{{ secret('DOCKER_PAT') }}"
                }
              }
            }
Create a Python script and execute it in a virtual environment
id: script_in_venv
namespace: company.team

tasks:
  - id: python
    type: io.kestra.plugin.scripts.python.Commands
    inputFiles:
      main.py: |
        import requests
        from kestra import Kestra

        response = requests.get('https://google.com')
        print(response.status_code)

        Kestra.outputs({'status': response.status_code, 'text': response.text})
    beforeCommands:
      - python -m venv venv
      - . venv/bin/activate
      - pip install requests kestra > /dev/null
    commands:
      - python main.py
YES
The commands to run.
YES
AUTO
LINUX
WINDOWS
AUTO
The target operating system where the script will run.
YES
A list of commands that will run before the commands, allowing you to set up the environment, e.g. pip install -r requirements.txt.
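As a sketch of how this property is typically used, beforeCommands can prepare a virtual environment before the main commands run (the file and package names are illustrative):

```yaml
beforeCommands:
  - python -m venv venv
  - . venv/bin/activate
  - pip install -r requirements.txt > /dev/null
commands:
  - python main.py
```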
YES
ghcr.io/kestra-io/kestrapy:latest
The task runner container image, only used if the task runner is container-based.
NO
Deprecated - use the 'taskRunner' property instead.
Only used if the taskRunner property is not set.
YES
Additional environment variables for the current process.
YES
true
Fail the task on the first command with a non-zero status.
If set to false, all commands will be executed one after the other. The final state of the task execution is determined by the last command. Note that this property may be ignored if an incompatible interpreter is specified. You can also disable it if your interpreter does not support the set -e option.
YES
The files to create on the local filesystem. It can be a map or a JSON object.
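For example, a minimal sketch of the map form in YAML (the file names and contents are illustrative):

```yaml
inputFiles:
  main.py: |
    print("hello from an injected file")
  config.json: |
    {"key": "value"}
```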
YES
["/bin/sh","-c"]
Which interpreter to use.
NO
Inject namespace files.
Inject namespace files into this task. When enabled, it will, by default, load all namespace files into the working directory. However, you can use the include or exclude properties to limit which namespace files will be injected.
YES
false
Whether to setup the output directory mechanism.
Required to use the expression. Note that it could increase the startup time. Deprecated: use the outputFiles property instead.
YES
The files from the local filesystem to send to Kestra's internal storage.
Must be a list of glob expressions relative to the current working directory, for example: my-dir/**, my-dir/*/**, or my-dir/my-file.txt.
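As a sketch, a value combining these glob styles (directory and file names are illustrative):

```yaml
outputFiles:
  - "*.csv"
  - my-dir/**
  - my-dir/my-file.txt
```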
NO
PROCESS
DOCKER
Deprecated - use the 'taskRunner' property instead.
Only used if the taskRunner property is not set.
NO
{
"type": "io.kestra.plugin.scripts.runner.docker.Docker"
}
The task runner to use.
Task runners are provided by plugins; each has its own properties.
YES
true
Whether to set the task state to WARNING when any stdErr output is detected. Note that a script error will set the state to FAILED regardless.
0
The exit code of the entire flow execution.
The output files' URIs in Kestra's internal storage.
The value extracted from the output of the executed commands.
YES
true
Whether to enable namespace files to be loaded into the working directory. If explicitly set to true in a task, it will load all namespace files into the task's working directory. Note that this property is set to true by default, so you can specify only the include and exclude properties to filter the files to load, without having to explicitly set enabled to true.
YES
A list of filters to exclude matching glob patterns. This allows you to exclude a subset of the namespace files from being downloaded at runtime. You can combine this property with include to inject only the subset of files that you need into the task's working directory.
YES
OVERWRITE
OVERWRITE
FAIL
WARN
IGNORE
Behavior of the task if a file already exists in the working directory.
YES
A list of filters to include only matching glob patterns. This allows you to only load a subset of the Namespace Files into the working directory.
YES
["{{flow.namespace}}"]
A list of namespaces in which to search for files. The files are loaded in namespace order, and only the latest version of a file is kept: if a file is present in both the first and second namespace, only the file from the second namespace will be loaded.
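The "latest version wins" merge can be sketched in plain Python (the namespaces and file contents are illustrative, not Kestra code):

```python
# Later namespaces override earlier ones when the same
# file name appears in both:
namespace_files = [
    {"etl_script.py": "v1", "shared.py": "v1"},  # first namespace
    {"etl_script.py": "v2"},                     # second namespace
]

merged = {}
for files in namespace_files:
    merged.update(files)  # later entries win

print(merged)  # {'etl_script.py': 'v2', 'shared.py': 'v1'}
```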
YES
The maximum amount of CPU resources a container can use.
Make sure to set it to a numeric value, e.g. cpus: "1.5" or cpus: "4". For instance, if the host machine has two CPUs and you set cpus: "1.5", the container is guaranteed at most one and a half of the CPUs.
YES
The maximum amount of kernel memory the container can use.
The minimum allowed value is 4MB. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See the kernel-memory docs for more details.
YES
The maximum amount of memory resources the container can use.
Make sure to use the format number + unit (regardless of the case) without any spaces. The unit can be KB (kilobytes), MB (megabytes), GB (gigabytes), etc. Given that it's case-insensitive, the following values are equivalent: "512MB", "512Mb", "512mb", "512000KB", "0.5GB". It is recommended that you allocate at least 6MB.
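The case-insensitive number + unit format can be illustrated with a hypothetical parser (this helper is not part of Kestra, and decimal multipliers are assumed here for illustration):

```python
import re

# Hypothetical helper: normalize a memory string such as "512MB"
# or "512000KB" into a byte count, ignoring the unit's case.
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9}

def to_bytes(value: str) -> int:
    match = re.fullmatch(r"(\d+(?:\.\d+)?)([A-Za-z]+)", value)
    if not match:
        raise ValueError(f"invalid memory value: {value}")
    number, unit = match.groups()
    return int(float(number) * UNITS[unit.upper()])

print(to_bytes("512MB") == to_bytes("512mb") == to_bytes("512000KB"))  # True
```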
YES
Allows you to specify a soft limit smaller than memory, which is activated when Docker detects contention or low memory on the host machine. If you use memoryReservation, it must be set lower than memory for it to take precedence. Because it is a soft limit, it does not guarantee that the container won't exceed it.
YES
The total amount of memory and swap that can be used by a container. If memory and memorySwap are set to the same value, this prevents containers from using any swap, because memorySwap includes both the physical memory and swap space, while memory is only the amount of physical memory that can be used.
YES
A setting which controls the likelihood of the kernel to swap memory pages.
By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set memorySwappiness to a value between 0 and 100 to tune this percentage.
YES
By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container.
To change this behavior, use the oomKillDisable option. Only disable the OOM killer on containers where you have also set the memory option. If the memory flag is not set, the host can run out of memory, and the kernel may need to kill the host system's processes to free memory.
YES
1
Docker image to use.
YES
Docker configuration file.
Docker configuration file that can set access credentials to private container registries. Usually located in ~/.docker/config.json.
NO
Limits the CPU usage to a given maximum threshold value.
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
YES
YES
Docker entrypoint to use.
YES
Extra hostname mappings to the container network interface configuration.
YES
Docker API URI.
NO
Limits memory usage to a given maximum threshold value.
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
YES
Docker network mode to use, e.g. host, none, etc.
YES
Give extended privileges to this container.
YES
ALWAYS
IF_NOT_PRESENT
ALWAYS
NEVER
The image pull policy for a container image and the tag of the image, which affect when Docker attempts to pull (download) the specified image.
YES
Size of /dev/shm in bytes.
The size must be greater than 0. If omitted, the system uses 64MB.
YES
User in the Docker container.
YES
List of volumes to mount.
Must be a valid mount expression as a string, e.g. /home/user:/app.
Volume mounts are disabled by default for security reasons; you must enable them in the server configuration by setting kestra.tasks.scripts.docker.volume-enabled to true.
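A sketch of the expected mount strings (host and container paths are illustrative):

```yaml
volumes:
  - /home/user:/app
  - /data/input:/tmp/input
```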
YES
The registry authentication.
The auth field is a base64-encoded authentication string of username:password, or a token.
YES
The identity token.
YES
The registry password.
YES
The registry URL.
If not defined, the registry will be extracted from the image name.
YES
The registry token.
YES
The registry username.
YES
A list of capabilities; an OR list of AND lists of capabilities.
YES
YES
YES
YES
Driver-specific options, specified as key/value pairs.
These options are passed directly to the driver.
NO
\d+\.\d+\.\d+(-[a-zA-Z0-9-]+)?|([a-zA-Z0-9]+)
The version of the plugin to use.