Forward workflow execution logs to one or more destinations.
The Log Shipper task extracts logs from the Kestra backend and loads them into destinations such as Datadog, Elasticsearch, New Relic, OpenTelemetry, AWS CloudWatch, Google Operational Suite, and Azure Monitor.
The task works incrementally in batches:

- Determines the starting timestamp using either:
  - the last successfully processed log's timestamp (persisted in the KV Store under the `offsetKey`), or
  - the current time minus the `lookbackPeriod` duration if no previous state exists
- Sends the retrieved logs through the configured `logExporters`
- Stores the timestamp of the last processed log to maintain state between executions
- Subsequent runs continue from the last stored timestamp

This incremental approach ensures reliable log forwarding without gaps or duplicates.
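The offset handling described above can be sketched in a few lines. This is a minimal illustration, not the plugin's actual implementation: `kv_store`, `fetch_logs`, and the log dictionaries are hypothetical stand-ins for the Kestra backend and KV Store.

```python
from datetime import datetime, timedelta, timezone

def resolve_start(kv_store: dict, offset_key: str, lookback: timedelta) -> datetime:
    """Use the stored offset if present, otherwise now minus lookbackPeriod."""
    stored = kv_store.get(offset_key)
    if stored is not None:
        return datetime.fromisoformat(stored)
    return datetime.now(timezone.utc) - lookback

def ship_once(kv_store, offset_key, lookback, fetch_logs, exporters):
    """One incremental run: fetch logs since the resolved start, export, persist offset."""
    start = resolve_start(kv_store, offset_key, lookback)
    logs = [log for log in fetch_logs() if datetime.fromisoformat(log["timestamp"]) > start]
    for export in exporters:
        export(logs)  # every configured exporter receives the same batch
    if logs:
        # Persist the newest timestamp so the next run resumes here: no gaps, no duplicates.
        kv_store[offset_key] = max(log["timestamp"] for log in logs)
    return logs
```

A second run with the same KV Store returns nothing new, which is the no-duplicates guarantee the paragraph above describes.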
type: "io.kestra.plugin.ee.core.log.LogShipper"
Ship logs to multiple destinations
```yaml
id: logShipper
namespace: system

tasks:
  - id: shipLogs
    type: io.kestra.plugin.ee.core.log.LogShipper
    logLevelFilter: INFO
    lookbackPeriod: P1D
    offsetKey: logShipperOffset
    delete: false
    logExporters:
      - id: file
        type: io.kestra.plugin.ee.core.log.FileLogExporter
      - id: awsCloudWatch
        type: io.kestra.plugin.ee.aws.cloudwatch.LogExporter
        accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
        secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
        region: us-east-1
        logGroupName: kestra
        logStreamName: production
      - id: S3LogExporter
        type: io.kestra.plugin.ee.aws.s3.LogExporter
        accessKeyId: "{{ secret('AWS_ACCESS_KEY_ID') }}"
        secretKeyId: "{{ secret('AWS_SECRET_KEY_ID') }}"
        region: "{{ vars.region }}"
        format: JSON
        bucket: logbucket
        logFilePrefix: kestra-log-file
        maxLinesPerFile: 1000000
      - id: googleOperationalSuite
        type: io.kestra.plugin.ee.gcp.operationalsuite.LogExporter
        projectId: my-gcp-project
      - id: gcs
        type: io.kestra.plugin.ee.gcp.gcs.LogExporter
        projectId: myProjectId
        format: JSON
        maxLinesPerFile: 10000
        bucket: my-bucket
        logFilePrefix: kestra-log-file
        chunk: 1000
      - id: azureMonitor
        type: io.kestra.plugin.ee.azure.monitor.LogExporter
        endpoint: https://endpoint-host.ingest.monitor.azure.com
        tenantId: "{{ secret('AZURE_TENANT_ID') }}"
        clientId: "{{ secret('AZURE_CLIENT_ID') }}"
        clientSecret: "{{ secret('AZURE_CLIENT_SECRET') }}"
        ruleId: dcr-69f0b123041d4d6e9f2bf72aad0b62cf
        streamName: kestraLogs
      - id: azureBlobStorage
        type: io.kestra.plugin.ee.azure.storage.LogExporter
        endpoint: https://myblob.blob.core.windows.net/
        tenantId: "{{ secret('AZURE_TENANT_ID') }}"
        clientId: "{{ secret('AZURE_CLIENT_ID') }}"
        clientSecret: "{{ secret('AZURE_CLIENT_SECRET') }}"
        containerName: logs
        format: JSON
        logFilePrefix: kestra-log-file
        maxLinesPerFile: 1000000
        chunk: 1000
      - id: datadog
        type: io.kestra.plugin.ee.datadog.LogExporter
        basePath: https://http-intake.logs.datadoghq.eu
        apiKey: "{{ secret('DATADOG_API_KEY') }}"
      - id: elasticsearch
        type: io.kestra.plugin.ee.elasticsearch.LogExporter
        indexName: kestra-logs
        connection:
          basicAuth:
            password: "{{ secret('ES_PASSWORD') }}"
            username: kestra_user
          hosts:
            - https://elastic.example.com:9200
      - id: opensearch
        type: io.kestra.plugin.ee.opensearch.LogExporter
        indexName: kestra-logs
        connection:
          basicAuth:
            password: "{{ secret('ES_PASSWORD') }}"
            username: kestra_user
          hosts:
            - https://elastic.example.com:9200
      - id: newRelic
        type: io.kestra.plugin.ee.newrelic.LogExporter
        basePath: https://log-api.newrelic.com
        apiKey: "{{ secret('NEWRELIC_API_KEY') }}"
      - id: openTelemetry
        type: io.kestra.plugin.ee.opentelemetry.LogExporter
        otlpEndpoint: http://otel-collector:4318/v1/logs
        authorizationHeaderName: Authorization
        authorizationHeaderValue: "Bearer {{ secret('OTEL_TOKEN') }}"

triggers:
  - id: dailySchedule
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 0 * * *"
    disabled: true
```
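Several exporters in the example accept a `chunk` size, which controls how many log lines go into each bulk request. The batching behavior can be illustrated with a small sketch (assumed behavior for illustration, not the plugin's code):

```python
from typing import Iterable, Iterator, List

def chunked(items: Iterable, size: int) -> Iterator[List]:
    """Yield successive batches of at most `size` items, one bulk request per batch."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final, possibly smaller, batch

# e.g. 2500 log lines with chunk: 1000 would become three bulk requests: 1000, 1000, 500
```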
Properties of the `LogShipper` task:

- `logExporters` (dynamic: NO; min items: 1): List of log shippers. The list of log exporters to use for sending logs.
- Deprecated (dynamic: NO).
- `delete` (dynamic: NO): Delete logs after export. The log shipper will delete the exported logs.
- `logLevelFilter` (dynamic: YES; default: `INFO`): Log level to send. The minimum log level to send.
- `lookbackPeriod` (dynamic: YES; type: duration; default: `P1D`): Starting duration before now. If no previous execution or state exists, the fetch start date is set to the current time minus this duration.
- `namespace` (dynamic: YES): Namespace to search. The namespace used to filter logs.
- `offsetKey` (dynamic: YES): Prefix of the KV Store key. The prefix of the KV Store key that contains the last execution's end fetch date.
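Duration-typed properties such as `lookbackPeriod` use ISO 8601 notation. As a quick reference, the defaults appearing in this document translate as follows (a hand-rolled mapping for illustration, not the plugin's parser):

```python
from datetime import timedelta

# ISO 8601 duration literals used in this document and what they mean:
ISO_DURATIONS = {
    "P1D": timedelta(days=1),       # lookbackPeriod default
    "PT5M": timedelta(minutes=5),   # read-idle timeout default
    "PT15M": timedelta(minutes=15), # AWS STS session duration default
}
```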
- The time allowed to establish a connection to the server before failing (dynamic: YES; type: duration).
- The time allowed for a read connection to remain idle before closing it (dynamic: YES; type: duration; default: `PT5M`).
- The connection properties (dynamic: NO).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The name of the index to send logs to (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The address of the proxy server (dynamic: YES).
- The password for proxy authentication (dynamic: YES).
- The port of the proxy server (dynamic: NO).
- The type of proxy to use (dynamic: YES; default: `DIRECT`; valid values: `DIRECT`, `HTTP`, `SOCKS`).
- The username for proxy authentication (dynamic: YES).
- List of HTTP OpenSearch servers. Must be a URI like `https://opensearch.com:9200`, with scheme and port (dynamic: YES).
- Basic auth configuration (dynamic: NO).
- List of HTTP headers to be sent on every request. Must be a string with key and value separated by `:`, e.g. `Authorization: Token XYZ` (dynamic: YES).
- Sets the path prefix for every request used by the HTTP client. For example, if this is set to `/my/path`, then any client request becomes `/my/path/` + endpoint. In essence, every request's endpoint is prefixed by this `pathPrefix`. The path prefix is useful when OpenSearch is behind a proxy that provides a base path or requires all paths to start with `/`; it is not intended for other purposes and should not be supplied in other scenarios (dynamic: YES).
- Whether the REST client should treat any response containing at least one warning header as a failure (dynamic: NO).
- Trust all SSL CA certificates. Use this if the server is using a self-signed SSL certificate (dynamic: NO).
- S3 bucket to upload log files. The bucket where log files will be uploaded (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- AWS region with which the SDK should communicate (dynamic: YES).
- Access Key Id in order to connect to AWS. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The endpoint with which the SDK should communicate. This property allows you to use a different S3-compatible storage backend (dynamic: YES).
- Format of the exported files (dynamic: YES; default: `JSON`; valid values: `ION`, `JSON`).
- Prefix of the log files. The full file name will be `logFilePrefix-localDateTime.json/ion` (dynamic: YES; default: `kestra-log-file`).
- Maximum number of lines per file (dynamic: NO; default: `100000`).
- Secret Key Id in order to connect to AWS. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- AWS session token, retrieved from an AWS token service, used for authenticating that this user has received temporary permissions to access a given resource. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- The AWS STS endpoint with which the SDK client should communicate (dynamic: YES).
- AWS STS Role. The Amazon Resource Name (ARN) of the role to assume. If set, the task will use the `StsAssumeRoleCredentialsProvider`. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- AWS STS External Id. A unique identifier that might be required when you assume a role in another account. Only used when an `stsRoleArn` is defined (dynamic: YES).
- AWS STS Session duration. The duration of the role session (default: 15 minutes, i.e. `PT15M`). Only used when an `stsRoleArn` is defined (dynamic: YES; type: duration; default: `PT15M`).
- AWS STS Session name. Only used when an `stsRoleArn` is defined (dynamic: YES).
- URL of the Data Collection Endpoint (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- Id of the Data Collection Rule (dynamic: YES).
- Name of the stream (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- Client ID of the Azure service principal. If you don't have a service principal, refer to "create a service principal with Azure CLI" (dynamic: YES).
- Client Secret. Service principal client secret. The tenantId, clientId, and clientSecret of the service principal are required for this credential to acquire an access token (dynamic: YES).
- PEM Certificate. Your stored PEM certificate. The tenantId, clientId, and clientCertificate of the service principal are required for this credential to acquire an access token (dynamic: YES).
- Tenant ID (dynamic: YES).
- If true, allow a failed response code (response code >= 400) (dynamic: NO; default: `false`).
- List of response codes allowed for this request (dynamic: YES).
- The authentication to use (dynamic: NO).
- The password for HTTP basic authentication (dynamic: NO).
- The username for HTTP basic authentication (dynamic: NO).
- The time allowed to establish a connection to the server before failing (dynamic: NO; type: duration).
- The time an idle connection can remain in the client's connection pool before being closed (dynamic: NO; type: duration).
- The default charset for the request (dynamic: NO; default: `UTF-8`).
- Whether redirects should be followed automatically (dynamic: NO; default: `true`).
- The log level for the HTTP client (dynamic: NO; valid values: `ALL`, `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `OFF`, `NOT_SPECIFIED`).
- The enabled log (dynamic: NO; valid values: `REQUEST_HEADERS`, `REQUEST_BODY`, `RESPONSE_HEADERS`, `RESPONSE_BODY`).
- The maximum content length of the response (dynamic: NO).
- The proxy configuration (dynamic: NO).
- The address of the proxy server (dynamic: NO).
- The password for proxy authentication (dynamic: NO).
- The port of the proxy server (dynamic: NO).
- The type of proxy to use (dynamic: NO; valid values: `DIRECT`, `HTTP`, `SOCKS`).
- The username for proxy authentication (dynamic: NO).
- The time allowed for a read connection to remain idle before closing it (dynamic: NO; type: duration).
- The maximum time allowed for reading data from the server before failing (dynamic: NO; type: duration).
- The SSL request options (dynamic: NO).
- The timeout configuration (dynamic: NO).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The name of the log group (dynamic: YES).
- The name of the log stream (dynamic: YES).
- AWS region with which the SDK should communicate (dynamic: YES).
- Access Key Id in order to connect to AWS. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The endpoint with which the SDK should communicate. This property allows you to use a different S3-compatible storage backend (dynamic: YES).
- Secret Key Id in order to connect to AWS. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- AWS session token, retrieved from an AWS token service, used for authenticating that this user has received temporary permissions to access a given resource. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- The AWS STS endpoint with which the SDK client should communicate (dynamic: YES).
- AWS STS Role. The Amazon Resource Name (ARN) of the role to assume. If set, the task will use the `StsAssumeRoleCredentialsProvider`. If no credentials are defined, the default credentials provider chain is used to fetch credentials (dynamic: YES).
- AWS STS External Id. A unique identifier that might be required when you assume a role in another account. Only used when an `stsRoleArn` is defined (dynamic: YES).
- AWS STS Session duration. The duration of the role session (default: 15 minutes, i.e. `PT15M`). Only used when an `stsRoleArn` is defined (dynamic: YES; type: duration; default: `PT15M`).
- AWS STS Session name. Only used when an `stsRoleArn` is defined (dynamic: YES).
- List of HTTP Elasticsearch servers. Must be a URI like `https://elasticsearch.com:9200`, with scheme and port (dynamic: YES; min items: 1).
- Basic auth configuration (dynamic: NO).
- List of HTTP headers to be sent on every request. Must be a string with key and value separated by `:`, e.g. `Authorization: Token XYZ` (dynamic: YES).
- Sets the path prefix for every request used by the HTTP client. For example, if this is set to `/my/path`, then any client request becomes `/my/path/` + endpoint. In essence, every request's endpoint is prefixed by this `pathPrefix`. The path prefix is useful when Elasticsearch is behind a proxy that provides a base path or requires all paths to start with `/`; it is not intended for other purposes and should not be supplied in other scenarios (dynamic: YES).
- Whether the REST client should treat any response containing at least one warning header as a failure (dynamic: NO).
- Trust all SSL CA certificates. Use this if the server is using a self-signed SSL certificate (dynamic: NO).
- The token for bearer token authentication (dynamic: YES).
- Splunk host. URL of the Splunk host to export logs to (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- Splunk token. Token used to authenticate to the Splunk API (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The HTTP client configuration (dynamic: NO).
- Log source. The source of the logs (dynamic: YES; default: `Kestra`).
- GCS bucket to upload log files. The bucket where log files will be uploaded (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- Format of the exported files (dynamic: YES; default: `JSON`; valid values: `ION`, `JSON`).
- Prefix of the log files. The full file name will be `logFilePrefix-localDateTime.json/ion` (dynamic: YES; default: `kestra-log-file`).
- Maximum number of lines per file (dynamic: NO; default: `100000`).
- The GCP project ID (dynamic: YES).
- The GCP scopes to be used (dynamic: YES; default: `["https://www.googleapis.com/auth/cloud-platform"]`).
- The GCP service account key (dynamic: YES).
- The password for HTTP basic authentication (dynamic: YES).
- The username for HTTP basic authentication (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- OTLP endpoint. URL of the OTLP endpoint to export logs to (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The GCP project ID (dynamic: YES).
- The GCP scopes to be used (dynamic: YES; default: `["https://www.googleapis.com/auth/cloud-platform"]`).
- The GCP service account key (dynamic: YES).
- Whether to disable checking of the remote SSL certificate. Only applies if no trust store is configured. Note: this makes the SSL connection insecure and should only be used for testing. If you are using a self-signed certificate, set up a trust store instead (dynamic: NO).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- Format of the exported files (dynamic: YES; default: `ION`; valid values: `ION`, `JSON`).
- Prefix of the log files. The full file name will be `logFilePrefix-localDateTime.json/ion` (dynamic: YES; default: `kestra-log-file`).
- Maximum number of lines per log file (dynamic: NO).
- API key used to authenticate to the Datadog instance (dynamic: YES).
- Datadog base path. Base path of the Datadog instance (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The HTTP client configuration (dynamic: NO).
- Log sending service. Name of the service that sends logs (dynamic: YES; default: `LogExporter`).
- Log source. The source of the logs (dynamic: YES; default: `Kestra`).
- Basic auth password (dynamic: YES).
- Basic auth username (dynamic: YES).
- Basic auth password (dynamic: YES).
- Basic auth username (dynamic: YES).
- Authentication key. API key or License key used to authenticate to the New Relic instance (dynamic: YES).
- New Relic base path. Base path of the New Relic instance to send logs to (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- The HTTP client configuration (dynamic: NO).
- Name of the container in the Blob Storage (dynamic: YES).
- URL of the Blob Storage (dynamic: YES).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).
- Client ID of the Azure service principal. If you don't have a service principal, refer to "create a service principal with Azure CLI" (dynamic: YES).
- Client Secret. Service principal client secret. The tenantId, clientId, and clientSecret of the service principal are required for this credential to acquire an access token (dynamic: YES).
- Connection string of the Storage Account (dynamic: YES).
- Format of the exported files (dynamic: YES; default: `JSON`; valid values: `ION`, `JSON`).
- Prefix of the log files. The full file name will be `logFilePrefix-localDateTime.json/ion` (dynamic: YES; default: `kestra-log-file`).
- Maximum number of lines per file (dynamic: NO; default: `100000`).
- PEM Certificate. Your stored PEM certificate. The tenantId, clientId, and clientCertificate of the service principal are required for this credential to acquire an access token (dynamic: YES).
- The SAS token to use for authenticating requests. This string should contain only the query parameters (with or without a leading `?`), not a full URL (dynamic: YES).
- Tenant ID (dynamic: YES).
- The connection properties (dynamic: NO).
- Exporter `id` (dynamic: NO; pattern: `^[a-zA-Z0-9][a-zA-Z0-9_-]*`; min length: 1).
- The name of the index to send logs to (dynamic: YES).
- The chunk size for every bulk request (dynamic: NO; default: `1000`).