Getting Started
To start forwarding logs, you need an API key. You can find yours in the dashboard under Settings → API Keys.
Send your first log event with a single curl command:
curl -X POST https://ingest.splunk-relay.net/v1/logs \
  -H "Authorization: Bearer rl_live_7f3a9b2c1d4e5f6a" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "web-api",
    "severity": "info",
    "message": "User login successful",
    "meta": {
      "user_id": "usr_482",
      "ip": "203.0.113.42"
    }
  }'
If everything is configured correctly, you will receive a response like this:
{
  "status": "accepted",
  "id": "evt_9k2m4n6p8r0t",
  "timestamp": "2026-03-26T14:03:22.418Z"
}
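The same endpoint also accepts a batch of up to 1,000 events per request (see the API Reference). The payload below is a sketch: whether the batch envelope is a bare JSON array, as shown here, is an assumption rather than a confirmed format.

```json
[
  {
    "source": "web-api",
    "severity": "info",
    "message": "User login successful"
  },
  {
    "source": "web-api",
    "severity": "error",
    "message": "Payment gateway timeout"
  }
]
```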
Configuration
Relay pipelines are configured using a YAML file. You can manage your configuration through the dashboard or by uploading a file via the API.
Here is a typical pipeline configuration:
pipeline:
  name: production-logs
  region: us-east-1
  sources:
    - id: web-api
      type: http
      auth: bearer
      path: /v1/logs
    - id: syslog-servers
      type: syslog
      protocol: tcp
      port: 5514
      tls: true
  filters:
    - id: drop-debug
      match:
        severity: debug
      action: drop
    - id: tag-errors
      match:
        severity:
          - error
          - critical
      action: add_field
      field: priority
      value: high
  destinations:
    - id: elastic-prod
      type: elasticsearch
      hosts:
        - https://es-cluster.internal:9200
      index: "logs-%{+yyyy.MM.dd}"
      auth:
        type: api_key
        key_id: "${ES_KEY_ID}"
        api_key: "${ES_API_KEY}"
    - id: s3-archive
      type: s3
      bucket: company-log-archive
      region: us-east-1
      prefix: "raw/%{source}/"
      compression: gzip
Filters & Routes
Filters let you control which log events reach each destination. Every filter has a match block and an action.
Supported actions:
drop — discard the event entirely
add_field — attach a new field to the event
remove_field — strip a field before forwarding
route — send the event to a specific destination
sample — forward only a percentage of matching events
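As a sketch, the sample and remove_field actions might be combined like this. The percentage option name and the dotted-path field syntax are assumptions for illustration, not confirmed options:

```yaml
filters:
  - id: sample-info
    match:
      severity: info
    action: sample
    percentage: 10      # assumed option name; forwards roughly 10% of matching events
  - id: strip-ip
    match: "*"
    action: remove_field
    field: meta.ip      # assumed dotted-path syntax for nested fields
```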
Routing example — send errors to PagerDuty, everything else to S3:
filters:
  - id: errors-to-pagerduty
    match:
      severity:
        - error
        - critical
    action: route
    destination: pagerduty-alerts
  - id: everything-to-archive
    match: "*"
    action: route
    destination: s3-archive
Destinations
A destination is any system that receives forwarded logs. Relay supports the following destination types out of the box:
elasticsearch — Elasticsearch or OpenSearch clusters
loki — Grafana Loki
s3 — Amazon S3 or S3-compatible storage
gcs — Google Cloud Storage
azure_blob — Azure Blob Storage
kafka — Apache Kafka topics
webhook — any HTTP endpoint
datadog — Datadog Logs API
clickhouse — ClickHouse tables
postgresql — PostgreSQL via COPY protocol
Each destination supports TLS, authentication, batching, and retry configuration. See individual destination docs for full options.
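A webhook destination combining these features might look like the sketch below. The option names under batch and retry, and the url value, are assumptions for illustration; consult the individual destination docs for the actual keys:

```yaml
destinations:
  - id: audit-webhook
    type: webhook
    url: https://hooks.example.com/logs   # illustrative endpoint
    auth:
      type: bearer
      token: "${WEBHOOK_TOKEN}"
    tls:
      verify: true
    batch:
      max_events: 500        # assumed batching options
      flush_interval: 5s
    retry:
      max_attempts: 3        # assumed retry options
      backoff: exponential
```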
API Reference
All API requests require an Authorization: Bearer header with your API key.
Base URL: https://api.splunk-relay.net/v1
Ingest a single log event or a batch of up to 1,000 events.
List all configured pipelines for the authenticated account.
Create or update a pipeline from a YAML configuration body.
Return ingestion and forwarding statistics for a pipeline over the last 24 hours.
Delete a pipeline. All associated routes and filters will be removed.