Documentation

Getting Started

To start forwarding logs, you need an API key. You can find yours in the dashboard under Settings → API Keys.

Send your first log event with a single curl command:

curl -X POST https://ingest.splunk-relay.net/v1/logs \
  -H "Authorization: Bearer rl_live_7f3a9b2c1d4e5f6a" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "web-api",
    "severity": "info",
    "message": "User login successful",
    "meta": {
      "user_id": "usr_482",
      "ip": "203.0.113.42"
    }
  }'

If everything is configured correctly, you will receive a response like this:

{
  "status": "accepted",
  "id": "evt_9k2m4n6p8r0t",
  "timestamp": "2026-03-26T14:03:22.418Z"
}

Configuration

Relay pipelines are configured using a YAML file. You can manage your configuration through the dashboard or by uploading a file via the API.

Here is a typical pipeline configuration:

pipeline:
  name: production-logs
  region: us-east-1

sources:
  - id: web-api
    type: http
    auth: bearer
    path: /v1/logs

  - id: syslog-servers
    type: syslog
    protocol: tcp
    port: 5514
    tls: true

filters:
  - id: drop-debug
    match:
      severity: debug
    action: drop

  - id: tag-errors
    match:
      severity:
        - error
        - critical
    action: add_field
    field: priority
    value: high

destinations:
  - id: elastic-prod
    type: elasticsearch
    hosts:
      - https://es-cluster.internal:9200
    index: "logs-%{+yyyy.MM.dd}"
    auth:
      type: api_key
      key_id: "${ES_KEY_ID}"
      api_key: "${ES_API_KEY}"

  - id: s3-archive
    type: s3
    bucket: company-log-archive
    region: us-east-1
    prefix: "raw/%{source}/"
    compression: gzip

Filters & Routes

Filters let you control which log events reach each destination. Every filter has a match block and an action.

Supported actions:

  • drop — discard the event entirely
  • add_field — attach a new field to the event
  • remove_field — strip a field before forwarding
  • route — send the event to a specific destination
  • sample — forward only a percentage of matching events
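
For example, remove_field can strip sensitive metadata before events leave your network. A minimal sketch: the field name (meta.ip) and the dotted-path syntax used to address nested fields are illustrative assumptions, so check the filter reference for the exact field-addressing rules.

filters:
  - id: strip-client-ip
    match: "*"
    action: remove_field
    field: meta.ip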

Routing example: route errors to PagerDuty while archiving every event, including errors, to S3:

filters:
  - id: errors-to-pagerduty
    match:
      severity:
        - error
        - critical
    action: route
    destination: pagerduty-alerts

  - id: everything-to-archive
    match: "*"
    action: route
    destination: s3-archive
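
The sample action is useful for high-volume, low-value events. The sketch below keeps roughly one in ten matching info events; the percent option name is an assumption, so consult the filter reference for the exact parameter.

filters:
  - id: sample-info
    match:
      severity: info
    action: sample
    percent: 10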

Destinations

A destination is any system that receives forwarded logs. Relay supports the following destination types out of the box:

  • elasticsearch — Elasticsearch or OpenSearch clusters
  • loki — Grafana Loki
  • s3 — Amazon S3 or S3-compatible storage
  • gcs — Google Cloud Storage
  • azure_blob — Azure Blob Storage
  • kafka — Apache Kafka topics
  • webhook — any HTTP endpoint
  • datadog — Datadog Logs API
  • clickhouse — ClickHouse tables
  • postgresql — PostgreSQL via COPY protocol

Each destination supports TLS, authentication, batching, and retry configuration. See individual destination docs for full options.
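
As a sketch of how those options typically combine, here is a webhook destination with TLS verification, bearer authentication, batching, and retries. The nested option names (tls.verify, batch.max_events, retry.backoff, and so on) are illustrative assumptions, not the documented schema; see the webhook destination docs for the exact keys.

destinations:
  - id: audit-webhook
    type: webhook
    url: https://audit.internal/ingest
    auth:
      type: bearer
      token: "${AUDIT_TOKEN}"
    tls:
      verify: true
    batch:
      max_events: 500
      flush_interval: 5s
    retry:
      max_attempts: 5
      backoff: exponential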

API Reference

All API requests must include your API key in an Authorization header of the form Authorization: Bearer <your API key>.

Base URL: https://api.splunk-relay.net/v1

POST /v1/logs

Ingest a single log event or a batch of up to 1,000 events.
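
A batch is submitted as a JSON array of event objects. The array shape is an assumption based on the single-event format shown in Getting Started; each element uses the same fields as a single event.

[
  {
    "source": "web-api",
    "severity": "error",
    "message": "Payment declined",
    "meta": { "user_id": "usr_193" }
  },
  {
    "source": "web-api",
    "severity": "info",
    "message": "Session refreshed"
  }
]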

GET /v1/pipelines

List all configured pipelines for the authenticated account.

POST /v1/pipelines

Create or update a pipeline from a YAML configuration body.
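
Assuming the endpoint accepts the raw YAML file as the request body (the Content-Type value shown here is an assumption), uploading a configuration might look like:

curl -X POST https://api.splunk-relay.net/v1/pipelines \
  -H "Authorization: Bearer rl_live_7f3a9b2c1d4e5f6a" \
  -H "Content-Type: application/yaml" \
  --data-binary @pipeline.yaml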

GET /v1/pipelines/:id/stats

Return ingestion and forwarding statistics for a pipeline over the last 24 hours.
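
A response from this endpoint might resemble the sketch below. Every field name here is a hypothetical illustration, not the documented schema; rely on the live API response for the real fields.

{
  "pipeline": "production-logs",
  "window": "24h",
  "ingested": 1284032,
  "forwarded": 1271117,
  "dropped": 12915
}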

DELETE /v1/pipelines/:id

Delete a pipeline. All associated routes and filters will be removed.