Amazon S3 + Bruin

Source · Destination

Ingest data from Amazon S3 or push enriched data back, with quality checks, lineage, and scheduling. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • Files and events in your warehouse

    Amazon S3 data lands in your warehouse with automatic schema detection. No manual parsing, no format guessing.

  • Schema drift protection

    Quality checks catch unexpected format changes, null values, and schema drift from Amazon S3 before it breaks models.

  • Data lake orchestration

    Use Amazon S3 as a staging layer. Bruin handles landing, transforming, and materializing, all in one pipeline.

  • Multi-cloud flexibility

    Move data between Amazon S3 and other storage or warehouses. Bruin manages scheduling, retries, and lineage.

For data & engineering teams

How it works

  • Automatic schema detection

    Bruin detects Amazon S3 data schemas automatically. No manual configuration when formats change.

  • YAML-defined, Git-versioned

    Your Amazon S3 pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert (a minimal layout is sketched after this list).

  • Format validation

    Quality checks catch schema drift, unexpected nulls, and format changes from Amazon S3 at the ingestion layer.

  • Land, transform, materialize

    Use Amazon S3 as staging. Bruin handles the full flow: land raw data, transform, and materialize into your warehouse.
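
A minimal sketch of what that looks like on disk, assuming the usual Bruin layout of a pipeline folder containing a pipeline.yml and an assets/ directory; the pipeline name and schedule below are placeholders, not values from this guide.

# pipeline.yml (illustrative)
name: s3_ingestion
schedule: daily

# assets/raw.s3_data.asset.yml holds the Amazon S3 asset defined in Step 2 below.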

Before you start

  • AWS credentials
  • S3 bucket access permissions

Step 1

Add your Amazon S3 connection

Connect using your AWS S3 credentials, with an optional S3-compatible endpoint. Add this to your Bruin environment file; credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • access_key_id: AWS access key ID
  • secret_access_key: AWS secret access key
  • endpoint_url: URL of an S3-compatible API server (for destinations)
  • layout: layout template for file organization (for destinations)

connections:
  s3:
    type: s3
    uri: "s3://?access_key_id=<your_access_key_id>&secret_access_key=<your_secret_access_key>&endpoint_url=<endpoint_url>&layout=<layout>"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Amazon S3 and where to land it. This file lives in your Git repo: reviewable, version-controlled, and deployable with CI/CD.

name: raw.s3_data
type: ingestr

parameters:
  source_connection: s3
  source_table: 'data'
  destination: bigquery
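
If your files live under a specific bucket and prefix, point source_table at them. A hedged sketch: the bucket name and file pattern are placeholders, and the exact source_table format follows ingestr's S3 source conventions.

name: raw.s3_data
type: ingestr

parameters:
  source_connection: s3
  # Placeholder bucket and glob; adjust to the files you want to ingest.
  source_table: 'my-bucket/events/*.parquet'
  destination: bigquery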

Step 3

Add quality checks

Add column-level and custom SQL checks to your Amazon S3 data. If a check fails, the pipeline stops; bad data never reaches downstream models or dashboards.

  • Catch events with future timestamps
  • Validate file paths and timestamps are present
  • Flag schema drift at the ingestion layer

columns:
  - name: file_path
    checks:
      - name: not_null
  - name: event_timestamp
    checks:
      - name: not_null

custom_checks:
  - name: no events from the future
    query: |
      SELECT COUNT(*) = 0
      FROM raw.s3_data
      WHERE event_timestamp > CURRENT_TIMESTAMP()
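
If duplicate ingestions are a concern, a column-level uniqueness check is one option. A sketch, assuming Bruin's built-in unique check (not shown elsewhere in this guide):

columns:
  - name: file_path
    checks:
      - name: not_null
      - name: unique   # assumed built-in check; flags the same file landing twice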

Step 4

Run it

One command. Bruin connects to Amazon S3, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops; bad data never reaches downstream.

  • Backfill historical data with --start-date
  • Schedule with cron or trigger from CI/CD
  • Full lineage from Amazon S3 to your dashboards

$ bruin run .
Running pipeline...

  s3_data
    ✓ Fetched 2,847 new records
    ✓ Quality: file_path not_null        PASSED
    ✓ Quality: event_timestamp not_null  PASSED
    ✓ Quality: no events from the future PASSED
    ✓ Loaded into bigquery

  Completed in 12s
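
To backfill, pass the --start-date mentioned above; the date value here is a placeholder.

$ bruin run --start-date 2024-01-01 .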

Ready to connect Amazon S3?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.