
Amazon Redshift + Bruin

Source

Ingest Amazon Redshift data into your warehouse with incremental loading, quality checks, and full lineage. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • 100+ sources into Amazon Redshift

    Pull from any tool, database, or API directly into Amazon Redshift. One YAML file per source, all managed by Bruin.

  • Data quality you can trust

    Column-level and custom SQL checks on any Amazon Redshift table. Bad data gets blocked before it reaches dashboards.

  • Full lineage visibility

    Trace data from ingestion through transforms to final reports. When something breaks, find the cause in seconds.

  • SQL + Python in one pipeline

    Build transforms in Amazon Redshift with both SQL and Python. Bruin resolves dependencies across languages automatically.

For data & engineering teams

How it works

  • 100+ managed connectors

    Ingest from any source directly into Amazon Redshift with one YAML file per source. Bruin manages connections and scheduling.

  • YAML-defined, Git-versioned

    Every pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert. A minimal pipeline.yml sketch follows this list.

  • SQL + Python assets

    Build transformation layers in Amazon Redshift with SQL and Python. Bruin resolves dependencies and handles materialization.

  • Quality gates between stages

    Quality checks run between ingestion and transformation. Bad data gets blocked before it reaches downstream models.
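
As referenced in the list above, the whole pipeline definition lives in version-controlled YAML. Below is a minimal, illustrative pipeline.yml sketch; the schedule and default_connections field names are assumptions, so check your Bruin version's schema before relying on them.

# pipeline.yml — minimal, illustrative pipeline definition (field names assumed)
name: redshift_ingestion
schedule: daily                # or a cron expression, depending on your Bruin version
default_connections:
  redshift: redshift           # connection name defined in your Bruin environment file

Because this is just a file in the repo, a changed schedule or a new asset shows up as an ordinary diff in code review.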

Before you start

Redshift cluster running and accessible
Security group allows inbound connections
Database user with appropriate permissions
VPC and subnet properly configured

Step 1

Add your Amazon Redshift connection

Redshift uses a PostgreSQL-compatible connection format. Add this to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • username: Master username or IAM user
  • password: User password
  • host: Cluster endpoint URL
  • port: Port number (default 5439)
  • database: Database name
connections:
  redshift:
    type: redshift
    uri: "redshift://username:password@host:port/database"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Amazon Redshift and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

staging.raw_data · analytics.facts · dimensions.customers
name: raw.redshift_staging.raw_data
type: ingestr

parameters:
  source_connection: redshift
  source_table: 'staging.raw_data'
  destination: bigquery
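
The same asset can be made incremental so each run pulls only new or changed rows instead of the whole table. The two extra parameters below are assumptions based on ingestr's incremental options (incremental_strategy, incremental_key); confirm the exact names your Bruin version accepts.

name: raw.redshift_staging.raw_data
type: ingestr

parameters:
  source_connection: redshift
  source_table: 'staging.raw_data'
  destination: bigquery
  incremental_strategy: merge    # assumed name: upsert changed rows instead of full reloads
  incremental_key: updated_at    # assumed name: column used to find new or updated rows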

Step 3

Add quality checks

Add column-level and custom SQL checks to your Amazon Redshift data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

Validate data freshness on every sync
Ensure IDs are unique across tables
Block bad data before it reaches downstream models
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique

custom_checks:
  - name: freshness check
    query: |
      SELECT MAX(updated_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.redshift_staging.raw_data
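
The same blocks accept other checks. The sketch below layers on an accepted_values column check and a custom check that compares a query result to an expected value; the accepted_values check name, the value field, and the status column are assumptions for illustration, so verify them against your Bruin version's check list.

columns:
  - name: status
    checks:
      - name: accepted_values           # assumed check name: only allow a known set of values
        value: ['active', 'inactive']

custom_checks:
  - name: no future timestamps
    value: 0                            # assumed field: expected result of the query below
    query: |
      SELECT COUNT(*)
      FROM raw.redshift_staging.raw_data
      WHERE updated_at > CURRENT_TIMESTAMP()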

Step 4

Run it

One command. Bruin connects to Amazon Redshift, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops — bad data never reaches downstream models.

Backfill historical data with --start-date (example after the run output)
Schedule with cron or trigger from CI/CD
Full lineage from Amazon Redshift to your dashboards
$ bruin run .
Running pipeline...

  redshift_staging.raw_data
    ✓ Fetched 2,847 new records
    ✓ Quality: id not_null              PASSED
    ✓ Quality: id unique                PASSED
    ✓ Quality: freshness check          PASSED
    ✓ Loaded into bigquery

  Completed in 12s
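
The same command handles backfills: pass an explicit date window and Bruin replays the pipeline for that range. --start-date comes from the list above; the --end-date flag and the date format shown are assumptions, so confirm with bruin run --help. The cron entry is one illustrative way to schedule the run.

$ bruin run --start-date 2024-01-01 --end-date 2024-01-31 .

# illustrative cron entry: run the pipeline every morning at 06:00
0 6 * * * cd /path/to/pipeline && bruin run .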

Ready to connect Amazon Redshift?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.