PostgreSQL + Bruin

Source & Destination

Ingest data from PostgreSQL or push enriched data back, with quality checks, lineage, and scheduling. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • Real-time warehouse sync

    PostgreSQL tables replicate to your warehouse continuously. Analytics teams work with fresh data, not yesterday's export.

  • Catch issues at the source

    Quality checks validate PostgreSQL data as it replicates. Null IDs, duplicate records, and schema drift get caught early.

  • Multi-source joins

    Combine PostgreSQL with SaaS data, APIs, and other databases in your warehouse. One Bruin pipeline handles it all.

  • No untracked scripts

    Replication is defined in YAML, reviewed in PRs, and deployed with CI/CD. No more mystery cron jobs.

For data & engineering teams

How it works

  • CDC with merge strategy

    Bruin handles change data capture from PostgreSQL with deduplication. Schema changes are detected and handled automatically. A sketch of a merge-based ingestion asset follows this list.

  • YAML-defined, Git-versioned

    Your PostgreSQL replication is a YAML file. Review in PRs, deploy with CI/CD. No more untracked database scripts.

  • Row-level quality checks

    Validate primary keys, foreign keys, and referential integrity on every sync. Catch corruption at the source.

  • Multi-source pipelines

    Combine PostgreSQL with SaaS APIs and other databases in one pipeline. Bruin resolves cross-source dependencies.
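
The merge-based ingestion mentioned above can be expressed directly in the asset YAML. The sketch below reuses the ingestr asset type shown in Step 2 and adds incremental_strategy and incremental_key parameters that mirror ingestr's incremental loading options; those parameter names and the updated_at cursor column are assumptions, so confirm them against the Bruin and ingestr docs for your version.

name: raw.postgresql_public.orders
type: ingestr

parameters:
  source_connection: postgresql
  source_table: 'public.orders'
  destination: bigquery
  # Assumed parameter names mirroring ingestr's incremental options;
  # updated_at is a placeholder cursor column in the source table.
  incremental_strategy: merge
  incremental_key: updated_at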

Before you start

PostgreSQL server accessible from your network
Database user with appropriate permissions
pg_hba.conf configured to allow connections
Firewall rules allowing port 5432 (or custom port)
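
Before adding the connection to Bruin, it can help to confirm the checklist above with a one-off psql call using the same credentials and SSL mode you plan to configure; the host, user, and database values here are placeholders.

$ psql "postgresql://username:password@host:5432/database?sslmode=require" -c "SELECT 1"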

Step 1

Add your PostgreSQL connection

The PostgreSQL connection uses the standard connection string (URI) format. Add this to your Bruin environment file; credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • username: Database user
  • password: User password
  • host: Database server hostname or IP
  • port: Server port (default 5432)
  • database: Database name
  • sslmode: SSL mode (disable, require, verify-ca, verify-full)
connections:
  postgresql:
    type: postgresql
    uri: "postgresql://username:password@host:port/database?sslmode=disable"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from PostgreSQL and where to land it. This file lives in your Git repo: reviewable, version-controlled, and deployable with CI/CD.

Available tables

public.users, public.orders, public.products, analytics.events
name: raw.postgresql_public.users
type: ingestr

parameters:
  source_connection: postgresql
  source_table: 'public.users'
  destination: bigquery
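
Each table you replicate gets its own small asset file following the same pattern. For example, the analytics.events table listed above could be ingested as shown below; the asset name simply follows the raw.postgresql_<schema>.<table> convention from the example, so adjust it to your own warehouse layout.

name: raw.postgresql_analytics.events
type: ingestr

parameters:
  source_connection: postgresql
  source_table: 'analytics.events'
  destination: bigquery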

Step 3

Add quality checks

Add column-level and custom SQL checks to your PostgreSQL data. If a check fails, the pipeline stops, and bad data never reaches downstream models or dashboards.

Validate row counts are within expected range
Ensure primary keys are unique and not null
Catch schema drift with freshness checks
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique
  - name: created_at
    checks:
      - name: not_null

custom_checks:
  - name: row count within expected range
    query: |
      SELECT COUNT(*) BETWEEN 1 AND 10000000
      FROM raw.postgresql_public.users
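
The list above also mentions freshness checks. One way to approximate a freshness guard with the same custom-check pattern is a query over created_at; the one-day window and the TIMESTAMP_SUB function assume a BigQuery destination (as in Step 2) and a TIMESTAMP-typed column, so adjust both for your warehouse.

custom_checks:
  - name: data is fresh (rows created within the last day)
    query: |
      SELECT MAX(created_at) >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
      FROM raw.postgresql_public.users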

Step 4

Run it

One command. Bruin connects to PostgreSQL, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops and bad data never moves downstream.

Backfill historical data with --start-date
Schedule with cron or trigger from CI/CD (examples after the run output below)
Full lineage from PostgreSQL to your dashboards
$ bruin run .
Running pipeline...

  postgresql_public.users
    ✓ Fetched 2,847 new records
    ✓ Quality: id not_null              PASSED
    ✓ Quality: id unique                PASSED
    ✓ Quality: created_at not_null      PASSED
    ✓ Quality: row count within range   PASSED
    ✓ Loaded into bigquery

  Completed in 12s
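
For the backfill and scheduling options listed above, the commands follow the same pattern; the date, schedule, and path below are placeholders, and the exact flag format is worth confirming with bruin run --help.

$ bruin run . --start-date 2024-01-01
# Example crontab entry: run the pipeline at the top of every hour
0 * * * * cd /path/to/pipeline && bruin run .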

Ready to connect PostgreSQL?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.