
Databricks + Bruin

Source · Destination

Ingest data from Databricks or push enriched data back — with quality checks, lineage, and scheduling. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • 100+ sources into Databricks

    Pull from any tool, database, or API directly into Databricks. One YAML file per source, all managed by Bruin.

  • Data quality you can trust

    Column-level and custom SQL checks on any Databricks table. Bad data gets blocked before it reaches dashboards.

  • Full lineage visibility

    Trace data from ingestion through transforms to final reports. When something breaks, find the cause in seconds.

  • SQL + Python in one pipeline

    Build transforms in Databricks with both SQL and Python. Bruin resolves dependencies across languages automatically.

For data & engineering teams

How it works

  • 100+ managed connectors

    Ingest from any source directly into Databricks with one YAML file per source. Bruin manages connections and scheduling.

  • YAML-defined, Git-versioned

    Every pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert.

  • SQL + Python assets

    Build transformation layers in Databricks with SQL and Python. Bruin resolves dependencies and handles materialization (see the sketch after this list).

  • Quality gates between stages

    Quality checks run between ingestion and transformation. Bad data gets blocked before it reaches downstream models.
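For example, a SQL asset and a Python asset can sit side by side in the same pipeline and depend on each other. The sketch below is illustrative only: the asset names and tables are hypothetical, and the embedded @bruin metadata follows Bruin's asset-definition convention, so check the Bruin docs for the exact options your version supports.

-- assets/gold/daily_aggregates.sql (hypothetical SQL asset)
/* @bruin
name: gold.daily_aggregates
type: databricks.sql
materialization:
  type: table
depends:
  - silver.cleaned_data
@bruin */
SELECT event_date, COUNT(*) AS events
FROM silver.cleaned_data
GROUP BY event_date

# assets/gold/publish_metrics.py (hypothetical Python asset)
""" @bruin
name: gold.publish_metrics
depends:
  - gold.daily_aggregates
@bruin """
print("push aggregates to your reporting tool here")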

Before you start

Databricks workspace with SQL endpoint
Personal access token generated
SQL endpoint running (not terminated)
Appropriate permissions on catalog/schema

Step 1

Add your Databricks connection

Bruin connects to Databricks using a personal access token. Add this connection to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • token: Personal access token (use as username)
  • host: Workspace URL
  • port: Port number (usually 443)
  • http_path: SQL endpoint HTTP path
connections:
  databricks:
    type: databricks
    uri: "databricks://token@host:port/http_path"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Databricks and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

bronze.raw_data · silver.cleaned_data · gold.aggregates
name: raw.databricks_bronze.raw_data
type: ingestr

parameters:
  source_connection: databricks
  source_table: 'bronze.raw_data'
  destination: bigquery
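
This asset file sits next to a pipeline.yml that names the pipeline and sets its schedule. A minimal sketch, with placeholder values:

# pipeline.yml (values are placeholders)
name: databricks_ingestion
schedule: daily
start_date: "2024-01-01"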

Step 3

Add quality checks

Add column-level and custom SQL checks to your Databricks data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

Validate data freshness on every sync
Ensure IDs are unique across tables
Block bad data before it reaches downstream models
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique

custom_checks:
  - name: freshness check
    query: |
      SELECT MAX(updated_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.databricks_bronze.raw_data
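
Built-in column checks go beyond not_null and unique. For instance, a hypothetical spend column could be guarded with a positive check so negative ad spend never lands downstream (the column name is illustrative):

columns:
  - name: spend
    type: float
    checks:
      - name: not_null
      - name: positive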

Step 4

Run it

One command. Bruin connects to Databricks, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops — bad data never reaches downstream.

Backfill historical data with --start-date (see the example after the output below)
Schedule with cron or trigger from CI/CD
Full lineage from Databricks to your dashboards
$ bruin run .
Running pipeline...

  databricks_bronze.raw_data
    ✓ Fetched 2,847 new records
    ✓ Quality: campaign_id not_null     PASSED
    ✓ Quality: spend not_null           PASSED
    ✓ Quality: no negative ad spend     PASSED
    ✓ Loaded into bigquery

  Completed in 12s
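
To backfill, pass a date window to the same command; the dates below are placeholders:

$ bruin run --start-date 2024-01-01 --end-date 2024-01-31 .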

Ready to connect Databricks?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.