
GCP Dataproc Serverless + Bruin


Ingest data from GCP Dataproc Serverless or push enriched data back — with quality checks, lineage, and scheduling. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • 100+ sources into GCP Dataproc Serverless

    Pull from any tool, database, or API directly into GCP Dataproc Serverless. One YAML file per source, all managed by Bruin.

  • Data quality you can trust

    Column-level and custom SQL checks on any GCP Dataproc Serverless table. Bad data gets blocked before it reaches dashboards.

  • Full lineage visibility

    Trace data from ingestion through transforms to final reports. When something breaks, find the cause in seconds.

  • SQL + Python in one pipeline

    Build transforms in GCP Dataproc Serverless with both SQL and Python. Bruin resolves dependencies across languages automatically.

For data & engineering teams

How it works

  • 100+ managed connectors

    Ingest from any source directly into GCP Dataproc Serverless with one YAML file per source. Bruin manages connections and scheduling.

  • YAML-defined, Git-versioned

    Every pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert.

  • SQL + Python assets

    Build transformation layers in GCP Dataproc Serverless with SQL and Python. Bruin resolves dependencies and handles materialization; see the sketch after this list.

  • Quality gates between stages

    Quality checks run between ingestion and transformation. Bad data gets blocked before it reaches downstream models.
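
To make the cross-language dependency model concrete, here is a minimal sketch of a downstream SQL asset that consumes the table landed in Step 2 below. It assumes a BigQuery destination (the bq.sql asset type); the asset name and query are illustrative, and a Python asset would declare the same header inside its module docstring.

/* @bruin
name: reports.daily_records
type: bq.sql
materialization:
  type: table
depends:
  - raw.dataproc_serverless_spark_tables
@bruin */

SELECT
  DATE(updated_at) AS day,
  COUNT(*) AS records
FROM raw.dataproc_serverless_spark_tables
GROUP BY 1

Because the dependency is declared in the header, Bruin runs this asset only after the ingestion asset and its quality checks succeed.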

Before you start

GCP project
Service account with Dataproc permissions

Step 1

Add your GCP Dataproc Serverless connection

Connect using GCP credentials and Dataproc Serverless configuration. Add this to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.

connections:
  dataproc_serverless:
    type: dataproc-serverless
    uri: "dataproc-serverless://project_id/region?credentials=/path/to/key.json"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from GCP Dataproc Serverless and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

spark_tables, bigquery_tables, hive_metastore_tables

name: raw.dataproc_serverless_spark_tables
type: ingestr

parameters:
  source_connection: dataproc_serverless
  source_table: 'spark_tables'
  destination: bigquery
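
For incremental loads instead of full refreshes, ingestr-backed assets accept incremental options under parameters. A sketch, assuming the source table exposes an updated_at timestamp column (the incremental_strategy and incremental_key parameters mirror ingestr's incremental options):

name: raw.dataproc_serverless_spark_tables
type: ingestr

parameters:
  source_connection: dataproc_serverless
  source_table: 'spark_tables'
  destination: bigquery
  # merge new and changed rows, keyed on updated_at
  # (assumes the source exposes an updated_at column)
  incremental_strategy: merge
  incremental_key: updated_at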

Step 3

Add quality checks

Add column-level and custom SQL checks to your GCP Dataproc Serverless data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

Validate data freshness on every sync
Ensure IDs are unique across tables
Block bad data before it reaches downstream models
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique

custom_checks:
  - name: freshness check
    query: |
      SELECT MAX(updated_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.dataproc_serverless_spark_tables
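
Beyond not_null and unique, Bruin ships further column checks, and a custom check can compare its query result against an expected value. A sketch, assuming a status column exists on the table (the column name and accepted values are illustrative):

columns:
  - name: status
    checks:
      - name: accepted_values
        value: ["active", "completed", "failed"]

custom_checks:
  - name: no duplicate ids
    # expected result is 0: every id appears exactly once
    query: |
      SELECT COUNT(id) - COUNT(DISTINCT id)
      FROM raw.dataproc_serverless_spark_tables
    value: 0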

Step 4

Run it

One command. Bruin connects to GCP Dataproc Serverless, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If any check fails, the run stops before bad data reaches downstream models.

Backfill historical data with --start-date
Schedule with cron or trigger from CI/CD
Full lineage from GCP Dataproc Serverless to your dashboards
$ bruin run .
Running pipeline...

  raw.dataproc_serverless_spark_tables
    ✓ Fetched 2,847 new records
    ✓ Quality: id not_null            PASSED
    ✓ Quality: id unique              PASSED
    ✓ Quality: freshness check        PASSED
    ✓ Loaded into bigquery

  Completed in 12s
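
For the backfill and scheduling bullets above, the invocations look roughly like this; the --start-date flag is the one mentioned above, while the date format and the cron schedule are illustrative:

# backfill history from a fixed date
$ bruin run --start-date 2025-01-01 .

# or schedule a daily 06:00 run via cron
0 6 * * * cd /path/to/pipeline && bruin run .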

Ready to connect GCP Dataproc Serverless?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.