Google BigQuery + Bruin

Source · Destination

Ingest data from Google BigQuery or push enriched data back — with quality checks, lineage, and scheduling. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • 100+ sources into Google BigQuery

    Pull from any tool, database, or API directly into Google BigQuery. One YAML file per source, all managed by Bruin.

  • Data quality you can trust

    Column-level and custom SQL checks on any Google BigQuery table. Bad data gets blocked before it reaches dashboards.

  • Full lineage visibility

    Trace data from ingestion through transforms to final reports. When something breaks, find the cause in seconds.

  • SQL + Python in one pipeline

    Build transforms in Google BigQuery with both SQL and Python. Bruin resolves dependencies across languages automatically.

For data & engineering teams

How it works

  • 100+ managed connectors

    Ingest from any source directly into Google BigQuery with one YAML file per source. Bruin manages connections and scheduling.

  • YAML-defined, Git-versioned

    Every pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert (see the sketch after this list).

  • SQL + Python assets

    Build transformation layers in Google BigQuery with SQL and Python. Bruin resolves dependencies and handles materialization.

  • Quality gates between stages

    Quality checks run between ingestion and transformation. Bad data gets blocked before it reaches downstream models.
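
To make the YAML-first point concrete, here is a minimal sketch of a pipeline definition. The file name pipeline.yml, the daily schedule, and the start_date key are illustrative assumptions; check the Bruin docs for the exact keys your version supports.

# pipeline.yml: lives at the root of the pipeline folder, next to its assets
name: bigquery-ingestion      # referenced in logs and lineage
schedule: daily               # assumed; cron expressions may also be accepted
start_date: "2024-01-01"      # assumed; earliest date used for backfills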

Before you start

Google Cloud project with BigQuery API enabled
Service account with BigQuery Data Editor and Job User roles
Downloaded service account JSON key file
Dataset created in BigQuery (or permissions to create one)
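
If you are setting these up from scratch, the gcloud commands below cover the service account, roles, and key file. The account name bruin-bq and the placeholder PROJECT_ID are illustrative: substitute your own values.

$ gcloud services enable bigquery.googleapis.com
$ gcloud iam service-accounts create bruin-bq
$ gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:bruin-bq@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataEditor"
$ gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:bruin-bq@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/bigquery.jobUser"
$ gcloud iam service-accounts keys create service-account.json \
    --iam-account=bruin-bq@PROJECT_ID.iam.gserviceaccount.com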

Step 1

Add your Google BigQuery connection

A BigQuery connection requires a Google Cloud project ID and service account credentials. Add the connection to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • project-id: Your Google Cloud project ID
  • credentials_path: Path to service account JSON key file
  • location: Optional dataset location (e.g., US, EU)
connections:
  bigquery:
    type: bigquery
    uri: "bigquery://project-id?credentials_path=/path/to/service-account.json"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Google BigQuery and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

events, users, transactions, products, sessions
name: raw.bigquery_events
type: ingestr

parameters:
  source_connection: bigquery
  source_table: 'events'
  destination: bigquery
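
Step 4 mentions incremental pulls; for large tables you would typically add an incremental key and strategy to the same parameters block. The parameter names below follow ingestr's conventions but are assumptions here; verify them against your Bruin version before relying on them.

name: raw.bigquery_events
type: ingestr

parameters:
  source_connection: bigquery
  source_table: 'events'
  destination: bigquery
  incremental_strategy: merge    # assumed; append, merge, and replace are common options
  incremental_key: updated_at    # assumed; column used to find new or changed rows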

Step 3

Add quality checks

Add column-level and custom SQL checks to your Google BigQuery data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

Validate data freshness on every sync
Ensure IDs are unique across tables
Block bad data before it reaches downstream models
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique

custom_checks:
  - name: freshness check
    query: |
      SELECT MAX(updated_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.bigquery_events
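
Bruin ships more column checks than not_null and unique; the two below (positive and accepted_values) are common ones, and the column names and values shown are illustrative assumptions, so confirm the check names available in your version.

columns:
  - name: amount
    checks:
      - name: positive              # rejects zero or negative values
  - name: status
    checks:
      - name: accepted_values
        value: [active, churned]    # assumed column and values, for illustration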

Step 4

Run it

One command. Bruin connects to Google BigQuery, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops — bad data never reaches downstream.

Backfill historical data with --start-date
Schedule with cron or trigger from CI/CD
Full lineage from Google BigQuery to your dashboards
$ bruin run .
Running pipeline...

  bigquery_events
    ✓ Fetched 2,847 new records
    ✓ Quality: id not_null          PASSED
    ✓ Quality: id unique            PASSED
    ✓ Quality: freshness check      PASSED
    ✓ Loaded into bigquery

  Completed in 12s
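
For backfills, the same command takes a date window. The --start-date flag is mentioned above; --end-date is an assumption here, so check bruin run --help for the exact flags your version supports.

$ bruin run --start-date 2024-01-01 --end-date 2024-01-31 .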

Ready to connect Google BigQuery?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.