
Google Analytics + Bruin

Source

Ingest Google Analytics data into your warehouse with incremental loading, quality checks, and full lineage. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • Analysis beyond built-in reports

    Join Google Analytics behavioral data with revenue, support, and CRM data. Answer questions Google Analytics alone can't.

  • Trusted behavioral data

    Quality checks catch tracking gaps, duplicate events, and missing timestamps before they corrupt your models.

  • Self-serve for analysts

    Google Analytics data lands in your warehouse where analysts already work. No more exporting, no more waiting.

  • Real user journeys

    Combine Google Analytics events with purchase and support data to see the full customer journey, not just the product funnel.

For data & engineering teams

How it works

  • Event schema validation

    Check for null event IDs, missing timestamps, and duplicate events on every sync. Catch tracking issues at ingestion.

  • YAML-defined, Git-versioned

    Your Google Analytics pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert.

  • SQL + Python transforms

    Transform raw Google Analytics events into funnels, cohorts, and user journeys with SQL or Python, in the same pipeline.

  • Dependency-aware scheduling

    Bruin resolves pipeline dependencies automatically, so transforms only run after Google Analytics data has landed (see the sketch after this list).
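A minimal sketch of how a downstream transform plugs in: Bruin SQL assets carry their metadata and dependencies in a comment header, and the scheduler won't run them until every upstream asset has finished. The asset name, bq.sql type, and roll-up query here are illustrative, not part of this integration.

/* @bruin
name: analytics.daily_sessions     # hypothetical downstream asset
type: bq.sql
depends:
  - raw.googleanalytics_sessions   # the ingestion asset from Step 2
@bruin */

-- Illustrative daily roll-up of raw Google Analytics events
SELECT
  DATE(event_timestamp)    AS event_date,
  COUNT(*)                 AS events,
  COUNT(DISTINCT event_id) AS unique_events
FROM raw.googleanalytics_sessions
GROUP BY 1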

Before you start

  • Google Analytics 4 property
  • Service account with Analytics API access

Step 1

Add your Google Analytics connection

Connect using a Google Analytics service account. Add the snippet below to your Bruin environment file; credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • credentials_path: path to your Google service account JSON file
  • property_id: your Google Analytics 4 property ID
connections:
  googleanalytics:
    type: googleanalytics
    uri: "google_analytics://?credentials_path=<credentials_path>&property_id=<property_id>"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Google Analytics and where to land it. This file lives in your Git repo: reviewable, version-controlled, and deployable with CI/CD.

Available tables

sessions, users, events, conversions, audience_demographics
name: raw.googleanalytics_sessions
type: ingestr

parameters:
  source_connection: googleanalytics
  source_table: 'sessions'
  destination: bigquery
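
The intro above mentions incremental loading; if your Bruin version exposes ingestr's incremental options as asset parameters, you can make the load explicit. The strategy and cursor column below are assumptions to adapt, not defaults of this source:

name: raw.googleanalytics_sessions
type: ingestr

parameters:
  source_connection: googleanalytics
  source_table: 'sessions'
  destination: bigquery
  # Assumption: merge new rows using a timestamp cursor; verify the
  # column name against the actual table before relying on it.
  incremental_strategy: merge
  incremental_key: event_timestamp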

Step 3

Add quality checks

Add column-level and custom SQL checks to your Google Analytics data. If a check fails, the pipeline stops; bad data never reaches downstream models or dashboards.

  • Catch duplicate events and missing timestamps
  • Validate event freshness; stale data gets flagged
  • Ensure event IDs are unique across syncs
columns:
  - name: event_id
    checks:
      - name: not_null
      - name: unique
  - name: event_timestamp
    checks:
      - name: not_null

custom_checks:
  - name: data is fresh
    query: |
      SELECT MAX(event_timestamp) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.googleanalytics_sessions
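
The "unique across syncs" point can also be written as a custom SQL check. This sketch assumes Bruin compares the query's single-value result against value (here, zero duplicate IDs expected):

custom_checks:
  - name: no duplicate event IDs
    value: 0
    query: |
      SELECT COUNT(*)
      FROM (
        SELECT event_id
        FROM raw.googleanalytics_sessions
        GROUP BY event_id
        HAVING COUNT(*) > 1
      )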

Step 4

Run it

One command. Bruin connects to Google Analytics, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops; bad data never reaches downstream.

  • Backfill historical data with --start-date
  • Schedule with cron or trigger from CI/CD
  • Full lineage from Google Analytics to your dashboards
$ bruin run .
Running pipeline...

  googleanalytics_sessions
    ✓ Fetched 2,847 new records
    ✓ Quality: event_id not_null        PASSED
    ✓ Quality: event_id unique          PASSED
    ✓ Quality: event_timestamp not_null PASSED
    ✓ Quality: data is fresh            PASSED
    ✓ Loaded into bigquery

  Completed in 12s
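
And the backfill and scheduling options from the list above, spelled out (the start date and pipeline path are placeholders):

# Backfill history from a fixed start date
$ bruin run --start-date 2024-01-01 .

# Cron entry for a daily 06:00 run
0 6 * * * cd /path/to/pipeline && bruin run .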

Ready to connect Google Analytics?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.