
Tableau + Bruin

Source

Ingest Tableau data into your warehouse with incremental loading, quality checks, and full lineage. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • Dashboards powered by clean data

    Feed Tableau from Bruin pipelines with built-in quality checks. No more stale or incorrect dashboards.

  • End-to-end lineage to Tableau

    Trace data from raw sources through every transformation to Tableau. Find the root cause of wrong numbers in seconds.

  • Scheduled data refresh

    Bruin pipelines refresh Tableau data on a cron. Dependencies resolve automatically — Tableau only sees complete data.

  • Multi-source, one destination

    Combine 100+ sources into clean models that power Tableau. Bruin handles the pipeline, Tableau handles the visuals.

For data & engineering teams

How it works

  • Dependency-aware scheduling

    Bruin ensures upstream transforms complete before Tableau gets new data. No more stale or partial dashboards.

  • YAML-defined, Git-versioned

    Every pipeline feeding Tableau is a YAML file (see the sketch after this list). Review in PRs, deploy with CI/CD, roll back with git revert.

  • Quality gates before visualization

    Quality checks run before data reaches Tableau. If checks fail, Tableau keeps showing the last known good data.

  • End-to-end lineage

    Trace data from raw sources through transforms to Tableau dashboards. Find root causes in seconds, not hours.
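
The pipelines described in these bullets are plain files in a Bruin project: a pipeline.yml that names the pipeline and sets its schedule, plus one asset definition per table (Step 2 below). The following is a minimal, illustrative pipeline.yml; the field names and accepted schedule values are assumptions, so check the Bruin docs for your version.

# pipeline.yml (minimal sketch; field names assumed, verify against your Bruin release)
name: tableau_ingestion        # pipeline identifier used in logs and lineage
schedule: daily                # or a cron expression such as "0 6 * * *"
start_date: "2024-01-01"       # illustrative start for scheduled runs and backfills

# Asset files such as raw.tableau_workbooks (Step 2) live in this pipeline's assets/
# folder; Bruin resolves their dependencies and runs them in order.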

Before you start

Tableau Server or Cloud account
Personal access token created
Site admin or viewer role

Step 1

Add your Tableau connection

Connect using a Tableau personal access token. Add this to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • token_name: Tableau personal access token name
  • token_secret: Tableau personal access token secret
  • server_url: Tableau Server or Cloud URL
  • site_id: Tableau site content URL (the site name as it appears in URLs)
connections:
  tableau:
    type: tableau
    uri: "tableau://token_name:token_secret@server_url/site_id"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Tableau and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

workbooks, views, datasources, projects, users, subscriptions
name: raw.tableau_workbooks
type: ingestr

parameters:
  source_connection: tableau
  source_table: 'workbooks'
  destination: bigquery
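
Each table you pull from Tableau gets its own asset file with the same shape. For example, ingesting the views table alongside workbooks would look like this (the asset name and destination schema are illustrative):

name: raw.tableau_views
type: ingestr

parameters:
  source_connection: tableau
  source_table: 'views'
  destination: bigquery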

Step 3

Add quality checks

Add column-level and custom SQL checks to your Tableau data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

Validate data freshness before it reaches dashboards
Ensure IDs are unique across syncs
Block stale data from appearing in visualizations
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique

custom_checks:
  - name: data is fresh
    query: |
      SELECT MAX(updated_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.tableau_workbooks
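
Custom checks are plain SQL against the landed table, so you can encode any rule your dashboards depend on. As one more illustrative check in the same pattern, this fails the pipeline if a sync lands an empty table:

custom_checks:
  - name: workbooks table is not empty
    query: |
      SELECT COUNT(*) > 0
      FROM raw.tableau_workbooks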

Step 4

Run it

One command. Bruin connects to Tableau, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops — bad data never reaches downstream.

Backfill historical data with --start-date
Schedule with cron or trigger from CI/CD
Full lineage from Tableau to your dashboards
$ bruin run .
Running pipeline...

  tableau_workbooks
    ✓ Fetched 2,847 new records
    ✓ Quality: id not_null           PASSED
    ✓ Quality: id unique             PASSED
    ✓ Quality: data is fresh         PASSED
    ✓ Loaded into bigquery

  Completed in 12s
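
If you schedule runs from CI/CD rather than a cron host, a workflow along these lines captures the idea. This GitHub Actions sketch is hypothetical: the workflow name, schedule, and installation step are assumptions to adapt to your setup, and --start-date is the same backfill flag mentioned above.

# .github/workflows/bruin.yml (hypothetical sketch; adapt names and the install step)
name: run-tableau-pipeline
on:
  schedule:
    - cron: "0 6 * * *"       # daily at 06:00 UTC
  workflow_dispatch: {}       # allow manual runs, e.g. for backfills

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the Bruin CLI here, following Bruin's installation docs for CI.
      - name: Run pipeline
        run: bruin run .
        # For a backfill, pass a start date instead, for example:
        #   bruin run . --start-date 2024-01-01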

Ready to connect Tableau?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.