AWS Athena + Bruin
Push clean data from your warehouse into AWS Athena with quality gates, scheduling, and full lineage. Defined in YAML, version-controlled in Git.
For business teams
What you get
100+ sources into AWS Athena
Pull from any tool, database, or API directly into AWS Athena. One YAML file per source, all managed by Bruin.
Data quality you can trust
Column-level and custom SQL checks on any AWS Athena table. Bad data gets blocked before it reaches dashboards.
Full lineage visibility
Trace data from ingestion through transforms to final reports. When something breaks, find the cause in seconds.
SQL + Python in one pipeline
Build transforms in AWS Athena with both SQL and Python. Bruin resolves dependencies across languages automatically.
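For example, a Python step declares its pipeline metadata in a comment header at the top of the script. In the sketch below the asset name, its upstream, and the script body are hypothetical, and the header fields should be checked against Bruin's Python asset docs:
""" @bruin
name: mart.spend_report
type: python
depends:
  - raw.events
@bruin """

# Ordinary Python below the header; Bruin runs this script only after
# its upstream assets have completed successfully.
print("building mart.spend_report")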
For data & engineering teams
How it works
100+ managed connectors
Ingest from any source directly into AWS Athena with one YAML file per source. Bruin manages connections and scheduling.
YAML-defined, Git-versioned
Every pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert.
SQL + Python assets
Build transformation layers in AWS Athena with SQL and Python. Bruin resolves dependencies and handles materialization; a minimal SQL asset is sketched below.
Quality gates between stages
Quality checks run between ingestion and transformation. Bad data gets blocked before it reaches downstream models.
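As an illustration, a transform asset is a SQL file with a small metadata header. The asset and table names here are hypothetical, and the athena.sql type and @bruin block follow Bruin's documented asset format, so confirm the details for your version:
/* @bruin
name: mart.daily_events
type: athena.sql
materialization:
  type: table
depends:
  - raw.events
columns:
  - name: event_date
    checks:
      - name: not_null
@bruin */

SELECT date(created_at) AS event_date, count(*) AS events
FROM raw.events
GROUP BY 1
Because the check is declared on the asset itself, a not_null failure on event_date stops the run before anything downstream of mart.daily_events executes.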
Before you start
Step 1
Add your AWS Athena connection
Connect using AWS credentials and S3 bucket configuration. Add this to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.
Parameters
bucket: S3 bucket name for storing Parquet files
access_key_id: AWS access key ID for authentication
secret_access_key: AWS secret access key for authentication
region_name: AWS region for the Athena service and S3 buckets
workgroup: Athena workgroup name
profile: AWS profile name to use
connections:
  athena:
    type: athena
    uri: "athena://?bucket=<your-destination-bucket>&access_key_id=<your-aws-access-key-id>&secret_access_key=<your-aws-secret-access-key>&region_name=<your-aws-region>"
Step 2
Create your pipeline
Define a YAML asset that tells Bruin what to pull from AWS Athena and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.
name: raw.athena_data
type: ingestr
parameters:
  source_connection: athena
  source_table: 'data'
  destination: bigquery
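This asset file typically sits in an assets/ folder next to a pipeline.yml that names and schedules the pipeline. A minimal sketch follows, with the pipeline name, schedule, and start date as illustrative values and the field names as we understand Bruin's pipeline format:
name: athena_ingestion
schedule: daily
start_date: "2024-01-01"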
Step 3
Add quality checks
Add column-level and custom SQL checks to your AWS Athena data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique
custom_checks:
  - name: freshness check
    query: |
      SELECT MAX(updated_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.athena_data
Step 4
Run it
One command. Bruin connects to AWS Athena, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops — bad data never reaches downstream.
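To backfill or pin the run to an explicit window, the CLI also accepts a date range; the flag names below are the Bruin run options as we recall them, so confirm with bruin run --help:
$ bruin run --start-date 2024-01-01 --end-date 2024-01-31 .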
$ bruin run .
Running pipeline...
athena_data
✓ Fetched 2,847 new records
✓ Quality: id not_null PASSED
✓ Quality: id unique PASSED
✓ Quality: freshness check PASSED
✓ Loaded into bigquery
Completed in 12s
Other Data Warehouse integrations
Ready to connect AWS Athena?
Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.