Project Competition

Build a data engineering project with Bruin and compete for prizes. Vote for your favorites and share the best projects with the community.

Claude Pro

Participation

1 month Claude Pro subscription

Limited to the first 100 submissions

How to qualify

  • Use Bruin for ingestion, transformation, orchestration, and analysis (using the AI data analyst)
  • Post your project in the #projects channel on Bruin Data Community Slack
  • Include a GitHub repo with a README
  • Submit your project via our website

Claude Pro

Top 3 Projects

1 year Claude Pro subscription each

How to qualify

  • Use Bruin for ingestion, transformation, orchestration, and analysis
  • Post in #projects on Bruin Data Community Slack
  • GitHub repo with README
  • Submit via our website
  • Winners are determined by community votes on Bruin Data Community Slack — the projects with the most thumbs-up reactions win

Mac Mini

Outstanding Project

Mac Mini

How to qualify

  • Use Bruin for ingestion, transformation, orchestration, and analysis
  • Post in #projects on Bruin Data Community Slack
  • GitHub repo with README
  • Submit via our website
  • Create a LinkedIn post explaining which Bruin features you used, your design choices, and how Bruin compares to other tools
  • Include screenshots of analysis done with the Bruin AI data analyst
  • Top 10 posts by likes enter a random draw

Deadline: Monday, June 1st, 12:00 UTC

Important: The identity of all participants is subject to verification to ensure fair competition and prevent cheating, plagiarism, and spam.

Note: The free Claude Pro subscription for participation is limited to the first 100 project submissions. After that, you can still compete for the Top 3 and Outstanding Project prizes.

How to Build Your Project

From zero to a complete data project in four steps — plus an optional cloud deployment.

1

Set Up Your Project

  • Install Bruin: curl -LsSf https://getbruin.com/install/cli | sh
  • Initialize a project: bruin init empty my-project
  • Choose your database: DuckDB (local, zero setup) or BigQuery (cloud)
  • Configure your connection in .bruin.yml
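
As a sketch, a minimal .bruin.yml for a local DuckDB connection might look like this — the connection name and database path below are placeholders, so check the quickstart guide for the exact schema your version expects:

```yaml
default_environment: default
environments:
  default:
    connections:
      duckdb:
        # "duckdb-default" and the path are example values; rename to match your project
        - name: duckdb-default
          path: duckdb.db
```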

Quickstart guide →  ·  NYC Taxi Tutorial: Setup →

2

Ingest Your Data

Three ways to get data into your project:

Ingestr YAML Assets

Built-in connectors for 100+ sources. Just define a YAML file.

name: chess.profiles
type: ingestr
parameters:
  source_connection: chess
  source_table: profiles
  destination: duckdb

Ingestr docs →

DuckDB Read from URL

Read CSV or Parquet files directly from public URLs.

SELECT *
FROM read_parquet(
  'https://...data.parquet'
);

DuckDB assets →

Python Extract

Write a Python script that returns a DataFrame.

import pandas as pd

def materialize():
    # url points at any public CSV; Bruin writes the returned
    # DataFrame to your configured destination
    df = pd.read_csv(url)
    return df

Python assets →

Free dataset ideas

  • Chess.com — built-in ingestr source
  • BigQuery public datasets — Wikipedia, GitHub, Stack Overflow
  • NYC Taxi — Parquet files via URL or Python
  • Frankfurter API — exchange rates via Python
  • GitHub Archive — public event data via BigQuery or Parquet
  • Google Sheets — any spreadsheet via ingestr
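
As one sketch of the Python-extract route, here is a materialize() that flattens a Frankfurter-style exchange-rate payload into a tidy DataFrame. The payload below is hardcoded for illustration in the shape the API returns; in a real asset you would fetch it over HTTP first:

```python
import pandas as pd

# Hardcoded sample in the shape of a Frankfurter API response (illustrative values)
SAMPLE = {
    "base": "EUR",
    "date": "2025-05-01",
    "rates": {"USD": 1.13, "GBP": 0.85, "JPY": 163.2},
}

def materialize():
    # Flatten the nested rates dict into one row per currency
    return pd.DataFrame(
        [
            {"base": SAMPLE["base"], "date": SAMPLE["date"], "currency": cur, "rate": rate}
            for cur, rate in SAMPLE["rates"].items()
        ]
    )
```

One row per currency keeps the table easy to join and aggregate in the SQL transform step.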

NYC Taxi Tutorial: Build the Pipeline →

3

Transform with SQL

  • Write SQL assets to clean, join, and aggregate your raw data
  • Materialize results as tables or views for downstream use
  • Add quality checks (not_null, unique, accepted_values) to validate your data
  • Run the pipeline: bruin run .

/* @bruin
name: analytics.monthly_summary
type: duckdb.sql
materialization:
    type: table
@bruin */

SELECT date_trunc('month', created_at) AS month,
       count(*) AS total_records
FROM raw.my_data
GROUP BY 1;
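
To attach the quality checks mentioned above, columns and their checks can be declared inside the same @bruin block. The sketch below reuses the column names from the query; consult the quality checks docs for the full list of supported checks and the exact syntax:

```sql
/* @bruin
name: analytics.monthly_summary
type: duckdb.sql
materialization:
    type: table
columns:
  - name: month
    checks:
      - name: not_null
      - name: unique
  - name: total_records
    checks:
      - name: not_null
@bruin */
```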

SQL assets guide →  ·  Quality checks →  ·  NYC Taxi Tutorial: Build the Pipeline →

4

Analyze with AI

Build a context layer and let AI understand your data:

  • Run bruin ai enhance assets/ to auto-generate descriptions, quality checks, and tags for all your assets
  • Set up Bruin MCP in your IDE so AI agents can query and understand your data:

Cursor / Claude Code

claude mcp add bruin \
  -- bruin mcp

VS Code

"bruin": {
  "command": "bruin",
  "args": ["mcp"]
}

Codex CLI

[mcp_servers.bruin]
command = "bruin"
args = ["mcp"]

  • Ask Cursor, Claude Code, or Codex to analyze your data, find patterns, and generate insights
  • Alternatively: deploy to Bruin Cloud and use the AI Chat or AI Dashboard features for instant analysis

Bruin MCP setup →  ·  AI enhance docs →  ·  NYC Taxi Tutorial: Build with MCP →

5

Deploy to Bruin Cloud (Optional)

Take your pipeline to production with scheduling, monitoring, and AI-powered analysis.

  • Sign up for free at getbruin.com — no credit card required
  • Free tier includes credits to schedule and run your pipelines in the cloud
  • Access the AI Data Analyst — ask questions about your data in natural language from Slack, Teams, or the browser
  • Use the AI Dashboard Builder — generate dashboards with KPIs and charts from a single prompt

Cloud onboarding video →  ·  AI Data Analyst tutorial →  ·  AI Dashboard Builder tutorial →