Connect your DuckDB
Point .bruin.yml at a DuckDB file you already have, then verify the connection with dac connections and run an ad-hoc query through dac query.

What you'll do
Open the .bruin.yml that dac init generated, repoint the local_duckdb connection at your existing DuckDB file, and verify the connection with dac connections and dac query.
Why this matters
DAC reuses Bruin CLI connections - the .bruin.yml you edit here is the same file the rest of the Bruin toolchain reads. Once a connection is wired in, every dashboard, query, and validation step picks it up automatically.
Instructions
1. Look at the connection block
cat .bruin.yml
The starter ships with a placeholder DuckDB connection:
default_environment: default
environments:
  default:
    connections:
      duckdb:
        - name: local_duckdb
          path: data/dac-demo.duckdb
          read_only: true
2. Point it at your DuckDB
Edit path: to wherever your DuckDB file lives. An absolute path works; a path relative to the project root works too. Leave read_only: true for dashboards - DAC only needs to SELECT.
duckdb:
  - name: local_duckdb
    path: /Users/you/data/warehouse.duckdb   # <- your file
    read_only: true
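No DuckDB file handy yet? Here is a minimal sketch that creates one with sample data via the duckdb CLI (assuming it is installed; the orders table and its rows are illustrative, not part of the starter):

duckdb /Users/you/data/warehouse.duckdb \
  "CREATE OR REPLACE TABLE orders AS
   SELECT 'EMEA' AS region, 120.50 AS amount
   UNION ALL SELECT 'APAC', 98.25
   UNION ALL SELECT 'AMER', 210.00;"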
Tip
For other warehouses (Postgres, Snowflake, BigQuery, Redshift, ClickHouse, MySQL...) just add a sibling block under connections:. DAC supports everything the Bruin CLI does.
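For instance, a Postgres connection could sit next to the DuckDB one like this (the field names below are an assumption based on common Bruin connection settings - check the Bruin CLI docs for the exact schema of your warehouse):

connections:
  duckdb:
    - name: local_duckdb
      path: /Users/you/data/warehouse.duckdb
      read_only: true
  postgres:
    - name: analytics_pg        # illustrative name
      host: localhost
      port: 5432
      username: dac_reader      # assumed fields - verify against the Bruin docs
      password: your-password
      database: analytics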
3. Verify the connection
dac connections --dir .
You should see:
NAME          TYPE    STATUS
local_duckdb  duckdb  ✓ connected
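If the STATUS column shows an error instead, double-check the path in .bruin.yml, then open the file directly to confirm it is a valid DuckDB database - a quick sanity check, again assuming the duckdb CLI is installed:

duckdb -readonly /Users/you/data/warehouse.duckdb "SHOW TABLES;"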
4. Run an ad-hoc query
Pick any table in your database. The example below assumes an orders table with region and amount columns - swap in your own:
dac query "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY 1 ORDER BY 2 DESC" \
--connection local_duckdb --dir .
DAC prints the result as a clean table. Use --output json or --output csv when you want to pipe it elsewhere.
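For example, here is a quick sketch of piping the same query elsewhere (jq and the redirect target are assumptions; the exact JSON shape depends on DAC's output):

# Pretty-print the rows as JSON (assumes jq is installed)
dac query "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY 1 ORDER BY 2 DESC" \
  --connection local_duckdb --dir . --output json | jq .

# Or capture CSV for a spreadsheet
dac query "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY 1 ORDER BY 2 DESC" \
  --connection local_duckdb --dir . --output csv > revenue.csv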
What just happened
You proved the connection works end to end - from .bruin.yml through dac query to real rows in DuckDB. With a live connection in place, the next step is to put a dashboard on top of it.