Ask the Real Analyst Question
Put the stack to work. Combine FMP fundamentals, Yahoo prices, and FRED macro to answer the kind of question analysts actually get paid for.
What you'll do
Ask your AI analyst the signature question this module was built for - one that requires all three data sources at once. Then learn the habits that get you consistently good answers.
Why this step matters
The whole point of the last five steps - the CLI, the MCP, the ingestion, the staging + reports layers, the context file - was to make this step possible: asking a question that used to take a junior analyst a week, and getting a worked answer in 30 seconds.
You built reports.ticker_quarterly in Step 4 specifically so questions like this one don't require re-deriving TTM FCF margin or aligning fundamentals to prices every session. Your AGENTS.md tells the agent to default to reports.* - so this question is mostly a SELECT against a pre-joined table.
The signature question
Paste this into your AI tool:
Using reports.ticker_quarterly, find tickers (from the 5 I ingested) that meet all of the following over the last 4 quarters available in the table:
- ttm_fcf_margin has improved quarter over quarter (positive fcf_margin_qoq_change in each of the last 4 quarters, or strictly increasing ttm_fcf_margin)
- quarterly_return has underperformed the equal-weighted average across the 5 tickers by more than 10 percentage points cumulatively over the same period
Then split the qualifying set by macro regime using regime_at_qend:
- 'inverted' - yield curve was inverted at the quarter end
- 'normal' - yield curve was positive at the quarter end
For each ticker that qualifies, show: ticker, the 4 quarterly ttm_fcf_margin values, cumulative quarterly_return vs. the peer average, and which regime dominated the window.
Default to reports.ticker_quarterly. Only drop to staging.* or raw.* if you need to validate a number that looks off, and tell me when you do.
Show me the SQL first. Only execute once I approve the plan.
What to look for in the agent's plan
Before approving the SQL, check that the agent:
- Queries reports.ticker_quarterly as the primary source - not raw.* or staging.*
- Uses the pre-computed ttm_fcf_margin column instead of recomputing FCF / revenue from raw
- Uses regime_at_qend for the macro regime split (already calculated in staging)
- Computes the peer average across the 5 tickers within the same query (window function or subquery)
- States explicitly if it does need to drop into staging.* or raw.* - and why
If the agent reaches into raw.fmp_fundamentals to recompute things, push back: "reports.ticker_quarterly already has TTM FCF margin and the regime - use that." This is the discipline that makes the layered pipeline pay off.
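If you want a mental model of what a good plan looks like, the approved query will usually take roughly the shape below. This is a hedged sketch, not the agent's actual output: the reports-layer columns (ttm_fcf_margin, fcf_margin_qoq_change, quarterly_return, regime_at_qend) are the ones this module defines, but the quarter-end date column (quarter_end here) and the assumption that returns are stored as decimal fractions are guesses - adjust to whatever the agent finds in the real schema.

```sql
-- Sketch only: quarter_end and fractional returns are assumptions about the schema.
WITH last4 AS (
    SELECT
        ticker,
        quarter_end,
        ttm_fcf_margin,
        fcf_margin_qoq_change,
        quarterly_return,
        regime_at_qend,
        ROW_NUMBER() OVER (PARTITION BY ticker ORDER BY quarter_end DESC) AS rn
    FROM reports.ticker_quarterly
),
window4 AS (SELECT * FROM last4 WHERE rn <= 4),
peer AS (
    -- equal-weighted average return across the 5 tickers, per quarter
    SELECT quarter_end, AVG(quarterly_return) AS peer_avg_return
    FROM window4
    GROUP BY quarter_end
),
scored AS (
    SELECT
        w.ticker,
        ARRAY_AGG(w.ttm_fcf_margin ORDER BY w.quarter_end)       AS fcf_margins,
        MIN(w.fcf_margin_qoq_change)                              AS worst_qoq_change,
        SUM(w.quarterly_return) - SUM(p.peer_avg_return)          AS cum_return_vs_peers,
        CASE WHEN COUNT(*) FILTER (WHERE w.regime_at_qend = 'inverted') >= 2
             THEN 'inverted' ELSE 'normal' END                    AS dominant_regime
    FROM window4 w
    JOIN peer p USING (quarter_end)
    GROUP BY w.ticker
)
SELECT *
FROM scored
WHERE worst_qoq_change > 0            -- margin improved every quarter
  AND cum_return_vs_peers < -0.10;    -- trailed peers by >10 pts (if returns are percents, use -10)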
Follow-up questions to try
Once the first question runs clean, keep pulling threads with the follow-up prompts below:
For the qualifying tickers, pull operating margin and R&D spend from staging.fundamentals for the same quarters. Does the trend change your interpretation? (Acceptable to drop to staging.* here - these columns aren't surfaced in reports.ticker_quarterly.)
Rerun the analysis using reports.ticker_quarterly but tighten the threshold: require fcf_margin_qoq_change > 0.02 (200bps) in each of the last 4 quarters. Does the set of qualifiers narrow?
Using reports.ticker_quarterly, overlay fed_funds_rate_at_qend for the qualifying tickers. Were they concentrated in periods with rising rates?
For the qualifying tickers, query staging.macro to see what unemployment (unemployment_rate) was doing during their underperformance windows. Any relationship worth investigating?
Each of these takes a real analyst 30–60 minutes to produce manually. Your agent does them in the time it takes to write the prompt.
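As a reference point, the rate-overlay follow-up above usually reduces to a single LAG over the reports table. A minimal sketch, again assuming a quarter_end date column, with placeholder tickers standing in for whatever qualified in the first query:

```sql
SELECT
    ticker,
    quarter_end,
    quarterly_return,
    fed_funds_rate_at_qend,
    fed_funds_rate_at_qend
        - LAG(fed_funds_rate_at_qend) OVER (PARTITION BY ticker ORDER BY quarter_end)
        AS rate_change_qoq                  -- positive = rates rose into that quarter
FROM reports.ticker_quarterly
WHERE ticker IN ('TICKER_A', 'TICKER_B')    -- placeholder: the tickers that qualified above
ORDER BY ticker, quarter_end;
```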
Save the good queries
When the agent writes a SQL query you like, ask it to save it as a Bruin SQL asset in the reports layer:
Take that query and save it as stock-analyst-101/assets/reports/fcf_vs_price_underperformers.sql materialized as reports.fcf_vs_price_underperformers. Use materialization view (cheap, always re-derives from reports.ticker_quarterly). Add a top-level description, column descriptions, and a tier:reports tag.
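The file the agent writes should look roughly like this - a sketch only, assuming a local DuckDB destination and Bruin's embedded-YAML asset header; the exact type value and column list depend on your pipeline, so verify the keys against the Bruin docs before committing it:

```sql
/* @bruin
name: reports.fcf_vs_price_underperformers
type: duckdb.sql            # assumption: local DuckDB destination; use your platform's asset type
materialization:
  type: view                # cheap, always re-derived from reports.ticker_quarterly
description: Tickers whose TTM FCF margin improved while their price underperformed the peer average.
columns:
  - name: ticker
    type: string
    description: Ticker symbol of the qualifying company
tags:
  - tier:reports
@bruin */

SELECT ...                  -- the approved query, reading only from reports.ticker_quarterly
```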
Now it's part of your pipeline. On the next bruin run, the view is rebuilt from fresh data. You're not just asking questions - you're growing your reports layer with every analysis you save.
Habits for getting consistently good answers
- Ask for the plan before the result. Prompts that include "show me the SQL first and explain the logic" catch mistakes cheaply.
- Be specific about time windows. "Recent" is ambiguous; "last 4 fiscal quarters as of today" is not.
- Name the comparison. "Underperformed" vs. what benchmark? State it. Otherwise the agent picks one and you don't know which.
- When in doubt, ask for counts first. "How many rows satisfy X?" before "here are all the rows satisfying X" - saves you from chasing ghost results. (A one-line example follows this list.)
- Iterate on AGENTS.md. Every time the agent gets something wrong for a domain reason, add a line to AGENTS.md so it doesn't repeat the mistake.
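That counts-first habit can be as cheap as a one-liner against the columns this module already exposes:

```sql
-- How many ticker-quarters improved FCF margin at all? Check this before asking for full rows.
SELECT COUNT(*) AS qualifying_rows
FROM reports.ticker_quarterly
WHERE fcf_margin_qoq_change > 0;
```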
What just happened
You built a complete stock-analyst workstation - ingest, database, context, agent - on a single laptop, in one sitting, without writing SQL or Python yourself. The stack is yours to keep. Swap in different tickers, add new FRED series, plug in a second fundamentals source. Every addition follows the same pattern: prompt the agent, it runs Bruin, the data lands.
Where to go next
- Build an AI Data Analyst - the full deep dive on the same MCP + AGENTS.md pattern, covering BigQuery/Redshift/Postgres as destinations
- Using Bruin Templates - skip scaffolding entirely with pre-built pipelines
- Local AI Analyst for the Stock Market (video) - the BigQuery + full S&P 500 version of this module