
Frankfurter + Bruin

Source

Ingest Frankfurter data into your warehouse with incremental loading, quality checks, and full lineage. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • API data, on schedule

    Frankfurter data lands in your warehouse automatically. No scripts to maintain, no pagination to handle.

  • Only fetch what changed

    Incremental sync means no re-processing. Bruin tracks watermarks so you only get new and updated records.

  • Catch API changes early

    Quality checks validate response data on every sync. Schema changes or missing fields get caught before they break models.

  • Transform in the same pipeline

    Reshape Frankfurter API data with SQL or Python. Compute metrics, normalize schemas, and build models — all version-controlled.
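The incremental-sync idea above can be sketched in a few lines of plain Python. This is an illustration of watermark-based fetching, not Bruin's internal implementation; the `fetch_since` helper and the record shape are hypothetical.

```python
from datetime import date

# Hypothetical upstream records, keyed by date
# (Frankfurter publishes one rate set per business day).
ALL_RECORDS = [
    {"date": date(2024, 1, 1), "base": "EUR", "usd": 1.10},
    {"date": date(2024, 1, 2), "base": "EUR", "usd": 1.11},
    {"date": date(2024, 1, 3), "base": "EUR", "usd": 1.12},
]

def fetch_since(watermark):
    """Return only records newer than the stored watermark."""
    return [r for r in ALL_RECORDS if watermark is None or r["date"] > watermark]

# First run: no watermark yet, so everything is fetched.
watermark = None
batch = fetch_since(watermark)
assert len(batch) == 3
watermark = max(r["date"] for r in batch)  # persist the new high-water mark

# Second run: nothing new upstream, so nothing is re-fetched.
assert fetch_since(watermark) == []
```

On each run, only records past the persisted watermark are pulled, which is why re-runs stay cheap.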

For data & engineering teams

How it works

  • Managed pagination & retries

    Bruin handles Frankfurter API pagination, rate limiting, and retries. You define the source — Bruin does the rest.

  • YAML-defined, Git-versioned

    Your Frankfurter pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert.

  • Incremental with watermarks

    Bruin tracks cursor positions and watermarks. Only new and updated Frankfurter records get fetched on each run.

  • Schema validation on responses

    Quality checks validate Frankfurter API response structure on every sync. Catch breaking API changes early.
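The response-structure validation described above can be illustrated with a minimal check. This is a sketch, not a Bruin API: `validate_response` is a hypothetical helper, though the field names match the shape of Frankfurter's `/latest` endpoint response.

```python
# Fields the Frankfurter /latest endpoint is expected to return.
REQUIRED_FIELDS = {"amount", "base", "date", "rates"}

def validate_response(payload: dict) -> None:
    """Fail fast if the API response is missing expected fields."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"Frankfurter response missing fields: {sorted(missing)}")

# A well-formed payload passes silently.
validate_response({"amount": 1.0, "base": "EUR", "date": "2024-01-03",
                   "rates": {"USD": 1.12}})

# A breaking API change (e.g. a renamed field) is caught before it
# reaches downstream models.
try:
    validate_response({"amount": 1.0, "base": "EUR", "date": "2024-01-03"})
except ValueError as e:
    print(e)  # reports the missing 'rates' field
```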

Before you start

None. Frankfurter is a public API, so no credentials are needed.

Step 1

Add your Frankfurter connection

Frankfurter is a public API, so no authentication is required. Add this to your Bruin environment file; the connection is referenced by name in your pipeline YAML.

connections:
  frankfurter:
    type: frankfurter
    uri: "frankfurter://"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Frankfurter and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

exchange_rates
currencies
name: raw.frankfurter_exchange_rates
type: ingestr

parameters:
  source_connection: frankfurter
  source_table: 'exchange_rates'
  destination: bigquery
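To make the landed data concrete: Frankfurter's `/latest` response nests all rates under a single `rates` map, which an ingestion step typically flattens into one row per currency pair. The sketch below assumes a hypothetical row shape; the actual columns Bruin produces may differ.

```python
def flatten_rates(payload: dict) -> list[dict]:
    """Flatten the nested `rates` map into one row per currency pair
    (illustrative row shape, not the exact ingested schema)."""
    return [
        {"date": payload["date"], "base": payload["base"],
         "currency": currency, "rate": rate}
        for currency, rate in sorted(payload["rates"].items())
    ]

rows = flatten_rates({"amount": 1.0, "base": "EUR", "date": "2024-01-03",
                      "rates": {"USD": 1.12, "GBP": 0.86}})
assert rows[0] == {"date": "2024-01-03", "base": "EUR",
                   "currency": "GBP", "rate": 0.86}
```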

Step 3

Add quality checks

Add column-level and custom SQL checks to your Frankfurter data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

  • Validate API data freshness on every sync
  • Ensure record IDs are unique across fetches
  • Catch missing fields from API response changes
columns:
  - name: id
    checks:
      - name: not_null
      - name: unique
  - name: fetched_at
    checks:
      - name: not_null

custom_checks:
  - name: API data is fresh
    query: |
      SELECT MAX(fetched_at) >
        TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
      FROM raw.frankfurter_exchange_rates
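The freshness check above can be mirrored in plain Python; this sketch expresses the same logic as the SQL (newest `fetched_at` must fall within a 24-hour window).

```python
from datetime import datetime, timedelta, timezone

def is_fresh(timestamps, max_age_hours=24):
    """True if the newest fetched_at is within the allowed window."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return bool(timestamps) and max(timestamps) > cutoff

now = datetime.now(timezone.utc)
assert is_fresh([now - timedelta(hours=2)])        # recent sync: passes
assert not is_fresh([now - timedelta(hours=30)])   # stale data: fails
assert not is_fresh([])                            # no data at all: fails
```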

Step 4

Run it

One command. Bruin connects to Frankfurter, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse. If a check fails, the pipeline stops — bad data never reaches downstream.

  • Backfill historical data with --start-date
  • Schedule with cron or trigger from CI/CD
  • Full lineage from Frankfurter to your dashboards
$ bruin run .
Running pipeline...

  frankfurter_exchange_rates
    ✓ Fetched 2,847 new records
    ✓ Quality: id not_null              PASSED
    ✓ Quality: id unique                PASSED
    ✓ Quality: fetched_at not_null      PASSED
    ✓ Quality: API data is fresh        PASSED
    ✓ Loaded into bigquery

  Completed in 12s


Ready to connect Frankfurter?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.