
Lightspeed + Bruin

Source

Ingest Lightspeed data into your warehouse with incremental loading, quality checks, and full lineage. Defined in YAML, version-controlled in Git.

For business teams

What you get

  • Revenue analytics, automated

    Lightspeed orders, refunds, and transactions flow into your warehouse. Build cohort analysis, LTV, and revenue models with clean data.

  • True ROAS across channels

    Join Lightspeed revenue with ad spend from Google, Facebook, and others. Know your real return, not what each ad platform claims. (See the SQL sketch after this list.)

  • Inventory monitoring

    Quality checks flag low stock levels and stockout risks from Lightspeed data. Operations gets alerts before customers notice.

  • Customer 360 view

    Combine Lightspeed purchase history with support tickets, NPS, and product usage. See the full customer picture.
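
To make the ROAS join concrete, here's a minimal SQL sketch. The table and column names (raw.lightspeed_sales, raw.google_ads_spend, order_date, spend_date, total_price, spend) are assumptions for illustration; map them to your own schema.

-- Aggregate revenue and spend separately before joining,
-- so daily rows don't fan out and inflate the sums.
WITH revenue AS (
  SELECT order_date AS day, SUM(total_price) AS revenue
  FROM raw.lightspeed_sales
  GROUP BY order_date
), spend AS (
  SELECT spend_date AS day, SUM(spend) AS ad_spend
  FROM raw.google_ads_spend
  GROUP BY spend_date
)
SELECT r.day,
       r.revenue,
       s.ad_spend,
       r.revenue / NULLIF(s.ad_spend, 0) AS roas  -- blended ROAS per day
FROM revenue r
JOIN spend s ON s.day = r.day
ORDER BY r.day;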

For data & engineering teams

How it works

  • Incremental order sync

    Only sync new and updated Lightspeed orders. No full reloads, even for high-volume stores. (See the YAML sketch after this list.)

  • YAML-defined, Git-versioned

    Your Lightspeed pipeline is a YAML file. Review in PRs, deploy with CI/CD, roll back with git revert.

  • Order data validation

    Quality checks catch negative totals, invalid statuses, and missing order IDs on every sync.

  • Multi-destination support

    Land Lightspeed data in BigQuery, Snowflake, Redshift, or DuckDB. Switch destinations by changing one line.
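
To ground the bullets above, here's a hedged sketch of what incremental, multi-destination ingestion can look like in the asset YAML. The incremental_strategy and incremental_key parameters are assumptions for illustration; check the Bruin ingestr docs for the exact options your version supports.

name: raw.lightspeed_sales
type: ingestr

parameters:
  source_connection: lightspeed
  source_table: 'sales'
  destination: bigquery         # the one line to change for snowflake, redshift, or duckdb
  incremental_strategy: merge   # assumed: upsert new and updated orders instead of full reloads
  incremental_key: updated_at   # assumed column marking when an order last changed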

Before you start

Lightspeed API credentials via OAuth2 app registration
Lightspeed Retail or Restaurant subscription

Step 1

Add your Lightspeed connection

Connect using Lightspeed OAuth2 credentials. Add this to your Bruin environment file — credentials are stored securely and referenced by name in your pipeline YAML.

Parameters

  • client_id: OAuth2 client ID from your Lightspeed app
  • client_secret: OAuth2 client secret from your Lightspeed app
  • refresh_token: OAuth2 refresh token obtained during authorization
connections:
  lightspeed:
    type: lightspeed
    uri: "lightspeed://?client_id=<your-client-id>&client_secret=<your-client-secret>&refresh_token=<your-refresh-token>"

Step 2

Create your pipeline

Define a YAML asset that tells Bruin what to pull from Lightspeed and where to land it. This file lives in your Git repo — reviewable, version-controlled, and deployable with CI/CD.

Available tables

sales, products, customers, inventory, categories
name: raw.lightspeed_sales
type: ingestr

parameters:
  source_connection: lightspeed
  source_table: 'sales'
  destination: bigquery
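
Each table from the list above is its own asset: copy the file and change the name and source_table to ingest another one. A sketch for the products table (the asset name and naming convention are illustrative):

name: raw.lightspeed_products
type: ingestr

parameters:
  source_connection: lightspeed
  source_table: 'products'
  destination: bigquery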

Step 3

Add quality checks

Add column-level and custom SQL checks to your Lightspeed data. If a check fails, the pipeline stops — bad data never reaches downstream models or dashboards.

Catch negative order totals before they reach reports
Validate order statuses against accepted values
Ensure order IDs are unique — no duplicates
columns:
  - name: order_id
    checks:
      - name: not_null
      - name: unique
  - name: total_price
    checks:
      - name: not_null
  - name: status
    checks:
      - name: accepted_values
        value: ['pending', 'paid', 'shipped', 'delivered', 'cancelled']

custom_checks:
  - name: no negative order totals
    query: |
      SELECT COUNT(*) = 0
      FROM raw.lightspeed_sales
      WHERE total_price < 0

Step 4

Run it

One command. Bruin connects to Lightspeed, pulls data incrementally, runs your quality checks, and lands clean data in your warehouse.

Backfill historical data with --start-date
Schedule with cron or trigger from CI/CD
Full lineage from Lightspeed to your dashboards
$ bruin run .
Running pipeline...

  raw.lightspeed_sales
    ✓ Fetched 2,847 new records
    ✓ Quality: order_id not_null          PASSED
    ✓ Quality: order_id unique            PASSED
    ✓ Quality: total_price not_null       PASSED
    ✓ Quality: status accepted_values     PASSED
    ✓ Quality: no negative order totals   PASSED
    ✓ Loaded into bigquery

  Completed in 12s
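
For backfills and scheduling, the --start-date flag mentioned above replays history, and any scheduler that can run a shell command works. A hedged sketch (the date format and cron wiring are assumptions; adapt them to your setup):

# Backfill Lightspeed history from a fixed date
$ bruin run . --start-date 2024-01-01

# Nightly sync at 02:00 via cron; a CI/CD trigger works the same way
0 2 * * * cd /path/to/pipeline && bruin run .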

Ready to connect Lightspeed?

Start for free, or book a demo to see how Bruin handles ingestion, quality, lineage, and scheduling for your entire data stack.