5-minute tutorial

Migrate X Ads to DuckDB in 60 Seconds

Learn how to copy your X Ads data to DuckDB with a single command using ingestr - no code required.

One command · Zero code · Production ready

What you'll learn

How to install and set up ingestr in seconds
Connect to X Ads and DuckDB with proper authentication
Copy entire tables or specific data with a single command
Set up incremental loading for continuous data synchronization

Prerequisites

  • Python 3.8 or higher installed
  • X Ads account with API access
  • Approved developer application
  • OAuth 1.0a credentials
  • DuckDB installed locally or database file accessible
  • Write permissions for database file location
  • Sufficient memory for in-memory operations
  • A DuckDB file format version compatible with your installed DuckDB

Step 1: Install ingestr

Install ingestr in seconds using pip. Choose the method that works best for you:

Recommended: Using uv (fastest)

# Install uv first if you haven't already
pip install uv

# Run ingestr using uvx
uvx ingestr

Alternative: Global installation

# Install globally using uv
uv pip install --system ingestr

# Or using standard pip
pip install ingestr

Verify installation: Run ingestr --version to confirm it's installed correctly.

Step 2: Your First Migration

Let's copy a table from X Ads to DuckDB. This example shows a complete, working command you can adapt to your needs.

Set up your connections

X Ads connection format:

twitter-ads://api_key:api_secret@account_id?access_token=token&access_secret=secret

Parameters:

  • api_key: X API key
  • api_secret: X API secret
  • account_id: Ads account identifier
  • access_token: OAuth access token
  • access_secret: OAuth access token secret
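Because the source URI embeds several credentials, it can be convenient to assemble it from variables. A minimal shell sketch with placeholder values (substitute your real credentials):

```shell
# Placeholder credentials -- replace with your own
API_KEY='key'
API_SECRET='secret'
ACCOUNT_ID='acc_123'
ACCESS_TOKEN='tok'
ACCESS_SECRET='sec'

# Build the URI; single-quote it when passing to ingestr, since ? and &
# are special characters in the shell
SOURCE_URI="twitter-ads://${API_KEY}:${API_SECRET}@${ACCOUNT_ID}?access_token=${ACCESS_TOKEN}&access_secret=${ACCESS_SECRET}"
echo "$SOURCE_URI"
```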

DuckDB connection format:

duckdb:///path/to/database.duckdb

Parameters:

  • path: Path to database file (use :memory: for in-memory)
  • read_only: Optional flag for read-only access
  • threads: Number of threads to use
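The path component determines where your data lives. Two common forms, assuming the `:memory:` spelling from the parameter list above:

```shell
# Persistent file-backed database (created on first write)
FILE_URI='duckdb:///home/user/analytics.duckdb'

# In-memory database: useful for quick tests, discarded when the process exits
MEM_URI='duckdb:///:memory:'

echo "$FILE_URI"
echo "$MEM_URI"
```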

Run your first copy

Copy the entire campaigns table from X Ads to DuckDB:

ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'duckdb:///home/user/analytics.duckdb' \
    --dest-table 'raw.campaigns'

What this does:

  • Connects to the X Ads API with your credentials
  • Reads all data from the specified source table
  • Creates the destination table in DuckDB if needed
  • Copies all rows to the destination

Command breakdown:

  • --source-uri Connection string for the source (X Ads)
  • --source-table Table to copy from
  • --dest-uri Connection string for the destination (DuckDB)
  • --dest-table Where to write the data

Step 3: Verify Your Data

After the migration completes, verify your data was copied correctly:

Check row count in DuckDB:

-- Run this in DuckDB
SELECT COUNT(*) as row_count 
FROM raw.campaigns;

-- Check a sample of the data
SELECT * 
FROM raw.campaigns 
LIMIT 10;

Advanced Patterns

Once you've mastered the basics, use these patterns for production workloads.

Incremental Loading

Only copy new or updated records since the last sync. Perfect for scheduled daily updates.

ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'duckdb:///home/user/analytics.duckdb' \
    --dest-table 'raw.campaigns' \
    --incremental-strategy merge \
    --incremental-key updated_at \
    --primary-key id

How it works: The merge strategy updates existing rows and inserts new ones, matching on the primary key. Only rows whose incremental key (here, updated_at) has advanced since the last run are processed; substitute your table's actual timestamp and key columns if they differ.
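Conceptually, a merge is an upsert. The DuckDB sketch below shows the equivalent behavior with hypothetical table and column names (dest, staging, pk); it is illustrative only, not ingestr's actual implementation:

```sql
-- dest has a PRIMARY KEY on pk; staging holds the newly fetched rows
INSERT INTO dest
SELECT * FROM staging
ON CONFLICT (pk) DO UPDATE
SET updated_at = excluded.updated_at;  -- update the remaining columns similarly
```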

Common Use Cases

Ready-to-use commands for typical X Ads to DuckDB scenarios.

Daily Campaign Data Sync

Keep your analytics warehouse updated with the latest campaign data every night.

# Add this to your cron job or scheduler
ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'duckdb:///home/user/analytics.duckdb' \
    --dest-table 'analytics.campaigns' \
    --incremental-strategy merge \
    --incremental-key updated_at \
    --primary-key id
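To schedule the command above, a minimal crontab sketch (hypothetical paths; sync_campaigns.sh is assumed to be a wrapper script containing the ingestr command):

```shell
# /etc/cron.d/ingestr-sync -- run the nightly sync at 02:00 as user `analyst`
# and append output to a log for debugging
0 2 * * * analyst /usr/local/bin/sync_campaigns.sh >> /var/log/ingestr-sync.log 2>&1
```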

Historical Data Migration

One-time migration of all historical campaign records to your data warehouse.

# One-time full table copy
ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'duckdb:///home/user/analytics.duckdb' \
    --dest-table 'warehouse.campaigns_historical'

Development Environment Sync

Copy a sample of production data to your development DuckDB instance. The command copies columns as-is, so exclude or mask sensitive fields separately if needed.

# Copy sample data to development
ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'duckdb:///home/user/analytics.duckdb' \
    --dest-table 'dev.campaigns' \
    --limit 1000  # Only copy 1000 rows for testing

Troubleshooting Guide

Solutions to common issues when migrating from X Ads to DuckDB.

Connection refused or timeout errors

Check your connection details:

  • Verify the X Ads API is reachable from your network
  • Ensure the DuckDB database file path is accessible
  • Check file permissions for read/write access
  • Consider memory limits for large operations

Authentication failures

Common authentication issues:

  • Verify your API key and API secret are entered correctly
  • Confirm your OAuth access token and access secret are still valid
  • Check that your developer application has been approved for Ads API access
  • Make sure the account_id matches the authenticated account

Schema or data type mismatches

Handling data type differences:

  • ingestr automatically handles most type conversions
  • DuckDB: LIST and STRUCT types for complex data
  • DuckDB: Native support for nested data structures
  • DuckDB: Automatic type inference from files
  • DuckDB: Efficient NULL handling

Performance issues with large tables

Optimize large data transfers:

  • Use incremental loading to process data in chunks
  • Run migrations during off-peak hours
  • Split very large tables by date ranges using interval parameters
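The date-range bullet can be scripted as a backfill loop. The sketch below is a dry run that only prints one command per monthly chunk; it assumes ingestr accepts `--interval-start` and `--interval-end` flags (confirm with `ingestr ingest --help`):

```shell
# Print (dry-run) one ingest command per date chunk; remove the `echo`
# inside run_chunk to actually execute each command
run_chunk() {
    echo ingestr ingest \
        --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
        --source-table 'campaigns' \
        --dest-uri 'duckdb:///home/user/analytics.duckdb' \
        --dest-table 'raw.campaigns' \
        --interval-start "$1" --interval-end "$2"
}

# One chunk per month; extend the list to cover the full history
run_chunk 2024-01-01 2024-02-01
run_chunk 2024-02-01 2024-03-01
run_chunk 2024-03-01 2024-04-01
```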

Ready to scale your data pipeline?

You've learned how to migrate data from X Ads to DuckDB with ingestr. For production workloads with monitoring, scheduling, and data quality checks, explore Bruin Cloud.

Star ingestr on GitHub