5-minute tutorial

Migrate X Ads to PostgreSQL in 60 Seconds

Learn how to copy your X Ads data to PostgreSQL with a single command using ingestr - no code required.

One command · Zero code · Production ready

What you'll learn

How to install and set up ingestr in seconds
Connect to X Ads and PostgreSQL with proper authentication
Copy entire tables or specific data with a single command
Set up incremental loading for continuous data synchronization

Prerequisites

  • Python 3.8 or higher installed
  • X Ads account with API access
  • Approved developer application
  • OAuth 1.0a credentials
  • PostgreSQL server accessible from your network
  • Database user with appropriate permissions
  • pg_hba.conf configured to allow connections
  • Firewall rules allowing port 5432 (or custom port)

Step 1: Install ingestr

Install ingestr in seconds using uv or pip. Choose the method that works best for you:

Recommended: Using uv (fastest)

# Install uv first if you haven't already
pip install uv

# Run ingestr using uvx
uvx ingestr

Alternative: Global installation

# Install globally using uv
uv pip install --system ingestr

# Or using standard pip
pip install ingestr

Verify installation: Run ingestr --version to confirm it's installed correctly.

Step 2: Your First Migration

Let's copy a table from X Ads to PostgreSQL. This example shows a complete, working command you can adapt to your needs.

Set up your connections

X Ads connection format:

twitter-ads://api_key:api_secret@account_id?access_token=token&access_secret=secret

Parameters:

  • api_key: X API key
  • api_secret: X API secret
  • account_id: Ads account identifier
  • access_token: OAuth 1.0a access token
  • access_secret: OAuth 1.0a access token secret

PostgreSQL connection format:

postgresql://username:password@host:port/database?sslmode=disable

Parameters:

  • username: Database user
  • password: User password
  • host: Database server hostname or IP
  • port: Server port (default 5432)
  • database: Database name
  • sslmode: SSL mode (disable, require, verify-ca, verify-full)
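
The same encoding rule applies on the PostgreSQL side: a password containing `@` or `:` will corrupt the URI unless it is percent-encoded. A short sketch with placeholder values:

```python
from urllib.parse import quote, urlsplit

# Placeholder connection details -- substitute your own.
user = "myuser"
password = "p@ss:word"  # '@' and ':' must be percent-encoded inside a URI
host = "localhost"
port = 5432
database = "mydb"

# Build the PostgreSQL destination URI with user and password safely encoded.
dest_uri = (
    f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
    f"@{host}:{port}/{database}?sslmode=require"
)
print(dest_uri)
```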

Run your first copy

Copy the entire campaigns table from X Ads to PostgreSQL:

ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'postgresql://myuser:mypass@localhost:5432/mydb?sslmode=require' \
    --dest-table 'raw.campaigns'

What this does:

  • Connects to the X Ads API with your OAuth credentials
  • Reads all rows from the specified table
  • Creates the destination table in PostgreSQL if needed
  • Copies all rows to the destination

Command breakdown:

  • --source-uri: connection string for the source (X Ads)
  • --source-table: table to copy from
  • --dest-uri: connection string for the destination (PostgreSQL)
  • --dest-table: schema-qualified table to write to
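
To keep secrets out of shell history and scripts, read the connection strings from environment variables and assemble the command programmatically. A sketch (the variable names `XADS_SOURCE_URI` and `PG_DEST_URI` are made up for this example):

```python
import os

# Hypothetical environment variables -- set these in your shell or CI secret store.
# setdefault supplies placeholders only so the sketch runs standalone.
os.environ.setdefault(
    "XADS_SOURCE_URI",
    "twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec",
)
os.environ.setdefault(
    "PG_DEST_URI",
    "postgresql://myuser:mypass@localhost:5432/mydb?sslmode=require",
)

# Assemble the ingestr invocation as an argument list (no shell quoting pitfalls).
cmd = [
    "ingestr", "ingest",
    "--source-uri", os.environ["XADS_SOURCE_URI"],
    "--source-table", "campaigns",
    "--dest-uri", os.environ["PG_DEST_URI"],
    "--dest-table", "raw.campaigns",
]
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to execute
print(" ".join(cmd[:2]))
```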

Step 3: Verify your data

After the migration completes, verify your data was copied correctly:

Check row count in PostgreSQL:

-- Run this in PostgreSQL
SELECT COUNT(*) as row_count 
FROM raw.campaigns;

-- Check a sample of the data
SELECT * 
FROM raw.campaigns 
LIMIT 10;

Advanced Patterns

Once you've mastered the basics, use these patterns for production workloads.

Incremental Loading

Only copy new or updated records since the last sync. Perfect for daily updates.

ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'postgresql://myuser:mypass@localhost:5432/mydb?sslmode=require' \
    --dest-table 'raw.campaigns' \
    --incremental-strategy merge \
    --incremental-key updated_at \
    --primary-key id

How it works: The merge strategy updates existing rows and inserts new ones based on the primary key. Only rows where updated_at has changed will be processed.
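
The merge behavior is essentially an upsert keyed on the primary key. This toy Python sketch (not ingestr internals; all rows are made up) illustrates the semantics:

```python
# Toy model of the merge strategy: an upsert keyed on the primary key.
existing = {
    "c1": {"id": "c1", "status": "PAUSED", "updated_at": "2024-01-01"},
}

incoming = [
    {"id": "c1", "status": "ACTIVE", "updated_at": "2024-01-02"},  # changed row: updated
    {"id": "c2", "status": "ACTIVE", "updated_at": "2024-01-02"},  # new row: inserted
]

for row in incoming:
    existing[row["id"]] = row  # replace on key match, insert otherwise

print(sorted(existing))
```

Rows absent from the incoming batch are left untouched, which is why merge is safe to re-run daily.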

Common Use Cases

Ready-to-use commands for typical X Ads to PostgreSQL scenarios.

Daily Campaign Sync

Keep your analytics warehouse updated with the latest campaign data every night.

# Add this to your cron job or scheduler
ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'postgresql://myuser:mypass@localhost:5432/mydb?sslmode=require' \
    --dest-table 'analytics.campaigns' \
    --incremental-strategy merge \
    --incremental-key updated_at \
    --primary-key id
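
To schedule the nightly sync, a crontab entry like the following is typical. The script path and log location are hypothetical; the script would wrap the ingestr command above and source its credentials from a root-only environment file:

```shell
# Hypothetical crontab entry (add via `crontab -e`): run the nightly sync at 02:00.
# /usr/local/bin/xads_sync.sh wraps the ingestr command above; output is appended
# to a log file so failures can be inspected later.
0 2 * * * /usr/local/bin/xads_sync.sh >> /var/log/ingestr/xads_sync.log 2>&1
```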

Historical Data Migration

One-time migration of all historical records to your data warehouse.

# One-time full table copy
ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'postgresql://myuser:mypass@localhost:5432/mydb?sslmode=require' \
    --dest-table 'warehouse.campaigns_historical'

Development Environment Sync

Copy a sample of production data to your development PostgreSQL instance.

# Copy sample data to development
ingestr ingest \
    --source-uri 'twitter-ads://key:secret@acc_123?access_token=tok&access_secret=sec' \
    --source-table 'campaigns' \
    --dest-uri 'postgresql://myuser:mypass@localhost:5432/mydb?sslmode=require' \
    --dest-table 'dev.campaigns' \
    --limit 1000  # Only copy 1000 rows for testing

Troubleshooting Guide

Solutions to common issues when migrating from X Ads to PostgreSQL.

Connection refused or timeout errors

Check your connection details:

  • Check pg_hba.conf for authentication settings
  • Verify listen_addresses in postgresql.conf
  • Ensure firewall allows connections on PostgreSQL port
  • Test with psql client first to isolate issues

Authentication failures

Common authentication issues:

  • Double-check the username and password in the connection string; percent-encode any special characters
  • For X Ads, confirm your OAuth 1.0a tokens are valid and the app has Ads API access
  • Check the authentication method configured in pg_hba.conf (e.g. md5 vs. scram-sha-256)
  • Test the same credentials with psql to isolate the issue

Schema or data type mismatches

Handling data type differences:

  • ingestr handles most type conversions automatically
  • JSONB columns may need special handling
  • Arrays are a PostgreSQL-specific feature
  • UUID values require a proper type mapping
  • Custom types may need explicit conversion

Performance issues with large tables

Optimize large data transfers:

  • Use incremental loading to process data in chunks
  • Run migrations during off-peak hours
  • Split very large tables by date ranges using interval parameters
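
One way to split a backfill by date range is to generate one command per month using ingestr's interval flags. The sketch below prints the commands rather than running them; the URIs are placeholders, and you should verify the `--interval-start`/`--interval-end` flags against your ingestr version:

```python
from datetime import date

def month_windows(start: date, end: date):
    """Yield (window_start, window_end) pairs covering [start, end) month by month."""
    current = start
    while current < end:
        if current.month == 12:
            nxt = date(current.year + 1, 1, 1)
        else:
            nxt = date(current.year, current.month + 1, 1)
        yield current, min(nxt, end)
        current = nxt

# Print one backfill command per month (URIs elided as placeholders).
for lo, hi in month_windows(date(2024, 1, 1), date(2024, 4, 1)):
    print(
        "ingestr ingest --source-uri '...' --source-table 'campaigns' "
        "--dest-uri '...' --dest-table 'raw.campaigns' "
        f"--interval-start {lo} --interval-end {hi}"
    )
```

Each window can then be run (and retried) independently, which keeps any single failure small.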

Ready to scale your data pipeline?

You've learned how to migrate data from X Ads to PostgreSQL with ingestr. For production workloads with monitoring, scheduling, and data quality checks, explore Bruin Cloud.

Star ingestr on GitHub