Database Migration Planner
Creates safe, zero-downtime database migration plans for schema changes of any complexity. Covers column additions, type modifications, constraint changes, table splits and merges, index creation, data backfills, foreign key management, and enum type changes, each with multi-phase deployment strategies, rollback procedures, and data integrity verification queries.
Usage
Describe the current schema and the desired target schema. Specify your database engine (PostgreSQL, MySQL, SQL Server), ORM/migration tool (Drizzle, Prisma, Knex, Django, Alembic), table sizes, and uptime requirements. The skill generates a step-by-step migration plan with SQL statements, deployment phases, and verification queries.
Examples
- "Rename a column from 'username' to 'display_name' on a 10M row users table with zero downtime"
- "Split a monolithic 'users' table into 'users' and 'user_profiles' without breaking existing queries"
- "Change a column type from VARCHAR(50) to TEXT on a 50M row table in PostgreSQL without locking"
- "Add a NOT NULL constraint to an existing column that currently has NULL values requiring backfill"
Guidelines
- Use the expand-contract (parallel change) pattern: add new → migrate data → update code → remove old (see the rename sketch after this list)
- Create indexes CONCURRENTLY in PostgreSQL to avoid blocking writes during index creation (sketched below)
- Add new columns as nullable first, backfill the data, then add the NOT NULL constraint and a default for new rows (phased sketch below)
- Run large data backfills in batches (1,000-10,000 rows) with short sleeps between batches to avoid overwhelming the database (loop sketch below)
- Always test migrations against a production-sized dataset to catch timeouts and lock contention issues
- Write both up and down migrations; verify the down migration works before running up in production
- Use advisory locks or your migration tool's built-in locking to prevent concurrent migration runs (sketched below)
- Verify data integrity after each migration step with count checks, null checks, and constraint validation queries (examples below)
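A minimal sketch of the expand-contract pattern applied to the column-rename example above, assuming PostgreSQL and a `users` table; the trigger and function names are illustrative, and each phase ships as a separate deploy.
```sql
-- Phase 1 (expand): add the new column next to the old one.
ALTER TABLE users ADD COLUMN display_name TEXT;

-- Keep both columns in sync while old application code still writes username.
CREATE OR REPLACE FUNCTION sync_display_name() RETURNS trigger AS $$
BEGIN
  NEW.display_name := COALESCE(NEW.display_name, NEW.username);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_sync_display_name
  BEFORE INSERT OR UPDATE ON users
  FOR EACH ROW EXECUTE FUNCTION sync_display_name();

-- Phase 2 (migrate): backfill existing rows, then deploy code that reads and writes display_name.
UPDATE users SET display_name = username WHERE display_name IS NULL;  -- batch this in practice (see below)

-- Phase 3 (contract): once nothing references username, remove the old pieces.
DROP TRIGGER users_sync_display_name ON users;
DROP FUNCTION sync_display_name();
ALTER TABLE users DROP COLUMN username;
```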
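A sketch of non-blocking index creation, using a hypothetical `orders (customer_id)` index. CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so migration tools that wrap each migration in a transaction need that behavior disabled for this step.
```sql
-- Builds the index without blocking writes; must run outside a transaction block.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id
  ON orders (customer_id);

-- A failed concurrent build leaves an INVALID index behind; check for leftovers and rebuild if needed.
SELECT indexrelid::regclass AS invalid_index
FROM pg_index
WHERE NOT indisvalid;
```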
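One way to phase in a NOT NULL constraint on PostgreSQL, using a hypothetical `accounts.status` column; the NOT VALID check constraint avoids holding an exclusive lock during the validating scan, and on PostgreSQL 12+ SET NOT NULL can reuse the validated constraint instead of rescanning the table.
```sql
-- Step 1: add the column as nullable (a fast, metadata-only change).
ALTER TABLE accounts ADD COLUMN status TEXT;

-- Step 2: backfill existing rows (batched in practice, as in the loop below).
UPDATE accounts SET status = 'active' WHERE status IS NULL;

-- Step 3: add the constraint as NOT VALID, then validate it while holding only a weak lock.
ALTER TABLE accounts ADD CONSTRAINT accounts_status_not_null
  CHECK (status IS NOT NULL) NOT VALID;
ALTER TABLE accounts VALIDATE CONSTRAINT accounts_status_not_null;

-- Step 4: promote to a real NOT NULL, set a default for new rows, and drop the helper constraint.
ALTER TABLE accounts ALTER COLUMN status SET NOT NULL;
ALTER TABLE accounts ALTER COLUMN status SET DEFAULT 'active';
ALTER TABLE accounts DROP CONSTRAINT accounts_status_not_null;
```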
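A sketch of a batched backfill loop, continuing the rename example; the batch size and sleep are placeholders to tune against real load. COMMIT inside a DO block requires PostgreSQL 11+ and running outside an explicit transaction; many teams run the equivalent loop from an application-level script instead.
```sql
DO $$
DECLARE
  rows_updated integer;
BEGIN
  LOOP
    -- Update one batch of rows that still need the backfill.
    UPDATE users
    SET display_name = username
    WHERE id IN (
      SELECT id FROM users
      WHERE display_name IS NULL
      ORDER BY id
      LIMIT 5000
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    EXIT WHEN rows_updated = 0;

    COMMIT;                 -- release locks between batches
    PERFORM pg_sleep(0.5);  -- short pause so replication and foreground traffic keep up
  END LOOP;
END $$;
```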
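A sketch of session-level advisory locking to serialize migration runners; the lock key (42) is an arbitrary value agreed on by all runners.
```sql
-- Returns false instead of blocking if another session already holds the lock.
SELECT pg_try_advisory_lock(42) AS acquired;

-- ...run the migration only if acquired = true...

SELECT pg_advisory_unlock(42);
```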
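Illustrative verification queries for the rename and the table-split examples above; adapt the predicates to the actual migration.
```sql
-- Null and mismatch checks after the display_name backfill.
SELECT
  count(*) FILTER (WHERE display_name IS NULL)                    AS missing_display_name,
  count(*) FILTER (WHERE display_name IS DISTINCT FROM username)  AS mismatched_rows
FROM users;

-- After splitting users/user_profiles, confirm no profile row lost its parent.
SELECT count(*) AS orphaned_profiles
FROM user_profiles p
LEFT JOIN users u ON u.id = p.user_id
WHERE u.id IS NULL;
```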