# Target Migration Script
The target migration script merges data from the legacy `mx-dod-form` Convex deployment into the main `xentral` deployment. It handles 43+ target tables with ID preservation, conflict detection, batch processing, and automatic verification.
## Overview
```
┌─────────────────────┐           ┌─────────────────────┐
│    mx-dod-form      │           │      xentral        │
│  (old deployment)   │  ──────►  │  (new deployment)   │
│                     │  migrate  │                     │
│     43 tables       │           │     43 tables       │
└─────────────────────┘           └─────────────────────┘
```

## Quick Start
### Full Migration (Recommended)
```bash
# Preview what will happen
npx tsx scripts/target-migration/import-and-merge.ts --dry-run

# Execute full migration (import + merge)
npx tsx scripts/target-migration/import-and-merge.ts --execute
```

This script:
- Runs `convex import --replace-all -y` to import all data with preserved IDs
- Runs the merge script to apply any remaining updates
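The two-step flow can be sketched as a small planner. This is an illustrative sketch only: the flags match the script's documented options, but `parseFlags` and `plannedSteps` are hypothetical names, not the script's actual internals.

```typescript
// Hypothetical sketch of the import-and-merge orchestration.
type Flags = { skipImport: boolean; skipMerge: boolean };

function parseFlags(argv: string[]): Flags {
  return {
    skipImport: argv.includes("--skip-import"),
    skipMerge: argv.includes("--skip-merge"),
  };
}

// Returns the steps that would run, in order.
function plannedSteps(flags: Flags): string[] {
  const steps: string[] = [];
  if (!flags.skipImport) steps.push("convex import --replace-all -y");
  if (!flags.skipMerge) steps.push("run merge");
  return steps;
}
```

With no flags both steps run; `--skip-import` leaves only the merge, and `--skip-merge` leaves only the import.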
## Features
| Feature | Description |
|---|---|
| ID Preservation | Original `_id` values preserved via `convex import` |
| Conflict Detection | "Newer wins" timestamp comparison |
| Dry-Run Mode | Preview all changes before applying |
| Live Verification | Verify against the live database, not stale exports |
| Batch Processing | Configurable batch size (default: 200) |
| Large Table Handling | `auditTrail` skipped, `sessions` paginated |
| Markdown Reports | Audit trail for all operations |
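The batch processing above can be pictured with a simple chunking helper. This is a sketch, not the merge code itself; only the default batch size of 200 comes from the documentation.

```typescript
// Split records into batches of a configurable size (default 200).
// Each batch would then be written in one mutation call instead of
// one call per record, which keeps large tables within request limits.
function chunk<T>(items: T[], size = 200): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```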
## Setup
### 1. Environment Configuration
Create a `.env` file in `scripts/target-migration/`:
```bash
cd scripts/target-migration
cp .env.example .env
```

Required environment variables:
```bash
# Old deployment (source)
OLD_CONVEX_URL=https://your-old-deployment.convex.cloud
OLD_CONVEX_ADMIN_KEY=prod:your-admin-key

# New deployment (target)
NEW_CONVEX_URL=https://your-new-deployment.convex.cloud
NEW_CONVEX_ADMIN_KEY=prod:your-admin-key

# Optional: path to exported data
OLD_EXPORT_PATH=./scripts/target-migration/data/old-export
```

### 2. Export Deployment Data
Export data from both deployments for comparison:
```bash
# Export old deployment (download from Convex dashboard or run from old project)
# Place in: ./scripts/target-migration/data/old-export/

# Export new/local deployment
npx tsx scripts/target-migration/export-data.ts --new

# Check both exports exist
npx tsx scripts/target-migration/export-data.ts --all
```

## Scripts
### `import-and-merge.ts` (Recommended)
Full migration with ID preservation:
```bash
# Preview
npx tsx scripts/target-migration/import-and-merge.ts --dry-run

# Execute
npx tsx scripts/target-migration/import-and-merge.ts --execute

# Options
--skip-import   # Skip convex import (only run merge)
--skip-merge    # Skip merge (only run import)
```

### `run.ts`
Compare and merge data between deployments:
```bash
# Preview changes
npx tsx scripts/target-migration/run.ts --dry-run

# Apply changes + auto-verify
npx tsx scripts/target-migration/run.ts --execute

# Apply only UPDATEs (skip INSERTs)
npx tsx scripts/target-migration/run.ts --execute --skip-inserts

# Verify only (uses live database)
npx tsx scripts/target-migration/run.ts --verify --live

# Process specific tables
npx tsx scripts/target-migration/run.ts --dry-run --tables=dods,issues,solutions
```

### `export-data.ts`
Export deployment data for comparison:
```bash
# Export new (local) deployment
npx tsx scripts/target-migration/export-data.ts --new

# Check old export exists
npx tsx scripts/target-migration/export-data.ts --old

# Export new + check old
npx tsx scripts/target-migration/export-data.ts --all
```

## Migration Strategies
### Strategy 1: Full Import + Merge (Recommended)
Best when you want to fully sync from old to new:
```bash
npx tsx scripts/target-migration/import-and-merge.ts --execute
```

| Pros | Cons |
|---|---|
| ✅ Preserves original IDs | ❌ Replaces all data in target |
| ✅ All records imported | |
| ✅ Catches any updates since export | |
### Strategy 2: Updates Only
Best when new deployment has data you want to keep:
```bash
npx tsx scripts/target-migration/run.ts --execute --skip-inserts
```

| Pros | Cons |
|---|---|
| ✅ Preserves new deployment's unique records | ❌ Missing records won't be imported |
| ✅ Only updates existing records | |
### Strategy 3: Selective Tables
Migrate specific tables:
```bash
npx tsx scripts/target-migration/run.ts --execute --tables=dods,issues
```

| Pros | Cons |
|---|---|
| ✅ Fine-grained control | ❌ May miss related data |
| ✅ Faster for targeted updates | |
## Conflict Resolution
When the same record exists in both deployments with different values:
| Condition | Action | Reason |
|---|---|---|
| Record only in old | `INSERT` | Missing in new deployment |
| Records identical | `SKIP` | No changes needed |
| Old record newer | `UPDATE` | Old has more recent changes |
| New record newer | `SKIP` | Keep newer version |
Timestamp comparison uses the `updatedAt` field, falling back to `_creationTime`.
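The table above maps to a small decision function. The following is a sketch assuming a minimal record shape (only the two timestamp fields named in this section); the real `compare.ts` may be structured differently.

```typescript
// Minimal record shape for timestamp comparison.
type Doc = { _creationTime: number; updatedAt?: number };
type Action = "INSERT" | "UPDATE" | "SKIP";

// Effective timestamp: updatedAt if present, else _creationTime.
const ts = (d: Doc): number => d.updatedAt ?? d._creationTime;

function resolve(oldDoc: Doc, newDoc: Doc | undefined, identical: boolean): Action {
  if (newDoc === undefined) return "INSERT"; // record only exists in the old deployment
  if (identical) return "SKIP";              // no changes needed
  // "Newer wins": update only when the old record is more recent.
  return ts(oldDoc) > ts(newDoc) ? "UPDATE" : "SKIP";
}
```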
## Tables Migrated
The script processes 43 target-related tables:
### Core Tables
`assignees`, `coordinators`, `departments`, `sessions`, `designElements`, `milestones`, `issues`, `solutions`, `dods`, `dodNotes`, `dodNoteHistory`, `dodStatus`

### Planning Tables
`importanceOptions`, `tags`, `targetTypes`, `targetDefinitions`, `targetDescriptionItems`, `approvalSteps`

### Resource Management
`assigneeReservations`, `assigneeLeaveAndOT`, `assigneeAllocationOverride`, `assigneeMilestoneAggregates`, `assigneeReservationChangeRequests`

### Snapshots & History
`commitmentPlanSnapshots`, `milestoneSnapshots`, `auditTrail` (skipped - too large), `migrationStatus`

### Notifications
`dodNotifications`, `notificationHistory`

### Settings & Admin
`globalSettings`, `adminUsers`

### Scheduling
`milestoneDate`, `sprintDate`, `holidayDate`, `deadlineSchedulers`, `sprintUpdateRowColors`

### Custom Fields
`customFieldDefinitions`, `customFieldValues`, `customFieldLinks`

### XILO Integration
`issueXiloCell`, `issueAnnotationLinks`, `issueXiloCellHistory`

### Other
`overallFeedback`, `xyncAlerts`, `syncQueue`
## Verification
### Live Verification
After migration, verify against the live database:
```bash
npx tsx scripts/target-migration/run.ts --verify --live
```

Verification performs:
- **Record Count Check** - Old count ≤ new count for all tables
- **Missing Records Check** - All old records exist in new
- **Spot Check** - Random sample of 10 records per table verified field-by-field
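The three checks above can be sketched roughly as follows. These shapes are assumptions for illustration; the real `verify.ts` may organize them differently.

```typescript
// 1. Record count check: every old record should have made it across.
function countOk(oldCount: number, newCount: number): boolean {
  return oldCount <= newCount;
}

// 2. Missing records check: old IDs that do not appear in the new deployment.
function missingIds(oldIds: string[], newIds: Set<string>): string[] {
  return oldIds.filter((id) => !newIds.has(id));
}

// 3. Spot check: random sample without replacement (default 10 per table)
//    whose records would then be compared field-by-field.
function sample<T>(items: T[], n = 10): T[] {
  const copy = [...items];
  const out: T[] = [];
  while (out.length < n && copy.length > 0) {
    out.push(copy.splice(Math.floor(Math.random() * copy.length), 1)[0]);
  }
  return out;
}
```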
### Auto-Verification
The `--execute` flag automatically runs verification after the migration completes.
## Reports
Reports are saved to `scripts/target-migration/reports/`:
| File | Description |
|---|---|
| `migration-dry-run-{timestamp}.md` | Preview of all changes |
| `migration-execute-{timestamp}.md` | Execution results + verification |
## Performance
| Method | Speed | Notes |
|---|---|---|
| Export comparison | ~2 seconds | Fastest - compares local files |
| Live API queries | ~30+ seconds | Slower, especially for large tables |
The script automatically uses exports if available.
## File Structure
```
scripts/target-migration/
├── .env.example          # Environment template
├── .env                  # Local credentials (gitignored)
├── README.md             # Quick reference
├── import-and-merge.ts   # Full migration with ID preservation
├── run.ts                # Compare and merge entry point
├── export-data.ts        # Export deployment data
├── config.ts             # Configuration & table list
├── types.ts              # TypeScript interfaces
├── clients.ts            # Convex client setup
├── loader.ts             # Data loading (export or API)
├── compare.ts            # Record comparison logic
├── merge.ts              # INSERT/UPDATE execution
├── verify.ts             # Post-migration verification
├── report.ts             # Markdown report generation
├── data/                 # Exported data (gitignored)
│   ├── old-export/       # Old deployment export
│   └── new-export/       # New deployment export
└── reports/              # Generated reports

packages/convex/
├── migrationHelper.ts    # Convex queries for migration
└── migrationMutations.ts # Convex mutations for migration
```

## Troubleshooting
### "Table X not found in export"
Ensure exports are extracted correctly:
```bash
ls -la ./data/old-export/
ls -la ./data/new-export/
```

Each table should have a `{table}/documents.jsonl` file.
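A quick way to sanity-check an export file is to parse it as JSONL (one JSON document per line). This is a hedged sketch of such a reader, not the loader's actual code; only the `{table}/documents.jsonl` layout comes from the documentation.

```typescript
import { readFileSync } from "node:fs";
import { join } from "node:path";

// Parse JSONL text: one JSON document per non-empty line.
function parseJsonl(text: string): Record<string, unknown>[] {
  return text
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}

// Load one table's documents from an export directory ({table}/documents.jsonl).
function readTableExport(exportDir: string, table: string): Record<string, unknown>[] {
  return parseJsonl(readFileSync(join(exportDir, table, "documents.jsonl"), "utf8"));
}
```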
### Verification Failed
Verification compares against export files by default. For live verification:
```bash
npx tsx scripts/target-migration/run.ts --verify --live
```

Or re-export after migration:
```bash
npx tsx scripts/target-migration/export-data.ts --new
npx tsx scripts/target-migration/run.ts --verify
```

### INSERT fails with "Cannot specify `_id`"
Convex doesn't allow inserting documents with specific IDs through mutations. Use the `import-and-merge` script instead:
```bash
npx tsx scripts/target-migration/import-and-merge.ts --execute
```

This uses `convex import`, which preserves IDs.
### Timeout errors
The script includes retry logic for timeout errors. For persistent issues:
- Use exported data instead of the live API
- Process tables in smaller batches with `--tables`
### Large tables (`auditTrail`, `sessions`)
- `auditTrail` is skipped automatically (too large)
- `sessions` is handled with pagination