Step-by-Step Guide to Planning a Migration from On-Premises to Microsoft Fabric
A phased migration plan for moving from on-premises reporting and data platforms to Microsoft Fabric.
A practical field guide from the first discovery conversation to go-live
Data migrations usually fail because of poor planning, not because the target technology is unusable.
Teams often underestimate the complexity of the existing environment, skip discovery, and try to move everything at once. Months later, they are over budget, behind schedule, and still running the old system in parallel because nobody is confident enough to switch it off.
Migrating to Microsoft Fabric does not need to follow that pattern. A good migration plan starts with the business context, makes dependencies visible, tests the process on a small workload, and validates carefully before cutover.
Before Anything Else: Understand What You Are Migrating From
The biggest mistake is treating the migration as a technology project from the first day. It is a business change project that involves technology.
Before you write migration code, you need to understand your current state in detail. This discovery phase is not optional.
```mermaid
flowchart LR
    subgraph Input["Input"]
        SOURCE["Source data or request"]
    end
    subgraph Process["Process"]
        MODEL["Model and transform"]
        REVIEW["Review"]
    end
    subgraph Output["Output"]
        DECISION["Decision or published asset"]
    end
    SOURCE --> MODEL --> REVIEW --> DECISION
    CONTROL["Governance and quality control"] -.-> Process
    CONTROL -.-> Output
```

Phase 0: Discovery and Assessment
```mermaid
flowchart LR
    subgraph Assess["Assessment"]
        INVENTORY["Inventory reports and sources"]
        DEPEND["Map dependencies"]
        RISK["Risk and priority scoring"]
    end
    subgraph Plan["Planning"]
        DESIGN["Target Fabric design"]
        WAVES["Migration waves"]
        SUCCESS["Success criteria"]
    end
    subgraph Execute["Execution"]
        SETUP["Environment setup"]
        PILOT["Pilot migration"]
        FULL["Wave migration"]
        UAT["Validation and UAT"]
        CUTOVER["Cutover"]
    end
    INVENTORY --> DEPEND --> RISK --> DESIGN --> WAVES --> SUCCESS
    SUCCESS --> SETUP --> PILOT --> FULL --> UAT --> CUTOVER
    GOV["Governance, security, rollback plan"] -.-> Plan
    GOV -.-> Execute
```

Timeline: 2 to 4 weeks
The goal of this phase is to answer one question honestly: what exactly are we migrating?
Inventory your current data assets. Create a catalogue of every database, data warehouse, reporting system, ETL pipeline, and scheduled job that touches data. Include the owner, purpose, upstream dependencies, and downstream consumers. This is detailed work, but it prevents surprises later.
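To make the catalogue concrete, here is a minimal sketch of what one inventory entry might capture. The record shape and every field name are illustrative assumptions, not a standard schema; most teams keep this in a spreadsheet, but a structured record makes the required fields explicit.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryItem:
    # One row in the migration inventory; all field names are illustrative.
    name: str                   # e.g. a database, report, or pipeline
    asset_type: str             # "database", "report", "etl_pipeline", ...
    owner: str                  # accountable person or team
    purpose: str                # one-line business purpose
    upstream: list = field(default_factory=list)    # sources it reads from
    downstream: list = field(default_factory=list)  # consumers it feeds

# Hypothetical example entry for a warehouse that feeds finance reporting.
item = InventoryItem(
    name="SalesDW",
    asset_type="data_warehouse",
    owner="Finance BI team",
    purpose="Core financial reporting",
    upstream=["ERP", "CRM"],
    downstream=["Revenue dashboard", "Partner feed"],
)
```

The upstream and downstream lists are the fields that pay off later, when you need to know what breaks if a source moves.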
Classify your workloads. Not everything needs to move to Fabric. Some workloads may be better decommissioned. Others may be better served by a different cloud service. Group workloads by:
- Complexity (simple table sync versus complex multi-stage transformation pipeline)
- Business criticality (core financial reporting versus rarely used operational report)
- Data volume and frequency (10 MB daily batch versus 5 TB incremental warehouse)
- Regulatory sensitivity (PII, GDPR, SOX, or other compliance requirements)
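The four dimensions above can be collapsed into a rough priority score that suggests a migration wave. A sketch, with entirely arbitrary weights and thresholds that you would tune to your own risk appetite:

```python
def score_workload(complexity, criticality, volume_gb, regulated):
    """Crude priority score: higher means migrate later, with more care.
    complexity and criticality are rated 1 (low) to 3 (high);
    the weights below are illustrative, not a standard."""
    score = complexity * 2 + criticality * 3
    score += 2 if volume_gb > 1000 else 0   # large volumes add risk
    score += 3 if regulated else 0          # compliance adds risk
    return score

def assign_wave(score):
    # Thresholds are arbitrary; adjust to how the scores spread in practice.
    if score <= 6:
        return "Wave 1 (Pilot)"
    if score <= 10:
        return "Wave 2 (Core)"
    return "Wave 3 (Complex)"
```

A small, non-critical, unregulated workload scores low and lands in the pilot wave; a large, regulated, business-critical one scores high and waits for the complex wave.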
Identify dependencies. A single reporting database might feed 47 reports, three downstream applications, and an external partner feed. Understanding those dependencies prevents you from breaking something important during the move.
Assess technical debt. Be direct about the state of what you are migrating. If the existing ETL pipelines are undocumented, untested, and understood only by someone who left two years ago, that is a migration risk. Document it and build it into the plan.
Deliverable: A migration inventory spreadsheet and a risk register.
Phase 1: Planning and Design
Timeline: 2 to 3 weeks
Once you understand the current state, design the target state.
Define your target architecture. For most organisations moving to Fabric, the target architecture will follow the Medallion pattern: Bronze for raw data, Silver for cleaned data, and Gold for business-ready data. Agree this structure with the team before building.
Design your naming conventions. This sounds administrative, but it prevents confusion later. Agree how workspaces, Lakehouses, tables, pipelines, and semantic models will be named. Write the convention down and apply it consistently.
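Once the convention is written down, it is worth enforcing mechanically. A sketch of a validator for a hypothetical convention of the form `<env>-<domain>-<artifact type>` (the pattern itself is an assumption; substitute whatever your team agrees):

```python
import re

# Hypothetical convention: lowercase, hyphen-separated, e.g. "prd-finance-lakehouse".
NAME_PATTERN = re.compile(
    r"^(dev|tst|prd)-[a-z0-9]+-(workspace|lakehouse|pipeline|model)$"
)

def is_valid_name(name: str) -> bool:
    """Return True if the artifact name follows the agreed convention."""
    return bool(NAME_PATTERN.match(name))

assert is_valid_name("prd-finance-lakehouse")
assert not is_valid_name("Finance Lakehouse PROD")
```

A check like this can run in a review step or a deployment script, which is cheaper than renaming fifty artifacts six months in.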
Choose your migration approach per workload. There are three common strategies:
| Approach | When to Use | Risk |
|---|---|---|
| Lift and Shift | Simple pipelines and straightforward data models | Low: fastest to migrate |
| Refactor | Complex legacy pipelines worth improving | Medium: more effort, better outcome |
| Re-engineer | Old systems that are fundamentally broken | High: most effort, cleanest result |
Do not apply the same approach to every workload. Use Lift and Shift for simple cases to build momentum early. Save Re-engineering for complex, valuable, genuinely broken processes.
Plan your migration waves. Do not migrate everything at once. Group workloads into waves based on risk and dependency:
- Wave 1 (Pilot): One or two low-risk, non-critical workloads. Prove the process before touching anything important.
- Wave 2 (Core): Common reporting and analytics workloads, moved in small batches.
- Wave 3 (Complex): Large, complex, or business-critical workloads with more testing time.
- Wave 4 (Tail): Legacy or rarely used workloads. Some may be decommissioned rather than migrated.
Define your success criteria. For each workload, define what "done" means. Row counts match. Column definitions are equivalent. Reports produce the same output. These criteria become your validation tests.
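Because the criteria are per workload, it helps to record them as data rather than prose. A minimal sketch, with hypothetical metric names and values, where "done" means every measured pair matches:

```python
# Hypothetical "done" checklist for one workload: each entry pairs a criterion
# with the value measured on the source system and on the new Fabric build.
criteria = {
    "row_count":     {"source": 1_204_311, "target": 1_204_311},
    "revenue_total": {"source": 4_231_450.00, "target": 4_231_450.00},
    "column_count":  {"source": 42, "target": 42},
}

def all_criteria_met(criteria):
    # The workload is complete only when every source/target pair matches.
    return all(c["source"] == c["target"] for c in criteria.values())
```

During validation, the same structure doubles as the evidence attached to the sign-off.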
Deliverable: Migration design document, wave plan, and validation criteria per workload.
Phase 2: Environment Setup
Timeline: 1 to 2 weeks
Before moving data, set up the Fabric environment properly. Shortcuts here tend to create issues throughout the rest of the project.
Provision Fabric capacity. Work with your Microsoft account team or purchase an F-SKU capacity that fits the expected workload. Do not size only for today. Consider where usage is likely to be in 12 months.
Set up workspaces. Create separate workspaces for Development, Testing, and Production. You should not develop directly in production. Define who can publish to Production, who approves changes, and how permissions are managed.
Configure networking and security. If you are migrating from on-premises systems, set up either a VNet Data Gateway or a self-hosted On-Premises Data Gateway to connect Fabric securely to existing sources. This step often causes delays, so schedule it early.
Set up monitoring. Before data starts moving, decide how you will monitor the migration. Fabric has built-in capacity monitoring, but you should also configure pipeline alerting and data quality checks.
Deliverable: Fabric environments provisioned, access controls configured, and connectivity tested.
Phase 3: Pilot Migration
Timeline: 2 to 3 weeks
The pilot tests the migration process as well as the technology.
Pick a Wave 1 workload that is low risk, well understood, and testable from end to end. Migrate it fully:
- Ingest the source data into the Fabric Lakehouse Bronze layer using Dataflow Gen2 or Data Pipeline
- Apply transformations to create the Silver and Gold layers
- Build or recreate the Semantic Model
- Validate outputs against the source system
Validation is the most important step. Run the success criteria defined earlier. Compare row counts and aggregated totals. If the legacy report shows GBP 4,231,450 in revenue for March, the new Fabric dashboard needs to show the same number.
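The comparison above can be sketched as a small reconciliation function. Row counts are compared exactly; monetary totals get an optional tolerance, since floating-point rounding can differ slightly between engines. Function and parameter names are illustrative:

```python
def reconcile(source_rows, target_rows, source_total, target_total, tolerance=0.0):
    """Compare a migrated workload against its source.
    Returns a list of discrepancy messages; an empty list means it passed."""
    issues = []
    if source_rows != target_rows:
        issues.append(f"row count mismatch: {source_rows} vs {target_rows}")
    if abs(source_total - target_total) > tolerance:
        issues.append(f"total mismatch: {source_total} vs {target_total}")
    return issues

# The March revenue example from the text: both systems show GBP 4,231,450.
assert reconcile(1_204_311, 1_204_311, 4_231_450.0, 4_231_450.0) == []
```

Returning a list of discrepancies, rather than a bare pass/fail, gives you something to paste straight into the issue log when a check fails.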
The pilot will probably reveal issues. That is useful. Document them, fix them, update the process documentation, and repeat the test until it passes.
Deliverable: Pilot completed and signed off, process documentation updated, and lessons captured for future waves.
Phase 4: Full Migration, Wave by Wave
```mermaid
flowchart LR
    subgraph WaveInput["Wave input"]
        REPORTS["Candidate reports"]
        SOURCES["Source systems"]
        OWNERS["Business owners"]
    end
    subgraph Build["Fabric build"]
        INGEST["Ingest"]
        MODEL["Model"]
        REPORT["Rebuild report"]
    end
    subgraph Validate["Validation"]
        PARALLEL["Parallel run"]
        RECON["Reconciliation"]
        SIGNOFF["User sign-off"]
    end
    subgraph Release["Release"]
        CUT["Cutover"]
        MONITOR["Monitor adoption"]
        RETIRE["Retire old assets"]
    end
    REPORTS --> INGEST
    SOURCES --> INGEST
    OWNERS --> SIGNOFF
    INGEST --> MODEL --> REPORT --> PARALLEL --> RECON --> SIGNOFF --> CUT --> MONITOR --> RETIRE
    CONTROL["Access, refresh, quality gates"] -.-> Build
    CONTROL -.-> Validate
```

Timeline: 4 to 12 weeks depending on volume
Now run the planned migration waves. Each wave follows the same process as the pilot, with more confidence as the team settles into the rhythm.
A few practices matter during this phase:
Run old and new in parallel. Do not switch off the source system until the new system has been validated and users are comfortable. Parallel running creates extra work, but it reduces cutover risk.
Communicate with business users. People who rely on existing reports and dashboards need to know what is changing and when. Regular updates protect trust. For example: "Wave 2 completes on 15 May. These reports will move to the new platform. Here is what will change."
Freeze changes on the source system. During migration of a workload, put a temporary change freeze on the source. It is difficult to validate a migrated pipeline if the source keeps changing underneath you.
Track progress visually. A simple migration tracker, even in a spreadsheet, is useful. Show each workload, wave, status, owner, and sign-off date.
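If the tracker lives in a spreadsheet, it can still be generated and updated from code. A minimal sketch that writes the tracker as CSV; all column names and row values are illustrative:

```python
import csv
import io

# One row per workload; columns match the fields suggested in the text.
rows = [
    {"workload": "SalesDW", "wave": 2, "status": "validated",
     "owner": "Finance BI", "signed_off": "2025-05-15"},
    {"workload": "HR reports", "wave": 3, "status": "in progress",
     "owner": "People team", "signed_off": ""},
]

buf = io.StringIO()  # swap for open("tracker.csv", "w", newline="") to write a file
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The point is less the format than the habit: one row per workload, updated every time a status or sign-off changes.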
Phase 5: Validation and User Acceptance Testing
Timeline: 2 to 3 weeks
Before any workload is declared complete, it needs to be validated by the people who use it.
Data validation: Row counts, aggregates, and key KPIs must match between old and new systems. Run validation across at least two weeks of data to catch edge cases.
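The two-week window can be checked as a per-day loop rather than one bulk comparison, which surfaces the specific days where an edge case bites. A sketch in which the two count functions are placeholders for queries against the old and new systems:

```python
from datetime import date, timedelta

def daily_mismatches(get_source_count, get_target_count, start, days=14):
    """Compare daily row counts over a validation window.
    get_source_count and get_target_count are placeholder callables that
    take a date and return that day's row count on each system."""
    bad_days = []
    for i in range(days):
        d = start + timedelta(days=i)
        if get_source_count(d) != get_target_count(d):
            bad_days.append(d)
    return bad_days
```

An empty result over the full window is the evidence you attach to the data-validation sign-off; a non-empty one tells you exactly which days to investigate.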
Report validation: Every recreated report and dashboard should be reviewed by a business user who knows the original. The data team can check technical accuracy, but business users must confirm the logic.
Performance testing: The new system needs to perform adequately. A report that took 10 seconds to load in the old system should not take two minutes in the new one. Optimise before cutover if needed.
Deliverable: Signed UAT approval from business stakeholders for each migrated workload.
Phase 6: Cutover and Decommission
Timeline: 1 to 2 weeks
The final step deserves care.
Plan your cutover weekend. For critical systems, cutover should happen during a low-traffic period, often a weekend. Prepare a clear runbook: timing, owners, validation steps, communication plan, and rollback approach.
Communicate the switch. Tell users the exact date and time when the old system stops being the source of truth and where they should access the new dashboards and reports.
Decommission deliberately. Do not rush to turn off old systems. Keep them available in read-only mode for 30 to 60 days after cutover as a safety net. Once the new environment is stable, decommission formally and release the compute or licences.
Document the final state. Update the data catalogue, architecture documentation, and runbooks so the new environment is understandable to the people who will maintain it.
What Most Plans Get Wrong
After working through migrations in several organisations, I see the same failure modes repeat:
Underestimating data quality issues. Source systems usually contain more mess than people expect. Budget time for it.
Insufficient business involvement. Data teams cannot validate business logic alone. Business users need to be involved from Phase 1, not brought in only for UAT.
No rollback plan. Hope is not a recovery strategy. Know how you will respond if cutover fails.
Migrating everything before proving anything. Pilots exist for a reason. Treat them as a serious test of the migration approach.
The Main Lesson
A migration to Microsoft Fabric can improve more than infrastructure. Done well, it forces the organisation to clean up technical debt, agree definitions, and rethink how data moves through the business.
That takes discipline, but the result is worth it: a data platform the organisation can build on instead of one it constantly works around.
Plan carefully. Move in waves. Validate everything. Do not skip discovery.
Next in this series: Top 5 AI tools for video content generation and which one fits each use case.