Power BI
Migrating Power BI From On Premises to the Cloud Without Breaking the Business
A practical migration playbook for moving Power BI from on-premises infrastructure to the cloud.
Most organisations that still run Power BI Report Server on a Windows VM somewhere in their data centre know the migration conversation is overdue. They postpone it because the tooling is unfamiliar, the licensing model has changed several times, and nobody wants to be the person who explains a broken Monday morning report to the leadership team.
This article maps out a migration path that has worked for several teams I have advised. It covers the planning work that nobody enjoys, the technical lift, the cutover strategy, and the post migration cleanup that actually decides whether the project succeeds.
Why Move at All
The honest answer is that the on premises product is on a slow path to retirement. Microsoft continues to ship security updates, but new features land in the Power BI service first and reach Report Server many months later, sometimes never. Composite models, Microsoft Fabric integration, Copilot in Power BI, deployment pipelines, and most agentic AI tooling either do not exist on Report Server or arrive in stripped down form.
Beyond features, the operational maths usually favours the cloud after the first year. Patching cycles, backup strategy, gateway maintenance, and the dance of getting SSRS and Analysis Services to behave nicely on the same Windows server all evaporate. So does the cost of running an underused VM 24 hours a day for a workload that peaks at 9am on weekdays.
The Decision Tree
Before any technical work begins, three questions need clean answers.
The first is the licensing model. Power BI Pro, Premium Per User, and Premium capacities each have very different price points and feature sets. Microsoft Fabric capacities now sit alongside Premium and complicate the picture further. The right pick depends on user count, dataset size, and the appetite for pipelines and lakehouses outside the BI surface.
The second is the data residency constraint. Some industries and some countries require data to stay inside specific regions. Power BI lets you deploy capacities in dozens of regions, but the default tenant region is set when the Microsoft 365 tenant is first created and changing it later is a project of its own.
The third is the connection strategy. If the source databases stay on premises, you will need an On Premises Data Gateway. If the migration includes lifting the warehouse into Azure, the gateway can be retired entirely once the cloud sources are in place.
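The three questions above can be roughed out as a triage function. This is a minimal sketch; the user-count threshold and the feature trigger are illustrative assumptions, not Microsoft pricing guidance, and should be validated against the current price list.

```python
def suggest_licence(users: int, needs_capacity_features: bool) -> str:
    """Illustrative licensing triage. Thresholds are assumptions,
    not pricing guidance -- check current Microsoft price lists."""
    if needs_capacity_features:
        # Very large models, Fabric workloads, or unlimited report
        # viewers usually point at a dedicated capacity.
        return "Fabric or Premium capacity"
    if users < 50:
        # Small user counts rarely justify per-user premium pricing.
        return "Power BI Pro"
    # Premium Per User adds large models and paginated reports
    # without buying a whole capacity.
    return "Premium Per User"
```

The point is not the exact cutoffs but forcing the three inputs, user count, feature needs, and capacity appetite, into one explicit decision.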
Migration Architecture
```mermaid
flowchart LR
    subgraph OnPrem["On Premises, Today"]
        SQL1[("SQL Server<br/>Warehouse")]
        SSAS[("SSAS Tabular<br/>Cubes")]
        RS[("Report Server<br/>Reports")]
        Users1[Users via Browser]
        Users1 --> RS
        RS --> SSAS
        SSAS --> SQL1
    end
    subgraph Cloud["Azure and Power BI Service, Target"]
        ASQL[("Azure SQL<br/>or Synapse")]
        Fabric[("Fabric Capacity<br/>Semantic Models")]
        PBI[("Power BI Service<br/>Workspaces")]
        Users2[Users via App or Teams]
        Users2 --> PBI
        PBI --> Fabric
        Fabric --> ASQL
    end
    Gateway[On Premises Data Gateway] -.transition state.-> Fabric
    SQL1 -.during cutover.-> Gateway
```
Phase 1, Inventory and Triage
You cannot migrate what you have not measured. Begin with a full audit of the existing Report Server estate. Record every report, its consumers, its data sources, its refresh schedule, its row count, and the last time someone actually opened it.
A surprising amount of this estate is dead weight. In one engagement we cut 64 percent of reports during inventory because nobody had viewed them in the previous 90 days. The cleaner the backlog, the faster the migration.
Triage the survivors into three buckets. Lift and shift means the report can be republished with minimal change. Refactor means the report needs work, often because it relies on shared datasets, paginated layouts, or stored procedures that need rewriting. Retire means the report is duplicated, broken, or replaced by something newer.
A simple inventory spreadsheet works fine, but if you prefer to automate the audit, the REST APIs on Report Server expose enough metadata to script the whole thing. The Power BI service side has equivalent admin APIs.
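The triage rules above are easy to script once the inventory exists. The sketch below uses hypothetical inventory rows; in practice they would come from the Report Server REST API or the Power BI admin APIs, and the 90-day staleness threshold mirrors the one mentioned earlier.

```python
from datetime import datetime, timedelta

# Hypothetical inventory rows. In a real audit these would be pulled
# from the Report Server REST API or the Power BI admin APIs rather
# than hard-coded.
now = datetime.now()
inventory = [
    {"name": "Sales Summary", "last_viewed": now - timedelta(days=12), "uses_shared_dataset": False},
    {"name": "Legacy Stock Report", "last_viewed": now - timedelta(days=200), "uses_shared_dataset": False},
    {"name": "Finance Pack", "last_viewed": now - timedelta(days=5), "uses_shared_dataset": True},
]

def triage(report: dict, stale_after_days: int = 90) -> str:
    """Bucket a report: retire if unviewed past the threshold,
    refactor if it depends on shared datasets, else lift and shift."""
    age = (datetime.now() - report["last_viewed"]).days
    if age > stale_after_days:
        return "retire"
    if report["uses_shared_dataset"]:
        return "refactor"
    return "lift-and-shift"

buckets = {r["name"]: triage(r) for r in inventory}
```

Extend the rules with whatever signals your estate records, broken data sources and duplicate names are the usual next candidates for the retire bucket.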
Phase 2, Prepare the Cloud Tenant
Provision the Power BI tenant, set the home region, and assign capacity. If you are buying a Fabric or Premium capacity, decide whether to consolidate everything into one or split capacities by department. Splitting feels safer at first but creates governance overhead later. A single shared capacity with workspace level isolation is usually easier to manage.
Set up workspaces with a consistent naming convention. A pattern that has held up well across many tenants is BusinessUnit_Domain_Stage, for example Finance_Sales_Prod. Stages cover Dev, Test and Prod, and pipelines automate promotion between them.
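A naming convention only holds if it is enforced. A small validator, run against the workspace list from the admin APIs, catches drift early; this sketch assumes single alphabetic tokens for unit and domain, which you may want to loosen.

```python
import re

# Pattern for the BusinessUnit_Domain_Stage convention described
# above. Stage is limited to Dev, Test or Prod; unit and domain are
# assumed to be single alphabetic tokens without underscores.
WORKSPACE_NAME = re.compile(r"^[A-Za-z]+_[A-Za-z]+_(Dev|Test|Prod)$")

def is_valid_workspace_name(name: str) -> bool:
    """True when the workspace name follows the tenant convention."""
    return WORKSPACE_NAME.fullmatch(name) is not None
```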
Configure tenant level settings before any user content lands. Restrict export to specific groups, lock down publish to web, set up sensitivity labels if Microsoft Information Protection is in use, and turn on usage metrics for every workspace.
Phase 3, Migrate the Data Layer First
Reports that point at on premises sources will work in the cloud through a gateway, but the gateway becomes a bottleneck and a single point of failure if the workload is heavy. Plan to migrate the warehouse to Azure SQL, Synapse, or Fabric Lakehouse early in the project, ideally before any reports move.
If a true warehouse migration is out of scope, at least replicate the data to Azure using Azure Data Factory or Fabric Data Pipelines. Reports then point at the cloud copy, which removes the gateway dependency for analytics traffic.
A common middle ground is to keep transactional databases on premises and replicate only the analytical schemas. The on premises systems remain authoritative for operational use, while the cloud copy serves Power BI exclusively. Latency is rarely an issue for analytics use cases that already tolerate overnight refreshes.
Phase 4, Refactor the Models
This is where the migration earns its keep. Most reports built on Report Server were authored when DirectQuery against SSAS Tabular was the default. In the new world, Import mode against a Fabric Lakehouse or composite mode against a semantic model often performs better and costs less.
Refactor in this order. Build the semantic model first, with a clean star schema and explicit measures. Connect a sample report to validate the model. Migrate the existing reports one by one, pointing them at the new semantic model rather than the old SSAS cube. Decommission the SSAS cube only after every dependent report has switched.
Treat semantic models as products. Give them owners, documentation, and a release process. The shared dataset pattern in the Power BI service is the single biggest reason teams find the cloud experience cleaner than Report Server.
Phase 5, The Cutover
Run the old and new estates in parallel for at least two refresh cycles. During this period, both systems serve reports and both refresh on schedule. Compare numbers. Differences will appear, almost always because of subtle changes in model semantics, time intelligence, or filter context.
Maintain a comparison sheet that shows the same KPI calculated against both estates for the same period. When the numbers match for two consecutive weeks across every report you actually care about, the migration is ready.
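The comparison sheet can be automated. A sketch, with hypothetical KPI figures and an assumed 0.1 percent relative tolerance; real runs would pull the numbers from both estates via their respective APIs or exports.

```python
def kpis_match(old: dict, new: dict, rel_tol: float = 0.001) -> dict:
    """Compare the same KPIs from the old and new estates.
    Returns KPI name -> True when within relative tolerance."""
    results = {}
    for kpi, old_value in old.items():
        new_value = new.get(kpi)
        if new_value is None:
            results[kpi] = False           # KPI missing in the new estate
        elif old_value == 0:
            results[kpi] = new_value == 0  # avoid dividing by zero
        else:
            results[kpi] = abs(new_value - old_value) / abs(old_value) <= rel_tol
    return results

# Hypothetical figures for one reporting period.
old_estate = {"Revenue": 1_204_332.50, "Orders": 8_412, "Margin": 0.231}
new_estate = {"Revenue": 1_204_332.50, "Orders": 8_412, "Margin": 0.229}

report = kpis_match(old_estate, new_estate)
```

A mismatch like the margin figure here is exactly the kind of difference the parallel-run period exists to surface, usually a filter-context or time-intelligence change in the refactored model.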
Communicate the cutover date to users at least three weeks in advance. Send reminders at three weeks, one week, and two days. On cutover day, redirect traffic, leave the old environment in read only mode for a fortnight in case rollback is required, and then decommission.
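The reminder cadence is trivial to generate from the cutover date, which keeps the comms schedule consistent across waves of reports:

```python
from datetime import date, timedelta

def reminder_dates(cutover: date) -> list:
    """Reminders at three weeks, one week and two days before
    cutover, matching the cadence suggested above."""
    return [cutover - timedelta(days=d) for d in (21, 7, 2)]

dates = reminder_dates(date(2025, 6, 30))  # example cutover date
```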
Phase 6, The Cleanup Nobody Enjoys
The temptation after cutover is to declare victory and move on. Resist it. The most valuable two weeks of the project come straight after cutover.
Review usage metrics every day for the first month. Reports that nobody opens in the new environment are often reports that nobody opened in the old environment either, but the move provides air cover for a real conversation about retirement. Capture refresh durations and tune the schedule so heavy datasets do not collide. Set up deployment pipelines so that future changes flow through Dev, Test and Prod rather than direct to production.
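Spotting colliding refresh schedules is a small grouping exercise once start times are captured. A sketch with hypothetical schedules, bucketing start times into 30-minute windows; the window size is an assumption to tune against your capacity's observed refresh durations.

```python
def refresh_collisions(schedules: dict, window_minutes: int = 30) -> dict:
    """Group dataset refreshes that start within the same window.
    `schedules` maps dataset name -> start time in minutes after
    midnight. Returns only the windows with more than one refresh."""
    by_slot = {}
    for name, start in schedules.items():
        slot = start // window_minutes
        by_slot.setdefault(slot, []).append(name)
    return {slot: names for slot, names in by_slot.items() if len(names) > 1}

# Hypothetical schedules: two refreshes landing around 06:00 collide.
schedules = {"Finance_Sales": 360, "Ops_Stock": 365, "HR_People": 600}
collisions = refresh_collisions(schedules)
```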
Document everything in a place developers will actually read. A wiki tied to the workspace, with one page per semantic model, works better than a SharePoint folder of Word files.
Common Pitfalls
The biggest is underestimating the gateway. If you choose to keep on premises sources, the gateway needs to be highly available with at least two cluster members, sized for the peak refresh window, and monitored. A single under provisioned gateway is the most common reason cloud migrations feel slower than the old environment.
The second is letting paginated reports linger. Power BI Report Server hosted both interactive and paginated reports. The Power BI service supports paginated reports too, but the authoring experience and the licensing both differ. Audit which paginated reports genuinely need to remain paginated, and convert the rest to interactive.
The third is ignoring identity. On premises Report Server typically used Windows authentication. The cloud uses Microsoft Entra ID. Service accounts that worked in the old world need to become app registrations or service principals in the new one. Plan this conversation with the identity team early because it has the longest lead time of any task in the project.
What You Get on the Other Side
The benefits start showing within the first month. Refresh failures drop because the platform itself is healthier. Reports load faster because the cloud engine carries several years of optimisation that never shipped to on premises customers. New features begin landing without server upgrades. Users start asking for AI features and finally have a platform that supports them.
The deeper benefit is harder to quantify. The cloud platform brings agentic AI patterns, Copilot, and Fabric integration within reach. None of those exist in any meaningful form on Report Server. Once the migration is complete, the analytics function stops being a maintenance operation and starts being a product team again.
A Final Note
Treat this migration as an opportunity to prune as well as copy. The reports you bring across will become the foundation of the next decade of analytics in your organisation. Bring across the ones that earn their place, refactor the ones that need work, and let the rest go. The cloud rewards small, well designed estates and punishes sprawl. Choose accordingly.
References and Further Reading
| # | Source | Type | Link |
|---|---|---|---|
| 1 | Microsoft Learn, Migrate to Power BI overview | Free official documentation | https://learn.microsoft.com/en-us/power-bi/guidance/powerbi-migration-overview |
| 2 | Microsoft Learn, Power BI Adoption Roadmap | Free official white paper | https://learn.microsoft.com/en-us/power-bi/guidance/powerbi-adoption-roadmap-overview |
| 3 | Microsoft Learn, On premises data gateway | Free official documentation | https://learn.microsoft.com/en-us/data-integration/gateway/ |
| 4 | Microsoft Learn, Deployment pipelines | Free official documentation | https://learn.microsoft.com/en-us/power-bi/create-reports/deployment-pipelines-overview |
| 5 | Microsoft Learn, Microsoft Fabric overview | Free official documentation | https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview |
| 6 | Microsoft Learn, Power BI Report Server documentation | Free official documentation | https://learn.microsoft.com/en-us/power-bi/report-server/ |
| 7 | Microsoft Learn, Power BI licensing | Free official documentation | https://learn.microsoft.com/en-us/power-bi/enterprise/service-admin-licensing-organization |
| 8 | Tabular Editor, free version on GitHub | Open source modelling tool | https://github.com/TabularEditor/TabularEditor |