
Power BI

Real Time Streaming Dashboards in Power BI, Without the Architectural Mistakes

By Syed Hussnain Sherazi | November 4, 2025 | Real Time | Streaming | Dashboards

A practical architecture guide for real-time streaming dashboards in Power BI.

The phrase real time is used very loosely in BI. Some teams mean refreshes every fifteen minutes. Others mean second by second updates from a factory floor. The right architecture differs sharply between those two cases. Picking the wrong one creates either a dashboard that lies about its freshness or a system that costs ten times more than it needs to.

This article works through the streaming options available to Power BI today, explains the trade offs honestly, and walks through a practical setup that updates a dashboard every few seconds from an event source.

What Real Time Means in Practice

Three distinct latency tiers cover almost every real world requirement.

Near real time means under fifteen minutes from event to dashboard. This serves most operational reporting needs. It is achieved with frequent refreshes against a warehouse, or with DirectQuery against a fast source.

Real time means under one minute. This is the comfortable range for monitoring dashboards and live operations centres. It needs streaming infrastructure but the architecture stays manageable.

Sub second means under one second. This is genuine real time, suitable for trading desks, factory floors, and emergency response. It requires push streaming all the way to the visual and a different mindset about cost and reliability.

Be honest about which tier you actually need. Most teams ask for the third tier and then realise the fifteen minute tier solves their problem at a fraction of the cost.

The Building Blocks

Power BI offers four mechanisms for fresh data. They sit at different points on the latency curve.

Scheduled refresh runs an Import dataset on a schedule, typically eight times per day on Pro or up to forty eight times on Premium. Easiest to operate, slowest to refresh.

DirectQuery sends every query to the source on demand. Freshness equals source freshness. Performance depends on the source.

Push datasets accept rows pushed via the REST API. Each push appends to the dataset, which can hold up to five million rows. Latency is in the seconds, but the API has rate limits.

Streaming datasets are similar to push but the data is held in a buffer rather than persisted. Visuals built on streaming datasets update without page refresh. Latency is sub second.

A real time architecture usually combines these. A streaming dataset for the live tile, a push dataset for the rolling window, and an Import dataset for the historical context. Each tier serves a different visual.
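
To make the push mechanism concrete, here is a minimal sketch of adding rows to a push dataset through the REST API. The dataset id, table name, and token are placeholders, and the token itself would come from a Microsoft Entra ID sign in.

import requests

# Placeholders, substitute values from your own tenant
DATASET_ID = "your-dataset-id"
TABLE_NAME = "RealtimeData"  # hypothetical table name
ACCESS_TOKEN = "your-access-token"

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/tables/{TABLE_NAME}/rows"

payload = {"rows": [
    {"vehicle_id": "V-042", "speed_kmh": 54.2, "status": "moving"}
]}

response = requests.post(url, json=payload,
                         headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
response.raise_for_status()

Each call appends to the five million row store, so the rate limits mentioned above apply per dataset.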

Reference Architecture

flowchart LR
    Source[Source System or IoT Device]
    EH[Azure Event Hubs]
    Stream[Stream Analytics or Fabric Real Time]
    Streaming[Streaming Dataset]
    Push[Push Dataset]
    Lake[Lakehouse or Warehouse]
    SemModel[Import or Direct Lake Semantic Model]
    Dashboard[Dashboard with Mixed Tiles]

    Source --> EH
    EH --> Stream
    Stream --> Streaming
    Stream --> Push
    Stream --> Lake
    Lake --> SemModel
    Streaming --> Dashboard
    Push --> Dashboard
    SemModel --> Dashboard

Events land in Event Hubs. A stream processor splits them into three flows. The streaming dataset feeds the live tile. The push dataset feeds the rolling window. The lakehouse stores everything for historical analysis through the regular semantic model.

This pattern handles a million events per minute on a modest Fabric capacity, with sub second latency on the live tile and minute scale latency on the historical layer.

Tutorial, A Working Example

Suppose you operate a delivery fleet and want a dashboard that shows live vehicle positions, average speed in the last five minutes, and total deliveries today.

Step 1, Create the Streaming Dataset

In the Power BI service, create a new streaming dataset of type API. Define the schema.

{
  "vehicle_id": "string",
  "timestamp": "datetime",
  "latitude": "number",
  "longitude": "number",
  "speed_kmh": "number",
  "status": "string"
}

After creation, copy the push URL. This is the endpoint your stream processor will call.
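
You can sanity check the endpoint before any infrastructure exists. This is a minimal sketch assuming a placeholder push URL; the payload is simply a JSON array of rows matching the schema.

import requests

# Placeholder, use the push URL copied from the dataset settings
PUSH_URL = "https://api.powerbi.com/beta/<tenant>/datasets/<id>/rows?key=<key>"

test_rows = [{
    "vehicle_id": "V-001",
    "timestamp": "2025-11-04T09:00:00Z",
    "latitude": 51.5072,
    "longitude": -0.1276,
    "speed_kmh": 42.0,
    "status": "moving",
}]

# The key embedded in the URL authenticates the call, no token required
response = requests.post(PUSH_URL, json=test_rows)
response.raise_for_status()

If a tile built on the dataset updates when you run this, the endpoint is healthy.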

Step 2, Set Up Event Hubs

Create an Event Hubs namespace in Azure. Create a hub called vehicle_positions. Configure the producer SDK on each vehicle to send a JSON event every two seconds with the schema above.

from azure.eventhub import EventHubProducerClient, EventData
from datetime import datetime, timezone
import json, time

def now_iso():
    # ISO 8601 timestamp in UTC, matching the datetime column in the schema
    return datetime.now(timezone.utc).isoformat()

# EVENT_HUB_CONN, vehicle_id, gps, and speedometer come from the device runtime
producer = EventHubProducerClient.from_connection_string(
    EVENT_HUB_CONN, eventhub_name="vehicle_positions"
)

while True:
    event = {
        "vehicle_id": vehicle_id,
        "timestamp": now_iso(),
        "latitude": gps.lat,
        "longitude": gps.lon,
        "speed_kmh": speedometer.kmh,
        "status": "moving" if speedometer.kmh > 0 else "stopped",
    }
    # One event per batch keeps the example simple; production code would
    # accumulate several events per send to reduce per-call overhead
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(event)))
    producer.send_batch(batch)
    time.sleep(2)

Step 3, Configure the Stream Processor

Use either Azure Stream Analytics or Fabric Real Time Intelligence to read from Event Hubs and route to the destinations.

In Stream Analytics, the query splits the input into three sinks.

-- Sink 1, push to streaming dataset
SELECT vehicle_id, [timestamp], latitude, longitude, speed_kmh, status
INTO PowerBIStreaming
FROM EventHubInput;

-- Sink 2, aggregate and push to push dataset
SELECT
    System.Timestamp() AS window_end,
    COUNT(*) AS event_count,
    AVG(speed_kmh) AS avg_speed
INTO PowerBIPush
FROM EventHubInput
GROUP BY TumblingWindow(minute, 5);

-- Sink 3, persist to lakehouse
SELECT *
INTO LakehouseSink
FROM EventHubInput;

The first sink streams every event directly. The second produces five minute windows for the rolling KPI; by default the window boundaries follow arrival time, so add TIMESTAMP BY [timestamp] to the FROM clause if the windows should align to event time instead. The third archives every event for later analysis.

Step 4, Build the Dashboard

In a Power BI dashboard, add three kinds of tiles.

A streaming tile that visualises live positions on a map. The tile auto updates without refresh. It shows the most recent five hundred events from the streaming dataset.

A push tile that shows the average speed over five minutes as a card and as a line chart. This tile reads from the push dataset and updates on each window emission.

A regular tile pinned from a Power BI report that shows the historical context. Today's deliveries, comparison to yesterday, weekly trend. This tile reads from the Import or Direct Lake semantic model on the lakehouse.

The three tiers coexist on the same dashboard. The user sees a coherent picture spanning seconds to weeks.

Step 5, Handle Failures Gracefully

Streaming systems fail in interesting ways. Event hub partitions get hot. Stream processors crash. The push API rate limits when traffic spikes. Three defensive patterns matter.

Buffer producer side. Each device should keep a local buffer for several minutes of events and replay them when connectivity returns. Without this, a brief network outage produces gaps that never get filled.
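
A minimal sketch of that buffer, reusing the producer and imports from Step 2; the 300 entry cap is an assumption that corresponds to ten minutes of events at one every two seconds.

from collections import deque
from azure.eventhub import EventData
import json

# Ten minutes of events at one every two seconds
buffer = deque(maxlen=300)

def send_with_replay(producer, event):
    buffer.append(event)
    try:
        # Drain oldest first so replayed events arrive in order
        while buffer:
            batch = producer.create_batch()
            batch.add(EventData(json.dumps(buffer[0])))
            producer.send_batch(batch)
            buffer.popleft()
    except Exception:
        # Connectivity lost, keep the remaining events for the next attempt
        pass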

Idempotent destinations. The streaming and push datasets are append only by design, so duplicates can show up after retries. Either dedupe in the stream processor using event id, or design the visuals to tolerate occasional duplicate rows.
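
Here is a sketch of the dedupe option. It assumes the events carry an event_id field, which the schema above does not include, so treat it as the shape of the pattern rather than a drop in.

from collections import OrderedDict

class Deduper:
    """Remembers the most recent ids and drops repeats."""

    def __init__(self, capacity=100_000):
        self.seen = OrderedDict()
        self.capacity = capacity

    def is_new(self, event_id):
        if event_id in self.seen:
            return False
        self.seen[event_id] = True
        if len(self.seen) > self.capacity:
            # Evict the oldest id so memory stays bounded
            self.seen.popitem(last=False)
        return True

A processor would call is_new on each event id and push only the rows that pass.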

Backpressure on rate limits. Power BI streaming push has a rate limit of one million rows per hour per dataset. If your traffic exceeds this, aggregate in the stream processor before pushing. The dashboard does not need every raw event, only the windowed result.
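
A sketch of that aggregation, with an assumed ten second window and a push_row callable standing in for whichever push call you use; now_iso is the helper from Step 2.

import time

class WindowAggregator:
    """Buckets raw events into fixed windows, one summary row per window."""

    def __init__(self, push_row, window_seconds=10):
        self.push_row = push_row            # callable that posts one row
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()
        self.count = 0
        self.speed_sum = 0.0

    def on_event(self, event):
        self.count += 1
        self.speed_sum += event["speed_kmh"]
        # A window is emitted by the first event past its boundary,
        # which is fine under steady traffic
        if time.monotonic() - self.window_start >= self.window_seconds:
            self.push_row({
                "window_end": now_iso(),    # helper from Step 2
                "event_count": self.count,
                "avg_speed": self.speed_sum / self.count,
            })
            self.window_start = time.monotonic()
            self.count, self.speed_sum = 0, 0.0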

When Direct Lake Changes the Picture

Fabric Direct Lake on a lakehouse with continuous ingestion can serve some real time scenarios that previously required streaming datasets. If your data lands in OneLake within seconds of the event, the semantic model picks it up on the next query.

The advantage is architectural simplicity. One semantic model, one set of measures, one place to enforce row level security. The disadvantage is that the granularity of freshness is bounded by the parquet write cadence, which is typically tens of seconds rather than sub second.

For most operational dashboards, this is good enough and far simpler to operate than the three tier architecture above. Reserve the multi tier streaming setup for cases where sub second updates on a specific tile genuinely matter.

Cost Considerations

Streaming costs money in three places. Event Hubs throughput units are billed hourly. Stream Analytics streaming units are billed per second of execution. Push and streaming datasets are free in terms of dataset cost but they consume capacity time on the BI side.

A workload of one thousand events per second through one Event Hub partition, one Stream Analytics streaming unit, and a P1 capacity costs roughly the same as a small DirectQuery workload against a SQL warehouse. Below this scale, the marginal cost is small. Above ten thousand events per second, the costs scale up and start to matter. Plan capacity accordingly.

A useful rule of thumb is that the cost of a streaming pipeline is two to four times the cost of an equivalent batch pipeline at the same data volume. The premium pays for the latency. If the latency is not actually required, the batch pipeline is the better answer.

Common Mistakes

The first mistake is to put real time data and historical data into the same dataset. The technical reason is that streaming datasets cannot have relationships, and push datasets cannot easily join to dimensions. The pragmatic reason is that the two tiers have different governance and different SLAs. Keep them separate.

The second mistake is to build a dashboard with only streaming tiles and no historical context. Operators need both. A live speed reading is interesting only when it can be compared to the average for this time of day, this week.

The third mistake is to assume streaming is always more accurate. It is more current. It is also more brittle. Numbers from a streaming source can flicker, jump, or briefly contradict each other while late events arrive. Historical numbers from a refreshed warehouse have had time to settle. Both have their place.

The Main Lesson

Real time dashboards are seductive. They look impressive, and stakeholders enjoy watching numbers move on the screen. But the right architecture depends on the question being asked, not on the visual effect.

If the user makes decisions in seconds, build streaming. If the user makes decisions in minutes, near real time refresh works fine. If the user reviews the dashboard once a day, scheduled refresh is sufficient and often more reliable. Match the architecture to the cadence of decisions, not to the appetite for novelty. The architecture you do not build is the cheapest one to operate.

References and Further Reading

1. Microsoft Learn, Real time streaming in Power BI (free official documentation): https://learn.microsoft.com/en-us/power-bi/connect-data/service-real-time-streaming
2. Microsoft Learn, Azure Event Hubs (free official documentation): https://learn.microsoft.com/en-us/azure/event-hubs/
3. Microsoft Learn, Azure Stream Analytics (free official documentation): https://learn.microsoft.com/en-us/azure/stream-analytics/
4. Microsoft Learn, Real Time Intelligence in Microsoft Fabric (free official documentation): https://learn.microsoft.com/en-us/fabric/real-time-intelligence/overview
5. Apache Kafka documentation (open source documentation): https://kafka.apache.org/documentation/
6. Microsoft Learn, Push and streaming datasets in Power BI (free official documentation): https://learn.microsoft.com/en-us/power-bi/connect-data/service-real-time-streaming
7. Microsoft Learn, KQL Query Language reference (free official documentation): https://learn.microsoft.com/en-us/kusto/query/
8. Microsoft Learn, Direct Lake mode in Fabric (free official documentation): https://learn.microsoft.com/en-us/fabric/get-started/direct-lake-overview
