AI and Automation
How Agentic AI Will Quietly Replace Half of What Dashboards Do Today
How agentic AI changes the role of dashboards from static reporting surfaces to diagnostic assistants.
The dashboard, as a category, has not changed in twenty years. A grid of charts, a row of slicers, a filter pane on the right, and a refresh schedule that runs at 6am every morning. Users glance at it for ninety seconds, screenshot the chart they care about, and paste it into a slide. Most of them will never open the rest of the report.
Agentic AI is going to dismantle a lot of this. Not by replacing dashboards entirely, but by replacing the parts of the workflow where dashboards are an awkward middleman between a question and an answer. This article looks at how that shift is unfolding inside Power BI specifically, and where serious analytics teams should be placing their bets.
What Counts as Agentic AI
The phrase has become loose. For the purposes of this article, an agent is a system that can reason about a goal, decide which tools or actions to use, execute them, observe the result, and iterate until the goal is met or it gives up. A simple Q&A chatbot is not agentic. An assistant that takes a vague business question, explores a semantic model, runs DAX, validates the answer, and returns a chart with commentary is agentic.
The distinction matters because the second pattern can do work that has historically required a human analyst. The first pattern can only answer questions that were already in scope.
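The reason, act, observe, iterate loop described above can be sketched in a few lines. Everything here is illustrative: the `model` is a stand-in for an LLM call and the tool names are invented, not part of any real framework.

```python
# Minimal reason-act-observe loop: an illustrative sketch, not a production agent.
# "model" stands in for an LLM call; tool names are invented for the example.

def run_agent(goal, model, tools, max_steps=7):
    """Iterate until the model declares the goal met or the step budget runs out."""
    observations = []
    for _ in range(max_steps):
        decision = model(goal, observations)  # reason: choose the next action
        if decision["action"] == "finish":
            return decision["answer"]         # goal met
        tool = tools[decision["action"]]      # decide which tool to use
        result = tool(**decision["args"])     # execute the tool
        observations.append(result)           # observe the result, then iterate
    return None                               # gave up within the step budget

# A toy model that queries once, then answers with what it observed.
def toy_model(goal, observations):
    if not observations:
        return {"action": "run_query", "args": {"q": goal}}
    return {"action": "finish", "answer": f"seen: {observations[-1]}"}

answer = run_agent("revenue last week", toy_model,
                   {"run_query": lambda q: f"result({q})"})
```

The `max_steps` budget is what separates an agent that gives up gracefully from one that loops forever, and it foreshadows the point made later about agents drifting on long chains.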
The Work Dashboards Were Quietly Doing
To see where agents fit, it helps to look at what dashboards have actually been doing all these years. Three jobs dominate.
The first is monitoring. A user wants to know if a number is on track. A KPI tile or a sparkline answers that. The user does not need a dashboard. They need an answer, and ideally a notification when something changes.
The second is diagnosis. A user notices a number has moved and wants to know why. This is the question where most dashboards fail. The drill-through path is fixed at design time, so the user can only ask the questions the analyst anticipated. Anything else turns into an email asking for a custom view.
The third is exploration. The user is trying to find something they cannot yet articulate. Dashboards are bad at this because the visuals are pre-selected. Exploration belongs in tools like the Analyze in Excel pivot, or in custom queries written by the analyst.
Agents address all three of these jobs, but they address diagnosis and exploration most powerfully.
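The monitoring job described above reduces to a comparison and a notification, which is why it needs an answer rather than a dashboard. A minimal sketch, with an invented KPI and an illustrative tolerance:

```python
# Monitoring reduced to its essence: compare an actual against a target and
# decide whether to notify. The KPI, figures, and tolerance are illustrative.

def check_kpi(name, actual, target, tolerance=0.05):
    """Return an alert message when the KPI drifts more than `tolerance` off target."""
    variance = (actual - target) / target
    if abs(variance) > tolerance:
        return f"{name} is {variance:+.1%} vs target"
    return None  # on track: no notification needed

alert = check_kpi("Revenue", actual=88_000, target=100_000)
# -> "Revenue is -12.0% vs target"
```

In practice the result feeds a notification channel such as Teams or email; the point is that nothing in the monitoring job requires a visual surface at all.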
Architecture of an Agentic Dashboard
```mermaid
flowchart LR
    User[User asks a question in natural language]
    Orchestrator[Agent Orchestrator]
    LLM[Reasoning LLM]
    Tools[Tool Catalogue]
    DAX[DAX Generator]
    Semantic[Semantic Model]
    Validator[Result Validator]
    Memory[Conversation Memory]
    Viz[Chart Renderer]
    User --> Orchestrator
    Orchestrator --> LLM
    LLM --> Tools
    Tools --> DAX
    DAX --> Semantic
    Semantic --> Validator
    Validator --> LLM
    LLM --> Viz
    Viz --> User
    Memory --> LLM
    LLM --> Memory
```
The orchestrator is the heart of the system. It receives the user query, plans the steps, calls the language model with tool descriptions, and routes the language model decisions to the appropriate tools. The semantic model acts as the source of truth. The important detail is that the agent does not query the warehouse directly. It queries the semantic model, which means existing measures, security, and business definitions are honoured automatically.
A Concrete Example
Imagine a sales director asks the agent why revenue dropped in the European region last week. The journey looks roughly like this.
The agent first decodes the question. It identifies the metric (revenue), the dimension filter (region equals Europe), and the time grain (week, comparing to the previous week). It calls a DAX generation tool to produce a query against the semantic model.
```dax
EVALUATE
SUMMARIZECOLUMNS(
    DimDate[YearWeek],
    FILTER(DimGeography, DimGeography[Region] = "Europe"),
    "Revenue", [Total Sales],
    "Revenue PW", CALCULATE([Total Sales], DATEADD(DimDate[Date], -7, DAY))
)
ORDER BY DimDate[YearWeek] DESC
```
The result confirms a 12 percent drop. The agent now plans a diagnosis. It decomposes the metric across each major dimension, looking for the largest contributors to the variance. It might run several queries in parallel.
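The contribution ranking at the heart of this step can be sketched simply: take the metric by dimension member for both weeks, and express each member's change as a share of the total change. The figures below are invented to mirror the example.

```python
# Decompose a week-on-week revenue change across one dimension and rank the
# members by their contribution to the variance. Figures are invented.

def contributions(current, previous):
    """Share of the total change attributable to each dimension member."""
    total_delta = sum(current.values()) - sum(previous.values())
    return sorted(
        ((k, (current[k] - previous[k]) / total_delta) for k in current),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    )

prev = {"UK": 500, "DE": 300, "FR": 200}
curr = {"UK": 404, "DE": 296, "FR": 180}  # total fell 120, i.e. 12 percent
ranked = contributions(curr, prev)
# UK accounts for 96/120 = 80 percent of the decline, so it ranks first
```

Running the same decomposition across several dimensions in parallel and comparing the top contributors is what turns a flat "revenue fell" into a ranked list of suspects.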
After comparing contributions across product category, country, channel and customer segment, the agent observes that one country fell off a cliff while the rest stayed flat. It drills further and isolates a specific category within that country. Then it queries an external tool, perhaps a calendar of marketing campaigns, and finds a campaign that ended exactly on the boundary of the drop.
The agent presents the answer. Revenue in the European region fell 12 percent week on week. Roughly 80 percent of the decline came from the United Kingdom, concentrated in the home appliances category, and the timing aligns with the end of a promotional campaign that ran the week before. It offers to recalculate the figure if the campaign is excluded.
A human analyst could have produced the same answer. The difference is that the agent did it in twelve seconds, at midnight, with no human in the loop, and it did it for hundreds of users that day.
What Agents Are Actually Good At
Agents are strong at three classes of work that traditional dashboards struggle with.
Variance analysis is the first. Whenever a number changes, the question that follows is always why, and the answer always lives in a multidimensional decomposition. Agents do this kind of decomposition well because the search space is structured.
Anomaly summarisation is the second. A dashboard with thirty charts will hide an anomaly inside one of them. An agent that scans every metric every day and surfaces only the ones that broke their pattern is dramatically more useful than a static report that asks the user to spot the anomaly themselves.
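A daily scan of this kind needs nothing exotic. A minimal sketch, using a plain z-score against a trailing window, with an illustrative series and threshold:

```python
# Scan a daily metric for points that break their historical pattern, using a
# plain z-score against the trailing window. Window and threshold are illustrative.
from statistics import mean, stdev

def anomalies(series, window=7, threshold=3.0):
    """Indices where a value sits more than `threshold` deviations off its trailing mean."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        sigma = stdev(history)
        if sigma and abs(series[i] - mean(history)) / sigma > threshold:
            flagged.append(i)
    return flagged

daily = [100, 102, 99, 101, 100, 103, 98, 100, 55, 101]  # day 8 collapses
print(anomalies(daily))  # -> [8]
```

An agent runs this over every metric in the model, then spends its reasoning budget only on the handful of indices that come back flagged.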
Personalised narrative is the third. Two users looking at the same revenue chart care about different things. The Chief Financial Officer wants the year on year shape. The product manager wants the contribution of the new launch. An agent can frame the same underlying data into the narrative each user actually needs, on demand, rather than requiring three separate dashboards.
What Agents Are Not Good At
There are clear limits and pretending otherwise will damage trust.
Agents are not good at producing the canonical financial number. The number that goes on the board pack must be governed, repeatable, and signed off. Agents introduce variability, and variability is the enemy of audit. Use agents for diagnosis and exploration, not for the headline KPI that the regulator will examine.
Agents are not good at handling messy or undocumented data. They produce confident answers from poor inputs, which is worse than no answer. The prerequisite for agentic analytics is a clean semantic model with measures named clearly, dimensions properly related, and definitions documented. The investment in dimensional modelling pays off twice in this world. Once for human analysts, and again for the agent.
Agents are not yet good at very long horizon multi step plans. They do well on three to seven step tasks, especially when the steps are validated. They drift on much longer chains, and they hallucinate when no relevant data exists.
The Practical Path Into Agentic Power BI
There are three accessible entry points right now for teams that want to begin without a research budget.
The first is Copilot in Power BI. It is the easiest to enable, requires Premium or Fabric capacity, and works against existing semantic models. It is not a fully agentic system, but it covers natural language Q&A and narrative generation.
The second is custom agents built with Microsoft Copilot Studio or Azure AI Foundry, combined with the Power BI REST APIs and the XMLA endpoint. These agents can be exposed inside Teams, inside a Power Apps canvas, or via a chat surface in the company portal. The reasoning model talks to the semantic model through DAX queries.
The third is open source orchestration with frameworks such as LangChain or Semantic Kernel, calling Azure OpenAI as the language model and using the XMLA endpoint as the data tool. This route gives the most control and the most work. It is appropriate when the organisation wants tight observability and custom evaluation pipelines.
A Tutorial Sketch
A minimal custom agent that answers questions against a Power BI dataset can be built in a few hundred lines. The shape of it looks like this.
```python
import json

import requests
from azure.identity import DefaultAzureCredential
from openai import AzureOpenAI

# 1. Authenticate against the Power BI service
credential = DefaultAzureCredential()
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default").token

# 2. Tool that runs a DAX query against a published dataset
def run_dax(workspace_id, dataset_id, query):
    url = f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/datasets/{dataset_id}/executeQueries"
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        json={"queries": [{"query": query}]},
    )
    response.raise_for_status()
    return response.json()

# 3. The agent loop (endpoint and key come from the AZURE_OPENAI_* environment variables)
client = AzureOpenAI(api_version="2024-08-01-preview")

def answer(question, workspace_id, dataset_id, schema_summary):
    messages = [
        {"role": "system", "content": f"You analyse a Power BI dataset. Schema: {schema_summary}. Use DAX queries to answer."},
        {"role": "user", "content": question},
    ]
    while True:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=[{
                "type": "function",
                "function": {
                    "name": "run_dax",
                    "description": "Execute a DAX query against the Power BI dataset",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            }],
        )
        msg = response.choices[0].message
        if msg.tool_calls:
            messages.append(msg)  # keep the assistant's tool call in the history
            for call in msg.tool_calls:
                args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
                result = run_dax(workspace_id, dataset_id, args["query"])
                messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
        else:
            return msg.content
```
This is a starting point. A production agent adds query validation, cost controls, conversation memory, evaluation harnesses, and access control. None of those are difficult, but each one needs deliberate design.
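Of the production concerns listed above, query validation is the cheapest to add and the first one worth building. A minimal sketch, assuming an allow-list approach; the rules and limits here are illustrative, not exhaustive:

```python
# A guardrail for agent-generated DAX: allow only plain EVALUATE statements
# (optionally preceded by DEFINE) and cap query size. Rules are illustrative.
import re

def validate_dax(query, max_length=5000):
    """Return (ok, reason) for a candidate DAX query before it reaches the dataset."""
    stripped = query.strip()
    if len(stripped) > max_length:
        return False, "query too long"
    if not re.match(r"(?is)^(DEFINE\b.*?)?EVALUATE\b", stripped):
        return False, "only EVALUATE statements are allowed"
    return True, "ok"

ok, reason = validate_dax("EVALUATE SUMMARIZECOLUMNS(DimDate[YearWeek])")
# -> ok is True
```

A real validator would also inspect the query plan or row estimate before execution, but even this allow-list stops a misbehaving model from sending arbitrary statements at the endpoint.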
Where Dashboards Still Win
Dashboards remain the right answer for the operational morning glance. A factory floor manager who needs the line health at a glance does not want to chat with an agent. A trading desk needs a rich, dense view that updates every second. These are user experiences that agents will not replace any time soon.
The shift is more subtle. Dashboards stop being the primary lens for diagnosis. They become a window into the most stable, most operational metrics. Agents take over the long tail of ad hoc questions, where the dashboard was always a poor fit anyway.
A Final Thought
The interesting thing about agentic AI in analytics is that it inverts the historical cost curve. For decades, the cheapest analytics work was the standard report and the most expensive was the bespoke question. Agents make bespoke questions almost free, while standard reports remain comparable in cost. Whichever organisation gets there first will make decisions faster than its competitors. The technology stack to build it is already on the shelf.
References and Further Reading
| # | Source | Type | Link |
|---|---|---|---|
| 1 | Anthropic, Building Effective Agents | Free engineering article | https://www.anthropic.com/engineering/building-effective-agents |
| 2 | LangGraph documentation | Open source framework documentation | https://langchain-ai.github.io/langgraph/ |
| 3 | Microsoft Semantic Kernel | Open source GitHub project | https://github.com/microsoft/semantic-kernel |
| 4 | Microsoft AutoGen | Open source GitHub project | https://github.com/microsoft/autogen |
| 5 | ReAct, Synergizing Reasoning and Acting in Language Models | Free arXiv paper | https://arxiv.org/abs/2210.03629 |
| 6 | Microsoft Learn, XMLA endpoints in Power BI | Free official documentation | https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools |
| 7 | Microsoft Learn, Power BI REST API | Free official documentation | https://learn.microsoft.com/en-us/rest/api/power-bi/ |
| 8 | LangChain documentation | Open source framework documentation | https://python.langchain.com/docs/ |