
Writing Better Questions for AI Analytics Tools

Vague questions get vague answers - and with BYOK AI tools, they cost more too. Here's how to ask your analytics AI questions it can actually answer.

You ask your analytics AI “how is my traffic doing?” and get three paragraphs of hedged generalities. Or you ask why traffic dropped and the AI spends half its response clarifying what it’s assuming, then gives you something frustratingly inconclusive.

The temptation is to think the tool isn’t very good. But the tool is doing exactly what it was asked - the problem is the question.

This isn’t about learning to write “prompts.” It’s about understanding how these tools work and asking in a way that gives them something to work with.

What the AI is actually doing

When you ask a question, the AI doesn’t think - it fetches. It goes to your connected data sources, retrieves what it needs, and constructs an answer. The question determines what it fetches and how confidently it can answer.

A precise question maps directly to specific data. “How did organic sessions change last week compared to the week before?” has a clear answer - pull organic traffic for two date ranges, compare them, done.

A vague question forces the AI to guess what you meant and check multiple possibilities. “How is my traffic doing?” could mean anything: overall sessions, organic only, paid only, a specific channel, a specific page, this week, this month, compared to last month, compared to last year. The AI doesn’t know which of those you care about, so it either picks one (often the wrong one), hedges across all of them, or asks you to clarify - which is the most honest response, but also the most frustrating when you just wanted a quick answer.
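
To make that concrete, here’s a rough sketch of the difference in data-fetching terms. The `fetch_report` helper is invented for illustration - it stands in for whatever query layer the tool runs against GA4, not a real API:

```python
from datetime import date, timedelta

# Invented helper standing in for the tool's query layer - not a real API.
def fetch_report(metric: str, channel: str | None, start: date, end: date) -> int:
    return 0  # stub: a real implementation would query the connected source

today = date(2025, 6, 16)

# Precise question: "How did organic sessions change last week compared to
# the week before?" - exactly two targeted fetches, then a comparison.
last_week = fetch_report("sessions", "organic", today - timedelta(days=7), today - timedelta(days=1))
prior_week = fetch_report("sessions", "organic", today - timedelta(days=14), today - timedelta(days=8))
print(f"Week-over-week change: {last_week - prior_week}")

# Vague question: "How is my traffic doing?" - the AI has to guess the
# channel and the window, so it fans out before it can even hedge.
fetches = 0
for channel in (None, "organic", "paid"):  # None = all channels
    for window_days in (7, 28):            # which period did you mean?
        fetch_report("sessions", channel, today - timedelta(days=window_days), today - timedelta(days=1))
        fetches += 1
print(f"The vague question triggered {fetches} fetches")
```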

This has a direct cost with BYOK tools

With most AI analytics platforms, the AI usage cost is absorbed by the provider - you pay a flat subscription and don’t think about what’s happening underneath. With bring-your-own-key tools like AI Data Stream, you’re using your own API key, which means every question has a direct cost in API tokens - and you can see exactly how many tokens each conversation uses.

Vague questions are expensive. Not hugely - a well-structured question might cost a fraction of a cent - but the difference between a precise question and a vague one isn’t just answer quality. It’s how many data fetches the AI has to make before it can respond.

A specific question triggers one or two targeted queries to your data sources. A vague question might trigger five or six - the AI fishing around, checking GA4, then Search Console, then comparing date ranges it picked somewhat arbitrarily, then hedging everything because it’s not confident it looked at the right thing. You pay for all of that, and the answer you get at the end is still less useful than what a well-aimed single question would have produced.
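
To put rough numbers on it - these are illustrative figures, not real pricing or real token counts, so substitute your own provider’s rates:

```python
# Back-of-the-envelope comparison. Both constants below are assumptions
# for illustration; actual costs depend on your model and provider.
PRICE_PER_1K_INPUT_TOKENS = 0.003  # dollars - assumed
TOKENS_PER_FETCH_RESULT = 500      # assumed size of one query's returned data

precise_tokens = 2 * TOKENS_PER_FETCH_RESULT  # two targeted fetches
vague_tokens = 6 * TOKENS_PER_FETCH_RESULT    # six exploratory fetches

print(f"Precise question: ~${precise_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS:.4f}")
print(f"Vague question:   ~${vague_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS:.4f}")
# Fractions of a cent either way - but the vague question costs three
# times as much, and the gap compounds across every conversation.
```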

This isn’t a criticism of how BYOK tools work - it’s just useful to understand. With a proxied tool where costs are hidden, you’d never notice. With your own API key, you might occasionally wonder why a conversation used more tokens than expected. Usually it’s because the questions were underspecified.

What makes a question hard to answer

There are a few patterns that consistently produce poor results, and they’re all easy to fix once you recognise them.

No time frame. “My traffic dropped” doesn’t tell the AI when, for how long, or compared to what. “Traffic dropped” is not a data query - it’s a feeling. The AI has to invent a time frame, which means it might be looking at a period you don’t care about, or comparing to a baseline that doesn’t match your mental model.

No metric. “How are things going?” could mean sessions, rankings, conversions, ad spend, bounce rate, or all of the above. The AI will usually try to give a general overview, which ends up being a lot of words that don’t quite answer what you were wondering.

No comparison point. “Is my traffic good?” has no answer without knowing what good looks like for your site, your industry, and your history. The AI can tell you what your traffic is. It can’t tell you whether that’s satisfactory without either knowing your benchmarks or making up a proxy.

Asking for explanation without supplying context. “Why did traffic drop?” is a hard question if the AI doesn’t know that you launched a site redesign last week, or that you paused a Google Ads campaign, or that there was a Google core update. It will look for correlations in the data it can see - and it might find one - but it’s working without half the picture.

This is where annotations make a real difference. When you record events as they happen - a campaign launch, a site deployment, a known algorithm update - the AI has that context available when it’s answering questions about those periods. Instead of speculating about what might explain a change, it can check what you already noted. “Why did traffic drop on the 14th?” becomes a much cheaper question to answer if there’s already an annotation saying the GA4 tracking tag was broken during a site update. We’ll come back to this in more detail below.

A useful test: if you couldn’t answer the question from memory without opening several reports, the AI can’t either. It can answer faster and check more sources simultaneously, but it still needs to know what it’s looking for.

The four things that make a question answerable

Not every question needs all of these - sometimes two or three are enough. But these are the ingredients:

A metric. What are you actually measuring? Sessions, clicks, impressions, conversions, cost-per-click, rankings, bounce rate. Name it specifically.

A time frame with a comparison. Not just “last month” but “last month compared to the month before” or “the last 28 days vs the same 28 days last year.” The comparison is often the most important part - a number without context is just a number.

A scope. Site-wide? A specific page or section? A specific channel (organic, paid, email)? A specific campaign? A specific country or device type? Narrowing the scope often means a faster, more focused answer.

A hypothesis. This is optional, but it’s the one that makes the biggest difference. If you have a hunch about what happened - “I think rankings dropped on our product pages” or “I wonder if the campaign we paused is affecting organic impressions” - say it. The AI can check whether your hypothesis holds up far more efficiently than it can scan everything looking for something interesting. And even if you’re wrong, ruling out a plausible explanation is useful.
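
If it helps to picture it, here’s the structure the AI has to assemble from whatever you type. The field names are invented for illustration - they aren’t any real tool’s schema:

```python
from dataclasses import dataclass

# A mental model of what the AI must infer from your question.
# Field names are invented for illustration, not a real schema.
@dataclass
class AnalyticsQuestion:
    metric: str                    # "organic sessions", "cost-per-conversion"
    timeframe: str                 # "last 14 days"
    compare_to: str                # "previous 14 days", "same period last year"
    scope: str = "site-wide"       # page, channel, campaign, country, device
    hypothesis: str | None = None  # optional, but the biggest lever

# "I wonder if the campaign we paused is affecting organic impressions"
question = AnalyticsQuestion(
    metric="impressions",
    timeframe="last 14 days",
    compare_to="previous 14 days",
    scope="organic search",
    hypothesis="the paused ads campaign reduced branded impressions",
)
# Every field left blank is something the AI has to guess - and pay to check.
```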

Better questions in practice

Here are some common vague questions and what they look like when they have enough to work with. These aren’t scripts - they’re just examples of the kind of specificity that produces a useful answer.

Instead of: “How is my traffic doing?” Try: “How did total organic sessions change last month compared to the month before? Were there any pages that bucked the trend?”

Instead of: “Why did traffic drop?” Try: “Organic traffic looks down compared to two weeks ago. Did anything change in Search Console rankings around that time, particularly for our blog posts?”

Instead of: “Are my Google Ads working?” Try: “What was the cost-per-conversion for Google Ads last week, and how does that compare to the previous four weeks?”

Instead of: “What should I look at?” Try: “I’ve got a client meeting on Monday about Q1 performance. What were the biggest organic wins and losses in Q1 compared to Q4?”

Instead of: “How is my SEO?” Try: “Have our top ten ranking keywords changed in the last 30 days? Are we losing ground on any high-impression queries?”

The pattern is the same each time: name what you’re measuring, give it a time frame with something to compare against, and say where you want it to look.

Follow-up questions are often better than one big question

People tend to approach AI tools as if they’re a search engine - you type your question, you get your answer, conversation over. But the best analytics conversations work more like talking to a colleague. You start somewhere specific, see what comes back, then drill down.

This works well for a few reasons. A narrow opening question is cheap and fast. The answer usually tells you something, even if it’s just confirming there’s nothing interesting there. And each follow-up builds on a narrowing context - instead of the AI searching everything, it’s refining what it already found.

An example of how this might go:

“Which pages lost the most organic traffic last month compared to the month before?”

That gives you something to work with. Then:

“Were those drops across all devices or mainly mobile?”

Then:

“Did any of those pages have annotations - site changes, algorithm updates, anything around that period?”

Each question is small. The conversation builds to a conclusion that would have taken much longer to reach with a single sprawling question - and would have cost more to get there.

The same logic applies when you’re not sure what to look at. “Did anything notable happen with organic traffic in the last two weeks?” is a better opening than “give me a full performance overview.” You’ll get to the overview if you need it, but you might find what you were looking for faster by letting the AI check one thing at a time.

Annotations: the context layer that makes everything easier

There’s a category of question that seems simple but forces the AI to work much harder than it needs to: anything that involves correlating data changes with real-world events.

“Why did traffic spike on the 22nd?” is an easy question if the AI already knows you sent a newsletter to 15,000 subscribers that day. Without that context, it has to go looking - checking GA4 traffic sources, cross-referencing Search Console, scanning for algorithm updates, trying to work out what was different about that specific date. More tool calls, more speculation, less confidence in the answer.

Annotations solve this at the source. When you record events as they happen - campaign launches, site deployments, tracking changes, algorithm updates, anything that might affect your data - you’re building a layer of context that the AI can draw on for every future question about those periods. Not as a visual marker on a chart, but as genuine information that shapes how it interprets the data.

The things worth annotating aren’t just the dramatic ones. A campaign launch that performed exactly as expected. A redesign that had no measurable effect. Pausing ads for a week while a promotion ended. These feel unremarkable at the time, but three months later when you’re trying to understand a data shape you don’t recognise, knowing what was happening matters. Without annotations, you’re relying on memory - and nobody reliably remembers what was deployed on which date two quarters ago.

In AI Data Stream, annotations are a connected data source like any other - the AI queries them as a tool when it judges the question warrants it. Ask about a traffic change over a specific period and it will likely check whether you’ve noted anything relevant around those dates, the same way it might cross-reference Search Console or check for algorithm updates. The more you’ve recorded over time, the more likely it is to find something concrete rather than having to speculate from the numbers alone.
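
For the curious, a tool exposed to a language model generally looks something like the sketch below. This is an assumed shape for illustration - not AI Data Stream’s actual schema, just the general pattern LLM tool definitions follow:

```python
# An assumed shape for an annotations tool - illustrative only, not
# AI Data Stream's real schema.
search_annotations_tool = {
    "name": "search_annotations",
    "description": "Find recorded events (campaign launches, deployments, "
                   "algorithm updates) overlapping a date range.",
    "parameters": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string", "format": "date"},
            "end_date": {"type": "string", "format": "date"},
        },
        "required": ["start_date", "end_date"],
    },
}

# Ask "why did traffic drop on the 14th?" and the model can call this
# alongside its GA4 and Search Console tools. A hit might look like:
example_hit = {
    "date": "2025-03-14",
    "title": "Site update deployed",
    "note": "GA4 tracking tag broken until the evening hotfix",
}
```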

If you’re starting fresh, the most valuable annotations to add are your own events - campaign launches, site changes, tracking updates, anything you know happened. Two categories of annotation are handled for you automatically. The first comes directly from Google’s own incident feed: algorithm updates and search infrastructure incidents are synced automatically throughout the day, with no manual work required. The second is a layer of SEO context - when major SEO publications analyse what a particular update affected and why, that analysis gets published as a global annotation that users can opt in or out of. Between those two layers and your own event history, most “why did this change?” questions have somewhere concrete to start.

System prompts: teaching the AI how you work

Everything above is about how you phrase individual questions. System prompts are about something slightly different: defining how the AI behaves across all your conversations, so you don’t have to repeat yourself.

A system prompt is a set of instructions that sits behind every conversation - your standing brief to the AI about your priorities, your preferred way of working, and any shortcuts you want to establish. You set it once and it applies every time.

The simplest use is setting a default behaviour. If you always want comparisons to run over the same period, you can say so. If you want answers to lead with the headline finding rather than methodology, say that. If there’s a specific page or campaign you consider the benchmark for your site’s performance, make that part of the prompt. The AI will work within those parameters without you needing to restate them each time.

You can also use system prompts to create shortcuts for more complex workflows. Something like “when I ask what happened around an event, always compare the two weeks before and two weeks after” turns a multi-step analytical process into a single casual question. The AI knows what you mean.
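
Put together, a standing brief might read something like this. The wording is invented for illustration - it isn’t one of the built-in starters, so adapt it to your own site and priorities:

```python
# An invented example system prompt - not one of the built-in starters.
SYSTEM_PROMPT = """\
You are analysing data for example.com, a B2B SaaS site.
- Default comparison: last 28 days vs the previous 28 days.
- Lead with the headline finding; keep methodology to one sentence.
- Treat /pricing as our benchmark page and flag significant changes to it.
- When I ask what happened around an event, compare the two weeks
  before and after it.
"""
```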

AI Data Stream includes a set of starter system prompts built around common roles - data analyst, SEO specialist, marketing manager, content strategist, ecommerce analyst, and an executive summary mode for high-level overviews. You can import any of these as-is, edit them to fit your actual situation, or ignore them entirely and write your own from scratch. They’re a starting point, not a prescription.

System prompts are set at the team level and shared across properties, so if you’re managing multiple sites or working with others, everyone operates with the same baseline behaviour.

The practical effect is that a well-written system prompt shifts some of the effort from the question to the setup. Instead of remembering to specify your preferred date comparison format every time, you define it once. Instead of explaining your reporting priorities every session, they’re already there. It’s the difference between briefing a new contractor every Monday and working with someone who already knows how you operate.

What the AI is and isn’t good at

AI analytics tools are fast at pulling data together and checking correlations across sources. The manual work they replace - opening four tools, setting matching date ranges, mentally comparing numbers, checking if a timing correlation is meaningful - genuinely does take time, and they do it in seconds.

What they’re less good at is knowing your business. The AI doesn’t know that the page which just lost 40% of its traffic was already scheduled for a rewrite. It doesn’t know that the conversion drop is explained by a pricing change your sales team made last week. It doesn’t know that the spike in traffic from Germany is because you got a mention in a newsletter you didn’t realise went out.

This is why the combination of specific questions and good annotations matters. The AI brings the data; you bring the business context. When those two things are working together, the answers are genuinely useful. When you’re asking vague questions with no contextual annotations, you’re asking the AI to speculate - and it will, but the answers will be proportionally uncertain.

Think of it as working with a very fast analyst who has access to all your data but doesn’t know your history. The more you tell them about what matters and what’s been happening, the better the analysis they can do.

Practical summary

These are the habits that make a consistent difference:

  • Include a time frame and a comparison point in every question that involves change (up, down, better, worse)
  • Name the metric you care about rather than asking for a general health check
  • Say where you want it to look - site-wide, a specific channel, a specific page
  • State your hypothesis if you have one - the AI can confirm or rule it out faster than it can find an answer from scratch
  • Start specific and drill down with follow-ups rather than asking everything at once
  • Add annotations when things happen - campaigns, site changes, algorithm updates - so future questions have context to draw on
  • Use a system prompt to define standing preferences and shortcuts, so you don’t have to repeat context every session

None of this is complicated. It’s the same instinct you’d use when asking a colleague who doesn’t know the background: give them enough to work with, and you get a straight answer.


Ready to ask better questions about your data? Try AI Data Stream free →
