Introducing the Monitoring Dashboard
Reports give you a snapshot. But AI search doesn't stand still. Models update, competitors adjust their content, and your visibility shifts with it. Today we're launching the Sill Monitoring Dashboard: continuous tracking of how AI engines perceive your brand, updated daily.
A Single Snapshot Isn't Enough
AI search results change constantly. A single report can't capture how your brand's visibility shifts over time as models update and competitors adjust their strategies.
When we launched Reports, we gave brands a way to measure how AI platforms see them. But AI search results are volatile. Research from Superlines found that AI Overview content changes roughly 70% of the time for identical queries, and when the answer updates, nearly half of cited sources get replaced. Only about 30% of brands persist across back-to-back AI responses for the same question.
At the same time, the shift to AI search is accelerating. An Ahrefs study of 300,000 keywords found that AI Overviews now correlate with a 58% lower click-through rate for the top-ranking page. The organic traffic you used to count on is eroding, and where it goes matters more than ever.
That's why a one-time report, no matter how thorough, can't keep up. You need ongoing measurement. You need to know when a model update tanks your visibility, or when a competitor starts outranking you on a platform where you were dominant last week.
Track Your Share of Voice Over Time
The dashboard tracks your brand's AI Share of Voice daily, showing how often AI platforms recommend you compared to competitors across every major AI engine.
At the center of the dashboard is Share of Voice: a daily measurement of how often AI platforms recommend your brand when buyers ask purchase-intent questions in your category. This isn't a vanity metric. According to G2's 2025 Buyer Behavior Report, GenAI chatbots are now the #1 source influencing vendor shortlists, ahead of software review sites. In a survey of 1,169 B2B decision-makers, 29% said they start their research with LLMs more often than with Google.
The numbers only grow from there. A Responsive study found GenAI has overtaken traditional search for 25% of B2B buyers. Among tech buyers specifically, 80% now use GenAI as much as or more than search engines for vendor research.
The dashboard shows your brand's SOV trend line alongside your top competitors, updated daily. You can see exactly when visibility shifts happen and correlate them with your content changes, competitor moves, or model updates.
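To make the metric concrete, here is a minimal sketch of what a daily Share of Voice calculation can look like: count each tracked brand's mentions across the day's AI responses, then express each count as a share of all tracked-brand mentions. The data and function are illustrative, not Sill's actual implementation.

```python
from collections import Counter

def share_of_voice(daily_responses, brands):
    """Count brand mentions across one day's AI responses and
    return each brand's share of all tracked-brand mentions."""
    mentions = Counter()
    for response in daily_responses:
        for brand in brands:
            if brand.lower() in response.lower():
                mentions[brand] += 1
    total = sum(mentions.values()) or 1  # avoid division by zero on quiet days
    return {brand: mentions[brand] / total for brand in brands}

# Three hypothetical responses to purchase-intent prompts
responses = [
    "For teams on a budget, Acme and BetaCo are solid picks.",
    "Acme is the most commonly recommended option.",
    "BetaCo and GammaSoft both handle this use case well.",
]
sov = share_of_voice(responses, ["Acme", "BetaCo", "GammaSoft"])
# Acme: 2 of 5 mentions, BetaCo: 2 of 5, GammaSoft: 1 of 5
```

Run daily and stored as a time series, numbers like these become the trend line you see on the dashboard.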
See Where You Rank
The competitive rankings table shows your position against competitors across three dimensions: visibility score, sentiment score, and position score.
Knowing your own score isn't enough. You need context. The competitor rankings table shows you exactly where you stand relative to the brands you're competing against, ranked across three dimensions:
- Visibility: How often each brand gets recommended overall
- Sentiment: Whether AI models frame each brand positively, neutrally, or negatively
- Position: When a brand is mentioned, is it the primary recommendation or a secondary afterthought?
A brand might have high visibility but poor sentiment, meaning it gets mentioned often but with caveats. Another might rarely appear but always as the top recommendation. This three-dimensional view gives you the full picture of competitive positioning in AI search.
| # | Brand | Visibility | Sentiment | Position |
|---|-------|-----------|-----------|----------|
| 1 | | 37% | 74 | 82 |
| 2 | | 34% | 71 | 68 |
| 3 | | 32% | 63 | 72 |
| 4 | | 22% | 69 | 59 |
| 5 | | 15% | 55 | 49 |
| 6 | | 12% | 61 | 44 |
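As a sketch of how rows like these are ordered, the snippet below ranks brands primarily by visibility, breaking ties with position score. The brand names, numbers, and tie-breaking rule are hypothetical examples, not Sill's actual ranking logic.

```python
brands = [
    {"name": "Brand C", "visibility": 0.32, "sentiment": 63, "position": 72},
    {"name": "Brand A", "visibility": 0.37, "sentiment": 74, "position": 82},
    {"name": "Brand B", "visibility": 0.34, "sentiment": 71, "position": 68},
]

# Sort descending: most-visible brand first, position score as tiebreaker
ranked = sorted(brands, key=lambda b: (b["visibility"], b["position"]), reverse=True)
for rank, b in enumerate(ranked, start=1):
    print(rank, b["name"], b["visibility"], b["sentiment"], b["position"])
```

The sentiment column stays independent of rank on purpose: a brand can sit at the top of the table and still carry a sentiment problem worth fixing.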
Understand How AI Talks About You
Sentiment analysis breaks down how AI platforms frame your brand, tracking keyword associations and the distribution of positive, neutral, and negative mentions.
Being mentioned by AI isn't always a good thing. The way AI frames your brand matters as much as whether it mentions you at all. As Conductor's research puts it, brand sentiment determines whether AI systems recommend, ignore, or actively discourage consideration of your solution. Sentiment acts as an early warning system, catching negative portrayals before a narrative takes hold.
The dashboard breaks this down in two ways. The sentiment distribution chart shows the overall ratio of positive, neutral, and negative mentions across all your monitored prompts. The keyword cloud surfaces the specific words AI platforms associate with your brand: are they saying "reliable" and "innovative", or "expensive" and "complicated"?
Together, these give you a clear picture of how AI platforms are framing your brand to potential buyers.
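Conceptually, both views come from the same data: a stream of brand mentions, each tagged with a sentiment label and associated keywords. Here is a minimal sketch under that assumption; the mention records and labels are invented for illustration.

```python
from collections import Counter

# Hypothetical labeled mentions pulled from monitored AI responses
mentions = [
    {"sentiment": "positive", "keywords": ["reliable", "innovative"]},
    {"sentiment": "neutral",  "keywords": ["established"]},
    {"sentiment": "negative", "keywords": ["expensive"]},
    {"sentiment": "positive", "keywords": ["reliable"]},
]

# Sentiment distribution chart: ratio of positive / neutral / negative
distribution = Counter(m["sentiment"] for m in mentions)

# Keyword cloud: which words AI platforms attach to the brand most often
keywords = Counter(kw for m in mentions for kw in m["keywords"])
# "reliable" appears twice, so it dominates the cloud
```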
Map Your Semantic Position
The Semantic Map plots your brand and competitors on custom axes, showing how AI models perceive each brand's positioning on dimensions you define.
The Harvard Business Review introduced the concept of "Share of Model" in June 2025 to describe how LLMs build internal representations of brands. Their finding: 58% of consumers now turn to GenAI tools for product recommendations, up from 25% in 2023. The brands that understand how these models position them will have a real advantage.
The Semantic Map takes this concept and makes it actionable. You define two axes that matter for your market, like "Premium vs. Budget" and "Enterprise vs. SMB", and the map plots where AI models place your brand relative to competitors. You can use our built-in presets or create your own custom axes.
This is not survey data or assumed positioning. It's derived from how AI models actually describe and compare brands in their responses. If you think you're positioned as premium but AI models consistently place you in the budget quadrant, that gap matters.
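The underlying idea is easy to sketch: give each brand a score on each axis, then read off its quadrant. In the toy example below, scores run from -1 to 1 and the axis labels match the "Premium vs. Budget" / "Enterprise vs. SMB" example above; the brands and scores are made up for illustration.

```python
def quadrant(x, y, x_axis=("Budget", "Premium"), y_axis=("SMB", "Enterprise")):
    """Map an (x, y) position on two custom axes to a readable quadrant label.
    Negative scores fall on the first label of each axis, positive on the second."""
    return f"{x_axis[x >= 0]} / {y_axis[y >= 0]}"

# Hypothetical axis scores derived from how AI models describe each brand
positions = {"Acme": (0.6, 0.7), "BetaCo": (-0.4, -0.2)}
for brand, (x, y) in positions.items():
    print(brand, "->", quadrant(x, y))
# Acme lands in "Premium / Enterprise", BetaCo in "Budget / SMB"
```

The interesting cases are the mismatches: a brand whose own messaging says "Premium / Enterprise" but whose derived scores keep landing it elsewhere.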
Define What You Monitor
The Library lets you define your monitoring scope: products, buyer personas, geographic locations, and the specific prompts you want to track daily.
Every brand's market is different. A B2B SaaS company selling to enterprise CTOs needs to monitor different prompts than a D2C brand targeting budget-conscious consumers. The Library gives you full control over your monitoring scope.
You define four things:
- Products: The specific products or services you want to track, organized by topic
- Personas: Your buyer profiles, with attributes that influence how they search. This matters because adoption varies dramatically by role. Responsive found that 80% of tech buyers use GenAI at least as much as search, compared to 59% in other industries
- Locations: Geographic regions where you want to track visibility. AI recommendations vary by location, and the gap is real: 48% of U.S. buyers use GenAI for vendor discovery, compared to just 14% in other regions
- Prompts: The actual questions you want to monitor daily. You can write your own or use our AI-powered prompt generator, which creates relevant queries based on your products, personas, and locations
A guided setup wizard walks you through the process, and AI-powered discovery helps fill in the gaps. You don't need to be a GEO expert to set up meaningful monitoring.
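Structurally, a Library configuration covering those four things might look like the sketch below. The field names and values are illustrative assumptions, not Sill's actual schema.

```python
# Hypothetical monitoring scope for a B2B analytics vendor
library = {
    "products": [
        {"name": "Acme Analytics", "topic": "business intelligence"},
    ],
    "personas": [
        {"role": "Enterprise CTO", "priorities": ["security", "scalability"]},
    ],
    "locations": ["United States", "Germany"],
    "prompts": [
        "What is the best BI platform for enterprise security teams?",
        "Which analytics tools do CTOs recommend for large companies?",
    ],
}

# Each prompt is then queried daily against every monitored AI platform
daily_checks = len(library["prompts"]) * len(library["locations"])
```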
Every Platform Sees You Differently
Different AI platforms cite brands based on different source types, making platform-specific monitoring essential for a complete visibility picture.
One of the most important findings in AI visibility research is that platforms don't agree on who to recommend. A Yext analysis of 6.8 million citations across 1.6 million AI responses found striking differences in how each platform decides what to cite:
- Gemini trusts what your brand says directly: 52% of its citations come from brand-owned websites
- ChatGPT trusts what the internet agrees on: 49% of citations come from third-party sites like Yelp, TripAdvisor, and review platforms
- Perplexity trusts industry experts: it leans into specialized directories and niche sources, which make up 24% of citations for subjective queries
This is why we monitor across ChatGPT, Perplexity, Gemini, Google AI Overviews, Copilot, and Grok. A brand that dominates ChatGPT recommendations might be invisible on Perplexity, and the fix for each platform is different. Your first-party content strategy matters for Gemini; your third-party review presence matters for ChatGPT; your expert directory listings matter for Perplexity.
The dashboard breaks visibility down by platform so you can see exactly where you're strong and where you have gaps.
Built on Peer-Reviewed Research
Every metric and recommendation in Sill traces back to published GEO research from institutions like Princeton, Georgia Tech, CMU, and Berkeley.
We don't guess at what matters. The monitoring dashboard is built on the same research foundation as our reports, starting with the foundational GEO paper by Aggarwal et al. (KDD 2024, Princeton/Georgia Tech/IIT Delhi), which tested nine optimization methods across 10,000 queries and demonstrated visibility improvements of up to 40%.
The content scoring factors, sentiment analysis methods, and competitive benchmarking approaches all trace back to published research. When we score your pages against on-site and off-site factors, those factors are the ones that peer-reviewed studies from CMU, Berkeley, and Columbia have shown actually influence AI citations.
We also stay current. An SE Ranking study of 2.3 million pages found that pages updated within 2 months earn 5.0 AI citations on average, compared to 3.9 for older pages. Freshness matters in this space, and continuous monitoring is how you stay ahead of the curve.
Get Started
The monitoring dashboard is available to all Sill subscribers. Sign up, set up your library, and start tracking your AI visibility daily.
The monitoring dashboard is available today to all Sill subscribers. Sign up, walk through the setup wizard to define your products, personas, locations, and prompts, and you'll have daily visibility tracking running within minutes.
If you're not sure where to start, our AI-powered discovery tools will help you identify the right personas and generate relevant prompts based on your market. And if you want to see what your baseline looks like before committing, a free one-time snapshot is available to every new account.
Questions? Want help setting up monitoring for your brand? Reach out below.
AI visibility isn't static. Your measurement shouldn't be either.
Get Your Report
Request your first analysis today to see where you stand.
