Last updated: May 2026
Every number Astiva reports is defined, computed, and refreshed on a published cadence. This page documents the full pipeline — query sampling, mention extraction, brand normalization, sentiment scoring, and freshness — so that every claim in the product is traceable back to a method. Nothing is inferred, rounded, or smoothed.
AISO (AI Search Optimization) is the practice of monitoring, analyzing, and improving how a brand appears in AI-generated answers across platforms like ChatGPT, Claude, Gemini, and Perplexity. Unlike traditional SEO, where rankings are observable and reproducible, AI platform responses are probabilistic, personalized, and vary by query phrasing, region, and model version. Rigorous, daily measurement is the only way to build a reliable picture of AI brand presence.
Each metric has a precise definition, a reproducible compute formula, and a fixed set of reporting windows. All 7 metrics are reported per platform and aggregated across platforms at 24-hour, 7-day, and 30-day windows.
Every tracked prompt passes through the same five pipeline steps every day, across every AI platform included in the customer's plan. No step is skipped, sampled, or cached across sessions.
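The five steps can be sketched as a single function over each prompt's response. This is a minimal illustration, not Astiva's implementation: the alias table, helper names, and the naive mention/sentiment logic are all assumptions (step 1, query sampling, happens upstream, so the platform response is passed in here).

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical alias table: surface forms -> canonical brand entity.
BRAND_ALIASES = {"astiva": "Astiva", "astiva ai": "Astiva"}

@dataclass
class PromptResult:
    prompt: str
    platform: str
    response: str
    mentions: list[str]   # extracted surface forms
    entities: list[str]   # normalized brand entities
    sentiment: float      # score in [-1, 1]
    fetched_at: str       # freshness timestamp

def extract_mentions(text: str) -> list[str]:
    # Naive stand-in: any known alias appearing (case-insensitively) counts.
    lower = text.lower()
    return [alias for alias in BRAND_ALIASES if alias in lower]

def normalize(mention: str) -> str:
    # Resolve a surface form to its canonical brand entity.
    return BRAND_ALIASES[mention.lower()]

def score_sentiment(text: str) -> float:
    # Placeholder lexicon score; a real system uses a trained classifier.
    pos, neg = text.lower().count("great"), text.lower().count("poor")
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def run_pipeline(prompt: str, platform: str, response: str) -> PromptResult:
    mentions = extract_mentions(response)                 # 2. mention extraction
    entities = sorted({normalize(m) for m in mentions})   # 3. brand normalization
    return PromptResult(
        prompt, platform, response, mentions, entities,
        sentiment=score_sentiment(response),              # 4. sentiment scoring
        fetched_at=datetime.now(timezone.utc).isoformat(),  # 5. freshness stamp
    )
```

Running every tracked prompt through this same path daily, with no caching between runs, is what keeps each day's data point an independent observation.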
Accuracy at Astiva refers specifically to brand normalization accuracy — the rate at which the normalization engine correctly resolves a brand mention in an AI response to the intended brand entity.
It is measured against a human-reviewed evaluation set sampled across industries and naming patterns. The evaluation set is versioned, and the current accuracy number on the product corresponds to the most recent evaluation run. When the methodology, model, or evaluation set changes, the date on this page changes with it.
Accuracy does not refer to sentiment classification, position estimation, or any aggregate metric — those have their own error bounds, documented per metric above.
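The accuracy computation itself is simple: the fraction of sampled mentions the engine resolved to the human-labeled entity. The sketch below assumes parallel lists of predicted and gold entity IDs; the real evaluation set's format is not public.

```python
def normalization_accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of mentions resolved to the human-labeled brand entity.

    `predictions` and `gold` are parallel lists of entity IDs for the same
    sampled mentions (hypothetical format; the actual eval set is versioned).
    """
    if len(predictions) != len(gold):
        raise ValueError("prediction and gold lists must be parallel")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# e.g. 4 of 5 sampled mentions resolved correctly:
acc = normalization_accuracy(
    ["astiva", "acme", "astiva", "globex", "acme"],
    ["astiva", "acme", "astiva", "globex", "initech"],
)
# acc == 0.8
```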
Metrics are computed independently per platform first, then aggregated. Aggregation weights are proportional to the number of prompts run per platform in the window, a rule applied consistently across all platforms in a given customer plan. This means a platform with twice the prompt volume carries twice the weight in aggregated metrics. Platform-level breakdowns are always available alongside aggregated numbers so customers can audit the composition of any aggregate figure.
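Prompt-volume weighting reduces to a weighted mean. The numbers below are illustrative only:

```python
def aggregate(metric_by_platform: dict[str, float],
              prompts_by_platform: dict[str, int]) -> float:
    """Prompt-volume-weighted mean of a per-platform metric."""
    total = sum(prompts_by_platform.values())
    return sum(
        metric_by_platform[p] * prompts_by_platform[p] / total
        for p in metric_by_platform
    )

sov = {"chatgpt": 0.40, "perplexity": 0.20}
volume = {"chatgpt": 200, "perplexity": 100}  # chatgpt carries 2x the weight
aggregate(sov, volume)  # (0.40*200 + 0.20*100) / 300 = 0.333...
```

Note that the aggregate (0.333) sits closer to the higher-volume platform's score than a simple average (0.30) would, which is exactly what the platform-level breakdown lets customers verify.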
AI platforms cache responses to recently seen queries. Running the same prompt repeatedly at short intervals increases the probability of receiving a cached response rather than a fresh inference. Astiva rotates prompts within a query pool for each tracked topic, ensuring that each daily run receives a fresh model inference. Query rotation methodology and pool sizes are available on request for Enterprise customers subject to NDA.
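A deterministic round-robin over the topic's pool is the simplest form such rotation could take. This is purely a sketch; Astiva's actual rotation scheme and pool sizes are not public, and the example prompts are invented.

```python
def daily_prompt(pool: list[str], day_index: int) -> str:
    """Pick the day's prompt from a topic's query pool via round-robin.

    Rotating phrasings keeps consecutive runs from hitting the same
    platform-side cache entry (assumed behavior, not a documented scheme).
    """
    return pool[day_index % len(pool)]

pool = [
    "best AI search optimization tools",
    "top AISO platforms for B2B brands",
    "how do I track my brand in ChatGPT answers",
]
daily_prompt(pool, 3)  # day 3 wraps back to the first prompt
```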
Any material change to a metric definition, formula, normalization model, or evaluation set is documented in the methodology change log with the effective date. Customers with historical data spanning a methodology change receive an annotation in their dashboard on the date of the change so that trend comparisons account for the definitional shift rather than attributing it to a real visibility change.
Astiva fires real buyer-intent queries across all tracked AI platforms every day and parses each response for brand mentions, position, sentiment, and the competitive set named. Results are scored against 7 AISO metrics and reported at 24h, 7d, and 30d windows per platform.
Tracked prompts run on a daily automated schedule. Dashboard data refreshes within a 24-hour rolling window. Alerts are delivered within 24 hours when visibility, sentiment, or competitor presence shifts meaningfully beyond configured thresholds.
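A threshold-based alert check can be reduced to a single comparison. Whether Astiva compares absolute or relative change is not documented, so the absolute-change form below is an assumption:

```python
def should_alert(previous: float, current: float, threshold: float) -> bool:
    """Fire an alert when a metric moves beyond the configured threshold.

    Absolute-change comparison is assumed; the configured threshold is
    per-metric and per-customer in this sketch.
    """
    return abs(current - previous) > threshold

should_alert(0.42, 0.35, threshold=0.05)  # a 7-point SoV drop exceeds a 5-point threshold
```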
Share of Voice = brand mentions / (brand mentions + competitor mentions). Computed over the same prompt set that covers both brand and all tracked competitors in the same run, ensuring fair comparison on identical query surfaces. Available at 24h, 7d, and 30d windows per platform.
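The formula above translates directly to code. Mention counts here are invented for illustration:

```python
def share_of_voice(brand_mentions: int, competitor_mentions: int) -> float:
    """Share of Voice = brand / (brand + competitor), computed over the
    same prompt set that covers brand and all tracked competitors."""
    total = brand_mentions + competitor_mentions
    return brand_mentions / total if total else 0.0

share_of_voice(34, 66)  # 34 brand mentions vs 66 competitor mentions -> 0.34
```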
Sentiment Volatility is the standard deviation of daily sentiment scores within a rolling window. A rising volatility number with no change in average sentiment is an early signal that perception is polarizing across AI platforms — often a leading indicator of a reputational shift before the average score moves. PR and brand teams use Sentiment Volatility as their primary early-warning metric for emerging narrative issues.
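The polarization signal is easy to see numerically: two windows with the same mean sentiment can have very different volatility. Whether Astiva uses population or sample standard deviation is not stated; population standard deviation is assumed here.

```python
from statistics import pstdev

def sentiment_volatility(daily_scores: list[float]) -> float:
    """Standard deviation of daily sentiment within a rolling window
    (population form assumed)."""
    return pstdev(daily_scores)

stable    = [0.50, 0.52, 0.48, 0.51, 0.49]  # mean 0.50, tight spread
polarized = [0.90, 0.10, 0.85, 0.15, 0.50]  # mean 0.50, wide spread
```

Both windows average 0.50, so an average-sentiment chart looks flat, but the second window's volatility is an order of magnitude higher — the early-warning signal described above.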
Start a free 14-day trial and track your brand across 10 AI platforms using the same pipeline documented above. Every number in your dashboard maps back to a formula on this page. Cancel before day 14 and pay nothing.
Start Free Trial | See the Product | Free AI Visibility Analysis | AI Search Glossary