Frequently Asked Questions
Direct answers about what scores mean, how teams should use rescans, and what the product is designed to help prevent.
Risk: Weak pages usually become visible too late.
Response: Find the blocker, fix it, then rescan with intent.
Control: Use the output to replace guesswork with a decision path.
What teams usually need clarified
Most questions come from the same place: teams want to know whether weak pages are a real operating risk, how much they should trust a score, and what to do next so that uncertainty does not compound.
What does Generative Metrics do?
Generative Metrics helps teams find and fix page-level weaknesses that make important surfaces harder for machines to interpret, trust, or reuse.
Can't I just ask ChatGPT to review my page?
ChatGPT can't do this reliably — it doesn't render your page, run programmatic validation, or produce a repeatable score you can track across fixes. A language model reviewing pasted HTML gives you a one-off opinion. Generative Metrics renders the page with a browser engine, runs programmatic schema validation against actual standards, measures seven signals consistently, and produces a baseline you can rescan after every meaningful change.
Why not use Screaming Frog, a JSON-LD validator, and manual review instead?
You can approximate individual checks that way. What manual workflows do not give you is a repeatable integrated baseline. Generative Metrics scores seven signals together in one pass, handles bot-vs-browser dual-view extraction, manages protected sites through ownership verification, and produces deterministic rescans — so the same page scores consistently and you can prove whether a fix actually worked.
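To make "approximating an individual check" concrete, here is a minimal sketch of what one such manual check might look like. It is written in Python using the requests and BeautifulSoup libraries against a placeholder URL; none of these are part of Generative Metrics, and the sketch covers only one signal. It fetches the raw, unrendered HTML and counts parseable JSON-LD blocks, with no rendering, no score, and no baseline to rescan against.

```python
# A one-off manual check: fetch a page and confirm it exposes JSON-LD
# structured data. Illustrative only; this inspects raw HTML for a single
# signal, once, and produces nothing you can track across fixes.
import json

import requests
from bs4 import BeautifulSoup


def jsonld_blocks(url: str) -> list[dict]:
    """Return every parseable JSON-LD block found in the raw HTML."""
    html = requests.get(url, timeout=10).text  # raw HTML only; nothing is rendered
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            blocks.append(json.loads(tag.string or ""))
        except json.JSONDecodeError:
            pass  # malformed JSON-LD quietly disappears from a manual check
    return blocks


if __name__ == "__main__":
    found = jsonld_blocks("https://example.com/")  # placeholder URL
    print(f"JSON-LD blocks found: {len(found)}")
```

A stack of scripts like this covers fragments of the picture; it does not give you the single repeatable score described above, or proof that a fix moved it.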
Do higher scores guarantee citations or rankings?
No. Diagnostics reduce preventable on-page risk. They do not control every external system or outcome.
Why can scores change between rescans?
Scores move when measurable inputs move. Content, structure, rendering behavior, and recency signals all affect what the next scan sees.
If content is unchanged, should scores stay stable?
That is the intent. If the meaningful inputs are unchanged, teams should be able to trust the baseline enough to use rescans as an operational check.
How should teams use this in practice?
Start with an important page, establish the baseline, fix the highest-impact blockers, then rescan before more changes pile up.
How is this different from visibility monitoring tools?
Monitoring tools measure whether your brand appeared in AI results. Generative Metrics addresses the earlier question: can AI systems actually read and extract content from the page in the first place? These are different problems at different points in the chain. Monitoring is meaningful after readiness is in place, not before.