GenAI Knowledge Readiness Quick Check
Three minutes. Twelve questions. No personal details collected.
This self-assessment shows whether your knowledge base is ready to support GenAI answers, and what to fix first.
Answer each question with Yes, Sometimes, or No. Click Show my result to see your score and the next step.
How it works
Scoring
Yes equals 2
Sometimes equals 1
No equals 0
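If you want to tally your answers outside the page, the scoring rule above is simple enough to compute directly. This is a minimal illustrative sketch, not code from the assessment tool itself; the function name and input format are assumptions.

```python
# Hypothetical scoring sketch for the quick check described above.
# Assumes 12 answers, each "yes", "sometimes", or "no".
POINTS = {"yes": 2, "sometimes": 1, "no": 0}

def score(answers):
    """Return the total score (0 to 24) for a list of 12 answers."""
    if len(answers) != 12:
        raise ValueError("expected 12 answers")
    return sum(POINTS[a.lower()] for a in answers)

# Example: answering "yes" to everything gives the maximum of 24.
print(score(["yes"] * 12))  # 24
```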
Sections
Questions 1 to 4 look at structure and atomicity.
Questions 5 to 8 look at governance and trust.
Questions 9 to 12 look at optimisation and action.
Structure and atomicity
1
Does each article solve one clear user intent rather than combining multiple topics?
Why this matters
Clear single intent articles reduce retrieval collisions and prevent blended answers.
2
Can the core answer be understood quickly near the top of the article?
Why this matters
When the key answer is near the top, GenAI extracts it cleanly and avoids filler.
3
Are requirements listed in a short bulleted list at the very top of the article?
Why this matters
Missing prerequisites cause the most common wrong outcomes, and they are avoidable.
4
Are instructions written as a numbered sequence of actions rather than a narrative paragraph?
Why this matters
Numbered steps reduce ambiguity and help GenAI return safer, repeatable guidance.
Governance and trust
5
Does every article have an owner and a next review date, with reminders enabled?
Why this matters
Ownership and review reminders prevent silent decay and broken guidance.
6
Is there a process to hide or archive articles that are overdue for review?
Why this matters
If stale articles stay visible, GenAI will surface them and trust drops quickly.
7
Do approvals check both technical accuracy and clarity for safe use?
Why this matters
GenAI amplifies whatever you publish, so content must be correct and usable.
8
Does each article include a stable source of truth link that can be referenced consistently?
Why this matters
Stable source links give traceability and help teams validate answers fast.
Optimisation and action
9
Do articles include common user terms alongside official system names in metadata or tags?
Why this matters
User language improves match rates, especially when people search by what they see. Example: tag "the blue icon" as well as "Citrix Workspace".
10
Do articles clearly state when not to use these instructions and what to do instead?
Why this matters
Clear stop conditions prevent the wrong fix being applied to the wrong scenario.
11
When the right outcome is a request or automation, does the article link to the exact catalogue item or action to take?
Why this matters
Good knowledge drives completion by linking directly to the right action or request.
12
Do you have a way to flag an article for rewrite when user feedback drops over time?
Why this matters
Feedback trends tell you what is failing in practice so you can improve the right articles first. Example: Helpfulness ratings trend down over time.
