Most documentation teams face a critical challenge: they rarely test systematically whether their documents work for different types of users. Without dedicated user research teams focused on documentation, teams are left guessing about effectiveness.
An innovative solution
Casey (CT) Smith of Payabli has developed an AI-powered documentation testing tool called reader-simulator. This tool simulates different user personas navigating through documents to identify navigation issues and measure success rates.
How different users read documentation
The reader-simulator recognizes that different users don’t just prefer different content: they consume it in fundamentally different ways. The tool simulates four distinct personas:
- Confused beginner: Rapidly cycles through documents, trying to find their bearings and understand basic concepts.
- Efficient developer: Jumps directly to API references and uses Ctrl+F to find specific information quickly.
- Methodical learner: Reads documentation from start to finish, building understanding sequentially.
- Desperate debugger: Searches frantically for error messages and immediate solutions to blocking problems.
The tool models how each persona navigates and reads:
- Beginners receive progressive disclosure: previews first, with full content revealed only when needed
- Experts get keyword extraction that simulates real-world Ctrl+F behaviour
- Methodical learners always receive complete content to support their linear approach
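These per-persona reading modes can be sketched in a few lines of Python. This is a hypothetical illustration, not the tool's actual code; the persona names, preview length, and keyword list are assumptions for the example.

```python
def present_content(persona: str, page_text: str, query: str = "") -> str:
    """Return the slice of a page that a simulated persona would read.

    Hypothetical sketch: persona names and thresholds are illustrative.
    """
    if persona == "confused_beginner":
        # Progressive disclosure: show only a short preview first.
        return page_text[:300]
    if persona == "efficient_developer":
        # Simulate Ctrl+F: keep only lines containing the search term.
        matches = [line for line in page_text.splitlines()
                   if query.lower() in line.lower()]
        return "\n".join(matches)
    if persona == "desperate_debugger":
        # Jump straight to lines that look like errors or fixes.
        keywords = ("error", "fix", "solution")
        matches = [line for line in page_text.splitlines()
                   if any(k in line.lower() for k in keywords)]
        return "\n".join(matches)
    # Methodical learner: always read the full page, start to finish.
    return page_text
```

The design choice worth noting is that the persona changes *what the simulated reader sees*, not just what it prefers, which is how the tool can surface content that only works for one reading style.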
Key features
Reader-simulator includes several sophisticated capabilities:
- Link prioritisation: The tool weights navigation choices based on persona preferences. Beginners gravitate toward tutorials, while experts favour API references.
- Success metrics: After each session, the tool evaluates whether the content format matched the persona's preferences and whether users successfully completed their tasks.
- Actionable insights: When users fail to complete tasks, the tool provides specific, actionable recommendations for improving documentation structure and content.
- Cross-site compatibility: Its configuration allows the tool to work across different documentation platforms.
- Cost control: It tracks token use to help teams manage Claude API costs effectively.
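The link-prioritisation idea can be sketched as a weighted random choice over a page's outgoing links. This is an illustrative assumption about how such weighting might work; the link categories and weight values are invented for the example, not taken from reader-simulator.

```python
import random

# Hypothetical persona weights per link category (illustrative values).
LINK_WEIGHTS = {
    "confused_beginner": {"tutorial": 5, "concept": 3, "api_reference": 1},
    "efficient_developer": {"api_reference": 5, "tutorial": 1, "concept": 1},
}

def choose_link(persona: str, links: list[dict], rng: random.Random) -> dict:
    """Pick the next page to visit, weighted by persona preference.

    Unknown categories default to weight 1 so the persona can still
    wander, just less often.
    """
    weights = LINK_WEIGHTS.get(persona, {})
    scores = [weights.get(link["category"], 1) for link in links]
    return rng.choices(links, weights=scores, k=1)[0]
```

Keeping the choice probabilistic rather than greedy matters: real users sometimes click the "wrong" link, and a simulator that always takes the best path would never surface the navigation dead ends the tool is meant to find.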
Testing alternative approaches
CT built the entire tool through conversational coding with Claude. To explore whether this approach could be replicated on other AI platforms, we conducted experiments with different tools:
The ChatGPT Agent mode implementation
We first tested whether ChatGPT's Agent Mode could produce similar results. The experiment succeeded: ChatGPT generated insights comparable to those of the original Claude implementation.
Claude’s results:

ChatGPT’s results:
The no-code app implementation
We also investigated whether a reader simulator could be built using no-code app platforms. These platforms allow users to create applications through text prompts rather than writing code, offering a more polished, application-like interface compared to chatbot interactions.
After several iterations, we successfully replicated the functionality of both the original Claude version and our ChatGPT Agent implementation. The no-code version provides a more visually appealing user experience while maintaining the core testing capabilities. The approach also offers some extensibility, such as incorporating a back-end database for storing historical results and different personas.
Impersonaid
Fabrizio Ferri commented on Reddit that he had developed Impersonaid, a similar system, some months ago. It lets you define custom personas following a schema. We recreated this as an app. It provides a more emotional pseudo-user response.
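The source doesn't publish Impersonaid's schema, but a custom-persona schema of this kind might look like the following minimal sketch; every field name here is a hypothetical assumption, not Impersonaid's actual format.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """Hypothetical persona schema (field names are illustrative)."""
    name: str
    role: str                       # e.g. "on-call engineer"
    expertise: str                  # "beginner" | "intermediate" | "expert"
    goal: str                       # what the reader is trying to accomplish
    emotional_state: str = "calm"   # drives the tone of the simulated response
    reading_style: str = "linear"   # "linear" | "scan" | "search"

# Example custom persona built from the schema.
debugger = Persona(
    name="Desperate debugger",
    role="on-call engineer",
    expertise="intermediate",
    goal="fix a production error before the next deploy",
    emotional_state="stressed",
    reading_style="search",
)
```

Capturing an `emotional_state` alongside goals and expertise is what would let such a system produce the "more emotional pseudo-user response" described above.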
Summary
Reader simulators represent a new kind of tool for documentation quality assurance. Whichever AI tool you prefer or subscribe to, that shouldn't stop you from creating your own version.
If you want to learn more
If you’re interested in leveraging AI-generated applications for technical documentation, explore our Managing and Mastering Documentation Projects with AI course.
Free review of your Help
If you’re an organisation with user guides or an online Help site, and you’re not certain they meet the needs of your target audience, contact us. Send us a few documents or a link and we’ll analyse them using the app and send you the results – completely free, with no obligation.
