Podcast 123: Document quality checklists

In this episode of the Cherryleaf Podcast, we look at different ways to measure and check the quality of the content we produce.

Transcript

In this episode of the podcast, we’re going to be looking at document quality, or specifically how we can check our content, our technical writing content, before it goes live.

We could call it editing and reviewing, but a common way of describing it is doc quality.

We’re going to look at different ways in which documentation can be assessed, and different forms of checklists that can be gone through to check whether the content is ready for delivery or not. This comes from some of the content that we include in a number of our e-learning courses, and also in our classroom policies and procedures course.

We’ll start by addressing the question: does the quality of the content that we produce matter that much in a world of minimum viable products? Can we have minimum viable content, where some bits just aren’t of great quality but are of acceptable quality?

Well, we can, but there are some elements, some parts or characteristics of the content, that are a minimum, that have to be there.

For example, the content is there for a purpose: it’s there to stop people getting stuck. So if the content isn’t of a quality that solves that problem, if people read it and are still getting stuck, then it’s not really fit for purpose.

And you may also be on legally tricky ground if you provide content that’s incorrect, incomplete, or otherwise deficient.

And there are two other aspects related to why we have content in the first place.

One is the perception of quality.

If you have rotten documentation, that can be an indication that you have rotten software, that you’ve bought a rotten product which is on shaky foundations.

And another reason for documentation is the calls to support, and reducing the number of calls to support.

So if your documentation is poor and doesn’t do the job, then you may be saving money in one area but spending it in another, i.e., spending it on handling the number of calls to support, or on the number of people who give up using your particular product.

OK, so we want to have a measure by which we can check our content before it goes out.

What different measures are there out there? What different criteria do they use?

Probably the most popular one comes from a book called Developing Quality Technical Information, which was published by IBM and written by various people who worked at IBM, including Gretchen Hargis.

And this book dates from 1997, and it has a checklist with a number of different measures in it.

There are three main categories, and within those three main categories there are ways of measuring them. So you have, in total, nine different ways of checking your content before it goes out.

So the first category is ease of use. Is the content easy to use? And the way of assessing whether it’s easy to use or not is by three measures. One we’ve mentioned already.

Accuracy:

Is the content accurate?

Does it contain any mistakes or errors?

Is it truthful?

Is it factual?

So the content needs to be accurate.

Another measure for checking the content is related to accuracy really, and that is completeness.

Is the information complete? Does it include all of the essential information that’s needed now?

There’s always a balance with technical writing as to how much detail you go into, but does it contain enough for people to be able to do the job, do the thing they want to do.

So that’s accuracy and completeness: two measures.

The third relates to minimalism and the ideas of John Carroll, which we’ve talked about on previous podcasts, and that is task orientation.

Is it focused and orientated to the tasks that people want to do, rather than being focused on screens or on the software’s features?

What about the capabilities that it has to enable people to do the things they want to do? The problems they want to solve?

So does the content help users complete tasks related to their work?

The second measure is whether it’s easy to understand.

So when people read the content.

Even though it’s accurate, even though it’s complete, even though it’s focused on the job, is it written in a way that people understand the information?

And there are three measures for that.

One is clarity.

Is it clear? Are there any words or sentences which are ambiguous or obscure?

And the second measure is, is it concrete?

So when you’re talking about things, do people understand what you mean by certain things?

So are you talking about concepts or abstractions that are unfamiliar to them? If you’re talking about concepts, are you providing appropriate examples for them to understand what you mean? For instance, with high, medium and low: is it clear what is meant by high, what is meant by medium, and what is meant by low?

Do you give examples of those? Do you give examples by explaining when and where you would use a particular feature, using scenarios: it’s ideal for this particular situation, this is when you’d use it. Or can you use metaphors? Can you start from what people know, and then explain this unknown thing to them by saying it’s like this, helping people understand it in that way?

So under the area of easy to understand, we’ve got clarity and we’ve got concreteness. The third one in this area is style, and you might include tone in that as well.

So the way that you write it, the style that you use. Have you used correct and appropriate writing conventions? Have you used appropriate words for the audience?

Is the tone correct for the audience, and for the situation, the context, in which it’s going to be used?

So that gives us six different measures so far.

Is it orientated to the tasks that people want to do? Is it accurate? Is it complete? Is it clear? Is it concrete? Is it stylistically good?

So that’s ease of use, and that’s ease of being understood.

The third category, with three measures under it, is findability.

Is it easy to find?

So the three measures under that are: organisation, retrievability, and visual effectiveness.

Under organisation: have you ordered and structured the information in a way that’s coherent, so it makes sense to the user?

Have you structured the information so they can find it quickly?

So the most important information is easy to find. If they search in the table of contents, if they search in the index, if they search in the search box, does the information appear, and does it appear towards the top if we’re talking about search engine results? That’s retrievability.

Is it presented in a way that lets users find the information quickly and easily?

And third is visual effectiveness, so this is the layout, the illustrations, the diagrams, the screenshots, colour type, the icons and other visual devices that we might use.

Have we used those in a way that means people can understand what’s happening, and that makes the information attractive to users?

So that’s the checklist from Gretchen Hargis and others from IBM. And it’s a good checklist of nine ways of measuring the quality of the content that you’re producing.

But there is another measure that is quite popular and this comes from the world of usability and user experience.

It was developed by Peter Morville, from the world of the web, and it’s called the User Experience Honeycomb. He created a diagram of hexagons that listed a number of different measures, and he said that each facet, each of these hexagons of user experience design, can be defined in the diagram.

He then listed the different measures by which user experience can be judged.

And these overlap with the model developed by the IBM team.

So his measures are:

  1. usable.

The system in which the product or service is delivered needs to be simple and easy to use.

And he argues, systems should be designed in a way that’s familiar and easy to understand.

And by doing this it reduces the learning curve. It means that people get going quickly; they don’t have a long list of things to learn, and the learning is as short and painless as possible.

So first measure, is it usable?

Second, is it useful?

Does the product, does the information, fulfil a need?

Because if it isn’t useful, if it doesn’t fulfil what the user needs, what they want, then it’s not really adding any value.

Is it desirable? This is the third measure.

And by that he means the visual element to it: is it attractive?

The fourth: is it findable?

Information needs to be findable and easy to navigate. If the user has a problem, they should be able to quickly find the solution.

The navigational structure should also be set up in a way that makes sense.

So we can see from those four, usable, useful, desirable and findable, that there is a lot of overlap with the easy to use, easy to find and easy to understand categories, and the measures under them, that we had with IBM.

With Peter Morville there are two more. The fifth is accessible: the product or service should be designed so that even users with disabilities can have the same user experience as others.

And the final measure is credible: products and services need to be trusted.

Now, in general, technical documentation is seen as trustworthy and believable. It is some of the most credible content that you’ll find on a website, much more so than marketing material.

But you still need to check whether your content is credible. If you’re vague, if you’re contradictory, that might damage the credibility of the content that you have written.

So those are the two most common ones that we’ve seen that are out there.

And we have developed a checklist based on those and we’ll talk about that a little bit later.

But in doing some research for this episode, I did come across another measure, by Ilene Burnstein. This is from a book called Practical Software Testing, and it’s orientated to being a way of checking software as well as documentation. Again, we’ll see common themes come across in this list.

So the first one that Ilene has is coverage and completeness.

  • Are all essential items completed?
  • Have all irrelevant items been omitted?
  • Is the technical level of each topic addressed properly for this document?
  • Is there a clear statement of goals for this document?

So a few more in addition to the other ones that we’ve seen so far, under coverage and completeness.

The second criterion: correctness.

  • Are there any incorrect items?
  • Are there any contradictions?
  • Are there any ambiguities?

The third measure: clarity and consistency.

  • Are the material and statements in the document clear?
  • Are the examples clear, useful, relevant and correct?
  • Are the diagrams, graphs and illustrations clear, correct, use the proper notation, effective, in the proper place?
  • Is the terminology clear and correct?
  • Is there a glossary of technical terms that is complete and correct?
  • Is the writing style clear?

So that there aren’t any areas of ambiguity.

So she’s got three measures there: coverage and completeness, correctness, and clarity and consistency.

The fourth one is references and aids to document comprehension. And under that, a number of questions.

  • Is there an introduction?
  • Is there a well placed table of contents?
  • Are the topics or items broken down in a manner that is easy to follow and is understandable? So we could look at that from an information mapping or from a minimalism principle and ask
    • Have they used topics and is each topic clear?
    • Is each topic self-contained? Does it only talk about one thing?
    • Is it provided with a meaningful heading so you understand what is in that particular topic?
  • Is there a bibliography that is clear, complete and correct? Well, that may not necessarily be relevant for online content, or for task-based content about using a product.
  • Is there an index that is clear, complete and correct? Again, that may not necessarily be needed these days for user documentation, but it could extend to the search engine: is that effective?
  • Is the page and figure numbering correct and consistent?

So I mentioned that we cover this on a number of our training courses, and we provide a checklist, which is a combination of different sources, primarily the IBM and the Peter Morville one.

And the checklist that we provide on the courses covers these topics.

In fact, there are a couple in addition to the ones we’ve mentioned so far.

They are

Is the content useful?

Is the content usable and complete?

Is the content credible?

Is the content findable?

Is it accessible?

Is it valuable and relevant?

Is it desirable?

Is it accurate?

Is it coherent?

Is it consistent?

Is it up to date?

Is it legally OK?

And what can be done with this list, if you’re looking at existing content and you want to prioritise the content to work on and improve, is to create a spreadsheet. For each document you can have a column against these criteria, and you can mark either yeses and noes, or ticks and crosses.

Or you can use a traffic light system of red, amber and green, to highlight the areas where content is weak.

You could even use a rating of one to five. For the documents which have the lowest scores, and which are the most important to users, you can then prioritise fixing those.
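As a rough illustration, that spreadsheet approach could be sketched in a few lines of Python. This is just a sketch: the document names, criteria and weighting formula here are all hypothetical, not part of any of the checklists discussed.

```python
# Hypothetical sketch: prioritising documents from a quality scoresheet.
# Each document is rated 1-5 against a few checklist criteria, plus an
# importance rating for how much users need that document.

CRITERIA = ["accurate", "complete", "clear", "findable", "up_to_date"]

docs = {
    "Installation guide": {"scores": [2, 3, 2, 4, 1], "importance": 5},
    "Release notes":      {"scores": [4, 4, 3, 3, 5], "importance": 2},
    "API reference":      {"scores": [3, 2, 3, 2, 3], "importance": 4},
}

def priority(doc):
    """Lower quality plus higher importance means fix it first."""
    avg_quality = sum(doc["scores"]) / len(doc["scores"])
    # Importance amplifies the gap between the current score and a perfect 5.
    return (5 - avg_quality) * doc["importance"]

# Highest priority (worst and most important) first.
ranked = sorted(docs, key=lambda name: priority(docs[name]), reverse=True)
for name in ranked:
    print(f"{name}: priority {priority(docs[name]):.1f}")
```

The idea is simply that a low quality score combined with high importance pushes a document to the top of the fix-first list; the weighting formula could be tuned to suit your own team.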

There’s another question, and that is who checks the content?

Who is doing the reviewing and that will affect the checklist?

So if you have a technical writing team, a technical publications team, or a usability team, if you have an editor, if you have a documentation manager, that complete list is a good way of checking and reviewing the content that the technical authors, the technical writers, are producing.

However, if you are writing new content, and it’s being written to first draft, we would do things differently. Because what we would do at that first draft stage is ask the subject matter experts, usually developers, to review the content, but not review it for all of the measures that we’ve discussed so far.

We would ask them to review it to check it’s accurate and that it’s complete.

And to ask them to highlight any issues or problems with what’s being created.

Now, the other factors, like whether it’s credible, whether it’s actually needed or not, whether it’s findable, whether it’s coherent, we can address those in the planning of the information design, in the information architecture. We can address those at second review or third review.

But from the subject matter experts, we want to draw on their expertise. We don’t want to overburden them with too many demands on their time.

So we would ask them just to check: is it accurate? Is it complete?

At the beginning we talked about minimum viable documentation.

It may be that you want to go over and above just the bare minimum. Again, in the research for this episode, we came across a presentation that mentioned some work by Rodrigo Back Almada, who talked about categories that help prioritise quality.

And he said you could categorise things as essential, conventional, and attractive.

So at the minimum viable product level, it may be that what we aim for is just the essentials.

That there’s just enough to achieve minimal levels of customer satisfaction.

If you didn’t have them, customers would be unhappy; so they are needed, but they don’t necessarily get noticed when they are there, because they’re just expected or assumed.

The next level up from that from essential is conventional.

These are attributes of the content, or of a product, that result in satisfaction when present and in dissatisfaction when absent.

And the more of it that’s provided, the better the customer likes it.

So you could see this as being in alignment with your competitors, providing the same level of quality and the same amount of content as they do.

What’s expected by industry standards. And the final one, the third category, is attractive quality.

This goes beyond customers’ expectations and desires. Customers remain satisfied even with the absence of those attributes, but delighted with their presence.

So we provide the minimum, we provide the industry standard, and then we provide something attractive that goes beyond what people expect.

I’ve not seen that used within a technical publications department, but it’s an interesting approach.

This is really aimed at planning for software and software quality, but I think it could also be extended into the technical publications world.

So those are the different measures out there that we’ve come across for measuring the quality of documentation: the IBM document quality checklist, Peter Morville’s User Experience Honeycomb, and Ilene Burnstein’s measure, which is primarily aimed at software and software testing but can also be applied to documentation. And there’s the combination of those that we use on our courses.

I’d be interested to know what you use, whether you use any of those, or whether you use an alternative.

You can let me know via email, info@cherryleaf.com, or on Twitter.

My handle is @EllisPratt. Another one you could use is @cherryleafltd.

So I am aware that this has ended up a bit like an ASMR list-reading recording, but I hope it hasn’t sent you to sleep or into some stupid soporific catatonic state, and that it’s been of interest to you.

If you’d like to know more about Cherryleaf and the technical writing services we provide, the recruitment services, and the training courses, they’re all on the Cherryleaf website, which is Cherryleaf.com.

Also, if you have the opportunity, it would be wonderful if you could rate us on iTunes, or Apple Podcasts as I think it might be called these days.

We have some really nice reviews so far, but it would be great to have some more because that means that Apple will suggest our podcast to other people.

So thank you for listening.

In these pretty dark and horrible times, hopefully things will improve, and until the next time bye bye.

 
