We spotted an interesting statement by the “Father of Behaviour Design”, BJ Fogg:
“For somebody to do something – whether it’s buying a car, checking an email, or doing 20 press-ups – three things must happen at once.
The person must want to do it, they must be able to, and they must be prompted to do it.
A trigger – the prompt for the action – is effective only when the person is highly motivated, or the task is very easy. If the task is hard, people end up frustrated; if they’re not motivated, they get annoyed.”
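Fogg's model is often summarised as "behaviour happens when motivation, ability and a prompt converge above an action line". As a rough illustrative sketch (the numeric scales and threshold here are our own invention, not Fogg's), the three conditions might look like this:

```python
def action_occurs(motivation: float, ability: float, prompted: bool,
                  action_line: float = 1.0) -> bool:
    """Fogg-style check: a behaviour happens only when the person is
    prompted AND motivation x ability is above the action line.
    High motivation can compensate for low ability, and vice versa.
    (Scales and threshold are illustrative assumptions.)"""
    return prompted and (motivation * ability) >= action_line

# A motivated user facing an easy task, who is prompted, acts;
# remove the prompt (or the motivation) and nothing happens.
acts = action_occurs(motivation=2.0, ability=0.9, prompted=True)
```

The point of the sketch is the AND: without the prompt, no amount of motivation or ability triggers the behaviour, which is why the next section focuses on prompting.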
If we want users to read Help text instead of calling the support line, then maybe we need to meet those three criteria.
We can assume the user is motivated to fix their problem.
We can write instructions that are clear enough to make them able to solve the problem.
Where some applications fall down is in prompting the user to read the online Help. The link to the Help text is often tucked away in the right-hand corner of the screen.
Instead, we could put some of the Help text into the User Interface or the dialog screens, and we could prompt the user to follow a link to more information. Doing this could get users to read the online Help rather than call support.
Last week, we used the Hemingway app to highlight any unclear pages on our main website. The app highlighted four pages where we’d used the passive voice or very long sentences.
Our first inclination was to think our readers are cleverer, our content is more technical, and those parts couldn’t be rewritten. Of course, we found we could. We decided to rewrite them in the same way we’d write user documentation, and those passages became much clearer as a result. A lesson learnt.
Several factors affect the readability of a page.
One is using the active voice. In our case, there were only a few sentences using the passive voice that we needed to change.
Another is to reduce sentence length. We found we had many long sentences. We’ve split those into two shorter sentences.
We’re still being marked down in the readability scores for using “difficult words”. This is harder to fix, as we need to describe technical subjects. We also don’t use many transition words in sentences. This is probably a symptom of writing short Help topics, where transitions are less common.
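As a rough illustration of the kinds of checks a tool like Hemingway performs, the two factors above (sentence length and passive voice) can be approximated with simple heuristics. These regexes are our own naive sketch, not the app's actual rules, and the passive-voice pattern misses irregular participles such as "written" or "given":

```python
import re

def readability_flags(text: str, max_words: int = 25) -> dict:
    """Naive readability checks: split text into sentences, then flag
    sentences that are too long and sentences matching a crude
    passive-voice pattern (a form of 'to be' followed by a word
    ending in -ed). Both heuristics are approximate."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences if len(s.split()) > max_words]
    passive = [s for s in sentences
               if re.search(r"\b(?:is|are|was|were|be|been|being)\s+\w+ed\b", s)]
    return {"count": len(sentences),
            "long": long_sentences,
            "passive": passive}

report = readability_flags("The page was reviewed by the team. We fixed it.")
```

Here the first sentence is flagged as passive ("was reviewed") while the short, active second sentence passes both checks.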
We’ve also updated the look and feel of the main site (and of this blog), and we’ve made some changes to reduce the number of clicks needed to reach information. This is a work in progress. We hope you find the site easier to use.
Here are some examples from Munich of what might seem obvious and common sense to one audience, but not to others.
Traffic lights that have four lights, with the symbols –, O, I and K:
Pedestrian crossing lights that have two people instead of one:
The second set of lights is still comprehensible (hold the hand of the person next to you whilst you’re waiting to cross the road 😉 ), but the first set didn’t make sense even to the (non-Bavarian) German members of our party.
The UK’s Government Digital Service has been doing great work in putting users’ needs before the needs of government, so it was a shock to see the revised tax manuals the GDS and HMRC published recently.
“HMRC has built a new publishing system which makes it easier for its tax experts to update and maintain the content of the manuals. Tax agents, accountants and specialists need to be able to see the tax manuals exactly how HMRC publishes them internally, so the GDS team knew we couldn’t touch the content. We did create a new design for the manuals to make them more user-friendly and bring them in line with GDS design principles.”
Screencast videos have become a popular means of delivering “how-to” information. One of the questions developers must address is: how long should a screencast be?
At last week’s tekom conference, I saw an interesting presentation by Melanie Huxhold and Dr Axel Luther of SAP on how they develop screencasts for SAP’s products (Produkt- und Lernvideos als ideale Ergänzung zur klassischen Dokumentation – product and learning videos as an ideal complement to classical documentation). In their presentation, Melanie said they had determined the ideal length for their videos by sending out a questionnaire to users, asking them what they preferred.
On Dara Ó Briain’s Science Club (BBC 2) this week, neuroscientist Dr Tali Sharot explained “Optimism Bias”, suggesting that our brains may be hard-wired to look on the bright side.
Here is her TED presentation on the Optimism Bias:
Nearly everyone is optimistic that they will never get divorced and that they are an above-average driver, when statistically that’s just not possible. It seems reasonable to infer that users of software are also over-optimistic, believing they are an average or above-average user in their ability to use an application.
This has an implication for those developing user documentation and training. It seems likely that most people will believe they don’t need to read the documentation (or receive training) when they actually do.
Over the weekend, Dr Chris Atherton suggested I look at “the doorway effect”. You may well have experienced walking through a doorway and then finding you’d forgotten why you’d stood up in the first place.
Researchers at the University of Notre Dame have discovered your brain is not to blame for your confusion about what you’re doing in a new room – the doorway itself is.
From Scientific American:
The researchers say that when you pass through a doorway, your mind compartmentalizes your actions into separate episodes. Having moved into a new episode, the brain archives the previous one, making it less available for access.
The doorway can be a virtual doorway as well as a physical doorway. The researchers’ experiments involved seating participants in front of a computer screen running a video game.
So is this effect also happening when users need to leave a screen in a software application and read Help – be it delivered as a .CHM file, on a Web site or on paper?
The solution? If we deliver User Assistance (Help) so that it is actually located within the application screens, we can not only minimise the need for users to pass through a virtual doorway, but also embed the learning in the user’s specific situation.