At the Write The Docs event in London last night, Gergely Imreh presented Resin.io’s approach to customer-driven docs – documentation as self-service support. Resin.io is a software company that provides Linux containers for the Internet of Things. It sees itself as a support-driven organisation, so documentation is very important to it.
The discussions at the end of the talk were around which type of platform is best for developer documentation.
Resin.io uses an in-house system, based on Markdown and a flat-file publishing tool. They build pages from “partials” (reusable chunks of information) to create “parametric information”: pages can be built to match different criteria – for example, using Resin.io on a Raspberry Pi with Node.js. This provides an authoring environment that is easy for developers to use; it doesn’t require a database-driven CMS; and the content can be treated in a similar way to the code. The challenge with this type of system is getting it to scale. The “intelligence” of the system comes from storing content in folders and using scripts within pages. As the content grows, they are finding it harder to manage.
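To illustrate the idea (the partial names and parameters below are hypothetical, not Resin.io’s actual system), parametric assembly from partials can be sketched like this:

```python
# Illustrative sketch of "parametric" page assembly from partials.
# Each partial is a reusable chunk, keyed by section and parameter value.
PARTIALS = {
    ("install", "raspberrypi"): "Flash the SD card with the OS image.",
    ("install", "beaglebone"): "Copy the image to the onboard flash.",
    ("runtime", "nodejs"): "Add a package.json listing your dependencies.",
    ("runtime", "python"): "Add a requirements.txt listing your dependencies.",
}

def build_page(device, runtime):
    """Assemble a getting-started page for one device/runtime pair."""
    sections = [
        PARTIALS[("install", device)],
        PARTIALS[("runtime", runtime)],
    ]
    return "\n\n".join(sections)

print(build_page("raspberrypi", "nodejs"))
```

The scaling problem shows up quickly in a scheme like this: every new device or runtime multiplies the number of partials and page combinations to manage.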
Gergely said he’d like to see whether a wiki-based system would work better. Content would be easier to edit, as pages would be more self-contained.
Kristof van Tomme suggested using a database-driven CMS. Pages would be built “on the fly” by the CMS. In this situation, the “intelligence” of the system lies in the metadata wrapped around each topic and in the database software that manages the content. The downside is that it may be hard to move the content to another platform at some stage in the future. You also have to manage the database and protect the CMS from potential hacking.
Another suggestion was to use a semantic language, such as AsciiDoc or DITA, to write the content. In this situation, the “intelligence” is placed in the topics and with the writers: they mark up sentences or paragraphs for each applicable parameter, such as audience and computer. These can be published as flat files or be managed by a database. This approach is scalable and tool-independent, but it requires much more work by the writers.
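As a rough sketch of what this markup looks like (the attribute names here are invented for illustration), AsciiDoc supports conditional blocks that are only included when a given attribute is set at build time:

```asciidoc
== Getting started

// Shown only when the build sets the "raspberrypi" attribute,
// e.g. asciidoctor -a raspberrypi getting-started.adoc
ifdef::raspberrypi[]
Flash the SD card with the OS image before you begin.
endif::[]

ifdef::nodejs[]
Add a package.json file listing your dependencies.
endif::[]
```

The same source file can then be published for each parameter combination, whether the build runs over flat files or from a database.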
What’s best depends partly on your view of the problem. Is it an information design problem, a software problem, or a data management problem? There are pros and cons to each approach.
Daryl Colquhoun has written an article in tcWorld about the international standard ISO/IEC/IEEE 26512. He explained the standard is going to be revised and renamed: from “Systems and software engineering – Requirements for managers of user documentation” to “Systems and software engineering – Requirements for managers of information for users”.
The reason for this, he states, is that in many parts of the world the term “documentation” is associated only with a printed manual. The neutral term “information for users” covers all types of content: online Help, audio, video, and augmented reality.
The problem with “information” is it can mean many things. Information for users could mean the weather forecast. We may well need to move away from the word documentation, but I’m not sure we’ve yet come up with a suitable alternative.
We’ve added three short and simple guides to the Cherryleaf website:
The Government Digital Service has been working on establishing a standard design for its technical (i.e. developer) documentation. This content is for systems architects, developers and users of the GOV.UK platforms and APIs.
You can see an example at: GOV.UK Platform as a Service
Cherryleaf was given a preview of the new design a few months ago, when we ran an API documentation training course at GDS. We made a couple of suggestions, which look like they’ve been included in the final design.
The documentation has these main sections:
- Overview. This includes why you would use it, and the pricing plan.
- Getting started. This includes prerequisites and limitations.
- How to deploy
- Managing apps
- Managing people
Elsewhere on the website is information relating to support, the product features, and the product roadmap.
As with other GOV.UK content, the team researched what developers wanted and carried out usability testing. I understand the researchers discovered that developers preferred content to be on a long, single page, and that they would be working in a two-screen environment. Using long pages enables users to search and navigate with the keyboard, rather than the mouse. GDS also looked at other developer websites, such as WorldPay and Stripe, for best practice.
GDS is highly regarded in the technical communications community for its excellent work on the GOV.UK website. This means it is likely other organisations will copy GDS’s design for their own developer documentation.
Following on from James Mathewson’s presentation at Content Strategy Applied 2017, we’ve been reflecting on Cherryleaf’s main website and the improvements we could make to it. One thing we have started to do is reduce the reading age for the content. Reading-age measures are, in effect, readability measures, so any improvements also benefit people with a high reading age.
There are a number of factors that affect the readability of a page.
One is using the active voice. In our case, there were only a few sentences using the passive voice that we needed to change.
Another is to reduce sentence length. We found we had many long sentences, and we’ve split each of those into two shorter sentences.
We’re still being marked down in the readability scores for using “difficult words”. This is harder to fix, as we need to describe technical subjects. We also don’t use many transition words in sentences. This is probably a symptom of writing short Help topics, where transitions are less common.
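Readability scores like these are typically driven by sentence length and word length. As a rough sketch (this uses the standard Flesch Reading Ease formula, but with a crude vowel-group syllable count rather than the dictionary-based approach real tools use):

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    # Real readability tools use dictionaries and exception rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Compare a plain passage with a "difficult words" passage.
easy = flesch_reading_ease("We write short sentences. We use plain words.")
hard = flesch_reading_ease("Organisations frequently underestimate elaborate terminology.")
```

Shorter sentences and fewer multi-syllable words raise the score, which matches the two fixes described above: converting passive sentences and splitting long ones.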
We’ve also updated the look and feel of the main site (and of this blog), and made some changes to reduce the number of clicks needed to reach information. This is a work in progress. We hope you find the site easier to use.
I spoke at, and attended, the Content Strategy Applied 2017 conference last week. One of the keynote speakers, James Mathewson, provided a fascinating description of how IBM uses audience intent modelling to map its content plans. By doing this IBM is able to align its content with the buying cycles for their target personas.
This planning involves the management of 300 million pages and 100,000 marketing assets, and they use a dizzying array of artificial intelligence and software to improve their search engine rankings. However, their strategy is actually very simple.
There are three forms of audience intent
These are informational (learn about a topic), navigational (find information about a topic), and transactional (find a place to buy the solution or get help).
There are two kinds of audience
These are business people and specialists.
There are two kinds of queries
These are branded and unbranded. Most searches are unbranded questions; people only move to branded questions when they are ready to buy.
There are five stages in the IBM customer journey
Here are the steps and the type of content IBM provides:
- Discover – What is big data? web page
- Learn – Video on big data (“Four ways big data and analytics transform marketing”)
- Solve – A product information page (“14 top big data analytics platforms”)
- Try – The offer (Watson Analytics 30 day free trial)
- Buy – A whitepaper (“Understanding Watson Analytics”)
IBM has invested heavily in technology
This is used to maintain consistency in the tagging of content, and in the tone and voice. It’s also used to learn what audiences want and are searching for on the web. A lot of searches are in the form of questions, so IBM mines those questions to discover what people are asking.
IBM avoids online marketing tricks
James said “clever messages to push people and trick them” rarely work online, and if they do, the reader is unlikely to come back. Instead, they focus on what the audience wants, and aim to meet that need.
It was the best presentation at the conference, and it provided lots of ideas for Cherryleaf’s website.
I spoke at, and attended, the Content Strategy Applied 2017 conference last week. One of the keynote speakers, Madi Weland Solomon, explored the impact content has on users, and the trends that will inform content strategy in the near future.
She said one of the key challenges for organisations will be dealing with the loss of trust in information. She quoted a survey that stated over 50% of Americans have no trust in mainstream news. Her suggestion for fixing this was to become more active in representing the public (and end users). Organisations should use more human-centric data and focus on helping users. Referring back to Dale Carnegie, Madi said being useful, and being seen to be an advocate for users, was vital. She suggested the law of reciprocity would play a part: users would return the favour of being helped by the company.
Help and other forms of user assistance meet this type of need. User assistance is already seen as some of the most trustworthy content on the web, and it is useful. It should not be hidden away behind a firewall; it should help build and sustain the trust between the organisation and its users.
She also looked at which type of content is read the most: blog articles with roughly a seven-minute reading time. This finding is problematic for technical communicators, as the trend is to write short chunks of information. Perhaps there is a need to rethink this style in some situations.