We’ve added a new vacancy to our site: #4159, Digital Content Manager, Central London, circa £50K.
Approximately 50% of a Technical Author’s day is spent writing. However, when Technical Publications teams look for efficiencies, they tend to focus on the 50% of time spent on non-writing activities, such as researching, reviewing and planning. They assume the content itself cannot be written more quickly. To an extent, they are right, as the qwerty keyboard is not an optimal layout.
We’ve been going through a process of transcribing our early e-learning modules, in order to have scripts upon which we can base future course updates. As part of this project, we’ve been using a free application called Plover to help us write the content. With Plover, you have the potential to create content (in Word, RoboHelp, Flare, Oxygen XML, etc.) at up to 225 words per minute (wpm).
Plover is based on chorded typing. You press more than one key at a time to create words. Chorded typing isn’t new – for example, it was demonstrated in Douglas Engelbart’s famous “The mother of all demos”.
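Conceptually, a steno engine translates each chord – a set of keys pressed simultaneously – into a whole word via a dictionary lookup. A minimal sketch of that idea in Python; the chords below are invented for illustration, and Plover’s real dictionaries use steno notation rather than literal key sets:

```python
# Minimal sketch of chorded typing: each chord (a frozenset of the keys
# pressed together) maps to a whole word. The chords are made up for
# illustration; Plover's real dictionaries map steno strokes to text.
CHORD_DICTIONARY = {
    frozenset("TK"): "the",
    frozenset("KAT"): "cat",
    frozenset("SAT"): "sat",
}

def translate(chords):
    """Translate a sequence of chords into words, marking unknown chords."""
    return [CHORD_DICTIONARY.get(frozenset(chord), "?") for chord in chords]

print(" ".join(translate(["TK", "KAT", "SAT"])))  # the cat sat
```

Because one chord produces a whole word, the theoretical speed ceiling is far higher than one keystroke per letter.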
Below is a five-minute lightning talk on Plover and some of the emerging hardware:
So far, in my case, I’ve been able to double my typing speed. Realistically, those of us participating in this project at Cherryleaf aim to get to 180 words per minute. The reason for this is that most people speak at 160-180 wpm. At that speed, you are able to transcribe subject matter experts in real time – which means there’s no need to record an interview and then type it up at a later date.
There is a learning curve to this method, but it is based on over 100 years of theory and practice. It is tremendous fun – a bit like learning to use a qwerty keyboard for the first time.
Mike Atherton, Lead Instructor at General Assembly London, tweeted a link to a 2011 interview with Ted Nelson on the future of text, document abstraction and transclusion.
Ted Nelson is one of the pioneers of information technology. He has been credited with being the first person to use the words hypertext and hypermedia (although he denies this), as well as transclusion and virtuality.
It’s an interesting description of how content should be independent of format and media, so it can be portable, re-usable and presented in ways that best suit a reader’s needs.
MadCap Software has asked me to present, as a webinar, one of my conference presentations from the MadWorld 2014 conference – Bust a Move: From Technical Communication to Content Strategy.
In this webinar, we’ll look at how technical communicators can get more involved in corporate content strategy: why they might want to do that, the differences between technical communication and content strategy, and how they might re-position themselves. We’ll also look at which tools and skills technical communicators can bring across from the technical communications field.
It was a popular session at the conference, standing room only in fact, with 20 minutes of questions from the audience at the end.
This webinar will be held on the 12th August at 4:00pm BST (8:00am Pacific Time), and it’s a free event.
I’m afraid there was some confusion over which presentation from MadWorld 2014 we would be repeating, and I mistakenly stated the webinar would be on metrics. That was my mistake. We’ve also had to move the webinar from its original date on the 13th August to the 12th August. Sorry for any confusion caused by these changes.
David Farbey wrote a semi-existentialist post on the challenges for technical communicators yesterday. I’d like to look at the issue in a different way, by looking at the big questions in technical communication today. The answers to these questions (which may be decided by people outside of the profession) are likely to affect the future direction for technical communicators.
One of the main benefits from single sourcing is the ability to reuse existing content. Different departments can avoid duplicating work, which means they can save time and money.
Unfortunately, it can be difficult to quantify these savings before you move to an authoring or content management system that enables you to single source. Analysing all the existing documents in a business can be overwhelming, which means often organisations only quantify the savings after the single sourcing content management system has been implemented.
There are a few software applications that can help you analyse your existing content and determine how much duplication exists. You get a sense of how much time and effort was wasted in the past, which is a pretty good indication of how much waste you’d avoid in the future.
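The applications themselves aren’t named in the post, but the basic technique behind them can be sketched: normalise each paragraph, hash it, and count how often the same hash recurs across documents. A rough illustration in Python (the sample documents are invented):

```python
import hashlib
from collections import Counter

def normalise(paragraph):
    # Lower-case and collapse whitespace so trivial formatting
    # differences don't hide duplicates.
    return " ".join(paragraph.lower().split())

def duplication_ratio(documents):
    """Return the fraction of paragraphs that appear more than once."""
    counts = Counter(
        hashlib.sha256(normalise(p).encode("utf-8")).hexdigest()
        for doc in documents
        for p in doc
        if p.strip()
    )
    total = sum(counts.values())
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / total

docs = [
    ["Unplug the device before cleaning.", "Wipe with a dry cloth."],
    ["Unplug the device  before cleaning.", "Store in a cool place."],
]
print(duplication_ratio(docs))  # 0.5 – two of the four paragraphs are copies
```

Exact hashing only catches verbatim duplication; commercial tools typically also use fuzzy matching to find near-duplicates, which is where most of the hidden waste sits.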
At this week’s London Agile Content Meetup, Lana Gibson of the Government Digital Service (GDS) outlined how they use Google Analytics extensively to check and improve the user journey on the GOV.UK website. She said GDS treats this analytical data as the voice of their users – with GDS needing to interpret it and provide what we, as UK citizens, need.
Lana said they need to see what content is getting the most traffic, so that they can ensure that the most popular content is of top quality, and is prioritised within the site.
One of the key improvements analytics has enabled is better connections between different but related needs that were already on GOV.UK. She showed the example of page views for the “Make a SORN” page. The number of views increased by 70,000 in a month simply because they added a link to this page from the car tax related links section. Previously, SORN information wasn’t mentioned on the car tax page.
She also said they treat searches on the GOV.UK website itself as an indication that users haven’t found what they’re looking for first time. As an example, she said that, by looking at search terms, she discovered lots of people were searching for information about taking rest breaks at work, and that this had been omitted from the page about an employee’s contract and working hours.
Another example she gave was that, on many of the pages about passports, people were searching for “second passport” – people wanting to apply for one. GDS has identified this as a topic that should be added to the site.
Lana said they also optimise the pages based on the language their audience is using. They found having the most important keywords in the title or first sentence helped people find information quickly. GDS uses analytics, Google Trends and Google AdWords to help them understand what terminology people use. For example, she said they found out their page on annual leave needed a better title: users were actually searching for “holiday entitlement”.
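GDS’s own tooling isn’t described in detail, but the underlying idea – ranking the terms users actually type – can be sketched with a simple frequency count (the sample query log is invented):

```python
from collections import Counter

def top_search_terms(queries, n=3):
    """Return the n most frequent search queries, most common first."""
    normalised = (q.strip().lower() for q in queries)
    return Counter(normalised).most_common(n)

site_search_log = [
    "holiday entitlement", "annual leave", "Holiday entitlement",
    "holiday entitlement", "rest breaks", "annual leave",
]
print(top_search_terms(site_search_log, n=2))
# [('holiday entitlement', 3), ('annual leave', 2)]
```

A count like this would surface “holiday entitlement” over “annual leave” as the title users are actually looking for.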
Finally, she said they also use the data to determine what to leave out. If a department wants to add new content to the site, they can use analytics to help assess if there’s actually a need for this content.
Lana’s presentation has been summarised in two excellent blog posts on the GDS website. They are well worth reading:
There’s a wonderful German word, die Weltanschauung, which roughly translates as a view of the world. It suggests there is a framework of ideas and beliefs behind people’s descriptions of various things in the world. I was reminded of Weltanschauung at this week’s London Agile Content Meetup, where Rahel Bailie neatly summed up some of the different views of content, content strategy and single sourcing.
Baked v Fried content
CMS Wiki described baked content as “pages that have been generated by a Content Management System, but then moved to a static delivery server, which can serve them at high speed and high volume”. The word “baked” is used because this approach means you cannot separate the content from the format afterwards. They are baked together.
“Fried” content is where the Web pages are built “on the fly” when they are requested by the end user. Rahel used the example of frying eggs: if you put too many eggs into the frying pan, you can always remove one. Fried content may take a little longer to generate than baked content, but this approach enables you to personalise and filter the content. It also means you can present the information in different ways, depending on which device a person is using.
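The distinction can be sketched in a few lines of Python (the page content and function names are invented): a baked page is rendered once and served as-is to everyone, while a fried page is assembled from the same source content at request time, so it can be filtered per user or per device:

```python
CONTENT = {
    "title": "Car tax",
    "body": [
        "Pay your car tax online.",
        "Make a SORN if the car is off the road.",
    ],
}

def bake(content):
    # Baked: render once to a static string; content and format
    # can no longer be separated afterwards.
    return "<h1>{}</h1>".format(content["title"]) + "".join(
        "<p>{}</p>".format(p) for p in content["body"])

def fry(content, keep=lambda p: True):
    # Fried: render on each request, so paragraphs can be filtered
    # or reordered per user or per device.
    return "<h1>{}</h1>".format(content["title"]) + "".join(
        "<p>{}</p>".format(p) for p in content["body"] if keep(p))

static_page = bake(CONTENT)                               # same bytes for everyone
filtered_page = fry(CONTENT, keep=lambda p: "SORN" not in p)  # personalised view
```

The trade-off is exactly the one Rahel described: `bake` is cheap to serve but fixed, while `fry` costs a little more per request but lets you remove an “egg” for any given reader.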
COPE through technology v COPE through authoring
COPE (Create Once, Publish Everywhere) is another way of describing single sourcing content.
“COPE through technology” is the view that the content is essentially data that can be managed through software. If you need to create a personalised or filtered view of the content, you get a developer to create that version. If you need to create a mobile-ready version of your site, again you get a developer to do this. Content is often created by completing forms, in order to create structured information.
“COPE through authoring” is the view that the writers can do all of the fine-grained manipulation of content. If you need to create a personalised or filtered view of the content, you get the Technical Author to mark up sections for those different conditions in the content itself. To quote Rahel, “You can then run a transformation script, which compiles the content into its final form, and uploads the content to the Web CMS, or other publishing platform, for consumption and presentation.” The advantage of this approach is that it stops you from being tied to a technology or application. The disadvantage is that it relies on your writers being able to mark up and structure the text correctly.
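A minimal sketch of that authoring approach: the writer marks conditions on the content itself, and a transformation script filters by condition at build time. The XML attribute below is invented for illustration (DITA, for example, uses conditional attributes such as `audience` for the same purpose):

```python
import xml.etree.ElementTree as ET

# Source content with author-applied conditions. Paragraphs without an
# "audience" attribute are common to every output.
SOURCE = """
<topic>
  <p>Press the power button.</p>
  <p audience="admin">Enter the service menu password.</p>
  <p audience="user">Contact your administrator for help.</p>
</topic>
"""

def transform(xml_text, audience):
    """Keep paragraphs with no condition, or whose condition matches."""
    root = ET.fromstring(xml_text)
    return [p.text for p in root.findall("p")
            if p.get("audience") in (None, audience)]

print(transform(SOURCE, "user"))
# ['Press the power button.', 'Contact your administrator for help.']
```

Because the conditions live in the source text rather than in a particular tool, the same marked-up content can be compiled by any transformation pipeline – which is the portability advantage described above.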
It’s important to be aware of these distinctions when you talk about content, content strategy and single sourcing, because your Weltanschauung may not be shared by the person you’re talking to.