Last night, we watched Steve Jobs: The Lost Interview on Netflix. It’s a lengthy (70-minute) interview from 1995, in which Steve Jobs discusses his recipe for a successful business. The interview was recorded 19 years ago, when Steve Jobs was still running NeXT, and not long before he rejoined Apple.
Interviews with Subject Matter Experts (SMEs) are some of the most useful sources for Technical Authors when they are gathering information about a product or procedure. This often involves asking a developer or departmental manager a series of questions focused on what end users are likely to want to know.
Interviewing is one of those dark arts that Technical Authors pick up over time – techniques for getting SMEs to find the time to speak to you and review your drafts, ways to avoid conversations meandering away from what the user will want to know, tools for capturing the interview, and so on.
So what tools should you use?
Coming armed with biscuits (cookies in the USA) is probably the most effective tool! After that, the most useful tool to have is a voice recording device. If you have a smartphone, in effect, you have a digital voice recorder. There are many voice recording apps for both iOS and Android, but the one we like is Recordium.
In addition to recording audio, Recordium also enables you to annotate the voice recording. You can highlight and tag certain parts of audio recordings (for example, to indicate a new topic or to mark sections that define terms), and add attachments to those sections as well. You can use it, in effect, as an audio-orientated note clipping application, similar to Evernote.
Recordium also enables you to vary the playback speed. We’ve found this useful when SMEs are using specialist terminology – you can slow down the recording to check what it was they actually said. Listening at a faster speed is also a useful way of reviewing a recording quickly.
Technical Authors still need to transcribe sections of the interview, so it becomes text. Unfortunately, speech-to-text applications still have some way to go. Dragon Dictation is available for Apple devices, and ListNote offers similar functionality for Android. However, even if you are just a two-fingered typist, you’re probably better off transcribing the audio yourself.
Are there any other apps you’d recommend? Let us know.
Google Glass, a wearable computer with a screen above the right eye, goes on sale in 2014. Glass is almost certainly going to be used to support maintenance and repair calls, providing technicians (and other types of user) with the ability to access manuals and discuss situations with remote colleagues.
So are your user manuals, and the other content users might need to access, compatible with Google Glass?
These potential actions would help Microsoft compete with Amazon and Apple in the digital publishing market; Microsoft would be able to offer writers a feature-rich authoring tool, a publishing platform and a publishing environment, all from the same vendor. The promise is that you could write your book in Word and start selling it in just a few clicks.
What is unclear at this moment is whether “publish to Nook” means “publish EPUB documents that will display their content nicely on other ebook readers”. Will the underlying code in the EPUB files be “clean” and not “bloated”? Let’s hope that’s the case.
Mozilla has released Version 1 of Popcorn Maker, a free HTML5 Web application that enables you to create videos that interact with images, text, maps and other media.
This means you are able to add live content to a video. For example, if you have a video telling a user how to purchase an item, you could include details on the specific item they want to purchase, within the video.
Mozilla is promoting this as a tool for video makers, but it offers new capabilities to those involved in corporate training, support and user assistance.
In the coming weeks, Cherryleaf will be advising our clients on how they can use the technology in their training videos and screencasts.
One of the limitations of video-based information has been the difficulty users have in finding a particular piece of information in a video. Usually, they have to watch the whole video, or “hunt and peck” to get to the moment containing the information they were searching for.
As we’ve mentioned in previous posts, HTML5, an emerging Web standard, enables Technical Authors and courseware developers to synchronize different media. One application of this is enabling users to search a text for a key word and then start a video or audio at that point. Here is an example.
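To make the idea concrete, here is a minimal sketch of how text-to-video search can work: a transcript with timestamps (in the spirit of WebVTT captions) is searched for a keyword, and the matching cue’s start time is what you would assign to the video’s playback position. The cue format, function names and sample transcript are our own illustrative assumptions, not part of any particular product.

```typescript
// Illustrative sketch: map a keyword search over a timed transcript
// to a video start time. Cue format and names are assumptions.

interface Cue {
  start: number; // seconds from the beginning of the video
  text: string;  // what is said during this cue
}

// Parse a simplified "MM:SS --> MM:SS" cue header followed by
// one line of transcript text.
function parseCues(vtt: string): Cue[] {
  const cues: Cue[] = [];
  const lines = vtt.split("\n");
  for (let i = 0; i < lines.length; i++) {
    const m = lines[i].match(/^(\d+):(\d+)\s*-->/);
    if (m && i + 1 < lines.length) {
      cues.push({
        start: Number(m[1]) * 60 + Number(m[2]),
        text: lines[i + 1].trim(),
      });
    }
  }
  return cues;
}

// Return the start time of the first cue mentioning the keyword,
// or null if the keyword never occurs.
function seekTimeFor(cues: Cue[], keyword: string): number | null {
  const needle = keyword.toLowerCase();
  const hit = cues.find((c) => c.text.toLowerCase().includes(needle));
  return hit ? hit.start : null;
}

const transcript = [
  "00:05 --> 00:12",
  "Open the Settings panel.",
  "00:12 --> 00:20",
  "Choose Export to save your work.",
].join("\n");

console.log(seekTimeFor(parseCues(transcript), "export")); // 12
```

In a real HTML5 page, the returned number would simply be assigned to the video element’s `currentTime` property, so the video starts playing at the moment the user searched for.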
In addition to making it easier for users to search videos for the information they need, it also means the pages will be more likely to appear in search engine rankings. In other words, there will be an SEO benefit as well.
We believe this is an exciting development in the field of user assistance.
As we mentioned in previous posts, HTML5 enables Technical Authors and courseware developers to synchronize different media. One of the key areas where this can be applied is in eLearning, where users are now able to toggle between text-based content and video tutorials.
As a consequence, Technical Communicators will need to decide which form of text to provide with the video.
Should it be:
a transcript, faithfully documenting every word said in the video;
an edited, but still conversational, version; or
text written in the minimalist writing style we normally see in User Guides and Help files?
Let’s look at the case for and against providing an exact copy of what was said in the recording.
The case against a transcript
The manner in which we speak and the way we write are often significantly different. Hugh Lupton, from The Company of Storytellers, once said:
It’s a very different journey from the eye to the mind as from the ear to the mind.
In oracy, the artifices of speech are very important. These are what Marie Shedlock called “the mechanical devices by which we endeavor to attract and hold the attention of the audience.” They are the gestures, the pauses, the repeated phrases that a good presenter will use.
For example, in this performance by Daniel Morden and Hugh Lupton, Daniel repeats “not for you” as a way of keeping the audience engaged (from 0:13):
If we make a transcript, we retain these artifices in the written word. When published in written form, rather than aiding comprehension, they can make the text harder for the user to understand. For example, if the user searches for a key word or phrase, the speaker’s repetition of that phrase will make it harder for the user to find the right instance.
The case for a transcript
Unlike the storytelling examples above, eLearning is rarely delivered in audio form only. In most cases, the presenter and the audience have a shared visual image to view. This helps ensure there is shared understanding.
A transcript gives a true representation of what the presenter said.
If a user remembers a key phrase or word in the presentation, then they may want to search for that moment to replay it. If the text has been edited, or is significantly different, then that phrase may have been omitted.
Which approach should you take?
We suspect that, over time, the text provided in synchronized eLearning courses will follow the minimalist style that works in User Guides and Online Help, where the subject matter is technical in nature. A more conversational tone may work where the material is non-technical (e.g. easy-to-use consumer goods), where it provides an overview, or where there isn’t the time to do anything more than provide a transcript.
One of the topics Ellis covered in his presentation at the Technical Communication UK 2012 conference was how media synchronization is likely to affect online training, online Help and other forms of user assistance.
HTML5, an emerging Web standard, will enable Technical Authors and courseware developers to synchronize different media, such as live data and video recordings.
One key area where this technology is likely to be used is where you are looking to use video to guide users through a specific task. For example, if you had a video explaining how to bid for an item on an auction site, you could dynamically include details of the product in the video itself. If a different user watched the video, the product details shown would change to the item they were interested in.
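The auction-site example above can be sketched as time-based cues whose text is filled in per viewer, in the spirit of Popcorn-style media synchronization. The `Product` type, cue times and overlay text below are hypothetical, purely for illustration.

```typescript
// Illustrative sketch: per-viewer content substituted into a video
// at set times. All names and values here are assumptions.

interface Product {
  name: string;
  price: string;
}

interface OverlayCue {
  start: number; // seconds into the video
  end: number;
  template: (p: Product) => string;
}

const cues: OverlayCue[] = [
  { start: 5, end: 15, template: (p) => `You are bidding on: ${p.name}` },
  { start: 15, end: 25, template: (p) => `Current price: ${p.price}` },
];

// Given the playback position and the viewer's own product, return
// the overlay text to display, or null between cues. In a real page,
// this would run on the video element's "timeupdate" event.
function overlayAt(time: number, product: Product): string | null {
  const cue = cues.find((c) => time >= c.start && time < c.end);
  return cue ? cue.template(product) : null;
}

console.log(overlayAt(10, { name: "Antique clock", price: "£42" }));
// → "You are bidding on: Antique clock"
```

Because the overlay is ordinary page content rather than pixels baked into the footage, the same video file serves every viewer, and only the cue data changes.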
At the moment, HTML5 is still an emerging standard. However, it is likely to be an important development in the field of eLearning and technical communication.