Your policy and procedures manual as software

Jared Spool tweeted this morning:

HyperCard was a hypertext program that came with the Apple Macintosh in the 1980s. It allowed you to create “stacks” of online cards, which organisations used to create some of the first online guides. It also contained a scripting language called HyperTalk that a non-programmer could easily learn. This meant HyperCard could do more than just display content: it could be used to create books and games (such as Myst), develop oil-spill models, and even dial the telephone.

Continue reading

The lost Steve Jobs interview – on successful products

Last night, we watched Steve Jobs: The Lost Interview on Netflix. It’s a lengthy (70-minute) interview from 1995, in which Steve Jobs discussed his recipe for a successful business. The interview was recorded 19 years ago, when Steve Jobs was still running NeXT, and just six months before he rejoined Apple.

Here are some highlights.
Continue reading

Getting information from Subject Matter Experts

Interviews with Subject Matter Experts (SMEs) are some of the most useful sources for Technical Authors when they are gathering information about a product or procedure. This often involves asking a developer or departmental manager a series of questions focused on what end users are likely to want to know.

Interviewing is one of those dark arts that Technical Authors pick up over time – techniques for getting SMEs to find the time to speak to you and review your drafts, ways to avoid conversations meandering away from what the user will want to know, tools for capturing the interview, and so on.

So what tools should you use?

Coming armed with biscuits (cookies in the USA) is probably the most effective tool! After that, the most useful tool to have is a voice recording device. If you have a smartphone, in effect, you have a digital voice recorder. There are many voice recording apps for both iOS and Android, but the one we like is Recordium.

Recordium

In addition to recording audio, Recordium also enables you to annotate the voice recording. You can highlight and tag certain parts of a recording (for example, to indicate a new topic or to mark sections that define terms), and add attachments to those sections as well. You can use it, in effect, as an audio-orientated note-clipping application, similar to Evernote.

Recordium also enables you to vary the playback speed. We’ve found this useful when SMEs are using specialist terminology – you can slow down the recording to check what it was they actually said. Listening at a faster speed is also a useful way of reviewing a recording quickly.

Technical Authors still need to transcribe sections of the interview so that it becomes text. Unfortunately, speech-to-text applications still have some way to go. Dragon Dictation is available for Apple devices, and ListNote offers similar functionality for Android. However, even if you are just a two-fingered typist, you’re probably better off transcribing the audio yourself.

Are there any other apps you’d recommend? Let us know.

Are your user manuals (and any other content) ready for Google Glass?

Google Glass, a wearable computer with a screen above the right eye, goes on sale in 2014. Glass is almost certainly going to be used to support maintenance and repair calls, providing technicians (and other types of user) with the ability to access manuals and discuss situations with remote colleagues.

So are your user manuals, and the other content users might need to access, compatible with Google Glass?

Continue reading

Will the next version of Microsoft Word make EPUB publishing a lot easier (or a lot worse)?

There are reports on various technology websites that Microsoft is rumoured to be purchasing the company behind the Nook e-book readers and tablets. There are also rumours that the next release of Microsoft Office (codename “Gemini”) will have a “publish to Nook” option.

These potential moves would help Microsoft compete with Amazon and Apple in the digital publishing market; Microsoft would be able to offer writers a feature-rich authoring tool and a publishing platform, both from the same vendor. The promise is that you could write your book in Word and start selling it in just a few clicks.

What is unclear at the moment is whether “publish to Nook” will mean “publish EPUB documents that display their content nicely on other e-book readers as well”. Will the underlying code in the EPUB files be “clean” and not “bloated”? Let’s hope that’s the case.

Let us know what you think.

Popcorn Maker – “freeing Web videos from the little black box”

Mozilla has released Version 1 of Popcorn Maker, a free HTML5 Web application that enables you to create videos that interact with images, text, maps and other media.

This means you are able to add live content to a video. For example, if you have a video telling a user how to purchase an item, you could include details of the specific item they want to purchase within the video itself.
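To make that a little more concrete, here is a minimal sketch of how dynamic content could be overlaid on a video at a particular moment, using only the standard HTML5 timeupdate event rather than Popcorn Maker itself. The element IDs and the product endpoint are invented for the example.

```typescript
// Minimal sketch: show live product details alongside a how-to video
// between 12 and 30 seconds of playback. IDs and the endpoint are hypothetical.
const video = document.querySelector<HTMLVideoElement>("#howto-video")!;
const panel = document.querySelector<HTMLDivElement>("#live-panel")!;

const cue = {
  start: 12,
  end: 30,
  render: async () => {
    // Pull in the details of the item the viewer is actually looking at,
    // rather than baking them into the video.
    const res = await fetch("/api/products/current"); // hypothetical endpoint
    const product = await res.json();
    panel.textContent = `${product.name}: ${product.price}`;
  },
};

let shown = false;

video.addEventListener("timeupdate", () => {
  const t = video.currentTime;
  if (t >= cue.start && t < cue.end && !shown) {
    shown = true;
    void cue.render();
  } else if ((t < cue.start || t >= cue.end) && shown) {
    shown = false;
    panel.textContent = ""; // clear the overlay outside the cue
  }
});
```

Popcorn Maker and its underlying JavaScript framework wrap this kind of time-based event handling in a friendlier interface.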

Popcorn Maker

Version 1 of Popcorn Maker offers basic functionality. More advanced functionality (such as synchronising a text transcript with a video) is also available via the underlying JavaScript framework, Popcorn.js.

Mozilla is promoting this as a tool for video makers, but it offers new capabilities to those involved in corporate training, support and user assistance.

In the coming weeks, Cherryleaf will be advising our clients on how they can use this technology in their training videos and screencasts.

See also: Screencast, eLearning and animated tutorial development services

Searching for key words and phrases in training videos – Adventures in media synchronization

One of the limitations of video-based information has been the difficulty users face in finding a particular piece of information in a video. Usually, they have to watch the whole video, or “hunt and peck” their way through it, to get to the moment containing the information they were searching for.

As we’ve mentioned in previous posts, HTML5, an emerging Web standard, enables Technical Authors and courseware developers to synchronize different media. One application of this is enabling users to search a text for a key word and then start a video or audio at that point. Here is an example.

Searching for key word or phrase in a video - example

In addition to making it easier for users to search videos for the information they need, this approach also makes the pages more likely to appear in the search engine rankings. In other words, there is an SEO benefit as well.

Synchronizing text and video within Web pages will become a lot easier in November 2012, when we are likely to see the introduction of authoring tools containing this functionality (at the moment, you need to be familiar with HTML and JavaScript).
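If you are curious about roughly what that HTML and JavaScript involves today, here is a minimal TypeScript sketch (plain JavaScript works just as well) that searches a video’s WebVTT transcript track for a phrase and starts playback at the matching cue. The element ID, file names and search phrase are illustrative, not taken from any specific product.

```typescript
// Assumes markup along these lines (file names are made up):
//   <video id="training-video" controls>
//     <source src="training.mp4" type="video/mp4">
//     <track kind="subtitles" src="training.vtt" default>
//   </video>
const video = document.querySelector<HTMLVideoElement>("#training-video")!;

// Find the first transcript cue containing the phrase and play from there.
function playFromPhrase(phrase: string): boolean {
  const cues = video.textTracks[0]?.cues; // available once the .vtt file has loaded
  if (!cues) return false;

  for (let i = 0; i < cues.length; i++) {
    const cue = cues[i] as VTTCue;
    if (cue.text.toLowerCase().includes(phrase.toLowerCase())) {
      video.currentTime = cue.startTime; // jump to the matching moment
      void video.play();
      return true;
    }
  }
  return false;
}

playFromPhrase("export settings"); // hypothetical search term
```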

We believe this is an exciting development in the field of user assistance.

Combining text and video in eLearning – Adventures in media synchronization

As we mentioned in previous posts, HTML5 enables Technical Authors and courseware developers to synchronize different media. One of the key areas where this can be applied is in eLearning, where users are now able to toggle between text-based content and video tutorials.
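As a rough illustration of that toggling, here is a minimal sketch that keeps a text panel and a video in step: the paragraph matching the current playback position is highlighted, and clicking a paragraph jumps the video to that point. The element IDs, CSS class and data attributes are invented for the example, not the API of any particular eLearning tool.

```typescript
// Assumes the transcript is rendered as <p data-start="12" data-end="30"> elements
// inside a #transcript container; IDs and attributes are hypothetical.
const video = document.querySelector<HTMLVideoElement>("#lesson-video")!;
const paragraphs = Array.from(
  document.querySelectorAll<HTMLParagraphElement>("#transcript p[data-start]")
);

// Highlight the paragraph that matches the current playback position.
video.addEventListener("timeupdate", () => {
  const t = video.currentTime;
  paragraphs.forEach((p) => {
    const start = Number(p.dataset.start);
    const end = p.dataset.end ? Number(p.dataset.end) : Infinity;
    p.classList.toggle("current", t >= start && t < end);
  });
});

// Clicking a paragraph jumps the video to that point, so the learner can
// move between reading and watching without losing their place.
paragraphs.forEach((p) => {
  p.addEventListener("click", () => {
    video.currentTime = Number(p.dataset.start);
    void video.play();
  });
});
```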

As a consequence, Technical Communicators will need to decide which form of text to provide with the video.

Should it be:

  • a transcript, faithfully documenting every word that was said in the video;
  • an edited, but still conversational, version; or
  • text written in the minimalist writing style we normally see in User Guides and Help files?

Let’s look at the case for and against providing an exact copy of what was said in the recording.

The case against a transcript

The manner in which we speak and the way we write are often significantly different. Hugh Lupton, from The Company of Storytellers, once said:

It’s a very different journey from the eye to the mind as from the ear to the mind.

In oracy, the artifices of speech are very important. These are what Marie Shedlock called “the mechanical devices by which we endeavor to attract and hold the attention of the audience.” They are the gestures, the pauses and the repeated phrases that a good presenter will use.

For example, in this performance by Daniel Morden and Hugh Lupton, Daniel repeats “not for you” as a way of keeping the audience engaged (from 0:13).

If we make a transcript, we retain these artifices in the written word. Instead of improving comprehension, they can make the content harder for the user to understand when it is published in written form. For example, if the user searches for a key word or phrase, the speaker’s repetition of that phrase will make it harder to find the right instance.

The case for a transcript

  1. Unlike the storytelling examples above, eLearning is rarely delivered in audio form only. In most cases, the presenter and the audience have a shared visual image to view. This helps ensure there is shared understanding.
  2. A transcript gives a true representation of what the presenter said.
  3. If a user remembers a key phrase or word in the presentation, then they may want to search for that moment to replay it. If the text has been edited, or is significantly different, then that phrase may have been omitted.

Which approach should you take?

We suspect that, over time, the text provided in synchronized eLearning courses will follow the minimalist style that works in User Guides and Online Help, where the subject matter is technical in nature. A more conversational tone may work where the material is non-technical (e.g. easy-to-use consumer goods), where it provides an overview, or where there isn’t time to do anything more than provide a transcript.

As they say, time will tell.

What do you think?