Towards Flow-Based User Assistance

Flow theory is a psychological concept that is gaining interest in e-learning. It is a concept that should also be considered in the fields of User Assistance and Technical Communication.

Flow is akin to sportsmen being “in the zone” – the state in which people are happiest, when they are completely engaged in a task.

Online Help has traditionally been interruptive – people have to subconsciously admit they have failed and need to seek assistance from a Help file, web page or user guide. The adoption of the term “User Assistance”, instead of “online Help”, is part of a movement towards new models that minimise the situations where users get stuck and help them quickly when they do.

The conditions necessary to achieve the flow state include:

  • Having clear goals
  • Direct and immediate feedback
  • The right balance between the user’s ability level and the task
  • An activity that is intrinsically rewarding.

Flow-based User Assistance complements concepts such as adaptive content, as it implies content should adapt dynamically to explain information in the most suitable way. It also complements ideas such as affective assistance, conversation and community-based documentation, in that these may offer a more suitable “tone of voice” in certain circumstances.

In practice, this means that User Assistance is likely to be embedded into the User Interface – for example, helping explain what certain concepts mean, and what makes a good choice.
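As a rough illustration of what embedding might look like, the sketch below attaches a short explanation directly to the control it relates to, so the user never has to break off and open a Help file. The field id and hint wording are hypothetical; it only shows the idea of guidance living inside the interface.

    // A minimal sketch of embedded User Assistance: the guidance sits next to
    // the control it explains, rather than in a separate Help window.
    // The element id and hint wording below are hypothetical examples.
    function attachInlineHint(fieldId: string, hintText: string): void {
      const field = document.getElementById(fieldId);
      if (!field || !field.parentElement) {
        return; // nothing to annotate
      }
      const hint = document.createElement("p");
      hint.className = "inline-hint";
      hint.textContent = hintText;
      // Place the guidance immediately after the control it explains.
      field.parentElement.insertBefore(hint, field.nextSibling);
    }

    // Hypothetical usage: explain what a setting means and what makes a good choice.
    attachInlineHint(
      "backup-frequency",
      "Daily backups suit most people; choose hourly if you edit files continuously."
    );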

It is a very good approach to take if you are developing apps for mobile phones or tablets. This is, in part, because the iOS operating system has limited multitasking capabilities – you have to interrupt one activity in order to do another.

To adopt a flow-based approach, User Assistance must be planned and considered from the very start of any software project. As it is not a bolt-on to the application, it cannot be left to the end of the project. Guidance text ends up being located in numerous different places.

The reward for taking this approach is that users get stuck less often, enjoy the application more and become more capable users, perhaps even at peak performance.

Designing documents for the iPad 3: the return of old design metaphors

After a few days of using the new iPad 3, it seems likely that, in the future, documents will be designed to take advantage of its retina display. Below are some thoughts on the new trends we’ll see in the way documents are designed for reading on tablet devices.

The paper metaphor

It has been good practice to present information published on paper and information published on screen differently. The limitations of computer screens (for example, screen resolution, screen flicker and eye strain) have meant long, linear documents don’t work well on screen. People like the resolution, portability and ability to make notes that paper provides. Paper is simply a great medium for deep learning and for reading on the move.

This is why organisations expect PDFs to be read on screen when, in reality, users print them out. Often, the savings promised by distributing content online instead of printing it were never actually realised.

With the iPad 3’s screen, most of the limitations of reading on screen have been eliminated. Indeed, Apple is promoting it as a device for reading textbooks – a form of deep learning. It’s like paper in that it’s portable, you can make notes and you don’t get eye strain.

This means, we’re likely see documents on screen that look like documents printed on paper. It may be time for Technical Authors to dust off that copy of “Dynamics in Document Design”! For books that use Apple’s iBooks, we’ve found you tend to read page by page, instead of using the “peck and scan” approach common reading online content.

A new metaphor for online documents – Metro

The paper metaphor is not the only metaphor you can use. The new Metro UI, developed by Microsoft for Windows 8 and smartphones, is another design metaphor that is being adopted.

The Metro UI uses the following approaches:

  • Information is consolidated into groups of common tasks to speed up use. This is done by relying on the actual content to function as the main UI.
  • The result is an interface that uses a “grid approach”: large blocks (instead of small buttons), as well as laterally scrolling “canvases”.
  • Page titles are usually large and scroll off the page laterally.
  • The UI responds to users’ actions by using animated transitions or other forms of motion.

An example of this is the Guardian iPad app.

According to The Guardian’s Andy Brockle:

We have created something that is a new proposition, different to other digital offerings. It works in either orientation and nothing is sacrificed. Instead of it being based on lists, breaking news, and the fastest updates it’s instead designed to be a more reflective, discoverable experience.

Displaying images

With the ability to pinch and zoom, readers can look at images in great detail. This may mean writers will need to present their images in SVG format, but even if we assume they stick to bitmap formats such as .jpg and .png, we’re still likely to see a change in the way documents that rely on images are designed. Instead of needing a series of separate images to display detail, the writer can provide a single image for the reader to explore. It also seems likely we’ll see images that contain “layers” that can be peeled off to reveal the underlying details.
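As a sketch of the “layers” idea, the snippet below shows one way an image published as SVG could let a reader switch a named layer (an SVG group) on or off to reveal the detail underneath. The diagram and layer ids are hypothetical.

    // Toggle a named "layer" (an SVG <g> element) on or off, so the reader
    // can peel it back to see what lies underneath.
    // The svg element and the layer ids are hypothetical examples.
    function setLayerVisible(svg: SVGSVGElement, layerId: string, visible: boolean): void {
      const layer = svg.querySelector<SVGGElement>(`g#${layerId}`);
      if (layer) {
        layer.style.display = visible ? "inline" : "none";
      }
    }

    // Hypothetical usage: hide the casing layer of an exploded diagram
    // to expose the components drawn beneath it.
    const diagram = document.querySelector<SVGSVGElement>("svg#pump-diagram");
    if (diagram) {
      setLayerVisible(diagram, "outer-casing", false);
    }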

Unresolved aspects

We’re at the beginning of the process of making the most of portable devices with “real-life” displays, so document design is likely to evolve further. It’s still unclear what is the best navigation UI for iPad3 readers.

The bear trap

There is a huge bear trap waiting to catch out organisations – assuming that what works on the iPad 3’s retina display will also work on screens with lower resolutions.

Conclusion

The more you use the iPad’s new screen, the more you realise it will change the way documents are designed in the future – the biggest change possibly being a move towards on-screen content structured laterally rather than vertically. With predictions that there will be more iPads than citizens of the United States of America by the end of next year, there’ll be more and more reasons for optimising content for the iPad 3.

Introducing the Head-Up Display. Say hello to the future of the menu

The Ubuntu operating system is to replace its application menus with a “head-up display” (HUD) box. According to Mark Shuttleworth, who leads design and product strategy at the company behind Ubuntu:

We can search through everything we know about the menu, including descriptive help text, so pretty soon you will be able to find a menu entry using only vaguely related text (imagine finding an entry called Preferences when you search for “settings”).


One of the comments states:

I suspect that applications will need to give help documentation a more significant place in the development of the application than it currently enjoys. Help seems the logical place to embed command discovery in such a system especially in connection with a capacity for fuzzy searches.
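To make the idea concrete, here is a minimal sketch of command discovery through searching menu labels and their associated help text. The menu entries and the simple substring matching are illustrative only; Ubuntu’s actual HUD does fuzzier matching than this.

    interface MenuEntry {
      label: string;     // e.g. "Preferences"
      helpText: string;  // descriptive help associated with the entry
    }

    // Return every entry whose label or help text mentions the query.
    // Real HUD-style matching would be fuzzier; substring matching is
    // enough to show why help text matters for command discovery.
    function findEntries(query: string, entries: MenuEntry[]): MenuEntry[] {
      const q = query.toLowerCase();
      return entries.filter(
        (entry) =>
          entry.label.toLowerCase().includes(q) ||
          entry.helpText.toLowerCase().includes(q)
      );
    }

    // A search for "settings" finds "Preferences", because the word appears
    // in the help text even though it isn't in the menu label itself.
    const menu: MenuEntry[] = [
      { label: "Preferences", helpText: "Change the application settings." },
      { label: "Print", helpText: "Send the current document to a printer." },
    ];
    console.log(findEntries("settings", menu));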

Is a nudge enough to change user behaviour?

In 2010, the UK government set up a “nudge unit” to look at ways the public could be persuaded – “nudged” – into making better choices for themselves without force or regulation.

This should be of interest to software designers and Technical Authors, because perhaps the same techniques could be used in the field of User Assistance.

Yesterday, The House of Lords Science & Technology sub-committee reported on the results so far.

According to The Guardian, the theories behind the current work have a long history, but came to prominence in 2008 with a book called Nudge: Improving Decisions About Health, Wealth and Happiness, by the Chicago Business School economist Richard Thaler and Chicago Law School professor Cass Sunstein.

According to the Belfast Telegraph, one experiment (prior to the nudge unit being set up) involved HM Revenue and Customs secretly changing the wording of tens of thousands of tax letters. This led to the collection of an extra £200m in income tax.

The paper states the unit’s approach centres on the acronym “MINDSPACE”:

Messenger (i.e. he who communicates information affects its impact); Incentives; Norms (what others do influences individuals); Defaults (pre-set options tend to be accepted); Salience (relevance and novelty attract attention); Priming (sub-conscious cues); Affect (the power of emotional associations); Commitments (keeping public promises); and Ego (the stroking of which encourages positive action)

So is it working?

Unfortunately not when it comes to getting us to live more healthily, according to the House of Lords report. The committee’s chair, Baroness Neuberger, said:

for the most important problems facing us at the moment, the science says that “nudging” won’t be enough.

That doesn’t mean nudging should be rejected out of hand – it might work in other areas, such as software usability. We’re unaware of anyone using nudge theories in developing software or User Assistance. It would be interesting to know if anyone has tried to apply it in that sphere – and whether it has worked or not.

What roundabouts can teach every software developer

Roundabouts have been in use in the UK for over 50 years, and today are seen as a natural part of the landscape – something as intuitive to use as a postbox. Everyone knows how to use them, surely?

Apparently not, judging by this BBC article on the introduction of roundabouts to the USA and the hundreds of American roundabout videos on YouTube (example below).

It’s easy to fall into the trap of assuming certain knowledge is common to everyone – that we all understand the basic concepts. This isn’t always the case, and things may not be intuitive. They may be easy to use, yes, but there is often a need to familiarise users at the start. This is true for software development as it is for roundabouts.

Help in your line of sight

In an article called “The Future of Advertising will be Integrated“, Mark Suster argues readers’ attention is focussed on text and not the banners around it. This “banner blindness” is leading advertisers to move their messages to “the stream”. An example of this is Twitter’s promoted tweets service, where advertisers can pay for a tweet to be featured on Twitter for a day.

If we’re seeing a move towards “integrated advertising”, does this mean we should also be putting online Help in “the stream” as well? Rather than waiting to be called up via the F1 key or Help button, should User Assistance be placed where readers’ attention lies? Should Help be integrated into the stream, too?

Help is broken?

Are we at the point when we need to acknowledge that classic online Help files are not working as well as they should – that is, as the primary source of information to assist users when they get stuck?

This is not a Don Draper “why I’m quitting tobacco” moment, and this is not a criticism of the Help Authoring Tool vendors. Instead, it’s a proposal that, in some situations, what is delivered as online Help needs to be substantially modified to meet the needs of many modern technologies and users.

What’s wrong with Help? Help is often a “walled garden” in an Internet-era built on knowledge sharing and collaboration. Usability in relation to the user interface can be poor at times. It’s hard to measure its value and the ROI. Even its purpose can be vague to some project managers. Unfortunately, there’s often just not enough time to make significant improvements. We could go on.

Many users still get stuck, and many products still fail to work when they’re linked to one another. Words are still a key way of communicating with and teaching users. We still need to assist users, and we still need some form of Help. It could be a useful tool in “evangelism marketing”. It could do so much more. This is why we’re suggesting it’s time to take a strategic look at what Help we provide for when users get stuck, and how we provide it.

What do you think?