We’ve uploaded the slides from Ellis’ lightning talk at February’s London Content Strategy Meet Up to SlideShare:
You’ll find a new case study on the Cherryleaf Web site: Helping HCC deal with the size and complexity of embedded systems documentation.
HCC Embedded is a high-tech software company that develops specialist software for deeply embedded systems, such as file systems, USB and networking software.
Dave Hughes, CEO of HCC, realised that with over 100 different modules to be documented, often with inter-dependent content and frequent updates, maintaining the documents in Microsoft Word had become unmanageable and untraceable.
HCC’s documentation assists users developing with the products, and it plays an important role in the marketing of HCC’s products to developers. This means keeping a consistent format and brand across all this material is critical to the organization.
For the rest of the case study, see Helping HCC deal with the size and complexity of embedded systems documentation.
According to business strategist Dr Alan Rae, it has been calculated that only 15% of the value of a company appears in the balance sheet. The rest is intangible value, which lies in four main areas:
If all of these are coded and formalised, then a financial justification can be made for the value created in the company.
So how can you code and formalise these areas? One way is to turn them into software applications, and the other is to record them. Your intangible value will be recorded in the policies and procedures, and in people’s knowledge that is captured and documented.
This means the better your content strategy and content management systems are, the more in control of your business’s intangible assets and intellectual property you’ll be.
One of the challenges when considering moving to a single sourcing authoring environment, such as DITA, is determining the Return on Investment. This often boils down to a key question: how much content can you actually re-use?
Organisations typically attempt to answer this question in a number of ways:
In an ideal world, you’d be able to use an application that could look at all your content and give you a report telling you where content is repeated. It could do the “heavy lifting” in the information audit automatically for you. This programmatic analysis of reuse within existing content, at an affordable cost, is now starting to become possible.
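To give a feel for what such programmatic reuse analysis involves, here is a minimal sketch in Python. It is an illustration only, not a product recommendation: it splits a couple of invented sample documents into sentences (naively, on full stops), normalises them, and reports any sentence that appears in more than one file.

```python
# A minimal sketch of reuse analysis: find sentences shared
# across documents. The document names and text are invented
# placeholders for illustration.
from collections import defaultdict
import re

def sentences(text):
    """Split text into normalised sentences (naive full-stop split)."""
    for s in re.split(r"[.?!]\s*", text):
        s = " ".join(s.lower().split())  # normalise case and whitespace
        if s:
            yield s

def find_reused(docs):
    """Return each sentence that appears in more than one document."""
    seen = defaultdict(set)
    for name, text in docs.items():
        for s in sentences(text):
            seen[s].add(name)
    return {s: names for s, names in seen.items() if len(names) > 1}

docs = {
    "module_a.txt": "Install the driver. Configure the port settings.",
    "module_b.txt": "Install the driver. Set the baud rate.",
}
reused = find_reused(docs)
# 'install the driver' is reported as shared by both documents
```

A real tool would need fuzzy matching to catch near-duplicates (sentences that, as in the Ofsted example below, “vary by just a few words”), but even exact matching across a documentation set can give a first estimate of the reuse figure.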
Today’s BBC News Web site has a piece on Ofsted re-using sentences in more than one school inspection report:
An investigation has begun into claims that Ofsted approved “cut and paste” inspection reports using identical sentences and phrases…both reports say: “Some teachers do not plan learning for pupils at their different levels of ability and marking is not leading to improvement.”
Both reports make comments about the low attainment of pupils in reading, writing and maths which vary by just a few words.
Clearly it’s wrong if a report has been put together with little thought, or if it contains information that is incorrect, irrelevant or inappropriate.
However, if the information the writer wants to convey has been said before, surely having access to a collection of re-usable sentences is a good thing?
Earlier this week, I was asked my opinion on whether a Documentation Manager was needed when the individual Technical Authors are embedded into Agile project teams.
My response was that a Documentation Manager mainly provides people management, project management, process management and content management. If a Technical Author is a member of a software project team, then that team’s Project Manager is probably providing the people management and the project management to the writer.
That leaves the need for someone to manage the processes and manage the content. I suggested managing the content could be done by someone with the role of Editor (or “Content Wrangler”). They might also look after the processes, or they could have another writer take on that responsibility.
It’s then a decision as to whether the organisation sees these roles as senior to the technical writing positions, or as a specialism and consequently on the same job grade.
It does leave the management of the writers’ career progression falling through the cracks, unfortunately.
How do others deal with this issue?
I was sent a review copy of Alan J. Porter’s latest book, The Content Pool: Leveraging your company’s largest hidden asset. It’s a well written book that’s ideal for anyone who is uncomfortable about the way their organisation creates and manages its written content, as well as anyone who simply wants to manage their content in better ways.
The book identifies and takes you through the key aspects of taking control of your content. I liked the book for what it left out, as much as what it covered. Content strategy and content management are huge topics, and it’s easy to feel overwhelmed by it all.
He could have covered topics such as the difference between short term and long term information, the provenance of information, and the attention economy (illustrated in the video below). However, he was right to leave those topics out and keep the focus on the main issues.
Alan raises key questions (such as: why are you producing this content?), and helps steer you to the answers. The book also contains many anecdotes and case studies that keep you engaged throughout. He keeps reminding you to check that any content system you implement meets your business goals.
However, being realistic, very few, if any, companies are going to leap immediately from undervaluing their content assets to having them overseen and cared for by the highest levels of the organization.
At the start of the book, he lists many examples of where poor content has had a major impact on an organization. Unfortunately, I don’t think these will persuade a CEO who thinks content is not that important to change their mind. I suspect they are more likely to change their mind if they felt their content was causing them to be left behind by their competitors.
The examples of Disney’s (mentioned in the book) and Coca-Cola’s approaches to content strategy are likely to be a more convincing argument – big companies using content to gain a strategic advantage. We’ve also found other motivating factors to be directors who feel they don’t fully understand how the organisation is doing (numbers can only tell you so much, and meeting people is time-consuming) and CEOs who feel staff are not ‘getting’ the organisation’s goals and direction.
The final chapter provides great advice on how to sell yourself, and the idea of content strategy, to the organisation.
The worst aspect of the book is its cover drawing – it’s the wrong image for a professional book such as this. So, don’t judge this book by its cover – it’s worth adding to your bookshelf.
The London 2012 Olympic Games are less than a year away, and we recently came across some information about the IT systems that will be supporting the games.
“Planning and implementing IT in any major project is challenging to say the least. Problems and delays can cost millions of pounds. The Olympic IT team cannot afford any delays and the reputation of an entire nation rests on its success in 2012.”
The Olympics IT system must meet the needs of athletes, the media and TV commentators, which means it must feed sports data to them in real time.
The International Olympics Committee (IOC) has defined a standard format for this data to which the relevant IT providers must conform; it’s called the Olympic Data Feed (ODF), and the IOC has created a Web site on the ODF that anyone can view.
According to the ODF Web site’s Help page, the idea of the Olympic Data Feed is to:
define a unified set of messages valid for all sports and several different systems, which can send sport information from the moment it is generated to a set of final customers.
It’s interesting to see these requirements, and how an Olympics is documented from an information perspective.
If you imagine being the person who has to read and understand this information, then you’d probably feel overwhelmed. The requirements are published as a series of Word and PDF documents, so you’d probably need to print them and lay them out on a table to check your system meets all the requirements in the different documents. It doesn’t look like the documents are produced using a single sourcing tool – as far as we can tell, they’ve been written in Word.
In the same way that the ODF data will benefit from being in a unified system, given the complexity of the Olympics, it’s likely the requirements documents would benefit from being in a unified system as well. For example: hyperlinks could be used to link related information contained in different documents, and common content could be shared across the Olympic and Paralympic games.
When you look at the sheer volume of information contained in the ODF website, it’s clear running, informing and documenting an Olympics is a big challenge!
This is an edited recording of a case study (by Malcolm Tullett of Risk and Safety Plus and Ellis Pratt of Cherryleaf) presented at the London Atlassian User Group meeting in April 2011. In this case study, we show how Cherryleaf created a system in Confluence software that dramatically reduced the time needed by Risk and Safety Plus to create risk reports.
In the UK, every building, apart from private single dwellings, needs to be assessed for fire risk every three years. To do this, a fire risk assessor will assess the building and write a report on their findings and recommendations. For offices, these reports can be 30 pages long, and it can take an assessor a full day to complete the report.
We’ve been working with a fire risk assessment firm to create a system for them that generates these reports in less time and in a more consistent way. Like many organisations, they’ve been using Microsoft Word to write the reports, and this can lead to a wide variation in the way the reports are presented.
Cherryleaf has been developing a report generator for them that significantly reduces the time needed to produce their reports – they believe they can reduce the time needed from a day to an hour – and, this week, they’ve started to print out the assessments they’ve been writing.
There were a number of potential software applications we could have used (for example, Author-it, Mindtouch and Confluence), but the best fit for this client was Confluence. Within this application, we created a master report ‘boilerplate’ that contained all the key information that should go into a fire risk assessment. This master boilerplate ensures there are no omissions in each report.
On the individual pages within the report, there are numerous drop-down sentences and blank text boxes for the assessors to choose from. There are also ‘variables’, for chunks of information that need to appear in more than one place in the report – they are embedded in the appropriate paragraphs. If you change the information contained in a variable, then this change is propagated to every place it appears in the document.
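As a rough illustration of how such variables behave, here is a sketch using Python’s standard string templating rather than Confluence’s own macros, with invented report text. The point is that the chunk of information is defined once, so a single change updates every paragraph that embeds it:

```python
# A sketch of 'variables' in a report template: define a value once,
# embed it in several paragraphs, and one change updates them all.
# The building name and date below are invented examples, and this is
# not how Confluence implements it internally.
from string import Template

template = Template(
    "This assessment covers $building.\n"
    "Escape routes in $building were inspected on $date.\n"
    "The responsible person for $building was notified on $date."
)

variables = {"building": "12 High Street", "date": "3 May 2011"}
report = template.substitute(variables)

# Change the variable once; all three sentences update on regeneration.
variables["building"] = "14 High Street"
updated_report = template.substitute(variables)
```

The benefit for the assessors is that a correction made in one place cannot leave a stale copy of the same fact elsewhere in a 30-page report.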
The project has produced a number of challenges. For the client, they have been looking hard at the content that goes into an assessment report – and how to create a single report that satisfies the many different standards for fire risk assessments. For us, we’ve had to create a system that works for people who might not be very technically literate. For example, people who have never uploaded an image into a document before. We’ve also had to create something that’s very flexible – suitable for assessments of small buildings such as bandstands, bus shelters and suchlike, to big buildings such as tower blocks.
The proof of the pudding is in the eating, as they say, and we’ll soon be able to see how much time the client will save. The signs are looking good, and there are likely to be further enhancements and developments to their system in the future.
At a rough estimate, there are 10 million buildings in the UK that need to be assessed for fire risk each year. If our system reduces, at a conservative estimate, the time needed to produce each report by 4 hours, then there’s the potential for it to save 40 million hours of writing time per year.
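The arithmetic behind that back-of-envelope figure, using the article’s own rough estimates rather than measured results:

```python
# Back-of-envelope calculation from the article's own rough estimates.
buildings_per_year = 10_000_000  # UK buildings needing assessment per year (rough estimate)
hours_saved_each = 4             # conservative saving per report (a day down to well under a day)

total_hours_saved = buildings_per_year * hours_saved_each  # 40 million hours per year
```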