Which type of platform is best for developer documentation?

At the Write The Docs event in London last night, Gergely Imreh presented Resin.io’s approach to customer-driven docs – documentation as self-service support. Resin.io is a software company that provides Linux containers for the Internet of Things. It sees itself as a support-driven organisation, so documentation is very important to it.

The discussion at the end of the talk centred on which type of platform is best for developer documentation.

Resin.io uses an in-house system, based on Markdown and a flat-file publishing tool. They build pages from “partials” (re-usable chunks of information) to create “parametric information”: pages can be built to match different criteria – for example, a page for using Resin.io on a Raspberry Pi with Node.js. This provides an authoring environment that is easy for developers to use; it doesn’t require a database-driven CMS; and the content can be treated in a similar way to the code. The challenge with this type of system is getting it to scale. The “intelligence” of the system comes from storing content in folders and using scripts within pages, and as the content grows, they are finding it harder to manage.
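
To make the “parametric” idea concrete, here is a minimal sketch of this kind of flat-file build in Python. The folder layout, file names and fallback rules are my assumptions for illustration, not Resin.io’s actual build code:

```python
from pathlib import Path

# Assumed layout: partials/<topic>/<device>-<language>.md, with less
# specific fallbacks when no exact match exists for the parameters.
PARTIALS = Path("partials")

def load_partial(topic: str, device: str, language: str) -> str:
    """Return the most specific partial available for the given parameters."""
    candidates = [
        PARTIALS / topic / f"{device}-{language}.md",
        PARTIALS / topic / f"{device}.md",
        PARTIALS / topic / "generic.md",
    ]
    for path in candidates:
        if path.exists():
            return path.read_text()
    raise FileNotFoundError(f"no partial found for topic {topic!r}")

def build_page(topics: list[str], device: str, language: str) -> str:
    """Assemble one parametric page by concatenating the chosen partials."""
    return "\n\n".join(load_partial(t, device, language) for t in topics)

# For example, a "getting started on a Raspberry Pi with Node.js" page:
page = build_page(["install", "first-app"], device="raspberry-pi", language="nodejs")
```

The scaling problem the team described is visible even in this sketch: every new parameter multiplies the folders and fallback rules the writers have to keep in their heads.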

Gergely said he’d like to see whether a wiki-based system would work better. Content would be easier to edit, as pages would be more self-contained.

Kristof van Tomme suggested using a database-driven CMS. Pages would be built “on the fly” by the CMS. In this situation, the “intelligence” of the system comes from the metadata wrapped around each topic and the database software that manages the content. The downside is that it can be hard to move the content to another platform in the future. You also have to manage the database and protect the CMS from hacking.
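
A toy version of that architecture, using an in-memory SQLite database (the schema and field names are invented for the sketch):

```python
import sqlite3

# Topics are rows; the "intelligence" is the metadata wrapped around
# each one, and a page is assembled on the fly by querying it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE topics (body TEXT, audience TEXT, device TEXT)")
db.executemany(
    "INSERT INTO topics VALUES (?, ?, ?)",
    [
        ("Install the CLI...", "developer", "any"),
        ("Flash the SD card...", "developer", "raspberry-pi"),
        ("Billing overview...", "admin", "any"),
    ],
)

def build_page(audience: str, device: str) -> str:
    """Build a page on the fly from topics whose metadata matches."""
    rows = db.execute(
        "SELECT body FROM topics WHERE audience = ? AND device IN (?, 'any')",
        (audience, device),
    )
    return "\n\n".join(body for (body,) in rows)

print(build_page("developer", "raspberry-pi"))
```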

Another suggestion was to use a semantic language, such as AsciiDoc or DITA, to write the content. In this situation, the “intelligence” is placed in the topics and with the writers: they mark up sentences or paragraphs for each applicable parameter, such as audience and device. These can be published as flat files or managed in a database. This approach is scalable and tool-independent, but it requires much more work from the writers.
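
A sketch of where this puts the intelligence – in attributes the writer applies inside the topic. The audience/platform attribute names echo DITA’s profiling convention; the data structure itself is invented:

```python
# Each block of a topic carries profiling attributes set by the writer;
# publishing simply filters blocks against the build profile.
topic = [
    {"text": "Connect the device to your network.",
     "audience": "all", "platform": "all"},
    {"text": "Run raspi-config before the first boot.",
     "audience": "all", "platform": "raspberry-pi"},
    {"text": "Ask your administrator for credentials.",
     "audience": "end-user", "platform": "all"},
]

def publish(blocks, audience, platform):
    """Keep only the blocks whose attributes match the build profile."""
    return "\n\n".join(
        b["text"] for b in blocks
        if b["audience"] in ("all", audience)
        and b["platform"] in ("all", platform)
    )

print(publish(topic, audience="end-user", platform="raspberry-pi"))
```

Nothing here depends on a particular tool or database – that is the portability argument – but every block now needs a markup decision from the writer, which is the extra work.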

What’s best depends partly on your view of the problem. Is it an information design problem, a software problem, or a data management problem? There are pros and cons to each approach.

XML isn’t the only way to create semantic content

I’ve been on the road speaking at a conference this week, and I’ve been listening to a lot of presentations on technical communication. Many of these were on the importance of having structured, semantic content when you are dealing with large amounts of content that needs to be translated into different languages and published in many different ways. All of these presentations put forward XML-based systems as the solution.

However, XML isn’t the only method for creating semantic content. For example, AsciiDoc supports attributes, which can be used to add semantic descriptions to headings, paragraphs and whole documents. You can use conditions in RoboHelp and Flare to categorise content. You can also store content in a database.
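
As a rough illustration, here is a fragment of AsciiDoc that uses a genuine AsciiDoc conditional directive (ifdef::...[]), wrapped in a toy Python filter. The filter only mimics what a real processor such as Asciidoctor does at publish time:

```python
import re

SOURCE = """\
= Widget Guide

This paragraph is published for every audience.

ifdef::internal[]
This paragraph appears only in builds where the internal attribute is set.
endif::[]
"""

def filter_ifdefs(text: str, attributes: set[str]) -> str:
    """Drop ifdef blocks whose attribute is not set for this build."""
    out, skipping = [], False
    for line in text.splitlines():
        match = re.match(r"ifdef::([\w-]+)\[\]", line)
        if match:
            skipping = match.group(1) not in attributes
            continue
        if line.startswith("endif::"):
            skipping = False
            continue
        if not skipping:
            out.append(line)
    return "\n".join(out)

# An "internal" build keeps the conditional paragraph; omit the
# attribute and the paragraph disappears from the output.
print(filter_ifdefs(SOURCE, attributes={"internal"}))
```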

It’s sometimes useful to remember that XML isn’t the only way to create semantic content.


Do you need DITA?

Judging by social media last week, there were many strong opinions at the tekom tcworld conference about the DITA authoring standard and its associated tools. It seemed, as the philosopher Swift once said, “Haters gonna hate”, and, by inference, “Hypers gonna hype”.

Eliot Kimber provided an interesting summary in a post to the DITA users group forum (Trip Report: Tekom 2015, DITA vs Walled Garden CCMS Systems):

“For background, Germany has had several SGML- and XML-based component content management system products available since the mid 90’s, of which Schema is probably the best known. These systems use their own XML models and are highly optimized for the needs of the German machinery industry…These products are widely deployed in Germany and other European countries. DITA poses a problem for these products to the degree that they are not able to directly support DITA markup internally, for whatever reason, e.g., having been architected around a specific XML model such that supporting other models is difficult. So there is a clear and understandable tension between the vendors and happy users of these products and the adoption of DITA…
…It’s clear to me that DITA is gaining traction in Europe and, slowly, in Germany but that the DITA CCMS vendors will need to step up their game if they want to compete head-to-head against entrenched systems like Schema and Acolada. Likewise, the DITA community needs to do a better job of educating both tools vendors and potential DITA users if we expect them to be both accepting of DITA and successful in their attempts to implement and use it.”

This may have left those asking “Do I need DITA?” more confused than they were before the conference. So, we thought it might be useful to provide a rough guide to whether it’s worth adopting DITA:


You probably don’t need DITA if:

  • The way the content looks to the user is the most important issue.
  • You have fewer than four writers.
  • You write narrative content.
  • You have limited budgets for tools, training and migration.
  • You don’t have the time to deal with the issues around changing working methods.
  • Your content has a “shelf life” of less than two years.
  • You use a lot of graphics with annotations.
  • You need to customise outputs for individual documents (such as PDFs).
  • Migrating your existing content would be expensive.
  • You want the presentation layer embedded with the content layer.
  • You don’t want to work within strict rules regarding how topics are written (where content is marked up semantically).
  • You need to put JavaScript code directly inside topics.
  • You need to use the tools used by developers or the marketing department.
  • You want a simple information architecture.



You might need DITA if:

  • You need to write to (and enforce) a standard.
  • You need to localise content into many languages.
  • You have more than four writers.
  • You want to write semantically.
  • You need a more efficient authoring, reviewing and publishing process.
  • You create many variations of the same document.
  • You want intelligent content that can adapt to different users and contexts.
  • You are spending too much time on formatting content.
  • You need to re-use content in different projects and different contexts, and make those topics accessible to other writing teams who might want to re-use them.
  • You need to establish a controlled vocabulary and taxonomy.
  • You want content validated for consistency.
  • You want automated publishing.



You probably do need DITA if:

  • You need to share content with other organisations.
  • Your content will need to last more than 30 years.
  • You want content to be stored in an open data standard, independent of any tool.
  • You don’t want to be tied into a specific authoring tool, content management system or publishing/rendering engine.
  • You need transclusion (where an element can replace itself with the content of a like element elsewhere – see the sketch below) across a range of media.
  • You want to have a way of generalising back to a common standard.

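For readers who haven’t met transclusion before, here is a minimal sketch of the conref-style version of the idea in Python. The IDs and structure are invented; real DITA conref operates on XML elements:

```python
# A shared library of elements that other documents can transclude.
library = {
    "warning-mains-power": "Warning: disconnect mains power before opening the case.",
}

# A page in which one element is a reference rather than literal content.
page = [
    "Open the service panel.",
    {"conref": "warning-mains-power"},
    "Replace the filter.",
]

def resolve(elements, library):
    """Replace each reference element with the content it points at."""
    return [
        library[e["conref"]] if isinstance(e, dict) else e
        for e in elements
    ]

print("\n".join(resolve(page, library)))
```

Because the warning lives in one place, correcting it once corrects every document that transcludes it.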


Do you agree?

Please share your thoughts below.

All our online courses – at a discounted bundle price

You can now sign up for all of Cherryleaf’s popular online training courses at a discounted price!

This bundle provides you with access to all of Cherryleaf’s online training courses.

By purchasing this bundle, you’ll save £180 ex VAT. That’s a discount of 38%.

See: Cherryleaf training course bundle – all our online courses at a discounted price

Ted Nelson on the future of text

Mike Atherton, Lead Instructor at General Assembly London, tweeted a link to a 2011 interview with Ted Nelson on the future of text, document abstraction and transclusion.

Ted Nelson is one of the pioneers of information technology. He has been credited as the first person to use the words hypertext and hypermedia (although he denies this), as well as transclusion and virtuality.

Ted Nelson on the Future Of Text, Milde Norway, October 2011 from Frode Hegland on Vimeo.

It’s an interesting description of how content should be independent of format and media, so it can be portable, re-usable and presented in ways that best suit a reader’s needs.