Is your Technical Author a “Quant” or a “Pundit”?

The US Presidential elections have just ended, and the big winners were the “Quants” – the statisticians such as Nate Silver, who used statistical models of big data sets to accurately predict the electoral college vote results. In competition with the Quants were the “Pundits”. These were the commentators on politics, some of whom said they were using gut feel to make their predictions. Pretty much all of the Pundits failed to predict the results accurately.

It is our experience that a similar divide exists between Technical Publications teams.

There are the Pundits – they don’t know how many people read their documentation, and they don’t have accurate data on how people are using the content they have created. They are confident, however, that what they are producing is what their users need.

There are also the Quants. These are the Technical Publications teams that use (mostly) Web analytics and usability testing to measure the effectiveness of the content they are creating. They can track and measure the documentation they’ve provided to users. They have the ability to test and prove their assumptions, to tell management how many people were reading or downloading the documentation yesterday, and to know what those people were searching for and reading. They can quantify how many Support calls were avoided by people reading the documentation, and how many Support calls were made as a result of missing or poor documentation.
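As a rough illustration of the kind of measurement a “Quant” team might start with – a minimal sketch, assuming Web server access logs in the common combined format and documentation served under a hypothetical `/docs/` path (both are assumptions for the example, not anyone’s actual setup):

```python
# Sketch: count documentation page views from a Web server access log.
# Assumptions: combined log format, docs live under a hypothetical /docs/ path.
import re
from collections import Counter

SAMPLE_LOG = """\
203.0.113.5 - - [07/Nov/2012:10:01:02 +0000] "GET /docs/install.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
203.0.113.6 - - [07/Nov/2012:10:02:10 +0000] "GET /docs/search?q=proxy+settings HTTP/1.1" 200 2048 "-" "Mozilla/5.0"
203.0.113.7 - - [07/Nov/2012:10:03:45 +0000] "GET /docs/install.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
203.0.113.8 - - [07/Nov/2012:10:05:00 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"
"""

REQUEST = re.compile(r'"GET (\S+) HTTP')

def doc_page_counts(log_text):
    """Count requests for pages under /docs/, keyed by path (query strings dropped)."""
    counts = Counter()
    for line in log_text.splitlines():
        m = REQUEST.search(line)
        if m and m.group(1).startswith("/docs/"):
            counts[m.group(1).split("?")[0]] += 1
    return counts

print(doc_page_counts(SAMPLE_LOG).most_common())
```

Even this crude tally answers the Pundit-stumping questions: which pages were read yesterday, and how often. Real teams would typically use a packaged analytics tool rather than hand-rolled log parsing, but the underlying data is the same.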

So will the Quants win out over the Pundits in technical publications, as they did in politics? Is the “big data” available for every Technical Author to carry out this type of analysis?

Are Technical Authors, or their employers, comfortable analysing their work in this way?
