Podcast 148. Using AI in techcomm – 2024 survey results

In April 2023, we conducted a survey into the use of AI and ChatGPT in technical communications. We thought it would be interesting to carry out a similar survey again.

In this podcast, we talk about the results from the survey and describe what has changed over the last 12 months.


Transcript

This is the Cherryleaf podcast. Hello, my name is Ellis Pratt. I’m one of the directors at Cherryleaf. In this episode, we’re going to look at the results from a survey we did on the use of generative AI in technical writing.

We did a similar survey back in 2023, in May and June. The 2024 survey was in April, so about 11 months have passed between the two. So if you’re interested in what we found out in 2023 in detail, and some of the questions that technical writers had at that stage, then you might be interested in our podcast episode number 137, which covers that particular topic. In this episode, we’re going to look at the results from this year’s survey and compare them to the responses we got back in 2023, to see if there have been any changes and, if so, where those changes have been.

We presented our findings in a webinar, and in this podcast, what I’m going to do is go through the slides that we created for that webinar. It’s structured in five main sections:

A little bit about Cherryleaf.

About the survey respondents.

What the respondents said about how they’re using AI.

The questions they have.

And finally, a little bit about how to build up your skills in using AI.

Let’s start by explaining who Cherryleaf are. We’re a technical writing services and training company. We’re based in the UK, and we’ve written documentation for a few AI applications. We also offer an e-learning course on using generative AI in technical writing. And one of the reasons for running this survey was to help us keep that course up to date and relevant to technical writers.

Let’s talk about the responses. We got 96 responses this year; I think we had 93 in 2023, so the numbers are comparable. And we asked some questions about the people, the industries they worked in and so on.

The vast majority of people were technical writers involved in technical documentation. What about the size of the companies that they worked for? Well, for this year, 45% of the people who responded said they worked for a medium-sized company, 23% said they worked for a large company, 14% said they worked for an extra-large company, and then we had 10% for small businesses and 5% for the category we described as tiny.

And that is similar to the responses we got last year. Last year we had more respondents who worked for large companies and slightly fewer who worked for medium-sized companies, but maybe only a 5% difference between the two.

Another question we asked was: how familiar are you with AI and ChatGPT, and how they can be used in technical communication? The responses this year were:

46% said “I know a little, but not enough”.

39% said “I know a reasonable amount”.

12% said “I know a lot”.

And 1% said “I know nothing”.

Again, comparing it to 2023, there’s been a big increase in the number of people who said they know a little, but not enough.

And the number of people who know nothing has shrunk from about 10% down to just 1%.

The proportion of people who knew a lot last year was about 3%.

And the proportion who knew a reasonable amount was about 15%.

So there’s been a big increase in the number of people who know a little, know a reasonable amount or know a lot, which is understandable given that we’re 12 months further into generative AI and we’re all a lot wiser.

Our conclusion from that question is that nearly half of the people who replied feel they don’t know enough about generative AI at the moment.

The next set of questions was around what other technical authors are using it to do. One of the questions we asked was: do you use AI and ChatGPT in your role as a technical communicator? So who’s using it today?

28% said they’re not using it at all;

19% said I’ve done some experimenting;

16% said I use it occasionally;

19% said I use it sometimes; and

15% said I use it a lot.

So if we take “I use it sometimes” and “I use it a lot” and add those two together, we come up with a figure of 34%.

So about a third of the people who responded are using AI on a regular basis, and nearly 50% aren’t using it at all or have only done some experimenting.

Compared to the results that we got in 2023, that’s about a 10 to 15% increase in the number of people who are using AI in general.

So the number of people who aren’t using it at all has dropped by about 15%.

The number of people who use it a lot has increased by about 10 to 15%.

There’s a lot of talk of creating chatbots that can provide answers for people, where the answers are taken from the user documentation, instead of people having to go to a support line or having to read through the documentation themselves.

We had a question: Does your organisation have a chatbot that uses information from your user documentation?

So let me hit you with some more numbers.

The number of people who replied yes, and that chatbot can be used by anyone, was 9%.

The number of people who replied yes, but it can only be used by customers (i.e. the chatbot is behind the firewall), was 8%.

Add those numbers together and we get to 17%, so 17% of the people who replied have a chatbot that provides answers from the user documentation.

The rest of the answers were essentially no.

One of the responses was “No, but we have one in development”; that was 15%. Next year, we’d probably expect them to become people who answer yes to the question.

Another option was “No, but we plan to”, and that was 34%.

So a third are planning to have a chatbot, but it’s not yet in development.

So it’s going to be a while away yet.

And the final option that we gave people was “No, and we do not plan to have one”. That came in at 32%.

So a third of the people who replied have no plans to provide a chatbot that provides answers from the user documentation.

Those plans might change over time.
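For those wondering what sits behind that kind of chatbot, the common pattern is retrieval: find the passages of the documentation that are most relevant to the question, then pass them to a language model along with the question. Here’s a minimal sketch in Python, assuming a hypothetical call_llm() function standing in for whichever model provider you use; a real system would typically use embedding-based search rather than the simple keyword overlap shown here.

```python
# A minimal sketch of a documentation chatbot (retrieval + generation).
# call_llm() is a hypothetical placeholder for whichever model API you use.

from typing import List


def retrieve(question: str, passages: List[str], top_k: int = 3) -> List[str]:
    """Rank documentation passages by keyword overlap with the question."""
    terms = set(question.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(terms & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to your chosen language model."""
    raise NotImplementedError("Connect this to your model provider.")


def answer_from_docs(question: str, passages: List[str]) -> str:
    """Answer a question using only the retrieved documentation excerpts."""
    context = "\n\n".join(retrieve(question, passages))
    prompt = (
        "Answer the question using only the documentation excerpts below. "
        "If the answer is not covered, say so.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key design choice is that the model is constrained to the retrieved excerpts, which is what lets the chatbot answer from the documentation rather than from whatever it learned during training.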

Another question that we asked was how are people using AI?

And that fell into 11 main categories.

People are using it as a writing assistant, to research and explain concepts, to create code and handle technical tasks, to edit and proofread, to brainstorm, to generate content, to format and publish content, to summarise information, to automate workflows, to organise content, and to recommend related content.

Here are some of the responses in those categories.

So one person said

generating code samples, teaching me concepts.

Another said

I use it for creating overview/instructional videos, release notes, overview content, proofreading, creating a new topic based on competitor docs….

I use ChatGPT to rewrite some texts and a tool that summarizes ticket descriptions so we can start from that output for Release Notes.

Summarize, structure content

Investigating using Co-pilot on our website to assist users finding information from manuals

I have experimented with using AI to reword and summarise documents.

Using it for research if plain Google search does not yield sufficient results.

I use it to find out information about technical concepts. I use it to give me ideas about how to write something if I am struggling.

I use it to help me migrate content from other sources and format in Markdown – particularly tables.

Editing suggestions. Brainstorming. Fact checking.

I use ChatGPT as a brainstorming buddy.

Language improvements.

I use it to get a start on a concept in which I need to write acceptance criteria or to give me possible outlines for presentations on soft skills subjects.

for review, mostly, grammar checks, flow.

To generate markdown tables

To rephrase sentences.

Those are the types of responses that we got, which, as I said, we could categorise into those main sections.

Another section that we had was one where people could ask a question, and again we could categorise those questions.

We could categorise them into nine areas this year, and the main questions fell into these categories. The first was accuracy and reliability concerns.

Another was intellectual property and data security.

The third was integration and workflow.

The fourth was ethics.

Another set was around prompt engineering and customization.

There were some on limitations and human oversight.

Another category was use cases and applications.

Another was scepticism and taking a cautious approach.

And the final category of questions was around training and upskilling.

If we compare these with the questions that were asked in 2023, there were some asked this year that were also asked last year. Those were: ethics and law; data security; questions around limitations and whether AI is just hype; what other technical communicators are using it to do; productivity and efficiency; and the quality of outputs and publishing.

The questions that weren’t asked this year, but were asked in 2023, were around career applications; managing projects and standards; how you actually use AI; and whether it will change how we write and what we create.

Those questions weren’t asked this year.

Overall, we get a sense that technical authors have a more positive attitude towards generative AI than they had 12 months ago.

There were some new questions that were asked this year that weren’t asked in the previous year.

The three main types of questions were around integration and workflow; prompt engineering and customization; and training and upskilling.

And to clarify what we mean by integration and workflow: these were questions around how AI could be integrated into existing documentation workflows, and whether it could assist with tasks like writing, organising thoughts and updating document versions.

The prompt engineering and customization category covered questions about how to write and design prompts, questions about training AI models on specific data sets, like a style guide, and customising an AI model to suit specific needs.

And the questions around training and upskilling were related to training courses and certifications, specifically ones on prompt engineering, conversational design and chatbot development.
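To make the customisation idea a little more concrete: the simplest form doesn’t involve training a model at all: you just include your style rules in the prompt itself. The sketch below is a rough illustration of that, again using a hypothetical call_llm() function as a stand-in for whichever model you have access to, and made-up style rules rather than any particular style guide.

```python
# A minimal sketch of customising output with a style guide via the prompt.
# call_llm() is a hypothetical placeholder for your model provider's API,
# and STYLE_GUIDE is an invented example, not a real style guide.

STYLE_GUIDE = """\
- Use sentence case for headings.
- Write in the active voice and address the reader as "you".
- Spell out abbreviations the first time they appear.
"""


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to your chosen language model."""
    raise NotImplementedError("Connect this to your model provider.")


def rewrite_to_style(draft: str) -> str:
    """Ask the model to rewrite a draft so that it follows the style rules."""
    prompt = (
        "Rewrite the following draft so that it follows these style rules. "
        "Do not change the technical meaning.\n\n"
        f"Style rules:\n{STYLE_GUIDE}\nDraft:\n{draft}"
    )
    return call_llm(prompt)
```

Anything beyond that, such as training or fine-tuning a model on your own data sets, is a larger undertaking, which may be why so many of the questions in this category were about where to start.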

Let me read out some of the questions that people had; that will help provide more information and more colour.

Are we near the top or is this just the beginning?

How much better will GPT 5 be than 4?

How can we figure out if the system you’re using is keeping your data safe?

What prompts can I use to ask AI?

I struggle to see how the problem of hallucination can possibly be overcome if we don’t know whether what AI produces is true.

How can we publish it?

How can we, as writers, accelerate our day-to-day workflows by incorporating AI?

What measures can we take to ensure the chatbots do not upload proprietary information and add it to their large language models?

How to maintain access permissions to different levels of information.

How to identify the resources used to provide the answers.

How to incorporate AI into our current doc site so that customers can use AI with our content?  Also, when updating content with new features, using AI to identify areas of documentation that need updating.

How much does the source doc (data sources and ingestion) need to change to improve AI chatbot responses, or is this all about writing better prompts in the backend?

How can you navigate when your content gets flagged as AI generated, even though it was only AI assisted? That is, reviewed and polished with an AI.

The final section I want to talk about is training.

One of the reasons for doing this survey is because we have a training course in using generative AI in technical writing.

And as I mentioned earlier, we want to keep that course up to date and relevant.

We asked a new question this year; we hadn’t asked this in 2023.

Have you had any training in how to use AI?

What we found was that about 20% have had some formal training. The rest have had no formal training: they’ve either taught themselves or they have gaps in their knowledge.

So that means there’s an opportunity for us. There will be interest in the course that we provide.

So let’s summarise what we’ve discussed and gone through.

What we can see from the responses is that technical writers are starting to use generative AI. But it’s still early days.

Now, one of the reasons for that might be that only 20% have had any formal training, so they might lack the skills. But there are other barriers: policies and concerns about data, which mean that a significant minority of organisations don’t plan to introduce a chatbot for the documentation, and may not allow AI to be used in the process of creating traditional documentation.

What we’d like to do is keep running the survey on an annual basis and see what trends emerge over time.

Although the survey is closed, we’re still open to getting feedback and your thoughts. With the survey closed, the best way to do that is by e-mail; our e-mail address is info at cherryleaf.com. We have mentioned the training course once or twice in this episode. You’ll find that at cherryleaf.teachable.com.

So I hope you found this information useful and I look forward to talking to you in the near future.

 
