Podcast 133: What will ChatGPT mean for technical communication?



This is the Cherryleaf Podcast.
Hello and welcome to the Cherryleaf Podcast.
In this episode, well, it’s been about three months since ChatGPT was launched, and I thought it would be good to look at where things stand with what ChatGPT might mean for the future of technical communication.
We wrote a blog post and did some experiments back in December, as have others, and people have reflected on the pros and cons of it.
And I thought it would be good to take a look at it today.
So the openai.com website describes ChatGPT as “Optimizing Language Models for Dialogue”, and states:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

GPT stands for Generative Pre-trained Transformer.
And that is an Artificial Intelligence language model that uses deep learning to produce human-like text.
Essentially, if you give initial text as a prompt, it will generate text, and recently it has been incorporated into the Bing search engine.
So you can write a question into the search box for Bing, and it will then generate an answer using ChatGPT technology.
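To make the “give it a prompt and it generates text” idea concrete, here is a toy sketch in Python. This is not how ChatGPT works internally (GPT is a neural transformer, not a lookup table), and the training text is invented for the example, but it illustrates the autoregressive idea of repeatedly predicting the next word from what came before.

```python
import random

random.seed(0)

# Invented training text for the illustration.
TRAINING_TEXT = (
    "click the file tab then click the options button then click the save tab"
)

# Build a table of which words have been seen to follow which.
words = TRAINING_TEXT.split()
following = {}
for prev, nxt in zip(words, words[1:]):
    following.setdefault(prev, []).append(nxt)

def generate(prompt: str, length: int = 6) -> str:
    """Starting from the prompt, repeatedly sample a plausible next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("click"))
```

A large language model does the same thing at vastly greater scale, predicting the next token from billions of learned parameters rather than a small frequency table.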
People have experimented and been impressed with what it can create when it comes to academic essays, blog posts and articles, and technical communicators have asked is ChatGPT a blessing or a curse? Will it be a useful tool for them to use? Or something that might threaten their careers?
One of the experiments that we did back in December was to ask it to write a set of instructions for the autosave feature in Word for Microsoft 365.
So the question we put in was document how to turn on autosave in Word for Microsoft 365.
The response that ChatGPT gave was a seven step set of instructions:

1. Open Microsoft Word on your computer.
2. Click on the “File” tab in the top left corner of the screen.
3. In the left-hand menu, click on the “Options” option.
4. In the “Word Options” window that appears, click on the “Save” tab on the left side of the screen.
5. In the “Save” tab, check the box next to “Save AutoRecover information every” and enter the desired time interval for autosaving in the field next to it.
6. Click on the “OK” button to save the changes and close the “Word Options” window.
7. Your changes will now be automatically saved every time the specified time interval passes.

Now that’s quite impressive.
It’s not perfect.
There are things that, stylistically, technical authors would do differently.
But it’s passable.
The only problem is that there is actually a toggle on the menu called AutoSave, and if you switch it from off to on, autosave turns on.
So if you look at the help file for Word for Microsoft 365, you get a completely different answer.

You get

Turn on AutoSave
1. Toggle the AutoSave switch.
2. If the document isn’t currently saved to OneDrive or SharePoint Library, you’ll be prompted to choose the location.

Which is more succinct and better, whereas, as I said, the automatically generated answer is merely passable.
The other question that we asked it was: Document how to get accident details for accidents occurring in 2020 from the Transport for London Unified API.

This is an API that we use on our API e-learning course for some of the exercises, so we’re familiar with it.
It has a Swagger file with reference content, and from that ChatGPT was able to infer what to do and provide what, at first sight, looks like a plausible answer.
So it was:

To obtain accident details for accidents occurring in 2020 from the Transport for London Unified API, follow these steps:
1. Go to the Transport for London Unified API website (https://api.tfl.gov.uk/).
2. Sign up for an API key by clicking on the “Sign up” button on the homepage and filling out the necessary information.
3. Once you have received your API key, go to the documentation page and navigate to the “Accidents and Casualties” section.
4. Select the “Accidents and Casualties” endpoint from the list of available endpoints.
5. In the endpoint documentation, scroll down to the “Parameters” section and locate the “date” parameter.
6. Add the parameter “date=2020” to the endpoint URL, along with your API key. For example: https://api.tfl.gov.uk/Accidents/Casualties?date=2020&app_id=[YOUR_APP_ID]&app_key=[YOUR_APP_KEY]

As I said, impressive at first sight.
The problem is that the API key functionality has been disabled on that website.
You don’t need an API key, you can just put in a query and you can get responses from the API.
That information is provided in the forum for the website.
The documentation hasn’t been updated to reflect that that has changed.

So it can give a good answer based on the documentation that it finds.
But you would still need to test the response to check that it’s accurate.
And in that case, you would find that actually the information is incorrect and that it would need to be amended to exclude the information about needing an API key.
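As a sketch, the corrected request could look like the following. The `AccidentStats/{year}` endpoint path is our assumption based on the TfL reference documentation, and, as noted above, no API key is needed for basic queries.

```python
from urllib.parse import urljoin

BASE_URL = "https://api.tfl.gov.uk/"

def accident_stats_url(year: int) -> str:
    """Build the request URL for accident details for a given year."""
    return urljoin(BASE_URL, f"AccidentStats/{year}")

url = accident_stats_url(2020)
print(url)  # https://api.tfl.gov.uk/AccidentStats/2020

# To actually fetch the data (requires network access):
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     accidents = json.load(resp)
```

The point of the exercise stands either way: whatever an AI drafts from the documentation, you still need to run the request yourself to confirm it works.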

We came across two other organisations that have also done similar experiments.
One is Information Content Management Solutions, which is a German technical writing company.
And they did a blog post which is
Künstliche Intelligenz in der Technischen Redaktion: Fluch oder Segen?
Artificial intelligence in technical writing: a blessing or a curse?
And they asked the question:

Wie bediene ich meine Waschmaschine?

which means

How do I use my washing machine?

The response they got was an answer, but it wasn’t specific to any particular washing machine, and it told them to refer to the operating instructions for that particular machine.
So they asked a more specific question.

How do I wash my white laundry with the Siemens iQ500 washing machine?

Even with that context, ChatGPT wasn’t able to provide specific information, and again referred the user to the manufacturer’s user manual.
So as it stands, the ChatGPT model would need training on that material for it to answer a question as specific as that.
Parson AG, another German technical writing company, also did some experiments and posted an article on their website:

Wie gut ist Künstliche Intelligenz in der Technischen Dokumentation?

in English

How good is AI in technical documentation?

The tasks they asked ChatGPT to do were:
1. Rework some unstructured content.
The question they asked was: restructure this text and make it a step-by-step instruction. They then supplied the content they wanted converted into instructional content, a numbered list.
Task two was to ask for advice relating to planning; and that was:

Give me a list of topics to explain to people how to use the browser software Microsoft Edge.

The third task they asked it to do was to create some content; and the question they posed that time round was:

Explain how to make a screenshot and insert it into Word.

And we will provide a link in the show notes to that post, so you can see the results that ChatGPT created.

The conclusions we and others have come to are pretty similar.

That you get a decent result, but it can be a bit wordy.
And it can be incorrect.
The inaccuracies come about because it is using existing text to create its own content, and for ChatGPT that content ends at 2021.
It needs source text to work properly, and the narrower and more specialised the field, the less source text there is.
And there have been reports from people who have tested it on things like academic work and instructions that, where information is lacking, it will make the information up.
It would generate academic references to papers that don’t exist, or suggest that people download updates to software that have never been written.
So it’s not a threat to technical authors at the moment, unless it doesn’t matter whether your information is accurate or not.
Now that accuracy may improve over time.
The model that Google will be using for its chatbot will have more recent data and may have a wider data set as well.
And basing the answers on other people’s content can have copyright implications as well.
Does the chatbot have the legal right to use that content, rephrasing and repurposing it in the way that it does?
But it still does have potential and could still well affect the world of technical communication.

One of the people that commented on our blog post back in December was Mark Baker.
Let me read out what he said, and then follow up from that. He replied:

I think if we look at this as a tool for tech writers we miss the broader implications. If it really works, this is a tool for end users. Type your technical question into ChatGPT like you would type it into Google and it will generate the answer just for you. In other words, if it works, it is a Stack Exchange killer. I expect to see Google incorporating this into search. If it works. Because the question remains, can it document something that has never been documented before?

And Mark has identified a key point.
This is probably a tool for end users.
We’ve seen it now introduced into Bing, so people will start to ask more narrative, more complex questions.
And they will get a fuller answer from the search engines without having to click through to another website.
So this may mean that end users never read the content that you’ve created and published on your website.
They may be reading instead a summary created by Google or Bing on the search engine.
Now that has implications because it may be that the summary generated automatically and presented on the search engine isn’t complete.
It might be inaccurate.
And it also opens up the risk of bad actors: that others create content that the search engines use in their models, in their ChatGPT answers, to provide the information that the user asks for.
Perhaps we might see instructions on how to use a machine, and within those instructions comments that using a competitor’s product would actually provide a better solution.
So it may be that organisations lose control of being the single source of truth because users go less to websites and rely more and more on just the search engine results and the summary provided by those search engines.
One reaction might be to hide instructional content behind a firewall so that the search engines and the AI tools don’t have any source content upon which to base their answers.
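A lighter-weight (and purely voluntary) alternative to a full login wall is a robots.txt file asking crawlers not to index the content. The `/docs/` path below is illustrative, and you would need to check each vendor’s documented crawler name; well-behaved crawlers honour these rules, but nothing enforces them.

```text
# robots.txt: ask crawlers not to index the documentation section
User-agent: Googlebot
Disallow: /docs/

User-agent: bingbot
Disallow: /docs/
```

A firewall or login requirement remains the only way to guarantee the content is unavailable to crawlers.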
Another potential consideration is that Google and Microsoft with Bing and other search engines might decide to penalise and down rank information that’s been created automatically via AI.
What we’ve seen in recent months has been a flood of books created using AI arriving on Amazon and causing challenges for them.
And also people using ChatGPT to generate articles and blog posts for their websites.
Let’s assume that they are not penalised by the search engines.
And then we can think about some of the potential uses that a technical writer might have for ChatGPT, and they are things like creating a first draft, which we then check, verify and make sure is accurate; writing blog posts; and writing code samples.
Or summarising documents so that we get the essence of the information.
And I think another potential area where it could be useful is if it were able to take a video or video walk through, for example, that might have been created by a subject matter expert and convert that into text and specifically convert that into a series of numbered steps.
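As a very rough sketch of that last idea, assuming you already have a text transcript of the video (for example from a speech-to-text tool), turning it into numbered steps could start as simply as splitting sentences and numbering them; a real workflow would need far more filtering of asides and repetition.

```python
import re

def transcript_to_steps(transcript: str) -> str:
    """Split a spoken walkthrough into a numbered list of steps."""
    sentences = [
        s.strip()
        for s in re.split(r"(?<=[.!?])\s+", transcript.strip())
        if s.strip()
    ]
    return "\n".join(f"{i}. {s}" for i, s in enumerate(sentences, start=1))

demo = "Open the File menu. Click Options. Select the Save tab."
print(transcript_to_steps(demo))
```

In practice this is where a language model could add value, rewriting each spoken fragment into a crisp imperative step, with a technical author checking the result.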
Parson AG also suggested two other potential uses.
One was to help non-native speakers in writing English text.
And another was to reformulate texts for different target audiences by changing the tone and voice of the article.
Another consideration is that you don’t need to rely on ChatGPT to create chat-like content.
One of our past clients, called Pinecone, has produced some demonstrations and written some blog posts about how you can do this.
In one of their blog posts, they wrote that, using generative question answering, they can sculpt human-like interaction with machines for information retrieval:

Imagine a Google that can answer your questions with an intelligent and insightful summary based on the top 20 pages, highlighting key points and information sources. The technology available today already makes this possible and is surprisingly easy.

The rest of the article then goes on to explain how to do retrieval-augmented generative question answering and how to implement it.
There’s a walkthrough video on YouTube that demonstrates this, and it’s worth watching.
It’s quite impressive.
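To illustrate the retrieval half of that pipeline, here is a minimal bag-of-words sketch in Python: find the most relevant document for a question, then build a prompt that asks a generative model to answer using only that document. The corpus and prompt wording are invented for the example; real systems, such as the one Pinecone describes, use dense vector embeddings rather than word counts.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for indexed documentation pages.
DOCS = [
    "To turn on autosave, toggle the AutoSave switch in Word.",
    "The TfL Unified API returns accident statistics as JSON.",
    "Use the File menu to open the Word options window.",
]

def bag_of_words(text: str) -> Counter:
    """Count the words in a text, ignoring case and punctuation."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(question: str) -> str:
    """Return the document most similar to the question."""
    q = bag_of_words(question)
    return max(DOCS, key=lambda d: cosine(q, bag_of_words(d)))

def build_prompt(question: str) -> str:
    """Combine the retrieved context and the question into a model prompt."""
    return (
        f"Answer using only this context:\n{retrieve(question)}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How do I turn on autosave?"))
```

The generation step would then pass that prompt to a language model, which is what grounds the answer in your own documentation rather than whatever the model memorised in training.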
Another reply on our blog post was from John Mogilev.
And he wrote

Cameron Byrd’s team [AIXI] and Peeter Kivestu’s team [Mitek] have done work integrating AI-based workflows to S1000D and iSpec documentation systems.

S1000D is a standard that’s used in the aerospace industry and elsewhere.

And he wrote

Today, it’s not unusual for document engineers / architects / tools specialists to use AI technologies for tech data analysis, and the output from that gets more like natural language all the time. Tomorrow’s technical writer will need something like a “Photoshop for AIs”, that can bridge the gap between the highly technical AI interface of today [1] and a working UI that can manage the AIs as they read, write, and digest technical information. “Manage” includes oversight, possibly with an NLP query interface. Orange (ML) is a Python-based application that is a step in this direction, but it still needs to be made much more usable and with more focus on Natural Language.

Something very critical to note: AIs, even the most advanced ones, still have trouble with cause/effect relationships and basic inference. To illustrate the latter, in conversation with ChatGPT about the movie “Jaws”, ChatGPT was unable to infer that Quint was in the Navy. As ChatGPT continues to train itself, it will probably fix this specific mistake, but you can see how this can lead to overfitting.

Now, we don’t get involved in S1000D-related projects ourselves, but I thought it was quite an interesting comment.

If you’re interested in the ethical aspects of ChatGPT, then you might be interested in one of the episodes on The 10 Minute Techcomm Podcast, which was with Justin McGill and was about AI writing.
And on that episode they talked about what AI means for the future of writing and the ethical issues surrounding AI content in general.

So where we are after three months of ChatGPT being available in the public domain suggests that it isn’t a tool for creating new content and replacing technical communicators.

But it will be a tool that will change the way in which users get information and also what information they are given.

And that in itself will have an effect on the profession of technical communication.

I wonder if we have missed anything in looking for information on what the impact might be.
If you’ve seen something that we’ve missed, let us know.
You can contact us by e-mail, info at cherryleaf.com.
And you can contact me on the socials @ellispratt.
And if you’re interested in our technical writing services, our projects team, the recruitment service we offer and the training courses that we provide, you’ll find all of that on our website, cherryleaf.com.
So thank you for listening and until the next time.
