Podcast 138: Generative AI and Techcomm, lessons learnt so far

In this episode of the Cherryleaf Podcast, we reflect on the lessons learnt so far as a result of developing our upcoming elearning course on using Generative AI in technical communication.

Transcript

This is the Cherryleaf Podcast.
Hello again.
Welcome to the Cherryleaf Podcast.
In this episode, we’re going to reflect on things that we’ve learned as a consequence of developing a training course on using artificial intelligence, or generative artificial intelligence, in technical communication.
And to give you an update on that course: we have almost three hours of video content uploaded to the training platform, along with the exercises and the notes.
The final bit we need to do is the section on large language models.
That’s about how you can add your content to a large language model so you can create chatbots that answer user queries about your product.
Also, a conclusion.
And we need to do a little bit about templates: How you can use a large language model or an AI system to get content into a template or a structure and create content that way.
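As an aside for readers of the transcript, the chatbot approach mentioned above usually amounts to a retrieval step followed by a prompt: find the help topic most relevant to the user's question, then hand it to the model as context. A toy Python sketch of the retrieval half, with invented topic titles (a real system would typically use embedding-based search and an actual language model):

```python
import re

# Toy keyword-overlap retrieval: pick the help topic that best matches a
# user's question; in a real chatbot the retrieved topic would then be
# passed to the language model as context for its answer.
def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def score(question, topic):
    return len(tokens(question) & tokens(topic))

topics = [
    "How to export a report to PDF",
    "How to reset your password",
    "Troubleshooting printer connections",
]

question = "I forgot my password, how do I reset it?"
best = max(topics, key=lambda t: score(question, t))
print(best)  # the best-matching topic becomes context in the prompt
```

This is only a sketch of the idea; production systems replace the keyword overlap with semantic search over the documentation.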
Let’s talk about what we’ve learned so far.
And the first thing I would like to talk about is the change in the way in which content is written.
This week we had probably the first instance of a student sending in exercises for our technical writing course where it looked, or felt, very much like some of the exercises had been written by the student using a tool like ChatGPT. We’ve been doing a lot of research, writing, and development in creating the training content for our Generative AI in Techcomm course.
And so we’re pretty tuned into spotting what content has been created by a tool like ChatGPT.
The way that they tend to write the content is like it’s an essay.
There tends to be an introduction, which is about how happy they are to provide you with the information, or about how wise you are to ask about this.
Then there’s the middle section, which is the information that you’ve requested.
And then finally, there’s a conclusion which can be a summary or some warnings about it.
In the case of the exercises that the students sent in, we have an exercise where we ask somebody to explain a thing to somebody who’s not familiar with it.
And the way that they responded was as if it were an e-mail, with an introduction saying “Hello, how are you? I understand you’re interested in this topic. I’d be happy to tell you more.”
Now, we don’t specifically ask in the exercise to write it as if it were instructions, information, or a conceptual topic in a help file.
We don’t say, don’t write it as an e-mail, but it’s an unusual approach to take and it suggests that it was written by a generative AI tool.
Another exercise was to write some instructions. And what we noticed with that was that, at the end of the instructions, there was a topic called “Conclusion”, which summarised what had happened.
You tend to see that in essays, but not usually within instructions within a user guide.
Often users are very stressed, they just want to get to the information very quickly.
Find out how to do something, close the book or the help file and get on with the job.
You might, at the end of a set of instructions, describe the outcome.
What should happen as a result of having done the thing.
And you might potentially have links to more information if they wanted to delve deeper into the topic.
Or what to do next.
But it’s very unusual.
In fact, I don’t think I’ve ever seen an instructional type of topic, a task topic, with a conclusion at the end.
There’s nothing fundamentally wrong with using a tool like ChatGPT in the same way that people get inspiration by looking at Wikipedia articles or code samples on Stack Overflow, or they use a spell checker to check the content that they’ve written.
But in all of those cases, that content should be treated as a first draft.
It needs to be checked for quality.
If you use a spell checker, it won’t pick up the difference between form and from.
So if you don’t go through and read it yourself, you may end up with content that makes no sense at all to the end user.
But when it’s so easy to create information using a tool like ChatGPT or Claude or Bard or LLaMa, will we end up with more content looking like that?
Will we see a reduction in the quality of instructional content, in the same way that, if you look on the web, a lot of content is optimised for SEO, search engine optimisation?
And so you get recipes or how-to blog posts that are very repetitive.
They have lots of words in there to get higher in the rankings, and you have to read through a lot of cruft to get to the actual meat: the instructions, the useful bits in the article.
It will depend on what users will put up with.
And also if that results in more calls to the support desk because they can’t find the information in the help topics.
Or it leads to reduced sales because people find it hard to use the product.
And that leads on to the second point.
Which is that prompt engineering will become very important.
In fact, it will become a career.
You’re starting to see adverts for jobs already advertising roles as a prompt engineer.
Writing an effective prompt is a skill.
It involves having the ability to use linguistics to state the correct context.
It also requires you to be able to write the instructions clearly.
In fact, it’s a role that Technical Writers, Technical Authors, will be very well suited to do.
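For readers of the transcript, an effective prompt tends to state the role, the context, the task, and the required output format explicitly. A minimal illustrative sketch, with the product name and details invented:

```
You are a technical writer. Context: you are documenting the export
feature of AcmeApp (a hypothetical product) for first-time users.
Task: write step-by-step instructions for exporting a report to PDF.
Format: a numbered list of no more than eight steps, imperative mood,
no introduction and no conclusion.
```

Constraining the format, as in the last line, is one way to avoid the essay-like introductions and conclusions described earlier.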
You’ll need to understand the opportunity and grasp it.
The next lesson or point is that things are moving so quickly.
This is especially true with images.
We started with the ability to write some text and instructions, and for an AI system to create an image.
And now we can take an image and ask the AI to make it bigger, to fill in the background, the bits that are off the canvas, as it were.
We can also take an image and remove certain parts of it, and it will infill with replacement information. That could be to replace a person with a wall or a painting.
And we can create videos by writing some text and the AI system generating a video or taking an image and animating it.
At the moment those videos are about four seconds long, but undoubtedly, they will get longer.
With regard to large language models, we saw GPT-3 at the beginning, and 3.5, and then GPT-4 came along.
And now we have LLaMa 2.
We have PaLM 2 from Google.
We have Claude from Anthropic, and launched this week is Microsoft Gorilla.
And that’s one that could have a very big impact on the technical writing space.
Gorilla has the ability to make API calls and to understand the data that’s retrieved from those, and then present that information in a meaningful way.
Which means that the ability to provide live data on the fly becomes more achievable.
We’re starting to get a better idea of where AI can help the technical writer in being more efficient.
There are certain capabilities that are proving to be useful.
Things like the ability to proofread a text, and to do some of the mundane tasks that Technical Authors sometimes have to do:
To convert from one format to another.
To take an interview or transcript and to provide a summary of the key points.
To take unstructured text and to slot it into a template.
To highlight changes between two different pieces of text.
And to write basic code samples.
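As an illustration of the "highlight changes" point for transcript readers, that kind of comparison can also be done locally with standard tooling. A minimal Python sketch using the standard library's difflib, with an invented pair of sentences:

```python
import difflib

# Two versions of the same lines from a user guide (invented example).
old = ["Click Export to save the report.", "The file is saved as PDF."]
new = ["Click Export to save the report.", "The file is saved as a PDF file."]

# unified_diff marks removed lines with "-" and added lines with "+";
# lineterm="" because our strings have no trailing newlines.
diff = list(difflib.unified_diff(old, new, lineterm=""))
for line in diff:
    print(line)
```

An AI assistant can go a step further and summarise such a diff in plain English, but a plain diff like this is often enough to spot what changed.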
It’s still early days to fully understand how it can be used to create better deliverables for end users.
And it’s still early days to work out how it can be best used to run automatically, to chain prompts together, to do what’s called auto AI, or autonomous AI.
Other areas that are still undetermined relate to copyright.
The large language models are huge.
And they seem to have taken a lot of information from various websites.
Whether they were legally entitled to do that, it’s still up in the air.
There’s talk of restricting that, but obviously that places those companies that have already done it at an advantage.
They’ll be able to close off the market to competitors.
And I think we’re all still learning about data security.
And the need to protect chatbots and large language models from what are called injection, or prompt injection, attacks. And also about the privacy and security of data stored in a large language model, or of questions asked in prompts.
And what about the career implications?
For technical communicators, does it threaten the profession?
Doing the research and putting together the training course suggests that AI is more of a co-pilot, a tool for creating a first draft, than a complete replacement of a Technical Author.
For large language models to work, they need source content.
Somebody has to write it, and Technical Authors, Technical Writers seem to be a good candidate to carry on doing that.
Perhaps the impact will be like Excel had on the accounting profession.
It used to be that it would take a day to make a change to a paper spreadsheet, to change the numbers if you wanted to know what would happen if you cut costs by 10% or increased sales by 15%.
PC spreadsheets came along, and you could change a figure in a formula and get the answer within minutes.
That didn’t really result in a reduction in the number of accountants that are out there.
It changed the role.
They ended up doing more sophisticated planning, budget planning and future modelling using Excel.
There are certainly fewer bookkeepers than there were in the past, and some of the mundane tasks were replaced by computers.
And again within the role of writing content, some of the mundane activities may be doable by artificial intelligence.
Having said that, a new tool like Microsoft Gorilla might upend all of that.
That’s one of the interesting and exciting things about the current state that we’re in.
The need for good source content for the large language models to operate on, and the need to craft effective prompts, means that we’re, overall, positive about the opportunities that lie ahead for technical communicators.
We live in interesting times.
So those are the lessons we’ve learned so far.
A short podcast episode for a sunny summer’s day.
We’ve got some interviews scheduled for upcoming podcasts into the autumn, which will be longer.
If you want more information on Cherryleaf, and the AI course, then our website is a good place to start. www.cherryleaf.com.
I’m also on Twitter and I’ll carry on calling it Twitter.
You can find me on that.
Also, Mastodon too, and LinkedIn.
Again, if you search for Ellis Pratt, you should find me.
Thank you for listening.

 

Categories: AI
