How does generating content change with the arrival of ChatGPT?
This post contains some musings about the impacts of generative AI on our relationship to our own knowledge and thinking.
I am no longer writing for humans alone – the machine is my audience as much as you are
One of the reasons I find blogging useful is that it helps me organise my thoughts and articulate them, bringing a bit of structure and rigour to what can be a messy situation in my brain.
Yet recently I’ve been considering my relationship with my writing in the context of working with generative AI. At work, we’ve been doing a 12-week set of sprints on the potential implications of this technology for the intellectual property system and the rights administered by IP Australia. One of our experiments has been with Chatbase, a platform that allows you to put a tailored overlay on top of OpenAI’s GPT, trained on specific content (currently up to 11 million characters’ worth).
One quick experiment (and I mean quick – it took about ten minutes for a colleague to set up and train it) was to train it on the content of IP Australia’s website, and then ask it questions. It was very good – it could answer nearly any general question about IP Australia’s work in an instant – but this raised some thoughts for me.
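For the curious, the pattern under the hood is roughly "put the relevant content into the prompt and ask the model to answer only from it". I don't know how Chatbase implements this internally, but a minimal sketch of that pattern against the OpenAI API might look like the following (the model name, prompt wording and helper function are my own illustrative assumptions, not what we actually used):

    # Rough sketch of the "answer questions from supplied content" pattern.
    # This is not how Chatbase works internally - it only illustrates the
    # general idea of grounding a GPT model in specific source text.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_from_content(source_text: str, question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; any chat model would do
            messages=[
                {"role": "system",
                 "content": "Answer only from the supplied content. "
                            "If the answer is not in it, say you don't know."},
                {"role": "user",
                 "content": f"Content:\n{source_text}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    # e.g. answer_from_content(website_text, "How long does a standard patent last?")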
First, it makes me think about who we are writing for – human or machine?
In recent years we’ve all, in effect, been writing for the search engines to some degree. Search engine optimisation has become a default thinking style for most of us, even if we don’t frame our writing that way. But large language models such as ChatGPT bring in a new factor – we’re writing for the machine as much as for other people. Indeed, we might soon be writing primarily for GPT, knowing that these engines may become the primary way many people interact with information. Our writing will be mediated through the lens of the machine, and that will likely mean we’ll need to think about, and care about, how the machine ingests information best.
For instance, a colleague identified one question that our Chatbase bot answered incorrectly (it was still good at making up answers when it didn’t really know), and pointed us to the correct information. That information was in a PDF, effectively buried multiple layers down in the website, which meant the bot hadn’t scraped and ingested it. Once we fed the assistant that specific file, it gave the correct answer – although even then the framing was perhaps not as clear as it could be.
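For what it's worth, "feeding the assistant that specific file" really just means getting the document's text into the content the bot is grounded on. Chatbase handles file uploads itself; if you were assembling the grounding text by hand, a rough sketch using the pypdf library (an assumption on my part, not part of our actual setup) might look like this:

    # Hypothetical sketch: pull the text out of a buried PDF so it can be
    # added to the content the assistant answers from. The file name is a
    # placeholder; pypdf is just one of several libraries that could do this.
    from pypdf import PdfReader

    reader = PdfReader("buried_fact_sheet.pdf")
    pdf_text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Then include the recovered text in the grounding content before asking
    # again, e.g. answer_from_content(website_text + "\n" + pdf_text, question)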
And that raised a second thought for me. For the first time we’ll have an arbiter of our writing that is hard to argue with about whether we’re saying what we meant or not.
One of the joys of writing is that we share our thoughts in different ways, with different styles and turns of phrase. However, a longstanding concern in the public sector has been whether we are writing in plain language – language that the broader public can clearly understand.
There are a number of systemic reasons why that is harder than it might sound. When you are immersed in the public sector culture of risk aversion, with the knowledge that clarity can be a weakness because nuance is sovereign, saying things bluntly is sometimes a dangerous business. Caveats come to us naturally because so often the real answer is “it depends”, or “if I say what I really mean, then I might expose myself to criticism”, or “if someone acts on what I say and doesn’t take into account the different nuances, I might get blamed for it”. A crystal-clear answer is easier to argue with, easier to apportion responsibility for, and easier to read intent into, when beneath the surface things may be very messy and complicated.
Large language models strip some of that away. They don’t care about our defences; they will give an answer based on what they’ve been fed. They will distil the information we provide into clear terms, whether we want them to or not. With a human audience we could always argue, “Oh, that’s not how it will be interpreted”, even when that was clearly not the case.
But with GPT or other LLMs as the audience, we will know in an unarguable way whether what we are writing is clear – because when the machine reflects it back to us, it will be very visible whether we wrote it clearly. If we don’t like the results, it will be our fault. If someone asks for the information to be reflected back in a manner accessible to someone in grade 8, and it doesn’t make sense, then we’ll know that we probably need to try again. I suspect this is going to be more confronting than it sounds – as I say, clarity is often not a public servant’s friend, even if that’s what people desperately want of us. Thirty or forty years of repeated refrains have not yet materially changed that – LLMs might finally be the thing that does.
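As a concrete illustration of that "reflect it back" test, you can ask the model to restate a passage at roughly a grade 8 reading level and check whether the restatement still says what you meant. A minimal sketch, with the prompt wording and model name as my own assumptions rather than a prescribed method:

    # Ask the model to restate a passage in plain language and compare the
    # result with what was intended - a simple "clarity mirror".
    from openai import OpenAI

    client = OpenAI()

    def reflect_back(passage: str, reading_level: str = "grade 8") -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"Restate the following passage in plain language "
                            f"suitable for a {reading_level} reader. Do not add "
                            f"anything that is not in the passage."},
                {"role": "user", "content": passage},
            ],
        )
        return response.choices[0].message.content

    # If the restatement misses or mangles the point, the passage (not the
    # reader) probably needs another pass.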
Third, this technology suggests that we’re going to have to rethink the way we curate our knowledge. When someone asks the large language model, or an assistant built on it and trained on specific information, which version of our information are we asking it about? Does it need to know the past, because that will provide valuable context, even if the information is not specific to right now? Or will we only want it trained on what’s current? If it’s only the contemporary information, does that risk inadvertently misleading people, without the ability to ask when and how things changed? A deliberate approach will need to be taken, and I suspect it should probably be different from the current perspective – but then I like having all the context, and others might not. Whose preferences should dominate? What do we feed the models on, when, and how often?
In these and other respects, I suggest LLMs are going to significantly influence our writing and content generation – likely in ways more far-reaching than what search engines have done to the content practices of the current Internet.
Generative AI as a reflective creative partner
I think the effects may also be deeply personal.
Another aspect of generative AI that has surprised me has been working with Midjourney. As part of our sprints we’ve been using generative AI tools to help illustrate our work. While there are still questions about ownership, moral rights and other things when it comes to the visual models, the thing that has interested me is how using it has changed my relationship with my thinking.
In the past, when I have created slides, the visual elements have often just been filler – whether stock images or otherwise. As someone who isn’t especially visual, I’ve viewed them instrumentally. But as I fed Midjourney my prompts, it started to change my dynamic with the content. As the machine gave me what I asked for, it helped me think more clearly about what it was I was actually saying, or trying to say. By reflecting back my thinking, it made explicit what might otherwise have stayed implicit to me.

For instance, this image, of a person and a living embodiment of a tool hand-in-hand, came about as I tried to convey the potential for partnership between these new tools and ourselves. After the initial set of prompts proved unsatisfactory, I thought more about what the intent was. As is often the case with my inexpert prompting of the Midjourney engine, there was an element of serendipity in what it gave me, and this was one such instance. The image helped shape my thoughts more clearly, as well as giving me one of my favourite pictures so far.
By reflecting my thinking back to me, in a way that another person often cannot or does not, the machine helps me crystallise my thoughts.
Something similar happens when I’m trying to think through other topics. Often I write out my rough thinking on paper to get started. Now I then feed those rough notes into the engine to get that reflective feedback quickly – to have my thoughts mirrored back more articulately, and to see whether they look like what I wanted or whether I’m on the wrong track.
Generative AI is going to change our relationship with ourselves
In these ways I think that generative AI is going to influence and shape our relationship with our own thoughts, and the way we then organise and share them with others. While we have always had an external audience, I think it will be easier to accept the machine’s reflection of our thinking than what others might tell us – a mirror that shows us our thoughts unmediated by other people’s perspectives and perceptions.

Thought mirror, by Midjourney (and me?)
Interesting. Poses a very tangible form of introspection if used with a solid direction
Great reflections. I’ve been using ChatGPT-4 a lot for analogies recently. It really causes me to stop and pick apart what I’m trying to say, to ensure the analogy gets to the heart of what I mean. Seeing the responses to my prompts, and what isn’t right about them, gets me closer each time to being able to articulate what is.
I like the idea that this might force clarity and simplicity similar to NZ gov and machine-readable laws. https://www.digital.govt.nz/blog/what-is-better-rules/
I wonder if taking nuance out is a chance to decrease bias? Assuming we are vigilant about not baking it in, in the first place.
Reflections from an LLM come without agenda or intent. It’s probably the most objective feedback you can get, though always remembering they are trained on (for now) mostly human-generated content. Once LLMs are trained on content generated by other LLMs or multimodal models, then all bets are off.