As the internet helped to disintermediate many industries, might large language models such as ChatGPT ‘disintermediate’ the voice of governments and public sectors? And if so, what might that mean for the work done in governments around communications, service design and behavioural insights?
Disintermediation from the internet, disintermediation from generative AI?

Seeing to the heart of things (Midjourney)
The arrival of the internet was a big shock to many traditional supply chains. The ability to access information directly brought efficiencies and reduced search costs. The power of search and aggregation meant that it was easier for many customers to connect with sellers directly, cutting out the middleman. For instance, disintermediation shook up the retail and travel industries.
Generative AI moves the internet beyond search – it brings direct knowledge and action. Ask a large language model and it can provide information directly to you, tailored if necessary to your level of readiness or understanding. Ask ChatGPT to tell you about something and then, if need be, ask it to adjust the answer to what you understand, or simply ask it to give you the end result you need. For instance, ask it to help you create a communications plan.
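To make that concrete, here is a minimal sketch of that kind of back-and-forth using the official OpenAI Python client – the model name and prompts are illustrative placeholders, and any chat-capable LLM would serve just as well:

```python
# A minimal sketch of the back-and-forth described above, using the
# official `openai` Python client (assumes OPENAI_API_KEY is set in the
# environment; the model name and prompts are illustrative placeholders).
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user",
     "content": "Help me create a communications plan for a new "
                "small-business support programme."},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Continue the same conversation, asking for the answer to be adjusted
# to the reader's level of understanding.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Rewrite that for someone who has never written "
                            "a communications plan before."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```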
Regardless of where the information and insight have come from, you may not care – as long as what you get is sufficient for what you need. The knowledge can be separated from the context and the perspective of those who arrived at it. More than mere aggregation or distillation, such as a Wikipedia article, LLMs can disintermediate insight from its source. And not just that – by pulling out the insights you want, tailored to your context, they disintermediate the voice and the nuance behind that information.
What’s more, there are a number of tools that allow you to use LLMs to power ‘AI agents’ that can interact and take action based on prompts and guidance. Such a tool could be used to pull together information or even interact with forms and services on websites.
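Under the hood, such agents are typically a loop in which the model proposes the next action and a harness executes it. A bare-bones sketch follows; the ask_llm() and fetch_page() helpers are hypothetical stand-ins, not any real framework:

```python
# Bare-bones sketch of an 'AI agent' loop: the LLM proposes the next action,
# the harness executes it and feeds the result back. The ask_llm() and
# fetch_page() helpers are hypothetical stand-ins, not a real framework.
def ask_llm(history: list[str]) -> str:
    """Placeholder for a call to an LLM API (as in the earlier sketch)."""
    raise NotImplementedError

def fetch_page(url: str) -> str:
    """Placeholder tool: retrieve the text of a web page or online form."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model is prompted to reply "FETCH <url>" or "DONE <answer>".
        action = ask_llm(history)
        if action.startswith("DONE"):
            return action.removeprefix("DONE").strip()
        if action.startswith("FETCH"):
            url = action.removeprefix("FETCH").strip()
            history.append(f"Result: {fetch_page(url)[:2000]}")
    return "No answer within the step limit."
```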
LLMs thus move the internet beyond search, and into a new realm: one of answers and action.
While there are obvious limitations at the moment – accuracy, the risk of AI hallucinations, inconsistency, a lack of transparency, imperfect fidelity between your request and what you get – even with what is available now, there is a glimpse of this new form of disintermediation: cutting out the middleman (content, context and process) and connecting people with the outcomes (getting the answer, doing the thing).
The public sector as intermediary
If we take the position that government and the public sector are, in many ways, an information industry – i.e. that they deal primarily with the sense-making, processing, analysing, distilling and packaging of information – then a new technology that can create answers, and that can help people complete processes without working through them, offers potential disruption. Disintermediation would look different to what has gone before, but it may be equally discomfiting.
As an information industry, government and the public sector spend a lot of time and effort trying to ensure their messaging is effective. The last few decades have seen the growth of fields such as user experience, information design, service design and behavioural insights, in recognition of the need to communicate effectively. And as the internet and other technologies have fragmented the media landscape, governments have had to put more effort into getting their messaging out there, competing for attention and working out how to extend the reach and impact of their messages.
While it is still early days, I see the potential for LLMs to make this even more challenging. In some recent work I was involved in, we were looking at the potential ramifications of generative AI for the IP system, and part of that was experimenting with a range of tools. We used one of the GPT overlay tools, ChatBase, which allows you to focus the GPT model on specific content you provide. We tested it with the content of our agency’s website, which seemed to work well and reflected our content back to us.
As an additional experiment, we used one instance of the tool to ingest the material we had published on generative AI. The result, ‘Prompty’, might seem like a bit of a gimmick, but it was a means of testing the technology and thinking about where things might be heading and how people might access content in the future.
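ChatBase’s internals aren’t public, but the basic pattern such overlay tools rely on – constraining the model to answer from content you supply – can be sketched roughly as follows. The page texts and the naive keyword retrieval here are purely illustrative; real tools typically use embedding-based search:

```python
# Rough sketch of 'focusing' an LLM on your own content, as overlay tools
# like ChatBase do: retrieve relevant pages, then instruct the model to
# answer only from them. The page texts and naive keyword retrieval are
# purely illustrative; real tools typically use embedding search.
from openai import OpenAI

client = OpenAI()

SITE_PAGES = {
    "trade marks": "Full text of the agency's trade marks guidance page...",
    "patents": "Full text of the agency's patents guidance page...",
}

def answer_from_site(question: str) -> str:
    # Naive retrieval: include pages whose topic appears in the question.
    context = "\n\n".join(text for topic, text in SITE_PAGES.items()
                          if topic in question.lower())
    prompt = ("Answer using ONLY the content below. If the answer is not "
              f"there, say so.\n\nCONTENT:\n{context}\n\nQUESTION: {question}")
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```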
And it raised a big question for me. While these tools are not perfect, and there are very real limitations and risks that exist right now, what happens as they mature?
What happens when, or if, LLMs are used to distil the key points of any publication, website, email or message? In a crowded landscape, where people have so many demands on their attention, it is easy to imagine them using these new AI tools to summarise and prioritise for them. It might even be tempting for journalists and others who have to make sense of government materials to use LLMs to pull out the key points, rather than relying on how the messaging was packaged.
Equally, what happens when people can reliably use AI agents to complete or assist with certain tasks, whether applying for government support, filing their taxes or complying with requirements for their business? When doing the task means no longer being exposed to the messaging (cautionary or otherwise) surrounding that task?
In short, what happens if government communications are in effect disintermediated by LLMs? If the careful packaging and presentation of information is revealed as an intermediary step, undone by people asking LLMs to tell them the key points or to work out what is relevant to them and their particular context?
The role of the public voice
In some ways that sounds great. If we can get to the end result faster, surely that’s good?
I suspect in many cases, to the hurt pride of public servants such as myself, it will be revealed that much of our messaging and support doesn’t matter as much as we think.
However, in many other ways, the messaging we use in our publications, correspondence, forms and services is there for good reasons. Often this is to make things simpler, but it is also to try to influence behaviour in different ways.
Whether it’s messaging about health (e.g. advice about COVID), about ensuring compliance with relevant requirements (e.g. making sure people pay things on time to avoid further rigmarole), or nudging behaviour in ways deemed aligned with the public good (e.g. encouraging pro-social behaviours such as reducing energy consumption), much of what government does is about signalling particular things. Sometimes this is done with nuance, sometimes it is baked subtly into framing and language, and sometimes it is closer to outright propaganda.
Regardless of the ideological perspective or the considerations regarding different approaches, I would suggest this voice, this perspective, is an important and considerable part of how things currently work.
How we speak as the public sector has evolved, but will LLMs care?
In part to illustrate, and in part to highlight how it may be affected, I think we can look at the rise of certain practices within government:
- Service design / human-centred design has grown in resonance because, as government has lost influence over some levers and individuals face far more competing demands and options, it has become more important to understand how things are received. We need to understand the perspective of the citizen/client/customer/stakeholder/user because we are much more conscious that there is a gap between intent and what happens in practice. Service design has done a lot to shape how information, options and decisions are presented.
- Behavioural insights has likewise gained prominence as a practice in recent years as governments have become more conscious of the choice architecture they contribute to. It has helped the public sector be more aware of how the presentation of information and options shapes how people react. Whether it is framing an electricity bill to show how much other similar households are using, encouraging on-time payment of taxes, or presenting certain choices as opt-out rather than opt-in, behavioural insights has deeply influenced how information and options are presented.
- Communications has evolved as a practice over the years and become much more integral to how things are done. In a crowded information space, more and more attention has been paid to how to communicate, how to ensure that messages resonate with and reach the right audiences, and how to connect with stakeholders. Communications has changed significantly, and with it how information and options are conveyed.
Through these and other mechanisms and practices, a lot of the machinery of government and of the public sector’s ways of working relates to making sure key messages are heard in the desired fashion, by the right audiences, at the right time.
If the voice of government and the public sector is disintermediated through the use of generative AI and LLMs, can we assume that the machine ingesting all that nuance, framing and service design will distil it in the manner we in the public sector might hope or expect? Especially where the prompts may, deliberately or unintentionally, seek to strip the messages back to the key issues?
Idle thoughts about an emerging technology
These are some preliminary personal thoughts about how an emerging technology is unfolding. It may be that these issues are not big, or that they resolve themselves.
It might be that all the practices used to gain clarity around the intent and purpose of our messaging simply make it easier for the machines to distil the right titbits. It could even be that much of what we currently do doesn’t matter as much as we think, and so this disintermediation won’t make much of a difference.
Additionally, disintermediation isn’t an inevitable one-way street. Sometimes new opportunities arise that allow new approaches to flourish instead, approaches that may achieve the end goals just as well, if not better.
However, I think it’s something worth thinking about and being on the lookout for, as disintermediation has previously proven significantly disruptive to the industry sectors that have experienced it. I also suspect it may mean we need to revisit some of our current practices, or at least pay more attention to LLMs as consumers of the information and data we generate, to ensure that we are ‘heard’ in the way we intended.
Because we all seek convenience and short-cuts to answers and outcomes, I think disintermediation is a certainty and something to be worked with rather than against (although some content entities have circled the wagons and blocked LLMs from accessing their data).
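On that last point: for crawlers that honour it, blocking can be as simple as a robots.txt directive. OpenAI, for example, documents a GPTBot crawler that respects rules like the following:

```
User-agent: GPTBot
Disallow: /
```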
I had the same ‘disintermediation’ conversation with colleagues – and it was the first time I’d used the word since the internet disruption of the mid-90s!
I think we’re entering a new era of a battle to ‘own’ the customer and ‘own’ the channels to reach them.
However, I think good, strong brands (and I include governments) will continue to prosper. Users may stipulate (in some use cases) to their chosen AI assistant: ‘bring me this – but only from the official, attributed source’. If the risk is high, they may want to see the source and attribution, and may even visit the origins of the content for reassurance and certainty.
But maybe we need to accept that as long as the ‘right’ content is delivered to the customer, it no longer matters whether that exchange occurred on our website or via their virtual assistant.