Why generative AI feels significant

If government is an information industry, what might generative AI tools mean for it?

Playing with ChatGPT

Over the past few weeks I, like many, many others, have been playing with OpenAI’s ChatGPT. Much of this has been playful, such as:

  • getting it to generate trivia questions and then provide the answers (it’s not always great at this – it can be strangely temperamental about giving you the answers, and not all of them are correct)
  • asking it to write some short stories (including one about two possums that fall in love but then one is kidnapped by a witch and put to sleep for 100 years – given the ridiculousness of the premise, it did pretty well!)
  • getting it to write songs in the style of different singers (after a few you begin to suspect it uses a similar approach for a lot of artists)
  • asking for recipes for different things (some friends gave it mixed ratings on its abilities here – some very good, some dubious).

But then, with some public servant friends, I started to play with it on some work scenarios – e.g. write a ministerial speech about X, draft some talking points about Y, or provide a high-level plan for a meeting with stakeholders.

Obviously these are far from perfect (and I’m not suggesting public servants should be entering detailed confidential information into an external engine to get more tailored results!), but they begin to illustrate some interesting possibilities.
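To give a sense of what this kind of prompting looks like when done programmatically rather than through the chat interface, here is a minimal sketch using OpenAI’s Python client. The model name, the prompts and the placeholder topic are all assumptions for illustration rather than anything from my actual experiments, and the same idea works with any comparable generative AI service.

# Minimal sketch: asking a generative model for draft talking points.
# Assumes the official "openai" Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a policy officer drafting briefing material."},
        {"role": "user",
         "content": "Draft three high-level talking points about <topic>."},
    ],
)

print(response.choices[0].message.content)

The interesting part is less the code than the loop it enables: you can keep re-prompting with adjustments (“make it shorter”, “aim it at a local audience”) and get a revised draft back in seconds.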

For instance, this tool makes it even easier for citizens to draft a message to raise an issue with their local member.

Or it could be used by citizens to understand their obligations, rather than trying to navigate government-provided information or that of intermediary service providers. Sometimes it’s easier when you can just ask for a starting point, and then build from there.

Taking a more, uh, creative approach, you can see some ways in which it differs from a search engine in its ability to distil complex issues and present them in easily accessible ways. Again, the information might not be perfect, but it provides an easy starting point for learning more. (And it is interesting to see that while ChatGPT has rules around what information it will share, these can still often be worked around pretty easily, as illustrated by these questions about what one might want to consider in launching a coup. Though, like the rest of the tool, I expect this will be refined over time.)

So from some of these different examples, I think we can start to see some of the implications. It’s not that ChatGPT offers instant, perfectly accurate answers (as many others have pointed out, the tool can sound extremely confident while giving very wrong answers). It’s that it offers a rapid way to understand the basics of something. It is another step in the commodification of knowledge, making ever more complex issues more accessible, and providing a means by which to refine and iterate.

Indeed, in some ways talking with ChatGPT feels very different to what has come before – more humanlike, but not human. It reminds me a little of the Frederik Pohl novel Gateway, a science fiction tale where the main character works through his survival guilt and issues with an AI therapist. The key difference that comes with a generative tool like this is that it feels easy to ask questions and to refine and reflect on what it provides.

Given its ability to provide different answers, and to adjust those answers in response to your prompts, it changes the nature of engaging with information, because it is doing a lot of the intellectual heavy lifting (often in under 10 seconds). We are no longer dependent solely on our own thinking, as we can call on a personal assistant, flawed as it might be. While ChatGPT is drawing on human knowledge, it can present it in novel ways at great speed, something we’ve never had access to before. And that seems pretty big.

“But it gets lots of things wrong…”

As noted, the tool is far from perfect. Yet a key insight about innovation is that a lot of innovations start off worse than existing solutions, even while expectations are inflated in the early stages. As Tim Kastelle ably describes, when you combine the S-curve (showing that innovation takes time to improve, starting off flat as learning occurs, then accelerating before plateauing as the benefits are exhausted) with the Gartner Hype Cycle (showing how too often we get a peak of inflated expectations and a trough of disillusionment before the technology actually matures and delivers), we see that a new innovation can arrive amid the hype while it still has much to learn.

In short – innovations tend to promise, or be associated with, delivering far more than they can at the beginning, only to really demonstrate their value later, once time and investment have kicked in. And that’s only for the successful ones.

Yet when an innovation does succeed, when a new technology, approach, conceptual organising framework or business model takes hold, we have seen how quickly it can improve and entrench itself. Whether it be the smartphone, social media platforms or new public management – innovations can dramatically reshape things, often in quite unanticipated ways.

So we have reason to be sceptical, but it is also wise to heed the many previous examples where a slow start has led to some pretty big things. This is easier if we consider the change that has already occurred for the public sector as an information industry.

Government as an information industry

The primary activity of a lot of the public sector is information – collecting, processing, analysing, storing, sharing, advocating, etc. Over the last few decades this information industry nature has become more apparent as the Internet and information technology have brought changes. Just as the advent of the web has brought varying degrees of disintermediation to a lot of other sectors – the media, banking, accounting, retail – so too has it started to change the nature of how citizens interact with their governments. For instance:

  • The speed of information and the multiplicity of voices that the web and social media have allowed have changed the dynamics of both politics and service delivery. The public sector has arguably been trying to catch up ever since.
  • Digital services are offering huge potential for delivering efficiencies for both citizens and government agencies, as well as providing richer sources of information about behaviours and needs. (This is not without risk, however, and there are challenges, particularly in regard to automation, that remain to be navigated – especially around what can and should be automated, and ensuring public services remain ‘human’, accountable and transparent.)

If we accept the information industry nature of the public sector, I think the potential of generative AI such as ChatGPT (and its successors and competitors) becomes clearer. A tool that changes how we interact with information, with knowledge, and which offers us such a rapid way of generating and refining new forms of information, is surely going to matter to an information industry.

Implications of generative AI

Some of the key implications of generative AI stem from the fact that while the information revolution thus far has helped to democratise access to information, a lot of tacit knowledge has remained out of reach of non-experts. Videos and blog posts helped to convey a lot, but as always the ability to ask the right questions was a limiting factor. What happens when you can ask endless questions quickly to navigate and interrogate information, processes, structures and institutions?

What happens, too, when individuals are able to produce complex information at volume, combined with social media? As this virtual personal assistant becomes more sophisticated, what might that mean when our current ability to navigate mis/disinformation is mixed at best? (And when the appropriate role for government is not even clear yet.)

What happens when expertise is more democratised/commodified in an environment where trust in government is already mixed, and there have been growing challenges to the authority of the public sector as an institution? Does every interaction become contested, or does greater access to expertise/digested information reduce frictions?

What happens when officially produced and sanctioned information, with its many caveats and cautions, is rewritten by a machine to make the underlying information more accessible to a wider audience?

These are some of the questions that come to mind, although I am sure there are many, many more that will become clearer as such tools develop and mature.

Of course, we don’t yet know how ChatGPT might evolve or how the product might be commercialised (the processing demands are presumably much greater than those needed for simple search, and thus unlikely to be covered by advertising revenue alone). Nor do we know what limits might be hit, or what competing efforts might achieve.

However, the direction of travel seems clear – widespread access to generative AI, and with it a large class of people with more ways of engaging and interacting with information than ever before. Each previous time that has happened – the advent of writing, the book, the printing press, radio, television, the Internet, social media – it has brought new challenges and opportunities for the public sector (or the reasonable facsimile of the time).

It will be exciting to see what happens this time around.
