
Artificial Intelligence (AI) is continually evolving, demanding a deeper understanding of its benefits and limitations. This blog is based on a discussion I took part in with senior civil servants at Public Service Data Live. We explored how the public sector can adapt to and anticipate AI advancements, the technology's impact on public sector jobs and job security, and strategies for effective deployment, as well as the proactive measures, like the AI playbook, taken by industry leaders at Hitachi Solutions and Microsoft.

Keeping on top of the fast pace of technological change in the modern world is one of the biggest issues that governments face.

When dealing with major digital and data developments, public sector organisations need to be careful not to move too quickly or too slowly. Either error risks wasting time and energy, as well as the financial resources entrusted to them by the taxpayer.

This dilemma is particularly acute for governments as they attempt to determine the best way to use artificial intelligence. Move too fast and there’s a risk that public trust could fail to keep up; move too slowly and opportunities to improve public services could be missed.

One participant noted that “students will openly use OpenAI to complete work [because] that technology is available to them,” even if, at an organisational level, staff are still discussing what they should do.

“The point is that we’re catching up with what do we do about the technology that’s out there.”

We then heard from officials that many government departments are still deciding whether to encourage or discourage the use of AI. The need to get the balance right on AI deployment, however, means that protocols often feel vague and at times even appear contradictory.

Another participant added that in the UK, departments such as the Home Office have tended to roll out new tools with in-built limitations designed to make them more secure. The problem, they said, is that these limitations often make the tools less functional and even counterproductive.

This discussion allowed me to offer some insight into the current difficulties with AI security and protocol, and into how they are being addressed by leading corporations. I also highlighted that OpenAI had recently released a playbook specifically for education.

Microsoft has also applied a layer of “enterprise-grade security” to its AI models. What this means is that such models can no longer be retrained using potentially sensitive information fed into them from within organisations.

AI is a change to the way we collectively interface with technology. To ensure its responsible deployment within government, guard rails, both technical and ethical, must be considered as part of any adoption path.

Jack Murphy
AI Capability Lead

Bolt-ons vs transformative change

Another participant expressed doubts that AI would form an integral part of making public servants’ jobs easier going forward.

“The challenge is that we’re all jumping to [an AI] solution before we’ve even considered whether there are other solutions to the problems that we have in government,” they said.

They gave the example of chatbots, an AI tool increasingly familiar to anyone who has ever sought a refund for a product purchased online. Because chatbots are so common, government departments are keen to incorporate them as a fix or bolt-on feature to a service. However, they said, these departments often overlook the risks of incorporating chatbots relative to the benefits.

Another participant gave an example of where AI assistance is both highly risky and highly necessary. The UK government regularly receives large volumes of feedback via its main website, amounting to around 40,000 user comments each month. As well as containing users’ feedback, however, the information gathered often also contains personal details, such as full names and phone numbers. AI tools could comb this information for useful insights that could improve government policy, but separating this high-risk personal data from the more instructive content is hazardous, especially when the information arrives at such scale and at such regular intervals.
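To make that challenge concrete, here is a minimal sketch, assuming simple regex-based redaction in Python, of how obvious personal details such as phone numbers and email addresses might be stripped from free-text feedback before any AI analysis. The function name, patterns, and sample comments are illustrative only; no department described in the discussion uses this exact approach, and a production system would need far more robust PII detection.

```python
import re

# Illustrative patterns for UK-style mobile numbers and email addresses.
# Regexes alone will miss many personal details (names, addresses, etc.).
PHONE_RE = re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact(comment: str) -> str:
    """Replace obvious phone numbers and email addresses with placeholders."""
    comment = PHONE_RE.sub("[PHONE]", comment)
    comment = EMAIL_RE.sub("[EMAIL]", comment)
    return comment

# Hypothetical feedback comments, standing in for the ~40,000 received monthly.
feedback = [
    "The passport renewal page timed out twice.",
    "Please call me back on 07123 456 789, the form kept failing.",
]

cleaned = [redact(c) for c in feedback]
print(cleaned)
# ['The passport renewal page timed out twice.',
#  'Please call me back on [PHONE], the form kept failing.']
```

The point of the sketch is that even a simple redaction step has to run reliably at scale before the remaining text can safely be analysed; anything the patterns miss stays in the data.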

The future for public service careers in the AI age

The conversation then turned to jobs and job security. How would AI change the career paths and skills for civil servants?

One speaker asked how the career pipeline would be maintained for officials if junior roles were increasingly likely to be replaced by AI.

This concern was picked up by a participant who said that they had noticed certain government departments only permitted staff in certain roles to use AI tools. If an entry-level official does not enjoy access to the same tools as their senior colleagues, they said, then there is a risk they will get stuck in that role indefinitely.

Another participant said that job security should not depend solely on the ability to program or use AI, and that traditional skills such as languages and interpersonal instincts would remain crucial to a functioning civil service.

“The conversation that I’ve been trying to have with my organisation is that we should be using open-source information to the best of its ability, but it doesn’t replace their traditional skills that actually [are] really valuable.”

Steps to implement changes

In the closing section, we discussed some possible ways to make progress on AI deployment. A four-point model, already being deployed in some organisations, was highlighted. To use AI, employees need to: secure their boss’s permission to use the tool; record what was said and/or inputted; check that what the AI has produced is true; and declare the fact that they had used the AI tools.
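As one way to picture that model, here is a minimal sketch in Python of a record that captures the four points for a single use of an AI tool. The class and field names are hypothetical and not taken from any organisation mentioned in the discussion.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUsageRecord:
    """Hypothetical record of one use of an AI tool under the four-point model."""
    task: str              # what the tool was used for
    approved_by: str       # who gave permission for the tool to be used
    prompt_logged: str     # what was said or inputted into the tool
    output_verified: bool  # has the output been checked for accuracy?
    use_declared: bool     # has the use of AI been openly declared?
    date_used: date = field(default_factory=date.today)

# Example entry: a summarisation task that was approved, checked, and declared.
record = AIUsageRecord(
    task="Summarise consultation responses by theme",
    approved_by="Line manager",
    prompt_logged="Summarise the attached responses by theme.",
    output_verified=True,
    use_declared=True,
)
print(record)
```

Whether such a record lives in a spreadsheet or a case-management system matters less than the fact that all four points are captured explicitly and can be audited later.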

Some participants mentioned that “that feels like a common-sense solution, whether it’s education or CVs,” and that “if people declared that they had used some form of AI system, that would make us all a bit more comfortable that we’d understood that.”

I ended the discussion by noting that, though AI had caught widespread attention over the preceding 18 months, real-world applications remained few relative to the hype.

“If you compare noise to practical application, it’s astronomical. There’s probably nothing now that has more noise and less application anywhere in the world.”

A lot of potential uses are currently bolt-ons to existing services, rather than fundamental solutions that meet a business objective or improve the everyday lives of citizens.

“What everyone’s looking for at the moment is: how do we actually… find use cases where there actually isn’t a better, simpler way to do it in a more traditional way?”

There’s no denying that AI will bring epoch-defining new capabilities to the public sector. But for now, the message from this roundtable is that many government organisations are still waiting to discover their best use cases, and to set the rules around its use that could well lay the groundwork for the public services of the future.

Click here to find out more about Public Service Data Live.


Author Spotlight

Jack Murphy

Jack specialises in delivering AI-powered solutions to customers across the private and public sectors. Based in Manchester, he has over ten years of experience working in a variety of industries across Europe to solve complex and challenging business problems with AI.