Wolters Kluwer survey highlights threat of “shadow AI” in healthcare
Learnings from US study highlight urgent need for governance to ensure responsible AI use in the NHS
A survey from Wolters Kluwer Health has highlighted the threat of shadow AI in healthcare and spotlights the need to accelerate the UK government’s new regulatory rulebook for AI in the NHS. As healthcare organizations turn to AI to improve workflows, the use of unauthorized tools and apps can introduce risk to security, data and patient safety.
In the survey of over 500 healthcare workers in the US, 40% said they have encountered others using unapproved AI applications in their organization. When asked about the risks of AI, nearly a quarter (23%) of respondents ranked patient safety as their top concern, while 17% admitted they have used unsanctioned tools at work themselves.
When respondents were asked why they used unapproved AI tools, almost half cited the need for faster workflows, and one in three pointed to either a lack of approved tools or approved tools that lacked the desired functionality.
“With workload and staffing pressures in NHS trusts, healthcare professionals can see the potential of clinical AI as an indispensable partner in daily workflows, automating documentation, surfacing care gaps, and streamlining communications,” says Garry Edwards, Vice President EMEA, Wolters Kluwer Health.
Edwards added, “However, when approved options aren’t available or effective enough, the risk is they will turn to unauthorized tools. More formalized, organization-wide frameworks are needed that ensure the responsible use of AI, including proper training around the technology and appropriate guardrails to maintain compliance.”
Professor Derek Bell, OBE, group chair for University Hospitals Tees, echoes this: “NHS England is clear that the use of AI in healthcare must promote safety and be explainable and grounded in trusted evidence. What matters is not simply what AI can generate, but what it is built on and how it is governed.”
“If we don’t document, don’t evaluate, and don’t understand the source, we’re not managing risk – we’re creating it. It’s not just about having data – it’s about having trusted, dependable sources that clinicians can rely on,” he concludes.
The UK government announced in 2025 that the regulatory rulebook from a new National Commission will be published in 2026. The commission’s goal is to help accelerate safe access to AI in healthcare and across the NHS.
Editor Details
- Company: Fourth Day PR
- Name: Rachel Murray
- Telephone: +447713652217

Related Links
- Website: Wolters Kluwer Health