
Regulating AI in Health and Care


Artificial Intelligence (AI) holds enormous potential for the NHS, if we can use it right. It can reduce the burden on the system by taking on tasks that can be converted into an algorithm. Many of these are in areas of greatest pressure, like radiography and pathology. It could improve patient outcomes and increase productivity across the system, freeing up clinicians’ time so they can focus on the parts of the job where they add the most value.

But doing AI right means putting a set of rules around it that will make sure it is used safely, in a way that respects patients’ privacy and keeps the confidence of citizens and staff. Some of those rules already exist - the Data Protection Act 2018, for example, which put GDPR into UK law. But there are gaps, lots of regulators on the pitch, and a lack of clarity on both standards and roles.

UK could be a world leader in health AI

This carries two risks. First, and most importantly, that unsafe AI will be used. Second, and more likely, that the opportunity to use AI to help patients will be wasted or delayed, as both clinicians and innovators hold back until they know there is a regulatory framework that gives them cover.

Smart regulation could really help make the UK the best place in the world to develop AI in health. The benefits will be huge if we can find the sweet spot, where we maintain the trust that AI is being used properly and safely, while creating a space in which compliant innovation can flourish. 

We aren’t there yet. There are multiple regulators involved, creating a bewildering array of bodies for innovators to navigate and confusion for organisations in the NHS and social care that want to make the most of these innovations. We haven’t yet worked out how to regulate Machine Learning - systems that constantly iterate their algorithms, often at huge speed and for reasons that are not always transparent, even to their creators. Nor have we created a clear path for innovators to get regulatory approval for their AI systems.
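To make the Machine Learning point concrete, here is a minimal sketch in Python using scikit-learn (the data, model and numbers are invented purely for illustration) of an adaptive system whose behaviour drifts away from the version that was originally assessed:

```python
# Minimal sketch of an "adaptive" model: it keeps learning from every new
# batch of cases after deployment, so its behaviour drifts away from the
# version originally assessed. All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # a linear classifier trained incrementally

# Day 0: the model as assessed, trained on an initial cohort.
X0 = rng.normal(size=(500, 10))
y0 = rng.integers(0, 2, size=500)
model.partial_fit(X0, y0, classes=[0, 1])
assessed_weights = model.coef_.copy()

# Days 1-30: the deployed model keeps updating on incoming cases.
for day in range(30):
    X_new = rng.normal(size=(50, 10))
    y_new = rng.integers(0, 2, size=50)
    model.partial_fit(X_new, y_new)

# The running system no longer matches what was assessed.
drift = np.abs(model.coef_ - assessed_weights).max()
print(f"Largest weight change since assessment: {drift:.3f}")
```

The specific model is beside the point; what matters is that every partial_fit call quietly changes the deployed behaviour, with no natural moment for reassessment.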

Working through the issues around AI in health

This is why I convened the CEOs and heads of the 12 regulators and organisations involved. They included MHRA, NICE, CQC, the Information Commissioner, the National Data Guardian, the Health Research Authority, the Centre for Data Ethics and Innovation, NHSD, NHSR, the Better Regulation Executive and the MRC (the full list of participants is below).

We met on 28 January, and spent three hours working through the issues. There was complete agreement that we need to get on with this and put in place the clear, innovation-friendly processes and regulations that are required. In particular, we agreed that we need:

  • clarity of role, in which MHRA is responsible for regulating the safety of AI systems; HRA for overseeing the research to generate evidence; NICE for assessing their value to determine whether they should be deployed; and CQC for ensuring that providers follow best practice in using AI; with others playing important roles on particular angles (like the ICO and National Data Guardian on privacy);
  • a joined-up approach, in which innovators do not have to navigate between lots of different bodies and sets of rules. So we will aim to set up a single platform, bringing all the regulatory strands together to create a single point of contact, advice and engagement. And we will work closely with colleagues across the devolved nations, to make sure that we are joined up across the UK as well;
  • a joined-up regulatory sandbox for AI, which brings together all the sandbox initiatives in different regulators, and gives innovators a single, end-to-end safe space to develop and test their AI systems;
  • sufficient capability to assess AI systems at the scale and pace required.  This either needs to be in-house in the relevant regulators, particularly MHRA, or through designated organisations working to clear standards set by those regulators and accredited by them;
  • quick progress on working out how we handle Machine Learning. We know it’s difficult, but we need to develop a proposal, test it, iterate, and keep iterating. There is a lot of brilliant thinking around the world on this - we will gather it, convene experts, practitioners and regulators, and get moving (one possible technical building block is sketched after this list). As this is - at this stage - a question of policy, we will lead it from NHSX in the first instance before passing a plan to regulators to implement;
  • communication with clinicians, innovators and - crucially - the public.  We need to keep explaining what we are doing, so people with views, expertise and concerns can feed them in rather than feel there is a secret process being done to them.
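One building block that often features in this debate - offered here purely as an illustrative sketch, not as anything agreed at the meeting - is distinguishing “locked” models, which are frozen at approval time, from adaptive ones, and fingerprinting the approved version so that the deployed system can be audited against it. A minimal sketch in Python (the model and data are placeholders):

```python
# Illustrative sketch only: fingerprint a "locked" model by hashing its
# learned parameters, so an auditor can later verify that the deployed
# model is identical to the version that was approved.
import hashlib
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

def fingerprint(model) -> str:
    """Return a SHA-256 hash of the model's learned parameters."""
    params = (model.coef_, model.intercept_)
    return hashlib.sha256(pickle.dumps(params)).hexdigest()

# Placeholder training data standing in for the approval-time dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

approved = LogisticRegression().fit(X, y)
approved_hash = fingerprint(approved)  # recorded in the approval record

# At audit time, recompute the fingerprint of the deployed model (here the
# same object, standing in for the system running in production).
deployed_hash = fingerprint(approved)
assert deployed_hash == approved_hash, "deployed model differs from approved version"
print("Deployed model matches approved fingerprint:", approved_hash[:16], "...")
```

An adaptive system like the one sketched earlier would fail this check after its first update, which is exactly why the locked/adaptive distinction matters for any regulatory proposal.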

In all this, the NHS AI Lab will play an important role, not least as a source of funding for regulators to stand up the capability they need to do this work.  The lab will have regulation as one of its core streams of activity.   

It’s a huge agenda.  But it really matters, and we need to move this all forward at pace.  The prize - if we can get this right - is making the UK a world leader in AI for health, giving the NHS the benefits of this new technology safely, reducing the burden on its staff and improving outcomes for patients.  

 

Article Source:

Matthew Gould, Chief Executive Officer, NHSX

