Using AI safely and responsibly in primary care

What you need to know about AI in GP practices – how it's used, who's responsible when things go wrong, and the key risks to manage.

  • Artificial intelligence (AI) is increasingly being used for diagnostics, decision support and administrative tasks.
  • Using AI without your organisation's approval, and outside its governance framework, could carry significant personal risks.
  • Individual doctors are responsible for ensuring their clinical records are accurate, including clinical notes transcribed by AI systems.
  • Legal and ethical risks include data protection breaches, bias and misinformation, requiring careful oversight.

Artificial intelligence (AI) refers to technology that emulates human intelligence, decision-making or thought processes.

This includes a wide range of computer models. Some GP practices are using third-party machine learning algorithms to support diagnostic and decision support systems (DDS).

These systems are often applied in specific areas, such as triaging common requests or automating administrative tasks like rota planning.

Generative AI

There are many promising AI developments under research that aim to leverage vast amounts of data to improve the effectiveness of clinical care. Some of the most high-profile AI models use generative AI, which is trained on immense data sets.

The NHS AI Lab is supporting the development of bespoke AI systems and has produced guidance on the required governance framework. However, generic publicly available generative AI systems - like ChatGPT or Bard - are likely to be outside this framework.

Despite the human-like interaction, generative AI does not actually think. It can appear to give credible answers, even to complex questions, but remember that models are trained to predict the next words in a sequence when given a prompt.
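
To make that concrete, here is a toy sketch (in Python, purely for illustration) that builds a tiny 'next word' model from a few invented sentences. Real generative AI systems use vastly larger models and data sets rather than this simple word-counting approach, but the underlying idea is the same: the output is a statistically likely continuation of the prompt, not a checked statement of fact.

    # Toy illustration only: a tiny "predict the next word" model built from
    # made-up sentences. Real generative AI is vastly larger, but the
    # principle is the same: continue the prompt with whatever is
    # statistically likely, whether or not it is true for this patient.
    from collections import Counter, defaultdict

    corpus = (
        "the patient reports chest pain on exertion "
        "the patient reports shortness of breath "
        "the patient denies chest pain at rest"
    ).split()

    # Count which word tends to follow each word in the training text.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def continue_text(prompt, length=5):
        words = prompt.split()
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            # Always produces something plausible-sounding, even when the
            # "fact" it asserts was never said in the consultation.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the patient"))
    # -> "the patient reports chest pain on exertion"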

While their ease of access may be attractive, publicly available AI models can pose risks. They log user interactions and could use the information you provide for their own purposes.

Some interfaces can now look up documents before answering, but even that is limited to data the system can access. Many websites block this kind of automated access, so the information available to the model may be much more limited than the documents you could find yourself.

The training data for generative AI models is not usually made public, so it's not clear whether they've been trained on trusted sources, such as established textbooks, published papers or authoritative guidance from organisations like NICE or the GMC.

Generative AI will always produce a response to a prompt, often in a manner that would instil confidence if it had been said by a human. However, errors - often called 'hallucinations,' although perhaps more accurately described as 'confabulations' - are common. These mistakes can sound entirely plausible, making them difficult to spot.

Risks and responsibilities

Data protection and intellectual property

When using generative AI, data processing usually happens in the cloud, potentially outside the UK. Many AI services claim to comply with data protection legislation. That may provide some reassurance to practices, but using them inappropriately could still lead to a breach.

Some key risks include:

  • your practice's privacy statement not explaining how the AI service uses identifiable data
  • a patient not realising they are being recorded, or their data being retained for longer than necessary
  • data breaches caused by entering data into publicly available systems. Simply removing a patient's name and address is not enough to anonymise clinical information: the combination of diagnoses, other elements of the history or details about you can make it possible to identify a patient, as the sketch after this list illustrates.
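
To illustrate that last point, the hypothetical sketch below uses entirely invented records. Even with names and addresses removed, combining an age band, an occupation and an uncommon diagnosis can narrow a 'de-identified' extract down to a single, recognisable person.

    # Hypothetical illustration with invented records: no names or addresses,
    # yet one combination of details still points to a single person.
    records = [
        {"age_band": "40-49", "occupation": "teacher", "diagnosis": "asthma"},
        {"age_band": "40-49", "occupation": "teacher", "diagnosis": "type 2 diabetes"},
        {"age_band": "70-79", "occupation": "retired GP", "diagnosis": "rare metabolic disorder"},
    ]

    matches = [
        r for r in records
        if r["age_band"] == "70-79" and r["diagnosis"] == "rare metabolic disorder"
    ]

    # A reader with local knowledge could link this single match to a patient,
    # so the extract is not truly anonymous.
    print(len(matches))  # -> 1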

Academic papers and other intellectual property are usually only available under licence agreements, which almost always prohibit their use in AI systems.

As a result, these documents are usually not included in the training data for generative AI models and may not be available in AI-driven searches or automated agents.

Bias and accuracy

AI training data is generally chosen because it is accessible in large quantities - that can mean the models reflect a health profile characteristic of where they were developed, for example a predominantly American Caucasian population.

Consequently, AI systems have the potential to reinforce, and possibly exaggerate, the biases in the training data - impacting the accuracy and fairness of their outputs.

Transcription and translation

AI-powered transcription and translation are becoming more available in general practice. This could be very helpful in taking histories and providing information.

Some transcription systems can convert audio to text, which can then be summarised into clinical notes.

Although the audio recording may not be retained for long, you would need to document the patient's consent in their notes before recording, including:

  • why a recording would assist the patient's care
  • what form the recording will take
  • and that it will be stored securely or disposed of.

To mitigate risks, GPs should always check AI-generated notes for accuracy before finalising records. Where possible, summarised notes should be reviewed with the patient before they leave the consultation.

Some systems also offer translation. If the translation is into a language you don't speak, you're still responsible for any errors in the communication that a native speaker might have noticed.

Who's responsible? What the GMC guidance says

The GMC has issued guidance on how good medical practice applies to the use of AI. This includes:

  • implementing new technology responsibly
  • raising concerns about errors the system makes
  • keeping up to date with developments
  • working within your competence.

The GMC states that you must make sure any information you communicate as a medical professional is accurate and not false or misleading. That includes taking reasonable steps to check the information is accurate ('Good medical practice', paragraph 89).

If you're using AI to assist with decision-making or to make notes, you're still responsible for checking the AI output is accurate and appropriate.

If an AI transcription model missed key elements of a history, you're obliged to correct the record at the time. If the model introduced 'hallucinations' to your notes, you would need to remove the inserted words. Failure to do this would be viewed as your error, or even your dishonesty.

It can be difficult to spot hallucinations, particularly when the model adds clinically relevant points that should have been asked about but weren't. As a precaution, review any AI-generated summary or letter with the patient before they leave the consultation, if it is safe to do so.

Who's responsible if something goes wrong?

Liability in a claim can be complex. If there is a contract with an AI service provider, that contract could include provisions on indemnity. However, it is more common for AI services to have disclaimers stating that their output should not be relied upon.

If no relevant contract is in place, you're liable for any decisions made with AI assistance. A claim alleging a data protection breach caused by AI may not be included within the benefits of your usual insurance or clinical indemnity arrangements.

Adopting AI systems as an individual, outside of your employer's protections, could therefore carry significant personal risks and should be avoided.

Checks to implement AI

Follow national guidance

AI services should only be adopted by a practice in line with appropriate national and NHS guidance. It's unlikely an individual doctor would want to take on that responsibility or liability.

If you want to implement an AI service, the NHS AI Lab or the NHSX buyer's guide to AI in health and care are good places to start. There may also be local guidance or policies on AI use from your integrated care board (ICB).

Get employer approval

Using AI without the approval of your employer may breach your workplace policies and data protection legislation.

Get a governance plan and training

If you're working within a practice that has implemented AI systems, there should be a governance plan in place and, where necessary, training on the use of the system.

Inform patients about data use

Practice policies should clearly set out how patient data may be used and patients' rights under data protection legislation.

Provide direct patient communication

In some circumstances, like AI-assisted transcription, patients may need to be informed in person.

MDU advice on using AI safely and responsibly

  • Only use AI systems that have been approved by your workplace and follow workplace policies.
  • Make sure you know how AI is governed in your organisation, including what information is provided to patients about the use of their data, and any steps in taking consent for audio recording.
  • Check all AI outputs for accuracy. Be aware that AI can be wrong, so confirm any records created, ideally with the patient.
  • Raise concerns about errors made by an AI system and report these to your system administrators.
  • Never input any patient information into publicly available generative AI systems.
  • Only use intellectual property such as academic papers within the terms of the licence agreement.

This page was correct at publication on 17/03/2025. Any guidance is intended as general guidance for members only. If you are a member and need specific advice relating to your own circumstances, please contact one of our advisers.