Research

Clinical decision making and risk appraisal using electronic risk assessment tools for cancer diagnosis: a qualitative study of GP experiences

Alex Burns, Emily Fletcher, Elizabeth Shephard, Raff Calitri, Mark Tarrant, Adrian Mercer, William Hamilton and Sarah Dean
BJGP Open 7 May 2025; BJGPO.2024.0243. DOI: https://doi.org/10.3399/BJGPO.2024.0243
Affiliations:
1 University of Exeter Medical School, University of Exeter, Exeter, UK (Alex Burns, Emily Fletcher, Elizabeth Shephard, Raff Calitri, William Hamilton, Sarah Dean)
2 School of Psychology, University of Plymouth, Plymouth, UK (Mark Tarrant)
3 ERICA Trial Patient and Public Involvement and Engagement Group, University of Exeter, Exeter, UK (Adrian Mercer)

For correspondence: Alex Burns, ab1265{at}exeter.ac.uk

Abstract

Background Electronic risk assessment tools (eRATs) are intended to improve early cancer diagnosis in primary care. eRATs, which interrupt a consultation to suggest the possibility of a cancer diagnosis, could affect clinical appraisal and the experience of the consultation. This study explores this issue using data collected within the context of the Electronic RIsk-assessment for CAncer (ERICA) trial.

Aim To explore views and experiences of GPs who used the ERICA eRATs, how the tools impacted their perception of risk and diagnostic thinking, and how this was communicated to patients.

Design & setting Qualitative interviews with GPs from English general practices undertaking the ERICA trial.

Method Participants were purposefully sampled from practices participating in the intervention arm of the ERICA trial. Eighteen GPs undertook semi-structured interviews via Microsoft Teams. Thematic analysis was used to explore their perspectives on the impact of the eRATs on consultations, on diagnostic thinking related to cancer and other conditions, and on how this information was communicated to patients.

Results The following three themes were developed: 1) the armoury, whereby eRATs were perceived as 'additional armour' offering a layer of protection against missing a cancer diagnosis, with this defence coming at a cost of anxiety and increased consultation complexity; 2) 'three heads' making a decision, whereby eRATs were seen as another actor in the consultation, separate from clinician and patient, challenging GP autonomy; and 3) for whom is the eRAT output intended? GPs were conflicted about whether the numerical eRAT outputs were helpful when communicating with patients.

Conclusion eRATs are appreciated as a defence against missing a cancer diagnosis. This defence comes at a cost and challenges GPs’ freedom in communication and decision making.

  • cancer diagnosis
  • neoplasms
  • primary health care
  • risk assessment

How this fits in

The use of electronic risk assessment tools (eRATs), which interrupt the clinical consultation, has been encouraged to support early diagnosis of significant, time-critical conditions. Little real-world evidence exists of their effectiveness or of how they affect the clinical consultation. Using GP interview data from a large clinical trial (the Electronic RIsk-assessment for CAncer [ERICA] trial), this study explored how eRATs for cancer influenced clinical reasoning. Although the principle of eRATs was appreciated, the complexity of the consultation increased, with GPs reporting a loss of clinical freedom in both communication with patients and decision making.

Introduction

UK policies encouraging improved cancer care began after studies indicated worse cancer survival rates than in other high-income countries.1 This adverse trend has persisted.2–4 Given that an earlier stage at diagnosis is associated with improved survival,5 earlier diagnosis in primary care has been targeted as a potential solution.6 In the UK, the urgent suspected cancer (USC) pathway (previously known as the '2-week wait' pathway) is the route to early diagnosis from primary care; while there is evidence that rapid diagnosis is achieved through this pathway,7 there is substantial variation in its use.8

One suggested method of increasing appropriate use of the USC pathway is via eRATs.9 These computer-based tools estimate cancer risk by aggregating the relevant computer-coded symptoms, test results, and demographic data already present in the GP’s clinical system.10 They apply population-based risk estimates to the individual patient. Although there is no clear evidence of their effectiveness in primary care,11–13 their use is now encouraged in the UK.14

Primary care diagnosis presents challenges owing to the variable, imperfect, and subjective nature of its main input: patient history and examination. The pre-test probability of pathology in primary care is lower than in other healthcare contexts,15 meaning GPs need to navigate risk and embrace uncertainty in the diagnostic process.16 The traditional diagnostic paradigm involves a patient explaining their symptoms to a clinician, who then formulates a differential diagnosis, which is refined via examination, simple observations, and, if required, testing.17,18 It is at this point that previous risk assessment tools (electronic or otherwise) have been designed to work.10 In this situation a clinician must first consider that the possibility of cancer is high enough to warrant using a risk tool. This choice, of deciding whether the tool is appropriate, is removed with some eRATs, where an on-screen prompt automatically interrupts the consultation with cancer risk information. Such automatic prompts could disrupt the consultation dynamic, divert consultations from being patient led, or cause overtesting for cancer.

Previous qualitative studies examining how eRATs impact decision making have mainly used simulated clinics or prototype systems. Some studies have shown that they can help highlight possible cancer, or other serious diagnoses,19 and that, in some cases, the risk identified by the eRAT is greater than that perceived by the clinician.20,21 The time it takes to use eRATs has been a barrier to their use.21–23 Clinicians may mistrust the underlying algorithms and perceive them as fear-inducing,24,25 and have a preference for their own clinical judgement or intuition.26,27 These issues may contribute to the low uptake of eRATs in primary care.28

A novel set of eRATs for cancer (the ERICA eRATs) comprises algorithms based on established risk scores29–34 to help identify lung, colorectal, oesophago-gastric, bladder, kidney, and ovarian cancers. These eRATs operate through two mechanisms. First, an on-screen alert shows a numerical percentage risk when a patient’s risk of any of the six cancers reaches ≥2% (the National Institute for Health and Care Excellence [NICE] threshold for referral is ≥3% risk,35 but the 2% trigger is intended to let clinicians explore further symptoms if appropriate). Second, clinicians can actively use a ‘symptom checker' to review relevant symptoms, adding patient-specific symptoms and recalculating cancer risk accordingly. The ERICA eRATs are undergoing assessment of their clinical effectiveness and cost-effectiveness as part of a large, ongoing pragmatic cluster-randomised controlled trial.36 At the time of its initiation, the ERICA trial was the first large trial to investigate the effectiveness of eRATs with automatic on-screen prompts for a high-stakes diagnosis in a real-world setting. Understanding the impact of eRATs on GPs’ clinical reasoning, within this real-world clinical context, will contribute to an assessment of their potential utility.
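As a concrete illustration of these two mechanisms, the sketch below shows how a threshold-triggered prompt and a clinician-driven symptom checker might interact. It is a simplified, hypothetical model written for this description only: the function names, risk values, and the additive recalculation are invented, and it does not reproduce the ERICA algorithms, which derive from the published case-control risk scores cited above.

# Hypothetical sketch of the two eRAT mechanisms described above; not the ERICA software.
CANCER_SITES = ["lung", "colorectal", "oesophago-gastric", "bladder", "kidney", "ovarian"]
ALERT_THRESHOLD = 0.02      # on-screen prompt triggers at >=2% estimated risk
REFERRAL_THRESHOLD = 0.03   # NICE NG12 urgent referral threshold (>=3%)

def check_alerts(coded_risks):
    """Mechanism 1: passively scan the coded record and flag any of the six
    cancer sites whose estimated risk reaches the alert threshold."""
    return [(site, risk) for site, risk in coded_risks.items()
            if site in CANCER_SITES and risk >= ALERT_THRESHOLD]

def symptom_checker(base_risk, extra_symptom_weights):
    """Mechanism 2 (toy model): the clinician adds patient-reported symptoms and
    the risk is recalculated. A naive additive update stands in here for the
    real published risk scores."""
    return min(1.0, base_risk + sum(extra_symptom_weights))

# Example: coded features giving a 2.4% colorectal risk would trigger a prompt,
# inviting the GP to explore further symptoms before deciding on referral.
record = {"colorectal": 0.024, "lung": 0.004}
for site, risk in check_alerts(record):
    print(f"eRAT prompt: estimated {site} cancer risk {risk:.1%}")
    updated = symptom_checker(risk, [0.011])  # e.g. adding a newly coded symptom
    if updated >= REFERRAL_THRESHOLD:
        print(f"Updated risk {updated:.1%} meets the NICE urgent referral threshold")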

The aim of this study was to explore views and experiences of GPs who used the ERICA eRATs, how the tool impacted their perception of risk and diagnostic thinking, and how this was communicated to patients.

Method

A nested qualitative interview study was conducted as part of the ERICA trial.36 The study had the following three separate objectives: to explore 1) the eRATs' implementation process; 2) GP experiences of eRATs in relation to risk appraisal and clinical decision making (reported here); and 3) the effect of eRATs on workload and workflow. The interview guide (see Supplementary Appendix S1) included questions related to all three objectives. This approach balanced methodological considerations with pragmatism and resources. It enabled researchers to collect data from a number of clinicians comparable to that recruited in similar qualitative studies.25,27 We present these findings separately from the other two study objectives, as they are less specific to the ERICA intervention's implementation and workload issues; instead, they may offer insight into clinician–patient–eRAT interactions more widely.

GPs using the eRAT software in the intervention arm of ERICA were invited to interview. Email invitations were sent in batches of 10, with a reminder sent 2–3 weeks later. Recruitment was purposeful, using a sampling framework designed to include both main IT systems used in UK primary care (EMIS and SystmOne) and a range of USC pathway use. The interview schedule and consent forms were emailed to responders, and an interview time was arranged. A 30-minute interview, face-to-face or online via Microsoft Teams, was offered. Interviews were conducted by AB and EF. Participants were given a £50 voucher as a token of appreciation.

Interviews were transcribed by a third party. Transcripts were read and coded inductively by AB, following the process of thematic analysis.37 Analysis was approached from a position of critical realism,38 assuming that a cancer diagnosis exists independent of human observation, but there are perceptual layers, such as symptoms, signs, tests, and eRAT scores, which are an imperfect representation of that reality. Information power39 was considered iteratively after each interview and discussed with the research team.

NVivo software was used. Initial codes were broad, capturing any meaning associated with diagnostic decision making, risk, or uncertainty. Coded segments were grouped into an initial descriptive code framework. Codes and their groupings were discussed within the research team. The descriptive coding framework evolved throughout the process of coding, with particular code areas expanding as more data were added. Once all transcripts had been coded, a second researcher (EF) independently coded four transcripts using the coding framework. This was to explore if further meaning could be extracted from the data, or if any data related to the descriptive codes had been overlooked.

Descriptive codes were explored for overlap, or for uses of language and metaphor, that indicated a higher-level or more abstract meaning. From this, initial shared ideas or emergent themes were discussed between AB and the research team. The analysis then developed into the reported themes.

The ERICA trial had an established patient and public involvement and engagement (PPIE) group from the outset. The group had input into key aspects of trial design, including development of the interview topic guides. They have also been involved in trial oversight, with PPIE representation at trial group meetings, and have contributed to making sense of study findings; the emerging themes in this study were reviewed and discussed with the PPIE group before finalising.

Results

Eighteen GPs were recruited from a total of 70 invitations. Interviewees were not known to AB or EF. Characteristics of the 18 participants are shown in Table 1. The average interview length was approximately 32 minutes. Further supporting quotes are displayed in Supplementary Table S1.

Table 1. GP and practice participant characteristics

The following three themes were identified: the armoury; ‘three heads’ making a decision; and for whom is the eRAT output intended?

Theme 1: The armoury

This theme considers the risks and benefits of the eRATs and how they provide an additional layer of protection against missed diagnoses. This protection is for the patient, but also for the clinician. Participants recognised the principles on which eRATs work and were supportive of their objectives. The eRAT was 'part of our working armoury' (GP15), which 'spots things that you miss, because you’re thinking about so many other things' (GP09).

Its use in practice is more complicated. The armour has a cost. Armour is heavy and restricts movement: the eRATs hinder a clinician’s freedom of movement and add to the complexity of the consultation.

Participants were ambivalent about whether the costs were worth the benefits, and felt that over time this could lead to alert fatigue:

'... your resistance to them would gradually go down and down and down, and after a while you’d be like, what? Not really noticing.' (GP15)

There was a perception that this eRAT armour was often redundant: 'it’s really very difficult to sort out the actual signal from the noise' (GP15). 'Noise' here could be because the patient had already undergone, or was waiting for, investigations for the cancer that an eRAT had flagged:

'I can ignore that prompt because I’m satisfied that those features that are triggering it have already been reviewed, actioned, or referred … ' (GP02)

'Noise' also occurred when the component triggers were better explained by an alternative plausible narrative:

'I feel like I can justify that ... it reduces my concern.' (GP16)

Clinicians felt that the eRAT might be more helpful for GPs trained before the 2015 NICE guidelines,35 or those from practices with poor continuity of care.

Participants' understanding of the principles of the eRAT algorithms also led to scepticism about the protection they offer. Because the eRATs work from symptom coding, the algorithms are only as good as the input provided by the clinician, which contained 'massive variability. Completely unreliable' (GP05). Clinicians felt that the symptoms they coded were likely to be ones they already considered significant:

'Symptoms such as diarrhoea. There’s no way on God’s earth we are coding that regularly … ' (GP10)

This leads to a circular paradox, predisposing towards warnings of cancer risks that have already been addressed:

'I guess the problem about it is that when I’m … if I’m coding things which have a significant impact on a … on a possible diagnosis, I’m actually already working towards that diagnosis.' (GP15)

Theme 2: 'Three heads' making a decision

In this theme the eRATs introduce another actor into the consultation, changing the locus of decision making, and challenging clinical autonomy.

The traditional clinical consultation model involves a clinician and a patient (or someone acting as their agent, such as a caregiver or relative). In a consultation with an eRAT, participants felt the tool acted as a third actor, with its own priorities and agency:

'I think you have to be having almost three heads on suddenly. Thinking about, what could the computer be telling me? What can the patient be telling me and what am I thinking?' (GP09)

This increased the complexity of the consultation, as well as adding uncertainty about whose priorities were most important and about 'where does the responsibility lie?' (GP09).

Most participants felt that the patient's agenda should have priority over other considerations, but that the eRATs could threaten this:

'There’s so many things … which take you away from why the patient has actually come to see you … and it’s another thing. We've kind of had enough of that.' (GP14)

Some clinicians felt their professional identity was threatened by the eRATs. They felt the eRATs undermined the usefulness of clinical judgement and shifted the diagnostic process into the administrative realm, away from their medical expertise:

'... if we just turn it into, you know, computer says yes, computer says no, at this level, what does this do to … you know, to our patients?' (GP15)

Theme 3: For whom is the eRAT output intended?

The final theme raises fundamental issues about the communication and understanding of numerical risk, and about patient autonomy in relation to these. Participants raised the question of who should be informed about the risk information. They felt conflicted when discussing the numerical results of the eRATs with patients. There was a recognition that, in some cases, it added transparency to the discussion of risk and could be used to share diagnostic thinking and responsibility. Participants reflected on the 3% risk threshold for USC referrals, considering the balance between motivating patients to attend their appointments and reassuring them about the low risk of having cancer.

A more common perspective was that numerical risk information was not appropriate for patients:

'I wouldn't normally converse in percentage terms with patients.' (GP01)

Participants felt that patients' understanding of numerical risk was poor, and that sharing it could cause anxiety while restricting the clinician's freedom:

'I feel patients get very hung up on numbers sometimes, so I try not to … get myself nailed down too much.' (GP13)

Some participants felt that there was an ethical imperative to share the numerical risk with the patient, as it is 'their information ... it’s their risk' (GP16). Others, in contrast, thought that this information was intended for the clinician alone:

'… like, this is for me to use, it’s not for the patient to use … ' (GP09)

Participants were more comfortable sharing the individual symptoms and risk factors than the resulting numerical risk.

Participants felt that the categorical nature of clinical coding was inappropriate for subjective symptoms, and that translating symptoms into a diagnosis required clinical judgement.

Discussion

Summary

Participants appreciated the intention of the eRATs as a defence against missing a cancer diagnosis. In practice, the designed attributes of high sensitivity and automatic triggering create the issues that clinicians experience in their use. Initially, participants had the impression that the eRATs confirmed rather than suggested a diagnosis; this was followed by a recognition that they increased consultation complexity and challenged clinicians' autonomy in their communication and freedom of clinical reasoning.

Strengths and limitations

This study's primary strength lies in its use of data from clinicians actively using eRATs in clinical practice, drawn from diverse practice settings encompassing both high and low USC referral rates. Although the sample size was pre-specified by the design of the ERICA trial, participants were articulate professionals and the dialogue collected was thematically rich. Recurrent themes and patterns across participants emerged early in data analysis, supporting cross-case analysis and confidence in the adequacy of the sample's 'information power'.39

Potential limitations include the partial self-selection of participants. We could only interview ERICA users, introducing a potential bias towards 'early adopters'40 or individuals who prioritise cancer diagnosis. Clinicians unengaged with the eRATs, a potential cause of any null outcome in the ERICA trial, are not represented here. The study also necessarily involved participants who had only recently been introduced to eRATs. More prolonged exposure to any alert system may change a clinician’s interaction with that system through assimilation into normal routine.

The corresponding author (AB), a practising GP, can be considered an 'insider researcher'.41 This was made known at recruitment and had a positive impact on uptake. It may have improved data collection by fostering trust and deeper understanding with participants at interview, but could potentially introduce bias through personal assumptions or shared experiences.41,42

Comparison with existing literature

The eRATs’ model is supported by studies showing that diagnostic suggestions introduced early into the consultation are more effective in improving diagnostic accuracy,43 perhaps by avoiding anchoring bias (over-reliance on initial information or a working diagnosis) and egocentric advice discounting (dismissing advice that conflicts with one’s own beliefs).44 However, theme 3 explored clinicians’ misgivings about clinical coding and the use of subjective inputs to create a numerical risk score. Clinical reasoning can be described as a progression from subjective inputs through to a categorical diagnosis.45 eRATs, by inserting objective information early in a consultation, could disrupt this progression and challenge a clinician’s autonomy in selecting the objective tests or other information (such as scores) that they deem necessary. This challenge to a clinician’s 'epistemic authority' has been previously described.46 It may lead to disengagement with the eRAT, or to clinicians treating apparently objective information as subjective,47 undermining the purpose of the eRAT.

Theme 1 suggests that clinicians discount eRATs if they feel there is a more likely alternative diagnosis. The underlying logic of diagnosis is that evidence raising the probability of one disease reduces the probability of all remaining possibilities.48 Although eRATs provide a predictive population-based risk, there is less evidence of how alternative diagnoses should influence USC referrals. As most patients at the NICE referral threshold of 3% will not have cancer whether referred or not, clinicians do not have feedback with which to calibrate their decisions, an important part of improving diagnostic thinking.49
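To make this logic concrete, the toy calculation below shows how, in a normalised differential diagnosis, evidence that raises the probability of one condition necessarily shrinks the share of all remaining possibilities, including cancer. The prior probabilities, likelihood ratio, and diagnoses are invented for illustration and are not drawn from the study or from NICE guidance.

# Toy illustration of diagnostic renormalisation; all numbers are hypothetical.
def renormalise(priors, likelihood_ratios):
    """Bayesian-style update of a differential: multiply each prior by the
    likelihood ratio for the new evidence, then rescale so probabilities sum to 1."""
    posterior = {dx: p * likelihood_ratios.get(dx, 1.0) for dx, p in priors.items()}
    total = sum(posterior.values())
    return {dx: p / total for dx, p in posterior.items()}

priors = {"haemorrhoids": 0.70, "inflammatory bowel disease": 0.27, "colorectal cancer": 0.03}
# A finding that strongly supports haemorrhoids (likelihood ratio 5) pushes the
# cancer share below the 3% referral threshold, mirroring how an 'alternative
# plausible narrative' can discount an eRAT prompt.
print(renormalise(priors, {"haemorrhoids": 5.0}))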

One participant used the terminology of signal detection theory50 (and others employed similar language) to describe their interaction with the eRAT, referring to the 'signal-to-noise ratio'. If eRAT triggers are perceived to offer additional noise (and thus workload and cognitive burden) without discernible signal, then this may cause clinicians to disengage: a process described as 'alert fatigue'.51

Theme 3 explored views about sharing the risk output of the eRAT with the patient. Shared decision making in diagnosis, where clinicians discuss and decide with a patient how to proceed, is a laudable aim.52 When discussions regarding further risk scores or testing take place before the test is performed, the ethical duty of sharing the results with a patient is clear.53 When the diagnostic information has not been previously discussed, the clinician may feel less obliged to share it. In this study, participants weighed their ethical duty to communicate the eRAT against the increased complexity, anxiety, and uncertainty created.

Although the use of eRATs in cancer diagnosis is currently encouraged, there is a lack of evidence for their effectiveness, and the main ERICA trial aims to address this gap. The findings discussed here can be applied more generally. They highlight the challenges posed by an eRAT to the diagnostic process, including increased consultation complexity, difficulties in communicating numerical risk, and concerns about perceived clinical autonomy. In considering their use, GPs need to weigh the putative benefits against the findings presented.

Implications for research

The use of digital technology in medicine is increasing,54 and research into how clinicians and patients interact with computers is needed to maximise its potential. Further research is needed into how computer-generated risk information is received and processed by clinicians. It may be that new models of clinical reasoning are required that explicitly incorporate computer-generated risk scores and diagnostic suggestions at the outset, when a patient presents with a complaint.

Notes

Funding

Alex Burns is a PhD candidate in a post funded by the University of Exeter (PhD studentship reference: 3521) in conjunction with the Electronic RIsk-Assessment for CAncer (ERICA) trial (http://www.theericatrial.co.uk), which is funded by the Dennis and Mirelle Gillings Foundation, and receives support from the charities Cancer Research UK and Macmillan. Sarah Dean and Mark Tarrant’s time is partly supported by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South West Peninsula. The views expressed in this publication are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.

Ethical approval

Ethical approval was provided by London — City and East Research Ethics Committee (reference: 19/LO/0615).

Provenance

Freely submitted; externally peer reviewed.

Data

The dataset relied on in this article is available from the corresponding author on reasonable request.

Acknowledgements

The authors would like to acknowledge the patient and public involvement and engagement group, and all participants for their valuable time and contributions to this study.

Competing interests

The authors declare that no competing interests exist.

  • Received October 18, 2024.
  • Accepted November 4, 2024.
  • Copyright © 2025, The Authors

This article is Open Access: CC BY license (https://creativecommons.org/licenses/by/4.0/)

References

1. Berrino F, Sant M, Verdecchia A, et al. (1995) Survival of cancer patients in Europe: the EUROCARE study. IARC Scientific Publication no 132 (IARC Publications, Lyon).
2. Coleman MP, Forman D, Bryant H, et al. (2011) Cancer survival in Australia, Canada, Denmark, Norway, Sweden, and the UK, 1995–2007 (the International Cancer Benchmarking Partnership): an analysis of population-based cancer registry data. Lancet 377(9760):127–138. doi:10.1016/S0140-6736(10)62231-3
3. Berrino F, Capocaccia K, Estève J, et al. (1999) Survival of cancer patients in Europe: the EUROCARE-2 study. IARC Scientific Publication no 151 (IARC Publications, Lyon).
4. Arnold M, Rutherford MJ, Bardot A, et al. (2019) Progress in cancer survival, mortality, and incidence in seven high-income countries 1995–2014 (ICBP SURVMARK-2): a population-based study. Lancet Oncol 20(11):1493–1505. doi:10.1016/S1470-2045(19)30456-5
5. NHS England (2023) Cancer survival in England, cancers diagnosed 2016 to 2020, followed up to 2021. https://digital.nhs.uk/data-and-information/publications/statistical/cancer-survival-in-england/cancers-diagnosed-2016-to-2020-followed-up-to-2021 (accessed 10 Apr 2025).
6. Department of Health (2000) The NHS plan: a plan for investment, a plan for reform. https://webarchive.nationalarchives.gov.uk/ukgwa/20130107105354/http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_118522.pdf (accessed 10 Apr 2025).
7. Round T, Ashworth M, L’Esperance V, Møller H (2021) Cancer detection via primary care urgent referral and association with practice characteristics: a retrospective cross-sectional study in England from 2009/2010 to 2018/2019. Br J Gen Pract 71(712):e826–e835. doi:10.3399/BJGP.2020.1030
8. Meechan D, Gildea C, Hollingworth L, et al. (2012) Variation in use of the 2-week referral pathway for suspected cancer: a cross-sectional analysis. Br J Gen Pract 62(602):e590–e597. doi:10.3399/bjgp12X654551
9. Independent Cancer Taskforce (2015) Achieving world-class cancer outcomes: a strategy for England 2015–2020. https://www.cancerresearchuk.org/sites/default/files/achieving_world-class_cancer_outcomes_-_a_strategy_for_england_2015-2020.pdf (accessed 10 Apr 2025).
10. Usher-Smith J, Emery J, Hamilton W, et al. (2015) Risk prediction tools for cancer in primary care. Br J Cancer 113(12):1645–1650. doi:10.1038/bjc.2015.409
11. Chima S, Reece JC, Milley K, et al. (2019) Decision support tools to improve cancer diagnostic decision making in primary care: a systematic review. Br J Gen Pract 69(689):e809–e818. doi:10.3399/bjgp19X706745
12. Medina-Lara A, Grigore B, Lewis R, et al. (2020) Cancer diagnostic tools to aid decision-making in primary care: mixed-methods systematic reviews and cost-effectiveness analysis. Health Technol Assess 24(66):1–332. doi:10.3310/hta24660
13. Harada T, Miyagami T, Kunitomo K, Shimizu T (2021) Clinical decision support systems for diagnosis in primary care: a scoping review. Int J Environ Res Public Health 18(16):8435. doi:10.3390/ijerph18168435
14. NHS England (2023) Network Contract Directed Enhanced Service: early cancer diagnosis support pack. https://www.england.nhs.uk/wp-content/uploads/2023/03/PRN00157-ncds-early-cancer-diagnosis-support-pack.pdf (accessed 10 Apr 2025).
15. Buntinx F, Mant D, Van den Bruel A, et al. (2011) Dealing with low-incidence serious diseases in general practice. Br J Gen Pract 61(582):43–46. doi:10.3399/bjgp11X548974
16. O’Riordan M, Dahinden A, Aktürk Z, et al. (2011) Dealing with uncertainty in general practice: an essential skill for the general practitioner. Qual Prim Care 19(3):175–181.
17. Cooper N, Bartlett M, Gay S, et al. (2021) Consensus statement on the content of clinical reasoning curricula in undergraduate medical education. Med Teach 43(2):152–159. doi:10.1080/0142159X.2020.1842343
18. Croskerry P (2009) A universal model of diagnostic reasoning. Acad Med 84(8):1022–1028. doi:10.1097/ACM.0b013e3181ace703
19. Sutton RT, Pincock D, Baumgart DC, et al. (2020) An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 3:17. doi:10.1038/s41746-020-0221-y
20. McParland CR, Cooper MA, Johnston B (2019) Differential diagnosis decision support systems in primary and out-of-hours care: a qualitative analysis of the needs of key stakeholders in Scotland. J Prim Care Community Health 10:2150132719829315. doi:10.1177/2150132719829315
21. Moffat J, Ironmonger L, Green T (2014) Clinical decision support tool for cancer (CDS) project: evaluation report to the Department of Health. https://www.cancerresearchuk.org/sites/default/files/cds_final_310714.pdf (accessed 10 Apr 2025).
22. Dikomitis L, Green T, Macleod U (2015) Embedding electronic decision-support tools for suspected cancer in primary care: a qualitative study of GPs’ experiences. Prim Health Care Res Dev 16(6):548–555. doi:10.1017/S1463423615000109
23. Richardson S, Feldstein D, McGinn T, et al. (2019) Live usability testing of two complex clinical decision support tools: observational study. JMIR Hum Factors 6(2):e12471. doi:10.2196/12471
24. Burns A, Donnelly B, Feyi-Waboso J, et al. (2022) How do electronic risk assessment tools affect the communication and understanding of diagnostic uncertainty in the primary care consultation? A systematic review and thematic synthesis. BMJ Open 12(6):e060101. doi:10.1136/bmjopen-2021-060101
25. Akanuwe JNA, Black S, Owen S, Siriwardena AN (2021) Barriers and facilitators to implementing a cancer risk assessment tool (QCancer) in primary care: a qualitative study. Prim Health Care Res Dev 22:e51. doi:10.1017/S1463423621000281
26. Bradley PT, Hall N, Maniatopoulos G, et al. (2021) Factors shaping the implementation and use of clinical cancer decision tools by GPs in primary care: a qualitative framework synthesis. BMJ Open 11(2):e043338. doi:10.1136/bmjopen-2020-043338
27. Chiang P-C, Glance D, Walker J, et al. (2015) Implementing a QCancer risk tool into general practice consultations: an exploratory study using simulated consultations with Australian general practitioners. Br J Cancer 112(Suppl 1):S77–S83. doi:10.1038/bjc.2015.46
28. Price S, Spencer A, Medina-Lara A, Hamilton W (2019) Availability and use of cancer decision-support tools: a cross-sectional survey of UK primary care. Br J Gen Pract 69(684):e437–e443. doi:10.3399/bjgp19X703745
29. Stapley S, Peters TJ, Neal RD, et al. (2013) The risk of oesophago-gastric cancer in symptomatic patients in primary care: a large case-control study using electronic records. Br J Cancer 108(1):25–31. doi:10.1038/bjc.2012.551
30. Shephard EA, Stapley S, Neal RD, et al. (2012) Clinical features of bladder cancer in primary care. Br J Gen Pract 62(602):e598–e604. doi:10.3399/bjgp12X654560
31. Shephard E, Neal R, Rose P, et al. (2013) Clinical features of kidney cancer in primary care: a case-control study using primary care records. Br J Gen Pract 63(609):e250–e255. doi:10.3399/bjgp13X665215
32. Hamilton W, Peters TJ, Bankhead C, Sharp D (2009) Risk of ovarian cancer in women with symptoms in primary care: population based case-control study. BMJ 339:b2998. doi:10.1136/bmj.b2998
33. Cleary J, Peters TJ, Sharp D, Hamilton W (2007) Clinical features of colorectal cancer before emergency presentation: a population-based case-control study. Fam Pract 24(1):3–6. doi:10.1093/fampra/cml059
34. Hamilton W, Peters TJ, Round A, Sharp D (2005) What are the clinical features of lung cancer before the diagnosis is made? A population based case-control study. Thorax 60(12):1059–1065. doi:10.1136/thx.2005.045880
35. National Institute for Health and Care Excellence (2025) Suspected cancer: recognition and referral. NG12. https://www.nice.org.uk/guidance/ng12 (accessed 23 Apr 2025).
36. Hamilton W, Mounce L, Abel GA, et al. (2023) Protocol for a pragmatic cluster randomised controlled trial assessing the clinical effectiveness and cost-effectiveness of Electronic RIsk-assessment for CAncer for patients in general practice (ERICA). BMJ Open 13(3):e065232. doi:10.1136/bmjopen-2022-065232
37. Braun V, Clarke V (2022) Thematic analysis: a practical guide (SAGE Publications, London).
38. Bhaskar R (2010) Reclaiming reality: a critical introduction to contemporary philosophy (Routledge, Abingdon).
39. Malterud K, Siersma VD, Guassora AD (2016) Sample size in qualitative interview studies. Qual Health Res 26(13):1753–1760. doi:10.1177/1049732315617444
40. Zachrison KS, Yan Z, Samuels-Kalow ME, et al. (2021) Association of physician characteristics with early adoption of virtual health care. JAMA Netw Open 4(12):e2141625. doi:10.1001/jamanetworkopen.2021.41625
41. Teusner A (2022) Qualitative insider research. In: Atkinson P, ed. SAGE Research Methods Foundations. doi:10.4135/9781526421036845676
42. Berkovic D, Ayton D, Briggs AM, Ackerman IN (2020) The view from the inside: positionality and insider research. Int J Qual Methods 19. doi:10.1177/1609406919900828
43. Kourtidis P, Nurek M, Delaney B, Kostopoulou O (2022) Influences of early diagnostic suggestions on clinical reasoning. Cogn Res Princ Implic 7(1):103. doi:10.1186/s41235-022-00453-y
44. Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185(4157):1124–1131. doi:10.1126/science.185.4157.1124
45. Croskerry P, Campbell SG, Petrie DA (2023) The challenge of cognitive science for medical diagnosis. Cogn Res Princ Implic 8(1):13. doi:10.1186/s41235-022-00460-z
46. Grote T, Berens P (2020) On the ethics of algorithmic decision-making in healthcare. J Med Ethics 46(3):205–211. doi:10.1136/medethics-2019-105586
47. Haase CB, Ajjawi R, Bearman M, et al. (2023) Data as symptom: doctors’ responses to patient-provided data in general practice. Soc Stud Sci 53(4):522–544. doi:10.1177/03063127231164345
48. Aberegg SK, Johnson SA (2020) When alternative diagnoses are more likely than pulmonary embolism: a paradox. Ann Am Thorac Soc 17(6):670–672. doi:10.1513/AnnalsATS.201908-590IP
49. Meyer AND, Singh H (2019) The path to diagnostic excellence includes feedback to calibrate how clinicians think. JAMA 321(8):737–738. doi:10.1001/jama.2019.0113
50. Swets JA, Dawes RM, Monahan J (2000) Psychological science can improve diagnostic decisions. Psychol Sci Public Interest 1(1):1–26. doi:10.1111/1529-1006.001
51. McGreevey JD 3rd, Mallozzi CP, Perkins RM, et al. (2020) Reducing alert burden in electronic health records: state of the art recommendations from four health systems. Appl Clin Inform 11(1):1–12. doi:10.1055/s-0039-3402715
52. Berger ZD, Brito JP, Ospina NS, et al. (2017) Patient centred diagnosis: sharing diagnostic decisions with patients in clinical practice. BMJ 359:j4218. doi:10.1136/bmj.j4218
53. Makins N (2023) Patients, doctors and risk attitudes. J Med Ethics 49(11):737–741. doi:10.1136/jme-2022-108665
54. Alowais SA, Alghamdi SS, Alsuhebany N, et al. (2023) Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 23(1):689. doi:10.1186/s12909-023-04698-z
55. Eayres D (2008) Technical briefing 3: commonly used public health statistics and their confidence intervals. https://www.ukiacr.org/publication/technical-briefing-3-commonly-used-public-health-statistics-and-their-confidence (accessed 23 Apr 2025).