Analysis: Quality Improvement

Revitalising audit and feedback to improve patient care

BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.m213 (Published 27 February 2020) Cite this as: BMJ 2020;368:m213
  1. Robbie Foy, professor of primary care1,
  2. Mirek Skrypak, associate director for quality and development2,
  3. Sarah Alderson, clinical lecturer in primary care and Wellcome ISSF fellow1,
  4. Noah Michael Ivers, clinician scientist3,
  5. Bren McInerney, community volunteer4,
  6. Jill Stoddart, director of operations2,
  7. Jane Ingham, chief executive officer2,
  8. Danny Keenan, medical director2
  1. 1Leeds Institute of Health Sciences, Leeds, UK
  2. 2Healthcare Quality Improvement Partnership, London, UK
  3. 3Women’s College Hospital, Toronto, ON, Canada
  4. 4Gloucestershire, UK
  1. Correspondence to: M Skrypak mirek.skrypak{at}hqip.org.uk

Audit and feedback are widely used in quality improvement. Robbie Foy and colleagues argue that their full potential to improve patient care could be realised through a more evidence based and imaginative approach

Key messages

  • Clinical audit and feedback entail reviewing clinical performance against explicit standards and delivering feedback to enable data driven improvement

  • The impact of audit could be maximised by applying implementation science, considering the needs of clinicians and patients, and emphasising action over measurement

  • Embedding research on how to improve audit and feedback in large scale programmes can further enhance their effectiveness and efficiency

Healthcare systems face challenges in tackling variations in patient care and outcomes.12 Audit and feedback aim to improve patient care by reviewing clinical performance against explicit standards and directing action towards areas not meeting those standards.3 They are a widely used, foundational component of quality improvement and feature in around 60 national clinical audit programmes in the United Kingdom.

Ironically, there is currently a gap between what audit and feedback can achieve and what they actually deliver, whether led locally or nationally. Several national audits have been successful in driving improvement and reducing variations in care, such as for stroke and lung cancer, but progress is also slower than hoped for in other aspects of care (table 1).45 Audit and feedback have a chequered past.6 Clinicians might feel threatened rather than supported by top-down feedback and rightly question whether rewards outweigh efforts invested in poorly designed audit. Healthcare organisations have limited resources to support and act on audit and feedback. Dysfunctional clinical and managerial relationships undermine effective responses to feedback, particularly when it is not clearly part of an integrated approach to quality assurance and improvement. Unsurprisingly, the full potential of audit and feedback has not been realised.

Table 1

Examples of national clinical audit programmes and randomised trials evaluating audit and feedback


Clinical, patient, and academic communities might need to have more sophisticated conversations about audit and feedback to achieve substantial, data driven, continuous improvement. They can also act now. There are ways to maximise returns from the considerable resources, including clinician time, invested in audit programmes. These include applying what is already known, paying attention to the whole audit cycle, getting the right message to the right recipients, making more out of less data, embedding research to improve impact, and harnessing public and patient involvement.

Apply what is already known

Audit and feedback generally work. A Cochrane review of 140 randomised trials found that they produced a median 4.3% absolute improvement (interquartile range 0.5% to 16%) in healthcare professionals’ compliance with desired practice, such as recommended investigations or prescribing.3 This is a modest effect, but cumulative incremental gains through repeated audit cycles can deliver transformative change. Scaled up national programmes also give audit and feedback a population reach that other quality improvement approaches (such as financial incentives or educational outreach visits) might not achieve with similar resources; for example, social norm feedback (presenting information to show that individuals are outliers in their behaviour) from a high profile messenger can reduce antibiotic prescribing in primary care at low cost and at national scale (table 1).7
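
As a rough illustration of how social norm feedback of the kind described above might be generated, the sketch below (in Python) flags prescribers whose antibiotic prescribing rate sits above the 80th centile of their peers and drafts an outlier message. The practices, rates, threshold, and wording are hypothetical and are not drawn from the national programme cited in table 1.

```python
# Illustrative sketch of social norm feedback: all names, rates, and the
# 80th centile threshold are hypothetical, not real programme data.
from statistics import quantiles

# Antibiotic items prescribed per 1000 registered patients (made-up figures)
prescribing_rates = {
    "Practice A": 192,
    "Practice B": 316,
    "Practice C": 248,
    "Practice D": 410,
    "Practice E": 225,
}

# Quintile cut points of peer rates; the last cut point is the 80th centile
p80 = quantiles(prescribing_rates.values(), n=5)[-1]

for practice, rate in prescribing_rates.items():
    if rate > p80:
        # Social norm message: position the recipient relative to their peers
        print(f"{practice}: your rate of {rate} items per 1000 patients is "
              f"higher than 80% of comparable practices (80th centile {p80:.0f}).")
```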

The interquartile range in the Cochrane review indicates that a quarter of audit and feedback interventions had a relatively large, positive effect of up to 16% on patient care, whereas a quarter had a negative or null effect. The effects of feedback can be amplified by ensuring that it is given by a supervisor or colleague, provided more than once, delivered in both verbal and written formats, and includes both explicit targets for change and action plans.3 A synthesis of expert interviews and systematic reviews identified 15 “state of the science,” theory informed suggestions for effective feedback (box 1).8 These are practical ways to maximise the impact and value of existing audit programmes.

Box 1

Questions for audit programmes and healthcare organisations to consider in designing, implementing, and responding to audit and feedback8

Nature of the desired action

  • Can you recommend actions that are consistent with established goals and priorities?

  • Can you recommend actions that can improve and are under the recipient’s control?

  • Can you recommend specific actions?

Nature of the data available for feedback

  • Can you provide multiple instances of feedback?

  • Can you provide feedback as soon as possible, at a frequency informed by the number of new patient cases?

  • Can you provide individual rather than general data?

  • Can you choose comparators that reinforce desired behaviour change?

Feedback display

  • Can you closely link the visual display and summary message?

  • Can you provide feedback in more than one way?

  • Have you minimised extraneous cognitive load for feedback recipients?

Delivering feedback

  • Have you addressed barriers to feedback use?

  • Can you provide short, actionable messages followed by optional detail?

  • Have you addressed credibility of the information?

  • Can you prevent defensive reactions to feedback?

  • Can you construct feedback through social interaction?


Pay attention to the whole cycle

The audit and feedback process comprises one or more cycles of establishing best practice criteria, measuring current practice, feeding back findings, implementing changes, and further monitoring. This chain is only as strong as its weakest link. Feedback effects can be weakened by information-intention gaps (feedback fails to convince recipients that change is necessary), intention-behaviour gaps (intentions are not translated into action), or behaviour-impact gaps (actions do not yield the desired effect on patient care).9 The success of national audit programmes depends on local arrangements that promote action as well as measurement.10
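
To make the measurement step of the cycle concrete, the sketch below computes compliance against a single explicit criterion from a handful of patient records and flags the gap against a target. The criterion, field names, records, and 90% target are illustrative assumptions, not any specific national audit standard.

```python
# Illustrative sketch of the measurement step in one audit cycle.
# The records, criterion, and 90% target are hypothetical.
records = [
    {"patient_id": 1, "criterion_met": True},
    {"patient_id": 2, "criterion_met": False},
    {"patient_id": 3, "criterion_met": True},
    {"patient_id": 4, "criterion_met": True},
]

TARGET = 0.90  # explicit standard agreed before data collection

met = sum(r["criterion_met"] for r in records)
compliance = met / len(records)

print(f"Compliance: {compliance:.0%} ({met}/{len(records)}) "
      f"against a target of {TARGET:.0%}")
if compliance < TARGET:
    # The next links in the chain: feed back, agree actions, re-measure
    print("Gap identified: feed back findings, agree an action plan, and re-audit.")
```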

A synthesis of 65 qualitative evaluations proposed ways of designing audit programmes to better align with local capacity, identity, and culture and to promote greater changes in clinical behaviour.11 Healthcare organisations have finite capacity, so audit programmes should be designed so that they require less work, make best use of limited local resources, and clearly state why any investment is justified. Clinician beliefs about what constitutes best practice can influence how they respond to feedback, so audit programmes need to consider these while also challenging the status quo. All aspects of audit programmes should be designed with a focus on the desired changes in behaviour by recipients to achieve better outcomes; for example, feedback tackling unnecessary blood transfusions could include suggested alternative approaches to minimise blood loss during surgery.12 Because the purpose of an audit programme is not measurement alone but using data to inform quality improvement, we need to understand existing barriers to desired change and have a plan for how feedback helps to tackle those barriers.

Without functioning local networks and systems, national audit programmes can become echo chambers, where good intentions and blame for limited progress reverberate. Audit and feedback will flounder if local quality improvement is based on repeated, unconnected, and inappropriately delegated projects conducted in isolation from mainstream pursuits and if any learning is dissipated in collective amnesia. Clinical and managerial leaders should ask questions about their organisational performance in response to feedback (box 2)13 and set clear goals, mobilise resources, and promote continuous improvement.14 Audit and feedback by themselves cannot solve ingrained deficiencies but can emphasise priorities for change, inform focused actions, and evaluate progress.

Box 2

Questions that healthcare organisations can ask themselves about performance13

  • Do we know how good we are?

  • Do we know where we stand relative to the best?

  • Do we know where and understand why variation exists in our organisation?

  • Over time, where are the gaps in our practice that indicate a need for change?

  • In our efforts to improve, what’s working?


Get the right message to the right recipients

Feedback comparing performance among different healthcare organisations and clinicians can leverage competitive instincts. This might not always work as intended. Nobody likes being told they are getting it wrong, repeatedly. Yet this is how clinicians and organisations often experience feedback suggesting suboptimal performance. Low baseline performance is associated with greater improvement after feedback3 but can elicit defensive reactions (“I don’t believe these data”), especially if feedback does not align with recipient perceptions (“My patients are different”). Such responses are not uncommon given that clinicians tend to overestimate their own performance.15 Continued negative feedback perceived as punitive can also be demotivating and risk creating burnout (“What else can I do?”).

Giving feedback to professionals who take pride in their work requires careful thought. Consider, for example, providing feedback to high performers—will positive feedback lead to reduced effort or increase motivation? Should audit programmes switch attention to new topics where performance is poorer, at risk of inducing fatigue in higher performers? Given the law of diminishing returns, attempts to improve already high levels of performance might be less fruitful than switching attention to other priorities. Many clinical actions have a “ceiling” beyond which improvement is restricted because healthcare organisations or clinicians are functioning at or near their maximum capabilities.

A range of approaches can help tailor feedback to recipients’ needs. First, feedback can include comparators that show like for like (such as similar types of organisations with similar case mixes) and set realistic goals for change relative to performance levels (such as lower but more achievable targets for poorer performers). Second, feedback can be delivered alongside a range of tangible action plans to support improvement; for example, an implementation toolbox improved pain management in intensive care units.1617 Third, new audit criteria need to be convincing, based on robust evidence and with scope for patient and population benefit.
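
One way of operationalising like-for-like comparators and realistic goals is sketched below: organisations are grouped by a hypothetical case-mix band, compared against the median of their own band, and poorer performers are offered an interim target that closes half the gap to that median. The bands, figures, and the half-the-gap rule are illustrative assumptions rather than an established method.

```python
# Illustrative sketch of like-for-like comparators with achievable interim targets.
# The case-mix bands, performance figures, and half-the-gap rule are hypothetical.
from collections import defaultdict
from statistics import median

organisations = [
    {"name": "Trust A", "band": "high case mix", "performance": 0.62},
    {"name": "Trust B", "band": "high case mix", "performance": 0.78},
    {"name": "Trust C", "band": "high case mix", "performance": 0.71},
    {"name": "Trust D", "band": "low case mix", "performance": 0.91},
    {"name": "Trust E", "band": "low case mix", "performance": 0.70},
    {"name": "Trust F", "band": "low case mix", "performance": 0.85},
]

# Group organisations with broadly similar case mix
by_band = defaultdict(list)
for org in organisations:
    by_band[org["band"]].append(org["performance"])

for org in organisations:
    peer_median = median(by_band[org["band"]])
    if org["performance"] < peer_median:
        # Interim goal: close half the gap to the median of comparable peers
        target = org["performance"] + 0.5 * (peer_median - org["performance"])
        print(f"{org['name']}: {org['performance']:.0%} vs peer median "
              f"{peer_median:.0%}; suggested interim target {target:.0%}")
```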

Make more out of less data

Healthcare organisations and clinicians need to juggle competing priorities and therefore struggle to act on all feedback from national and local audit programmes. A 2012 snapshot identified 107 National Institute for Health and Care Excellence clinical guidelines relevant to primary care, resulting in 2365 recommendations.18 Audit programmes can help to identify which recommendations have the greatest potential to benefit patients and populations.

One of the highest costs associated with audit programmes is the time and effort involved in data collection, particularly the manual review of patient records. The burden of this data collection can be compounded by temptations to add in more variables for analyses that marginally improve precision.19 The resulting feedback might reinforce the credibility of the data and enable recipients to explore associations within it. Providing larger amounts of complex data, however, risks cognitive overload and can distract recipients from key messages. The diminishing returns of continuing efforts to perfect data come at the expense of focusing energy on improvement.19

The increasing availability of electronic patient record systems and of routinely collected data on quality of care offers opportunities for large scale, efficient feedback programmes. Such approaches offer greater population coverage, which can overcome risks of biased sampling associated with manual review, such as the loss of records of patients with poorer outcomes. Routine data can also be collected and analysed in real time, thereby enabling faster, continuous feedback and countering objections voiced by clinicians (“These data are out of date”).

Data quality is only as good as coding at the point of care. Validity checks and quality control of the data might compound the burden on clinical teams. Data linkage and extraction across different information systems require compliance with data protection and information governance requirements. Even with all this in place, we must acknowledge the adage, often attributed to Einstein, that not everything that counts can be counted, and not everything that can be counted counts.

Embed research to improve impact

Poor research design, conduct, and dissemination contribute to “research waste.”20 Implementation science aims to translate research evidence into routine practice and policy but is also affected by research waste. A cumulative meta-analysis of the Cochrane review of audit and feedback indicated that the effect size stabilised in 2003 after 30 trials.21 By 2011, 47 more trials of audit and feedback versus control were published that did not substantially advance knowledge, many omitting feedback features likely to enhance effectiveness. This indicated a growing literature but “stagnant science.”

Implementation laboratories offer a means of enhancing the impact of audit and feedback while also producing generalisable knowledge about how to optimise effectiveness.22 A “radical incrementalist” approach entails making serial, small changes, supported by tightly focused evaluations, to cumulatively improve outcomes.23 It is already used in public policy and in business: Amazon and eBay randomly assign potential customers to different presentations of their products online to understand what drives purchases. It is also applicable to healthcare24 and can help answer many questions about how best to organise and deliver feedback (for example, does showing an organisation’s position against top performing peers stimulate more improvement than showing its position against average performance? What is the effect of shorter versus longer feedback reports? Does adding persuasive messages have any effect?). Embedding sequential head-to-head trials testing different feedback methods in an audit programme provides a robust empirical driver for change. Modifications identified as more effective than the current standard become the new standard; those that are not are discarded.
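
A minimal sketch of the embedded head-to-head trial idea follows: feedback recipients are randomly allocated to a standard or a modified report format and the change in a compliance measure is compared between arms. The teams, effect sizes, and outcome are simulated for illustration; a real implementation laboratory would typically use cluster randomisation and formal statistical analysis.

```python
# Illustrative sketch of an embedded two-arm trial of feedback formats,
# with simulated data standing in for a real audit programme.
import random

random.seed(1)

teams = [f"team_{i}" for i in range(200)]
random.shuffle(teams)
standard_arm, modified_arm = teams[:100], teams[100:]  # random allocation

# Hypothetical percentage-point change in compliance after feedback; we assume
# (purely for illustration) that the modified report has a slightly larger effect.
change = {team: random.gauss(4.0, 5.0) for team in standard_arm}
change.update({team: random.gauss(6.0, 5.0) for team in modified_arm})

mean_standard = sum(change[t] for t in standard_arm) / len(standard_arm)
mean_modified = sum(change[t] for t in modified_arm) / len(modified_arm)

print(f"Standard report: {mean_standard:.1f} point improvement")
print(f"Modified report: {mean_modified:.1f} point improvement")
print(f"Difference: {mean_modified - mean_standard:.1f} points")
# A modification shown to outperform the current standard becomes the new
# standard; one that does not is discarded.
```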

Harness public and patient involvement

Healthcare providers and researchers are still learning how to work meaningfully with patients and the public, and there are opportunities in audit programmes. This means moving beyond current models of involvement—typically advisory group roles to ensure accountability and contribute to strategy—towards active participation in feedback and service improvement.

Patients and the public are often surprised by the extent of unwarranted variations in healthcare delivery, which is the core business of audit programmes.25 They express frustration at the difficulties in routinely measuring less technical aspects of care, such as consultation skills and patient centredness. Involving patients and the public, including seldom heard communities, early in the process of developing indicators is important. Audit programmes can be at the forefront of innovating and evaluating different approaches to involvement, asking questions such as, does incorporating the patient voice in feedback lead to greater improvement? Can feedback reports be better designed to improve understanding for both lay and professional board members of healthcare organisations? Patients and the public represent an underexplored and untapped force for change, which audit programmes can learn to harness.

Conclusion

Audit and feedback are widely used, sometimes abused, and often under-realised in healthcare. More imaginative design and responses are overdue; these require evidence informed conversations between clinicians, patients, and academic communities. It is time to fully leverage national audits to accelerate data guided improvement and reduce unwarranted variations in healthcare. The status quo is no longer ethical.

Footnotes

  • Contributors and sources: RF, SA, and NMI are general practitioners and implementation researchers with international experience of designing and evaluating large scale audit and feedback programmes. MS, JS, JI, and DK work for the Healthcare Quality Improvement Partnership, a charity led by the Academy of Medical Royal Colleges, the Royal College of Nursing, and National Voices. MS has experience in local, regional, and national delivery of quality improvement programmes, including commissioning of national clinical audits. BM is a service user with involvement, activation, and empowerment expertise in quality improvement and health equality programmes. JS has operational expertise and leadership in the design of national clinical audit programmes in the UK and abroad and is a non-executive director at the Mid Essex Hospital Trust. JI has leadership and strategic management expertise of healthcare providers and charities in quality improvement, including clinical audit. DK has expertise in leading clinical, regulatory, executive participation in national clinical audit and patient outcome programmes driving local quality improvement in acute hospitals in the UK. RF and MS drafted the initial manuscript. All authors contributed to and commented on subsequent drafts and approved the final manuscript. RF is the guarantor.

  • NMI is supported by the Department of Family and Community Medicine at the University of Toronto and by a Canada Research Chair in Implementation of Evidence Based Practice.

  • Patient involvement: BM coauthored the manuscript and emphasised the need to focus on tackling unwarranted variations in healthcare delivery and involve a diverse range of patients and members of the public in improving national audit programmes.

  • This article is one of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ, including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ’s quality improvement editor post are funded by the Health Foundation.

  • Competing interests: We have read and understood BMJ policy on declaration of interests and have the following interests to declare: JI, JS, DK, and MS declare that they commission the NCAPOP on behalf of NHS England and Welsh government. The other authors declare no competing interests.


This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
