Changing the Message to Change the Response – Psychological Framing Effects During COVID-19

The way in which a government communicates can shape people’s responses. Psychological and behavioural research reveals that the same objective information can elicit different responses when presented in different ways, an effect called ‘framing’.[1] For example, one study compared describing blood donations as either a way to “prevent a death” or “save a life”.[2] While preventing death and saving life are two sides of the same coin, “prevent a death” triggered more donations. These results are explained, at least in part, by a prevalent loss-aversion bias. As Kahneman and Tversky (1979) explain: losses loom larger than gains.[3] 

In 1981, Tversky and Kahneman asked people to imagine that the US was preparing for a disease outbreak expected to kill 600 people.[1] Participants were asked to choose between two government programmes. One group considered saving lives: given programme A, 200 lives would be saved; given programme B, there was a 1/3 probability that 600 lives would be saved and a 2/3 probability that no lives would be saved. Although the two programmes are mathematically equivalent, 72% preferred programme A (109/152 participants). A second group considered preventing deaths: given programme C, 400 people would die; given programme D, there was a 1/3 probability that nobody would die and a 2/3 probability that 600 people would die. This time, 78% chose programme D (121/155 participants). Flipping the vocabulary coin flipped people’s preferences.
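
The equivalence is easy to verify: every programme has an expected outcome of 200 lives saved (equivalently, 400 deaths). Here is a minimal sketch in Python – our illustration, not part of the original study:

```python
# Expected lives saved out of 600 at risk, for each programme.
def expected_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

programmes = {
    "A": [(1.0, 200)],            # 200 saved for certain
    "B": [(1/3, 600), (2/3, 0)],  # gamble on saving everyone
    "C": [(1.0, 600 - 400)],      # 400 die for certain, i.e. 200 saved
    "D": [(1/3, 600), (2/3, 0)],  # gamble on nobody dying
}

for name, outcomes in programmes.items():
    print(name, expected_saved(outcomes))  # all four print 200.0
```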

In March 2020, we set out to test whether these results would hold when applied to COVID-19. We created two scenarios with options identical to Tversky and Kahneman’s, but reworded to be about COVID-19 and social/physical distancing. The study was ethically approved, and in early July we invited UK participants via Prolific Academic to respond to a randomly allocated scenario. The data were collected in less than two hours. The pattern of results held – participants preferred programme A over B (21/30 = 70%) and D over C (23/30 = 77%). Interesting, but perhaps insufficient to inform how messages are presented to the public to influence more personal decisions, such as whether to allow visitors at home.
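
With only 30 participants per scenario, the uncertainty around these proportions is wide – one reason we treat this as a pilot. A minimal sketch of the confidence intervals, using the Wilson score interval (our choice of method, made for illustration):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

for label, k in [("A over B", 21), ("D over C", 23)]:
    lo, hi = wilson_ci(k, 30)
    print(f"{label}: {k}/30 = {k/30:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
# A over B: 21/30 = 70%, 95% CI (52%, 83%)
# D over C: 23/30 = 77%, 95% CI (59%, 88%)
```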

The UK government’s initial messaging strategy about personal decisions emphasised that people needed to stay home in order to “save lives”. A later campaign framed this differently, stressing that “people will die” if they go out. Does flipping the vocabulary coin matter here? We, and others, suspect that it does. There have been several opinion pieces on psychologically informed messaging,[4] although we are unaware of any published research that has tested framing effects in the context of COVID-19.

We created six further personal scenarios, varying across three situations and two frames. Participants were asked whether they would be willing to have a friend over (yes/no), attend a crowded work meeting (yes/no), and download a contact tracing app (yes/no). Each situation was framed in two ways – as a choice to save lives or to prevent deaths. An excerpt from the story about inviting a friend over is provided here:

Imagine that the town of Pleasantville… is preparing for the outbreak of the Coronavirus (COVID-19), which is expected to kill 600 people. They decide to adopt a social/physical distancing programme to prevent the spread of COVID-19 that is expected to [save 200 lives / prevent 400 deaths]. Social/physical distancing is when people reduce social interaction to stop the spread of a disease, such as by working from home and avoiding gatherings in public spaces. Your good friend calls you and says they want to come over to discuss the announcement… 

What do you say to your friend? Yes, come over / No, don’t come over

If losses loom larger than gains in more personal scenarios, then we should expect messages framed as ‘preventing death’ to have stronger effects across situations. The pilot results are shown in Figure 1. There was no substantial effect of message framing, although the situation made some difference. Nobody was willing to let a friend visit their home, some people said they would attend a work meeting, and the majority would download a contact tracing app.


Fig 1: Results of the study testing framing effects about saving lives versus preventing death

What can explain these results? One possibility is social desirability bias. People may wish to appear as if they would take action to prevent COVID-19 spreading, even if they would not in everyday life. 

Timing may also matter. When we conducted our study, people may have been sufficiently fearful of the consequences of COVID-19 that they were willing to comply with guidelines and recommendations, regardless of the message framing. It is possible that earlier on in the pandemic, we would have found different results.  

Another explanation is that, unlike the government programme scenarios, the options in the more personal scenarios did not contrast a certain outcome with a probabilistic one. In the government programme scenarios, when the options were framed as saving lives, participants wanted to secure the safe-but-sure option. One participant explained their response by saying, “The 1/3 probability means the same 200 die but the [other] option appears to guarantee saved lives”. Conversely, when the options were framed negatively, people wanted to roll the proverbial dice. One participant explained that, “The overall odds are the same but the chance for no one dying is worthwhile”. In contrast, the risk involved in personal decisions is uncertain, because many outcomes of COVID-19 are themselves uncertain. It may be that loss aversion is more pronounced when people make policy choices between certain and probabilistic outcomes.
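
This pattern – preferring the sure option for gains but the gamble for losses – is what the S-shaped value function of prospect theory predicts. A minimal sketch, using the median parameter estimates from Tversky and Kahneman’s 1992 follow-up work and ignoring probability weighting for simplicity (our illustration, not an analysis we ran on participants):

```python
# Prospect theory value function: concave for gains, convex for losses,
# with losses weighted more heavily than gains (loss aversion).
ALPHA, LAMBDA = 0.88, 2.25  # median estimates, Tversky & Kahneman (1992)

def value(x):
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**ALPHA

# Gain frame: sure thing (A) vs gamble (B); expected values are identical.
print(value(200), (1/3) * value(600))    # ~105.9 vs ~92.8: A preferred
# Loss frame: the same outcomes described as deaths (C vs D).
print(value(-400), (2/3) * value(-600))  # ~-438.5 vs ~-417.7: D preferred
```

Note that the curvature (diminishing sensitivity) alone reproduces the reversal; the loss-aversion parameter cancels when both options are losses.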

Our study only scratches the surface of possibilities for message testing. We wonder what research might have shown about alternatives to ‘Stay Alert’. Perhaps some of the criticism it attracted could have been avoided, for example with messages to help manage the anxieties associated with the uncertainty of lifting a lockdown. Certainly, public messages can be tested efficiently before they are publicly disseminated – even during a crisis.

Laura Kudrna (Research Fellow) and Kelly Ann Schmidtke (Assistant Professor)


References:

  1. Tversky A, Kahneman D. The Framing of Decisions and the Psychology of Choice. Science. 1981; 211(4481): 453-8.
  2. Chou EY, Murnighan JK. Life or Death Decisions: Framing the Call for Help. PLoS ONE. 2013; 8(3): e57351.
  3. Kahneman D, Tversky A. Prospect Theory: An Analysis of Decision Under Risk. Econometrica. 1979; 47(2): 263-92.
  4. Halpern SD, Truog RD, Miller FG. Cognitive Bias and Public Health Policy During the COVID-19 Pandemic. JAMA. 2020.

N.B. This blog post has also been cross-posted at: blogs.lse.ac.uk/politicsandpolicy/changing-the-message-to-change-the-response-psychological-framing-effects-during-covid-19/

Walking Through the Digital Door: Video Consultations During COVID-19 and Beyond

The “NHS Long-Term Plan” (2019) describes how NHS services should be redesigned over the next decade. This includes making better use of digital technologies, such as video consultations. While video consultations have potential advantages for patients and hospital systems,[1] they may make patients uncomfortable. If patients do not walk through the ‘digital door’ to attend a video consultation, then those potential advantages cannot be realised. The motto “build it and they will come” is likely insufficient. Instead, we need to support patients so that they come the first time and return after that.

What support patients need is, at least in part, an empirical question that we plan to address in a future study. One way to support attendance may be the behavioural science principle of ‘defaults’ – people tend to ‘go with the flow’ of pre-set options.[2] Defaults have been used to influence organ donations by adding the word ‘don’t’ to an application, i.e. “If you want to be an organ donor, please check here” vs. “If you don’t want to be an organ donor, please check here”. In a simulated study, 42% of people opted in to become organ donors given the first phrasing, and 82% did not opt out given the second.[3] In other words, the realised organ donation rate nearly doubled by changing the default option. Until April 2020, England had an opt-in system, with 38% of people having opted in to become organ donors. When England’s law changed to an opt-out system in May 2020, the assumed donor rate increased instantly. Time will tell how many people fill out the form to opt out, but the present authors suspect the resultant donor rate will remain higher than 38%.

Defaults have been used to influence people’s behaviour in many contexts, e.g. how much money people save for retirement,[4] physicians’ medication use,[5] and purchases of healthy foods.[6] At least three psychological mechanisms are at play: endorsement (believing the proposed default is recommended), endowment (believing the default is normal), and ease (taking up the proposed default is simpler than refusing it).[7,8] Re-framing an invitation to attend an outpatient appointment from ‘in-person’ to ‘video’ creates a new default ‘endorsed’ mode of attendance that is ‘easier’ to accept than refuse. However, if a substantial number of patients refuse an invitation to attend a video consultation, this would suggest that more support is needed to garner people’s acceptance.

An ideal experimental test of the default effect on outpatient appointment attendance would occur in a field setting, similar to our work on influenza vaccination letters.[9] But, without tremendous follow-up efforts, this approach provides limited ability to explore the barriers and facilitators that patients believe influence their choices. These beliefs undoubtedly influence whether patients attend. To explore how default options and beliefs influence whether patients accept an invitation to attend a video consultation, we will conduct a simulated study with participants from Prolific Academic, a platform of thousands of people prepared to answer researchers’ questions, who can be filtered on criteria such as health status, age, and education. Our research will utilise an online experiment with quantitative and qualitative items. We plan to compare our findings with real hospital data on video consultations before and after COVID-19, which may have provided the impetus for more patients to engage in digital healthcare.

Conversations with researchers across ARC WM’s themes and with public contributors suggest several barriers and facilitators to the uptake of video consultations. For instance, while the location of an in-person consultation is obvious, video consultations require patients to make an additional choice about where they feel comfortable attending. Whether they attend from home or work, new privacy concerns arise regarding what other people can overhear across physical and digital space. Our research will show how much such concerns matter to patients, and suggest what additional support should be offered within the invitation itself to increase attendance. If COVID-19 hasn’t provided the push that patients need to walk through the digital door, this research will help us understand why. Equally, if it has, we will be better equipped to sustain and expand the shift, and in so doing help realise the NHS Long-Term Plan.

Kelly Ann Schmidtke (Assistant Professor) and Laura Kudrna (Research Fellow)


References:

  1. Greenhalgh T, et al. Virtual Online Consultations: Advantages and Limitations (VOCAL) Study. BMJ Open 2016; 6: e009388. 
  2. Dolan P, et al. Influencing Behaviour: The Mindspace Way. J Econ Psychol. 2012; 33(1): 264-77.
  3. Johnson EJ, Goldstein D. Do Defaults Save Lives? Science. 2003; 302(5649): 1338-9. 
  4. Madrian BC, Shea DF. The Power of Suggestion: Inertia in 401(k) Participation and Savings Behaviour. Q J Econ. 2001; 116(4):1149–87. 
  5. Ansher C, et al. Better Medicine by Default. Med Decis Making. 2014; 34(2):147-58. 
  6. Peters J, et al. Using Healthy Defaults in Walt Disney World Restaurants to Improve Nutritional Choices. J Assoc Consum Res. 2016; 1(1): 92-103.
  7. Jachimowicz JM, et al. When and Why Defaults Influence Decisions: a Meta-Analysis of Default Effects. Behav Public Policy. 2019; 3(2): 159-86. 
  8. Dinner I, et al. Partitioning Default Effects: Why People Choose Not to Choose. J Exp Psychol Appl. 2011; 17(4): 332-41. 
  9. Schmidtke KA, et al. Randomised controlled trial of a theory-based intervention to prompt front-line staff to take up the seasonal influenza vaccine. BMJ Qual Saf. 2020; 29(3): 189-97.

The Holy Grail of Quality Measurement

Writing in JAMA, Austin and Kachalia argue for automation of quality measurements.[1] We ourselves have argued that the proliferation of routine quality measures is getting out of hand.[2]

The authors argue, as we have, that using quality measures to incentivise organisations is a blunt tool, subject to gaming. Far better is to use quality measures in real time to prompt doctors to provide high-quality care.

In fact, this is what computerised decision support offers, and there is considerable empirical support for this type of decision tool. Working with Prof Aziz Sheikh and colleagues, NIHR ARC West Midlands has investigated decision support for prescribing,[3] and we are now investigating its use in antibiotic stewardship.[4] We are entirely in support of using decision support to improve care in real time.

However, we question the idea that the majority of healthcare can be guided by online decision support. Working with Prof Timothy Hofer in Michigan, ARC WM co-investigators have shown that measurement of the quality of hospital care is extremely unreliable.[5] Kappa measures of agreement between reviewers were about 0.2, meaning that seven reviewers would be needed for each case note to achieve a reliability of 80%.
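
As a rough illustration of that arithmetic, the Spearman-Brown prophecy formula shows how pooling reviewers raises reliability. This is an assumption on our part – the Milbank paper’s reliability model is more involved, and a kappa of ~0.2 is a chance-corrected agreement statistic rather than the reliability coefficient the formula takes – but the mechanics are instructive:

```python
def spearman_brown(r, k):
    """Reliability of the pooled judgement of k reviewers,
    given single-reviewer reliability r."""
    return k * r / (1 + (k - 1) * r)

# Back-solving "seven reviewers reach 0.8" under this formula implies a
# single-reviewer reliability of 4/11 (~0.36); reliability then climbs
# only slowly as reviewers are added.
r = 4 / 11
for k in range(1, 9):
    print(k, round(spearman_brown(r, k), 2))  # crosses 0.8 at k = 7
```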

That is to say, for much of medical care there is no agreed standard. Truly, the majority of medical care is more art than science.

We think the time has come to abandon hubristic notions about standardising and quality-assuring the generality of clinical care. Medicine is not like aviation: commercial aviation is almost entirely computerised and, emergencies aside, the whole process can be guided algorithmically. Our paper in Milbank Quarterly shows quite clearly that this is not the case for medicine.[5]

Working with Prof Julian Bion, the ARC WM Director had an opportunity to audit numerous case notes from patients with sepsis.[6] The idea was to assess quality of care against a package of evidence-based criteria, many of which were based on actions that should be carried out within a specified time from diagnosis. The exercise proved almost impossible, since the point of diagnosis was elusive: in most cases there was no clear point at which to start the clock, and the very diagnosis of sepsis had to be reverse-engineered from the time at which a sepsis-associated action took place! This exercise provided eloquent testimony to the judgemental, rather than rules-based, nature of much medical practice. We should use algorithmic decision support where clear rules exist, but we must stop pretending that the whole of medicine can be guided in this way. Perhaps we should just stand back a little and accept some of the imperfections in our systems. Like a harm-free world, perfection will always lie beyond our grasp.[7]

Richard Lilford, ARC WM Director


References:

  1. Austin JM, Kachalia A. The State of Health Care Quality Measurement in the Era of COVID-19. The Importance of Doing Better. JAMA. 2020.
  2. Lilford RJ. Measuring Quality of Care. NIHR CLAHRC West Midlands News Blog. 21 April 2017.
  3. Yao GL, Novielli N, Manaseki-Holland S, et al. Evaluation of a predevelopment service delivery intervention: an application to improve clinical handovers. BMJ Qual Saf. 2012;21:i29-38.
  4. Usher Institute. ePrescribing-Based Antimicrobial Stewardship. 2020.
  5. Manaseki-Holland S, Lilford RJ, Te AP, et al. Ranking Hospitals Based on Preventable Hospital Death Rates: A Systematic Review With Implications for Both Direct Measurement and Indirect Measurement Through Standardized Mortality Rates. Milbank Q. 2019;97(1):228-84. 
  6. Lord JM, Midwinter MJ, Chen YF, et al. The systemic immune response to trauma: an overview of pathophysiology and treatment. Lancet. 2014;384(9952):1455-65.
  7. Meddings J, Saint S, Lilford RJ, Hofer TP. Targeting Zero Harm: A Stretch Goal That Risks Breaking the Spring. NEJM Catal Innov Care Del. 2020; 1(4).