Monthly Archives: November 2020

Science Denial and the Importance of Engaging the Public with Science

A recent paper in JAMA concerning science denial tackles a problem of immense importance.[1] For us scientists, science denial negates our reason for being. Far more important, though, is the effect on society; we need only think of the vaccination fiasco. The JAMA paper used the difficulties that people with certain neurological conditions have in processing information as an analogy for the challenges that people with low scientific literacy face in interpreting complex graphs. Such difficulties leave room for false beliefs, including belief in conspiracy theories. While this analogy might shed light on neural mechanisms, there are far more important determinants of science denial in the population at large. One issue is the effect of education: lack of educational attainment is consistently associated with science denial and with the propensity to believe in conspiracy theories.[2]

Of course, this does not prove that improving science education would solve the problem. It may simply be that whatever causes low educational achievement also causes a predisposition to believe conspiracy theories. For example, low self-esteem or low cognitive ability may be determinants of both low educational attainment and science denial. More likely, education plays a part, and both nature and nurture are involved. In that case, educational achievement conditional on early-life cognitive ability should correlate with resistance to conspiracy theories. We do not know whether this possibility has been examined.

Debunking misinformation with evidence or education is not enough. In responding to COVID-19, behavioural scientists were quick to point out that debunking could even lead to a backlash and increase belief in misinformation. While the evidence on backlash is mixed, alternative approaches are still needed. One alternative is ‘pre-bunking’,[3] which is analogous to medical inoculation: people are exposed to a little misinformation – enough to activate their ability to critique it, but not enough to overwhelm them. Web-based games like ‘Get Bad News’ apply this approach and are used by governments and schools to reduce people’s susceptibility to fake news. Reminding people to assess the accuracy of sources before they engage with information may also help.[4]

Yet education, pre-bunking, and reminders are arguably ‘demand-side’ factors, which largely rely on members of the public selecting into engagement with science – and those who do so may be the very people least likely to denounce it in the first place. Given this, it is incumbent upon policymakers – and academics – to address the ‘supply-side’ factors, too. They must consider how to provide trustworthy, transparent, and accessible information, including to those with lower levels of education or cognitive ability. Sadly, this does not always happen; for example, little effort appears to have been directed towards testing some of the public health messaging about COVID-19 in the UK.[5] Confusing messaging breeds uncertainty, a vacuum easily filled with simple but false information – including false claims dressed up as science. Critiquing conspiracy theorists for their ‘bad science’ is unlikely to be persuasive. Instead, we advocate building trust in rigorous science.

Engaging the public with science is critically important; we can hardly think of a more important issue. Here at ARC West Midlands we take public engagement very seriously and continuously seek opportunities to engage the public with science. In previous news blogs, we tested some of the government’s COVID-19 messaging ourselves,[6][7] and described our plans to use geospatially referenced maps to engage communities where COVID-19 infections are not under control.[8] We are engaging the public in numerous implementation science projects, including one based on mathematical modelling and another on the role of chance in decision-making. In all of these, development of the service and engagement with decision-makers and the public go hand in hand.

Richard Lilford, ARC WM Director; Laura Kudrna, Research Fellow

References:

  1. Miller BL. Science Denial and COVID Conspiracy Theories: Potential Neurological Mechanisms and Possible Responses. JAMA. 2020.
  2. Van Prooijen J-W. Why Education Predicts Decreased Belief in Conspiracy Theories. Appl Cognit Psychol. 2016; 31(1).
  3. Van Bavel JJ, et al. Using social and behavioural science to support COVID-19 pandemic response. Nat Hum Behav. 2020; 4:460-71.
  4. Pennycook G, et al. Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020; 31(7):770-80.
  5. BBC News. Coronavirus: Minister defends ‘stay alert’ advice amid backlash. 10 May 2020.
  6. Kudrna L, Schmidtke KA. Changing the Message to Change the Response – Psychological Framing Effects During COVID-19. NIHR ARC West Midlands News Blog. 2020; 2(7): 7-9.
    See also our London School of Economics and Political Science blog.
  7. Schmidtke KA, Kudrna L. Speaking to Hearts Before Minds: Increasing Influenza Vaccine Uptake During COVID-19. NIHR ARC West Midlands News Blog. 2020; 2(10):9-11.
    See also our London School of Economics and Political Science blog.
  8. Lilford RJ, Watson S, Diggle P. The Land War in the Fight Against COVID-19. NIHR ARC West Midlands News Blog. 2020; 2(10):1-4.

Use of Causal Diagrams to Inform the Analysis of Observational Studies

Observational studies usually involve some sort of multivariable analysis. To make sense of the association between an explanatory variable (E) and an outcome (O), it is necessary to control for confounders – age, for example, in clinical studies. A confounder (C) is a variable that is associated with both E and O; indeed, it is a cause of both E and O, as shown by the direction of the arrows in Figure 1.

Fig 1. Causal Diagram for a Confounder
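
To make this concrete, here is a minimal simulation sketch in Python (using numpy and statsmodels; the variable names and coefficients are ours, invented purely for illustration). Because C drives both E and O, the crude regression of O on E overstates the effect of E; adding C to the model recovers it.

```python
# A confounder C causes both the exposure E and the outcome O.
# The crude E-O association is inflated until C is adjusted for.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100_000

C = rng.normal(size=n)                        # confounder (e.g. age)
E = 0.8 * C + rng.normal(size=n)              # exposure, partly caused by C
O = 0.5 * E + 0.7 * C + rng.normal(size=n)    # true effect of E on O is 0.5

crude = sm.OLS(O, sm.add_constant(E)).fit()
adjusted = sm.OLS(O, sm.add_constant(np.column_stack([E, C]))).fit()

print("crude coefficient for E:   ", round(crude.params[1], 2))     # ~0.84
print("adjusted coefficient for E:", round(adjusted.params[1], 2))  # ~0.50
```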

A common error is to mistake a mediator for a confounder. If a variable lies on the causal pathway between E and O, then it is a mediator – M in Figure 2.

Fig 2. Causal Diagram to Distinguish Between a Confounder (C) and a Mediator (M)

Failure to make this distinction, and hence adjusting for M, will reduce or remove the apparent effect of E on the outcome. In a study of the effect of money spent on tobacco on lung cancer, it would be self-defeating to adjust for smoking! If we are interested in decomposing different causal pathways, then we should adapt the multivariable analysis to examine how much of the effect of E on O is explained by the putative mediator (M in Figure 2) – a structural equation model or ‘mediation’ analysis.
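
The distinction is easy to demonstrate in a simulated set-up like the one above (again, coefficients are invented; this sketch uses the simple difference-in-coefficients view of mediation rather than a full structural equation model):

```python
# M lies on the causal pathway E -> M -> O. Adjusting for M strips
# out the indirect effect: self-defeating if we want the total effect
# of E, but exactly what a mediation analysis sets out to quantify.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100_000

E = rng.normal(size=n)                        # e.g. money spent on tobacco
M = 0.9 * E + rng.normal(size=n)              # e.g. smoking (the mediator)
O = 0.6 * M + 0.1 * E + rng.normal(size=n)    # e.g. lung cancer risk score

total = sm.OLS(O, sm.add_constant(E)).fit()
direct = sm.OLS(O, sm.add_constant(np.column_stack([E, M]))).fit()

print("total effect of E:           ", round(total.params[1], 2))   # ~0.64
print("'direct' effect, adjusting M:", round(direct.params[1], 2))  # ~0.10
print("indirect effect via M:       ",
      round(total.params[1] - direct.params[1], 2))                 # ~0.54
```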

There are some issues to consider:

  1. It may not be possible to say for certain whether a variable is a mediator or a confounder, and some variables may be both. In that case, try the analysis three ways: omit the variable, treat it as a confounder, and treat it as a mediator.
  2. It is hard to know which variables to include as confounders. In one exercise, a single dataset was sent to 29 different teams of analysts,[1] whose results varied wildly – largely because they adjusted for different combinations of variables. The corollary is that the choice of variables should not be left to statisticians alone: it turns on causal theory that distinguishes between variables likely to have arrows pointing from E to O via M, and those pointing from C to both E and O (Figure 2). Context matters!
  3. There may be an interaction between variables, such that the causal effect of one variable on E or O is amplified or attenuated in the presence of another. Four variables, each with four ‘levels’, define 256 (4^4) possible combinations of levels – far too many to test blindly. So, again, theory is needed to determine which variables to include in such interaction tests (see the sketch after this list).
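
As promised above, here is a minimal sketch of a theory-driven interaction test, assuming a hypothetical binary modifier (‘sex’) and invented effect sizes; the formula interface in statsmodels makes the interaction term explicit:

```python
# Does the effect of E on O differ by a second variable? Here the
# effect of E is 0.3 in one group and 0.3 + 0.4 in the other; the
# 'E:sex' term recovers the difference. All names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 50_000
df = pd.DataFrame({
    "E": rng.normal(size=n),
    "sex": rng.integers(0, 2, size=n),   # 0 or 1
})
df["O"] = 0.3 * df["E"] + 0.4 * df["E"] * df["sex"] + rng.normal(size=n)

fit = smf.ols("O ~ E * sex", data=df).fit()
print(fit.params[["E", "E:sex"]].round(2))   # E ~0.30, E:sex ~0.40
```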

A variable may exist that is an independent cause of C or M (let’s call these C* and M*), as in Figure 3. There is no reason to adjust for these variables. Likewise, do not adjust for any variable that ‘precedes’ E, as also shown in Figure 3.

Fig 3. Variables That Cause Change in Other Variables

In this example, C* and M* are not causally linked to O except through C and M respectively. But a situation may occur where such a link is possible. It is well known that maternal smoking is causally linked both to low birth weight and to neonatal death, as per Figure 4. The theory is that smoking is toxic, leading to a small baby and, via that pathway and others, to neonatal death.

Fig 4. Causal Pathway for Smoking and Neonatal Deaths

If the analysis is conducted controlling for ‘small baby’, then smoking is associated with lower mortality – it appears protective. The obvious fault is controlling for a variable on the causal pathway, as per Figure 2. But that fault alone could explain why the association is reduced, not why it is reversed.

The explanation for the reversal lies in a putative third variable (perhaps a ‘genetic’ defect, G) that predisposes to both a small baby and neonatal death (Figure 5). Note that E and G both collide on M; such a scenario produces ‘collider bias’ – by controlling for one source of bias, the door is opened to another. It is well known that any association may conceal unobserved (‘lurking’) confounders. The same applies, of course, to a variable that can completely alter the meaning of an association once one has conditioned on another variable.

Fig 5. Collider Bias
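
The reversal is easy to reproduce in a simulation sketch of Figure 5 (all probabilities invented purely for illustration). Overall, smoking raises mortality; but among low-birth-weight babies, non-smokers’ babies are more likely to carry the defect G, so smoking appears protective:

```python
# Smoking (E) and an unobserved defect (G) both cause low birth
# weight (M); G also strongly raises the risk of death (D), while
# smoking raises it only modestly. Conditioning on the collider M
# reverses the sign of the smoking-mortality association.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

E = rng.binomial(1, 0.3, n)                        # maternal smoking
G = rng.binomial(1, 0.05, n)                       # unobserved 'genetic' defect
M = rng.binomial(1, 0.05 + 0.20 * E + 0.70 * G)    # low birth weight (collider)
D = rng.binomial(1, 0.01 + 0.01 * E + 0.20 * G)    # neonatal death

print("all babies:       smokers %.3f vs non-smokers %.3f"
      % (D[E == 1].mean(), D[E == 0].mean()))      # ~0.030 vs ~0.020
print("low birth weight: smokers %.3f vs non-smokers %.3f"
      % (D[(E == 1) & (M == 1)].mean(),
         D[(E == 0) & (M == 1)].mean()))           # ~0.053 vs ~0.098
```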

These analyses show that conducting a multivariable analysis is not – or rather, should never be – an entirely data-driven, empirical exercise. Choices have to be made, such that the statistical model informs, but does not determine, the causal model. For a brilliant example of extensive causal chains involving confounders, colliders, and mediators, see the paper by Andrew Forbes and colleagues.[2]

Note, we are not arguing against adjustment per se. It is an essential part of the analysis. We argue against adjusting without reference to a causal model.

Richard Lilford, ARC WM Director; Sam Watson, Senior Lecturer [With thanks to Peter Diggle (Lancaster University & Health Data Research UK) for comments.]


References:

  1. Silberzahn R, et al. Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Adv Methods Pract Psychol Sci. 2018; 1(3): 337-56.
  2. Williamson EJ, et al. Introduction to causal diagrams for confounder selection. Respirology. 2014; 19(3): 303-11.