More Fun with Regression:

Confounding, interaction and random effects

The following blog post provides a general overview of some of the terms encountered when carrying out logistic regression and was inspired by attending the extremely informative HealthyR+: Practical logistic regression course at the University of Edinburgh.

  • Confounding
    • What is confounding?
    • Examples
  • Interaction
    • What are interaction effects?
    • Example
    • What happens if we overlook interactions?
    • How do we detect interactions?
    • Terminology
  • Random effects
    • Clustered data
    • Why should we be aware of clustered data?
    • A solution to clustering
    • Terminology
  • Brief summary

Confounding

What is confounding?

Confounding occurs when the association between an explanatory (exposure) and outcome variable is distorted, or confused, because another variable is independently associated with both. 

The timeline of events must also be considered: a variable cannot be described as confounding if it occurs after (and is directly related to) the explanatory variable of interest.  In that case it is instead called a mediating variable, as it lies along the causal pathway between explanatory variable and outcome.

Examples

Potential confounders often encountered in healthcare data include, for example, age, sex, smoking status, BMI, frailty and disease severity.  One way to control for these variables is to include them in regression models.

In the Stanford marshmallow experiment, a potential confounder was left out – economic background – leading to an overestimate of the influence of a child’s willpower on their future life outcomes.

Another example is the alleged link between coffee drinking and lung cancer. More smokers than non-smokers are coffee drinkers, so if smoking is not accounted for in a model examining coffee-drinking habits, the results are likely to be confounded.
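To make the distortion concrete, here is a toy sketch with made-up numbers (the group sizes and risks are invented purely for illustration): smoking drives both coffee drinking and cancer risk, coffee itself does nothing, yet the crude comparison still shows an association.

```python
# Hypothetical population: within each smoking stratum, coffee drinkers
# and non-drinkers share the SAME risk, so coffee has no true effect.
# Each entry: exposure -> (number of people, cancer risk)
strata = {
    "smoker":     {"coffee": (800, 0.30), "no_coffee": (200, 0.30)},
    "non_smoker": {"coffee": (200, 0.05), "no_coffee": (800, 0.05)},
}

def crude_risk(exposure):
    """Risk in an exposure group, ignoring smoking entirely."""
    cases = total = 0
    for stratum in strata.values():
        n, risk = stratum[exposure]
        cases += n * risk
        total += n
    return cases / total

risk_coffee = crude_risk("coffee")        # dominated by smokers
risk_no_coffee = crude_risk("no_coffee")  # dominated by non-smokers
print(round(risk_coffee, 3), round(risk_no_coffee, 3))
# The crude risks differ (0.25 vs 0.10) even though, within each smoking
# stratum, coffee makes no difference at all.
```

Stratifying by (or adjusting for) smoking removes the spurious association, which is exactly what including the confounder in a regression model achieves.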

Interaction

What are interaction effects?

In a previous blog post, we looked at how collinearity describes the relationship between two very similar explanatory variables.  We can think of this as an extreme case of confounding, almost like entering the same variable into our model twice.  An interaction, on the other hand, occurs when the effect of an explanatory variable on the outcome depends on the value of another explanatory variable.

When explanatory variables are dependent on each other to tell the whole story, this can be described as an interaction effect; it is not possible to understand the exact effect that one variable has on the outcome without first knowing information about the other variable. 

The use of the word dependent here is potentially confusing as explanatory variables are often called independent variables, and the outcome variable is often called the dependent variable (see word clouds here). This is one reason why I tend to avoid the use of these terms.

Example

An interesting example of interaction occurs when examining perceptions about climate change and the relationship between political preference and level of education.

We would be missing an important piece of the story concerning attitudes to climate change if we looked at either education or political orientation in isolation.  This is because the two interact: amongst more conservative thinkers, perception of the threat of global warming decreases as level of education increases, whereas amongst liberal thinkers it increases with education.

Here is a link to the New York Times article on this story: https://www.nytimes.com/interactive/2017/11/14/upshot/climate-change-by-education.html
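The pattern can be sketched numerically. The coefficients below are invented purely for illustration; the point is that the education slope changes sign depending on the value of the other variable.

```python
# Hypothetical linear predictor with an interaction term between
# education and political orientation (conservative = 1, liberal = 0).
# All coefficients are made up for this sketch.

def perceived_threat(education, conservative):
    return (2.0
            + 0.5 * education                    # main effect of education
            - 1.0 * conservative                 # main effect of orientation
            - 0.8 * education * conservative)    # interaction term

# Slope of education within each group (change per unit of education):
slope_liberal = perceived_threat(1, 0) - perceived_threat(0, 0)
slope_conservative = perceived_threat(1, 1) - perceived_threat(0, 1)
print(slope_liberal, slope_conservative)
# The education slope is positive for liberals but negative for
# conservatives: neither variable tells the whole story on its own.
```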

What happens if we overlook interactions?

If interaction effects are not considered, the output of the model might lead the investigator to the wrong conclusions. For instance, if each explanatory variable were plotted in isolation against the outcome variable, important information about the interaction between variables might be lost, and only the main effects would be apparent.

On the other hand, if many variables are entered into a model together, without first exploring the nature of potential interactions, unknown interaction effects may mask true associations between the variables.  This masking is sometimes referred to as confounding bias.

How do we detect interactions?

The best way to start exploring interactions is to plot the variables; trends are far more apparent when visualised in a graph.

If the effect of one exposure variable on the outcome is the same at every level of the other, then we might visualise this as a graph with two parallel lines.  Another way of describing this is additive effect modification.

Two explanatory variables (x1 and x2) are not dependent on each other to explain the outcome.

But if the effect of the exposure variables on the outcome is not constant, then the lines will diverge. We can describe this as multiplicative effect modification.

Two explanatory variables (x1 and x2) are dependent on each other to explain the outcome.

Once a potential interaction has been identified, the next step is to test whether it is statistically significant.

Terminology

Some degree of ambiguity exists surrounding the terminology of interactions (and statistical terms in general!), but here are a few commonly encountered terms, often used synonymously. 

  • Interaction
  • Causal interaction
  • Effect modification
  • Effect heterogeneity

There are subtle differences between interaction and effect modification.  You can find out more in this article: On the distinction between interaction and effect modification.

Random effects

Clustered data

Many methods of statistical analysis assume that, within a data-set, each observation is independent of the others: the value of one observation is not influenced by the value of another.

This may not be the case, however, if you are using data from, for example, various hospitals, where natural clustering or grouping might occur.  This happens when observations within an individual hospital tend to be more similar to each other than to observations in the rest of the data-set.

Random effects modelling is used if the groups of clustered data can be considered as samples from a larger population.

Why should we be aware of clustered data?

Gathering insight into the exact nature of differences between groups may or may not be important to your analysis, but it is important to account for patterns of clustering, because otherwise measures such as standard errors, confidence intervals and p-values will tend to be too small or narrow, giving a false impression of precision.  Random effects modelling is one approach which can account for this.

A solution to clustering

The random effects model assumes that having allowed for the random effects of the various clusters or groups, the observations within each individual cluster are still independent.  You can think of it as multiple levels of analysis – first there are the individual observations, and these are then nested within observations at a cluster level, hence an alternative name for this type of modelling is multilevel modelling.
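A small simulation (with arbitrary parameters, not real hospital data) shows why ignoring the clustering flatters the standard error: treating all observations as independent gives a much smaller SE than one based on the hospital-level means.

```python
import math
import random
import statistics

random.seed(1)

# Simulate clustered data: 10 hospitals, 30 patients each.  Each hospital
# has its own baseline (the cluster effect), so patients within a hospital
# are more alike than patients across hospitals.
hospitals = []
for _ in range(10):
    hospital_effect = random.gauss(0, 2)   # shared by everyone in the hospital
    hospitals.append([hospital_effect + random.gauss(0, 1) for _ in range(30)])

all_obs = [x for h in hospitals for x in h]

# Naive SE: pretend all 300 observations are independent
naive_se = statistics.stdev(all_obs) / math.sqrt(len(all_obs))

# Cluster-aware SE: based on the 10 hospital means (the higher level)
cluster_means = [statistics.mean(h) for h in hospitals]
cluster_se = statistics.stdev(cluster_means) / math.sqrt(len(cluster_means))

print(round(naive_se, 3), round(cluster_se, 3))
# The naive SE is markedly smaller: ignoring the clustering overstates
# the precision of the estimated mean.
```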

Terminology

There are various terms which are used when referring to random effects modelling, although the terms are not entirely synonymous. Here are a few of them:

  • Random effects
  • Multilevel
  • Mixed-effect
  • Hierarchical

There are two main types of random effects models:

  • Random intercept model: constrains the fitted lines to be parallel
  • Random slope and intercept model: does not constrain the lines to be parallel

Brief summary

To finish, here is a quick look at some of the key differences between confounding and interaction.

If you would like to learn more about these terms and how to carry out logistic regression in R, keep an eye on the HealthyR page for updates on courses available.

Fun with Regression

“All models are wrong, but some are useful”

George Box

This quote by statistician George Box feels like a good starting point from which to consider some of the challenges of regression modelling.  If we start with the idea that all models are wrong, it follows that one of the main skills in carrying out regression modelling is working out where the weaknesses are and how to minimise these to produce as close an approximation as possible to the data you are working with – to make the model useful.

The idea that producing high-quality regression models is often more of an art than a science appeals to me.  Understanding the underlying data, what you want to explore, and the tools you have at hand are essential parts of this process.

After attending the excellent HealthyR+: Practical Logistic Regression course a few weeks ago, my head was buzzing with probabilities, odds ratios and confounding.  It was not just the data which was confounded.  As someone fairly new to logistic regression, I thought it might be useful to jot down some of the areas I found particularly interesting and concepts which made me want to find out more.  In this first blog post we take a brief look at:

  • Probability and odds
    • The difference between probability and odds
    • Why use log(odds) and not just odds?
    • Famous probability problems
  • Collinearity and correlation
    • What is collinearity?
    • How do we detect collinearity?
    • Is collinearity a problem?

Probability and odds

The difference between probability and odds

Odds and probability are both measures of how likely it is that a certain outcome will occur in a series of events.  Probability is perhaps more intuitive to understand, but its properties make it less useful in statistical models, and so odds, odds ratios and log(odds) are used instead; more on this in the next section.

Interestingly, when the probability of an event occurring is small – below 0.1 (or less than 10%) – the odds and the probability are quite similar.  As the probability increases, however, the odds increase at a greater rate, as the following figure shows:

Here we can also see that whilst probabilities range from 0 to 1, odds can take on any value between 0 and infinity.
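The relationship is easy to check directly, since odds = p / (1 − p):

```python
# For small probabilities, odds and probability are nearly equal; as p
# grows, the odds race ahead and are unbounded above.

def odds(p):
    return p / (1 - p)

for p in [0.05, 0.1, 0.5, 0.9]:
    print(f"p = {p:<4}  odds = {odds(p):.3f}")
# p = 0.05 gives odds of about 0.053 (very close to p), while p = 0.9
# gives odds of 9.
```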

Why use log(odds) and not just odds?

Asymmetry of the odds scale makes it difficult to compare binary outcomes, but by using log(odds) we can produce a symmetrical scale, see figure below:

In logistic regression, the odds ratio for a particular variable represents the multiplicative change in the odds for each one-unit increase in that variable, whilst holding all other variables constant.
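Both properties are easy to verify numerically; the coefficient value below is hypothetical, chosen only to illustrate the interpretation:

```python
import math

def logit(p):
    """log(odds): symmetric about p = 0.5, so swapping an outcome for
    its complement simply flips the sign."""
    return math.log(p / (1 - p))

print(round(logit(0.8), 3), round(logit(0.2), 3))  # same magnitude, opposite sign

# Exponentiating a logistic regression coefficient gives the odds ratio
# per one-unit increase (a made-up coefficient of 0.7 shown here):
coef = 0.7
odds_ratio = math.exp(coef)
print(round(odds_ratio, 2))  # ~2: each unit roughly doubles the odds
```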

Famous probability problems

I find probability problems fascinating, particularly those which seem counter-intuitive. Below are links to explanations of two intriguing probability problems:

Collinearity and correlation

What is collinearity?

The term collinearity (also referred to as multicollinearity) is used to describe a high correlation between two explanatory variables.  This can cause problems in regression modelling because the explanatory variables are assumed to be independent (and indeed are sometimes called independent variables, see word clouds below). 

Including collinear (highly correlated) variables in a regression model can create the false impression, for example, that neither variable is associated with the outcome, when in fact each variable individually has a strong association.  The figure below might help to visualise the relationships between the variables:

In this image, y represents the outcome variable, and x1 and x2 are the highly correlated, collinear explanatory variables.  As you can see, there is a large area of (light grey) overlap between the x variables, whereas there are only two very small areas where each x variable overlaps with y independently.  These small areas represent the limited information available to the regression model when carrying out the analysis.

How do we detect collinearity?

A regression coefficient can be thought of as the rate of change, or as the slope of the regression line.  The slope describes the mean change in the outcome variable for every unit of change in the explanatory variable.  It is important to note that regression coefficients are calculated based on the assumption that all other variables (apart from the variables of interest) are kept constant. 

When two variables are highly correlated, this creates problems. The model tries to predict the outcome but struggles to disentangle the influence of each explanatory variable because of their strong correlation. As a result, coefficient estimates may change erratically in response to small changes in the model.
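A quick way to screen for collinearity is simply to compute the pairwise correlation, from which a two-variable variance inflation factor (VIF = 1 / (1 − r²)) follows. The data below are made up so that x2 is close to a rescaled copy of x1:

```python
import math
import statistics

# Made-up measurements: x2 is roughly 2 * x1 plus a little noise, so the
# two explanatory variables are collinear.
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 16.2]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

r = pearson(x1, x2)
vif = 1 / (1 - r ** 2)   # variance inflation factor for a two-variable model

print(round(r, 4), round(vif, 1))
# Common rules of thumb flag trouble when |r| > 0.8 or VIF > 5-10;
# here both are far beyond those thresholds.
```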

Various terms are used to describe these x and y variables depending on context.  There are slight differences in the meanings, but here are a few terms that you might encounter:

The information I used to generate these word clouds was based on a crude estimate of the number of mentions in Google Scholar within the context of medical statistics.

Is collinearity a problem?

Collinearity is a problem if the purpose of your analysis is to explain the individual effect of each explanatory variable; however, it has little effect on the overall predictive properties of your model.  In other words, the model will still provide accurate predictions based on all the variables as one big bundle, but it will not be able to tell you much about the isolated contribution of each collinear variable.

If you are concerned with exploring the effects of specific variables and you encounter collinearity, there are two main approaches you can take:

  • Drop one of the variables if it is not vital to your analysis
  • Combine the variables (e.g. weight and height can be combined to produce BMI)
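The second approach, combining weight and height into BMI, is a one-liner (the values below are illustrative):

```python
# Combining two collinear variables into one derived variable:
# BMI = weight (kg) divided by height (m) squared.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))   # 22.9
```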

An example of a publication where missed collinearity led to potentially erroneous conclusions, concerns analyses carried out on data relating to the World Trade Organisation (WTO). Here is a related article which attempts to unpick some of the problems with previous WTO research.

Finishing on an example of a problematic attempt at regression analysis may seem slightly gloomy, but on the contrary, I hope it provides some comfort if your own analysis throws up challenges or problems – you are in good company!  It also brings us back to the quote by George Box at the beginning of this blog post, where we started with the premise that all models are wrong.  They are at best a close approximation, and we must always be alert to their weaknesses.

What next?

Look out for the next HealthyR+: Practical Logistic Regression course and sign up.  What areas of medical statistics do you find fun, puzzling, tricky, surprising? Let us know below.