R: filtering with NA values

This post was originally published here

NA – Not Available / Not Applicable – is R’s way of denoting empty or missing values. When making comparisons – such as equal to, greater than, etc. – extra care and thought need to go into how missing values (NAs) are handled. More explanation can be found in Chapter 2: R basics of our book, which is freely available at the HealthyR website.
This post lists a couple of different ways of keeping or discarding rows based on how important the variables with missing values are to you.
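
As a minimal sketch of the two usual options, assuming the colon_s example dataset from the finalfit package (any data frame with missing values will do):

library(finalfit)
library(dplyr)

# Discard rows where nodes is missing:
colon_s %>%
  filter(!is.na(nodes))

# Keep rows where nodes is greater than 4, and also keep the NAs -
# note that filter(nodes > 4) alone silently drops rows where nodes is NA:
colon_s %>%
  filter(nodes > 4 | is.na(nodes))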

Using codepen.io and Google Cloud to build a handy risk calculator.

If you’ve been watching the news or Twitter over the past week, you may have seen the appendicitis-related headlines about unnecessary operations being performed. The RIFT Collaborative and Dmitri Nepogodiev have really spearheaded some cool work looking at who gets unnecessary operations; the articles below are all well worth a read:

Original article:

https://bjssjournals.onlinelibrary.wiley.com/doi/10.1002/bjs.11440

(Selected news coverage):

https://www.theguardian.com/society/2019/dec/04/unnecessary-appendix-surgery-performed-on-thousands-in-uk

https://www.dailymail.co.uk/health/article-7750707/Thousands-young-British-women-needless-operations-remove-appendix.html

https://www.independent.co.uk/life-style/health-and-families/women-appendix-surgery-appendicitis-study-a9232146.html

So, when Dmitri asked if I could develop a web application for risk scoring to help identify those at low risk of appendicitis, I was very excited.

Having quite often used risk calculators in clinical practice, I started to write a list of what makes a good calculator and how to make one that can be used effectively. The most important were:

  • Easy to use
  • Works on any platform (as NHS IT has a wide variety of browsers!) and on mobile (some hospitals have great Wi-Fi through eduroam)
  • Can be quickly updated
  • Looks good and gives an intuitive result
  • Lightweight, requiring minimal processing power, so many users can use it simultaneously

Now, we use a lot of R in surgical informatics, but Shiny wasn’t going to be the one for this, as it’s not that mobile friendly and doesn’t necessarily work that smoothly on every browser (sorry, Shiny!). Similarly, the computational footprint required to run Shiny is too heavy for this. So, using codepen.io and a pug HTML compiler, I wrote a mobile-friendly website (there are still a couple of tweaks I’d like to make before it is entirely mobile friendly!).

I also get asked: why not an app? Well, app development requires developing for multiple platforms (Apple, Android, Blackberry) and apps can’t be used on those pesky NHS PCs. Furthermore, if something goes out of date or needs to be updated quickly, repairing it can take ages as updates sometimes have to be vetted by app stores etc.

My codepen.io for the calculator:

Codepen.io is a great development tool and allows you to combine and get inspired by other people’s work too!

I then set up a micro instance on Google Cloud, installed the pug compiler and apache2, selected a fixed IP and opened the HTTP port to the world – all done! (The set-up is a little more involved than this, but it was straightforward!) The micro instance is very, very cheap, so it’s not expensive to run. The Birmingham crew then bought a lovely domain, appy-risk.org, for me to attach it to.

Here’s the obligatory increase in CPU usage since publication (slightly higher, but as you can tell, it’s quite light):

More Fun with Regression:

Confounding, interaction and random effects

The following blog post provides a general overview of some of the terms encountered when carrying out logistic regression and was inspired by attending the extremely informative HealthyR+: Practical logistic regression course at the University of Edinburgh.

  • Confounding
    • What is confounding?
    • Examples
  • Interaction
    • What are interaction effects?
    • Example
    • What happens if we overlook interactions?
    • How do we detect interactions?
    • Terminology
  • Random effects
    • Clustered data
    • Why should we be aware of clustered data?
    • A solution to clustering
    • Terminology
  • Brief summary

Confounding

What is confounding?

Confounding occurs when the association between an explanatory (exposure) and outcome variable is distorted, or confused, because another variable is independently associated with both. 

The timeline of events must also be considered, because a variable cannot be described as confounding if it occurs after (and is directly related to) the explanatory variable of interest.  Instead it is sometimes called a mediating variable as it is located along the causal pathway, between explanatory and outcome.

Examples

Potential confounders often encountered in healthcare data include, for example, age, sex, smoking status, BMI, frailty and disease severity.  One way these variables can be controlled for is by including them in regression models. 

In the Stanford marshmallow experiment, a potential confounder was left out – economic background – leading to an overestimate of the influence of a child’s willpower on their future life outcomes.

Another example includes the alleged link between coffee drinking and lung cancer. More smokers than non-smokers are coffee drinkers, so if smoking is not accounted for in a model examining coffee drinking habits, the results are likely to be confounded.
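
A sketch of the usual fix, including the confounder as a covariate (my_data, coffee, smoking and lung_cancer are illustrative placeholders, not a real dataset):

# Unadjusted - likely confounded by smoking:
glm(lung_cancer ~ coffee, data = my_data, family = binomial)

# Adjusted - with smoking included, the coffee coefficient now
# represents the association at a fixed level of smoking:
glm(lung_cancer ~ coffee + smoking, data = my_data, family = binomial)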

Interaction

What are interaction effects?

In a previous blog post, we looked at how collinearity is used to describe the relationship between two very similar explanatory variables.  We can think of this as an extreme case of confounding, almost like entering the same variable into our model twice.  An interaction, on the other hand, occurs when the effect of an explanatory variable on the outcome depends on the value of another explanatory variable. 

When explanatory variables are dependent on each other to tell the whole story, this can be described as an interaction effect; it is not possible to understand the exact effect that one variable has on the outcome without first knowing information about the other variable. 

The use of the word dependent here is potentially confusing as explanatory variables are often called independent variables, and the outcome variable is often called the dependent variable (see word clouds here). This is one reason why I tend to avoid the use of these terms.

Example

An interesting example of interaction occurs when examining our perceptions about climate change and the relationship between political preference and level of education. 

We would be missing an important piece of the story concerning attitudes to climate change if we looked in isolation at either education or political orientation.  This is because the two interact; as level of education increases amongst more conservative thinkers, perception about the threat of global warming decreases, but for liberal thinkers as the level of education increases, so too does the perception about the threat of global warming. 

Here is a link to the New York Times article on this story: https://www.nytimes.com/interactive/2017/11/14/upshot/climate-change-by-education.html

What happens if we overlook interactions?

If interaction effects are not considered, then the output of the model might lead the investigator to the wrong conclusions. For instance, if each explanatory variable were plotted in isolation against the outcome variable, important information about the interaction between variables might be lost; only main effects would be apparent.

On the other hand, if many variables are used in a model together, without first exploring the nature of potential interactions, it might be the case that unknown interaction effects are masking true associations between the variables.  This is known as confounding bias.

How do we detect interactions?

The best way to start exploring interactions is to plot the variables. Trends are more apparent when we use graphs to visualise these.

If the effect of two exposure variables on an outcome variable is constant (the effect of one does not change across levels of the other), then we might visualise this as a graph with two parallel lines.  Another way of describing this is additive effect modification.

Two explanatory variables (x1 and x2) are not dependent on each other to explain the outcome.

But if the effect of the exposure variables on the outcome is not constant then the lines will diverge. We can describe this as multiplicative effect modification.

Two explanatory variables (x1 and x2) are dependent on each other to explain the outcome.

Once a potential interaction has been identified, the next step would be to explore whether the interaction is statistically significant or not.
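
A sketch of both steps, with illustrative names (my_data, y and x1 are placeholders, and x2 a grouping factor):

library(ggplot2)

# Visualise: roughly parallel lines suggest no interaction,
# diverging lines suggest one may be present:
ggplot(my_data, aes(x = x1, y = y, colour = x2)) +
  geom_point() +
  geom_smooth(method = "lm")

# Then compare models fitted with and without the interaction term:
fit_main = lm(y ~ x1 + x2, data = my_data)
fit_int  = lm(y ~ x1 * x2, data = my_data)
anova(fit_main, fit_int)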

Terminology

Some degree of ambiguity exists surrounding the terminology of interactions (and statistical terms in general!), but here are a few commonly encountered terms, often used synonymously. 

  • Interaction
  • Causal interaction
  • Effect modification
  • Effect heterogeneity

There are subtle differences between interaction and effect modification.  You can find out more in this article: On the distinction between interaction and effect modification.

Random effects

Clustered data

Many methods of statistical analysis are intended to be applied with the assumption that, within a data-set, an individual observation is not influenced by the value of another observation: it is assumed that all observations are independent of one another. 

This may not be the case, however, if you are using data from, for example, various hospitals, where natural clustering or grouping might occur.  This happens if observations within individual hospitals have a slight tendency to be more similar to each other than to observations in the rest of the data-set.

Random effects modelling is used if the groups of clustered data can be considered as samples from a larger population.

Why should we be aware of clustered data?

Gathering insight into the exact nature of differences between groups may or may not be important to your analysis, but it is important to account for patterns of clustering because otherwise measures such as standard errors, confidence intervals and p-values may appear to be too small or narrow.  Random effects modelling is one approach which can account for this.

A solution to clustering

The random effects model assumes that having allowed for the random effects of the various clusters or groups, the observations within each individual cluster are still independent.  You can think of it as multiple levels of analysis – first there are the individual observations, and these are then nested within observations at a cluster level, hence an alternative name for this type of modelling is multilevel modelling.

Terminology

There are various terms which are used when referring to random effects modelling, although the terms are not entirely synonymous. Here are a few of them:

  • Random effects
  • Multilevel
  • Mixed-effect
  • Hierarchical

There are two main types of random effects models (sketched in code below):

  • Random intercept model – constrains lines to be parallel
  • Random slope and intercept model – does not constrain lines to be parallel
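
A minimal sketch of both, assuming the lme4 package (the post itself does not name one) and illustrative names, with patients clustered within hospitals:

library(lme4)

# Random intercept: each hospital gets its own baseline,
# but the effect of x is assumed to be the same in every hospital:
fit_ri = glmer(outcome ~ x + (1 | hospital), data = my_data, family = binomial)

# Random slope and intercept: the effect of x may also vary by hospital:
fit_rs = glmer(outcome ~ x + (x | hospital), data = my_data, family = binomial)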

Brief summary

To finish, here is a quick look at some of the key differences between confounding and interaction.

If you would like to learn more about these terms and how to carry out logistic regression in R, keep an eye on the HealthyR page for updates on courses available.

RStudio Server LAN party: Laptop+Router+Docker to serve RStudio offline

This post was originally published here

TLDR: You can teach R on people’s own laptops without having them install anything or require an internet connection.

Members of the Surgical Informatics team in Ghana, 2019. More information: surgicalinformatics.org


Introduction

Running R programming courses on people’s own laptops is a pain, especially as we use a lot of very useful extensions that actually make learning and using R much easier and more fun. But long installation instructions can be very off-putting for complete beginners, and people can be discouraged from learning programming if installation hurdles invoke their imposter syndrome.

We almost always run our courses in places with a good internet connection (it does not have to be super fast or flawless), so we get our students all set up on RStudio Server (hosted by us) or https://rstudio.cloud (a free service provided by RStudio!).
You connect to either of these options using a web browser, and even very old computers can handle this. That’s because the actual computations happen on the server and not on the student’s computer. So the computer just serves as a window to the training instance used.

Now, these options work really well as long as you have a stable internet connection. But for teaching R offline and on people’s own laptops, you either have to:

  1. make sure everyone installs everything correctly before they attend the course,
  2. download all the software and extensions, put them on USB sticks and try to install them together at the start, or
  3. start serving RStudio from your own computer using a Local Area Network (LAN) created by a router.

Now, we already discussed why the first option is problematic (it acts as a gatekeeper for complete beginners). The second option – installing everything at the start together – means that you start the course with the most boring part. And since everyone’s computers are different (both in operating systems and in versions of those operating systems), this can take quite a while to sort. Therefore, cue option 3 – an RStudio Server LAN party.

Requirements

  1. A computer with more than 4GB of RAM. macOS alone uses around 2-3GB just to keep going, and running the RStudio Server docker container was using another 3-4 GB, so you’ll definitely need more than 4GB in total.
  2. A network router. For a small number of participants, the same one you already have at home will work. Had to specify “network” here, as apparently, even my Google search for “router” suggests the power tool before network routers.
  3. Docker – free software, dead easy to install on macOS (search the internet for “download Docker”). Looks like installation on the Windows Home operating system might be trickier. If you are a Windows Home user who is using Docker, please do post a link to your favourite instructions in the comments below.
  4. Internet connection for setting up – to download RStudio’s docker image and install your extra packages.

My MacBook Pro serving RStudio to 10 other computers in Ghana, November 2019.

Set-up

Running RStudio using Docker is so simple you won’t believe me. It honestly is just a single-liner to be entered into your Terminal (Command Prompt on Windows):

docker run -d -p 8787:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse 

This will automatically download a Docker image put together by RStudio. The one called verse includes all the tidyverse packages as well as publishing-related ones (R Markdown, Shiny, etc.). You can find a list of the different ones here: https://github.com/rocker-org/rocker

Then open a browser and go to localhost:8787 and you should be greeted with an RStudio Server login! (Localhost only works on a Mac or Linux; if using Windows, take a note of your IP address and use that instead of localhost.) More information and instructions can be found here: https://github.com/rocker-org/rocker/wiki/Using-the-RStudio-image

Tip: RStudio suggests port 8787, which is what I used for consistency, but if you serve it on port 80 (i.e. -p 80:8787) you can omit the port from the address altogether, as 80 is the browser’s default. So you can just go to localhost (or your IP address if using Windows).

For those of you who have never seen or used RStudio Server, this is what it looks like:

RStudio Server is almost identical to RStudio Desktop. The main difference is the “Upload” button in the Files pane. This one is running in a Docker container, served at port 8787, and accessed using Safari (but any web browser will work).


The Docker single-liner above will create a single user with sudo rights (since I’ve included -e ROOT=TRUE). After logging into the instance, you can then add other users and copy the course materials to everyone using these scripts: https://github.com/einarpius/create_rstudio_users Note that the instance is running Debian, so you’ll need very basic familiarity with managing file permissions on the command line. For example, you’ll need to make the scripts executable with chmod 700 create_users.sh.

Then connect to the same router you’ll be using for your LAN party, go to the router settings and assign yourself a fixed IP address, e.g., 192.168.1.78. Once other people connect to the network created by this router (either by WiFi or cable), they need to type 192.168.1.78:8787 into any browser and can just start using RStudio. This will work as long as your computer is running Docker and you are all connected to the same router.

I had 10 people connected to my laptop and, most of the time, the strain on my CPU was negligible – around 10-20%. That’s because it was a course for complete beginners and they were mostly reading the instructions (included in the training Notebooks they were running R code in). So they weren’t actually hitting Run at the same time, and the tasks weren’t computationally heavy. When we did ask everyone to hit the “Knit to PDF” button all at the same time, it got a bit slower and my CPU was apparently working at 200%. But nothing crashed and everyone got their PDFs made.

Why are you calling it a LAN party?


My friends and I having a LAN party in Estonia, 2010. We would mostly play StarCraft or Civilization, or as pictured here – racing games to wind down at the end.

LAN stands for Local Area Network and in most cases means “devices connected to the same WiFi*”. You’ve probably used LANs lots in your life without even realising. One common example is printers: you know when a printer asks you to connect to the same network to be able to print your files? This usually means your computer and the printer will be in a LAN. If your printer accepted files via any internet connection, rather than just the same local network, then people around the world could submit stuff to your printer. Furthermore, if you have any smart devices in your home, they’ll be having a constant LAN party with each other.

The term “LAN party” means people coming together to play multiplayer computer games – as it will allow people to play in the same “world”, to either build things together or fight with each other. Good internet access has made LAN parties practically obsolete – people and their computers no longer have to physically be in the same location to play multiplayer games together. I use the term very loosely to refer to anything fun happening on the same network. And being able to use RStudio is definitely a party in my books anyway.

But for security reasons (e.g., the printer example), or for sharing resources in places without an excellent internet connection, LAN parties are still very much relevant.

* Overall, most existing LANs operate via Ethernet cables (or “internet cables” as most people, including myself, refer to them). WiFi LAN or WLAN is a type of LAN. Have a look at your home router: it will probably have different lights for “internet” and “WLAN”/“wireless”. A LAN can also be connected to the internet – if the router itself is connected to the internet. That’s the main purpose of a router – to take the internet coming into your house via a single Ethernet cable and share it with all your other devices. A LAN is usually just a nice side-effect of that.

Docker, containers, images

Docker image – a file bundling an operating system + programs and files
Docker container – a running image (it may be paused or stopped)

List of all your containers: docker ps -a (just docker ps will list running containers, so the ones not stopped or paused)

List your images: docker images

Run a container using an image:

docker run -d -p 8787:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse 

When you run rstudio/verse for the first time it will be downloaded into your images. The next time it will be taken directly from there, rather than downloaded. So you’ll only need internet access once.

Stop an active container: docker stop container-name

Start it up again: docker start container-name

Save a container as an image (for versioning or passing on to other people):

docker commit container-name repository:tag

For example: docker commit rstudio-server rstudio/riinu:test1

Rename container (by default it will get a random label, I’d change it to rstudio-server):

docker rename happy_hippo rstudio-server

You can then start your container with: docker start rstudio-server

HealthyR Ghana! Quick summary

These past two days have been a new frontier for the HealthyR course, taking the number of continents we’ve run it in up to 2. After the NIHR Unit on Global Surgery meeting, we travelled to Tamale, Ghana’s third largest city. The Wellcome Trust have kindly funded the development of the innovative, open-source HealthyR Notebooks course. Spearheaded by Dr Riinu Ots, this course aims to provide an easy way for anyone in the world to learn R. This is particularly powerful where resources are limited and there are plenty of questions that need to be answered.

Enter Stephen Tabiri, professor of surgery at the University for Development Studies in Tamale. Stephen is a surgeon and has a large team of junior surgeons in training, nurses and other clinicians.

In an innovative twist, the course was held on a mix of laptops, from the data centre and on delegates’ own machines. Riinu had a brilliant solution that served an offline RStudio instance to delegates’ computers.

Day 1 quickly introduced some key concepts to the delegates, who quickly worked through the materials! After lunch a global surgery showcase event was held, demonstrating the wide range of tools available to analyse data in R!

Day 2 kicked off nicely, completing the basics session and then going straight into everyone’s favourite session – plotting! Here there were a lot of pleased delegates as they made complicated and colourful ggplots! People were making a lot of progress, in what can sometimes be a challenging language to learn!

We finally closed on a logistic regression session delivered by Ewen Harrison, where people built their own models!

Throughout the course there were numerous people bringing laptops to have RStudio installed on their own machines. A very enthusiastic and keen bunch of data scientists!

Excitingly, members of the Ghana R community also attended, to offer support and discuss how best to provide a sustainable future for data science in Ghana.

Touch Down In Tamale!

The Surgical Informatics team arrive in Tamale, Ghana for the next HealthyR Notebooks course

The Surgical Informatics group is delighted to be visiting Tamale in Ghana to deliver our flagship HealthyR Notebooks course as part of our Wellcome Trust grant, ‘HealthyR Notebooks: Democratising open and reproducible data analysis in resource-poor environments’.

We’re being made extremely welcome by our hosts Professor Stephen Tabiri and Benard Ofori Appiah from the NIHR Global Health Research Unit on Global Surgery hub in Ghana.

Over the next few days we’ll be establishing a data centre in Ghana with the provision of 15 laptops and training 20 local delegates to use R for healthcare data analysis. This will build capacity for future data driven research in partnership with the NIHR Global Surgery Unit in Ghana.

Do you speak rlang?

Something for the more advanced R user! We’ll be back to our more exciting programming shortly (I hope!).

rlang? I already speak R

Quite right. rlang is part of the tidyverse side of things, so is probably more useful if you’re an advanced R user. It’s certainly not for the faint-hearted and needs a comprehensive understanding of how R ‘sees’ the code you write.

rlang is a low-level programming API for R which the tidyverse uses (meaning it speaks to R in as R-like a way as possible, rather than at a ‘high level’ – high-level being more user-orientated and interpretable). It enables you to extend what the tidyverse can do and adapt it for your own uses. It’s particularly good to use if you’re doing lots of more ‘programming’-type R work, for example building a package, making a complex Shiny app or writing functions. It might also be handy if you’re doing lots of big data manipulation and want to manipulate different datasets in the same way, for example.

Here’s an example of dynamically naming variables

In this example, say we have a tibble of variables, but we want to apply dynamic changes to it (so we feed R a variable that can change, either using another function like purrr::map or in a Shiny app). In this instance, specifying each variable and each different possible consequence using different logical functions would take forever and be very clunky. So we can use rlang to simply pass a dynamic variable/object through the same function.

We make use of the curly-curly operator ({{ }}) too, which allows us to avoid the bulkier enquo() and !! syntax.
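
A minimal sketch of the idea, with illustrative names (summarise_by and its arguments are made up for this example, not part of any package):

library(dplyr)

# Summarise any column of any data frame, grouped by any other column,
# with the result column named dynamically after the input column:
summarise_by = function(data, group_var, value_var){
  data %>%
    group_by({{ group_var }}) %>%
    summarise("mean_{{ value_var }}" := mean({{ value_var }}, na.rm = TRUE))
}

# Usage, e.g. with the built-in mtcars data:
mtcars %>%
  summarise_by(cyl, mpg)
# returns a tibble with columns cyl and mean_mpg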

JAMA retraction after miscoding – new Finalfit function to check recoding

This post was originally published here

Riinu and I are sitting in Frankfurt airport discussing the paper retracted in JAMA this week.

During analysis, the treatment variable coded [1,2] was recoded in error to [1,0]. The results of the analysis were therefore reversed. The lung-disease self-management program actually resulted in more attendances at hospital, rather than fewer as had been originally reported.  

Recode check

Checking of recoding is such an important part of data cleaning – we emphasise this a lot in HealthyR courses – but of course mistakes happen.

Our standard approach is this:

library(finalfit)
colon_s %>%
  mutate(
    sex.factor2 = forcats::fct_recode(sex.factor,
      "F" = "Male",
      "M" = "Female")
  ) %>%
  count(sex.factor, sex.factor2)
# A tibble: 2 x 3
  sex.factor sex.factor2     n
  <fct>      <fct>       <int>
1 Female     M             445
2 Male       F             484

The miscode should be obvious.

check_recode()

However, mistakes may still happen and be missed. So we’ve bashed out a useful function that can be applied to your whole dataset. This is not to replace careful checking, but may catch something that has been missed. 

The function takes a data frame or tibble and fuzzy matches variable names. It produces crosstables similar to above for all matched variables. 

So if you have coded something from sex to sex.factor it will be matched. The match is hungry so it is more likely to match unrelated variables than to miss similar variables. But if you recode death to mortality it won’t be matched. 

Here’s a walk through.

# Install
devtools::install_github('ewenharrison/finalfit')
library(finalfit)
library(dplyr)
# Recode example
colon_s_small = colon_s %>%
  select(-id, -rx, -rx.factor) %>%
  mutate(
    age.factor2 = forcats::fct_collapse(age.factor,
      "<60 years" = c("<40 years", "40-59 years")),
    sex.factor2 = forcats::fct_recode(sex.factor,
    # Intentional miscode
      "F" = "Male",
      "M" = "Female")
  )
# Check
colon_s_small %>%
  check_recode()
$index
# A tibble: 3 x 2
  var1        var2       
  <chr>       <chr>      
1 sex.factor  sex.factor2
2 age.factor  age.factor2
3 sex.factor2 age.factor2
$counts
$counts[[1]]
# A tibble: 2 x 3
  sex.factor sex.factor2     n
  <fct>      <fct>       <int>
1 Female     M             445
2 Male       F             484
$counts[[2]]
# A tibble: 3 x 3
  age.factor  age.factor2     n
  <fct>       <fct>       <int>
1 <40 years   <60 years      70
2 40-59 years <60 years     344
3 60+ years   60+ years     515
$counts[[3]]
# A tibble: 4 x 3
  sex.factor2 age.factor2     n
  <fct>       <fct>       <int>
1 M           <60 years     204
2 M           60+ years     241
3 F           <60 years     210
4 F           60+ years     274

As can be seen, the output takes the form of a list length 2. The first is an index of matched variables. The second is crosstables as tibbles for each variable combination. sex.factor2 can be seen as being miscoded. sex.factor2 and age.factor2 have been matched, but should be ignored.

Numerics are not included by default. To do so:

out = colon_s_small %>%
  select(-extent, -extent.factor,-time, -time.years) %>% # choose to exclude variables
  check_recode(include_numerics = TRUE)
out
# Output not printed for space

Miscoding in survival::colon dataset?

When doing this just today, we noticed something strange in our example dataset, survival::colon.

The variable node4 should be a binary recode of nodes greater than 4. But as can be seen, something is not right!

We’re interested in any explanations those working with this dataset might have.

# Select a tibble and expand
out$counts[[9]] %>%
  print(n = Inf)
# Compressed output shown
# A tibble: 32 x 3
   nodes node4     n
   <dbl> <dbl> <int>
 1     0     0     2
 2     1     0   269
 3     1     1     5
 4     2     0   194
 5     3     0   124
 6     3     1     1
 7     4     0    81
 8     4     1     3
 9     5     0     1
10     5     1    45
# … with 22 more rows

There we are then, a function that may be useful in detecting miscoding. So useful in fact, that we have immediately found probable miscoding in a standard R dataset.

Fun with Regression

“All models are wrong, but some are useful”

George Box

This quote by statistician George Box feels like a good starting point from which to consider some of the challenges of regression modelling.  If we start with the idea that all models are wrong, it follows that one of the main skills in carrying out regression modelling is working out where the weaknesses are and how to minimise these to produce as close an approximation as possible to the data you are working with – to make the model useful.

The idea that producing high-quality regression models is often more of an art than a science appeals to me.  Understanding the underlying data, what you want to explore, and the tools you have at hand are essential parts of this process.

After attending the excellent HealthyR+: Practical Logistic Regression course a few weeks ago, my head was buzzing with probabilities, odds ratios and confounding.  It was not just the data which was confounded.  As someone fairly new to logistic regression, I thought it might be useful to jot down some of the areas I found particularly interesting and concepts which made me want to find out more.  In this first blog post we take a brief look at:

  • Probability and odds
    • The difference between probability and odds
    • Why use log(odds) and not just odds?
    • Famous probability problems
  • Collinearity and correlation
    • What is collinearity?
    • How do we detect collinearity?
    • Is collinearity a problem?

Probability and odds

The difference between probability and odds

Odds and probability are both measures of how likely it is that a certain outcome might occur in a series of events.  Probability is perhaps more intuitive to understand, but its properties make it less useful in statistical models, so odds, odds ratios and log(odds) are used instead; more on this in the next section.

Interestingly, when the probability of an event occurring is small – <0.1 (or less than 10%) – the odds are quite similar to the probability.  However, as probability increases, the odds also increase but at a greater rate; see the following figure:

Here we can also see that whilst probabilities range from 0 to 1, odds can take on any value between 0 and infinity.
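
A quick numerical sketch of this relationship in base R:

p = seq(0.01, 0.99, by = 0.01)
odds = p / (1 - p)
log_odds = log(odds)

# e.g. p = 0.5 gives odds = 1 (and log(odds) = 0),
# p = 0.9 gives odds = 9, and p = 0.99 gives odds = 99 -
# probability is bounded at 1 while the odds head off towards infinity.
plot(p, odds, type = "l")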

Why use log(odds) and not just odds?

Asymmetry of the odds scale makes it difficult to compare binary outcomes, but by using log(odds) we can produce a symmetrical scale; see the figure below:

In logistic regression, the odds ratio concerning a particular variable represents the change in odds with each unit increase, whilst holding all other variables constant.
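
A minimal sketch of this in practice, assuming the colon_s example data from the finalfit package (with variables as named there):

library(finalfit)

fit = glm(mort_5yr ~ age.factor + sex.factor, data = colon_s, family = binomial)

# Coefficients are returned on the log(odds) scale;
# exponentiate them to get odds ratios, each representing the change
# in odds per unit (or level) increase, holding the other variables constant:
exp(coef(fit))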

Famous probability problems

I find probability problems fascinating, particularly those which seem counter-intuitive. Below are links to explanations of two intriguing probability problems:

Collinearity and correlation

What is collinearity?

The term collinearity (also referred to as multicollinearity) is used to describe a high correlation between two explanatory variables.  This can cause problems in regression modelling because the explanatory variables are assumed to be independent (and indeed are sometimes called independent variables, see word clouds below). 

The inclusion of variables which are collinear (highly correlated) in a regression model can lead to the false impression, for example, that neither variable is associated with the outcome when, in fact, individually each variable does have a strong association.  The figure below might help to visualise the relationships between the variables:

In this image, y represents the control variable, and x1 and x2 are the highly correlated, collinear explanatory variables.  As you can see, there is a large area of (light grey) overlap between the x variables, whereas there are only two very small areas of independent overlap between each x and y variable.  These small areas represent the limited information available to the regression model when trying to carry out analysis.

How do we detect collinearity?

A regression coefficient can be thought of as the rate of change, or as the slope of the regression line.  The slope describes the mean change in the outcome variable for every unit of change in the explanatory variable.  It is important to note that regression coefficients are calculated based on the assumption that all other variables (apart from the variables of interest) are kept constant. 

When two variables are highly correlated, this creates problems. The model will try to predict the outcome but finds it hard to disentangle the influence of either of the explanatory variables due to their strong correlation. As a result, coefficient estimates may change erratically in response to small changes in the model.
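
Two common checks, sketched with illustrative names (my_data, x1, x2 and y are placeholders, and car::vif assumes the car package is installed):

# 1. Pairwise correlation between candidate explanatory variables:
cor(my_data$x1, my_data$x2, use = "complete.obs")

# 2. Variance inflation factors for a fitted model - large values
# (rules of thumb often say > 5 or > 10) suggest problematic collinearity:
fit = lm(y ~ x1 + x2, data = my_data)
car::vif(fit)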

Various terms are used to describe these x and y variables depending on context.  There are slight differences in the meanings, but here are a few terms that you might encounter:

The information I used to generate these word clouds was based on a crude estimate of the number of mentions in Google Scholar within the context of medical statistics.

Is collinearity a problem?

Collinearity is a problem if the purpose of your analysis is to explain the interactions between the data; however, it has little effect on the overall predictive properties of your model, i.e. the model will provide accurate predictions based on all variables as one big bundle, but will not be able to tell you about the interactions of isolated variables.

If you are concerned with exploring specific interactions and you encounter collinearity, there are two main approaches you can take:

  • Drop one of the variables if it is not vital to your analysis
  • Combine the variables (e.g. weight and height can be combined to produce BMI)

An example of a publication where missed collinearity led to potentially erroneous conclusions concerns analyses carried out on data relating to the World Trade Organisation (WTO). Here is a related article which attempts to unpick some of the problems with previous WTO research.

Finishing on an example of a problematic attempt at regression analysis may perhaps seem slightly gloomy, but on the contrary, I hope that this might provide comfort if your own analysis throws up challenges or problems – you are in good company!  It also brings us back to the quote by George Box at the beginning of this blog post, where we started with the premise that all models are wrong.  They are at best a close approximation, and we must always be alert to their weaknesses.

What next?

Look out for the next HealthyR+: Practical Logistic Regression course and sign up.  What areas of medical statistics do you find fun, puzzling, tricky, surprising? Let us know below.