Using codepen.io and Google Cloud to build a handy risk calculator.

If you’ve been watching the news or Twitter over the past week, you may have seen the appendicitis-related headlines about unnecessary operations being performed. The RIFT collaborative and Dmitri Nepogodiev have really spearheaded some cool work looking at who gets unnecessary operations; the original article and selected coverage below are all well worth a read:

Original article:

https://bjssjournals.onlinelibrary.wiley.com/doi/10.1002/bjs.11440

(Selected news coverage):

https://www.theguardian.com/society/2019/dec/04/unnecessary-appendix-surgery-performed-on-thousands-in-uk

https://www.dailymail.co.uk/health/article-7750707/Thousands-young-British-women-needless-operations-remove-appendix.html

https://www.independent.co.uk/life-style/health-and-families/women-appendix-surgery-appendicitis-study-a9232146.html

So, when Dmitri asked if I could develop a web application for risk scoring to help identify those at low risk of appendicitis, I was very excited.

Having used risk calculators in clinical practice quite often, I started to write a list of what makes a good calculator and how to build one that can be used effectively. The most important requirements were:

  • Easy to use
  • Works on any platform (as NHS IT has a wide variety of browsers!) and on mobile (some hospitals have great Wi-Fi through eduroam)
  • Can be quickly updated
  • Looks good and gives an intuitive result
  • Lightweight, requiring minimal processing power, so that many users can use it simultaneously

Now, we use a lot of R in surgical informatics, but Shiny wasn’t going to be the one for this, as it’s not that mobile friendly and doesn’t necessarily work smoothly on every browser (sorry, Shiny!). Similarly, the computational footprint required to run Shiny is too heavy for this. So, using codepen.io and the Pug HTML compiler, I wrote a mobile-friendly website (there are still a couple of tweaks I’d like to make before it’s entirely mobile friendly!).

Similarly, I get asked why not an app? Well, app development requires developing for multiple platforms (Apple, Android, Blackberry) and an app can’t be used on those pesky NHS PCs. Furthermore, if something goes out of date or needs to be updated quickly, repairing it will take ages, as updates sometimes have to be vetted by app stores etc.

My codepen.io for the calculator:

Codepen.io is a great development tool and allows you to combine and get inspired by other people’s work too!

I then set up a micro instance on Google Cloud, installed the Pug compiler and apache2, selected a fixed IP and opened the HTTP port to the world – all done! (The setup is a little more involved than this, but it was straightforward.) The micro instance is very, very cheap, so it’s not expensive to run. The Birmingham crew then bought a lovely domain, appy-risk.org, for me to attach it to.

Here’s the obligatory CPU usage graph since publication (usage is slightly higher, but as you can tell, it’s still quite light):

HealthyR Ghana! Quick summary

These past two days have been a new frontier for the HealthyR course, taking the number of continents we’ve run it in up to two. After the NIHR Unit on Global Surgery meeting, we travelled to Tamale, Ghana’s third largest city. The Wellcome Trust have kindly funded the development of the innovative, open-source HealthyR Notebooks course. Spearheaded by Dr Riinu Ots, this course aims to provide an easy way for anyone in the world to learn R. This is particularly powerful where resources are limited and there are plenty of questions that need answering.

Enter Stephen Tabiri, professor of Surgery at the University for Development Studies in Tamale. Stephen is a surgeon and has a large team of junior surgeons in training, nurses and other clinicians. In an innovative twist, the course was held on a mix of laptops, some from the data centre and some the delegates’ own machines. Riinu had a brilliant solution that served an offline RStudio instance to delegates’ computers.

Day 1 quickly introduced some key concepts to the delegates, who quickly worked through the materials! After lunch a global surgery showcase event was held, which demonstrated the wide range of tools available to analyse data in R!

Day 2 kicked off nicely, completing the basics session and then going straight into everyone’s favourite session – plotting! There were a lot of pleased delegates as they made complicated and colourful ggplots, and people made a lot of progress in what can sometimes be a challenging language to learn! We finally closed with a logistic regression session delivered by Ewen Harrison, where people built their own models!

Throughout the course numerous people brought laptops along to install RStudio on their own machines. A very enthusiastic and keen bunch of data scientists! Excitingly, members of the Ghana R community also attended, to offer support and discuss how best to provide a sustainable future for data science in Ghana.

Do you speak rlang?

Something for the more advanced R user! We’ll be back to our more exciting programming shortly (I hope!).

rlang? I already speak R

Quite right. rlang is part of the tidyverse side of things, so is probably more useful if you’re an advanced R user. It’s certainly not for the faint-hearted and needs a comprehensive understanding of how R ‘sees’ the code you write.

rlang is a low-level programming API for R that the tidyverse is built on (‘low-level’ meaning it speaks to R in as R-like a way as possible, whereas ‘high-level’ code is more user orientated and interpretable). It enables you to extend what the tidyverse can do and adapt it for your own uses. It’s particularly useful if you’re doing more ‘programming’-type R work, for example building a package, making a complex Shiny app or writing functions. It can also come in handy if you’re doing lots of big data manipulation and want to manipulate different datasets in the same way.

Here’s an example of dynamically naming variables

In this example, say we have a tibble of variables, but we want to apply dynamic changes to it (so we feed R a variable that can change, either via another function like purrr::map or in a Shiny app). In this instance, specifying each variable and each different possible consequence using different logical functions would take forever and be very clunky. So we can use rlang to simply put a dynamic variable/object through the same function.

We make use of the curly-curly ({{ }}) operator too, which allows us to avoid the bulkier enquo() and !! syntax:
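Here’s a minimal sketch of the idea (the function and variable names are just examples I’ve made up): a wrapper that groups and summarises whichever variables it is given and names the output column dynamically.

library(dplyr)

# A hypothetical wrapper: group by any variable, take the mean of any other,
# and name the resulting column dynamically from a string
mean_by_group <- function(data, group_var, value_var, new_name) {
  data %>%
    group_by({{ group_var }}) %>%                                 # curly-curly instead of enquo()/!!
    summarise(!!new_name := mean({{ value_var }}, na.rm = TRUE))  # := lets us unquote the new column name
}

mean_by_group(mtcars, cyl, mpg, "mean_mpg")   # works for any dataset/variable combination

The same function can then be fed different variables programmatically, for example from purrr::map() or from inputs in a Shiny app.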

HealthyR Estonia Day 3

Well, what a great 3 days this has been! Again today, we gained extra people joining in with HealthyR Notebooks – a formidable achievement for a statistics course!

We kicked off with a brilliant session by Ewen Harrison about survival analysis and time-to-event data, introducing new concepts and the R survival package. We then went into an amazing session by Riinu Ots, who showcased how to plot your data with real-world, practical examples. This session really was brilliant, living up to Riinu’s catchphrase ‘always plot your data’. This was followed by a short pop quiz, which all participants did brilliantly on!

After a tasty lunch, we then continued into a new session, how to work with your data. This session is aimed at translating the learning of HealthyR straight to a real-life dataset of the participants’ choosing. Participants were guided through the practical application of R to their own data, giving them a springboard to produce some cool analyses after the course.

Following the final session we departed to Tallinn for flights back home tomorrow.

All in all, HealthyR Notebooks was a success and very fun to teach. Estonia was well worth the trip and showed we could teach R to an international audience (even having fun at the same time!). Looking forward to developing the course further when the team go to Ghana later this year. Big thanks to Julius for organising the course and to the Wellcome Trust!

Tidying up edgeR and differential expression

For anyone who has done any differential expression from RNA array/sequencing datasets in R, edgeR or DESeq2 are the packages to go to! They contain powerful functions and models which cater for most uses.

As with most bioinformatics-type packages, they end up creating scripts with endless lines of code and square brackets aplenty. Not only does this make scripts difficult to read, it’s also difficult for R to handle in some respects. Quite often these packages use custom objects etc., which makes changing formats and sending data into different packages a pain.

Well, if you do lots of differential expression, here are some wrappers for you! Taking inspiration from the FinalFit and tidyverse approach, I have written a few wrappers for differential expression analyses to make your code and life happier.

Please note – throughout all of this I use count data, as this is what edgeR likes, rather than FPKM/RPKM or RSEM.

Step 1 – Taking the first steps to expression happiness

So – first up, preparing and filtering your data. This function takes your count data and any clinical/sample data, wraps it up into a DGEList object, then filters it. I like to filter based on proportions of lowly expressed transcripts, as purely filtering on arbitrary CPM values has its own issues, particularly if your read depth is low.

tidy_dge() is a function which does all of this! It combines two functions: dge_bind(), which sets up your DGEList based on two data frames, and tidy_gene_filter(), which then filters these objects and calculates normalisation values.

We can then combine this into one function tidy_dge(), which shrinks about 10-20 lines into one.
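For illustration, here’s a rough sketch of the sort of thing tidy_dge() does under the hood – the argument names and the exact filtering rule here are my own assumptions, built from standard edgeR functions:

library(edgeR)

# Sketch only: bind counts + sample data, filter lowly expressed transcripts,
# then calculate normalisation factors
tidy_dge <- function(counts, sample_data, min_prop = 0.5) {
  dge <- DGEList(counts = counts, samples = sample_data)     # the dge_bind() step
  keep <- rowSums(cpm(dge) > 1) >= min_prop * ncol(dge)      # the tidy_gene_filter() step: keep transcripts
  dge <- dge[keep, , keep.lib.sizes = FALSE]                 # expressed in at least min_prop of samples
  calcNormFactors(dge)                                       # and add normalisation factors
}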

Step 2 – Translating annotations

Next up: convert_gene(). Switching between gene annotation systems (i.e. ENSEMBL, HUGO, ENTREZ) is a pain. We all know that. The package biomaRt tries to solve this a bit, but we’re still left with somewhat unwieldy functions and lines of code trying to pick out where genes sit and what to map them back to. For this, I have devised convert_gene().

convert_gene() passes a request to biomaRt (the ENSEMBL API) and will convert any (common) gene name to any other gene annotation. It takes a little while to contact the servers, but is well worth it! Alternatively, you will be able to pass a saved biomaRt output to this, so you only make one pull, which speeds things up! If we are to do this, we should create three separate marts: one containing genes, one genes and GO codes, and the other the GO codes and processes. Here’s roughly how to do this:
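A sketch of how those three pulls might look with biomaRt directly (mouse as an example; the dataset and attribute names are worth double-checking with listDatasets() and listAttributes()):

library(biomaRt)

# Connect to the Ensembl mart for mouse
mart <- useEnsembl(biomart = "ensembl", dataset = "mmusculus_gene_ensembl")

# 1. Genes only (several common annotation systems side by side)
gene_mart <- getBM(attributes = c("ensembl_gene_id", "mgi_symbol", "entrezgene_id"),
                   mart = mart)

# 2. Genes with their GO codes
gene_go_mart <- getBM(attributes = c("ensembl_gene_id", "go_id"), mart = mart)

# 3. GO codes with their process names
go_mart <- getBM(attributes = c("go_id", "name_1006", "namespace_1003"), mart = mart)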

The reason for this: in testing, some of the marts produce errors when genes and GO codes are pulled all at once! The ENSEMBL biomaRt API is also somewhat unstable and frequently produces 404 errors and periods of downtime. Plus, we can then reuse the mart we created for genes in convert_gene().

Back to convert_gene()! It is designed to work with a glm object straight out of edgeR or with a dataframe of counts, but will detect if it’s another type of file. Just be careful, as the default species setting is currently mouse.

GO – The final step to expression fulfillment

Finally – those GO analyses, how on earth do you then trace back and find which genes in which pathways are up-regulated? I’ve never been able to find a nice function to do this.

Fear not – we have a new one! gene_GO_explorer(). This is very handy indeed. It basically will take your glm object from edgeR and map the genes to GO processes and codes.


There are some functions wrapped up in this function for dissecting pathways out too: go_grep, go_code, go_pathway and go_components. These allow you to supply a list of GO processes or GO codes, or to search the GO annotations for pathways of interest. For example, if your GO enrichment shows activity for a specific GO pathway, you can input the GO code into go_code to get the genes involved in that pathway. For searching GO pathway annotations, we can use go_grep. Here, if we set go_grep to ‘stress’, gene_GO_explorer() will search for and return any pathways with the term ‘stress’ in their description.

If genes are part of multiple pathways that are included in our search, the GO terms will be collapsed down together, separated by semicolons. This data can be displayed in a ‘long’ format if required by specifying long = TRUE.

Note – unlike goana, gene_GO_explorer() does not mind what type of gene annotation is supplied! I use convert_gene() to prepare my glms for goana then pop it through gene_GO_explorer().

If you’re not using a glm from edgeR, switch off the adjusted p-value part of gene_GO_explorer() using adjust_p = FALSE, or set the name of the column containing your p-values to ‘PValues’.
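Putting the pieces together, a hypothetical call might look like this (the arguments are those described above, but treat this as illustrative rather than gospel):

# Illustrative only – combining convert_gene() and gene_GO_explorer()
my_glm %>%                                   # a glm result straight out of edgeR
  convert_gene() %>%                         # map gene identifiers (default species is mouse)
  gene_GO_explorer(go_grep = "stress",       # return pathways mentioning 'stress'
                   long = TRUE,              # one row per gene-GO term pair
                   adjust_p = TRUE)          # keep adjusted p-values (edgeR glm input)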

I’ll be developing more of this sort of thing, in addition to some nicer wrappers for preparing your sequencing data straight from FASTQ formats, making user-readable edgeR models and then querying protein interaction databases too! Comments/thoughts are appreciated if people find this kind of stuff useful. Considering making a package. FinalOmics, anyone?

Tidyverse does sequencing data!

What I’ve been up to…

For the past few months I’ve been stationed between Surgical Informatics in Edinburgh and the CRUK Beatson Institute for Cancer Research in Glasgow. The aim: to identify new treatments for hepatocellular carcinoma, a disease that affects nearly a million people worldwide each year. Surgery is the only cure, but due to its complexity it tends to be quite a specialist and risky undertaking. This, combined with the late stage at which patients become symptomatic, means that a lot of liver cancer globally is incurable.

Current chemotherapy keeps patients in a ‘holding pattern’ where it slows the growth of tumours, but lacks the ability to kill enough of the tumour to make it possible to then use surgery as a cure (called ‘downstaging’). If we can find a drug or treatment that can facilitate this, that’d be very exciting.

So, I’ve been accruing loads of sequencing data from human and mouse tumours to try and identify weaknesses in these tumours that we can target. But analysing this RNA and DNA sequencing data takes a lot of time and computational power. So I’ve been investigating how to shrink this down and take as much of it straight to R as possible, simplifying the pipeline and hopefully speeding things up a bit. Traditionally, this process takes hours to days, but with this workflow I’ve shrunk it to about a day for 30+ samples so far. This is obviously dependent on how many samples you have, how big your genome/transcriptome is and your sequencing coverage/depth too.

Quick alignment and mapping

Taking the FASTQ files generated by the sequencers, we first run them through the Salmon pseudoaligner. Salmon is a package that uses pseudoalignment to align and map the reads from RNA-Seq data, thus shrinking two steps into one. This also enables us to leverage the speed and accuracy advantages of pseudoalignment (meant to be more robust in several aspects). I tend to dump all my FASTQ files into one folder per species, experiment or cohort and then run Salmon on this.

String manipulation in Bash

One challenge I have found is that our institute appends suffixes to the files for identification. This is obviously great, but Salmon needs to know which reads come from which end (in the case of paired-end sequencing) and which technical repeat. Otherwise we’re simply going to be aligning gobbledygook.

In this case, the Illumina sequencer kicks out files which have an ‘S’ in front of the sample number, an ‘L’ in front of the technical repeat (lane) number and an ‘R’ in front of the direction of sequencing (i.e. for paired-end reads).

So, using some loops in bash, I devised the following (note the index file for the mouse transcriptome, mm10_t_index):

#!/bin/bash
source /home/user/miniconda3/etc/profile.d/conda.sh
conda activate salmon

# note: the '18' strips the last 18 characters from each filename (e.g. the "_L001_R1_001.fastq" suffix) - adjust to match your file naming
FILE_LIST=$(for fn in /home/user/data/sequencing_data_here/fastq/*; do echo ${fn::-18} ; done)

#now check we can loop through this list
#for fn_short in $FILE_LIST; do echo "${fn_short}_test"; done

#now remove duplicates
FILE_LIST=$(echo $FILE_LIST | awk 'BEGIN{ORS=" "}{for (i=1; i<=NF; i++)a[$i]++} END {for (i in a) print i }')
FILE_LIST=$(echo "$FILE_LIST" | tr ' ' '\n')

# loop over each sample and each lane (technical repeat)
for fn in $FILE_LIST;
do
  samp=$(basename "${fn}")
  for lane in L001 L002 L003 L004;
  do
    echo "Processing sample ${samp}, lane ${lane}"
    salmon quant -i "/home/user/mm10_t_index/" -l A \
      -1 "${fn}"_${lane}_R1_001.fastq \
      -2 "${fn}"_${lane}_R2_001.fastq \
      -p 8 --validateMappings -o quants/${samp}_${lane}_quant
  done
done

Now, when run, this should generate aligned and mapped data in a separate ‘quants’ folder, alongside the QC information etc. for each sample.

The most interesting files in these quantified data folders are the tab-delimited quant.sf files. These contain the counts for mapped reads at the gene level. I tend to work with count data only, as from this it’s possible to derive TPM and FPKM.

Using the tidyverse with map() to quickly prepare sequencing data

Using the tidyverse we can make this really quick and easy!

First, using readr::read_tsv() and purrr::map() we can open all the files within a specific directory and add the path name as a column (so we don’t lose track of our samples). Then we use mutate() to turn the path into a sample name. Finally, using spread() and summarise(), we sum the counts across our technical repeats!

library(tidyverse)


#required file paths
file_directory = '/mnt/data/this_is_folder_where_data_is'


file_list = dir(path = file_directory, full.names = TRUE, pattern = 'quant\\.sf$', recursive = TRUE)


#Step 1 – open all files, read and extract sample identifiers
file_list %>%
  map(~read_tsv(.) %>%
        mutate(path = gsub('/mnt/data/this_is_folder_where_data_is/quants/', '', .x)) %>% #Edit this to the appropriate file directory!
        mutate(path = gsub('_quant/quant.sf', '', path)) %>%
        mutate(path = gsub('__', '_', path))) -> gene_counts_list


#Step 2 – split repeats up and combine into 1 dataframe
gene_counts_list %>%
  bind_rows() %>%
  extract(path, into = c("sample_id", "n_repeat"), "(.*)_([^_]+)$") %>%
  select(sample_id, n_repeat, everything()) -> genes_all_samples_df


#Step 3 – sum reads together!
genes_all_samples_df %>%
  group_by(sample_id, Name) %>%
  summarise(NumReadsTotal = sum(NumReads)) %>%
  spread(key = sample_id, value = NumReadsTotal) %>%
  column_to_rownames(var = 'Name') -> gene_counts_by_sample


rm(file_directory, file_list, gene_counts_list)
#Save data – note that save.image() expects a file path (e.g. ending in .RData), not a folder
save.image('/mnt/data/results_folder.RData')

Happy sequencing! (and thanks Riinu Ots!)

Making sense of machine learning – how do we measure performance?

An exciting direction for the Surgical Informatics group is the application of machine learning models to clinical problems.

As we hear on a nearly daily basis, machine learning has loads to offer patients and clinicians, but how can we make these models understandable and importantly, how do we measure that these models are looking at what we’re interested in?

Currently, how well a diagnostic test performs is described by four main parameters (most students and clinicians will groan when they hear these words):

  • Sensitivity (how many people who have the condition are identified correctly)
  • Specificity (how many people who don’t have the condition are identified correctly)
  • Positive Predictive Value (how many times a test positive is a true positive)
  • Negative Predictive Value (how many times a test negative is a true negative)

Now, interestingly the field of machine learning has evolved some separate parameters for measuring the usefulness of machine learning models:

  • Recall (synonymous with sensitivity)
  • Precision (synonymous with positive predictive value)

There are other measures too, including the F1 score and accuracy. The issue with these metrics is that although they are mathematically handy for describing models, they lack relevance to what is clinically important. For example, if a patient wants to know how often a test might give a false result, the F1 score (the harmonic mean of precision and recall) is going to be pretty useless.

Now, if we want to make a machine learning risk prediction model, we need a clinically relevant metric to allow model training to be measured and optimised. In Python there are lots of functions for this; however, R is far more common in healthcare data analysis. At Surgical Informatics, we use Keras to interact with TensorFlow in R. Keras for R is far newer than the Python version, so there are fewer metric functions available.

Clinically, a model to predict a specific event happening is more useful than one ruling it out, particularly if the event is serious (i.e. death). A recall metric would be perfect for this; however, there is no ready-made recall metric available in Keras for R.

So let’s make one!

Fortunately, Keras provides us with functions to perform calculations on tensors, such as k_sum, k_round and k_clip. This lets us manipulate tensors using Keras and come up with custom metrics. You can find the other backend Keras functions here:

https://keras.rstudio.com/articles/backend.html#backend-functions.

So, if recall is equal to the number of true positives (tp) divided by the number of true positives plus false negatives (the ‘possible positives’, pp), we need to write a function that defines these quantities.

Now, should we just divide tp by pp? Unfortunately Keras doesn’t like this, as pp can be zero. So we add k_epsilon() (a tiny constant) to pp in the recall expression, to give:
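Here is the whole thing as a custom Keras metric in R – a minimal sketch assuming a binary 0/1 outcome, with tp and pp as my own variable names:

library(keras)

# Custom recall metric built from Keras backend functions
metric_recall <- custom_metric("recall", function(y_true, y_pred) {
  tp <- k_sum(k_round(k_clip(y_true * y_pred, 0, 1)))   # true positives
  pp <- k_sum(k_round(k_clip(y_true, 0, 1)))            # possible positives (tp + false negatives)
  tp / (pp + k_epsilon())                               # k_epsilon() stops us dividing by zero
})

This can then be passed to compile() through the metrics argument, e.g. metrics = list(metric_recall), so recall is reported during training.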

And that should calculate the recall (or sensitivity) for the model!