ZOOMORPHIC ADVENTURES WITH DATA ANIMALS AT THE SURGICAL & CRITICAL CARE INFORMATICS AWAY DAY

If data were an animal, what would it be? 

Cath Montgomery, Medical Sociologist, writes about our recent exploration into pottery and data animals.

Often we think about data as an object: inert, manipulable, and something we control. We collect it, harvest it, scrape it, clean it, curate it, store it, share it, analyse it and display it. In these endeavours, we think about human agency and the work that we – as clinicians, statisticians, data scientists, sociologists – do to make sense of the world through quantified means. But what about the data themselves? What do they do? And what would it mean to give them agency? This is something that Science & Technology Studies scholars do routinely to underscore the ways in which the material world interacts with humans to create societal order. As a fun and playful way to think about some of the features of data that we identify with or relate to, we can ask, “If data were an animal, what would it be?”

After a full day and a half of talking about data at the strategy away day, it was time for people to get their hands dirty at Doodles pottery for some ‘team-building’. What better occasion to paint our own data animals! Everyone chose an item to paint – a mug, a jug, a bowl – and got to work dabbing, splatting, etching, and painting their designs. Creative activities like pottery painting are said to be good for team building because they nurture trust between colleagues; usually, everyone starts with minimal expertise, which is a good leveller, and everyone makes themselves a little bit vulnerable by putting their creations out into the world. This kind of activity also helps people get in touch with their inner artist and the parts of their brain responsible for creativity, imagination and intuition. This is the birthplace of data animals!

If the description of data as inert, manipulable, something we control were sufficient, we might have seen a lot of domestic data animals – cats and dogs, rabbits and rodents. Of these, there were none. Instead, we had a zebra and a giraffe, centipedes and dragonflies, frogs, foxes, owls, a death butterfly and a skull. Certainly, it seems that data are not tame in this group’s collective imagination! 

So what did our data animals have to say about data? Riinu’s rainbow zebra shows the importance of reading between the lines; data analysis is not black and white, and datasets are diverse, represented by the zebra’s rainbow stripes. Sarah’s giraffe represents the ability to use data to reach resources that would otherwise be difficult to access (it’s also an animal in long format). George’s frog follows an r-selection breeding strategy, otherwise known as an ‘r-strategist’: “this narrative is inspired by my approach to model selection – generating as many as one can sensibly think of and then whittling them down using natural selection/data metric driven selection”. Liz’s centipedes represent lots of quick-moving arms but, overall, somewhat slow going; Annemarie’s death butterfly is superficially elegant and beautiful, but must be treated with respect as it can be deadly if provoked or used badly. Ewen’s “ripped off owl jug” embodies imitation as the sincerest form of flattery: in data science, it is best to build on what has already been a success. Cath’s barn owl is a flash of light in the dark, but also eats other data animals for breakfast (sociologists of science and medicine can be a critical bunch). Ian’s animal is deceased and only the skull remains: “being the oldest member of the group I have datasets dead and buried all over Scotland…but a little bit of “data mining” might resurrect some of them?”

So: from sex and death to work and the constant striving for resources, social benefit and success, the data animals have it all. It would be disingenuous to suggest that the explanations we wove to account for our creations preceded the act of painting them; nonetheless, the stories we tell about data are an important way in which we relate to the world and the work that we do to make sense of it through research. 

Making a Research Focus Wordcloud

Is it better to have a narrow or broad research focus? There are obviously pros/cons to both options (and arguably these aren’t mutually exclusive!), but it’s certainly an interesting thought posed in a recent tweet from @dnepo.

While I’m sure we all have a vague idea of where we sit on that spectrum of broad-narrow focus, there’s nothing like a bit of objective data (like a word cloud) to help us understand this better! While there are some online tools out there, R can make getting, cleaning, and displaying this data very easy and reproducible.

We will cut down on the work required to collect all your publication data by using Google Scholar – if you don’t have an account already, make one!

Firstly, we need 3 packages to achieve this:

  1. scholar: to download publications associated with your google scholar account.
  2. tidyverse: to clean and wrangle your publication data into the required format.
  3. wordcloud2: to generate a pretty wordcloud of your publication titles.
# install.packages(c("scholar", "wordcloud2"))
library(tidyverse); library(scholar); library(wordcloud2)

Secondly, we need to provide specific information to R to allow it to do the task.

  1. We need to get our Google Scholar ID from our account (look at the URL) to tell R where to download from (we’ll use mine as an example, but anyone’s can be used here).
  2. We want to tell R which words we can ignore because they’re just filler words or irrelevant (e.g. we don’t care how many times titles have “and” in them!). This is optional, but recommended!
gscholarid <- "MfGBD3EAAAAJ" # Kenneth McLean
remove <- c("and", "a","or", "in", "of", "on","an", "to", "the", "for", "with")

Finally, we can generate our word cloud! The code below is generic, so it works for anyone as long as you supply the Google Scholar ID (gscholarid) and filler words (remove) defined above.

# Download dataframe of publications from Google Scholar
scholar::get_publications(id = gscholarid) %>%
  tibble::as_tibble() %>%
  
  # Do some basic cleaning of paper titles
  dplyr::mutate(title = stringr::str_to_lower(title),
                title = stringr::str_replace_all(title, ":|,|;|\\?", " "),
                title = stringr::str_remove_all(title, "\\(|\\)"),
                title = stringr::str_remove_all(title, "…"),
                title = stringr::str_remove_all(title, "\\."),
                title = stringr::str_squish(title)) %>%
  
  # Combine all text together then separate by spaces (" ")
  dplyr::summarise(word = paste(title, collapse = " ")) %>%
  tidyr::separate_rows(word, sep = " ") %>%
  
  # Count each unique word
  dplyr::group_by(word) %>%
  dplyr::summarise(freq = n()) %>%
  
  # Remove common filler words
  dplyr::filter(! (word %in% remove)) %>%
  
  # Put into descending order
  dplyr::arrange(-freq) %>%
  
  wordcloud2::wordcloud2()

And here we go! I think it’s safe to say I’m surgically focussed, but with quite a lot of different topics under that umbrella! Why not run the code yourself and see how your publications break down?

World map using the tidyverse (ggplot2) and an equal-area projection

This post was originally published here

There are several different ways to make maps in R, and I always have to look it up and figure this out again from previous examples that I’ve used. Today I had another look at what’s currently possible and what’s an easy way of making a world map in ggplot2 that doesn’t require fetching data from various places.
TLDR: Copy this code to plot a world map using the tidyverse:
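(The original post’s code isn’t reproduced in this excerpt; the sketch below is one way to do it, using ggplot2’s built-in world map data – via the maps package – and the Mollweide equal-area projection from mapproj. The package and styling choices are assumptions, not necessarily those of the original post.)

# A minimal sketch: world map drawn with an equal-area (Mollweide) projection.
# Assumes the maps and mapproj packages are installed alongside the tidyverse.
library(tidyverse)

world_map <- map_data("world")

ggplot(world_map, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "lightgrey", colour = "white") +
  coord_map("mollweide", xlim = c(-180, 180)) +
  theme_void()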

Reshaping multiple variables into tidy data (wide to long)

This post was originally published here

A previous post – New intuitive ways for reshaping data in R – explains what reshaping data in R means, why we do it, and some of the history, e.g., melt() vs gather() vs pivot_longer().
That post shows how to reshape a single variable that had been recorded/entered across multiple different columns. But if multiple different variables are recorded over multiple different columns, then this is what you might want to do:
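(The original post’s code isn’t included in this excerpt; the example below is a sketch with invented data, using tidyr’s pivot_longer() and its “.value” sentinel – the standard tidyverse way to reshape several variables at once.)

# A minimal sketch with invented data: weight and height recorded before and
# after, each in its own column, reshaped into tidy (long) format in one call.
library(tidyverse)

wide_data <- tibble(
  id            = 1:3,
  weight_before = c(70, 82, 65),
  weight_after  = c(68, 80, 66),
  height_before = c(170, 180, 165),
  height_after  = c(170, 180, 165)
)

long_data <- wide_data %>%
  pivot_longer(
    cols      = -id,
    names_to  = c(".value", "time"),  # ".value" keeps weight/height as columns
    names_sep = "_"
  )

long_data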

Setting up a simple one page website using Nicepage and Netlify

This post was originally published here

I’ve just set up a single page website (= online business card) for myself and my husband: https://pius.cloud/ . This post summarises what I did. If you’re looking to get started with something super quickly, then only the first two steps are essential (Creating a website and Serving a website).
Creating a website (using Nicepage)
I’ve created websites using various tools such as straight up HTML, WordPress, Hugo+blogdown (this site – riinu.

HealthyR Online: Lockdown Learning

With news of the lockdown in March came the dawning reality that we wouldn’t be able to deliver our usual HealthyR 2.5 day quick start course in May.

The course is always over-subscribed so we were keen to find a solution rather than cancelling altogether.

HealthyR teaches using the Notebook format, which is already an online tool hosted by RStudio Cloud – so we knew that bit would work online. But what to do about getting attendees and tutors online, delivering lectures and offering interactive support with coding? Could we recreate our usual classroom environment online?

Never a group to shy away from a technical challenge, and with expertise in online education, we set about researching what online tools could be used.

After trying various options we went with Blackboard Collaborate to provide an online classroom, together with our usual RStudio Cloud to provide the Notebooks interface. Collaborate has a really nice feature of ‘break-out rooms’ where small groups can be assigned a separate online room with a tutor to work through exercises. The tutor can provide support and answer questions, using the screen share option to see exactly what each person might be having difficulty with.

After a few rehearsals to work out which roles to assign to our moderators and attendees, and how to send people to the break-out rooms and recall them back to the main room, we were set!

Ahead of the course, attendees were emailed the usual pre-course materials and a log in for their RStudio Cloud accounts, together with an invite to a Collaborate session for each of the 3 days. We split the 20 attendees who had confirmed attendance into groups of 5 and assigned one of our fantastic tutors to each group.

We also set up an extra break-out room with a dedicated tutor, which could be used for anyone needing specific one-to-one help.

After the ice-breaker, ‘What’s a new thing you’ve done since lockdown?’ – everything from macrame to margaritas plus tie-dying and a lot of baking – the course got underway with the first lecture.

One or two delegates had some problems with internet connections, and the assigning of breakout rooms took a bit of getting used to, but Riinu soon worked out an efficient system and the first coding exercises were underway!

We were delighted that the course received really positive feedback overall – none of us were sure this would work, but it did! The live coding sessions and pop quizzes were particularly popular.

We’ll definitely run HealthyR online again if the lockdown continues. Even after the lockdown, moving online widens access and offers the possibility for our international collaborators to join a course without having to travel.

Thank you to all our attendees who quickly adapted to the online format and to our amazing tutors, Tom, Kenny, Derek, Peter, Katie, Stephen, Michael and Ewen, who provided 3 days of their time to run the course, led as ever, by Riinu.

Course Feedback

Collaborate and RStudio Cloud worked very well for me. The breakout rooms were a nice touch to allow discussions.

Very well set-up, particularly considering the challenges of online teaching! Collaborate and RStudio made the course very accessible. Also a fantastic ratio of tutors to pupils and very clear explanations of key concepts in ’R’ language and stats!

Clear and easy instructions. Worked seamlessly!

Teaching materials fantastic. In particular I thought linear and logistic regressions were superbly well taught (as they are difficult to teach/understand). I think I now understand these for the first time, having wasted loads of time reading about them in the past!

This was a great course. I think in person would have allowed more interaction so I would still keep your original format available after this lockdown is over but well done on adapting and providing an excellent course.

Resources

https://healthyr.surgicalinformatics.org

All the HealthyR resources, including our new online book, are available for free on the HealthyR website.

R: filtering with NA values

This post was originally published here

NA – Not Available/Not Applicable – is R’s way of denoting empty or missing values. When doing comparisons – such as equal to, greater than, etc. – extra care and thought needs to go into how missing values (NAs) are handled. More explanation can be found in Chapter 2: R basics of our book, which is freely available on the HealthyR website.
This post lists a couple of different ways of keeping or discarding rows based on how important the variables with missing values are to you.
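(The post’s original examples aren’t reproduced in this excerpt; below is a sketch with invented data showing the key behaviour: comparisons against NA return NA, so filter() silently drops those rows unless you handle them explicitly.)

# A minimal sketch with invented data: keeping or discarding rows with NAs.
library(tidyverse)

df <- tibble(group = c("a", "b", NA, "b"))

df %>% filter(group != "a")                 # NA row is silently dropped
df %>% filter(group != "a" | is.na(group))  # keep the NA row as well
df %>% filter(!is.na(group))                # discard all rows with a missing group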

Using codepen.io and Google Cloud to build a handy risk calculator

If you’ve been watching the news or Twitter over the past week, you may have seen the appendicitis-related headlines about unnecessary operations being performed. The RIFT collaborative and Dmitri Nepogodiev have really spearheaded some cool work looking at who gets unnecessary operations – all well worth a read:

Original article:

https://bjssjournals.onlinelibrary.wiley.com/doi/10.1002/bjs.11440

(Selected news coverage):

https://www.theguardian.com/society/2019/dec/04/unnecessary-appendix-surgery-performed-on-thousands-in-uk

https://www.dailymail.co.uk/health/article-7750707/Thousands-young-British-women-needless-operations-remove-appendix.html

https://www.independent.co.uk/life-style/health-and-families/women-appendix-surgery-appendicitis-study-a9232146.html

So, when Dmitri asked if I could develop a web application for risk scoring to help identify those at low risk of appendicitis, I was very excited.

Having quite often used risk calculators in clinical practice, I started to write a list of what makes a good calculator and how to make one that can be used effectively. The most important were:

  • Easy to use
  • Works on any platform (as NHS IT has a wide variety of browsers!) and on mobile (some hospitals have great Wi-Fi through eduroam)
  • Can be quickly updated
  • Looks good and gives an intuitive result
  • Lightweight, requiring minimal processing power, so many users can use it simultaneously

Now, we use a lot of R in surgical informatics, but Shiny wasn’t going to be the one for this, as it’s not that mobile friendly and doesn’t necessarily work smoothly on every browser (sorry Shiny!). The computational footprint required to run Shiny is also too heavy for this. So, using codepen.io and a Pug HTML compiler, I wrote a mobile-friendly website (still a couple of tweaks I’d like to make before it’s entirely mobile friendly!).

I also get asked: why not an app? Well, app development requires developing for multiple platforms (Apple, Android, Blackberry), and apps can’t be used on those pesky NHS PCs. Furthermore, if something goes out of date or needs to be updated quickly, repairing it will take ages, as updates sometimes have to be vetted by app stores.

My codepen.io for the calculator:

Codepen.io is a great development tool and allows you to combine and get inspired by other people’s work too!

I then set up a micro instance on Google Cloud, installed the Pug compiler and apache2, selected a fixed IP and opened the HTTP port to the world – all done! (The setup is a little more involved than that, but it was straightforward.) The micro instance is very, very cheap, so it’s not expensive to run. The Birmingham crew then bought a lovely domain, appy-risk.org, for me to attach it to.

Here’s the obligatory increase in CPU usage since publication (slightly higher, but as you can tell, it’s quite light):

RStudio Server LAN party: Laptop+Router+Docker to serve RStudio offline

This post was originally published here

TLDR: You can teach R on people’s own laptops without having them install anything or require an internet connection.

Members of the Surgical Informatics team in Ghana, 2019. More information: surgicalinformatics.org

Introduction

Running R programming courses on people’s own laptops is a pain, especially as we use a lot of very useful extensions that actually make learning and using R much easier and more fun. But long installation instructions can be very off-putting for complete beginners, and people can be discouraged from learning programming if installation hurdles invoke their imposter syndrome.

We almost always run our courses in places with a good internet connection (it does not have to be super fast or flawless), so we get our students all set up on RStudio Server (hosted by us) or https://rstudio.cloud (a free service provided by RStudio!).
You connect to either of these options using a web browser, and even very old computers can handle this. That’s because the actual computations happen on the server and not on the student’s computer. So the computer just serves as a window to the training instance used.

Now, these options work really well as long as you have a stable internet connection. But for teaching R offline and on people’s own laptops, you either have to:

  1. make sure everyone installs everything correctly before they attend the course
  2. download all the software and extensions, put them on USB sticks and try to install them together at the start
  3. start serving RStudio from your own computer using a Local Area Network (LAN) created by a router

Now, we already discussed why the first option is problematic (it gatekeeps complete beginners). The second option – installing everything together at the start – means that you start the course with the most boring part. And since everyone’s computers are different (different operating systems, and different versions of those operating systems), this can take quite a while to sort. Therefore, cue option 3 – an RStudio Server LAN party.

Requirements

  1. A computer with more than 4GB of RAM. macOS alone uses around 2-3GB just to keep going, and running the RStudio Server docker container was using another 3-4 GB, so you’ll definitely need more than 4GB in total.
  2. A network router. For a small number of participants, the same one you already have at home will work. I had to specify “network” here because, apparently, even a Google search for “router” suggests the power tool before network routers.
  3. Docker – free software, dead easy to install on macOS (search the internet for “download Docker”). Looks like installation on the Windows Home operating system might be trickier. If you are a Windows Home user who is using Docker, please do post a link to your favourite instructions in the comments below.
  4. Internet connection for setting up – to download RStudio’s docker image and install your extra packages.

My MacBook Pro serving RStudio to 10 other computers in Ghana, November 2019.

Set-up

Running RStudio using Docker is so simple you won’t believe me. It honestly is just a single-liner to be entered into your Terminal (Command Prompt on Windows):

docker run -d -p 8787:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse 

This will automatically download a Docker image put together by RStudio. The one called verse includes all the tidyverse packages as well as publishing-related ones (R Markdown, Shiny, etc.). You can find a list of the different ones here: https://github.com/rocker-org/rocker

Then open a browser and go to localhost:8787 and you should be greeted with an RStudio Server login! (localhost only works on a Mac or Linux; if using Windows, take a note of your IP address and use that instead of localhost.) More information and instructions can be found here: https://github.com/rocker-org/rocker/wiki/Using-the-RStudio-image

Tip: RStudio suggests port 8787, which is what I used for consistency, but if you map it to port 80 instead you can omit the port from the address altogether, as 80 is the browser default. So you can just go to localhost (or your IP address if using Windows).
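For example, the port-80 version would be the same single-liner as above, with only the host side of the -p flag changed:

docker run -d -p 80:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse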

For those of you who have never seen or used RStudio Server, this is what it looks like:

RStudio Server is almost identical to RStudio Desktop. Main difference is the “Upload” button in the Files pane. This one is running in a Docker container, served at port 8787, and accessed using Safari (but any web browser will work).

The Docker single-liner above will create a single user with sudo rights (since I’ve included -e ROOT=TRUE). After logging into the instance, you can then add other users and copy the course materials to everyone using the scripts at https://github.com/einarpius/create_rstudio_users. Note that the instance is running Debian, so you’ll need very basic familiarity with managing file permissions on the command line. For example, you’ll need to make the scripts executable with chmod 700 create_users.sh.

Then connect to the same router you’ll be using for your LAN party, go to the router settings and assign yourself a fixed IP address, e.g., 192.168.1.78. Once other people connect to the network created by this router (either by WiFi or cable), they need to type 192.168.1.78:8787 into any browser and can just start using RStudio. This will work as long as your computer is running Docker and you are all connected to the same router.

I had 10 people connected to my laptop and, most of the time, the strain on my CPU was negligible – around 10-20%. That’s because it was a course for complete beginners and they were mostly reading the instructions (included in the training Notebooks they were running R code in). So they weren’t actually hitting Run at the same time, and the tasks weren’t computationally heavy. When we did ask everyone to hit the “Knit to PDF” button all at the same time, it got a bit slower and my CPU was apparently working at 200%. But nothing crashed and everyone got their PDFs made.

Why are you calling it a LAN party?

My friends and I having a LAN party in Estonia, 2010. We would mostly play StarCraft or Civilization, or as pictured here – racing games to wind down at the end.

LAN stands for Local Area Network and in most cases means “devices connected to the same WiFi*”. You’ve probably used LANs lots in your life without even realising. One common example is printers: you know when a printer asks you to connect to the same network to be able to print your files? This usually means your computer and the printer will be in a LAN. If your printer accepted files via any internet connection, rather than just the same local network, then people around the world could submit stuff to your printer. Furthermore, if you have any smart devices in your home, they’ll be having a constant LAN party with each other.

The term “LAN party” means people coming together to play multiplayer computer games – being on the same network allows people to play in the same “world”, to either build things together or fight with each other. Good internet access has made LAN parties practically obsolete – people and their computers no longer have to physically be in the same location to play multiplayer games together. I use the term very loosely to refer to anything fun happening on the same network. And being able to use RStudio is definitely a party in my books anyway.

But for security reasons (e.g., the printer example), or for sharing resources in places without an excellent internet connection, LAN parties are still very much relevant.

* Overall, most existing LANs operate via Ethernet cables (or “internet cables” as most people, including myself, refer to them). WiFi LAN, or WLAN, is a type of LAN. Have a look at your home router: it will probably have different lights for “internet” and “WLAN”/“wireless”. A LAN can also be connected to the internet – if the router itself is connected to the internet. That’s the main purpose of a router – to take the internet coming into your house via a single Ethernet cable and share it with all your other devices. A LAN is usually just a nice side-effect of that.

Docker, containers, images

Docker image – a file bundling an operating system + programs and files
Docker container – a running image (it may be paused or stopped)

List all your containers: docker ps -a (just docker ps will list only the running containers, i.e., the ones not stopped or paused)

List your images: docker images

Run a container using an image:

docker run -d -p 8787:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse 

When you run rstudio/verse for the first time it will be downloaded into your images. The next time it will be taken directly from there, rather than downloaded. So you’ll only need internet access once.

Stop an active container: docker stop container-name

Start it up again: docker start container-name

Save a container as an image (for versioning or passing on to other people):

docker commit container-name repository:tag

For example: docker commit rstudio-server rstudio/riinu:test1

Rename a container (by default it gets a random name; I’d change it to rstudio-server):

docker rename happy_hippo rstudio-server

You can then start your container with: docker start rstudio-server