RStudio Server LAN party: Laptop+Router+Docker to serve RStudio offline

This post was originally published here

TLDR: You can teach R on people’s own laptops without having them install anything and without needing an internet connection.

Members of the Surgical Informatics team in Ghana, 2019. More information: surgicalinformatics.org

Introduction

Running R programming courses on people’s own laptops is a pain, especially as we use a lot of very useful extensions that actually make learning and using R much easier and more fun. But long installation instructions can be very off-putting for complete beginners, and people can be discouraged from learning programming if installation hurdles trigger their imposter syndrome.

We almost always run our courses in places with a good internet connection (it does not have to be super fast or flawless), so we get our students all set up on RStudio Server (hosted by us) or https://rstudio.cloud (a free service provided by RStudio!).
You connect to either of these options using a web browser, and even very old computers can handle this. That’s because the actual computations happen on the server and not on the student’s computer. So the computer just serves as a window to the training instance used.

Now, these options work really well as long as you have a stable internet connection. But for teaching R offline and on people’s own laptops, you either have to:

  1. make sure everyone installs everything correctly before they attend the course
  2. download all the software and extensions, put them on USB sticks, and try to install everything together at the start
  3. start serving RStudio from your computer using a Local Area Network (LAN) created by a router

Now, we already discussed why the first option is problematic (it gatekeeps complete beginners). The second option – installing everything together at the start – means that you begin the course with the most boring part. And since everyone’s computers are different (both in operating system and in operating system version), this can take quite a while to sort out. Therefore, cue option 3 – an RStudio Server LAN party.

Requirements

  1. A computer with more than 4GB of RAM. macOS alone uses around 2-3GB just to keep going, and running the RStudio Server docker container was using another 3-4 GB, so you’ll definitely need more than 4GB in total.
  2. A network router. For a small number of participants, the same one you already have at home will work. I had to specify “network” here because, apparently, even a Google search for “router” suggests the power tool before network routers.
  3. Docker – free software, dead easy to install on macOS (search the internet for “download Docker”). Looks like installation on the Windows Home operating system might be trickier. If you are a Windows Home user who is using Docker, please do post a link to your favourite instructions in the comments below.
  4. Internet connection for setting up – to download RStudio’s docker image and install your extra packages.

My MacBook Pro serving RStudio to 10 other computers in Ghana, November 2019.

Set-up

Running RStudio using Docker is so simple you won’t believe me. It honestly is just a one-liner to be entered into your Terminal (Command Prompt on Windows):

docker run -d -p 8787:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse 

This will automatically download a Docker image put together by RStudio. The one called verse includes all the tidyverse packages as well as publishing-related ones (R Markdown, Shiny, etc.). You can find a list of the different ones here: https://github.com/rocker-org/rocker

Then open a browser and go to localhost:8787 and you should be greeted with an RStudio Server login! (localhost only works on a Mac or Linux; if using Windows, take a note of your IP address and use that instead of localhost.) More information and instructions can be found here: https://github.com/rocker-org/rocker/wiki/Using-the-RStudio-image

Tip: RStudio suggests port 8787, which is what I used for consistency, but if you set it up on port 80 you can omit the :80, as that’s the browser default anyway. So you can just go to localhost (or something like 127.0.0.1 if using Windows).

For those of you who have never seen or used RStudio Server, this is what it looks like:

RStudio Server is almost identical to RStudio Desktop. The main difference is the “Upload” button in the Files pane. This one is running in a Docker container, served at port 8787, and accessed using Safari (but any web browser will work).

The Docker one-liner above will create a single user with sudo rights (since I’ve included -e ROOT=TRUE). After logging into the instance, you can then add other users and copy the course materials to everyone using these scripts: https://github.com/einarpius/create_rstudio_users. Note that the instance is running Debian, so you’ll need very basic familiarity with managing file permissions on the command line. For example, you’ll need to make the scripts executable with chmod 700 create_users.sh.

Then connect to the same router you’ll be using for your LAN party, go to the router settings and assign yourself a fixed IP address, e.g., 192.168.1.78. Once other people connect to the network created by this router (either by WiFi or cable), they need to type 192.168.1.78:8787 into any browser and can just start using RStudio. This will work as long as your computer is running Docker and you are all connected to the same router.

I had 10 people connected to my laptop and, most of the time, the strain on my CPU was negligible – around 10-20%. That’s because it was a course for complete beginners and they were mostly reading the instructions (included in the training Notebooks they were running R code in). So they weren’t actually hitting Run at the same time, and the tasks weren’t computationally heavy. When we did ask everyone to hit the “Knit to PDF” button all at the same time, it got a bit slower and my CPU was apparently working at 200%. But nothing crashed and everyone got their PDFs made.

Why are you calling it a LAN party?

My friends and I having a LAN party in Estonia, 2010. We would mostly play StarCraft or Civilization, or as pictured here – racing games to wind down at the end.

LAN stands for Local Area Network and in most cases means “devices connected to the same WiFi*”. You’ve probably used LANs lots of times in your life without even realising. One common example is printers: you know when a printer asks you to connect to the same network to be able to print your files? This usually means your computer and the printer will be in a LAN. If your printer accepted files via any internet connection, rather than just the same local network, then people around the world could send stuff to your printer. Furthermore, if you have any smart devices in your home, they’ll be having a constant LAN party with each other.

The term “LAN party” means people coming together to play multiplayer computer games – being on the same network allows them to play in the same “world”, to either build things together or fight each other. Good internet access has made LAN parties practically obsolete – people and their computers no longer have to physically be in the same location to play multiplayer games together. I use the term very loosely to refer to anything fun happening on the same network. And being able to use RStudio is definitely a party in my books anyway.

But for security reasons (e.g., the printer example), or for sharing resources in places without an excellent internet connection, LAN parties are still very much relevant.

* Overall, most existing LANs operate via Ethernet cables (or “internet cables” as most people, including myself, refer to them). WiFi LAN, or WLAN, is a type of LAN. Have a look at your home router: it will probably have different lights for “internet” and “WLAN”/“wireless”. A LAN can also be connected to the internet – if the router itself is connected to the internet. That’s the main purpose of a router – to take the internet coming into your house via a single Ethernet cable and share it with all your other devices. A LAN is usually just a nice side effect of that.

Docker, containers, images

Docker image – a file bundling an operating system + programs and files
Docker container – a running image (it may be paused or stopped)

List of all your containers: docker ps -a (just docker ps will list running containers, so the ones not stopped or paused)

List your images: docker images

Run a container using an image:

docker run -d -p 8787:8787 -e ROOT=TRUE -e USER=user -e PASSWORD=password rstudio/verse 

When you run rstudio/verse for the first time it will be downloaded into your images. The next time it will be taken directly from there, rather than downloaded. So you’ll only need internet access once.

Stop an active container: docker stop container-name

Start it up again: docker start container-name

Save a container as an image (for versioning or passing on to other people):

docker commit container-name repository:tag

For example: docker commit rstudio-server rstudio/riinu:test1

Rename a container (by default it will get a random name; I’d change it to rstudio-server):

docker rename happy_hippo rstudio-server

You can then start your container with: docker start rstudio-server

New intuitive ways for reshaping data in R: long live pivot_longer() and pivot_wider()

This post was originally published here

TLDR: there are two new and very intuitive R functions for reshaping data: see Examples of pivot_longer() and pivot_wider() below. At the time of writing, these new functions are extremely fresh and only exist in the development version on GitHub (see Installation), so we should probably wait for the tidyverse team to officially release them (on CRAN) before putting them into day-to-day use.

Exciting!

Introduction

The juxtaposition of data collection vs data analysis: data that was very easy to collect is probably very hard to analyse, and vice versa. For example, if data is collected/written down in whichever format was most convenient at the time of data collection, it is probably not recorded in a regularly shaped table, and various bits of information end up in different parts of the document. And even if data is collected into a table, it is often intuitive (for data entry) to spread information about the same variable across different columns. For example, look at this example data I just made up:

library(tidyverse)

candydata_raw = read_csv("2019-04-07_candy_preference_data.csv")

candy_type | likes age: 5 | likes age: 10 | likes age: 15 | gets age: 5 | gets age: 10 | gets age: 15
Chocolate  | 4            | 6             | 8             | 2           | 4            | 6
Lollipop   | 10           | 8             | 6             | 8           | 6            | 4

For each candy type, there are 6 columns with values. But actually, these 6 columns capture a combination of 3 variables: age, likes and gets. This is known as the wide format, and it is a convenient way to either note down or even present values. It is human-readable. For effective data analysis, however, we need data to be in the tidy data format, where each column is a single variable and each row is a single observation (https://www.jstatsoft.org/article/view/v059i10). It needs to be less human-readable and more computer-friendly.

Some of you may remember the now retired reshape2::melt() and reshape2::dcast(), and many of you (including myself!) have struggled remembering the arguments for tidyr::gather() and tidyr::spread(). Based on extensive community feedback, the tidyverse team have reinvented these functions with both more intuitive names and clearer syntax (arguments): gather() becomes pivot_longer() and spread() becomes pivot_wider().

Installation

These functions were added just a month ago, so they are not yet included in the standard version of tidyr that comes with install.packages("tidyverse") or even update.packages() (the current version of tidyr on CRAN is 0.8.3). To play with the bleeding-edge versions of R packages, run install.packages("devtools") and then devtools::install_github("tidyverse/tidyr"). If you are a Mac user and it asks you “Do you want to install from sources the package which needs compilation?”, say Yes.
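
Put together, the installation described above is just these two lines (run them once, while you still have an internet connection):

install.packages("devtools")
devtools::install_github("tidyverse/tidyr")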

You might need to Restart R (Session menu at the top) and load library(tidyverse) again. You can check whether you now have these functions installed by typing pivot_longer and pressing F1 – if a relevant Help tab pops open, you’ve got it.

Examples

candydata_longer = candydata_raw %>% 
  pivot_longer(contains("age"))

candy_type | name          | value
Chocolate  | likes age: 5  | 4
Chocolate  | likes age: 10 | 6
Chocolate  | likes age: 15 | 8
Chocolate  | gets age: 5   | 2
Chocolate  | gets age: 10  | 4
Chocolate  | gets age: 15  | 6
Lollipop   | likes age: 5  | 10
Lollipop   | likes age: 10 | 8
Lollipop   | likes age: 15 | 6
Lollipop   | gets age: 5   | 8
Lollipop   | gets age: 10  | 6
Lollipop   | gets age: 15  | 4

Now, that’s already a lot better, but we still need to split the name column into the two different variables it really includes. “name” is what pivot_longer() calls this new column by default. Remember, each column is a single variable.

candydata_longer = candydata_raw %>% 
  pivot_longer(contains("age")) %>% 
  separate(name, into = c("questions", NA, "age"), convert = TRUE)

candy_type | questions | age | value
Chocolate  | likes     | 5   | 4
Chocolate  | likes     | 10  | 6
Chocolate  | likes     | 15  | 8
Chocolate  | gets      | 5   | 2
Chocolate  | gets      | 10  | 4
Chocolate  | gets      | 15  | 6
Lollipop   | likes     | 5   | 10
Lollipop   | likes     | 10  | 8
Lollipop   | likes     | 15  | 6
Lollipop   | gets      | 5   | 8
Lollipop   | gets      | 10  | 6
Lollipop   | gets      | 15  | 4

And pivot_wider() can be used to do the reverse:

candydata = candydata_longer %>% 
  pivot_wider(names_from = questions, values_from = value)

candy_type | age | likes | gets
Chocolate  | 5   | 4     | 2
Chocolate  | 10  | 6     | 4
Chocolate  | 15  | 8     | 6
Lollipop   | 5   | 10    | 8
Lollipop   | 10  | 8     | 6
Lollipop   | 15  | 6     | 4

It is important to spell out the arguments here (names_from =, values_from =) since they are not the second and third arguments of pivot_wider() (like they were in spread()). Investigate the Help tab (pivot_wider + F1) for more information.

Wrap-up and notes

Now these are datasets we can work with: each column is a variable, each row is an observation.

Do not start replacing working and tested instances of gather() or spread() in your existing R code with these new functions. That is neither efficient nor necessary – gather() and spread() will remain in tidyr to make sure people’s scripts don’t suddenly stop working. Meaning: tidyr is backward compatible. But after these functions are officially released, I will start using them in all new scripts I write.

I made the original messy columns still relatively nice to work with – no typos and reasonable delimiters. Usually, the labels are much worse and need the help of janitor::clean_names(), stringr::str_replace(), and multiple iterations of tidyr::separate() to arrive at a nice tidy tibble/data frame.
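
As a taster, here is a minimal sketch of janitor::clean_names() on some hypothetical messy column labels (these names are made up for illustration, not taken from the candy data):

library(tidyverse)
library(janitor)

tibble(`Candy Type!` = "Chocolate", `Likes (Age: 5)` = 4) %>% 
  clean_names()
# the column names become: candy_type, likes_age_5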

tidyr::separate() tips:

into = c("var1", NA, "var2") – now this is an amazing trick I only came across this week! This is a convenient way to drop useless (new) columns. Previously, I would have achieved the same result with:

... %>% 
    separate(..., into = c("var1", "drop", "var2")) %>% 
    select(-drop) %>% 
    ...
    

convert = TRUE: by default, separate() creates new variables that are also just “characters”. This means our age would have been a character vector of, e.g., “5”, “10”, rather than 5, 10, and R wouldn’t have known how to do arithmetic on it. In this example, convert = TRUE is equivalent to mutate(age = as.numeric(age)).
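
A tiny sketch of the difference, using made-up values that match the candy column names:

library(tidyverse)

tibble(name = c("likes age: 5", "gets age: 10")) %>% 
  separate(name, into = c("questions", NA, "age"), convert = TRUE)
# with convert = TRUE, age is an integer (5, 10);
# without it, age would be the character strings "5", "10"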

Good luck!

P.S. This is one of the coolest Tweets I’ve ever seen:

Quick take-aways from RStudio::conf Training Day 01

This week, Riinu, Steve, and Cameron are attending the annual RStudio Workshops (Tue-Wed), Conference (Thu-Fri), and the tidyverse developer day (Sat) in Austin, Texas.

We won’t even try to summarise everything we’re learning here, since the content is vast and the learning is very much hands on, but we will be posting a small selection of some take-aways in this blog.

We’re all attending different workshops (Machine Learning, Big Data, Markdown & Shiny).

Interesting take-away no. 1: terminology

Classification means categorical outcome variable.

Regression means continuous (numeric) outcome variable.

This is a bit confusing when using logistic regression – which by this definition is “classification”, rather than “regression”. But it is very common machine learning terminology, and it makes sense considering the wide range of different methods used for classification (so not just regression).

Interesting take-away no. 2: library(parsnip)

The biggest strength of R is how many different packages (= extensions) it has. Basically, if you can think of a statistical or machine learning method, it’s probably implemented in R. This is because a lot of R users are also R developers – if you find a method that you really want to use, but that hasn’t been implemented yet, you can just go on and implement it yourself. And then publish this new functionality as an R package that everyone can use.

However, this also means that different R packages sometimes do similar things using very different syntax. This is where the parsnip package comes to the rescue, providing a unified interface for using some of these modelling packages.

For example:

Instead of figuring out the syntax for lm() (a basic linear regression model), and then separately for Stan, Spark, and keras, set_engine() from library(parsnip) provides us with a unified interface for all of these different methods for linear regression.
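
A minimal sketch of that idea (not the exact course code; mtcars is just a stand-in dataset):

library(tidyverse)
library(parsnip)

linear_reg() %>% 
  set_engine("lm") %>%       # swap "lm" for "stan", "spark", "keras", ...
  fit(mpg ~ wt + hp, data = mtcars)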

A fully working example can be found in the course materials (all publicly available):

https://github.com/topepo/rstudio-conf-2019/blob/master/Materials/Part_2_Basic_Principles.R

Interesting take-away no. 3: Communication by new means

The Rmarkdown workshop raised two interesting points within the first few minutes of starting – how prevalent communication by HTML has become (i.e. the internet, and the use of interactive documents and apps to relay industry and research data to colleagues and the wider community).

But, maybe more importantly, how little of this is understood by the general public, and how easily it can be used for impressive interactivity with just a few lines of code… followed by the question – how about that raise, boss?

For example, using the plotly package can add immediate interactivity, following on from all the ggplot basics learnt at healthyR.
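
A minimal sketch of the idea, wrapping an existing ggplot object in plotly::ggplotly() (the plot itself is just a made-up example):

library(ggplot2)
library(plotly)

p <- ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) +
  geom_point()

ggplotly(p)   # the same plot, now with hover, zoom and pan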

 

And when you come across a website called “pimp my Rmarkdown” how can you not want to play!!!!

Interesting take-away no. 4: monitor progress with Viewer pane

Regular knitting, including at the start of an Rmd document to ensure any errors are highlighted early, is key. Your RMarkdown is a toddler who loves to misbehave. Previewing your document in a new window can take time and slow you down….

Frequent knitting into the Viewer pane can give you quick updates on how your code is behaving and identify bugs early!

The default in RStudio loads your document into a new window when the Knit button is hit. Loading the preview into the Viewer pane instead can be set as follows:

Tools tab > Global Options > RMarkdown > Set “Output preview in” to Viewer pane

Rmarkdown hack of the day: New chunk shortcut

Control + Alt + I
or
Cmd + Alt + I

Global map of country names

This post was originally published here

This post demonstrates the use of two very cool R packages – ggrepel and patchwork.

ggrepel deals with overlapping text labels (Code#1 at the bottom of this post):

patchwork is a very convenient new package for combining multiple different plots together (i.e. what we usually used grid and gridExtra for).
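
The basic idea is a sketch like this, where p1 and p2 are any two ggplot objects:

library(ggplot2)
library(patchwork)

p1 <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
p2 <- ggplot(mtcars, aes(factor(cyl))) + geom_bar()

p1 + p2   # plots side by side
p1 / p2   # plots stacked on top of each other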

More info:

To really demonstrate the power of them, let’s make a global map of country names using ggrepel:

Now this is already very good, with hardly any overlapping labels, and the world is pretty recognisable. And really, you can make this plot with just 2 lines of code:
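
(The original two lines are not reproduced here; the rough sketch below assumes country label positions are taken as mean coordinates from ggplot2::map_data("world"), which needs the maps package installed.)

library(tidyverse)
library(ggrepel)

map_data("world") %>% 
  group_by(region) %>% 
  summarise(long = mean(long), lat = mean(lat)) %>% 
  ggplot(aes(long, lat, label = region)) +
  geom_text_repel(size = 2) +
  theme_void()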

So what these two lines make is already very amazing.

But I feel like Europe is a little bit misshapen and that the Caribbean and Africa are too close together. So I divided the world into regions (in this case the same as continents, except Russia is its own region – it’s just so big). I then wrote two functions that asked ggrepel to plot each region separately and used patchwork to patch the regions together:

This gives the continents a much better shape, but it does severely misplace Polynesia. See if you can find where, e.g., Tonga is and where it should be.

To see what I did with patchwork there, let’s add black borders to each region (Code#2):

Code#1:

Code#2:

My data science toolbox

This post was originally published here

I’ve been doing data science for over 10 years now, although most of this time I didn’t realise I was doing data science. I thought I was just doing normal science, but focusing on simulations and data analysis rather than field or lab work. I’ve switched fields a few times now: physics BSc, chemistry PhD, now working in medical research. Therefore, instead of this lengthy introduction:

“I’m a physicist by background with substantial interdisciplinary expertise in simulations, data analysis, programming…”

I just go with:

I’m a data scientist.

Anyway, here’s how my toolbox and technical skills have evolved over the years:

My data science toolbox evolution.

P.S. Once a physicist, always a physicist.

Islay distilleries in 3 days

This post was originally published here

Day 0 (Sunday 18-February 2018)

Left Edinburgh at 8am for a 1pm ferry from Kennacraig to Port Askaig (Islay). Edinburgh-Kennacraig should be a 3.5h drive (and it was), but we left early to allow for any delays on the road. Arrived on Islay at 3pm and our accommodation near Port Ellen (southern Islay, close to Ardbeg, Lagavulin, Laphroaig) was a 40 min drive from the port.

Map of Islay with all its lovely distilleries.

Day 1 (Monday 19-February 2018): Ardbeg, Lagavulin, Laphroaig

We hadn’t booked anything other than the ferry and accommodation. February is very low season so we were right to think that no other advance bookings were necessary.

We had a lazy morning and drove to Laphroaig at about 11am. We asked which tours or tasting events were on that day and booked Einar onto the Layers of Laphroaig tasting at 3pm (as the driver, I was allowed to accompany him for free). We then drove to Lagavulin (just a few miles from Laphroaig) and booked ourselves onto the tour at 1pm. We then drove to Ardbeg (another few miles) and had second breakfast at their cafe. Then drove back to Lagavulin for the tour, and then back to Laphroaig for the tasting.

Ardbeg’s epic cafe.

Waiting for the tour to begin at Lagavulin’s homey tasting room.

In Laphroaig’s tasting room: The Layers of Laphroaig introduced whiskies from different casks that make up their range of malts. These include ex-bourbon, virgin oak (I did not know Scotch could be matured in virgin casks – I thought it always had to be ex-something!), ex-sherry, and ex-port. We were the only ones booked on this so it ended up being a private tasting.

Day 2 (Tuesday 20-February 2018): Kilchoman, Bruichladdich

Einar drove us to Kilchoman where I had a tasting of their 3 limited edition malts in the visitor centre. Kilchoman is a “farm-distillery” and they even grow some of their own barley. We bought a bottle of their “100% Islay” which is made from barley grown at the premises. Unfortunately, we completely forgot to take any pictures there. Must go back.

Driving on Islay.

We then went to Bruichladdich and booked me on the Warehouse Experience at 2pm. Similarly to Laphroaig, the driver was allowed to accompany for free. We had lunch at Port Charlotte while waiting for the event.

Bruichladdich warehouse experience

We then went by Bowmore (it was nearly 5pm) and asked about the different tours and experiences they had on the next day. Decided to do the “Bottle Your Own in the Vaults” first thing on Wednesday morning.

Day 3 (Wednesday 21-February 2018): Bowmore, Bunnahabhain, Ardnahoe, Caol Ila

Bottling a 17-year-old sherry cask beauty at Bowmore.

We then dropped by Bunnahabhain – no tours were running that day, but we were offered a few free tasters at the shop. On our way back from Bunnahabhain we took a picture at Ardnahoe (a new distillery that opens any day now).

Visiting Bunnahabhain and stopping at the soon-to-be-opened Ardnahoe.

The final distillery was Caol Ila where we went on the standard tour. The view in the stills room was just out of this world. They didn’t allow us to take pictures inside, so I took this from their website:

Caol Ila stills with a view of the Isle of Jura. Picture from: https://www.malts.com/en-row/distilleries/caol-ila/

Me outside Caol Ila with Jura in the background

What we brought back with us

In addition to whisky distilleries, we also visited a nano-brewery, and it turns out that The Botanist (a gin) is made at Bruichladdich.

Converting old WordPress posts to Hugo

This post was originally published here

Between 2014 and 2018 I published 29 posts on riinudata.wordpress.com. Today I’m converting all of those to my new website powered by blogdown-Hugo.

Step 1

Read the Migration: From WordPress chapter of the blogdown book.

Step 2

Get all your WordPress posts into one XML: WP Admin – Tools – Export.

Step 3

Install Exitwp and its dependencies (pyyaml, beautifulsoup4, html2text):

This worked on macOS¹ High Sierra – I already had Python installed.

Step 4

Working in the directory that git clone created (exitwp):

  • Put the WordPress XML in the wordpress-xml directory.
  • Run xmllint riinu_wordpress.xml; it worked the first time for me and I didn’t get any errors (so I’m not sure what “fix errors if there are any” would entail).
  • Back in the exitwp folder, run python exitwp.py
  • This created folders build/jekyll/riinudata.wordpress.com/_posts and the content looked like this:
  • Move all these into exitwp/post folder.

Step 5

  • Take a copy of https://github.com/yihui/oldblog_xml/blob/master/convert.R to clean these .markdown files up and get them ready for Hugo. I edited the first three lines, skipped the “Do not run if…” chunk as I’d already done that in Step 3, edited the authors = c(), and did not run the very last chunk (local({if (!dir.exist...})).
  • Move all of the files (now .md) into content/post of your blogdown repo. Build and voila!

Further modifications

Looks like most of my posts were converted like a charm, with nicely formatted code blocks and images. But there are a few things I noticed that I think I have to fix:

  • GitHub gists are now displayed as links; I will make those into code blocks (or embed them using a Hugo shortcode).
  • Most images show up perfectly, but some have gotten stuck in a code block, e.g. showing up as <img src="https://surgicalinformatics.org/wp-content/uploads/2018/02/rplot.png" alt="Rplot"/>. Will sort these.

Overall I feared a lot worse and am super happy with the conversion experience. Took exactly 3 h.

My name is Hildegard and I approve this message.


  1. I’m only 1.5 years late to discover that OS X has been rebranded as macOS: https://www.wired.com/2016/06/apple-os-x-dead-long-live-macos/

Hello world: blogdown loves Hugo

This post was originally published here

We are live!

I wrote my last blog post on WordPress on 20-October 2017 and promised myself that was the last time. I’ve been blogging on WordPress since 2014 and the more I used it, the more painful it got! This is most likely caused by the fact that I have been drifting further and further away from point-and-click interfaces anyway… oh, and discovering MARKDOWN.

My two rules:

So I finally got round to creating a blogdown-Hugo site:

Hugo is a website generator that is code-based (no more dragging around those pesky WordPress elements); blogdown is an R package that will help you generate Hugo, Jekyll, or Hexo sites, especially if you will be including R Markdown in it.

Steps on 12-February 2018:

  • Created a new blogdown project on RStudio, set kakawait/hugo-tranquilpeak-theme as the theme
  • Edited my name, email etc. information in the config.toml.
  • Absolutely could not figure out how to change coverImage = "cover.jpg". Tried putting my cover image in /static/img/, /static/_images/, source/assets/images and tried linking to these any way I could think of (e.g. with and without the first /) but it just wasn’t happening. Ended up putting my picture in /themes/hugo-tranquilpeak-theme/static/images/ and blatantly naming it cover.jpg (replacing the theme’s default photo). This worked.
  • Pushed the whole project to https://github.com/riinuots/hugo-tranquil-website and then created a submodule in https://github.com/riinuots/hugo-tranquil-website/tree/master/themes so when the theme gets updated I can pull the new version. This is not essential. I need to figure out the cover image issue though.
  • Set up Netlify as in https://bookdown.org/yihui/blogdown/netlify.html which was superquick but then spent some time troubleshooting why my theme wasn’t displaying properly. Turns out that for this theme, it is essential to set the baseURL = "https://riinu.netlify.com/" (in config.toml).
  • Created this Hello World post, which seemed to work fine at first. I then added an unquoted colon to the title, broke everything, and spent 2 h trying to figure out what went wrong. These were the errors I was getting and that no one else in the world (Google) seemed to have reported:
    • edits to the new post not happening, but the site isn’t broken either
    • clean_site() errors with:

rmarkdown::clean_site() Error in file.exists(files) : invalid 'file' argument

  • after spending 2 h on Google/github/rstudio/rmarkdown, the blogdown book, the blogdown repo, and the Hugo documentation, I finally came across hugo -v (v for verbose). Noticed

yaml: line 1: mapping values are not allowed in this context

(which I had indeed seen before at some point during these 2 hours). Anyway, seeing it for the second time clicked – YAML thinks I’m mapping something that shouldn’t be mapped (mapping usually means defining variables). My title was (on the second line of the markdown file, really) title: Hello world: blogdown loves Hugo, but if using a colon you need quotes: title: "Hello world: blogdown loves Hugo".

Still better than WordPress.

Next steps:

  • Set up Disqus (comments).
  • Bring over old posts from https://riinudata.wordpress.com
  • Write all the new posts ideas I’ve been gathering over the past 4 months.

Your first Shiny app

This post was originally published here

What is Shiny?

Shiny is an R package (install.packages("shiny")) for making your outputs interactive. Furthermore, Shiny creates web apps meaning your work can be shared online with people who don’t use R. In other words: with Shiny, R people can make websites without ever learning Javascript etc.

I am completely obsessed with Shiny and these days I end up presenting most of my work in a Shiny app.

If it’s not worth putting in a Shiny app it’s not worth doing.

Your first Shiny app

Getting started with Shiny is actually a lot easier than a lot of people make it out to be. So I created a very short (9 slides) presentation outlining my 5-step programme for your first Shiny app.

This is the app: https://riinu.shinyapps.io/shiny_testing/

This is the presentation: http://rpubs.com/riinu/shiny

And here are the steps (also included in the presentation):

STEP 0: install.packages("shiny"). Use RStudio.

STEP 1: Create a script called app.R using this skeleton:

https://gist.github.com/riinuots/c6ec0691633df2929adc7de90bdbc792
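
If you can’t open the gist, the skeleton is essentially the standard Shiny template (a sketch; the actual gist may differ in detail):

library(shiny)

ui <- fluidPage(
  # inputs (e.g. sliderInput) and outputs (e.g. plotOutput) go here
)

server <- function(input, output) {
  # renderPlot() etc. goes here
}

shinyApp(ui = ui, server = server)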

STEP 2: Copy your plot code into the renderPlot function.

STEP 3: Add a sliderInput to your User Interface (ui). A slider is just one of the many Shiny widgets you could be using: https://shiny.rstudio.com/gallery/widget-gallery.html

STEP 4: Tell your Server you wish the dplyr::filter() to use the value from the slider. All inputs from the User Interface (ui) are stored in input$variable_name: replace the 2007 with input$year.

STEP 5 (optional): Add animate = TRUE.
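
Putting steps 1-5 together, a sketch of a full app might look like this (assuming gapminder-style data from the gapminder package, since the slider filters on year and the original default was 2007; the actual app in the link above may differ):

library(shiny)
library(tidyverse)
library(gapminder)

ui <- fluidPage(
  sliderInput("year", "Year:", min = 1952, max = 2007,
              value = 2007, step = 5, sep = "",
              animate = TRUE),                 # STEP 3 and STEP 5
  plotOutput("my_plot")
)

server <- function(input, output) {
  output$my_plot <- renderPlot({               # STEP 2
    gapminder %>% 
      filter(year == input$year) %>%           # STEP 4
      ggplot(aes(gdpPercap, lifeExp, size = pop, colour = continent)) +
      geom_point(alpha = 0.7) +
      scale_x_log10()
  })
}

shinyApp(ui = ui, server = server)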

Press Control+Shift+Enter or the “Run App” button. You now have a Shiny app running on your computer. To deploy it to the internet, e.g. like I’ve done in the link above, see this.