
Saving leaflet maps

Leaflet is a great way to create interactive maps in R, with the background map fetched from the internet. In RStudio at least, the map is opened in the viewer pane and is not a plot in the usual sense (i.e. it does not open in a graphics device). That means that you can’t save it via the usual method of e.g.

png("x.png")
leaflet() %>% addTiles()
dev.off()

It just produces an empty plot. fdetsch shows how to save leaflet figures in this StackOverflow answer, which uses the mapview package.

m <- leaflet() %>% addTiles()
mapshot(m, file = "world.png")
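If mapshot() fails to render, the usual culprit is a missing PhantomJS, which the webshot package (the backend mapview used at the time of writing) relies on; a one-off setup sketch, assuming that backend:

```r
# one-off setup: mapshot() renders via webshot, which needs PhantomJS
install.packages(c("mapview", "webshot"))
webshot::install_phantomjs()

library(leaflet)
library(mapview)

m <- leaflet() %>% addTiles()
mapshot(m, file = "world.png")  # writes a static image of the map
```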

Hope that helps someone 🙂

Update:

An answer to this StackOverflow question shows how to turn off zooming and scrolling in leaflet maps…might be useful…

Update 2:

An answer to this StackOverflow question shows how to use different Tiles (background maps)… reliefs for instance…

The various options are listed here.




openrouteservice – geodata!

openrouteservice provides another way to get geodata into R. It offers a set of APIs, and an R package for communicating with them is available from GitHub. I’ve just been playing around with the examples on this page, with the thought of using it for a project (more on that later if I get anywhere with it).

Anyways…onto the code…which is primarily a modification from the examples page I mentioned earlier (see that page for more examples).


Load some libraries
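Something like the following, assuming the packages used later in this post (the GitHub repo name is an assumption):

```r
# the openrouteservice client lives on GitHub (assumed repo name below);
# leaflet is used for plotting and geojsonio for format conversion
# remotes::install_github("GIScience/openrouteservice-r")
library(openrouteservice)
library(leaflet)
library(geojsonio)
```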


Set the API key
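For example, via the package’s ors_api_key() helper (the key below is a placeholder; sign up at openrouteservice.org to get one):

```r
# stores the key for the session; "YOUR-API-KEY" is a placeholder
ors_api_key("YOUR-API-KEY")
```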


Define the locations of interest and send a request to the API asking for the region that is accessible within a 15 minute drive of each coordinate.

coordinates <- list(c(8.55, 47.23424), c(8.34234, 47.23424), c(8.44, 47.4))

x <- ors_isochrones(coordinates,
                    range = 60*15,          # maximum travel time (15 mins)
                    interval = 60*15,       # band width in seconds (15 mins)
                    intersections = FALSE)  # no intersection of polygons

By changing the interval to, say, 60*5, three regions per coordinate are returned representing regions accessible within 5, 10 and 15 minutes drive. Changing the intersections argument would produce a separate polygon for any overlapping regions. The information of the intersected polygons is limited though, so it might be better to do the intersection with other tools afterwards.
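For example, asking for 5 minute bands instead (same call as above, smaller interval, still capped at 15 minutes):

```r
# three bands per coordinate: 0-5, 5-10 and 10-15 minute drives
x5 <- ors_isochrones(coordinates,
                     range = 60*15,
                     interval = 60*5,
                     intersections = FALSE)
```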

The results can be plotted with leaflet…

leaflet() %>%
 addTiles() %>%
 addGeoJSON(x)

The blue areas are the three regions accessible within 15 minutes. A few overlapping regions are evident, each of which would be saved to a unique polygon had we set intersections to TRUE.


The results come back from the API in GeoJSON format and are given a class (in this case ors_isochrones) that few other packages recognize, so you might want to convert the object to an sp object, giving access to all of the tools for that format. That’s easy enough to do via the geojsonio package…

class(x) <- "geo_list"
y <- geojson_sp(x)
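The converted object should then work with the usual sp tooling (assuming the GeoJSON contained polygons, y will be a SpatialPolygonsDataFrame):

```r
# inspect and plot with the standard sp methods
class(y)
summary(y)
plot(y)
```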


You can also derive coordinates from (partial) addresses. Here is an example for a region of Bern in Switzerland, using the postcode.

coord <- ors_geocode("3012, Switzerland")

This returned ten hits; the first was the one I was after, with the rest being nearby streets and buildings…

unlist(lapply(coord$features, function(x) x$properties$label))
[1] "3012, Bern, Switzerland"                                
 [2] "A1, Bern, Switzerland"                                  
 [3] "Bremgartenstrasse, Bern, Switzerland"                   
 [4] "131 Bremgartenstrasse, Bern, Switzerland"               
 [5] "Briefeinwurf Bern, Gymnasium Neufeld, Bern, Switzerland"
 [6] "119 Bremgartenstrasse, Bern, Switzerland"               
 [7] "Gym Neufeld, Bern, Switzerland"                         
 [8] "131b Bremgartenstrasse, Bern, Switzerland"              
 [9] "Gebäude Nord, Bern, Switzerland"                        
[10] "113 Bremgartenstrasse, Bern, Switzerland"

The opposite (coordinate to address) is also possible, again returning multiple hits…

address <- ors_geocode(location = c(7.425898, 46.961598))
unlist(lapply(address$features, function(x) x$properties$label))
[1] "3012, Bern, Switzerland" 
[2] "A1, Bern, Switzerland" 
[3] "Bremgartenstrasse, Bern, Switzerland" 
[4] "131 Bremgartenstrasse, Bern, Switzerland" 
[5] "Briefeinwurf Bern, Gymnasium Neufeld, Bern, Switzerland" 
[6] "119 Bremgartenstrasse, Bern, Switzerland" 
[7] "Gym Neufeld, Bern, Switzerland" 
[8] "131b Bremgartenstrasse, Bern, Switzerland" 
[9] "Gebäude Nord, Bern, Switzerland" 
[10] "113 Bremgartenstrasse, Bern, Switzerland" 

Other options are distances/times/directions between points and places of interest (POI) near a point or within a region.

Hope that helps someone! Enjoy!



grconvertX and grconvertY

These two functions are unbelievably useful for positioning graphical elements (text, axes, labels, …) in base R graphics. They convert coordinates between the various coordinate systems. For instance, you can convert a user coordinate (say 5, where x ranges from 0 to 200) to normalized device coordinates (the proportional distance across the device) and vice versa. Very useful for positioning panel labels…

grconvertX(.1, "npc", "user")

returns the x coordinate that is 10% across the plot region. So with that and the equivalent y function, you can place your labels in exactly the same position on every panel and plot… e.g.

plot(rnorm(20), rnorm(20))
text(grconvertX(.1, "npc", "user"), grconvertY(.9, "npc", "user"), "a)")

To see the difference the from argument makes, run the following…

text(grconvertX(.1, "npc", "user"), grconvertY(.9, "npc", "user"), "npc")
text(grconvertX(.1, "ndc", "user"), grconvertY(.9, "ndc", "user"), "ndc", xpd = NA) # xpd = NA allows drawing outside the plot region, so this lands near the outer margin

It’s also very useful for positioning axis labels from boxplot when the labels are too long…
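A sketch of that boxplot trick, with made-up long labels (the data set is just for illustration):

```r
# suppress the default x axis, then place rotated labels just below the
# plot region; grconvertY() finds the y position in user coordinates
boxplot(count ~ spray, data = InsectSprays, xaxt = "n", xlab = "")
labs <- c("very long label A", "very long label B", "very long label C",
          "very long label D", "very long label E", "very long label F")
text(x = 1:6, y = grconvertY(-0.05, "npc", "user"),
     labels = labs, srt = 45, adj = 1, xpd = NA)
```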

Hope that’s helpful to someone… 🙂

Flow charts in R

Flow charts are an important part of a clinical trial report. Making them can be a pain though. One good way to do it seems to be with the grid and Gmisc packages in R. X and Y coordinates can be designated based on the center of the boxes in normalized device coordinates (proportions of the device space; 0.5 is the middle), which saves a lot of messing around with corners of boxes and arrows.

A very basic flow chart, based very roughly on the CONSORT version, can be generated as follows…


library(grid)
library(Gmisc)

# set some parameters to use repeatedly
leftx <- .25
midx <- .5
rightx <- .75
width <- .4
gp <- gpar(fill = "lightgrey")
# create boxes
(total <- boxGrob("Total\n N = NNN", 
 x=midx, y=.9, box_gp = gp, width = width))

(rando <- boxGrob("Randomized\n N = NNN", 
 x=midx, y=.75, box_gp = gp, width = width))
# connect boxes like this
connectGrob(total, rando, "v")

(inel <- boxGrob("Ineligible\n N = NNN", 
 x=rightx, y=.825, box_gp = gp, width = .25, height = .05))

connectGrob(total, inel, "-")

(g1 <- boxGrob("Allocated to Group 1\n N = NNN", 
 x=leftx, y=.5, box_gp = gp, width = width))
(g2 <- boxGrob("Allocated to Group 2\n N = NNN", 
 x=rightx, y=.5, box_gp = gp, width = width))

connectGrob(rando, g1, "N")
connectGrob(rando, g2, "N")

(g11 <- boxGrob("Followed up\n N = NNN", 
 x=leftx, y=.3, box_gp = gp, width = width))
(g21 <- boxGrob("Followed up\n N = NNN", 
 x=rightx, y=.3, box_gp = gp, width = width))

connectGrob(g1, g11, "N")
connectGrob(g2, g21, "N")

(g12 <- boxGrob("Completed\n N = NNN", 
 x=leftx, y=.1, box_gp = gp, width = width))
(g22 <- boxGrob("Completed\n N = NNN", 
 x=rightx, y=.1, box_gp = gp, width = width))

connectGrob(g11, g12, "N")
connectGrob(g21, g22, "N")

Sections of code to make the boxes are wrapped in parentheses so that the boxes print (i.e. draw) immediately. The code creates something like the following figure:


For detailed info, see the Gmisc vignette. This code is also on GitHub.



Count models in JAGS

Looks like I’ll be diving into some Bayesian analyses using JAGS. This post is primarily intended as a collection of links to [potentially] useful information, but also includes a few initial thoughts (I might update it occasionally with new links).

In terms of R packages, a very brief play suggests that R2jags is more user friendly than rjags (it seems to be easier to get hold of the output, chains etc.). Both packages seem to lack good vignettes, and rjags lacks good examples too (although there are quite a few examples for both on the internet).

A tip from Zuur & Ieno 2016: use a model matrix and assign priors based on the number of columns of the model matrix. Saves having to change the model description when adding/removing variables (how to include different priors for different betas though…?). E.g. (modified from Zuur & Ieno 2016)

X <- model.matrix(~ x1 + x2 + x3)
k <- ncol(X)
mod <- "
model {
  # priors (stochastic nodes use ~, not <-)
  for (i in 1:k) {
    b[i] ~ dnorm(0, 0.1)
  }
  # likelihood
  for (i in 1:N) {
    Y[i] ~ dpois(mu[i])
    log(mu[i]) <- inprod(b[], X[i,])
  }
}
"

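A hedged sketch of fitting that model with R2jags (the response Y and the data list contents are assumptions; jags() reads the model from a file, so the model string is written to a temporary one first):

```r
library(R2jags)

# write the model string to a temporary file for jags()
modfile <- tempfile(fileext = ".txt")
writeLines(mod, modfile)

# Y is the assumed response vector; X, k as defined above
jagsdata <- list(Y = Y, X = X, N = nrow(X), k = ncol(X))
fit <- jags(data = jagsdata,
            parameters.to.save = "b",
            model.file = modfile,
            n.chains = 3, n.iter = 5000)
print(fit)  # posterior summaries and Rhat diagnostics
```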
General links:

Guide to JAGS through R, goes into quite a bit of detail on installing, diagnostics, even running JAGS from the terminal (as well as two or three of the R packages for interfacing with JAGS).

Bayesian regression example (wiki style).

Advanced BUGS topics (includes the “zero-trick” to fit any distribution in JAGS)

Newest R2JAGS questions on StackOverflow


Count models:

A post with a couple of examples for JAGS count models (including offsets)

A post with a question regarding autocorrelation of parameters in negative binomial models and the answer on how to get around it.

A report using some data on owl feeding behaviour using 3 or 4 methods for ZIP/ZINBs. (This is not JAGS stuff, but interesting for ZIPs/ZINBs)



A book website with lots of code (Bayesian Methods in Health Economics by Gianluca Baio). But see this R-bloggers post, which mentions that an update broke some of the code and details how to fix it. No idea if the book is any good, but the code could be useful.

As with their other books*, HighStat’s “beginners guide” to ZIP models is very well written and easy to follow. Because mixed effects zero inflated methods are not implemented in standard software, Zuur and Ieno use JAGS quite a lot. Chapter 12 gives two methods (and code) for assessing overdispersion.

Zuur & Ieno (2016) A Beginner’s Guide to Zero-Inflated Models with R.


*well, the only other one I’ve read (the fantastic “penguin book” – Mixed Effects Models and Extensions in Ecology with R)

Emails from R

There are a few packages for sending email directly from R, but I work in a place where none of these work due to strict network settings. To at least partially circumvent this, here’s some code to produce a PowerShell script to send email(s) via Outlook. The PowerShell script can then be run either by a shell call (again, not possible in my workplace) or by right clicking the file and selecting run with PowerShell.


# Addresses 
add <- c("", "")
subject <- "Testing"

# construct message
# opening
start <- 'Hi,
how are you?
'

# main content
body <- '
sent almost exclusively from R
'

# signature
sig <- '
And this is my signature
'

# paste components together
Body <- paste(start, body, sig)

# construct PowerShell code (*.ps1)
email <- function(recip, subject, Body, filename, attachment = NULL, append = FALSE){
  file <- paste0(filename, ".ps1")
  write('$Outlook = New-Object -ComObject Outlook.Application', file, append = append)
  write('$Mail = $Outlook.CreateItem(0)', file, append = TRUE)
  write(paste0('$Mail.To = "', recip, '"'), file, append = TRUE)
  write(paste0('$Mail.Subject = "', subject, '"'), file, append = TRUE)
  write(paste0('$Mail.Body = "', Body, '"'), file, append = TRUE)
  if(!is.null(attachment)){
    write(paste0('$File = "', attachment, '"'), file, append = TRUE)
    write('$Mail.Attachments.Add($File)', file, append = TRUE)
  }
  write('$Mail.Send()', file, append = TRUE)
  if(append) write('', file, append = TRUE)
}

for(i in 1:length(add)){
  att <- file.path(getwd(), "blabla.txt")
  email(add[i], subject, Body, paste0("email", i), attachment = att) # with attachment
  # email(add[i], subject, Body, paste0("email", i))                 # without attachment
  # email(add[i], subject, Body, "emails", append = TRUE)            # multiple emails in a single PS file
}

Now you can go and run the PowerShell script from within Windows Explorer.


Update: The clue to solving this came from here.

“Pretty” table columns 2

I wrote a few functions for making some common column formats and posted them here. I’ve since added them to a GitHub gist so that you can download them straight into R using


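Presumably something along these lines (the gist ID below is a placeholder, not the real one; devtools::source_gist() is one way to pull a gist straight into the session):

```r
library(devtools)
# "GIST_ID" is a placeholder for the actual gist ID linked in the post
source_gist("GIST_ID")
```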
I changed a couple of the function names though and added an argument to specify_decimal so that you can provide a minimum value. The function arguments are now:

specify_decimal(x, k, min)
npropconv(n, perc, dp.n, dp.perc)
valciconv(val, CIlower, CIupper, dp.val, dp.CI)
minval(values, min)

minval is the new name for minp.
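A hedged usage sketch (the exact output strings are defined by the gist; the calls here just illustrate the argument order listed above, and the values are made up):

```r
# format to k decimal places; min presumably floors the displayed value
specify_decimal(3.14159, k = 2)
specify_decimal(0.0004, k = 3, min = 0.001)

# "n (%)"-style column from a count and a percentage
npropconv(n = 42, perc = 21.3, dp.n = 0, dp.perc = 1)

# "estimate (lower, upper)"-style column from a value and its CI
valciconv(val = 1.23, CIlower = 0.98, CIupper = 1.55, dp.val = 2, dp.CI = 2)
```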

Hope that’s useful…