```
png("x.png")
leaflet() %>% addTiles()
dev.off()
```

It just produces an empty plot. fdetsch gives the answer of how to save leaflet figures in this StackOverflow answer, which uses the mapview package.

```
library(leaflet)
library(mapview)

m <- leaflet() %>% addTiles()
mapshot(m, file = "world.png")
```

Hope that helps someone

Update ——

An answer to this StackOverflow question shows how to turn off zooming and scrolling in leaflet maps…might be useful…


Anyways…onto the code…which is primarily a modification from the examples page I mentioned earlier (see that page for more examples).

devtools::install_github("GIScience/openrouteservice-r")

Load some libraries

```
library(openrouteservice)
library(leaflet)
```

Set the API key

ors_api_key("your-key-here")

Define the locations of interest and send the request to the API, asking for the region that is accessible within a 15 minute drive of each coordinate.

```
coordinates <- list(c(8.55, 47.23424), c(8.34234, 47.23424), c(8.44, 47.4))

x <- ors_isochrones(coordinates,
                    range = 60*15,          # maximum time to travel (15 mins)
                    interval = 60*15,       # results in bands of 60*15 seconds (15 mins)
                    intersections = FALSE)  # no intersection of polygons
```

By changing the interval to, say, 60*5, three regions per coordinate are returned, representing the regions accessible within 5, 10 and 15 minutes drive. Changing the intersections argument would produce a separate polygon for any overlapping regions. The information returned for the intersected polygons is limited though, so it might be better to do the intersection with other tools afterwards.
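As a quick sanity check on those settings, the number of bands returned per coordinate is simply range divided by interval:

```r
range_s    <- 60 * 15  # maximum travel time: 15 minutes, in seconds
interval_s <- 60 * 5   # band width: 5 minutes
n_bands <- range_s / interval_s
n_bands  # 3 bands per coordinate: 5, 10 and 15 minutes
```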

The results can be plotted with leaflet…

```
leaflet() %>%
  addTiles() %>%
  addGeoJSON(x) %>%
  fitBBox(x$bbox)
```

The blue regions are the three regions accessible within 15 minutes. A few overlapping regions are evident, each of which would be saved to a unique polygon had we set intersections to TRUE.

The results come down from the API in a GeoJSON format and are given a class (in this case, ors_isochrones) that isn't recognized by many other packages, so you might want to convert them to an sp object, giving access to all of the tools for those formats. That's easy enough to do via the geojsonio package…

```
library(geojsonio)
library(sp)

class(x) <- "geo_list"
y <- geojson_sp(x)
plot(y)
```

You can also derive coordinates from (partial) addresses. Here is an example for a region of Bern in Switzerland, using the postcode.

coord <- ors_geocode("3012, Switzerland")

This resulted in 10 hits, the first of which was the postcode region itself…the others were more specific addresses and landmarks within it…

```
unlist(lapply(coord$features, function(x) x$properties$label))
 [1] "3012, Bern, Switzerland"
 [2] "A1, Bern, Switzerland"
 [3] "Bremgartenstrasse, Bern, Switzerland"
 [4] "131 Bremgartenstrasse, Bern, Switzerland"
 [5] "Briefeinwurf Bern, Gymnasium Neufeld, Bern, Switzerland"
 [6] "119 Bremgartenstrasse, Bern, Switzerland"
 [7] "Gym Neufeld, Bern, Switzerland"
 [8] "131b Bremgartenstrasse, Bern, Switzerland"
 [9] "Gebäude Nord, Bern, Switzerland"
[10] "113 Bremgartenstrasse, Bern, Switzerland"
```

The opposite (coordinate to address) is also possible, again returning multiple hits…

```
address <- ors_geocode(location = c(7.425898, 46.961598))
unlist(lapply(address$features, function(x) x$properties$label))
[1] "3012, Bern, Switzerland"
[2] "A1, Bern, Switzerland"
[3] "Bremgartenstrasse, Bern, Switzerland"
[4] "131 Bremgartenstrasse, Bern, Switzerland"
[5] "Briefeinwurf Bern, Gymnasium Neufeld, Bern, Switzerland"
[6] "119 Bremgartenstrasse, Bern, Switzerland"
[7] "Gym Neufeld, Bern, Switzerland"
[8] "131b Bremgartenstrasse, Bern, Switzerland"
[9] "Gebäude Nord, Bern, Switzerland"
[10] "113 Bremgartenstrasse, Bern, Switzerland"
```

Other options are distances/times/directions between points and places of interest (POI) near a point or within a region.

Hope that helps someone! Enjoy!


grconvertX(.1, "npc", "user")

returns the x coordinate that is 10% across the plot region. So with that and the equivalent y function, you can place your labels in exactly the same position on every panel and plot… e.g.

```
plot(rnorm(20), rnorm(20))
text(grconvertX(.1, "npc", "user"), grconvertY(.9, "npc", "user"), "a)")
```
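A way to convince yourself of what the conversion returns is to fix the user coordinates exactly (a minimal sketch; `xaxs = "i"` and `yaxs = "i"` remove the usual 4% axis padding, so npc and user coordinates coincide):

```r
pdf(NULL)                               # draw to a null device
plot(0:1, 0:1, xaxs = "i", yaxs = "i")  # user coordinates run exactly 0..1
x_user <- grconvertX(0.25, "npc", "user")
dev.off()
x_user  # 0.25, i.e. 25% across the plot region
```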

To show the difference between the from coordinate systems ("npc" vs "ndc"), run the following…

```
text(grconvertX(.1, "npc", "user"), grconvertY(.9, "npc", "user"), "npc")
text(grconvertX(.1, "ndc", "user"), grconvertY(.9, "ndc", "user"), "ndc",
     xpd = NA)  # xpd = NA allows drawing outside the plot region; the text will be near the outer margin
```

It’s also very useful for positioning axis labels from boxplot when the labels are too long…

Hope that’s helpful to someone…

A very basic flow chart, based very roughly on the CONSORT version, can be generated as follows…

```
library(grid)
library(Gmisc)

grid.newpage()

# set some parameters to use repeatedly
leftx <- .25
midx <- .5
rightx <- .75
width <- .4
gp <- gpar(fill = "lightgrey")

# create boxes
(total <- boxGrob("Total\n N = NNN",
                  x = midx, y = .9, box_gp = gp, width = width))
(rando <- boxGrob("Randomized\n N = NNN",
                  x = midx, y = .75, box_gp = gp, width = width))
# connect boxes like this
connectGrob(total, rando, "v")

(inel <- boxGrob("Ineligible\n N = NNN",
                 x = rightx, y = .825, box_gp = gp, width = .25, height = .05))
connectGrob(total, inel, "-")

(g1 <- boxGrob("Allocated to Group 1\n N = NNN",
               x = leftx, y = .5, box_gp = gp, width = width))
(g2 <- boxGrob("Allocated to Group 2\n N = NNN",
               x = rightx, y = .5, box_gp = gp, width = width))
connectGrob(rando, g1, "N")
connectGrob(rando, g2, "N")

(g11 <- boxGrob("Followed up\n N = NNN",
                x = leftx, y = .3, box_gp = gp, width = width))
(g21 <- boxGrob("Followed up\n N = NNN",
                x = rightx, y = .3, box_gp = gp, width = width))
connectGrob(g1, g11, "N")
connectGrob(g2, g21, "N")

(g12 <- boxGrob("Completed\n N = NNN",
                x = leftx, y = .1, box_gp = gp, width = width))
(g22 <- boxGrob("Completed\n N = NNN",
                x = rightx, y = .1, box_gp = gp, width = width))
connectGrob(g11, g12, "N")
connectGrob(g21, g22, "N")
```

Sections of code to make the boxes are wrapped in brackets to print them immediately. The code creates something like the following figure:

For detailed info, see the Gmisc vignette. This code is also on github.


In terms of R packages, a very brief play suggests that R2jags is more user friendly than rjags (it seems to be easier to get hold of the output, chains, etc.). Both packages seem to be missing good vignettes, and the latter also lacks good examples (although there are quite a few examples for both on the internet).

A tip from Zuur & Ieno 2016: use a model matrix and assign priors based on the number of columns of the model matrix. Saves having to change the model description when adding/removing variables (how to include different priors for different betas though…?). E.g. (modified from Zuur & Ieno 2016)
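The base R part of that tip is just model.matrix: the number of priors needed falls out of the number of columns, so adding or dropping a covariate in the formula requires no change to the model code (a minimal sketch with made-up data):

```r
set.seed(1)
dat <- data.frame(x1 = rnorm(5), x2 = rnorm(5), x3 = rnorm(5))
X <- model.matrix(~ x1 + x2 + x3, data = dat)
k <- ncol(X)
k  # 4: the intercept plus three slopes, so four priors
```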

```
X <- model.matrix(~ x1 + x2 + x3)
k <- ncol(X)
mod <- "
model {
  # priors
  for (i in 1:k) {
    b[i] ~ dnorm(0, 0.1)
  }
  # likelihood
  for (i in 1:N) {
    Y[i] ~ dpois(mu[i])
    log(mu[i]) <- inprod(b[], X[i,])
  }
  ...
"
```

Guide to JAGS through R goes into quite a bit of detail on installing, diagnostics, and even running JAGS from the terminal (as well as two or three of the R packages for interfacing with JAGS).

Bayesian regression example (wiki style).

Advanced BUGS topics (includes the “zero-trick” to fit any distribution in JAGS)

Newest R2JAGS questions on StackOverflow

A post with a couple of examples for JAGS count models (including offsets)

A post with a question regarding autocorrelation of parameters in negative binomial models and the answer on how to get around it.

A report using some data on owl feeding behaviour using 3 or 4 methods for ZIP/ZINBs. (This is not JAGS stuff, but interesting for ZIPs/ZINBs)

A book website with lots of code (Bayesian methods in health economics by Gianluca Baio). But see this R-bloggers post which mentions that an update broke some of the code, and details how to fix it. No idea if the book is any good, but the code should/could be useful

As with their other books*, HighStat’s “beginners guide” to ZIP models is very well written and easy to follow. Because mixed effects zero inflated methods are not implemented in standard software, Zuur and Ieno use JAGS quite a lot. Chapter 12 gives two methods (and code) for assessing overdispersion.

Zuur & Ieno (2016) A beginners guide to zero inflated models using R.

*well, the only other one I’ve read (the fantastic “penguin book” – Mixed Effects Models and Extensions in Ecology with R)

]]>

```
# Addresses
add <- c("xxx@yyy.cc", "aaa@bbb.cc")
subject <- "Testing"

# construct message
# opening
start <- 'Hi, how are you? '
# main content
body <- ' sent almost exclusively from R '
# signature
sig <- ' And this is my signature '
# paste components together
Body <- paste(start, body, sig)

# construct PowerShell code (*.ps1)
email <- function(recip, subject, Body, filename, attachment = NULL, append = FALSE){
  file <- paste0(filename, ".ps1")
  write('$Outlook = New-Object -ComObject Outlook.Application', file, append = append)
  write('$Mail = $Outlook.CreateItem(0)', file, append = TRUE)
  write(paste0('$Mail.To = "', recip, '"'), file, append = TRUE)
  write(paste0('$Mail.Subject = "', subject, '"'), file, append = TRUE)
  write(paste0('$Mail.Body = "', Body, '"'), file, append = TRUE)
  if(!is.null(attachment)){
    write(paste0('$File = "', attachment, '"'), file, append = TRUE)
    write('$Mail.Attachments.Add($File)', file, append = TRUE)
  }
  write('$Mail.Send()', file, append = TRUE)
  if(append) write('', file, append = TRUE)
}

for(i in seq_along(add)){
  fname <- paste0("email", i)  # becomes email1.ps1, email2.ps1, ...
  att <- file.path(getwd(), "blabla.txt")
  email(add[i], subject, Body, fname, attachment = att)    # with attachment
  # email(add[i], subject, Body, fname)                    # without attachment
  # email(add[i], subject, Body, "emails", append = TRUE)  # multiple emails in a single PS file
}
```
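The quoting is the fiddly part of the function above: single quotes keep the PowerShell `$Mail` literal in R, while paste0 splices the R value inside the double quotes that PowerShell will see. A minimal check of the pattern:

```r
recip <- "xxx@yyy.cc"
line <- paste0('$Mail.To = "', recip, '"')
line  # $Mail.To = "xxx@yyy.cc"
```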

Now you can go and run the PowerShell script from within Windows Explorer.

UPDATE: The clue to solving this came from here.

source("https://gist.githubusercontent.com/aghaynes/7fd9d1c52b1cc4cb566a/raw/")

I changed a couple of the function names though and added an argument to specify_decimal so that you can provide a minimum value. The function arguments are now:

specify_decimal(x, k, min)

npropconv(n, perc, dp.n, dp.perc)

valciconv(val, CIlower, CIupper, dp.val, dp.CI)

minval(values, min)

minval is the new name for minp.

Hope that's useful…


```
specify_decimal <- function(x, dp){
  out <- format(round(x, dp), nsmall = dp)
  out <- gsub(" ", "", out)
  out
}

npropconv <- function(n, perc, dp.n = 0, dp.perc = 1){
  n <- specify_decimal(n, dp.n)
  perc <- specify_decimal(perc, dp.perc)
  OUT <- paste(n, " (", perc, ")", sep = "")
  OUT
}
```

```
valciconv <- function(val, CIlower, CIupper, dp.val = 2, dp.CI = 2){
  val <- specify_decimal(val, dp.val)
  CIlower <- specify_decimal(CIlower, dp.CI)
  CIupper <- specify_decimal(CIupper, dp.CI)
  OUT <- paste(val, " (", CIlower, " - ", CIupper, ")", sep = "")
  OUT
}
```

```
minp <- function(pvalues, min = 0.001){
  pv <- as.numeric(pvalues)
  for(i in 1:length(pv)){
    if(pv[i] < min) pvalues[i] <- paste("<", min)
  }
  pvalues
}
```

And here's what they do…

```
> specify_decimal(x=c(0.01, 0.000001), dp=3)
[1] "0.010" "0.000"
> npropconv(n=7, perc=5, dp.n=0, dp.perc=1)
[1] "7 (5.0)"
> valciconv(val=7, CIlower=3, CIupper=9, dp.val=2, dp.CI=2)
[1] "7.00 (3.00 - 9.00)"
> minp(0.00002, min=0.05)
[1] "< 0.05"
> minp(0.00002, min=0.001)
[1] "< 0.001"
```

Any arguments beginning with dp specify the number of decimal places for the relevant parameter (i.e. dp.n sets the decimal places for the n parameter). It would make sense to follow specify_decimal with a call to minp to deal with the 0s:

```
> decim <- specify_decimal(c(0.01, 0.000001), 3)
> minp(decim, 0.001)
[1] "0.010"   "< 0.001"
```

Incidentally, although I had p values in mind for minp, it can of course be used with any value at all!

Hope someone finds them handy!!!

UPDATE: These confidence intervals, together with many more, have actually been programmed in the binom package (binom.confint). Use them instead. For Stata users, CIs for proportions are available with the ci program.

Firstly I give you the Simple Asymptotic Method:

```
simpasym <- function(n, p, z = 1.96, cc = TRUE){
  out <- list()
  if(cc){
    out$lb <- p - z*sqrt((p*(1-p))/n) - 0.5/n
    out$ub <- p + z*sqrt((p*(1-p))/n) + 0.5/n
  } else {
    out$lb <- p - z*sqrt((p*(1-p))/n)
    out$ub <- p + z*sqrt((p*(1-p))/n)
  }
  out
}
```

which can be used thusly….

```
simpasym(n=30, p=0.3, z=1.96, cc=TRUE)
$lb
[1] 0.119348
$ub
[1] 0.480652
```

Where n is the sample size, p is the proportion, z is the z value for the % interval (i.e. 1.96 provides the 95% CI) and cc is whether a continuity correction should be applied. The returned results are the lower boundary ($lb) and the upper boundary ($ub).
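Note that z is just a normal quantile, so other interval widths are easy to obtain; for a two-sided 90% interval, for example:

```r
z90 <- qnorm(0.95)  # a two-sided 90% interval leaves 5% in each tail
round(z90, 3)       # 1.645
```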

The second method is the Score method and is defined as follows:

```
scoreint <- function(n, p, z = 1.96, cc = TRUE){
  out <- list()
  q <- 1 - p
  zsq <- z^2
  denom <- 2*(n + zsq)
  if(cc){
    numl <- (2*n*p) + zsq - 1 - (z*sqrt(zsq - 2 - (1/n) + 4*p*((n*q) + 1)))
    numu <- (2*n*p) + zsq + 1 + (z*sqrt(zsq + 2 - (1/n) + 4*p*((n*q) - 1)))
    out$lb <- numl/denom
    out$ub <- numu/denom
    if(p == 1) out$ub <- 1
    if(p == 0) out$lb <- 0
  } else {
    out$lb <- ((2*n*p) + zsq - (z*sqrt(zsq + (4*n*p*q))))/denom
    out$ub <- ((2*n*p) + zsq + (z*sqrt(zsq + (4*n*p*q))))/denom
  }
  out
}
```

and is used in the same manner as simpasym…

```
scoreint(n=30, p=0.3, z=1.96, cc=TRUE)
$lb
[1] 0.1541262
$ub
[1] 0.4955824
```
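As a cross-check, base R's prop.test inverts the same score test (with continuity correction by default), so with 9 successes out of 30 (i.e. p = 0.3) it reproduces essentially the interval above; it differs only in using qnorm(0.975) rather than a rounded 1.96:

```r
# Wilson score interval with continuity correction, via base R
ci <- prop.test(x = 9, n = 30)$conf.int
round(as.numeric(ci), 4)  # approximately 0.1541 and 0.4956
```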

These formulae (and a couple of others) are discussed in Newcombe, R. G. (1998) who suggests that the score method should be more frequently available in statistical software packages.

Hope that helps someone!!!

Reference:

Newcombe, R. G. (1998) Two-sided confidence intervals for the single proportion: comparison of seven methods. Statist. Med., 17: 857-872. doi: 10.1002/(SICI)1097-0258(19980430)17:8<857::AID-SIM777>3.0.C


```
\begin{figure}
  \includegraphics[trim= left bottom right top, clip=true]{file.jpeg}
\end{figure}
```
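For concreteness, the trim values are lengths measured in from the left, bottom, right and top edges respectively (big points by default, but units can be given), e.g. this untested sketch crops 10mm from each side and 5mm/15mm from the bottom/top:

```
\begin{figure}
  \includegraphics[trim=10mm 5mm 10mm 15mm, clip=true]{file.jpeg}
\end{figure}
```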

This method doesn't always work, however; for instance, if you use xelatex, like me.

There is a way around this though – the *adjustbox* package. It includes, among other handy functions, *\adjincludegraphics*, which does the trick. One of the best things about it is that it seems to use the same syntax, so there is no need to change all of the argument names…just add adj to the function call.

```
\usepackage{adjustbox}
...
\begin{figure}
  \adjincludegraphics[trim= left bottom right top, clip=true]{file.jpeg}
\end{figure}
```

I think that it's even possible to have it export its settings so that *\includegraphics* is recoded to do the same as *\adjincludegraphics*. I've not yet tried that though. (It should be just a case of using *\usepackage[export]{adjustbox}* instead of *\usepackage{adjustbox}*.)
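If that works as advertised, the usage would be something like this (an untested sketch on my part):

```
\usepackage[export]{adjustbox}
...
\begin{figure}
  % with the export option, \includegraphics should accept adjustbox's keys directly
  \includegraphics[trim= left bottom right top, clip=true]{file.jpeg}
\end{figure}
```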