If you’ve used R for any length of time, you’ve probably heard the admonishment to “vectorise your code”. But what does that actually mean? Vectorising your code is not just about avoiding for loops, although that’s often a step. Vectorising is about taking a whole-object approach to a problem, thinking about vectors, not scalars. There are two key attributes of a vectorised function:
It makes many problems simpler. Instead of having to think about the components of a vector, you only think about entire vectors.
The loops in a vectorised function are written in C instead of R. Loops in C are much faster because they have much less overhead.
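To make the contrast concrete, here is a minimal sketch (the name `square_loop` is invented for illustration) comparing an R-level loop with the equivalent whole-object operation:

```r
# Scalar thinking: visit each component with an R-level loop.
square_loop <- function(x) {
  out <- numeric(length(x))
  for (i in seq_along(x)) {
    out[i] <- x[i]^2
  }
  out
}

# Whole-object thinking: one vectorised expression; the loop runs in C.
x <- 1:5
identical(square_loop(x), x^2)
#> [1] TRUE
```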
Chapter 9 stressed the importance of vectorised code as a higher level abstraction. Vectorisation is also important for writing fast R code. This doesn't mean simply using lapply(). Instead, vectorisation means finding the existing R function that is implemented in C and most closely applies to your problem.
Vectorised functions that apply to many common performance bottlenecks include rowSums(), colSums(), rowMeans(), and colMeans(). These vectorised matrix functions will always be faster than using apply(). You can sometimes use these functions to build other vectorised functions:

```r
rowAny <- function(x) rowSums(x) > 0
rowAll <- function(x) rowSums(x) == ncol(x)
```
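A quick illustration of those helpers on a made-up logical matrix (self-contained, so the definitions are repeated here):

```r
rowAny <- function(x) rowSums(x) > 0
rowAll <- function(x) rowSums(x) == ncol(x)

m <- matrix(c(TRUE, FALSE, TRUE,
              TRUE, TRUE,  TRUE),
            nrow = 2, byrow = TRUE)

rowAny(m)  # both rows contain at least one TRUE
#> [1] TRUE TRUE
rowAll(m)  # only the second row is all TRUE
#> [1] FALSE TRUE
```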
Vectorised subsetting can lead to big improvements in speed. Remember the techniques behind lookup tables (Section 4.5.1) and matching and merging by hand (Section 4.5.2). Also remember that you can use subsetting assignment to replace multiple values in a single step. If x is a vector, matrix or data frame then x[is.na(x)] <- 0 will replace all missing values with 0.
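Both ideas in one minimal sketch (the vectors here are invented for demonstration):

```r
# Subsetting assignment: replace every NA in a single step.
x <- c(1, NA, 3, NA, 5)
x[is.na(x)] <- 0
x
#> [1] 1 0 3 0 5

# A named lookup table translates many values at once.
grades <- c(a = "excellent", b = "good", c = "poor")
scores <- c("b", "a", "c", "a")
unname(grades[scores])
#> [1] "good"      "excellent" "poor"      "excellent"
```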
If you’re extracting or replacing values in scattered locations in a matrix or data frame, subset with an integer matrix. See Section 4.2.3 for more details.
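A minimal sketch of subsetting with an integer matrix (the data is made up for illustration):

```r
m <- matrix(1:12, nrow = 3)

# Each row of `idx` is a (row, column) pair; one call touches them all.
idx <- cbind(c(1, 3, 2), c(1, 2, 4))
m[idx]        # extracts m[1, 1], m[3, 2], and m[2, 4]
#> [1]  1  6 11

m[idx] <- 0L  # the same idea replaces values in scattered locations
```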
If you're converting continuous values to categorical make sure you know how to use cut() and findInterval(). Be aware of vectorised functions like cumsum() and diff().
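For instance, base R's cut() and findInterval() bin a whole vector of continuous values in one call; a minimal sketch with made-up data:

```r
age <- c(3, 17, 25, 64, 81)

# cut() returns a factor with one level per bin.
cut(age, breaks = c(0, 18, 65, Inf),
    labels = c("minor", "adult", "senior"))
#> [1] minor  minor  adult  adult  senior
#> Levels: minor adult senior

# findInterval() returns the bin index instead of a factor.
findInterval(age, c(0, 18, 65))
#> [1] 1 1 2 2 3
```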
Matrix algebra is a general example of vectorisation. There, the loops are executed by highly tuned external libraries like BLAS. If you can figure out a way to use matrix algebra to solve your problem, you'll often get a very fast solution. The ability to solve problems with matrix algebra is a product of experience. A good place to start is to ask people with experience in your domain.
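As a small, made-up example: row-wise weighted sums can be written as a single matrix-vector multiply, which hands the loop to BLAS:

```r
set.seed(42)
m <- matrix(runif(5 * 3), nrow = 5)
w <- c(0.2, 0.3, 0.5)

# Loop version: one weighted sum per row, computed in R.
out_loop <- numeric(nrow(m))
for (i in seq_len(nrow(m))) {
  out_loop[i] <- sum(m[i, ] * w)
}

# Matrix algebra: a single multiply, executed by BLAS.
out_blas <- drop(m %*% w)

all.equal(out_loop, out_blas)
#> [1] TRUE
```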
Vectorisation has a downside: it is harder to predict how operations will scale. The following example measures how long it takes to use character subsetting to look up 1, 10, and 100 elements from a list. You might expect that looking up 10 elements would take 10 times as long as looking up 1, and that looking up 100 elements would take 10 times longer again. In fact, looking up 100 elements takes only about 10 times longer than looking up 1. That happens because once you get to a certain size, the internal implementation switches to a strategy that has a higher set-up cost but scales better.
```r
lookup <- setNames(as.list(sample(100, 26)), letters)

x1 <- "j"
x10 <- sample(letters, 10)
x100 <- sample(letters, 100, replace = TRUE)

bench::mark(
  lookup[x1],
  lookup[x10],
  lookup[x100],
  check = FALSE
)[c("expression", "min", "median", "itr/sec", "n_gc")]
#> # A tibble: 3 x 4
#>   expression        min   median `itr/sec`
#>   <bch:expr>   <bch:tm> <bch:tm>     <dbl>
#> 1 lookup[x1]   435.04ns 449.13ns  1896101.
#> 2 lookup[x10]    1.3µs    1.35µs   675432.
#> 3 lookup[x100]  3.37µs    5.19µs   208830.
```
Vectorisation won’t solve every problem, and rather than torturing an existing algorithm into one that uses a vectorised approach, you’re often better off writing your own vectorised function in C++. You’ll learn how to do so in Chapter 25.
The density functions, e.g., dnorm(), have a common interface. Which arguments are vectorised over? What does rnorm(10, mean = 10:1) do?
Compare the speed of apply(x, 1, sum) with rowSums(x) for varying sizes of x.
How can you use crossprod() to compute a weighted sum? How much faster is it than the naive sum(x * w)?