Introduction

tensorr provides methods to manipulate and store sparse tensors. Tensors are multidimensional generalizations of matrices (two dimensional) and vectors (one dimensional).

It has three main goals:

  • Provide an efficient format to store sparse tensors in R.
  • Provide standard tensor operations such as multiplication and unfolding.
  • Provide standard tensor decomposition techniques such as CP and Tucker.

The aim of this vignette is not to provide a mathematical overview of tensors; for that, see Kolda and Bader (2009). It assumes that you have some working knowledge of tensors and want to know how to use them in R.

Let’s start with a simple motivating example for sparse tensor usage. Say we have a three-dimensional 2x2x2 tensor with non-zero values in the first and fifth positions (remember that R arrays and matrices are stored in column-major order). We could represent this object with a standard R array:

dims <- c(2,2,2)
z <- array(c(10,0,0,0,20,0,0,0), dims)
z
## , , 1
## 
##      [,1] [,2]
## [1,]   10    0
## [2,]    0    0
## 
## , , 2
## 
##      [,1] [,2]
## [1,]   20    0
## [2,]    0    0

But since most of the values are zero, it makes more sense to store this as a sparse tensor, which keeps only the non-zero values. Let’s create this object by providing the indices (subscripts) of the non-zero values.

library(tensorr)

subs <- list(c(1,1,1), c(1,1,2))
vals <- c(10, 20)
dims <- c(2,2,2)
x <- sptensor(subs, vals, dims)
x
## <A 2x2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <1,1,2>
## vals: 10 20

For this small example, the benefit of using a sparse tensor is not apparent (in fact the sparse object is larger than the dense one if you check object.size), but for larger tensors the storage savings quickly become significant.
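You can check this yourself with object.size; exact numbers will vary by platform and R version, so treat this as a rough sketch:

# compare the dense array (z) with the sparse tensor (x); sparse storage
# grows with the number of non-zero entries rather than with prod(dims)
object.size(z)
object.size(x)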

We’ll go over the different kinds of operations you can perform below, but for now feel free to try a few out:

# element-wise math operations
x + x
2 * x
x * x
max(x)

# extracting 
x[1,1,1]
x[1:4]
x[list(c(1,1,1), c(1,1,2), c(2,2,2))]

# replacing
x[1,1,2] <- 30

# converting
as_dtensor(x)

# tensor multiplication
m <- matrix(1:6, nrow = 3, ncol = 2)
ttm(x, m, 2)

# unfolding
unfold(x, 1)

Tensor Classes

The tensorr package provides S4 classes for sparse and dense tensor representations. The sptensor class is a new sparse tensor class that stores the non-zero values and their subscripts in coordinate list (COO) format to reduce storage requirements. (Note that the Matrix package refers to this as the triplet format; its corresponding class is TsparseMatrix.) The dtensor class is a wrapper around R’s existing dense multidimensional array, but adds functionality for tensor operations such as multiplication and unfolding.
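If the coordinate-list idea is new to you, here is the two-dimensional analogue built with the Matrix package (a sketch only; it assumes you have Matrix installed, which tensorr already uses for its unfolded matrices):

library(Matrix)
# a 2x4 matrix stored in triplet ("coo") form: only the row indices,
# column indices, and non-zero values are kept
spMatrix(nrow = 2, ncol = 4, i = c(1, 1), j = c(1, 3), x = c(10, 20))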

Sparse Tensors

A sparse tensor can be created with a list or matrix of subscripts, a numeric vector of non-zero values, and an integer vector of dimensions. Here’s a summary of the basic commands needed to create sparse tensors.

Let’s create a 2x2x2 sptensor with non-zero values in the first and fifth positions. You can create one with a list of subscripts.

subs <- list(c(1,1,1), c(1,1,2))
vals <- c(10, 20)
dims <- c(2,2,2)
x <- sptensor(subs, vals, dims)

Or, alternatively, you can provide a matrix of subscripts. Note that the subscripts are represented as a matrix in which the ith row corresponds to the ith dimension and the jth column holds the subscript of the jth non-zero value.

subs <- matrix(c(1,1,1, 1,1,2), nrow = length(dims))
x <- sptensor(subs, vals, dims)
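Both constructions describe the same tensor. As a quick sanity check (a sketch using a temporary name y and the accessor functions introduced just below):

# the two constructors should agree on values and subscripts
y <- sptensor(list(c(1,1,1), c(1,1,2)), vals, dims)
identical(nzvals(x), nzvals(y))
all(nzsubs(x) == nzsubs(y))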

The constructor components are stored as slots in the object and can be accessed via slots (x@subs, x@vals, x@dims) or the preferred accessor functions.

# subscripts for non-zero values
nzsubs(x)
##      [,1] [,2]
## [1,]    1    1
## [2,]    1    1
## [3,]    1    2
# non-zero values
nzvals(x)
## [1] 10 20
# dimensions
dim(x)
## [1] 2 2 2

See methods(class = "sptensor") for a full list of operations associated with this class.

Dense Tensors

Dense tensors can be created by simply providing an existing multidimensional array to the constructor. You can access the non-zero subscripts, non-zero values, and dimensions the same way you would for a sparse tensor.

dims <- c(2,2,2)
arr <- array(c(10,0,0,0,20,0,0,0), dims)
z <- dtensor(arr)
# subscripts for non-zero values
nzsubs(z)
##      [,1] [,2]
## [1,]    1    1
## [2,]    1    1
## [3,]    1    2
# non-zero values
nzvals(z)
## [1] 10 20
# dimensions
dim(z)
## [1] 2 2 2

See methods(class = "dtensor") for a full list of operations associated with this class.

Unfolded Tensors

You can also directly create unfolded-sptensor and unfolded-dtensor objects, though most likely you will only interact with these classes after unfolding an existing tensor. The unfolding operation is discussed in more depth in a later section. The complement to unfolding is refolding. Here’s an example of unfolding our tensor along the first dimension and then refolding it back.

unfold(x, 1)
## <A 2x2x2 unfolded sparse tensor along mode 1 >
## 2 x 4 sparse Matrix of class "dgTMatrix"
##               
## [1,] 10 . 20 .
## [2,]  . .  . .
refold(unfold(x, 1))
## <A 2x2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <1,1,2>
## vals: 10 20
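You can unfold along any mode; for our 2x2x2 tensor, modes 2 and 3 each yield their own 2 x 4 sparse matrix (output omitted):

# unfold along the second and third modes
unfold(x, 2)
unfold(x, 3)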

Converting Between Classes

You can easily convert between sparse and dense tensor representations.

# convert dense tensor to sparse
as_sptensor(z)
## <A 2x2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <1,1,2>
## vals: 10 20
# convert sparse tensor to dense
as_dtensor(x)
## <A 2x2x2 dense tensor>
## , , 1
## 
##      [,1] [,2]
## [1,]   10    0
## [2,]    0    0
## 
## , , 2
## 
##      [,1] [,2]
## [1,]   20    0
## [2,]    0    0

You can also turn a data frame of indices into a sparse tensor. Each column in the data frame corresponds to a specific dimension. The last column in the data frame is assumed to contain the value, unless otherwise specified.

df <- data.frame(i = c(1, 1), j = c(1, 1), k = c(1,2), val = c(10, 20))
df
##   i j k val
## 1 1 1 1  10
## 2 1 1 2  20
as_sptensor(df, dims = dims)
## <A 2x2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <1,1,2>
## vals: 10 20
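Going the other direction, a simple manual sketch is to rebuild a data frame from the accessors (this doesn’t assume any dedicated as.data.frame method):

# transpose the subscript matrix so each dimension becomes a column, then
# attach the non-zero values
data.frame(t(nzsubs(x)), val = nzvals(x))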

Extracting and Replacing

The extraction [ and replacement [<- functions work exactly as they would for multidimensional arrays, with one addition: they can also accept a list or matrix of subscripts. This feature was added to aid programming workflows (as opposed to interactive usage).

If you have a high-dimensional array it can be cumbersome to write out the subscripts for each dimension, e.g. x[1,2,4,5,1,1,1,21,100], and it is more likely that you will make mistakes. But if you can generate these subscripts programmatically, you can simply pass a list or matrix instead.

Standard Indexing

If the dimensions of your tensor aren’t too high, you can take advantage of standard indexing the way you would with any matrix or array in R. This format takes a comma-separated list of arguments (since [ and [<- are actually functions).

x[1,1,1]
## [1] 10
x[1,2,2]
## [1] 0
x[1,1,1] <- 100
x[2,2,2] <- 200
x
## <A 2x2x2 sparse tensor with 3 non-zero entries>
## subs: <1,1,1> <1,1,2> <2,2,2>
## vals: 100 20 200

You can also pass ranges or leave out arguments where you want to extract or replace all the values in that dimension.

x[1,,]
## <A 1x2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <1,1,2>
## vals: 100 20
x[1,1:2,1:2]
## <A 1x2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <1,1,2>
## vals: 100 20
x[1,,,drop=TRUE]
## <A 2x2 sparse tensor with 2 non-zero entries>
## subs: <1,1> <1,2>
## vals: 100 20

Linear Indexing

You can also index the tensor by treating it as a single vector of values. Note that R stores values in column-major order, which means that the first index changes the fastest and the last index changes the slowest as you traverse the array. For example, these would be the linear indices of a 2x2x2 multidimensional array.

array(1:8, c(2,2,2))
## , , 1
## 
##      [,1] [,2]
## [1,]    1    3
## [2,]    2    4
## 
## , , 2
## 
##      [,1] [,2]
## [1,]    5    7
## [2,]    6    8

Using this pattern, we can extract or replace tensor values by passing a single vector of numeric values with each value corresponding to a linear index.

# get the first three values
x[c(1,2,3)]
## [1] 100   0   0
# alternatively
x[1:3]
## [1] 100   0   0
# replace the first and fifth values
x[c(1,5)] <- c(-10, -20)
x
## <A 2x2x2 sparse tensor with 3 non-zero entries>
## subs: <1,1,1> <1,1,2> <2,2,2>
## vals: -10 -20 200
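If you need to map linear indices back to subscripts, base R’s arrayInd() does the conversion:

# each row is the subscript for the corresponding linear index
arrayInd(c(1, 5), dim(x))
##      [,1] [,2] [,3]
## [1,]    1    1    1
## [2,]    1    1    2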

List/Matrix Indexing

Another way to extract or replace values from a tensor is to use a list or matrix of subscripts (similar to how you would construct an sptensor). As stated above, this can be useful if you have a high-dimensional tensor and a way to programmatically produce indices. In the example below we’ll create the list manually, with a programmatic sketch after it.

subs <- list(c(1,1,1), c(1,2,1), c(1,1,2))
x[subs]
## [1] -10   0 -20
x[subs] <- c(50, 60, 70)
x
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 50 60 70 200
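Here is one sketch of generating such a list programmatically with expand.grid (grid and subs are just local helper names; output omitted):

# all subscripts with the first index fixed at 1
grid <- expand.grid(1, 1:2, 1:2)
subs <- lapply(seq_len(nrow(grid)), function(r) unlist(grid[r, ], use.names = FALSE))
x[subs]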

Dimnames Indexing

You can also add dimnames to your sparse tensor and extract by specifying the dimname.

dimnames(x) <- list(LETTERS[1:2], letters[1:2], month.abb[1:2])
dimnames(x)
## [[1]]
## [1] "A" "B"
## 
## [[2]]
## [1] "a" "b"
## 
## [[3]]
## [1] "Jan" "Feb"
x[,,"Feb"]
## <A 2x2x1 sparse tensor with 2 non-zero entries>
## subs: <1,1,1> <2,2,1>
## vals: 70 200
identical(x[,,"Feb"], x[,,2])
## [1] TRUE

Group Generics

A number of group generics are also defined for tensors (if you’re unfamiliar with group generics, see ?S4groupGeneric), including:

  • Arithmetic (+, -, *, …)
  • Comparisons (==, >, !=, …)
  • Logic (&, |)
  • Math (abs, sqrt, …)
  • Summary (max, min, sum, …)

We’ll go over a few examples for each group, but not every one. Feel free to experiment on your own! For these examples we’ll use our sparse tensor x, which currently has the following values:

x
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 50 60 70 200

Note that these operations will throw a warning if the operation converts zero values to non-zero values since this will likely cause the sparse tensor to become extremely dense.

Arithmetic

These are element-wise operations; for tensor products, see the Tensor Multiplication section below.

x + x
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 100 120 140 400

Note that if an operation results in all values being zero, then the resulting tensor has empty subscripts and values.

x - x
## <A 2x2x2 sparse tensor with 0 non-zero entries>
## subs: <empty>
## vals: <empty>

Comparisons and Logic

These operations are also element-wise, returning a tensor with logical values.

x > 100
## <A 2x2x2 sparse tensor with 1 non-zero entries>
## subs: <2,2,2>
## vals: TRUE
x > 2*x
## <A 2x2x2 sparse tensor with 0 non-zero entries>
## subs: <empty>
## vals: <empty>

Note the warning when returning a tensor that is mostly TRUE values.

x == x
## Warning in x == x: Operation converts zero values to non-zero values.
## Tensor is likely dense now
## <A 2x2x2 sparse tensor with 8 non-zero entries>
## subs: <1,1,1> <2,1,1> <1,2,1> <2,2,1> <1,1,2> ...
## vals: TRUE TRUE TRUE TRUE TRUE ...

Math

sqrt(x)
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 7.071068 7.745967 8.3666 14.14214
log1p(x)
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 3.931826 4.110874 4.26268 5.303305
abs(x)
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 50 60 70 200

Again, we’ll get a warning if we apply a function that converts zero values to non-zero values.

log(x)
## Warning in log(x): Operation converts zero values to non-zero values.
## Tensor is likely dense now
## <A 2x2x2 sparse tensor with 8 non-zero entries>
## subs: <1,1,1> <2,1,1> <1,2,1> <2,2,1> <1,1,2> ...
## vals: 3.91202300542815 -Inf 4.0943445622221 -Inf 4.24849524204936 ...

Summary

Note that any time we apply min or range to a tensor, we’ll get 0 if there are any zero values in the tensor. If you just want the min or range of the non-zero values, call these functions on the result of nzvals.

max(x)
## [1] 200
range(x)
## [1]   0 200
range(nzvals(x))
## [1]  50 200

Unfolding

Unfolding, or matricizing, re-orders the fibers of a tensor to be the columns of a matrix. A fiber is analogous to a row or column of a 2D matrix in that it is obtained by holding every dimension constant except for one. For example, the mode-1 fibers of our sparse tensor x are x[,1,1], x[,2,1], x[,1,2], and x[,2,2]. The unfold function takes a tensor, finds the fibers, and makes them the columns of a new matrix.

u <- unfold(x,1)
u
## <A 2x2x2 unfolded sparse tensor along mode 1 >
## 2 x 4 sparse Matrix of class "dgTMatrix"
##                  
## [1,] 50 60 70   .
## [2,]  .  .  . 200
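Each column of u is one of the mode-1 fibers listed above; you can pull them out directly with extraction (output omitted):

# mode-1 fibers, in the order they appear as columns of unfold(x, 1)
x[, 1, 1]
x[, 2, 1]
x[, 1, 2]
x[, 2, 2]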

Unfolding is important because many tensor operations can be expressed as operations on unfolded tensors, so we can take advantage of existing tools and methods for working with matrices. For example, the n-mode product of a tensor and a matrix can be written as the matrix product of the unfolded tensor and the matrix. See Tensor Multiplication and Kolda and Bader (2009) for more info.

Of course, each unfolded tensor can be easily refolded to its original state.

refold(u)
## <A 2x2x2 sparse tensor with 4 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,1,2> <2,2,2>
## vals: 50 60 70 200

Tensor Multiplication

Tensor multiplication is analogous to matrix multiplication, but is a little more complex due to the number of dimensions.

Tensor Times Matrix

Currently, this package only implements the n-mode product, which multiplies a tensor by a matrix along a single mode. If \(\mathbf{X}\) is a tensor with dimensions \(I_1 \times I_2 \times \dots \times I_N\) and \(U\) is a matrix of size \(J \times I_n\), then the product contracts over the nth index, replacing the dimension of size \(I_n\) with one of size \(J\). Per Kolda and Bader (2009), it can be written as:

\[(\mathbf{X} \times_n U)_{i_1 i_2 \dots i_{n-1} \, j \, i_{n+1} \dots i_N} = \sum_{i_n = 1}^{I_n}{x_{i_1 i_2 \dots i_N} u_{j i_n}}\]

However, as stated previously, this operation can also be expressed in terms of the unfolded tensor:

\[\mathbf{Y} = \mathbf{X} \times_n U \Leftrightarrow Y_{(n)} = U X_{(n)}\]

where \(Y_{(n)}\) and \(X_{(n)}\) denote the tensors unfolded along the nth mode. This product can be executed using ttm. For example, we can multiply our sparse tensor along the 2nd mode by a 3x2 matrix. Notice how the dimensions of the resulting tensor change in the 2nd mode.

m <- matrix(1:6, nrow = 3, ncol = 2)
ttm(x, m, 2)
## <A 2x3x2 sparse tensor with 9 non-zero entries>
## subs: <1,1,1> <1,2,1> <1,3,1> <1,1,2> <2,1,2> ...
## vals: 290 400 510 70 800 ...
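A quick way to see the dimension change:

# the 2nd mode grows from length 2 to nrow(m) = 3
dim(ttm(x, m, 2))
## [1] 2 3 2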

Tensor Times Vector

Using ttv to multiply a tensor by a vector is equivalent to using ttm to multiply by a matrix with a single row, except that the resulting nth dimension of size one is dropped automatically. So the result of ttv is a tensor with one less dimension.

v <- c(3, 4)
ttv(x, v, 2)
## <A 2x2 sparse tensor with 3 non-zero entries>
## subs: <1,1> <1,2> <2,2>
## vals: 390 210 800
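To see the equivalence with ttm described above, you can supply the same vector as a single-row matrix (a sketch, assuming ttm accepts a 1x2 matrix); the values match, but the 2nd dimension of size one is kept rather than dropped:

# same contraction as ttv(x, v, 2), but returns a 2x1x2 tensor
ttm(x, matrix(c(3, 4), nrow = 1), 2)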

Tensor Times Tensor (Outer Product)

The outer product of two tensors results in a tensor with dimension c(dim(x),dim(y)). This is essentially a sparse implementation of the outer function in the base package. The function ttt is an alias for outerprod.

ttt(x, x)
## <A 2x2x2x2x2x2 sparse tensor with 16 non-zero entries>
## subs: <1,1,1,1,1,1> <1,2,1,1,1,1> <1,1,2,1,1,1> <2,2,2,1,1,1> <1,1,1,1,2,1> ...
## vals: 2500 3000 3500 10000 3000 ...
identical(ttt(x,x), outerprod(x,x))
## [1] TRUE
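A quick check of the dimension claim:

# the outer product's dimensions are the concatenation c(dim(x), dim(x))
dim(ttt(x, x))
## [1] 2 2 2 2 2 2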

Norm and Inner Product

You can also calculate the Frobenius norm of a tensor and the inner product between two tensors.

norm(x)
## [1] 225.8318
sqrt(innerprod(x,x))
## [1] 225.8318
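Since the Frobenius norm only depends on the non-zero entries, you can also compute it directly from nzvals:

# square root of the sum of squared non-zero values
sqrt(sum(nzvals(x)^2))
## [1] 225.8318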

Future Work

I plan to add common tensor decompositions, such as CP and Tucker, to the tensorr package in the near future. Any other requests and suggestions are welcome.

References

Many of the dense and sparse implementation ideas were adapted from:

  • B. W. Bader and T. G. Kolda. Algorithm 862: MATLAB tensor classes for fast algorithm prototyping, ACM Transactions on Mathematical Software 32(4):635-653, December 2006.
  • B. W. Bader and T. G. Kolda. Efficient MATLAB computations with sparse and factored tensors, SIAM Journal on Scientific Computing 30(1):205-231, December 2007.
  • scikit-tensor

For a review on tensors, see:

  • T. G. Kolda and B. W. Bader, Tensor Decompositions and Applications, SIAM Review 51(3):455-500, September 2009.