Low-rank matrix factorization with attributes
Ecole des mines de Paris, France
September 2006
Abstract
We develop a new collaborative filtering (CF) method that combines both previously known users' preferences, i.e. standard CF, and product/user attributes, i.e. classical function approximation, to predict a given user's interest in a particular product. We cast our method as a generalized low-rank matrix completion problem, in which we learn a function whose inputs are pairs of vectors; the standard low-rank matrix completion problem is the special case where the inputs to the function are the row and column indices of the matrix. We solve this generalized matrix completion problem using tensor product kernels, for which we also formally generalize standard kernel properties. Benchmark experiments on movie ratings show the advantages of our generalized matrix completion method over the standard matrix completion one with no information about movies or people, as well as over standard multi-task or single-task learning methods.
1 Introduction
Collaborative Filtering (CF) refers to the task of predicting preferences of a given user based on their previously known preferences as well as the preferences of other users. In a book recommender system, for example, one would like to suggest new books to a customer based on what he and others have recently read or purchased. This can be formulated as the problem of filling a matrix with customers as rows, objects (e.g., books) as columns, and missing entries corresponding to preferences that one would like to infer. In the simplest case, a preference could be a binary variable (thumbs up/down), or perhaps even a more quantitative assessment (scale of 1 to 5).
Standard CF assumes that nothing is known about the users or the objects apart from the preferences expressed so far. In such a setting the most common assumption is that preferences can be decomposed into a small number of factors, both for users and objects, resulting in the search for a low-rank matrix which approximates the partially observed matrix of preferences. This problem is usually a difficult non-convex problem for which only heuristic algorithms exist [14]. Alternatively, convex formulations have been obtained by relaxing the rank constraint into a constraint on the trace norm of the matrix [15].
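As an illustration of the heuristic approach, here is a minimal alternating least squares sketch for the non-convex low-rank completion problem (a generic textbook scheme, not the specific algorithm of [14]; all names and parameter values are ours):

```python
import numpy as np

def als_complete(R, mask, rank=2, lam=0.1, n_iters=50, seed=0):
    """Alternating least squares for low-rank completion of R on the
    observed entries (mask == True). A standard heuristic sketch."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(scale=0.1, size=(m, rank))
    V = rng.normal(scale=0.1, size=(n, rank))
    for _ in range(n_iters):
        # Ridge-regularized least squares for each row of U with V fixed,
        # then for each row of V with U fixed.
        for i in range(m):
            Vi = V[mask[i]]                      # factors of observed columns
            U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(rank),
                                   Vi.T @ R[i, mask[i]])
        for j in range(n):
            Uj = U[mask[:, j]]
            V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(rank),
                                   Uj.T @ R[mask[:, j], j])
    return U @ V.T

# Recover a rank-1 matrix from roughly 70% of its entries.
rng = np.random.default_rng(1)
true = np.outer(rng.normal(size=8), rng.normal(size=6))
mask = rng.random(true.shape) < 0.7
approx = als_complete(true, mask, rank=1, lam=1e-3)
```

Each inner update is a small convex problem, but the overall objective is non-convex, hence the "heuristic" qualifier above.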
In many practical applications of CF, however, a description of the users and/or the objects through attributes (e.g., gender, age) or measures of similarity is available. In that case it is tempting to take advantage of both known preferences and descriptions to model the preferences of users. An important benefit of such a framework over pure CF is that it potentially allows the prediction of preferences for new users and/or new objects. Seen as learning a preference function from examples, this problem can be solved by virtually any algorithm for supervised classification or regression taking as input a pair (user, object). If we suppose for example that a positive definite kernel between pairs can be deduced from the description of the users and object, then learning algorithms like support vector machines or kernel ridge regression can be applied. These algorithms minimize an empirical risk over a ball of the reproducing kernel Hilbert space (RKHS) defined by the pairwise kernel.
Both the rank constraint and the RKHS norm restriction act as regularization based on prior hypotheses about the nature of the preferences to be inferred. The rank constraint is based on the hypothesis that preferences can be modelled by a limited number of factors describing users and objects. The RKHS norm constraint assumes that preferences vary smoothly between similar users and similar objects, where the similarity is assessed in terms of the kernel for pairs.
The main contribution of this work is to propose a framework which combines both regularizations on the one hand, and which interpolates between the pure CF approach and the pure attribute-based approaches on the other hand. In particular, the framework encompasses low-rank matrix factorization for collaborative filtering, multi-task learning, and classical regression/classification over product spaces. We show on a benchmark experiment of movie recommendations that the resulting algorithm can lead to significant improvements over other state-of-the-art methods.
2 Kernels and tensor product spaces
In this section, we review the classical theory of tensor product reproducing kernel Hilbert spaces, which provides a natural generalization of finite-dimensional matrices to functions of two variables. The general setup is as follows. We consider the general problem of estimating a function $f : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}$ given a finite set of observations of $f$ at points in $\mathcal{X} \times \mathcal{Y}$. We assume that both spaces $\mathcal{X}$ and $\mathcal{Y}$ are endowed with positive semidefinite kernels, respectively $k_{\mathcal{X}}$ and $k_{\mathcal{Y}}$, and denote by $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$ the corresponding RKHS. A typical application of this setting is where $x \in \mathcal{X}$ is a person, $y \in \mathcal{Y}$ is a movie, the kernels $k_{\mathcal{X}}$ and $k_{\mathcal{Y}}$ represent similarities between persons and movies, respectively, and $f(x, y)$ represents the rating by person $x$ of movie $y$. We note that if $\mathcal{X}$ and $\mathcal{Y}$ are finite sets, then $f$ is simply a matrix of size $|\mathcal{X}| \times |\mathcal{Y}|$.
2.1 Tensor product kernels and RKHS
We denote by $k_{\otimes} = k_{\mathcal{X}} \otimes k_{\mathcal{Y}}$ the tensor product kernel, known to be a positive definite kernel [4, p.70]:

$$k_{\otimes}\left( (x, y), (x', y') \right) = k_{\mathcal{X}}(x, x')\, k_{\mathcal{Y}}(y, y') \qquad (1)$$
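As a concrete illustration of (1), a small sketch that builds the tensor product kernel matrix over all (person, movie) pairs from two base kernel matrices; the RBF base kernels and the pair ordering are our own choices, not prescribed by the paper:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def tensor_product_kernel(KX, KY):
    """Kernel matrix of k_X * k_Y over all (x, y) pairs, with pair (i, j)
    mapped to index i * n_Y + j: exactly the Kronecker product."""
    return np.kron(KX, KY)

X = np.random.default_rng(0).normal(size=(3, 2))   # 3 "persons"
Y = np.random.default_rng(1).normal(size=(4, 2))   # 4 "movies"
K = tensor_product_kernel(rbf(X, X), rbf(Y, Y))    # 12 x 12
```

The resulting matrix is positive semidefinite, as a Kronecker product of two positive semidefinite matrices.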
and by $\mathcal{H}_{\otimes}$ the associated RKHS. A classical result of Aronszajn [2] states that $\mathcal{H}_{\otimes}$ is the tensor product of the two spaces $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$ (denoted $\mathcal{H}_{\mathcal{X}} \otimes \mathcal{H}_{\mathcal{Y}}$), i.e., $\mathcal{H}_{\otimes}$ is the completion of the set of all functions which can be finitely decomposed as $f(x, y) = \sum_{i} g_i(x) h_i(y)$, where $g_i \in \mathcal{H}_{\mathcal{X}}$ and $h_i \in \mathcal{H}_{\mathcal{Y}}$. An atomic term, defined as $(g \otimes h)(x, y) = g(x) h(y)$ with $g \in \mathcal{H}_{\mathcal{X}}$ and $h \in \mathcal{H}_{\mathcal{Y}}$, is usually denoted $g \otimes h$. The space $\mathcal{H}_{\otimes}$ is equipped with a norm such that $\| g \otimes h \|_{\otimes} = \| g \|_{\mathcal{H}_{\mathcal{X}}} \| h \|_{\mathcal{H}_{\mathcal{Y}}}$.
2.2 Rank
An element $f$ of $\mathcal{H}_{\otimes}$ can always be decomposed as a possibly infinite sum of atomic terms of the form $g \otimes h$, where $g \in \mathcal{H}_{\mathcal{X}}$ and $h \in \mathcal{H}_{\mathcal{Y}}$. We define the rank $\mathrm{rank}(f)$ of an element $f$ of $\mathcal{H}_{\otimes}$ as the minimal number of atomic terms in any decomposition of $f$, i.e., $\mathrm{rank}(f)$ is the smallest integer $p$ such that $f$ can be expanded as:

$$f = \sum_{i=1}^{p} g_i \otimes h_i$$

for some functions $g_i \in \mathcal{H}_{\mathcal{X}}$ and $h_i \in \mathcal{H}_{\mathcal{Y}}$, if such an integer does exist (otherwise, the rank is infinite).
When the two RKHS are spaces of linear functions on a Euclidean space, the tensor product can be identified with the space of bilinear forms on the product of the two Euclidean spaces, and the notion of rank coincides with the usual notion of rank for matrices. We note that an alternative characterization of $\mathrm{rank}(f)$ is the supremum of the ranks of the matrices $\left( f(x_i, y_j) \right)_{i,j}$ over the choices of finite sets $\{x_1, \ldots, x_m\} \subset \mathcal{X}$ and $\{y_1, \ldots, y_n\} \subset \mathcal{Y}$ (see technical annex for a proof).
2.3 Trace norm
Given a rectangular matrix $M$, the rank is not an easy function to optimize or constrain, since it is neither convex nor continuous. Following the 1-norm approximation to the 0-norm, the trace norm has emerged as an efficient convex approximation of the rank [8, 15]. The trace norm $\|M\|_*$ is defined as the sum of the singular values of $M$. This definition is not easy to extend to functional tensor product spaces because it involves eigendecompositions. Rather, we use the equivalent formulation

$$\|M\|_* = \min_{M = UV^\top} \frac{1}{2} \left( \|U\|_F^2 + \|V\|_F^2 \right),$$

where $\| \cdot \|_F^2$ is the squared Frobenius norm.
We thus extend the notion of trace norm as follows:

Lemma 1

The quantity $\|f\|_* = \min \left\{ \frac{1}{2} \sum_i \left( \|g_i\|_{\mathcal{H}_{\mathcal{X}}}^2 + \|h_i\|_{\mathcal{H}_{\mathcal{Y}}}^2 \right) : f = \sum_i g_i \otimes h_i \right\}$ is a norm, equal to the sum of singular values when the two RKHS are spaces of linear functions on a Euclidean space.
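The variational characterization can be checked numerically in the matrix case: a factorization built from the SVD attains the bound, and any other factorization can only increase it (a NumPy sketch; the rescaling test at the end is our own):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 4))

# Trace (nuclear) norm: sum of the singular values.
U_s, s, Vt = np.linalg.svd(M, full_matrices=False)
trace_norm = s.sum()

# The balanced factorization U = U_s sqrt(S), V = V_s sqrt(S) satisfies
# M = U V^T and attains (1/2)(||U||_F^2 + ||V||_F^2) = trace norm.
U = U_s * np.sqrt(s)
V = Vt.T * np.sqrt(s)
assert np.allclose(U @ V.T, M)
assert np.isclose(0.5 * (np.sum(U**2) + np.sum(V**2)), trace_norm)

# Any other factorization of M gives a larger value, e.g. after rescaling:
U2, V2 = 2.0 * U, 0.5 * V          # still M = U2 V2^T
assert 0.5 * (np.sum(U2**2) + np.sum(V2**2)) >= trace_norm
```

The balanced SVD factorization is what makes the minimum in the variational formula attainable.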
3 Representer theorems
In this section we explicitly state and prove representer theorems in tensor product spaces when a functional is minimized with rank constraints. These theorems underlie the algorithms proposed in the next section.
In a collaborative filtering task, the data usually have a matrix form, i.e., many of the observed first arguments (resp. second arguments) are identical. We let $x_1, \ldots, x_{n_{\mathcal{X}}}$ denote the distinct elements of $\mathcal{X}$ in the training data and, respectively, $y_1, \ldots, y_{n_{\mathcal{Y}}}$ the distinct elements of $\mathcal{Y}$. We assume that we have observations for only a subset of the $n_{\mathcal{X}} \times n_{\mathcal{Y}}$ possible pairs. We thus denote by $(u_i, v_i)$ the indices of the $i$-th observation and by $t_i$ the observed target, for $i = 1, \ldots, n$.
3.1 Classical representer theorem in the tensor product RKHS
Given a loss function $\ell : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, for example the square loss $\ell(t, t') = (t - t')^2$, a first classical approach to learn dependencies between the pair $(x, y)$ and the variable $t$ is to consider it as a supervised learning problem over the product space $\mathcal{X} \times \mathcal{Y}$, and for example to search for the function in the RKHS of the product kernel which solves the following problem:

$$\min_{f \in \mathcal{H}_{\otimes}} \sum_{i=1}^{n} \ell\left( f(x_{u_i}, y_{v_i}), t_i \right) + \lambda \| f \|_{\otimes}^2 \qquad (2)$$
By the representer theorem [9], the solution of (2) has an expansion of the form:

$$f = \sum_{i=1}^{n} \alpha_i \, k_{\otimes}\left( (x_{u_i}, y_{v_i}), \cdot \right)$$

for some vector $\alpha \in \mathbb{R}^n$. Note that the number of parameters is the number $n$ of observed values. For many loss functions, the problem (2) boils down to classical machine learning algorithms such as support vector machines, kernel logistic regression or kernel ridge regression, which can be solved by the usual implementations run with the product kernel (1).
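For the square loss, problem (2) is kernel ridge regression with the product kernel, with the standard closed-form dual solution $\alpha = (K + \lambda I)^{-1} t$. A hedged sketch (the kernel matrices and all names are hypothetical, for illustration only):

```python
import numpy as np

def fit_product_kernel_ridge(KX, KY, u, v, t, lam=0.1):
    """Kernel ridge regression over observed (user, item) pairs using the
    product kernel (1): K_obs[a, b] = KX[u_a, u_b] * KY[v_a, v_b].
    Returns the dual coefficients alpha (one per observation)."""
    K_obs = KX[np.ix_(u, u)] * KY[np.ix_(v, v)]
    return np.linalg.solve(K_obs + lam * np.eye(len(t)), t)

def predict(KX, KY, u, v, alpha, u_new, v_new):
    """Predict the rating of user u_new for item v_new."""
    k = KX[u_new, u] * KY[v_new, v]     # product kernel to training pairs
    return k @ alpha

# Toy example with random positive semidefinite kernel matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); KX = A @ A.T      # user kernel
B = rng.normal(size=(6, 6)); KY = B @ B.T      # item kernel
u = np.array([0, 1, 2, 3]); v = np.array([1, 2, 0, 4])
t = np.array([1.0, -1.0, 0.5, 2.0])
alpha = fit_product_kernel_ridge(KX, KY, u, v, t, lam=1e-6)
```

With a tiny regularization the fit nearly interpolates the training ratings; in practice lam would be chosen by cross-validation.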
3.2 Representer theorem with rank constraint
In order to take advantage of the possible representation of our predictor as a sum of a small number of factors, we propose to consider the following generalization of (2):

$$\min_{f \in \mathcal{H}_{\otimes}, \ \mathrm{rank}(f) \leq p} \sum_{i=1}^{n} \ell\left( f(x_{u_i}, y_{v_i}), t_i \right) + \lambda \| f \|_{\otimes}^2 \qquad (3)$$
As the following proposition shows, the solution of this constrained minimization problem can also be reduced to a finitedimensional optimization problem (see a proof in the technical annex):
Proposition 1

The optimal solution of (3) can be written as $f = \sum_{i=1}^{p} g_i \otimes h_i$, with $g_i = \sum_{j=1}^{n_{\mathcal{X}}} \alpha_{ji}\, k_{\mathcal{X}}(x_j, \cdot)$ and $h_i = \sum_{j=1}^{n_{\mathcal{Y}}} \beta_{ji}\, k_{\mathcal{Y}}(y_j, \cdot)$, for some $\alpha \in \mathbb{R}^{n_{\mathcal{X}} \times p}$ and $\beta \in \mathbb{R}^{n_{\mathcal{Y}} \times p}$.

This proposition is a crucial contribution of this paper, as it allows us to learn the function $f$ by learning the coefficients $\alpha$ and $\beta$: denoting by $\alpha_i$ the $i$-th column of $\alpha$ (and similarly for $\beta$), we obtain from Proposition 1 that an equivalent formulation of (3) is:
$$\min_{\alpha \in \mathbb{R}^{n_{\mathcal{X}} \times p}, \ \beta \in \mathbb{R}^{n_{\mathcal{Y}} \times p}} \sum_{i=1}^{n} \ell\left( (K_{\mathcal{X}} \alpha \beta^\top K_{\mathcal{Y}})_{u_i v_i}, t_i \right) + \lambda \sum_{i,j=1}^{p} (\alpha_i^\top K_{\mathcal{X}} \alpha_j)(\beta_i^\top K_{\mathcal{Y}} \beta_j) \qquad (5)$$
where $K_{\mathcal{X}}$ is the kernel matrix for the elements $x_1, \ldots, x_{n_{\mathcal{X}}}$ (and similarly for $K_{\mathcal{Y}}$). In order to make the link with the trace norm formulation of the next section, if we denote $\gamma = \alpha \beta^\top \in \mathbb{R}^{n_{\mathcal{X}} \times n_{\mathcal{Y}}}$, we note that the matrix of predicted values is equal to $K_{\mathcal{X}} \gamma K_{\mathcal{Y}}$, resulting in the following optimization problem:

$$\min_{\gamma : \ \mathrm{rank}(\gamma) \leq p} \sum_{i=1}^{n} \ell\left( (K_{\mathcal{X}} \gamma K_{\mathcal{Y}})_{u_i v_i}, t_i \right) + \lambda \, \mathrm{tr}\left( K_{\mathcal{X}} \gamma K_{\mathcal{Y}} \gamma^\top \right) \qquad (6)$$
3.3 Representer theorems and trace norm
The trace norm $\|f\|_*$ does not readily lead to a representer theorem and a finite-dimensional optimization problem. It is thus preferable to penalize the trace norm of the matrix of predicted values $\left( f(x_i, y_j) \right)_{i \leq n_{\mathcal{X}}, j \leq n_{\mathcal{Y}}}$, and to minimize

$$\min_{f \in \mathcal{H}_{\otimes}} \sum_{i=1}^{n} \ell\left( f(x_{u_i}, y_{v_i}), t_i \right) + \lambda \left\| \left( f(x_i, y_j) \right)_{i,j} \right\|_* \qquad (7)$$
We have the following representer theorem (whose proof is postponed to the technical annex) for the problem (7):
Proposition 2
The optimal solution of (7) can be written in the form $f = \sum_{i=1}^{n_{\mathcal{X}}} \sum_{j=1}^{n_{\mathcal{Y}}} \gamma_{ij}\, k_{\mathcal{X}}(x_i, \cdot)\, k_{\mathcal{Y}}(y_j, \cdot)$, where $\gamma \in \mathbb{R}^{n_{\mathcal{X}} \times n_{\mathcal{Y}}}$.
The optimization problem can thus be rewritten as:
$$\min_{\gamma \in \mathbb{R}^{n_{\mathcal{X}} \times n_{\mathcal{Y}}}} \sum_{i=1}^{n} \ell\left( (K_{\mathcal{X}} \gamma K_{\mathcal{Y}})_{u_i v_i}, t_i \right) + \lambda \left\| K_{\mathcal{X}} \gamma K_{\mathcal{Y}} \right\|_* \qquad (8)$$
Note that, in contrast to the finite representation without any constraint on the rank or the trace norm (where the number of parameters is the number $n$ of observed values), the number of parameters is here the total number $n_{\mathcal{X}} n_{\mathcal{Y}}$ of elements in the matrix; this method, though convex, is thus of higher computational complexity.
3.4 Reformulation in terms of Kronecker products
Kronecker products
Given a matrix $M \in \mathbb{R}^{m \times n}$ and a matrix $N \in \mathbb{R}^{p \times q}$, the Kronecker product $M \otimes N$ is the matrix in $\mathbb{R}^{mp \times nq}$ defined by blocks of size $p \times q$, where the $(i, j)$-th block is $M_{ij} N$.
The most important properties are the following (where it is assumed that all matrix operations are well-defined): $(M \otimes N)^\top = M^\top \otimes N^\top$, $(M \otimes N)(P \otimes Q) = (MP) \otimes (NQ)$, and $\mathrm{vec}(N X M^\top) = (M \otimes N)\, \mathrm{vec}(X)$, where $\mathrm{vec}(X)$ is the stack of the columns of $X$.
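These identities are easy to verify numerically (a small sketch; `vec` follows the column-stacking convention above, and the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3)); B = rng.normal(size=(4, 2))
C = rng.normal(size=(3, 2)); D = rng.normal(size=(2, 3))
X = rng.normal(size=(2, 3))   # for the vec identity

def vec(M):
    """Stack the columns of M into one vector (column-major order)."""
    return M.reshape(-1, order="F")

# (A x B)(C x D) = (AC) x (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# (A x B)^T = A^T x B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# vec(B X A^T) = (A x B) vec(X)
assert np.allclose(vec(B @ X @ A.T), np.kron(A, B) @ vec(X))
```

The `vec` identity is the one used in the reformulation below: it turns a two-sided matrix product into a single matrix-vector product with a Kronecker-structured matrix.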
Reformulation
We have $\mathrm{vec}(K_{\mathcal{X}} \gamma K_{\mathcal{Y}}) = (K_{\mathcal{Y}} \otimes K_{\mathcal{X}})\, \mathrm{vec}(\gamma)$, and thus the vector of predicted values for all pairs is equal to $(K_{\mathcal{Y}} \otimes K_{\mathcal{X}})\, \mathrm{vec}(\gamma)$. The matrix $K_{\mathcal{Y}} \otimes K_{\mathcal{X}}$ is the kernel matrix associated with all pairs, for the kernel $k_{\otimes}$. The results in this section do not provide additional representational power beyond the kernel $k_{\otimes}$, but they do provide additional regularization frameworks for tensor product spaces. In the next section, we tackle the representational part of our contribution and show how the kernels $k_{\mathcal{X}}$ and $k_{\mathcal{Y}}$ can be tailored to the task of matrix completion.
4 Kernels for matrix completion with attributes
The three formulations (2), (3) and (7) differ in the way they handle regularization. They all require a choice of kernels over $\mathcal{X}$ and $\mathcal{Y}$ to define the RKHS norm in $\mathcal{H}_{\otimes}$, which we discuss in this section. The main theme of this section is the distinction between kernels derived from the attributes and kernels derived from the identities of the different elements of $\mathcal{X}$ and $\mathcal{Y}$ (referred to as Dirac kernels).
Dirac kernels
In the standard collaborative filtering framework, where no attributes over $\mathcal{X}$ or $\mathcal{Y}$ are available, a natural kernel over $\mathcal{X}$ and $\mathcal{Y}$ is the Dirac kernel ($k(x, x') = 1$ if $x = x'$, $0$ otherwise). If both $k_{\mathcal{X}}$ and $k_{\mathcal{Y}}$ are Dirac kernels, then $k_{\otimes}$ is also a Dirac kernel by (1), and the classical approach (2) is irrelevant in that case (the learned function being equal to $0$ on unseen examples). The low-rank constraint added in (3), however, results in a relevant problem: in fact, in that case we exactly recover the classical low-rank matrix factorization problem.
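A minimal sketch of the Dirac kernel (the function and variable names are ours): its kernel matrix on the distinct training identities is the identity matrix, and a new identity has zero similarity to every training point, which is why (2) with Dirac kernels cannot generalize off the training pairs:

```python
import numpy as np

def dirac_kernel(ids_a, ids_b):
    """k(x, x') = 1 if x == x', else 0, as a kernel matrix."""
    return (np.asarray(ids_a)[:, None] == np.asarray(ids_b)[None, :]).astype(float)

users = ["u1", "u2", "u3"]
K = dirac_kernel(users, users)          # identity matrix on distinct identities
K_new = dirac_kernel(["u4"], users)     # all zeros: a new user is unrelated to all
```

Any kernel expansion over the training points therefore predicts exactly 0 for the new user, which is the degenerate behaviour noted above.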
Attribute kernels
When attributes are available, they can be used to define attribute kernels. When both $k_{\mathcal{X}}$ and $k_{\mathcal{Y}}$ are kernels derived from attributes, (2) boils down to classical regression or classification over pairs. Problem (3) provides an alternative, in which the rank of the function is additionally constrained.
Multitask learning
Suppose now that attributes are available only for objects in $\mathcal{Y}$, and not for $\mathcal{X}$. It is then possible to take a Dirac kernel for $k_{\mathcal{X}}$ and an attribute kernel for $k_{\mathcal{Y}}$. In that case the optimization problem (2) boils down to solving a classical classification or regression problem for each element of $\mathcal{X}$ independently. Adding the rank constraint in (3) removes the independence among tasks by enforcing a decomposition of the tasks into a limited number of factors, which leads to an algorithm for multi-task learning based on a low-rank representation of the predictor function. This approach is to be contrasted with the framework of [7], which corresponds to a finite $\mathcal{X}$ and a particular kernel $k_{\mathcal{X}}$ mixing a constant kernel with a Dirac kernel. Our framework focuses on a low-rank representation of the predictors of the tasks, while the framework of [7] focuses on a set of predictors, one per task, with small variance. An extension of this multi-task learning framework, leading to similar penalizations by trace norms, was independently derived by Argyriou et al. [1].
General formulation
Supposing now that attributes are available on both $\mathcal{X}$ and $\mathcal{Y}$, let us consider the following interpolated kernels:

$$k_{\mathcal{X}} = \eta\, k_{\mathcal{X}}^A + (1 - \eta)\, k_{\mathcal{X}}^D, \qquad k_{\mathcal{Y}} = \zeta\, k_{\mathcal{Y}}^A + (1 - \zeta)\, k_{\mathcal{Y}}^D,$$

where $\eta, \zeta \in [0, 1]$ and $k^A$ and $k^D$ denote the attribute and Dirac kernels, respectively. The resulting product kernel is a sum of four terms:

$$k_{\otimes} = \eta \zeta\, k_{\mathcal{X}}^A \otimes k_{\mathcal{Y}}^A + \eta (1 - \zeta)\, k_{\mathcal{X}}^A \otimes k_{\mathcal{Y}}^D + (1 - \eta) \zeta\, k_{\mathcal{X}}^D \otimes k_{\mathcal{Y}}^A + (1 - \eta)(1 - \zeta)\, k_{\mathcal{X}}^D \otimes k_{\mathcal{Y}}^D.$$

By varying $\eta$ and $\zeta$, this kernel provides an interpolation between collaborative filtering ($\eta = \zeta = 0$), classical attribute-based regression on pairs ($\eta = \zeta = 1$), and multi-task learning ($\eta = 0$ and $\zeta = 1$, or $\eta = 1$ and $\zeta = 0$).
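The interpolation and its four-term expansion can be sketched directly on kernel matrices (the attribute kernels here are random positive semidefinite matrices for illustration, and all names are ours):

```python
import numpy as np

def interpolated_kernel(K_attr, eta):
    """eta * attribute kernel + (1 - eta) * Dirac kernel, as matrices over
    the training identities (the Dirac kernel matrix is the identity)."""
    return eta * K_attr + (1.0 - eta) * np.eye(K_attr.shape[0])

def pair_kernel_matrix(KX_attr, KY_attr, eta, zeta):
    """Product-kernel matrix over all (x, y) pairs for the interpolated
    kernels, with pair (i, j) mapped to index i * n_Y + j."""
    return np.kron(interpolated_kernel(KX_attr, eta),
                   interpolated_kernel(KY_attr, zeta))

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); KX_attr = A @ A.T    # hypothetical attribute kernels
B = rng.normal(size=(4, 4)); KY_attr = B @ B.T
eta, zeta = 0.5, 0.3
K = pair_kernel_matrix(KX_attr, KY_attr, eta, zeta)

# The product kernel expands into the four announced terms.
IX, IY = np.eye(3), np.eye(4)
K4 = (eta * zeta * np.kron(KX_attr, KY_attr)
      + eta * (1 - zeta) * np.kron(KX_attr, IY)
      + (1 - eta) * zeta * np.kron(IX, KY_attr)
      + (1 - eta) * (1 - zeta) * np.kron(IX, IY))
assert np.allclose(K, K4)
```

At the corner eta = zeta = 0 the pair-kernel matrix collapses to the identity, i.e. pure collaborative filtering with Dirac kernels.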
In terms of kernel matrices on the set of all pairs, if we denote by $A_{\mathcal{X}}$ the kernel matrix associated with the attributes of the elements of $\mathcal{X}$, and by $A_{\mathcal{Y}}$ the kernel matrix associated with the attributes of the elements of $\mathcal{Y}$, this is equivalent to using the matrix $\eta \zeta\, (A_{\mathcal{Y}} \otimes A_{\mathcal{X}}) + \eta (1 - \zeta)\, (I \otimes A_{\mathcal{X}}) + (1 - \eta) \zeta\, (A_{\mathcal{Y}} \otimes I) + (1 - \eta)(1 - \zeta)\, I$, which is the sum of four positive kernel matrices. The first one is simply the kernel matrix for the tensor product space, while the last one is proportional to the identity and usually appears in kernel methods as the numerical effect of regularization [13]. The two additional matrices make learning across rows and columns possible.
Generalization to new points
One of the usual drawbacks of collaborative filtering is the inability to generalize to unseen data points (i.e., a new movie or a new person in the context of movie recommendation). When attributes are used, a prediction based on them can be made; using attributes thus has an added benefit beyond better performance on matrix completion tasks.
5 Algorithms
In this section, we describe the algorithms used for the optimization formulations in (5) and (8). We also show that recent developments in multiple kernel learning can be applied to both settings (fixed-rank constraint or trace norm).
5.1 Fixed rank
The objective function of (5) is convex in $\alpha$ and in $\beta$ separately, but is not jointly convex. There are thus two natural optimization strategies: (1) alternating convex minimization with respect to $\alpha$ and $\beta$, and (2) direct joint minimization using quasi-Newton iterative methods [5] (in simulations we have used the latter scheme).
As in [12], the fixed-rank formulation, although not convex, has the advantage of being parameterized by low-rank matrices and is thus of much lower complexity than the convex formulation that we now present. We present experimental results in the next section using only this fixed-rank formulation.
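A hedged sketch of direct joint minimization by gradient descent on a fixed-rank objective: the predictor has the form of Proposition 1, but the penalty is a ridge-style surrogate on the factor coefficients, a simplification of the regularizer in (5), and all names are ours. With identity (Dirac) kernels it reduces to plain low-rank matrix completion:

```python
import numpy as np

def fit_fixed_rank(KX, KY, T, mask, p=1, lam=1e-3, lr=0.05, n_iters=3000, seed=0):
    """Joint gradient descent for a fixed-rank predictor P = KX a b^T KY,
    fitted on the observed entries of T (mask == True)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(scale=0.01, size=(T.shape[0], p))
    b = rng.normal(scale=0.01, size=(T.shape[1], p))
    for _ in range(n_iters):
        P = KX @ a @ b.T @ KY
        E = np.where(mask, P - T, 0.0)        # residuals on observed entries only
        # Gradients of the squared loss plus a ridge penalty on a and b.
        a -= lr * (KX @ E @ KY @ b + lam * KX @ a)
        b -= lr * (KY @ E.T @ KX @ a + lam * KY @ b)
    return KX @ a @ b.T @ KY

# With Dirac (identity) kernels this is plain low-rank matrix completion.
rng = np.random.default_rng(1)
T = np.outer(rng.normal(size=6), rng.normal(size=5))
mask = rng.random(T.shape) < 0.7
P = fit_fixed_rank(np.eye(6), np.eye(5), T, mask, p=1)
```

A quasi-Newton method, as used in the paper's simulations, would replace the fixed-step updates but use the same gradients.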
5.2 Convex formulation
If the loss is convex, then the objective function of (8) is a convex function of $\gamma$. However, even when the loss is differentiable, the trace norm is not differentiable, which makes iterative descent methods such as Newton-Raphson inapplicable [6].
For specific losses which are SDP-representable, i.e., which can be represented by a semidefinite program, such as the square loss or the hinge loss, the minimization of this function can be cast as a semidefinite program (SDP) [6]. For differentiable losses which are not SDP-representable, such as the logistic loss, an efficient alternative is to modify the trace norm to make it differentiable (see, e.g., [12]). For example, instead of penalizing the sum of the singular values $\sigma_i$, one may penalize the sum of smooth functions of the $\sigma_i$, which leads to a twice differentiable objective [10].
5.3 Learning the kernels
In this section, we show that the rank constraint and the trace norm constraint can also be used in the multiple kernel learning framework [3, 11]. We can indeed show that if the loss is convex, then, as a function of the kernel matrices, the optimal values of the optimization problems (5) and (8) are convex functions, so the kernel can be learned efficiently by minimizing those functions. We do not use this method in our experiments; we include it for completeness.
Proposition 3
Given $\beta$, the optimal value of (5) with respect to $\alpha$ is a convex function of the kernel matrix $K_{\mathcal{X}}$.
The optimal value of (8) depends on the kernel matrices only through the Kronecker product $K_{\mathcal{Y}} \otimes K_{\mathcal{X}}$ and is a convex function of it.
This proposition (whose proof is in the technical annex) shows that, in the case of the rank constraint (5), if we parameterize $K_{\mathcal{X}}$ as a linear combination of basis kernel matrices, the combination weights and the coefficients $\alpha$, $\beta$ can be learned simultaneously [3, 11]. In particular, the optimal weighting between the attribute kernel and the Dirac kernel can be learned directly from the data. Note that a similar proposition holds when the roles of $K_{\mathcal{X}}$ and $K_{\mathcal{Y}}$ are exchanged; when alternating minimization is used to minimize the objective function, the kernels can be learned at each step. In the case of the trace norm constraint (8), it shows that we can learn a linear combination of basis kernels, either the four kernels presented earlier, obtained from Dirac kernels and attributes, or more general combinations. We leave this avenue open for future research.
6 Experiments
We tested the method on the well-known MovieLens 100k dataset from the GroupLens Research Group at the University of Minnesota. This dataset consists of ratings of 1682 movies by 943 users. Each rating is a score from 1 to 5, where 5 is the highest. Each user rated some subset of the movies, with a minimum of 20 ratings per user; the total number of ratings available is exactly 100,000, averaging about 105 per user. To speed up the computation, we used a random subsample of 800 movies and 400 users, for a total of 20,541 ratings. We divided this set into 18,606 training ratings and 1,935 test ratings. This dataset is well suited to our method as it includes attribute information for both the movies and the users. Each movie is labelled with at least one genre (e.g., action or adventure), while user attributes include age, gender, and an occupation from a given list (e.g., administrator or artist).
We performed experiments using the rank constraint described in Section 3, and we used the standard approach of cross-validation to choose the kernel parameters. Our method requires the selection of four parameters: the rank of the estimated matrix; the regularization parameter $\lambda$; and the values $\eta$ and $\zeta$, the trade-offs between the Dirac kernel and the attribute kernel for the users and the movies, respectively. The rank and $\lambda$ both act as regularization parameters. All four parameters were selected by cross-validation over finite grids of candidate values.
In Table 1, we show the performance for various choices of rank and for various values of $\eta$ and $\zeta$, after selecting $\lambda$ in each case using cross-validation. We also show in bold the performance for the parameters selected using cross-validation. Notice that performance is consistently worse when $\eta$ and $\zeta$ are chosen at the corners of $[0, 1]^2$ than for values in its interior. We observed this to be true, in fact, not only when the remaining parameters are chosen by cross-validation, but for every choice of them. Figure 1 shows the test mean squared error with the rank and $\lambda$ selected at each point using cross-validation (left), as well as for fixed values of the rank and $\lambda$ (right; the plot looks similar for any fixed values) over the range of $\eta$ and $\zeta$. Observe that the performance is best in the interior of the $(\eta, \zeta)$ area and worsens towards the edges, particularly at the corners. This is what we might expect: at the corners we are no longer taking advantage of either the attribute information or the identity information, for the movies or for the users.
Also notice in Table 1 that, as expected, regularization through controlling the rank is indeed important. Regularization through the parameter $\lambda$ is also necessary: for the setting shown in Table 1, test performance is 1.0351 for the cross-validated $\lambda$, but rises to 1.1401 and 1.1457 for other values of $\lambda$ (we observe such changes in performance across values of $\lambda$ for all other choices of the remaining parameters). Hence, it is important to balance both regularization terms. In fact, we use cross-validation to select all parameters.

Table 1: Test mean squared error for various choices of the rank and of the kernel mixture parameters $(\eta, \zeta)$, with $\lambda$ selected by cross-validation.

1.5391  1.6436  1.1999  1.1310  1.1106  1.0676
1.5552  1.4008  1.2221  1.1138  1.0544  1.0478
1.3294  1.3787  1.2315  1.0999  1.0611  1.0351
1.3806  1.4234  1.2192  1.0818  1.0587  1.0596
7 Conclusion
We presented a method for solving a generalized matrix completion problem where we have attributes describing the matrix dimensions. Various approaches, such as standard low-rank matrix completion, are special cases of our method, and preliminary experiments confirm the benefits of our method. An interesting direction of future research is to explore further the multi-task learning algorithm we obtained with the low-rank constraint. On the theoretical side, a better understanding of the effects of norm and rank regularizations and their interaction would be helpful.
References
 [1] A. Argyriou, T. Evgeniou, and M. Pontil. Multitask feature learning. In Adv. NIPS 19, 2007.
 [2] N. Aronszajn. Theory of reproducing kernels. Trans. Am. Math. Soc., 68:337 – 404, 1950.
 [3] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. ICML, 2004.
 [4] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic analysis on semigroups. Springer-Verlag, New York, 1984.
 [5] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer, 2003.
 [6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
 [7] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. J. Mach. Learn. Res., 6:615–637, 2005.
 [8] M. Fazel, H. Hindi, and S. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proc. American Control Conference, volume 6, 2001.
 [9] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. J. Math. Anal. Appl., 33:82–95, 1971.
 [10] A. S. Lewis and H. S. Sendov. Twice differentiable spectral functions. SIAM J. Mat. Anal. App., 23(2):368–386, 2002.
 [11] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. J. Mach. Learn. Res., 6:1099–1125, 2005.
 [12] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proc. ICML, 2005.
 [13] J. ShaweTaylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
 [14] N. Srebro and T. S. Jaakkola. Weighted low-rank approximations. In Proc. ICML, 2003.
 [15] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Adv. NIPS 17, 2005.
Appendix A Rank of a function in a tensor product RKHS (Section 2.2)
An element $f$ of $\mathcal{H}_{\otimes}$ can always be decomposed as a possibly infinite sum of atomic terms of the form $g \otimes h$, where $g \in \mathcal{H}_{\mathcal{X}}$ and $h \in \mathcal{H}_{\mathcal{Y}}$. We define the rank $\mathrm{rank}(f)$ of an element $f$ of $\mathcal{H}_{\otimes}$ as the minimal number of atomic terms in any decomposition of $f$, i.e., $\mathrm{rank}(f)$ is the smallest integer $p$ such that $f$ can be expanded as:

$$f = \sum_{i=1}^{p} g_i \otimes h_i$$

for some functions $g_i \in \mathcal{H}_{\mathcal{X}}$ and $h_i \in \mathcal{H}_{\mathcal{Y}}$, if such an integer does exist (otherwise, the rank is infinite).
When the two RKHS are spaces of linear functions on a Euclidean space, the tensor product can be identified with the space of bilinear forms on the product of the two Euclidean spaces, and the notion of rank coincides with the usual notion of rank for matrices. We note that an alternative characterization of $\mathrm{rank}(f)$ is the supremum of the ranks of the matrices $\left( f(x_i, y_j) \right)_{i,j}$ over the choices of finite sets $\{x_1, \ldots, x_m\} \subset \mathcal{X}$ and $\{y_1, \ldots, y_n\} \subset \mathcal{Y}$.
Proposition 4
$\mathrm{rank}(f)$ is equal to the supremum of the ranks of the matrices $\left( f(x_i, y_j) \right)_{i,j}$ over the choices of finite sets $\{x_1, \ldots, x_m\} \subset \mathcal{X}$ and $\{y_1, \ldots, y_n\} \subset \mathcal{Y}$.
Proof
If $f$ can be expanded as $f = \sum_{i=1}^{p} g_i \otimes h_i$, then for any choice of finite sets $\{x_1, \ldots, x_m\}$ and $\{y_1, \ldots, y_n\}$, the matrix $\left( f(x_k, y_l) \right)_{k,l}$ can be expanded as $\sum_{i=1}^{p} a_i b_i^\top$, where $a_i = \left( g_i(x_k) \right)_{k}$ and $b_i = \left( h_i(y_l) \right)_{l}$, and has therefore a rank not larger than $p$. Taking the smallest possible $p$, this shows that the rank of any such matrix cannot be larger than $\mathrm{rank}(f)$. Conversely, if $f = \sum_{i=1}^{p} g_i \otimes h_i$ with $p = \mathrm{rank}(f)$, we need to show that there exist two sets of points such that the corresponding matrix has rank at least $p$. We first observe that both sets $\{g_1, \ldots, g_p\}$ and $\{h_1, \ldots, h_p\}$ form linearly independent families in $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$, respectively. Indeed, if for example $g_p = \sum_{i=1}^{p-1} c_i g_i$, then $f = \sum_{i=1}^{p-1} g_i \otimes (h_i + c_i h_p)$, which contradicts the hypothesis that $\mathrm{rank}(f) = p$. It is therefore possible to find two sets of points $\{x_1, \ldots, x_p\}$ and $\{y_1, \ldots, y_p\}$ such that both sets of vectors $\left( g_i(x_k) \right)_{k}$ and $\left( h_i(y_l) \right)_{l}$, $i = 1, \ldots, p$, form linearly independent families in $\mathbb{R}^p$. The matrix $\left( f(x_k, y_l) \right)_{k,l}$ corresponding to these sets of points has rank $p$.
Appendix B Representer theorem with rank constraint (Proof of Proposition 1)
Here we show that the solution of

$$\min_{f \in \mathcal{H}_{\otimes}, \ \mathrm{rank}(f) \leq p} \sum_{i=1}^{n} \ell\left( f(x_{u_i}, y_{v_i}), t_i \right) + \lambda \| f \|_{\otimes}^2 \qquad (9)$$
can be reduced to a finitedimensional optimization problem:
Proposition 5

The optimal solution of (9) can be written as $f = \sum_{i=1}^{p} g_i \otimes h_i$, with $g_i = \sum_{j=1}^{n_{\mathcal{X}}} \alpha_{ji}\, k_{\mathcal{X}}(x_j, \cdot)$ and $h_i = \sum_{j=1}^{n_{\mathcal{Y}}} \beta_{ji}\, k_{\mathcal{Y}}(y_j, \cdot)$, for some $\alpha \in \mathbb{R}^{n_{\mathcal{X}} \times p}$ and $\beta \in \mathbb{R}^{n_{\mathcal{Y}} \times p}$.
Proof Let $V_{\otimes}$ denote the subspace of $\mathcal{H}_{\otimes}$ spanned by the functions $k_{\mathcal{X}}(x_i, \cdot)\, k_{\mathcal{Y}}(y_j, \cdot)$ for $i = 1, \ldots, n_{\mathcal{X}}$ and $j = 1, \ldots, n_{\mathcal{Y}}$, and let $V_{\otimes}^{\perp}$ denote its orthogonal complement. Similarly, let $V_{\mathcal{X}}$ and $V_{\mathcal{Y}}$ denote respectively the subspaces of $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$ spanned by the functions $k_{\mathcal{X}}(x_i, \cdot)$, $i = 1, \ldots, n_{\mathcal{X}}$ (resp. $k_{\mathcal{Y}}(y_j, \cdot)$, $j = 1, \ldots, n_{\mathcal{Y}}$), and $V_{\mathcal{X}}^{\perp}$ and $V_{\mathcal{Y}}^{\perp}$ the corresponding orthogonal complements in $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$. Any function of rank at most $p$ can be expanded as $f = \sum_{i=1}^{p} g_i \otimes h_i$, with $g_i \in \mathcal{H}_{\mathcal{X}}$ and $h_i \in \mathcal{H}_{\mathcal{Y}}$. Now, denote by $g_i = g_i^{\parallel} + g_i^{\perp}$ the unique decomposition of $g_i$ over $V_{\mathcal{X}} \oplus V_{\mathcal{X}}^{\perp}$, and define similarly $h_i = h_i^{\parallel} + h_i^{\perp}$. We therefore obtain:

$$f = \sum_{i=1}^{p} g_i^{\parallel} \otimes h_i^{\parallel} + \sum_{i=1}^{p} g_i^{\parallel} \otimes h_i^{\perp} + \sum_{i=1}^{p} g_i^{\perp} \otimes h_i^{\parallel} + \sum_{i=1}^{p} g_i^{\perp} \otimes h_i^{\perp}. \qquad (11)$$

We now claim that the last three terms in (11) are in $V_{\otimes}^{\perp}$. Indeed, taking for example a term $g^{\parallel} \otimes h^{\perp}$, we have $\left\langle g^{\parallel} \otimes h^{\perp},\, k_{\mathcal{X}}(x_i, \cdot) \otimes k_{\mathcal{Y}}(y_j, \cdot) \right\rangle = \left\langle g^{\parallel}, k_{\mathcal{X}}(x_i, \cdot) \right\rangle \left\langle h^{\perp}, k_{\mathcal{Y}}(y_j, \cdot) \right\rangle = 0$ because $h^{\perp} \in V_{\mathcal{Y}}^{\perp}$, and therefore $g^{\parallel} \otimes h^{\perp} \in V_{\otimes}^{\perp}$. A similar computation shows that $g^{\perp} \otimes h^{\parallel}$ and $g^{\perp} \otimes h^{\perp}$ are both in $V_{\otimes}^{\perp}$. On the other hand, because $g_i^{\parallel} \in V_{\mathcal{X}}$ and $h_i^{\parallel} \in V_{\mathcal{Y}}$, one easily gets that $\sum_{i=1}^{p} g_i^{\parallel} \otimes h_i^{\parallel} \in V_{\otimes}$. Therefore $\sum_{i=1}^{p} g_i^{\parallel} \otimes h_i^{\parallel}$ is the orthogonal projection of $f$ onto $V_{\otimes}$, and it is of rank at most $p$. We can conclude as for the classical representer theorem: if $f$ is not restricted to $V_{\otimes}$, then its projection onto $V_{\otimes}$ provides a function of rank at most $p$ with a strictly smaller objective value, leading to a contradiction.
Appendix C Representer theorems and trace norm (proof of Proposition 2)
Here we show a representer theorem for the solution of:

$$\min_{f \in \mathcal{H}_{\otimes}} \sum_{i=1}^{n} \ell\left( f(x_{u_i}, y_{v_i}), t_i \right) + \lambda \left\| \left( f(x_i, y_j) \right)_{i,j} \right\|_* \qquad (12)$$
Proposition 6
The optimal solution of (12) can be written in the form $f = \sum_{i=1}^{n_{\mathcal{X}}} \sum_{j=1}^{n_{\mathcal{Y}}} \gamma_{ij}\, k_{\mathcal{X}}(x_i, \cdot)\, k_{\mathcal{Y}}(y_j, \cdot)$, where $\gamma \in \mathbb{R}^{n_{\mathcal{X}} \times n_{\mathcal{Y}}}$.
Proof
Our objective function depends on $f$ only through its values on the pairs $(x_i, y_j)$, penalized through the trace norm of the corresponding matrix. Among all functions of $\mathcal{H}_{\otimes}$ taking given values on these pairs, the one of minimal RKHS norm lies in the span of the functions $k_{\otimes}\left( (x_i, y_j), \cdot \right)$; the usual representer theorem argument in the RKHS associated with $k_{\otimes}$ thus applies, and we get a solution of the form $f = \sum_{i=1}^{n_{\mathcal{X}}} \sum_{j=1}^{n_{\mathcal{Y}}} \gamma_{ij}\, k_{\otimes}\left( (x_i, y_j), \cdot \right)$, where $\gamma \in \mathbb{R}^{n_{\mathcal{X}} \times n_{\mathcal{Y}}}$.
Appendix D Learning the kernels (proof of Proposition 3)
In this section we prove that if the loss is convex, then, as a function of the kernel matrices, the optimal values of the optimization problems proposed in the paper are convex functions.
Proposition 7
Given $\beta$, the optimal value of (5) with respect to $\alpha$ is a convex function of the kernel matrix $K_{\mathcal{X}}$.
Proof Given $\beta$, the objective function of (5) is convex in $\alpha$, and thus (under appropriate classical conditions) the minimum value is equal to the maximum value of the dual problem, obtained by introducing auxiliary variables and constraints, together with the appropriate Lagrange multipliers. The result follows from derivations obtained in [3, 11].
Proposition 8
The optimal value of (8),

$$\min_{\gamma \in \mathbb{R}^{n_{\mathcal{X}} \times n_{\mathcal{Y}}}} \sum_{i=1}^{n} \ell\left( (K_{\mathcal{X}} \gamma K_{\mathcal{Y}})_{u_i v_i}, t_i \right) + \lambda \left\| K_{\mathcal{X}} \gamma K_{\mathcal{Y}} \right\|_*, \qquad (13)$$

depends on the kernel matrices only through the Kronecker product $K_{\mathcal{Y}} \otimes K_{\mathcal{X}}$ and is a convex function of it.
Proof
The objective function (13) was originally obtained from (12), which is the sum of a term that is a convex function of the values of the function $f$ on all the pairs $(x_i, y_j)$ and of the trace norm of the matrix of those values. The results of [11] apply to this case, and thus the minimum value is a convex function of the kernel matrix for all pairs for the kernel $k_{\otimes}$, which is exactly $K_{\mathcal{Y}} \otimes K_{\mathcal{X}}$.