Multi-Task Feature Learning
Imagine that a user wishes to perform some task, such as shopping for a book. If the user does not have much experience shopping for books, they may draw upon their knowledge of shopping for some other product, such as music CDs. The tasks are not exactly the same, but there are similarities. The user may recognize that books are made by authors and music by artists. Both have different genres, and perhaps appeal to different demographics.
Multi-task feature learning aims to model this paradigm. Statistically, we wonder if there is some common set of features a user may apply in order to do some abstract task (returning to the example, the common task is shopping). And, specifically, can we learn multiple related tasks simultaneously, as opposed to independently?
Multi-task feature learning is especially relevant to situations where the number of observations may be small. This creates a need for finding shared structures between the tasks, so that observations can be aggregated to learn many tasks simultaneously. The search for these shared structures can be seen as a dimensionality reduction, or alternatively a regularization, problem.
The authors generalize 1-norm regularization, which yields a low-dimensional representation for a single task, to learn a representation that is shared across multiple tasks. A regularization parameter controls the number of learned common features. They present a convex optimization problem that is equivalent to their original formulation, and propose an algorithm to solve it.
The algorithm simultaneously learns both the features and the task functions through two alternating steps. The first step, which is supervised, independently learns the parameters of each task's regression or classification function. In the second step, which is unsupervised, a low-dimensional representation of these task parameters is found; the authors show that this is equivalent to learning common features across the tasks.
Learning sparse multi-task representations
For simplicity, we will first define the following notation:
- Let $\mathbb{R}$ be the set of real numbers, $\mathbb{R}_{+}$ the non-negative ones and $\mathbb{R}_{++}$ the positive ones.
- Let $T$ be the number of tasks and let the set of tasks be $\mathbb{N}_T := \{1, \dots, T\}$.
- For each task $t \in \mathbb{N}_T$, we have $m$ input/output examples $(x_{t1}, y_{t1}), \dots, (x_{tm}, y_{tm}) \in \mathbb{R}^d \times \mathbb{R}$.
- The goal is then to estimate $T$ functions $f_t : \mathbb{R}^d \to \mathbb{R}$, $t \in \mathbb{N}_T$, that approximate the training data well and are also statistically predictive for new data.
- If $w, u \in \mathbb{R}^d$, $\langle w, u \rangle := \sum_{i=1}^d w_i u_i$ is their standard inner product in $\mathbb{R}^d$.
- For every $p \geq 1$, $\|w\|_p := \left( \sum_{i=1}^d |w_i|^p \right)^{1/p}$ is the $p$-norm of vector $w$.
- If $A$ is a $d \times T$ matrix, then $a^i \in \mathbb{R}^T$ and $a_t \in \mathbb{R}^d$ are the $i$-th row and the $t$-th column of $A$, respectively.
- For every $r, p \geq 1$, $\|A\|_{r,p} := \left( \sum_{i=1}^d \|a^i\|_r^p \right)^{1/p}$ is the $(r,p)$-norm of $A$ (see the sketch after this list).
- $\mathbf{S}^d$ is the set of $d \times d$ real symmetric matrices and $\mathbf{S}^d_{+}$ is the subset of positive semidefinite ones.
- If $D$ is a $d \times d$ matrix, then $\operatorname{trace}(D) := \sum_{i=1}^d D_{ii}$.
- If $W$ is a $d \times T$ real matrix, then $\operatorname{range}(W)$ is the set $\{x \in \mathbb{R}^d : x = Wz \text{ for some } z \in \mathbb{R}^T\}$.
- $\mathbf{O}^d$ is the set of $d \times d$ orthogonal matrices.
- $D^+$ is the pseudoinverse of a matrix $D$.
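As a sanity check of the norm definitions above, here is a minimal NumPy sketch (ours, not the authors') that computes the $(r,p)$-norm by taking the $r$-norm of each row and then the $p$-norm of the resulting vector:

```python
import numpy as np

def rp_norm(A, r=2, p=1):
    # (r,p)-norm: r-norm of each row of A, then p-norm of those row norms
    row_norms = np.linalg.norm(A, ord=r, axis=1)
    return np.linalg.norm(row_norms, ord=p)

A = np.array([[3.0, 4.0],    # row with 2-norm 5
              [0.0, 0.0],    # zero row contributes nothing
              [1.0, 0.0]])   # row with 2-norm 1
print(rp_norm(A, r=2, p=1))  # (2,1)-norm = 5 + 0 + 1 = 6.0
```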
It is assumed that the functions $f_t$ are related to each other, so they all share a small set of features. Formally, the hypothesis is that the functions can be expressed as:

$$f_t(x) = \sum_{i=1}^d a_{it} h_i(x), \qquad t \in \mathbb{N}_T \qquad (1)$$

In the above equation, $h_i : \mathbb{R}^d \to \mathbb{R}$ are the features and $a_{it} \in \mathbb{R}$ are the regression parameters.
The main assumption is that the vast majority of the features have zero coefficients across all the tasks.
For simplicity, the authors focused on linear features, i.e. $h_i(x) = \langle u_i, x \rangle$, where $u_i \in \mathbb{R}^d$ and the vectors $u_i$ are assumed to be orthonormal. They noted that extensions to non-linear features may be obtained, for example, by using kernels; however, they did not focus on these extensions in their paper. With linear features, the functions $f_t$ are also linear, i.e. $f_t(x) = \langle w_t, x \rangle$, where $w_t = \sum_{i=1}^d a_{it} u_i$.
$U$ is the $d \times d$ matrix whose columns are the vectors $u_i$, and $A$ is the $d \times T$ matrix whose entries are the $a_{it}$. The authors then obtained that $w_t = U a_t$, i.e. $W = UA$, where $W$ is the $d \times T$ matrix whose columns are the $w_t$.
The assumption that the tasks share a small set of features implies that the matrix $A$ has many rows which are identically equal to zero. This, in turn, implies that the corresponding features (the columns of $U$) will not be used to represent the task parameters (the columns of $W$). Thus, $W$ is a low-rank matrix.
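A tiny numerical illustration of this point (ours, with made-up values): if only two rows of $A$ are nonzero, then $W = UA$ has rank at most two, regardless of $d$ and $T$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 6, 4

U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthogonal U; columns are the u_i
A = np.zeros((d, T))
A[0], A[3] = rng.standard_normal(T), rng.standard_normal(T)  # tasks share 2 features

W = U @ A
print(np.linalg.matrix_rank(W))   # 2: no more than the number of nonzero rows of A
```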
The authors' goal was to find the feature vectors $u_1, \dots, u_d$ and the parameters $a_{it}$.
As a starting point, the authors considered the problem in which there is only one task (referred to as task $t$) and the features $u_i$ are fixed. In this problem, the aim is to learn the parameter vector $a_t \in \mathbb{R}^d$ from the data $\{(U^\top x_{ti}, y_{ti})\}_{i=1}^m$ using 1-norm regularization. This unconstrained problem is expressed as:

$$\min \left\{ \sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) + \gamma \|a_t\|_1^2 : a_t \in \mathbb{R}^d \right\} \qquad (2)$$

where $L$ is a loss function (e.g. the square loss) and $\gamma > 0$ is a regularization parameter.
To expand the problem to the multi-task case, the authors introduced the following regularization error function:

$$\mathcal{E}(A, U) = \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) + \gamma \|A\|_{2,1}^2 \qquad (3)$$

The first term in (3) is the empirical error accumulated across the tasks, and the second term is a regularization term that penalizes the $(2,1)$-norm of $A$. The $(2,1)$-norm of $A$ is computed by first computing the 2-norms of the rows $a^i$ (i.e. across the tasks) and then the 1-norm of the vector $(\|a^1\|_2, \dots, \|a^d\|_2)$. The regularization term in (3) ensures that this vector will likely be sparse, and since each row of $A$ corresponds to a feature, this regularization limits the number of common features.
However, since the authors did not simply want to select the features but also to learn them, they further minimized the function $\mathcal{E}$ over $U$, i.e. they considered the following optimization problem:

$$\min \left\{ \mathcal{E}(A, U) : U \in \mathbf{O}^d,\ A \in \mathbb{R}^{d \times T} \right\} \qquad (4)$$

Solving the optimization problem (4) results in learning a low-dimensional representation which is shared across the tasks.
In addition, when $U$ is not learned and is instead set to the identity matrix $I_{d \times d}$, solving (4) selects a common set of variables across the tasks. Under this condition on $U$, (4) can be re-expressed as the following convex optimization problem:

$$\min \left\{ \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle w_t, x_{ti} \rangle) + \gamma \|W\|_{2,1}^2 : W \in \mathbb{R}^{d \times T} \right\} \qquad (5)$$
Equivalent convex optimization formulation
Problem (4) is difficult to solve for the following two reasons:
- It is not convex (although it is separately convex in each of the variables $A$ and $U$).
- The norm $\|A\|_{2,1}$ is not smooth, which makes optimization more difficult.
However, (4) can be re-expressed as an equivalent convex problem.
Before doing so, for every $W \in \mathbb{R}^{d \times T}$ and $D \in \mathbf{S}^d_{+}$, the authors defined the following function:

$$\mathcal{R}(W, D) = \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle w_t, x_{ti} \rangle) + \gamma \sum_{t=1}^T \langle w_t, D^+ w_t \rangle \qquad (6)$$
Theorem 3.1 (a detailed proof is given in the authors' paper listed in the Reference section): Problem (4) is equivalent to the following problem:

$$\min \left\{ \mathcal{R}(W, D) : W \in \mathbb{R}^{d \times T},\ D \in \mathbf{S}^d_{+},\ \operatorname{trace}(D) \leq 1,\ \operatorname{range}(W) \subseteq \operatorname{range}(D) \right\} \qquad (7)$$

That is, $(A, U)$ is an optimal solution for (4) if and only if $(W, D)$ is an optimal solution for (7), where

$$W = UA \quad \text{and} \quad D = U \operatorname{Diag}\left( \frac{\|a^i\|_2}{\|A\|_{2,1}} \right)_{i=1}^d U^\top .$$
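To make the mapping in Theorem 3.1 concrete, here is a small numerical check (ours, with made-up values): the constructed $D$ has unit trace, and its rank equals the number of nonzero rows of $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 3

U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal U
A = np.zeros((d, T))
A[1], A[4] = rng.standard_normal(T), rng.standard_normal(T)  # 2 nonzero rows

row_norms = np.linalg.norm(A, axis=1)              # ||a^i||_2 for each row
W = U @ A
D = U @ np.diag(row_norms / row_norms.sum()) @ U.T # Diag(||a^i||_2 / ||A||_{2,1})

print(np.trace(D))                # 1.0: the trace constraint holds with equality
print(np.linalg.matrix_rank(D))   # 2: the number of features shared by the tasks
```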
Since the rank of $D$ is determined by the number of its nonzero eigenvalues, we can see from the above definition of $D$ that its rank equals the number of nonzero rows of $A$, i.e. the rank of $D$ gives the number of common relevant features shared by the tasks.
By defining the function

$$f(w, D) = \langle w, D^+ w \rangle \quad \text{(with } f(w, D) = +\infty \text{ when } w \notin \operatorname{range}(D)\text{)}$$

and showing that it is jointly convex in both $w$ and $D$, the authors show (details are available in the authors' paper listed in the Reference section) that the function $\mathcal{R}$ in (6) is jointly convex in both $W$ and $D$.
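Joint convexity of $f$ can be spot-checked numerically. A small sketch (ours) that tests the midpoint inequality on random positive definite instances, where the pseudoinverse reduces to the ordinary inverse:

```python
import numpy as np

rng = np.random.default_rng(4)

def f(w, D):
    # f(w, D) = <w, D^+ w>; pinv covers the general (possibly singular) case
    return w @ np.linalg.pinv(D) @ w

for _ in range(1000):
    d = 4
    w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
    B1, B2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
    D1, D2 = B1 @ B1.T + np.eye(d), B2 @ B2.T + np.eye(d)   # positive definite
    mid = f(0.5 * (w1 + w2), 0.5 * (D1 + D2))
    assert mid <= 0.5 * f(w1, D1) + 0.5 * f(w2, D2) + 1e-9  # midpoint convexity holds
```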
The authors solved (7) by alternately minimizing the function $\mathcal{R}$ with respect to $D$ and with respect to the $w_t$ (the $t$-th column of $W$).
When $D$ is held fixed, each $w_t$ is found by learning its parameters independently using a regularization method (such as an SVM, for the hinge loss). When $W$ is held fixed, $D$ is found by solving the following optimization problem:

$$\min \left\{ \sum_{t=1}^T \langle w_t, D^+ w_t \rangle : D \in \mathbf{S}^d_{+},\ \operatorname{trace}(D) \leq 1,\ \operatorname{range}(W) \subseteq \operatorname{range}(D) \right\} \qquad (8)$$
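For the square loss, the $w_t$ step just described has a ridge-type closed form (our derivation, not spelled out in the notes). Writing $X_t \in \mathbb{R}^{m \times d}$ for the matrix whose rows are the inputs $x_{ti}$ of task $t$ and $y_t \in \mathbb{R}^m$ for its outputs, setting the gradient of $\sum_{i=1}^m (y_{ti} - \langle w, x_{ti} \rangle)^2 + \gamma \langle w, D^+ w \rangle$ to zero gives

$$w_t = \left( X_t^\top X_t + \gamma D^+ \right)^{-1} X_t^\top y_t ,$$

assuming the matrix in parentheses is invertible (when $D$ has full rank, $D^+ = D^{-1}$).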
Theorem 4.1 (the proof is given in the authors' paper listed in the Reference section): The optimal solution of (8) is

$$D = \frac{(W W^\top)^{1/2}}{\operatorname{trace}\,(W W^\top)^{1/2}} \qquad (9)$$
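Theorem 4.1 can be implemented in a few lines. A sketch (ours) that forms the matrix square root through an eigendecomposition of $W W^\top$; `scipy.linalg.sqrtm` would work equally well:

```python
import numpy as np

def optimal_D(W):
    # D = (W W^T)^{1/2} / trace((W W^T)^{1/2}), via eigendecomposition of W W^T
    eigvals, eigvecs = np.linalg.eigh(W @ W.T)
    eigvals = np.clip(eigvals, 0.0, None)      # guard against tiny negative eigenvalues
    sqrt_M = eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T
    return sqrt_M / np.trace(sqrt_M)
```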
Now, we have enough theory to describe an algorithm.
Algorithm 1: Multi-Task Feature Learning
- Input: training sets $\{(x_{ti}, y_{ti})\}_{i=1}^m$, $t \in \mathbb{N}_T$
- Parameters: regularization parameter $\gamma$
- Output: $d \times d$ matrix $D$ and $d \times T$ matrix $W$
- Initial value: $D = \frac{I_{d \times d}}{d}$
- while convergence condition not met:
  - for $t = 1, \dots, T$:
    - find $w_t = \operatorname{argmin} \left\{ \sum_{i=1}^m L(y_{ti}, \langle w, x_{ti} \rangle) + \gamma \langle w, D^+ w \rangle : w \in \operatorname{range}(D) \right\}$
  - end for
  - set $D = \frac{(W W^\top)^{1/2}}{\operatorname{trace}\,(W W^\top)^{1/2}}$
- end while
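To make the algorithm concrete, here is a minimal NumPy sketch for the square loss. The closed-form $w_t$ update and the $\varepsilon$-perturbation of $D$ (used to keep $D$ invertible, so that the pseudoinverse and the range constraint can be side-stepped) are our implementation choices; the function and variable names are ours, not the authors':

```python
import numpy as np

def multi_task_feature_learning(X, Y, gamma, eps=1e-6, n_iters=100, tol=1e-6):
    """Alternating minimization sketch. X: list of (m_t, d) arrays, Y: list of (m_t,) arrays."""
    T, d = len(X), X[0].shape[1]
    D = np.eye(d) / d                          # initial value: D = I/d
    W = np.zeros((d, T))
    for _ in range(n_iters):
        W_old = W.copy()
        # Supervised step: with D fixed, each task solves an independent
        # ridge-type problem  min_w ||y_t - X_t w||^2 + gamma <w, D^{-1} w>.
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        for t in range(T):
            W[:, t] = np.linalg.solve(X[t].T @ X[t] + gamma * D_inv, X[t].T @ Y[t])
        # Unsupervised step: with W fixed, set D = (W W^T)^{1/2} / trace((W W^T)^{1/2})
        # (Theorem 4.1), via an eigendecomposition of the symmetric matrix W W^T.
        eigvals, eigvecs = np.linalg.eigh(W @ W.T)
        sqrt_M = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ eigvecs.T
        D = sqrt_M / np.trace(sqrt_M)
        if np.linalg.norm(W - W_old) < tol:    # convergence condition: W stops changing
            break
    return D, W
```

A call such as `D, W = multi_task_feature_learning(X, Y, gamma=0.1)` then returns the shared structure: the leading eigenvectors of $D$ estimate the common features, and the columns of $W$ are the task parameters.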
In (9), $\operatorname{trace}\,(W W^\top)^{1/2}$ is the sum of the singular values of $W$, and thus it is sometimes called the trace norm of $W$ (this work by Jason D. M. Rennie shows how the trace norm of a matrix can be expressed in a number of different ways; a quick numerical check of the identity follows the list below). Using the trace norm, (7) reduces to a regularization problem that depends only on $W$, with penalty $\gamma \|W\|_{\mathrm{tr}}^2$. However, the trace norm is not smooth. As a result, the authors argue that their algorithm (Algorithm 1), which uses an alternating minimization strategy, is beneficial in that it:
- is simple to implement, and
- has a natural interpretation.
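The trace-norm identity mentioned above is easy to verify numerically. A quick check (ours) that the sum of the singular values of $W$ equals $\operatorname{trace}\,(W W^\top)^{1/2}$:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
W = rng.standard_normal((5, 3))

trace_norm = np.linalg.svd(W, compute_uv=False).sum()  # sum of singular values of W
via_sqrtm = np.trace(sqrtm(W @ W.T)).real              # trace((W W^T)^{1/2})
print(np.isclose(trace_norm, via_sqrtm))               # True
```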
The authors further note that Algorithm 1 alternately performs a supervised step and an unsupervised step; in the unsupervised step, common representations across the tasks are learned and, in the supervised step, task-specific functions are learned using these representations.
The authors further note that if $D$ in (7) is additionally constrained to be diagonal, then (7) reduces to the variable-selection problem (5).
Experiments
The authors performed their experiments on a synthetic data set and a real data set. In both cases, they used the square loss function and tuned the regularization parameter $\gamma$ using leave-one-out cross-validation.
The authors created synthetic data sets by generating $T = 200$ task parameters $w_t$ from a 5-dimensional Gaussian distribution with zero mean and covariance equal to $\operatorname{Diag}(1, 0.64, 0.49, 0.36, 0.25)$. More details regarding the authors' synthetic data can be found in the paper listed in the Reference section.
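A hedged sketch of how such tasks could be generated; the number of examples per task, the noise level, and the input distribution below are our assumptions, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(3)
T, m, n_irrelevant = 200, 5, 10      # tasks, examples per task (assumed), extra dims

cov = np.diag([1.0, 0.64, 0.49, 0.36, 0.25])
W_true = rng.multivariate_normal(np.zeros(5), cov, size=T).T   # 5 x T task parameters
W_true = np.vstack([W_true, np.zeros((n_irrelevant, T))])      # irrelevant dimensions

d = W_true.shape[0]
X = [rng.standard_normal((m, d)) for _ in range(T)]            # assumed inputs
Y = [X[t] @ W_true[:, t] + 0.1 * rng.standard_normal(m) for t in range(T)]
```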
The following figure (taken from the authors' paper listed in the Reference section) shows the performance of the authors' algorithm for 10, 25, 100, and 200 tasks, alongside the performance of 200 independent standard ridge regressions on the same data:
Consistent with past empirical and theoretical work, the authors' experiment on synthetic data showed that learning multiple tasks together yields significant improvements over learning the tasks independently. Furthermore, this experiment demonstrated the following two key points:
- The performance of the authors' algorithm improves as the number of tasks increases, i.e. adding more tasks results in better estimates of the underlying features.
- The advantage of the authors' algorithm over independent learning is moderate when the data have few irrelevant dimensions, but it grows as the number of irrelevant dimensions increases.
Conjoint analysis experiment
The authors then tested their method on a real data set containing 180 persons' ratings of their likelihood of purchasing each of 20 different personal computers. More details regarding this data can be found in the paper listed in the Reference section. In this data set, the people corresponded to tasks and the PC models corresponded to training instances.
The following figure (taken from the authors' paper listed in the Reference section) shows that the performance of the authors' algorithm improves as the number of tasks increases.
The authors also showed that their algorithm performs much better than independent ridge regressions.
The plot in the middle shows that there is a single most important feature that is shared by all people. This feature seems to capture how people weigh the technical characteristics of a computer (such as RAM, CPU, and CD-ROM) against the computer's price. The authors thus concluded that their algorithm is able to find interesting patterns in people’s purchasing decision processes.
The authors have presented an algorithm that learns common sparse function representations across a pool of related tasks. In this algorithm, the number of learned features is not a parameter, and it is instead controlled by the algorithm's regularization parameter. This algorithm also appears to be the first convex optimization formulation for multi-task feature learning.
Reference
Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-Task Feature Learning. Advances in Neural Information Processing Systems 19 (NIPS 2006).