Principal Component Analysis class
The class is used to compute a special basis for a set of vectors.
The basis consists of eigenvectors of the covariance matrix computed
from the input set of vectors. The class PCA can also transform
vectors to/from the new coordinate space defined by this basis.
Usually, in the new coordinate system, each vector from the original
set (and any linear combination of such vectors) can be approximated
quite accurately by taking only its first few components, which
correspond to the eigenvectors with the largest eigenvalues of the
covariance matrix. Geometrically, this means projecting the vector
onto the subspace spanned by a few eigenvectors corresponding to the
dominant eigenvalues of the covariance matrix; such a projection is
usually very close to the original vector. The original vector from a
high-dimensional space can therefore be represented by a much shorter
vector holding its coordinates in that subspace. This transformation
is also known as the Karhunen-Loève transform (KLT). See
PCA.
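The projection described above can be reproduced in base MATLAB; the
following is a minimal sketch (no cv.PCA involved; the variable names
are illustrative, and it uses svd of the centered data rather than
forming the covariance matrix explicitly, which yields the same
eigenvectors):

```matlab
X = randn(100, 5);            % 100 sample vectors, 5 dimensions each
mu = mean(X, 1);              % sample mean of the set
Xc = bsxfun(@minus, X, mu);   % center the data around the mean
[~, ~, V] = svd(Xc, 'econ');  % columns of V are eigenvectors of the
                              % covariance matrix, sorted by eigenvalue
k = 2;                        % keep the k dominant components
W = V(:, 1:k);                % basis of the dominant subspace
Y = Xc * W;                   % 100-by-2 coordinates in the subspace
Xapprox = Y * W' + repmat(mu, 100, 1);  % back-projection into 5-D
```

The project and backProject methods of cv.PCA perform essentially
these steps, with the mean and eigenvectors stored inside the object.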
Example
The following is a quick example of reducing the dimensionality of
samples from 10 to 3.
Xtrain = randn(100,10);                   % 100 training samples, 10-D
Xtest = randn(100,10);                    % 100 test samples, 10-D
pca = cv.PCA(Xtrain, 'MaxComponents',3);  % learn a 3-component basis
Y = pca.project(Xtest);                   % 100-by-3 projected coordinates
Xapprox = pca.backProject(Y);             % reconstruction in 10-D space
The class also implements the save/load pattern with regular MAT-files,
so we can do the following:
pca = cv.PCA(randn(100,5));   % create and train a PCA object
save out.mat pca              % serialize the object to a MAT-file
clear pca                     % remove it from the workspace
load out.mat                  % restore the pca variable from the file
disp(pca)                     % the loaded object is ready to use