Updating formula for the sample covariance and correlation
If the covariance matrix of our data is a diagonal matrix, such that the covariances are zero, then the variances must be equal to the eigenvalues.
This is illustrated by figure 4, where the eigenvectors are shown in green and magenta, and where the eigenvalues clearly equal the variance components of the covariance matrix.
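This claim is easy to check numerically. The sketch below uses a made-up diagonal covariance matrix with variances 4 and 1; the eigenvalues come out as exactly those variances:

```python
import numpy as np

# Hypothetical diagonal covariance matrix: variances 4 and 1, zero covariance.
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])

# Eigen-decomposition of a symmetric matrix (eigenvalues in ascending order).
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# For a diagonal covariance matrix, the eigenvalues equal the diagonal
# entries (the variances), and the eigenvectors are axis-aligned.
print(sorted(eigenvalues))
```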
Equation (13) holds for each eigenvector-eigenvalue pair of the covariance matrix.
In the 2D case, we obtain two eigenvectors and two eigenvalues.
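The eigenvector equation can be verified numerically for both pairs at once; the 2x2 covariance matrix below is a made-up example:

```python
import numpy as np

# Hypothetical 2x2 covariance matrix (symmetric, positive definite).
cov = np.array([[3.0, 1.0],
                [1.0, 2.0]])

# In the 2D case we obtain two eigenvalues and two eigenvectors
# (the columns of `eigenvectors`).
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# The eigenvector equation cov @ v = lambda * v holds for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(cov @ v, lam * v)
```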
In statistics, this is often referred to as 'white data' because its samples are drawn from a standard normal distribution and therefore correspond to white (uncorrelated) noise. However, although equation (12) holds when the data is scaled in the x and y directions, the question arises whether it also holds when a rotation is applied.
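White data is straightforward to simulate: drawing samples from a standard normal distribution yields a sample covariance matrix close to the identity, with unit variances and near-zero covariances. A minimal sketch (sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# "White" data: samples drawn from a 2D standard normal distribution.
white = rng.standard_normal((100_000, 2))

# The sample covariance matrix should be close to the identity matrix:
# unit variances on the diagonal, near-zero covariances off it.
sample_cov = np.cov(white, rowvar=False)
```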
However, if the covariance matrix is not diagonal, such that the covariances are not zero, then the situation is a little more complicated.
The eigenvalues still represent the variance magnitude in the direction of the largest spread of the data, and the variance components of the covariance matrix still represent the variance magnitude in the direction of the x-axis and y-axis. However, since the data is no longer axis-aligned, these two sets of values are not equal anymore.
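This difference is easy to see numerically. For the made-up non-diagonal covariance matrix below, the diagonal entries (variances along the axes) and the eigenvalues (variances along the directions of largest and smallest spread) no longer coincide, although their sums, the total variance, still agree:

```python
import numpy as np

# Hypothetical non-diagonal covariance matrix (correlated data).
cov = np.array([[4.0, 1.5],
                [1.5, 2.0]])

# Diagonal entries: variances along the x-axis and y-axis.
axis_variances = np.diag(cov)

# Eigenvalues: variances along the directions of largest/smallest
# spread, which are no longer axis-aligned.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
```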
Each of the examples in figure 3 can simply be considered to be a linearly transformed instance of figure 6: D = S D', where D' is the white data of figure 6, S = diag(s_x, s_y) is a scaling matrix, and s_x and s_y are the scaling factors in the x direction and the y direction respectively.
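A quick sketch of this transformation (the scaling factors s_x and s_y below are made-up values): scaling white data stretches its variances by the squares of the scaling factors, while the covariances stay near zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# White data: identity covariance.
white = rng.standard_normal((100_000, 2))

# Hypothetical scaling factors in the x and y directions.
sx, sy = 2.0, 0.5
S = np.array([[sx, 0.0],
              [0.0, sy]])

# Rows of `white` are points, so right-multiply by S.T
# (equivalent to S @ point for column vectors).
scaled = white @ S.T

# The covariance of the scaled data is approximately S @ I @ S.T,
# i.e. a diagonal matrix with variances sx**2 and sy**2.
scaled_cov = np.cov(scaled, rowvar=False)
```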
In the following paragraphs, we will discuss the relation between the covariance matrix and the linear transformation matrix.
So, if we would like to represent the covariance matrix with a vector and its magnitude, we should simply try to find the vector that points in the direction of the largest spread of the data, and whose magnitude equals the spread (variance) in that direction.
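That vector is exactly the eigenvector belonging to the largest eigenvalue. The sketch below generates correlated data from a made-up covariance matrix whose largest spread lies along the diagonal direction (1, 1)/sqrt(2), and recovers that direction and its variance from the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical covariance with largest spread along (1, 1)/sqrt(2);
# its true eigenvalues are 3.5 and 0.5.
cov = np.array([[2.0, 1.5],
                [1.5, 2.0]])
data = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)

sample_cov = np.cov(data, rowvar=False)

# The eigenvector with the largest eigenvalue points in the direction of
# largest spread; its eigenvalue is the variance in that direction.
eigenvalues, eigenvectors = np.linalg.eigh(sample_cov)  # ascending order
largest_direction = eigenvectors[:, -1]
largest_variance = eigenvalues[-1]
```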