
Cosine similarity

A typical problem when analyzing large amounts of text is measuring the similarity of documents. An established measure for this is cosine similarity.

I.

It’s the cosine of the angle between two vectors. Two vectors have the maximum cosine similarity of 1 if they are parallel, a similarity of 0 if they are perpendicular to each other, and the minimum of -1 if they point in exactly opposite directions.

Say you have two documents $d_1$ and $d_2$. Write these documents as vectors $w_1, w_2 \in \mathbb{R}^N$, where $N$ is the number of all words that show up in either of the two documents. An entry is the number of occurrences of a particular word in a document. Cosine similarity is then (Manning et al. 2008):

$$\text{sim}(w_1, w_2) = \frac{w_1 \cdot w_2}{\lVert w_1 \rVert \, \lVert w_2 \rVert} = \frac{\sum_{i=1}^{N} w_{1,i} \, w_{2,i}}{\sqrt{\sum_{i=1}^{N} w_{1,i}^2} \, \sqrt{\sum_{i=1}^{N} w_{2,i}^2}}$$

Given that the entries are word counts and cannot be negative, cosine similarity here always takes non-negative values. The division by $\lVert w_1 \rVert \, \lVert w_2 \rVert$ in the denominator normalizes for document length and bounds the values between 0 and 1.

Cosine similarity is equal to the usual (Pearson’s) correlation coefficient, if we first demean the word vectors.
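
To see why, note that Pearson’s correlation coefficient is

$$\text{corr}(w_1, w_2) = \frac{\sum_{i=1}^{N} (w_{1,i} - \bar{w}_1)(w_{2,i} - \bar{w}_2)}{\sqrt{\sum_{i=1}^{N} (w_{1,i} - \bar{w}_1)^2} \, \sqrt{\sum_{i=1}^{N} (w_{2,i} - \bar{w}_2)^2}}$$

where $\bar{w}_1$ and $\bar{w}_2$ are the means of the entries. This is exactly the cosine similarity formula above, applied to the demeaned vectors $w_1 - \bar{w}_1$ and $w_2 - \bar{w}_2$.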

II.

Consider a dictionary of three words. Let’s define (in Matlab) three documents that contain some of these words:

w1 = [1; 0; 0];
w2 = [0; 1; 1];
w3 = [1; 0; 10];

W = [w1, w2, w3];

Calculate the correlation between these:

corr(W)

Which gets us:

ans =

    1.0000   -1.0000   -0.4193
   -1.0000    1.0000    0.4193
   -0.4193    0.4193    1.0000

Documents 1 and 2 have the lowest possible correlation of -1, while documents 2 and 3 are moderately positively correlated and documents 1 and 3 moderately negatively correlated.

Define a function for cosine similarity:

function cs = cosine_similarity(x1, x2)
  % Euclidean lengths of the two word vectors
  l1 = sqrt(sum(x1 .^ 2));
  l2 = sqrt(sum(x2 .^ 2));

  % Inner product of the vectors, normalized by both lengths
  cs = (x1' * x2) / (l1 * l2);
end
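
Matlab’s built-in dot and norm functions would let us write the same calculation as a one-liner, a sketch equivalent to the function above for our column vectors:

cs = dot(x1, x2) / (norm(x1) * norm(x2));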

And calculate the values for our word vectors:

cosine_similarity(w1, w2)
cosine_similarity(w2, w3)
cosine_similarity(w1, w3)

Which gets us:

ans =

     0


ans =

    0.7036


ans =

    0.0995

Documents 1 and 2 again have the lowest possible similarity of 0 given non-negative entries. The association between documents 2 and 3 is especially high, as both contain the third word in the dictionary, which also happens to be of particular importance in document 3.
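
As a check, the second value can be computed by hand. Plugging $w_2$ and $w_3$ into the formula from above:

$$\text{sim}(w_2, w_3) = \frac{0 \cdot 1 + 1 \cdot 0 + 1 \cdot 10}{\sqrt{0^2 + 1^2 + 1^2} \, \sqrt{1^2 + 0^2 + 10^2}} = \frac{10}{\sqrt{2} \sqrt{101}} \approx 0.7036$$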

Demean the vectors and then run the same calculation:

cosine_similarity(w1 - mean(w1), w2 - mean(w2))
cosine_similarity(w2 - mean(w2), w3 - mean(w3))
cosine_similarity(w1 - mean(w1), w3 - mean(w3))

Producing:

ans =

   -1.0000


ans =

    0.4193


ans =

   -0.4193

They’re indeed the same as the correlations.
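
As a final check, here is a minimal sketch that performs the whole comparison in one vectorized step, assuming a Matlab version with implicit expansion (R2016b or newer) and the matrix W from above:

% Demean each column, then scale it to unit length
Wd = W - mean(W, 1);
Wdn = Wd ./ sqrt(sum(Wd .^ 2, 1));

% The matrix of inner products of the demeaned, normalized columns
% reproduces corr(W); the difference is zero up to rounding error
max(max(abs(Wdn' * Wdn - corr(W))))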

References

Manning, C. D., P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press.