There are many different ways to classify things, and this kind of problem is tackled extensively in data mining.
One similarity technique in data mining that uses what are called "codebook vectors" relates to self-organizing maps:
Self-organizing map - Wikipedia, the free encyclopedia
However, if you have a distribution, then one technique at the heart of statistical inference is a statistical test of whether two parameters are significantly the same or different.
An example of this is done with two means: we can construct a hypothesis, under various models, to test whether one population mean is the same as another, with different samples corresponding to different populations.
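As a concrete sketch of that two-means test (the sample values below are made up purely for illustration):

```python
# Two-sample t-test: are these two population means plausibly equal?
from scipy import stats

sample_a = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1]  # hypothetical sample, population A
sample_b = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]  # hypothetical sample, population B

# Welch's t-test (does not assume equal variances in the two populations).
t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=False)

# A small p-value means we reject the hypothesis that the means are equal.
print(t_stat, p_value)
```

Here the samples are clearly separated, so the p-value comes out tiny and we would conclude the means differ.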
Now one analogue (and it's far from the only one) is this: if you have, say, a multi-variable parameterized distribution for your vector, or some parameterization that accurately captures the "thing" you want to test, then, provided you know its distribution, you can do a statistical hypothesis test of whether the two are the same.
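One standard instance of this multivariate analogue is Hotelling's two-sample T² test, which compares whole mean vectors. This is a sketch under the assumption that both samples are multivariate normal with a common covariance; the generated data are purely illustrative:

```python
# Hotelling's two-sample T^2 test: the multivariate analogue of the
# two-sample t-test for comparing mean *vectors*.
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Return the T^2 statistic and p-value for equality of mean vectors."""
    n1, p = X.shape
    n2, _ = Y.shape
    diff = X.mean(axis=0) - Y.mean(axis=0)
    # Pooled sample covariance of the two groups.
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    # T^2 is a scaled F statistic under the null hypothesis.
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value

rng = np.random.default_rng(0)
X = rng.normal([0, 0], 1.0, size=(50, 2))   # mean vector (0, 0)
Y = rng.normal([5, 5], 1.0, size=(50, 2))   # clearly different mean vector
t2, p = hotelling_t2(X, Y)
```

With mean vectors five standard deviations apart, the test rejects equality decisively.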
However, before turning to the above idea of statistical inference, the first thing I would recommend (in spite of the above) is that you look at defining a metric on your space: a "distance" function between the two vectors, which you can then use to decide similarity.
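For instance, here are two common metrics on vectors; which one is "right" depends on what similarity should mean for your data (differences in magnitude versus differences in direction). The vectors are arbitrary examples:

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 2.0])

# Euclidean distance: sensitive to differences in magnitude.
euclidean = np.linalg.norm(u - v)

# Cosine distance: compares direction only, ignoring magnitude.
cosine = 1 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
```

Two vectors can be far apart in Euclidean terms yet nearly identical in cosine terms (or vice versa), which is exactly why the choice of metric matters.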
Data mining does exactly this, in a variety of contexts: it assigns different kinds of metrics between points and uses a variety of tests to see whether things are "similar". Usually, the really advanced part is transforming the data from one space to another and then applying a metric in the new space rather than the old one.
There are lots of reasons to transform data into a new space (sometimes a much higher-dimensional one, as with kernel methods, or a lower-dimensional one, as in multi-dimensional scaling), but the real core of this is to obtain a particular attribute that you don't see in the un-transformed space.
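One of the simplest examples of transform-then-measure is z-score standardization: in the raw space a Euclidean metric is dominated by whichever feature has the largest scale, while in the standardized space every feature contributes on equal footing. The data below are made up for illustration:

```python
import numpy as np

# Two features on wildly different scales (roughly 0-1 vs. hundreds).
X = np.array([[0.1, 200.0],
              [0.5, 800.0],
              [0.9, 500.0],
              [0.3, 100.0]])

# Transform to the new space: zero mean, unit variance per feature.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Apply the metric in the transformed space instead of the raw one.
d_raw = np.linalg.norm(X[0] - X[1])   # dominated by the second feature
d_std = np.linalg.norm(Xs[0] - Xs[1])  # both features matter
```

In the raw space the first feature is essentially invisible to the metric; after the transform it carries the same weight as the second, which is the "particular attribute" the new space buys you here.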