## Probabilistic Recognition

Moderator: Computer Vision

diane
Windoof-User Posts: 25
Registered: 21 Jun 2009 17:33

### Probabilistic Recognition

I would like to clarify some details here for the people who asked questions this afternoon.

During today's office hour, we discussed the general principle of global-histogram-based probabilistic recognition.

In the "histograms" slides, slides 39 to 52, a more sophisticated framework is described.
In fact, m_k is not just a histogram bin; it is a feature vector, as described in slide 43.
The idea behind this technique is to avoid extracting all the information from the image and instead use a small number of feature vectors. See slide 45 for an illustration.
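For reference, the recognition rule behind that framework (from the paper linked below) combines the independent local measurements with Bayes' rule; the formula on slide 43 has this shape:

```latex
p(o_n \mid m_1, \dots, m_K)
  = \frac{p(o_n)\,\prod_{k=1}^{K} p(m_k \mid o_n)}
         {\sum_i p(o_i)\,\prod_{k=1}^{K} p(m_k \mid o_i)}
```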

If you need to go further than the information provided in the slides, please check the paper
http://alumni.media.mit.edu/~bernt/Pubs/icpr96.ps.gz

Pavel
Windoof-User Posts: 37
Registered: 21 Oct 2006 23:42

### Re: Probabilistic Recognition

In short, my question is: does probabilistic recognition save us the distance computations between histograms?

Even after looking at the paper, this method is not clear to me. What exactly is stored in the training phase? Currently I think of a normalized histogram over the whole image, which represents the probability density for, let's say, color.

Now comes testing. I take K local measurements M_i. According to the paper, M_i is a single multidimensional receptive field vector. (What?!) As "local" was the last word I understood, I think of M_i as a local patch of the image. I could now again compute a histogram over that patch. But what is p(M_i|O_j) then? Do I have to use a distance measure to compare the patch histogram to the histogram from training?

diane
Windoof-User Posts: 25
Registered: 21 Jun 2009 17:33

### Re: Probabilistic Recognition

A local measurement M_k (not necessarily a patch, but a value extracted at a given location) is extracted from the image.
p(M_k|O_n) comes from learning: it is the normalized histogram that represents the probability density function of object O_n.
There is no distance measure involved in either the training part or the testing part.
During testing, for each measurement M_k, one bin is "activated", and the corresponding probability (learnt during training) is used as one term of the product in the formula on slide 43.
As a remark: k means the k'th measurement and does not tell you anything about the bin you will use.
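To make the two phases concrete, here is a minimal Python sketch (the function names, the 1-D colour feature, and the bin count are my own illustration, not from the slides): training stores one normalized histogram per object; testing looks up one bin probability per measurement and multiplies them (in log space) with the prior. No histogram-to-histogram distance appears anywhere.

```python
import numpy as np

N_BINS = 8  # discretization of a 1-D colour feature in [0, 256) (illustrative choice)

def train_histogram(values):
    """Training: normalized histogram of feature values in [0, 256).

    Each entry is p(m | o_n) for the corresponding bin.
    """
    hist, _ = np.histogram(values, bins=N_BINS, range=(0, 256))
    return hist / hist.sum()

def recognize(measurements, histograms, priors):
    """Testing: return argmax_n of p(o_n) * prod_k p(M_k | o_n).

    Each measurement "activates" one bin; its learnt probability is one
    term of the product. Done in log space to avoid underflow.
    """
    bins = np.clip(np.asarray(measurements) * N_BINS // 256, 0, N_BINS - 1)
    scores = [np.log(priors[n]) + np.sum(np.log(h[bins] + 1e-12))
              for n, h in enumerate(histograms)]
    return int(np.argmax(scores))
```

For example, after training one histogram on dark pixel values and one on bright ones, a handful of dark test measurements is enough to pick the first object.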

DanielR
Mausschubser Posts: 83
Registered: 19 Feb 2008 13:15

### Re: Probabilistic Recognition

Ah, I think it is clear to me now.

However, I still think that "histogram bin" and "feature vector" can be used interchangeably in this approach, at least when we use global histograms to train our probability density functions.

Here is some reasoning why I think so: if we use, for example, RGB histograms in our approach, then all the possible feature vectors are simply described by the set of all possible triples {0, ..., 255}³.

When it comes to training, two feature vectors m_i and m_j that fall into the same bin because of our discretization are assigned the same probability P(o_n|m_j) = P(o_n|m_i) for all possible objects. This is because the probability is calculated by taking the value of the bin of m_i (or m_j, respectively) in the histogram for object o_n and dividing it by the sum of the values of this particular bin over the histograms of all the different objects.

During testing, we just consider a small set of feature vectors and look up, from our trained probability density functions, how these features "vote" for a certain object. To do so, we have to find the corresponding probability density function. As it is the same for feature vectors that are "indistinguishable" under our discretization, we simply use the one associated with the correct "bin".
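That argument can be checked with a tiny numerical sketch (the toy histograms, the 1-D feature, and the equal-priors assumption are my own invention, just to illustrate the point): two feature vectors landing in the same bin get exactly the same posterior for every object.

```python
import numpy as np

N_BINS = 4  # coarse discretization of a 1-D feature in [0, 256)

# Toy training result: one normalized histogram p(m | o_n) per object.
hists = np.array([[0.70, 0.20, 0.10, 0.00],
                  [0.10, 0.10, 0.40, 0.40],
                  [0.25, 0.25, 0.25, 0.25]])

def posterior(feature):
    """P(o_n | m): the bin value for object o_n divided by the sum of
    that bin's values over all objects (equal priors assumed)."""
    b = min(feature * N_BINS // 256, N_BINS - 1)
    col = hists[:, b]
    return col / col.sum()

# Features 5 and 60 both fall into bin 0, so they are "indistinguishable"
# under this discretization and share one posterior vector.
```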

Right?

Pavel
Windoof-User Posts: 37
Registered: 21 Oct 2006 23:42

### Re: Probabilistic Recognition

Thanks for the clarification - I think I understand it now. Just too bad that the probability of this being asked tomorrow is close to zero.