A3 P2 - calculating class-conditional probabilities

Moderator: Computer Vision

Flix
Erstie
Posts: 20
Joined: 14 Jan 2013 12:37

A3 P2 - calculating class-conditional probabilities

Post by Flix » 1 Jul 2013 22:57

[edit: I already came across the answer, see below]

Hi,

the assignment states the following formula for computing keypoint probabilities with Laplace smoothing:

\(p(\mu_i | C_j) = \frac{\displaystyle 1+\sum_{I_m \in C_j} n(i,m)}{\displaystyle k + \sum_{t=1}^k \sum_{I_m \in C_j} n(t,m)}\)

while the slides (lecture 7, p. 26) contain this formula:

\(p(w | C) = \frac{\displaystyle 1 + \sum_{i:y_i=C} | \{t : x_{it} = w\}|}{\displaystyle m + \sum_i n_i}\)

I am only concerned about the denominator, which, as far as I can tell, expresses something different in the two formulas.

\(k + \sum_{t=1}^k \sum_{I_m \in C_j} n(t,m)\)

sums up the occurrences of all keypoints in image class \(C_j\).

\(m + \sum_i n_i\)

sums up the occurrences of all keypoints, regardless of the image's class.

We are looking for the relative frequency of a keypoint k in a class A, which should be \(\frac{\text{occurrences of } k \text{ in } A}{\text{total occurrences of } k}\), so I tend toward the latter alternative.

What am I missing?
Last edited by Flix on 1 Jul 2013 23:37, edited 1 time in total.

Flix
Erstie
Posts: 20
Joined: 14 Jan 2013 12:37

Re: A3 P2 - calculating class-conditional probabilities

Post by Flix » 1 Jul 2013 23:36

After some more thinking, I realized that the relative frequency of a feature k in A is of course \(\frac{\text{occurrences of } k \text{ in } A}{\text{number of features in } A}\). So my interpretation of the formula on the exercise sheet is the correct one.

lustiz
Mausschubser
Posts: 70
Joined: 29 Apr 2009 10:28

Re: A3 P2 - calculating class-conditional probabilities

Post by lustiz » 1 Jul 2013 23:47

Hi, I just took a look at the lecture slides and, indeed, they are not quite clear about this. However, I think the slides actually correspond to the assignment version, just with the class index dropped. The key point of Naive Bayes is its conditional independence assumption, and here we are conditioning on the class j, so you do not really want to sum over all other classes.

You want to compute how likely a certain mean i is, given that you are in class j. So you divide the number of occurrences of mean i in class j by the total number of occurrences of all means in class j (and add smoothing).

EDIT: alrighty :P
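For reference, the estimate discussed above can be sketched in a few lines of numpy. This is just an illustration, not the official assignment solution; the names (`counts`, `labels`, `class_conditional_probs`) and the count-matrix layout `counts[m, i] = n(i, m)` are my own assumptions, not from the exercise sheet:

```python
import numpy as np

def class_conditional_probs(counts, labels, num_classes):
    """Laplace-smoothed p(mu_i | C_j) for each class j and keypoint cluster i.

    counts : (num_images, k) array, counts[m, i] = n(i, m)
    labels : (num_images,) array of class indices for each image I_m
    """
    k = counts.shape[1]  # number of keypoint clusters (means)
    probs = np.zeros((num_classes, k))
    for j in range(num_classes):
        # sum n(i, m) over all images I_m in class C_j
        class_counts = counts[labels == j].sum(axis=0)
        # numerator: 1 + sum n(i, m); denominator: k + sum over all clusters
        probs[j] = (1 + class_counts) / (k + class_counts.sum())
    return probs

# Toy example: 4 images, 3 keypoint clusters, 2 classes
counts = np.array([[2, 0, 1],
                   [1, 1, 0],
                   [0, 3, 0],
                   [0, 2, 1]])
labels = np.array([0, 0, 1, 1])
p = class_conditional_probs(counts, labels, 2)
print(p[0])  # [0.5, 0.25, 0.25] -- each row sums to 1 by construction
```

Note that each row sums to 1 exactly because the denominator sums over all k clusters of the same class, which is why the per-class version of the denominator is the right one.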

Post Reply

Return to "Computer Vision"