## A3 P2

Moderator: Computer Vision 2

Dragon
Mausschubser Posts: 80
Registered: 18 Apr 2006 15:36

### A3 P2

I can't manage to compute `mrf_grad_log_prior` correctly.

I tried to compute the derivative analytically (taking the derivative of the Student-t distribution and dividing it by the Student-t distribution itself) and then doing this for all four neighbors of a pixel.
My idea was to construct a matrix for each neighborhood:
one for the left neighbor of each pixel in my original matrix, one for the right, and so on.
Can anyone tell me what's wrong here?

Another problem is that I don't know how to treat the borders efficiently.
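For reference, "the derivative divided by the distribution" is exactly the derivative of the log potential. Assuming the common unnormalized Student-t parameterisation (the symbols $$\alpha$$ and $$\sigma$$ are my notation, not necessarily the assignment's):

$$\psi(z) = \left(1 + \frac{z^2}{2\sigma^2}\right)^{-\alpha}, \qquad \frac{\mathrm{d}}{\mathrm{d}z}\log\psi(z) = \frac{\psi'(z)}{\psi(z)} = -\frac{\alpha z}{\sigma^2 + z^2/2}$$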

>flo<
Erstie Posts: 20
Registered: 6 Sep 2005 18:08

### Re: A3 P2

Try computing the gradient potentials for the horizontal direction and for the vertical direction in the same way. Then you can add/subtract these results to get the result composed of the individual neighborhood potentials. Try sketching the situation for a small example (e.g. 3x3) on a sheet of paper--that helps.
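A minimal numpy sketch of this horizontal/vertical idea. The Student-t parameterisation and the names `g`, `log_prior`, `alpha`, `sigma` are my assumptions, not the assignment's; plug in the actual values from the exercise sheet:

```python
import numpy as np

# Sketch of mrf_grad_log_prior, assuming a pairwise MRF whose potential
# on each neighbour difference z is an unnormalised Student-t,
# psi(z) = (1 + z^2/(2*sigma^2))^(-alpha).

def g(z, alpha=0.8, sigma=1.0):
    """Derivative of log psi(z) = psi'(z) / psi(z)."""
    return -alpha * z / (sigma**2 + 0.5 * z**2)

def log_prior(x, alpha=0.8, sigma=1.0):
    """Sum of log potentials over horizontal and vertical pixel pairs."""
    dh = np.diff(x, axis=1)              # x[i, j+1] - x[i, j]
    dv = np.diff(x, axis=0)              # x[i+1, j] - x[i, j]
    lp = lambda z: -alpha * np.log1p(z**2 / (2.0 * sigma**2))
    return lp(dh).sum() + lp(dv).sum()

def mrf_grad_log_prior(x, alpha=0.8, sigma=1.0):
    """Gradient of log_prior with respect to every pixel."""
    x = np.asarray(x, dtype=float)
    gh = g(np.diff(x, axis=1), alpha, sigma)   # horizontal pair terms
    gv = g(np.diff(x, axis=0), alpha, sigma)   # vertical pair terms
    grad = np.zeros_like(x)
    # For d = x[right] - x[left], the inner derivative is +1 for the
    # right pixel and -1 for the left one, hence the opposite signs.
    grad[:, 1:] += gh
    grad[:, :-1] -= gh
    grad[1:, :] += gv
    grad[:-1, :] -= gv
    # Borders need no special case: edge pixels simply take part in
    # fewer pairwise terms, which the slicing above already handles.
    return grad
```

The 3x3 paper sketch corresponds exactly to the four slice assignments: each difference image is added once with a plus and once with a minus sign.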

Computerversteher Posts: 353
Registered: 2 Oct 2006 18:53

### Re: A3 P2

Could someone give me a hint on what "should be small (in absolute value)" means?
Is small < 10^-5, or is small < 1?

I think I implemented it correctly and I'm getting a maximum absolute difference of 0.25 and an average distance of 0.09. This somehow seems to be too big.

SebFreutel
Computerversteher Posts: 317
Registered: 30 Oct 2006 21:54

### Re: A3 P2

Maradatscha wrote: I think I implemented it correctly and I'm getting a maximum absolute difference of 0.25 and an average distance of 0.09.
I get very similar values.
If I make a scatterplot of the gradient values of each pixel (of a 10x10 patch from la.png or a random image) against the corresponding estimated gradients (something like plot(g1(:),g2(:),'rx') at the end of test_grad.m), I get an uncorrelated point cloud in the x and y range -0.25 to 0.25, so the analytical and estimated gradients seem to have nothing in common.
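Instead of eyeballing a scatterplot, a per-pixel central-difference check localises such a mismatch immediately. A sketch (the quadratic smoothness energy is only a stand-in for the actual log prior, and `check_grad` is a hypothetical helper name):

```python
import numpy as np

def check_grad(energy, grad, x, h=1e-5):
    """Max abs deviation of grad(x) from central differences of energy."""
    num = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp = x.copy(); xp[idx] += h
        xm = x.copy(); xm[idx] -= h
        num[idx] = (energy(xp) - energy(xm)) / (2.0 * h)
    return np.max(np.abs(grad(x) - num))

# Stand-in energy: half the sum of squared neighbour differences.  Its
# gradient shows the same +/- pattern from the inner derivative that a
# Student-t log prior has.
def energy(x):
    return 0.5 * (np.diff(x, axis=1) ** 2).sum() + \
           0.5 * (np.diff(x, axis=0) ** 2).sum()

def grad_energy(x):
    dh = np.diff(x, axis=1)
    dv = np.diff(x, axis=0)
    out = np.zeros_like(x)
    out[:, 1:] += dh
    out[:, :-1] -= dh
    out[1:, :] += dv
    out[:-1, :] -= dv
    return out
```

Run the same check on your own energy/gradient pair; a deviation of 0.25 points at a systematic bug (typically a sign), not at numerical noise.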

btw, this is actually A3 P3.

/edit, solved: okay, the mistake was that in the gradient computation one cannot simply take the sum of the partial derivatives; one has to pay attention to the signs, since there is sometimes an inner derivative of -1 coming from terms like $$\frac{\partial}{\partial x_{i,k}}(x_{i+1,k} - x_{i,k}) = -1$$.
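Spelled out for one pixel and the vertical direction (writing $$g = (\log\psi)'$$, my notation), the two terms that involve $$x_{i,k}$$ pick up opposite signs:

$$\frac{\partial}{\partial x_{i,k}}\Bigl[\log\psi(x_{i+1,k} - x_{i,k}) + \log\psi(x_{i,k} - x_{i-1,k})\Bigr] = -\,g(x_{i+1,k} - x_{i,k}) + g(x_{i,k} - x_{i-1,k})$$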