Expectation Maximization & Belief Propagation


Presentation Transcript

Slide 1

Expectation Maximization & Belief Propagation. Alan Yuille, Dept. Statistics, UCLA.

Slide 2

Goal of this Talk. The objective is to introduce the Expectation-Maximization (EM) and Belief Propagation (BP) algorithms. EM is one of the major algorithms used for inference in models with hidden/missing/latent variables.

Slide 3

Example: Geman and Geman

Slide 4

Images are piecewise smooth. Assume that images are smooth except at sharp discontinuities (edges). This is supported by the statistics of real images (Zhu & Mumford).

Slide 5

Graphical Model & Potential. The graphical model: an undirected graph (hidden Markov model). The potential: if the gradient in u becomes too large, the line process is activated and the smoothness term is cut.

Slide 6

The Posterior Distribution. We apply Bayes' rule to obtain a posterior distribution:
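The equation itself did not survive transcription. For a 1-D signal, the weak-membrane form of this posterior is usually written as follows (the coupling constants λ, μ and the binary line process l are the standard notation, not taken from the slide):

$$P(u, l \mid d) \;=\; \frac{P(d \mid u)\, P(u, l)}{P(d)} \;\propto\; \exp\{-E(u, l)\},$$

$$E(u, l) \;=\; \sum_i (u_i - d_i)^2 \;+\; \lambda \sum_i (u_{i+1} - u_i)^2 \,(1 - l_i) \;+\; \mu \sum_i l_i.$$

Setting $l_i = 1$ pays the penalty $\mu$ but switches off the smoothness term, which is the "cut" described on the previous slide.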

Slide 7

Line Process: Off and On. (Figure: illustration of line processes, with edge present vs. no edge.)

Slide 8

Choice of Task. What do we want to estimate?

Slide 9

Expectation Maximization.
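The slide's equations are missing from the transcript. The standard EM iteration for data d, hidden variables h, and parameters θ is (this is the generic textbook form, not copied from the slide):

$$Q(\theta, \theta^t) \;=\; \sum_h P(h \mid d, \theta^t)\, \log P(d, h \mid \theta),$$

$$\theta^{t+1} \;=\; \arg\max_\theta \; Q(\theta, \theta^t).$$

The E step computes the posterior over the hidden variables at the current parameters θ^t; the M step maximizes the resulting expected complete-data log-likelihood.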

Slide 10


Slide 11

Back to the Geman & Geman model

Slide 12

Image Example

Slide 13

Neural Networks and the Brain. An early variant of this algorithm was formulated as a Hopfield network: Koch, Marroquin, Yuille (1987). It is plausible that a variant of this algorithm is implemented in V1 – Prof. Tai Sing Lee (CMU).

Slide 14

EM for a Mixture of two Gaussians. A mixture model is of the form:

Slide 15

EM for a Mixture of two Gaussians. Each observation has been generated by one of two Gaussians. However, we know neither the parameters (i.e. mean and variance) of the Gaussians nor which Gaussian generated each observation. Colours indicate the assignment of points to clusters (red and blue); intermediate colours (e.g. purple) represent probabilistic assignments. The ellipses represent the current parameter estimates of each cluster.
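A minimal runnable sketch of this E-step/M-step loop for two 1-D Gaussians. The function name, initial values, and floor on the variance are illustrative choices, not taken from the slides:

```python
import math

def em_two_gaussians(data, mu=(0.0, 1.0), var=(1.0, 1.0), pi=0.5, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu, var = list(mu), list(var)
    weights = [pi, 1.0 - pi]
    for _ in range(iters):
        # E step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [weights[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            z = p[0] + p[1]
            resp.append([p[0] / z, p[1] / z])
        # M step: re-estimate means, variances, and mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
            weights[k] = nk / len(data)
    return mu, var, weights
```

With well-separated data and reasonable initial means, the responsibilities quickly become near-hard assignments and the means converge to the two cluster centres; with poor initialization EM can instead converge to a local maximum, matching the caveat on the summary slide.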

Slide 16

Expectation-Maximization: Summary. We can apply EM to any inference problem with hidden variables. The following limitations apply: (1) Can we perform the E and M steps? For the image problem, the E step was analytic and the M step required solving linear equations. (2) Does the algorithm converge to the global maximum of P(u|d)? This holds for some problems, but not for all.

Slide 17

Expectation Maximization: Summary. For an important class of problems, EM has a close relationship with dynamic programming (see next lecture). Mathematically, the EM algorithm falls into a class of optimization techniques known as Majorization (statistics) and Variational Bounding (machine learning). Majorization (De Leeuw) is considerably older…

Slide 18

Belief Propagation (BP) and Message Passing. BP is an inference algorithm that is exact for graphical models defined on trees. It is similar to dynamic programming (see next lecture). It is often called "loopy BP" when applied to graphs with closed loops. Empirically, it is often a successful approximate algorithm for graphs with closed loops, but it tends to degrade badly as the number of closed loops increases.

Slide 19

BP and Message Passing. We define a distribution on an undirected graph. BP comes in two forms: (I) sum-product, and (II) max-product. Sum-product (Pearl) is used for estimating the marginal distributions of the variables x.

Slide 20

Message Passing: Sum Product. Sum-product proceeds by passing messages between nodes.
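The message equations themselves are missing from the transcript. As an illustration, here is a minimal sketch of sum-product on a three-node binary chain; the unary potentials phi (node 0 biased toward state 0) and pairwise potential psi (favouring agreement) are invented for the example:

```python
phi = [[3.0, 1.0], [1.0, 1.0], [1.0, 1.0]]  # unary potentials; node 0 is biased
psi = [[2.0, 1.0], [1.0, 2.0]]              # pairwise potential favouring agreement

def sum_product_chain():
    """Exact marginals on a 3-node chain via forward/backward messages."""
    n, states = 3, 2
    # Forward pass: fwd[i] = message arriving at node i from its left neighbour
    fwd = [[1.0, 1.0]]
    for i in range(n - 1):
        fwd.append([sum(fwd[i][a] * phi[i][a] * psi[a][b] for a in range(states))
                    for b in range(states)])
    # Backward pass: bwd[i] = message arriving at node i from its right neighbour
    bwd = [None] * n
    bwd[n - 1] = [1.0, 1.0]
    for i in range(n - 2, -1, -1):
        bwd[i] = [sum(psi[a][b] * phi[i + 1][b] * bwd[i + 1][b]
                      for b in range(states)) for a in range(states)]
    # Belief at node i = unary potential times both incoming messages, normalized
    beliefs = []
    for i in range(n):
        b = [phi[i][x] * fwd[i][x] * bwd[i][x] for x in range(states)]
        z = sum(b)
        beliefs.append([v / z for v in b])
    return beliefs
```

Because a chain is a tree, these beliefs equal the exact marginals of P(x), which is the claim the later slide makes for sum-product on trees.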

Slide 21

Message Passing: Max Product. The max-product algorithm (Gallager) also uses messages, but replaces the sum by a max. The update rule is:
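The rule itself was lost in transcription. In the standard notation for pairwise models (messages $m_{ij}$ from node $i$ to node $j$, unary potentials $\psi_i$, pairwise potentials $\psi_{ij}$, and $N(i)$ the neighbours of $i$), the max-product update is usually written as:

$$m_{ij}^{\,t+1}(x_j) \;=\; \max_{x_i}\; \psi_{ij}(x_i, x_j)\, \psi_i(x_i) \prod_{k \in N(i) \setminus j} m_{ki}^{\,t}(x_i),$$

with the sum-product rule obtained by replacing the max with a sum over $x_i$.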

Slide 22

Beliefs and Messages. We construct "beliefs" – estimates of the marginal probabilities – from the messages. For graphical models defined on trees (no closed loops): (i) sum-product converges to the marginals of the distribution P(x); (ii) max-product converges to the maximum-probability states of P(x). But this is not very special, because other algorithms do this too – see next lecture.

Slide 23

Loopy BP. The major interest in BP is that it performs well empirically when applied to graphs with closed loops. However, (i) convergence is not guaranteed (the algorithm can oscillate), and (ii) the resulting beliefs are only approximations to the correct marginals.

Slide 24

Bethe Free Energy. There is one major theoretical result (Yedidia et al.): the fixed points of BP correspond to extrema of the Bethe free energy. The Bethe free energy is one of a family of approximations to the free energy.
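The formula is absent from the transcript. For pairwise models, the standard form of the Bethe free energy over pairwise beliefs $b_{ij}$ and singleton beliefs $b_i$ (with $n_i$ the number of neighbours of node $i$) is usually written as:

$$F_{\mathrm{Bethe}} \;=\; \sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j)\, \log \frac{b_{ij}(x_i, x_j)}{\psi_{ij}(x_i, x_j)\, \psi_i(x_i)\, \psi_j(x_j)} \;-\; \sum_i (n_i - 1) \sum_{x_i} b_i(x_i)\, \log \frac{b_i(x_i)}{\psi_i(x_i)},$$

which is exact on trees and an approximation on graphs with loops; this is the standard expression from Yedidia et al., not transcribed from the slide.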

Slide 25

BP without messages. Use the beliefs to construct local approximations B(.) to the distribution. Update the beliefs by repeated minimization.

Slide 26

BP without messages. Local approximations (consistent on trees).

Slide 27

Another Viewpoint of BP. There is also a relationship between BP and Markov Chain Monte Carlo (MCMC). BP is like a deterministic form of the Gibbs sampler. MCMC will be described in later lectures.

Slide 28

Summary of BP. BP gives exact results on trees (like dynamic programming). BP gives surprisingly good approximate results on graphs with loops. There are no guarantees of convergence, but fixed points of BP correspond to extrema of the Bethe free energy. BP can be formulated without messages. BP is like a deterministic version of the Gibbs sampler in MCMC.