\( \newcommand{\matr}[1] {\mathbf{#1}} \newcommand{\vertbar} {\rule[-1ex]{0.5pt}{2.5ex}} \newcommand{\horzbar} {\rule[.5ex]{2.5ex}{0.5pt}} \)
\( \newcommand{\cat}[1] {\mathrm{#1}} \newcommand{\catobj}[1] {\operatorname{Obj}(\mathrm{#1})} \newcommand{\cathom}[1] {\operatorname{Hom}_{\cat{#1}}} \newcommand{\multiBetaReduction}[0] {\twoheadrightarrow_{\beta}} \newcommand{\betaReduction}[0] {\rightarrow_{\beta}} \newcommand{\betaEq}[0] {=_{\beta}} \newcommand{\string}[1] {\texttt{"}\mathtt{#1}\texttt{"}} \newcommand{\symbolq}[1] {\texttt{`}\mathtt{#1}\texttt{'}} \)
Math and science::INF ML AI::probabilistic graphical models

Motivation for graphical models

Daphne Koller describes the motivation for graphical models:

Consider the goal of representing a joint distribution \( P \) over some set of random variables \( \mathcal{X} = \{X_1, \ldots, X_n\} \). Even in the simplest case, where these variables are binary-valued, a joint distribution requires specifying \( 2^n - 1 \) numbers. For all but the smallest \( n \), the explicit representation of the joint distribution is unmanageable from every perspective. It is expensive both in terms of computation and memory. The individual numbers can be very small and do not correspond to events that people can reasonably contemplate. If we wanted to learn the distribution from data, we would need ridiculously large amounts of data to estimate this many parameters robustly. These problems were the main barrier to the adoption of probabilistic methods for expert systems until the development of the methodologies described in this book.
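The \( 2^n - 1 \) count comes from needing one probability per joint assignment, minus one because the probabilities must sum to 1. A quick sketch of how fast this grows:

```python
def full_joint_params(n):
    # Free parameters for a full joint distribution over n binary
    # variables: one probability per assignment (2^n of them), minus
    # one because the probabilities sum to 1.
    return 2 ** n - 1

for n in (5, 10, 20, 30):
    print(n, full_joint_params(n))
# 30 binary variables already require over a billion parameters.
```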

Probabilistic graphical models use a graph-based representation as the basis for compactly encoding a complex distribution over a high-dimensional space.
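As an illustration of the savings (a hypothetical example, not one from the text): if the graph asserts that the binary variables form a chain \( X_1 \rightarrow X_2 \rightarrow \cdots \rightarrow X_n \), the distribution factorizes as \( P(X_1) \prod_i P(X_{i+1} \mid X_i) \), and the parameter count becomes linear in \( n \) rather than exponential.

```python
def chain_bn_params(n):
    # Parameters for a chain-structured Bayesian network over n binary
    # variables: 1 for P(X1 = 1), plus 2 for each subsequent node
    # (one P(Xi = 1 | parent) per parent value).
    return 1 + 2 * (n - 1)

n = 20
print(chain_bn_params(n))  # 39
print(2 ** n - 1)          # 1048575 for the explicit joint
```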


Source

Probabilistic Graphical Models
Daphne Koller