# Exponential Families


In my last post I discussed log-linear models. In this post I’d like to take another perspective on log-linear models, by thinking of them as members of an *exponential family*. There are many reasons to take this perspective: exponential families give us efficient representations of log-linear models, which is important for continuous domains; they always have conjugate priors, which provide an analytically tractable regularization method; finally, they can be viewed as maximum-entropy models for a given set of sufficient statistics. Don’t worry if these terms are unfamiliar; I will explain all of them by the end of this post. Also note that most of this material is available on the Wikipedia page on exponential families, which I used quite liberally in preparing the below exposition.

## 1. Exponential Families

An *exponential family* is a family of probability distributions, parameterized by ${\theta \in \mathbb{R}^n}$, of the form

$p(x \mid \theta) \propto h(x)\exp(\theta^T\phi(x)).$ (1)

Notice the similarity to the definition of a log-linear model, which is

$p(x \mid \theta) \propto \exp(\theta^T\phi(x)).$ (2)

So, a log-linear model is simply an exponential family model with ${h(x) = 1}$. Note that we can re-write the right-hand-side of (1) as ${\exp(\theta^T\phi(x)+\log h(x))}$, so an exponential family is really just a log-linear model with one of the coordinates of $\theta$ constrained to equal ${1}$. Also note that the normalization constant in (1) is a function of $\theta$ (since $\theta$ fully specifies the distribution over ${x}$), so we can express (1) more explicitly as

$p(x \mid \theta) = h(x)\exp(\theta^T\phi(x)-A(\theta)),$ (3)

where

$A(\theta) = \log\left(\int h(x)\exp(\theta^T\phi(x))\, dx\right).$ (4)

Exponential families are capable of capturing almost all of the common distributions you are familiar with. There is an extensive table on Wikipedia; I’ve also included some of the most common below:

*Gaussian distributions.* Let ${\phi(x) = \left[ \begin{array}{c} x \\ x^2\end{array} \right]}$. Then ${p(x \mid \theta) \propto \exp(\theta_1x+\theta_2x^2)}$. If we let ${\theta = \left[\frac{\mu}{\sigma^2},-\frac{1}{2\sigma^2}\right]}$, then ${p(x \mid \theta) \propto \exp(\frac{\mu x}{\sigma^2}-\frac{x^2}{2\sigma^2}) \propto \exp(-\frac{1}{2\sigma^2}(x-\mu)^2)}$. We therefore see that Gaussian distributions are an exponential family for ${\phi(x) = \left[ \begin{array}{c} x \\ x^2 \end{array} \right]}$.

*Poisson distributions.* Let ${\phi(x) = [x]}$ and ${h(x) = \left\{\begin{array}{ccc} \frac{1}{x!} & : & x \in \{0,1,2,\ldots\} \\ 0 & : & \mathrm{else} \end{array}\right.}$. Then ${p(k \mid \theta) \propto \frac{1}{k!}\exp(\theta k)}$. If we let ${\theta_1 = \log(\lambda)}$ then we get ${p(k \mid \theta) \propto \frac{\lambda^k}{k!}}$; we thus see that Poisson distributions are also an exponential family.

*Multinomial distributions.* Suppose that ${X = \{1,2,\ldots,n\}}$. Let ${\phi(k)}$ be an ${n}$-dimensional vector whose ${k}$th element is ${1}$ and whose other elements are zero. Then ${p(k \mid \theta) \propto \exp(\theta_k)}$, i.e. ${p(k \mid \theta) = \frac{\exp(\theta_k)}{\sum_{j=1}^n \exp(\theta_j)}}$. If ${\theta_k = \log P(x=k)}$, then we obtain an arbitrary multinomial distribution. Therefore, multinomial distributions are also an exponential family.
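As a quick sanity check on the Gaussian example, here is a short Python sketch (standard library only; the values of ${\mu}$ and ${\sigma}$ are chosen arbitrarily for illustration) that maps ${(\mu,\sigma)}$ to the natural parameters ${\theta}$ and verifies that the unnormalized exponential-family density is proportional to the usual Gaussian pdf:

```python
import math

mu, sigma = 1.5, 0.8   # illustrative values

# natural parameters for the Gaussian with phi(x) = [x, x^2]
theta1 = mu / sigma**2
theta2 = -1.0 / (2 * sigma**2)

def unnormalized(x):
    # h(x) = 1, so p(x | theta) is proportional to exp(theta^T phi(x))
    return math.exp(theta1 * x + theta2 * x**2)

def gaussian_pdf(x):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# the ratio should be the same constant exp(-A(theta)) at every x
r0 = gaussian_pdf(0.0) / unnormalized(0.0)
for x in (-2.0, -0.5, 1.0, 3.0):
    assert abs(gaussian_pdf(x) / unnormalized(x) - r0) < 1e-12
```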

## 2. Sufficient Statistics

A *statistic* of a random variable ${X}$ is any deterministic function of that variable. For instance, if ${X = [X_1,\ldots,X_n]^T}$ is a vector of Gaussian random variables, then the sample mean ${\hat{\mu} := (X_1+\ldots+X_n)/n}$ and sample variance ${\hat{\sigma}^2 := (X_1^2+\cdots+X_n^2)/n-(X_1+\cdots+X_n)^2/n^2}$ are both statistics.

Let ${\mathcal{F}}$ be a family of distributions parameterized by $\theta$, and let ${X}$ be a random variable with distribution given by some unknown ${\theta_0}$. Then a vector ${T(X)}$ of statistics is called a vector of *sufficient statistics* for ${\theta_0}$ if it contains all possible information about ${\theta_0}$; that is, for any function ${f}$, we have

$\mathbb{E}[f(X) \mid T(X) = T_0, \theta = \theta_0] = S(f,T_0),$ (5)

for some function ${S}$ that has no dependence on ${\theta_0}$.

For instance, let ${X}$ be a vector of ${n}$ independent Gaussian random variables ${X_1,\ldots,X_n}$ with unknown mean ${\mu}$ and variance ${\sigma^2}$. It turns out that ${T(X) := [\hat{\mu},\hat{\sigma}^2]}$ is a vector of sufficient statistics for ${\mu}$ and ${\sigma^2}$. This is not immediately obvious; a very useful tool for determining whether statistics are sufficient is the **Fisher-Neyman factorization theorem**:

**Theorem 1 (Fisher-Neyman).** Suppose that ${X}$ has a probability density function ${p(X \mid \theta)}$. Then the statistics ${T(X)}$ are sufficient for $\theta$ if and only if ${p(X \mid \theta)}$ can be written in the form

$p(X \mid \theta) = h(X)g_\theta(T(X)).$ (6)

In other words, the probability of ${X}$ can be factored into a part that does not depend on $\theta$, and a part that depends on $\theta$ only via ${T(X)}$.

What is going on here, intuitively? If ${p(X \mid \theta)}$ depended only on ${T(X)}$, then ${T(X)}$ would definitely be a sufficient statistic. But that isn’t the only way for ${T(X)}$ to be a sufficient statistic — ${p(X \mid \theta)}$ could also just not depend on $\theta$ at all, in which case ${T(X)}$ would trivially be a sufficient statistic (as would anything else). The Fisher-Neyman theorem essentially says that the only way for ${T(X)}$ to be a sufficient statistic is if the density of ${X}$ is a product of these two cases.
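As a concrete illustration of condition (5), consider i.i.d. Bernoulli(${\theta}$) variables with ${T(X) = X_1+\cdots+X_n}$. The sketch below (a hypothetical check, not from the original post) verifies numerically that conditioned on the sum, the distribution over individual outcome patterns is the same for every ${\theta}$:

```python
from itertools import product

n = 4

def conditional_given_sum(theta, s):
    # P(X = pattern | X_1 + ... + X_n = s) for i.i.d. Bernoulli(theta)
    pats = [p for p in product([0, 1], repeat=n) if sum(p) == s]
    probs = [theta**sum(p) * (1 - theta)**(n - sum(p)) for p in pats]
    z = sum(probs)
    return [pr / z for pr in probs]

# the conditional distribution is identical for very different theta...
c1 = conditional_given_sum(0.3, 2)
c2 = conditional_given_sum(0.8, 2)
assert all(abs(a - b) < 1e-12 for a, b in zip(c1, c2))
# ...namely, uniform over the C(4, 2) = 6 arrangements
assert all(abs(a - 1 / 6) < 1e-12 for a in c1)
```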

*Proof:* If (6) holds, then we can check that (5) is satisfied:

$\begin{array}{rcl} \mathbb{E}[f(X) \mid T(X) = T_0, \theta = \theta_0] &=& \frac{\int_{T(X) = T_0} f(X) dp(X \mid \theta=\theta_0)}{\int_{T(X) = T_0} dp(X \mid \theta=\theta_0)}\\ \\ &=& \frac{\int_{T(X)=T_0} f(X)h(X)g_\theta(T_0) dX}{\int_{T(X)=T_0} h(X)g_\theta(T_0) dX}\\ \\ &=& \frac{\int_{T(X)=T_0} f(X)h(X)dX}{\int_{T(X)=T_0} h(X) dX}, \end{array} $

where the right-hand-side has no dependence on $\theta$.

On the other hand, if we compute ${\mathbb{E}[f(X) \mid T(X) = T_0, \theta = \theta_0]}$ for an arbitrary density ${p(X)}$, we get

$\begin{array}{rcl} \mathbb{E}[f(X) \mid T(X) = T_0, \theta = \theta_0] &=& \int_{T(X) = T_0} f(X) \frac{p(X \mid \theta=\theta_0)}{\int_{T(X)=T_0} p(X \mid \theta=\theta_0) dX} dX. \end{array} $

If the right-hand-side cannot depend on $\theta$ for *any* choice of ${f}$, then the term that we multiply ${f}$ by must not depend on $\theta$; that is, ${\frac{p(X \mid \theta=\theta_0)}{\int_{T(X) = T_0} p(X \mid \theta=\theta_0) dX}}$ must be some function ${h_0(X, T_0)}$ that depends only on ${X}$ and ${T_0}$ and not on $\theta$. On the other hand, the denominator ${\int_{T(X)=T_0} p(X \mid \theta=\theta_0) dX}$ depends only on ${\theta_0}$ and ${T_0}$; call this dependence ${g_{\theta_0}(T_0)}$. Finally, note that ${T_0}$ is a deterministic function of ${X}$, so let ${h(X) := h_0(X,T(X))}$. We then see that ${p(X \mid \theta=\theta_0) = h_0(X, T_0)g_{\theta_0}(T_0) = h(X)g_{\theta_0}(T(X))}$, which is the same form as (6), thus completing the proof of the theorem. $\Box$

Now, let us apply the Fisher-Neyman theorem to exponential families. By definition, the density for an exponential family factors as

$p(x \mid \theta) = h(x)\exp(\theta^T\phi(x)-A(\theta)). $

If we let ${T(x) = \phi(x)}$ and ${g_\theta(\phi(x)) = \exp(\theta^T\phi(x)-A(\theta))}$, then the Fisher-Neyman condition is met; therefore, ${\phi(x)}$ is a vector of sufficient statistics for the exponential family. In fact, we can go further:

**Theorem 2.** Let ${X_1,\ldots,X_n}$ be drawn independently from an exponential family distribution with fixed parameter $\theta$. Then the empirical expectation ${\hat{\phi} := \frac{1}{n} \sum_{i=1}^n \phi(X_i)}$ is a sufficient statistic for $\theta$.

*Proof:* The density for ${X_1,\ldots,X_n}$ given $\theta$ is

$\begin{array}{rcl} p(X_1,\ldots,X_n \mid \theta) &=& h(X_1)\cdots h(X_n) \exp\left(\theta^T\sum_{i=1}^n \phi(X_i) - nA(\theta)\right) \\ &=& h(X_1)\cdots h(X_n)\exp(n[\theta^T\hat{\phi}-A(\theta)]). \end{array} $

Letting ${h(X_1,\ldots,X_n) = h(X_1)\cdots h(X_n)}$ and ${g_\theta(\hat{\phi}) = \exp(n[\theta^T\hat{\phi}-A(\theta)])}$, we see that the Fisher-Neyman conditions are satisfied, so that ${\hat{\phi}}$ is indeed a sufficient statistic. $\Box$
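The practical upshot of Theorem 2 is that two datasets with the same empirical sufficient statistic are indistinguishable for inference about $\theta$. A minimal Bernoulli sketch (here the parameter is the success probability rather than the natural parameter, which doesn't affect the point):

```python
import math

def bernoulli_loglik(theta, xs):
    # theta is the success probability; phi(x) = x
    return sum(x * math.log(theta) + (1 - x) * math.log(1 - theta) for x in xs)

# two different datasets with the same empirical sufficient statistic
a = [1, 1, 0, 0, 1, 0]
b = [0, 1, 0, 1, 0, 1]
assert sum(a) == sum(b)

# the likelihoods agree for every parameter value, so inference is identical
for theta in (0.1, 0.37, 0.5, 0.9):
    assert abs(bernoulli_loglik(theta, a) - bernoulli_loglik(theta, b)) < 1e-12
```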

Finally, we note (without proof) the same relationship as in the log-linear case to the gradient and Hessian of ${\log p(X_1,\ldots,X_n \mid \theta)}$ with respect to the model parameters:

**Theorem 3.** Again let ${X_1,\ldots,X_n}$ be drawn from an exponential family distribution with parameter $\theta$. Then the gradient of ${\log p(X_1,\ldots,X_n \mid \theta)}$ with respect to $\theta$ is

$n \times \left(\hat{\phi}-\mathbb{E}[\phi \mid \theta]\right) $

and the Hessian is

$n \times \left(\mathbb{E}[\phi \mid \theta]\mathbb{E}[\phi \mid \theta]^T - \mathbb{E}[\phi\phi^T \mid \theta]\right). $

This theorem provides an efficient algorithm for fitting the parameters of an exponential family distribution (for details on the algorithm, see the part near the end of the log-linear models post on parameter estimation).
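For concreteness, here is a minimal sketch of such a fitting procedure for the Poisson family, where ${\mathbb{E}[\phi \mid \theta] = e^\theta}$ (the rate). Plain gradient ascent on the per-sample gradient from Theorem 3 recovers the maximum-likelihood rate, which is the sample mean; the data and step size are arbitrary choices for illustration:

```python
import math

xs = [2, 3, 1, 4, 0, 2]            # illustrative count data
phi_hat = sum(xs) / len(xs)        # empirical sufficient statistic, phi(x) = x

# For the Poisson family, E[phi | theta] = e^theta (the rate).
theta, lr = 0.0, 0.1
for _ in range(2000):
    theta += lr * (phi_hat - math.exp(theta))   # gradient of (1/n) log-likelihood

# the fitted rate is the sample mean, i.e. the Poisson MLE
assert abs(math.exp(theta) - phi_hat) < 1e-9
```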

## 3. Moments of an Exponential Family

If ${X}$ is a real-valued random variable, then the *${p}$th moment* of ${X}$ is ${\mathbb{E}[X^p]}$. In general, if ${X = [X_1,\ldots,X_n]^T}$ is a random variable on ${\mathbb{R}^n}$, then for every sequence ${p_1,\ldots,p_n}$ of non-negative integers, there is a corresponding moment ${M_{p_1,\cdots,p_n} := \mathbb{E}[X_1^{p_1}\cdots X_n^{p_n}]}$.

In exponential families there is a very nice relationship between the normalization constant ${A(\theta)}$ and the moments of ${X}$. Before we establish this relationship, let us define the *moment generating function* of a random variable ${X}$ as ${f(\lambda) = \mathbb{E}[\exp(\lambda^TX)]}$.

**Lemma 4.** The moment generating function for a random variable ${X}$ is equal to

$\sum_{p_1,\ldots,p_n=0}^{\infty} M_{p_1,\cdots,p_n} \frac{\lambda_1^{p_1}\cdots \lambda_n^{p_n}}{p_1!\cdots p_n!}. $

The proof of Lemma 4 is a straightforward application of Taylor’s theorem, together with linearity of expectation (note that in one dimension, the expression in Lemma 4 would just be ${\sum_{p=0}^{\infty} \mathbb{E}[X^p] \frac{\lambda^p}{p!}}$).

We now see why ${f(\lambda)}$ is called the moment generating function: it is the exponential generating function for the moments of ${X}$. The moment generating function for the sufficient statistics of an exponential family is particularly easy to compute:

**Lemma 5.** If ${p(x \mid \theta) = h(x)\exp(\theta^T\phi(x)-A(\theta))}$, then ${\mathbb{E}[\exp(\lambda^T\phi(x))] = \exp(A(\theta+\lambda)-A(\theta))}$.

*Proof:*

$\begin{array}{rcl} \mathbb{E}[\exp(\lambda^T\phi(x))] &=& \int \exp(\lambda^T\phi(x)) p(x \mid \theta) dx \\ &=& \int \exp(\lambda^T\phi(x))h(x)\exp(\theta^T\phi(x)-A(\theta)) dx \\ &=& \int h(x)\exp((\theta+\lambda)^T\phi(x)-A(\theta)) dx \\ &=& \int h(x)\exp((\theta+\lambda)^T\phi(x)-A(\theta+\lambda))dx \times \exp(A(\theta+\lambda)-A(\theta)) \\ &=& \int p(x \mid \theta+\lambda) dx \times \exp(A(\theta+\lambda)-A(\theta)) \\ &=& \exp(A(\theta+\lambda)-A(\theta)), \end{array} $

where the last step uses the fact that ${p(x \mid \theta+\lambda)}$ is a probability density and hence ${\int p(x \mid \theta+\lambda) dx = 1}$. $\Box$

Now, by Lemma 4, ${M_{p_1,\cdots,p_n}}$ is just the ${(p_1,\ldots,p_n)}$ coefficient in the Taylor series for the moment generating function ${f(\lambda)}$, and hence we can compute ${M_{p_1,\cdots,p_n}}$ as ${\frac{\partial^{p_1+\cdots+p_n} f(\lambda)}{\partial^{p_1}\lambda_1\cdots \partial^{p_n}\lambda_n}}$ evaluated at ${\lambda = 0}$. Combining this with Lemma 5 gives us a closed-form expression for the moments of the sufficient statistics ${\phi(X)}$ in terms of the normalization constant ${A(\theta)}$:

**Lemma 6.** The moments of the sufficient statistics of an exponential family can be computed as

$M_{p_1,\ldots,p_n} = \left.\frac{\partial^{p_1+\cdots+p_n} \exp(A(\theta+\lambda)-A(\theta))}{\partial^{p_1}\lambda_1\cdots \partial^{p_n}\lambda_n}\right|_{\lambda=0}. $

For those who prefer cumulants to moments, I will note that there is a version of Lemma 6 for cumulants with an even simpler formula.

**Exercise:** Use Lemma 6 to compute ${\mathbb{E}[X^6]}$, where ${X}$ is a Gaussian with mean ${\mu}$ and variance ${\sigma^2}$.

## 4. Conjugate Priors

Given a family of distributions ${p(X \mid \theta)}$, a *conjugate prior family* ${p(\theta \mid \alpha)}$ is a family that has the property that

$p(\theta \mid X, \alpha) = p(\theta \mid \alpha') $

for some ${\alpha'}$ depending on ${\alpha}$ and ${X}$. In other words, if the prior over $\theta$ lies in the conjugate family, and we observe ${X}$, then the posterior over $\theta$ also lies in the conjugate family. This is very useful algebraically as it means that we can get our posterior simply by updating the parameters of the prior. The following are examples of conjugate families:

### Gaussian-Gaussian

Let ${p(X \mid \mu) \propto \exp(-(X-\mu)^2/2)}$, and let ${p(\mu \mid \mu_0, \sigma_0) \propto \exp(-(\mu-\mu_0)^2/2\sigma_0^2)}$. Then, by Bayes’ rule,

$\begin{array}{rcl} p(\mu \mid X=x, \mu_0, \sigma_0) &\propto& \exp(-(x-\mu)^2/2)\exp(-(\mu-\mu_0)^2/2\sigma_0^2) \\ \\ &=& \exp\left(-\frac{(\mu-\mu_0)^2+\sigma_0^2(\mu-x)^2}{2\sigma_0^2}\right) \\ \\ &\propto& \exp\left(-\frac{(1+\sigma_0^2)\mu^2-2(\mu_0+\sigma_0^2x)\mu}{2\sigma_0^2}\right) \\ \\ &\propto& \exp\left(-\frac{\mu^2-2\frac{\mu_0+x\sigma_0^2}{1+\sigma_0^2}\mu}{2\sigma_0^2/(1+\sigma_0^2)}\right) \\ \\ &\propto& \exp\left(-\frac{(\mu-(\mu_0+x\sigma_0^2)/(1+\sigma_0^2))^2}{2\sigma_0^2/(1+\sigma_0^2)}\right) \\ \\ &\propto& p\left(\mu \mid \frac{\mu_0+x\sigma_0^2}{1+\sigma_0^2}, \frac{\sigma_0}{\sqrt{1+\sigma_0^2}}\right). \end{array} $

Therefore, ${\mu_0, \sigma_0}$ parameterize a family of priors over ${\mu}$ that is conjugate to ${X \mid \mu}$.
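If you don't trust the algebra, the update can be checked numerically: the sketch below (with illustrative prior parameters and a single observation, assuming unit observation variance as above) normalizes likelihood times prior on a grid and compares it to the claimed Gaussian posterior:

```python
import numpy as np

mu0, sigma0 = 1.0, 2.0   # illustrative prior parameters
x = 3.0                  # single observation, unit observation variance

mu = np.linspace(-10.0, 10.0, 200001)
# unnormalized posterior: likelihood times prior
post = np.exp(-(x - mu)**2 / 2) * np.exp(-(mu - mu0)**2 / (2 * sigma0**2))
post /= np.trapz(post, mu)

# claimed posterior parameters from the derivation above
mu1 = (mu0 + x * sigma0**2) / (1 + sigma0**2)
s1 = sigma0 / np.sqrt(1 + sigma0**2)
claimed = np.exp(-(mu - mu1)**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))

assert np.max(np.abs(post - claimed)) < 1e-6
```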

### Beta-Bernoulli

Let ${X \in \{0,1\}}$, ${\theta \in [0,1]}$, ${p(X=1 \mid \theta) = \theta}$, and ${p(\theta \mid \alpha, \beta) \propto \theta^{\alpha-1}(1-\theta)^{\beta-1}}$. The distribution over ${X}$ given $\theta$ is then called a *Bernoulli distribution*, and that of $\theta$ given ${\alpha}$ and ${\beta}$ is called a *beta distribution*. Note that ${p(X\mid \theta)}$ can also be written as ${\theta^X(1-\theta)^{1-X}}$. From this, we see that the family of beta distributions is a conjugate prior to the family of Bernoulli distributions, since

$\begin{array}{rcl} p(\theta \mid X=x, \alpha, \beta) &\propto& \theta^x(1-\theta)^{1-x} \times \theta^{\alpha-1}(1-\theta)^{\beta-1} \\ &=& \theta^{\alpha+x-1}(1-\theta)^{\beta+(1-x)-1} \\ &\propto& p(\theta \mid \alpha+x, \beta+(1-x)). \end{array} $
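The algebra above says that observing a flip just increments one of the two Beta pseudo-counts. A minimal sketch (the function name is my own, chosen for illustration):

```python
def beta_bernoulli_update(alpha, beta, flips):
    # each observed flip increments one pseudo-count
    for x in flips:
        alpha += x        # heads (x = 1) update alpha
        beta += 1 - x     # tails (x = 0) update beta
    return alpha, beta

# start from a uniform Beta(1, 1) prior and observe three heads, one tail
assert beta_bernoulli_update(1, 1, [1, 1, 0, 1]) == (4, 2)
```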

### Gamma-Poisson

Let ${p(X=k \mid \lambda) = \frac{\lambda^k}{e^{\lambda}k!}}$ for ${k \in \mathbb{Z}_{\geq 0}}$. Let ${p(\lambda \mid \alpha, \beta) \propto \lambda^{\alpha-1}\exp(-\beta \lambda)}$. As noted before, the distribution for ${X}$ given ${\lambda}$ is called a *Poisson distribution*; the distribution for ${\lambda}$ given ${\alpha}$ and ${\beta}$ is called a *gamma distribution*. We can check that the family of gamma distributions is conjugate to the family of Poisson distributions. *Important note:* unlike in the last two examples, the normalization constant for the Poisson distribution actually depends on ${\lambda}$, and so we need to include it in our calculations:

$\begin{array}{rcl} p(\lambda \mid X=k, \alpha, \beta) &\propto& \frac{\lambda^k}{e^{\lambda}k!} \times \lambda^{\alpha-1}\exp(-\beta\lambda) \\ &\propto& \lambda^{\alpha+k-1}\exp(-(\beta+1)\lambda) \\ &\propto& p(\lambda \mid \alpha+k, \beta+1). \end{array} $
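Again this is easy to verify numerically on a grid: the sketch below (illustrative prior parameters and count) checks that the normalized product of Poisson likelihood and gamma prior matches the Gamma(${\alpha+k}$, ${\beta+1}$) density:

```python
import math
import numpy as np

alpha, beta = 2.0, 1.0   # illustrative prior Gamma parameters
k = 3                    # observed Poisson count

lam = np.linspace(1e-6, 30.0, 200001)
# unnormalized posterior: Poisson likelihood times Gamma prior
post = (lam**k * np.exp(-lam) / math.factorial(k)) * lam**(alpha - 1) * np.exp(-beta * lam)
post /= np.trapz(post, lam)

# claimed posterior: Gamma(alpha + k, beta + 1)
a2, b2 = alpha + k, beta + 1
claimed = b2**a2 * lam**(a2 - 1) * np.exp(-b2 * lam) / math.gamma(a2)

assert np.max(np.abs(post - claimed)) < 1e-6
```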

Note that, in general, a family of distributions will always have *some* conjugate family, since if nothing else the family of all probability distributions over $\theta$ is a conjugate family. What we really care about is a conjugate family that itself has nice properties, such as tractably computable moments.

Conjugate priors have a very nice relationship to exponential families, established in the following theorem:

**Theorem 7.** Let ${p(x \mid \theta) = h(x)\exp(\theta^T\phi(x)-A(\theta))}$ be an exponential family. Then ${p(\theta \mid \eta, \kappa) \propto h_2(\theta)\exp\left(\eta^T\theta-\kappa A(\theta)\right)}$ is a conjugate prior for ${x \mid \theta}$ for any choice of ${h_2}$. The update formula is ${p(\theta \mid x, \eta, \kappa) = p(\theta \mid \eta+\phi(x), \kappa+1)}$. Furthermore, ${\theta \mid \eta, \kappa}$ is itself an exponential family, with sufficient statistics ${[\theta; A(\theta)]}$.

Checking the theorem is a matter of straightforward algebra, so I will leave the proof as an exercise to the reader. Note that, as before, there is no guarantee that ${p(\theta \mid \eta, \kappa)}$ will be tractable; however, in many cases the conjugate prior given by Theorem 7 is a well-behaved family. See the Wikipedia page on conjugate priors for examples, many of which correspond to exponential family distributions.

## 5. Maximum Entropy and Duality

The final property of exponential families I would like to establish is a certain *duality property*. What I mean by this is that exponential families can be thought of as the maximum entropy distributions subject to a constraint on the expected value of their sufficient statistics. For those unfamiliar with the term, the *entropy* of a distribution over ${X}$ with density ${p(X)}$ is ${\mathbb{E}[-\log p(X)] := -\int p(x)\log(p(x)) dx}$. Intuitively, higher entropy corresponds to higher uncertainty, so a maximum entropy distribution is one specifying as much uncertainty as possible given a certain set of information (such as the values of various moments). This makes them appealing, at least in theory, from a modeling perspective, since they “encode exactly as much information as is given and no more”. (Caveat: this intuition isn’t entirely valid, and in practice maximum-entropy distributions aren’t always necessarily appropriate.)

In any case, the duality property is captured in the following theorem:

**Theorem 8.** The distribution over ${X}$ with maximum entropy such that ${\mathbb{E}[\phi(X)] = T}$ lies in the exponential family with sufficient statistic ${\phi(X)}$ and ${h(X) = 1}$.

Proving this fully rigorously requires the calculus of variations; I will instead give the “physicist’s proof”. *Proof:* Let ${p(X)}$ be the density for ${X}$. Then we can view ${p}$ as the solution to the constrained maximization problem:

$\begin{array}{rcl} \mathrm{maximize} && -\int p(X) \log p(X) dX \\ \mathrm{subject \ to} && \int p(X) dX = 1 \\ && \int p(X) \phi(X) dX = T. \end{array} $

By the method of Lagrange multipliers, there exist ${\alpha}$ and ${\lambda}$ such that

$\frac{d}{dp}\left(-\int p(X)\log p(X) dX - \alpha [\int p(X) dX-1] - \lambda^T[\int \phi(X) p(X) dX-T]\right) = 0. $

This simplifies to:

$-\log p(X) - 1 - \alpha -\lambda^T \phi(X) = 0, $

which implies

$p(X) = \exp(-1-\alpha-\lambda^T\phi(X)) $

for some ${\alpha}$ and ${\lambda}$. In particular, if we let ${\lambda = -\theta}$ and ${\alpha = A(\theta)-1}$, then we recover the exponential family with ${h(X) = 1}$, as claimed. $\Box$
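Theorem 8 can be illustrated numerically on a small discrete domain: fix a target mean, solve for the $\theta$ whose exponential-family distribution matches it (by bisection, since the mean is increasing in $\theta$), and check that its entropy beats that of another distribution with the same mean. A sketch, with an arbitrarily chosen domain and target:

```python
import math

# max-entropy distribution on {0, 1, 2, 3} subject to E[X] = 1.2
ks = [0, 1, 2, 3]
target = 1.2

def mean_for(theta):
    w = [math.exp(theta * k) for k in ks]
    return sum(k * wk for k, wk in zip(ks, w)) / sum(w)

# mean_for is increasing in theta, so solve mean_for(theta) = target by bisection
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean_for(mid) < target else (lo, mid)
theta = (lo + hi) / 2

w = [math.exp(theta * k) for k in ks]
p = [wk / sum(w) for wk in w]

def entropy(q):
    return -sum(qi * math.log(qi) for qi in q if qi > 0)

# any other distribution with the same mean should have strictly lower entropy
q = [0.4, 0.2, 0.2, 0.2]   # also has mean 1.2
assert abs(mean_for(theta) - target) < 1e-9
assert entropy(p) > entropy(q)
```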

## 6. Conclusion

Hopefully I have by now convinced you that exponential families have many nice properties: they have conjugate priors, simple-to-fit parameters, and easily-computed moments. While exponential families aren’t always appropriate models for a given situation, their tractability makes them the model of choice when no other information is present; and, since they can be obtained as maximum-entropy families, they are actually appropriate models in a wide range of circumstances.

## Comments

## Stefan

Very nice write up, extremely helpful, thank you!

## eddy

Very nice introduction to exponential families! Thank you for sharing it.

I think there is a flaw in your proof of Lemma 5.

The “\lambda^T x” should be “\lambda^T \phi(x)”

i.e. it is not moment generating function of X, but of the sufficient statistics \phi(x).

Thus, the following exercise works only because there is a component \phi(x) = x in the Gaussian case, and the straightforward computation is still exhausting (if done by hand): it is the 6th derivative of exp(f(t)), where f(t) is a 2nd-order polynomial in t.

## jsteinhardt

Thanks! If I understand correctly, the actual place where something goes wrong is Lemma 6, right? (I.e., the moment formula I give is correct only in the case where phi(x) = x, but the statement of Lemma 5 is in fact correct even though the proof is wrong.)

Best, Jacob
