This post is about the entropy of discrete stochastic variables obtained by quantizing continuous stochastic variables. A good introduction to the concept of entropy in Information Theory can be found in [1]. This post is a continuation of the previous post [2].
Let x \in \mathbb{R} be a continuous stochastic variable with \text{Laplace}(0, b) distribution, and let \hat{x} \in \hat{X}, where \hat{X} \subset \mathbb{R} and 0 \in \hat{X}, be its uniformly quantized version with step size \Delta. We would like to find the entropy of the discrete stochastic variable \hat{x}, denoted by H_\hat{x}, which gives an estimate of the average number of bits required to encode \hat{x}.
HALF-LAPLACE DISTRIBUTION
The half-Laplace distribution is obtained by folding the zero-mean Laplace distribution at its center. The resulting distribution is that of the stochastic variable y = |x|, i.e. the absolute value of the original Laplace-distributed variable.
Let \varphi(x) and \Phi(x) denote the probability density function and cumulative distribution function of the Laplace variable x. These are defined as follows.
\begin{align}\varphi(x) &= \frac{1}{2b} e^{-\frac{|x|}{b}} \\ \Phi(x) &= \begin{cases} \frac{1}{2} e^{\frac{x}{b}} & \text{if $x < 0$} \\ 1 - \frac{1}{2} e^{-\frac{x}{b}} & \text{if $x \geq 0$}\end{cases}\end{align}
The half-Laplace cumulative distribution function of y, denoted by \breve{\Phi}(y), is then given as follows.
\begin{align} \breve{\Phi}(y) &= \begin{cases} 0 & \text{if $y = 0$} \\ \Phi(y) - \Phi(-y) & \text{if $y > 0$}\end{cases} \\ &= 1 - e^{-\frac{y}{b}} \label{eqn:le} \end{align}
The expression in equation (\ref{eqn:le}) may be directly recognized as the cumulative distribution function of \text{Exponential}(1/b). Therefore, the entropy of the half-Laplace distribution may be found using the expressions in [2] with \lambda = 1/b.
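This equivalence can be sanity-checked numerically. Below is a minimal sketch using SciPy; the parameter value b = 2.0 is an arbitrary choice for illustration, and note that SciPy parameterizes the Exponential distribution by its scale b = 1/\lambda.

import numpy as np
from scipy import stats

b = 2.0                           # example scale parameter (arbitrary choice)
y = np.linspace(0.01, 10.0, 100)

# Folded (half-Laplace) CDF: Phi(y) - Phi(-y)
laplace = stats.laplace(loc=0, scale=b)
folded = laplace.cdf(y) - laplace.cdf(-y)

# Exponential(1/b) CDF: 1 - exp(-y/b), i.e. scale = b in SciPy's convention
expo = stats.expon(scale=b).cdf(y)

print(np.allclose(folded, expo))  # expected: True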
LAPLACE DISTRIBUTION
Let f(\hat{x}) denote the probability mass function of the quantized variable \hat{x}. It is related to the probability density function \varphi(x) as follows.
\begin{align} f(\hat{x}) = \int_{\hat{x} - \frac{\Delta}{2}}^{\hat{x} + \frac{\Delta}{2}} \varphi(x) dx \label{eqn:lq} \end{align}
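Evaluating this integral with the Laplace density yields closed-form probability masses (writing out this intermediate step, which is used in the simplification below); for \hat{x} \neq 0 the quantization interval lies entirely on one side of the origin.
\begin{align} f(0) &= \int_{-\frac{\Delta}{2}}^{\frac{\Delta}{2}} \varphi(x) dx = 1 - e^{-\frac{\Delta}{2b}} \\ f(\hat{x}) &= \frac{1}{2} e^{-\frac{|\hat{x}|}{b}} \left(e^{\frac{\Delta}{2b}} - e^{-\frac{\Delta}{2b}}\right) \quad \text{for $\hat{x} \neq 0$} \end{align}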
Further, let \hat{X}{}^+ \subset \hat{X} denote the subset of positive quantized values. The negative entropy -H_\hat{x} is given (by definition) as follows. The simplification is possible due to the symmetry of the Laplace distribution.
\begin{align} -H_\hat{x} &= \sum_{\hat{x} \in \hat{X}} f(\hat{x}) \log f(\hat{x}) \\ &= f(0) \log f(0) + 2 \sum_{\hat{x} \in \hat{X}{}^+} f(\hat{x}) \log f(\hat{x}) \end{align}
Using the probability masses derived above from equation (\ref{eqn:lq}), we have the following:
\begin{align} -H_\hat{x} &= \left(1 - e^{-\frac{\Delta}{2b}}\right) \log \left[1 - e^{-\frac{\Delta}{2b}} \right] \nonumber\\ &\quad + \left(e^{\frac{\Delta}{2b}} - e^{-\frac{\Delta}{2b}}\right) \log \left[ \frac{1}{2} \left(e^{\frac{\Delta}{2b}} - e^{-\frac{\Delta}{2b}}\right) \right] v_1(b) \nonumber\\ &\quad + \left(e^{\frac{\Delta}{2b}} - e^{-\frac{\Delta}{2b}}\right) v_2(b) \end{align}
where the functions v_{1,2}(b) are given by:
\begin{align} v_1(b) &= \sum_{\hat{x} \in \hat{X}{}^+} e^{-\frac{\hat{x}}{b}} \\ v_2(b) &= \sum_{\hat{x} \in \hat{X}{}^+} e^{-\frac{\hat{x}}{b}} \log e^{-\frac{\hat{x}}{b}} \end{align}
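As a quick numerical sanity check, the decomposition above can be compared against direct summation of the probability masses. A minimal sketch in Python/NumPy follows; the values b = 1.0 and \Delta = 0.5 are arbitrary choices, the logarithm is natural (entropy in nats), and the infinite sums are truncated at a large index.

import numpy as np

b, Delta = 1.0, 0.5                     # example parameters (arbitrary choice)
s = Delta / (2 * b)
xk = np.arange(1, 2000) * Delta         # positive quantization levels

# Probability masses of the quantized variable
f0 = 1 - np.exp(-s)
fk = 0.5 * np.exp(-xk / b) * (np.exp(s) - np.exp(-s))

# Entropy by direct summation over the pmf
H_direct = -(f0 * np.log(f0) + 2 * np.sum(fk * np.log(fk)))

# Entropy via the decomposition, with v1, v2 summed as series
v1 = np.sum(np.exp(-xk / b))
v2 = np.sum(np.exp(-xk / b) * (-xk / b))
negH = (f0 * np.log(f0)
        + (np.exp(s) - np.exp(-s)) * np.log(0.5 * (np.exp(s) - np.exp(-s))) * v1
        + (np.exp(s) - np.exp(-s)) * v2)

print(H_direct, -negH)                  # the two values should agree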
EXACT RESULT
The analytical expressions for v_{1,2}(b) may be computed starting from the following identities.
\begin{align} 1 &= \int_{-\infty}^{\infty} \varphi(x) dx = 2 \int_{0}^{\frac{\Delta}{2}} \varphi(x) dx + 2 \sum_{\hat{x} \in \hat{X}{}^+} \int_{\hat{x} - \frac{\Delta}{2}}^{\hat{x} + \frac{\Delta}{2}} \varphi(x) dx \\ -h_x &= \int_{-\infty}^{\infty} \varphi(x) \log \varphi(x) dx \nonumber\\ &= 2 \int_{0}^{\frac{\Delta}{2}} \varphi(x) \log \varphi(x) dx + 2 \sum_{\hat{x} \in \hat{X}{}^+} \int_{\hat{x} - \frac{\Delta}{2}}^{\hat{x} + \frac{\Delta}{2}} \varphi(x) \log \varphi(x) dx \end{align}
where h_x is the differential entropy of x. Upon simplification, we obtain the following results.
\begin{align} v_1(b) &= \frac{e^{-\frac{\Delta}{2b}}}{e^{\frac{\Delta}{2b}} - e^{-\frac{\Delta}{2b}}} \\ v_2(b) &= \frac{-h_x - c_1 - \frac{\Delta}{2b} \left[ e^{\frac{\Delta}{2b}} + e^{-\frac{\Delta}{2b}} \right] v_1(b)}{e^{\frac{\Delta}{2b}} - e^{-\frac{\Delta}{2b}}}\end{align}
where c_1 = \log\left(\frac{1}{2b}\right) - 1 + \frac{\Delta}{2b}e^{-\frac{\Delta}{2b}}.
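These closed forms can likewise be verified against direct summation of the series. A minimal sketch follows, again with arbitrary parameter values, using the natural logarithm and h_x = 1 + \log(2b), the differential entropy of \text{Laplace}(0, b) in nats.

import numpy as np

b, Delta = 1.3, 0.4                     # example parameters (arbitrary choice)
s = Delta / (2 * b)

# Direct evaluation of v1, v2 by truncated series summation
xk = np.arange(1, 2000) * Delta
e = np.exp(-xk / b)
v1_direct = np.sum(e)
v2_direct = np.sum(e * np.log(e))

# Closed forms; h_x = 1 + log(2b) is the Laplace differential entropy in nats
h_x = 1 + np.log(2 * b)
v1 = np.exp(-s) / (np.exp(s) - np.exp(-s))
c1 = np.log(1 / (2 * b)) - 1 + s * np.exp(-s)
v2 = (-h_x - c1 - s * (np.exp(s) + np.exp(-s)) * v1) / (np.exp(s) - np.exp(-s))

print(v1_direct, v1)                    # should agree
print(v2_direct, v2)                    # should agree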
We see that the expressions are quite similar to the ones obtained for the Exponential distribution in [2]. This is expected, as the Laplace and Exponential distributions are closely related.
References:
[1] Thomas M. Cover, Joy A. Thomas, Elements of Information Theory, Second Edition, Wiley 2006.
[2] Aravindh Krishnamoorthy, Entropy of Uniformly Quantized Exponential Distribution, Applied Mathematics and Engineering Blog, October 2015.