Friday 29 December 2017

Stochastic Properties of Random Matrix Factors - 3

In this third post on QR and the related Cholesky matrix decomposition techniques, we look at the probability density function (PDF) of a diagonal element of the QR decomposition of a complex-valued random matrix with circular Gaussian i.i.d. elements.

Let $\mathbf{H}$ denote an $N\times N$ matrix with circular Gaussian independent and identically distributed (i.i.d.) elements with zero mean and variance two, i.e. $\mathcal{CN}(0,2)$. Furthermore, let $\mathbf{R}$ denote the upper-triangular matrix obtained from the QR decomposition ($\mathbf{H} = \mathbf{Q}\mathbf{R}$) such that the diagonal elements of $\mathbf{R}$ are non-negative. It is well known that such a QR decomposition (QRP, or QR with positive diagonal elements) of a matrix is unique. In case a QR decomposition algorithm results in negative diagonal elements, non-negativity may be arranged by multiplying those rows of $\mathbf{R}$ and the corresponding columns of $\mathbf{Q}$ by $-1$.

It is well known that the elements of $\mathbf{R}$ are distributed as:
$$
\begin{bmatrix}
\chi_{2N} & \mathcal{CN}(0,2) & \cdots &  \mathcal{CN}(0,2)\\
 & \chi_{2(N-1)} & \ddots & \vdots\\
 & & \ddots & \mathcal{CN}(0,2)\\
 & & & \chi_{2}
\end{bmatrix},
$$
where $\chi_{k}$ denotes a Chi-distributed random variable with $k$ degrees of freedom.

From the above, it follows that the PDF of a diagonal element $r$ of $\mathbf{R}$, chosen uniformly at random, is the equally weighted mixture of the individual chi PDFs:
$$
f_r(x) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{2^{n-1}\,\Gamma(n)}\, x^{2n-1} e^{-x^2/2},
$$
where $\Gamma(n)$ is the Gamma function.
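As a quick sanity check, the chi distributions above can be verified numerically without any toolbox: since the $n$-th diagonal element (1-based) is $\chi_{2(N+1-n)}$-distributed, its second moment must equal $2(N+1-n)$. The following is a minimal pure-Python sketch (standard library only, with a hand-rolled classical Gram-Schmidt QR) that estimates these moments by Monte Carlo:

```python
import random, math

def qr_diag(H):
    """Diagonal of R from a classical Gram-Schmidt QR of the complex
    square matrix H (list of rows); r_nn is the norm of the n-th residual."""
    N = len(H)
    Q, diag = [], []
    for j in range(N):
        v = [H[i][j] for i in range(N)]          # j-th column of H
        for q in Q:
            # inner product <q, v> (conjugation on q)
            c = sum(qi.conjugate() * vi for qi, vi in zip(q, v))
            v = [vi - c * qi for vi, qi in zip(v, q)]
        norm = math.sqrt(sum(abs(vi) ** 2 for vi in v))
        diag.append(norm)
        Q.append([vi / norm for vi in v])
    return diag

random.seed(1)
N, trials = 4, 20000
sums = [0.0] * N
for _ in range(trials):
    # CN(0,2) elements: real and imaginary parts each N(0,1)
    H = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
         for _ in range(N)]
    for n, d in enumerate(qr_diag(H)):
        sums[n] += d * d

for n in range(N):
    # r_nn ~ chi_{2(N-n)} for 0-based n, so E[r_nn^2] = 2(N-n)
    print(n + 1, round(sums[n] / trials, 2), 2 * (N - n))
```

The empirical second moments land close to $8, 6, 4, 2$ for $N=4$, matching the degrees of freedom of the respective chi distributions.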

The following graph shows the results of a MATLAB simulation for $N=4$ over 100000 iterations in which the diagonal values of $\mathbf{R}$ were collected. The blue graph shows the histogram of the diagonal values; the red graph shows the PDF obtained from the equation above. The two correspond well.


ARK

Thursday 23 November 2017

Generating and Sampling Two Signed Bernoulli Random Variables with Given Correlation

The following first constructs two signed $\{-1, +1\}$ Bernoulli variables $Y_1$, $Y_2$ such that $p(Y_1 = -1) = p_1$, $p(Y_2 = -1) = p_2$, and $E[Y_1 Y_2] = r$. Constraints: $0 < p_1, p_2 < 1$ and $-1 < r < 1$; not every $r$ in this range is feasible for a given $p_1, p_2$, which the assertion in the code checks.

Next, it draws $N$ realizations of the two variables in $Y(m,:), m = 1,2.$ The sample pdf and cross correlation converge as $N \to \infty.$

Example: generate $N$ realizations of variables with mean $0$, variance $1$, and correlation $0.3$: Y = signed_bernoulli2(N,0.5,0.5,0.3).


function Y = signed_bernoulli2(N,p1,p2,r)
% Resolve the joint pdf given by
%                / p(1) if (+1, +1)
% p(Y_1, Y_2) = |  p(2) if (+1, -1)
%               |  p(3) if (-1, +1)
%                \ p(4) if (-1, -1)
%
% Conditions (i.e. how matrices A and b are generated):
% row 1: p(1) + p(2) + p(3) + p(4) = 1
% row 2: p(3) + p(4) = p1 -> marginal for Y_1
% row 3: p(2) + p(4) = p2 -> marginal for Y_2
% row 4: p(1) - p(2) - p(3) + p(4) = r -> cross correlation E[Y_1 Y_2]
A = [1,1,1,1;0,0,1,1;0,1,0,1;1,-1,-1,1] ;
b = [1;p1;p2;r] ;
p = pinv(A)*b ;
assert(all(p >= 0), sprintf(['The given p1=%f and p2=%f cannot result in '...
' a correlation r=%f with the given algorithm.\np1=p2=0.5 is a safe value.'],...
p1,p2,r)) ;
% Now that we have the joint pdf in "p" we can draw from this distribution
% as follows (based on conditional pdfs and inverse transform sampling):
% 1. Draw samples of Y_1 from Signed_Bernoulli(p1) in a standard fashion using
% an auxiliary uniformly distributed variable U_1 = U(1,:).
% 2. If the n-th sample of Y_1 is "-1", then draw Y_2 from the conditional pdf
%    p(Y_2|Y_1 = -1) using a second auxiliary uniformly distributed
%    variable U_2 = U(2,:).
% 3. If the n-th sample of Y_1 is "+1", then draw Y_2 from the conditional pdf
%    p(Y_2|Y_1 = +1) using the same auxiliary uniformly distributed
%    variable U_2 = U(2,:).
% The conditional pdfs are given as follows:
% p(Y_2|Y_1 = -1) = p(4)/p1 for -1, and 1-(p(4)/p1) for +1.
% p(Y_2|Y_1 = +1) = p(2)/(1-p1) for -1, and 1-(p(2)/(1-p1)) for +1.
%
% Note: for the rather straightforward proofs for the above steps, please contact
% the author.

% Auxiliary variable U
U = rand(2,N) ;
% Draw samples from p(Y_1)
Y(1,:) = (-1).^(U(1,:) < p1) ;
% Indices to help choose the conditional pdf
I1 = Y(1,:) == -1 ;
I2 = Y(1,:) == +1 ;
% Draw samples from the conditional pdfs
Y(2,I1) = (-1).^(U(2,I1) < p(4)/p1) ;
Y(2,I2) = (-1).^(U(2,I2) < p(2)/(1-p1)) ;
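For readers without MATLAB, here is a pure-Python port of the routine above, a sketch using only the standard library. The pinv solve is replaced by the closed-form solution of the same $4\times 4$ linear system, and an infeasible $(p_1, p_2, r)$ combination raises an error instead of tripping an assert:

```python
import random

def signed_bernoulli2(N, p1, p2, r, rng=random):
    """Draw N pairs of signed {-1,+1} Bernoulli variables with
    P(Y1 = -1) = p1, P(Y2 = -1) = p2 and E[Y1*Y2] = r."""
    # Closed-form solution of the 4x4 linear system used above:
    q_mm = (2 * p1 + 2 * p2 - 1 + r) / 4    # P(Y1 = -1, Y2 = -1), i.e. p(4)
    q_pm = p2 - q_mm                        # P(Y1 = +1, Y2 = -1), i.e. p(2)
    if not (0 <= q_mm <= p1 and 0 <= q_pm <= 1 - p1):
        raise ValueError("p1, p2, r are jointly infeasible; "
                         "p1 = p2 = 0.5 is a safe choice")
    y1, y2 = [], []
    for _ in range(N):
        # Inverse-transform sampling: Y1 from its marginal,
        # then Y2 from the conditional pdf p(Y2 | Y1)
        a = -1 if rng.random() < p1 else +1
        cond = q_mm / p1 if a == -1 else q_pm / (1 - p1)
        y1.append(a)
        y2.append(-1 if rng.random() < cond else +1)
    return y1, y2

random.seed(7)
y1, y2 = signed_bernoulli2(100000, 0.5, 0.5, 0.3)
n = len(y1)
print(y1.count(-1) / n, y2.count(-1) / n)      # both close to 0.5
print(sum(a * b for a, b in zip(y1, y2)) / n)  # close to 0.3
```

The closed form follows from the same four conditions listed in the comments: adding rows 1 and 4 gives $p(1)+p(4) = (1+r)/2$, and substituting the two marginals yields $p(4) = (2p_1+2p_2-1+r)/4$.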

Friday 25 August 2017

Stochastic Properties of Random Matrix Factors - 2

In the second post in this series on stochastic properties of factors of random matrices, we look at yet another simple case: the norm square of the elements of the QR decomposition of a scaled Rayleigh matrix. More below.

In applications such as Non-Orthogonal Multiple Access (NOMA), the so-called "path loss" plays an important role whereby a channel matrix $\mathbf{H}$ at a receiver is better modelled as a scaled Rayleigh matrix given by $$\mathbf{H} = \frac{1}{\sqrt{L}} \mathbf{G}$$ where $L > 0$ is the path-loss factor and $\mathbf{G}$ the conventional Rayleigh matrix with real and imaginary parts of the elements each $\mathcal{N}(0,1)$.

Let (by QR decomposition) $\mathbf{H} = \mathbf{Q}\mathbf{R}$ where $\mathbf{Q} \in \mathbb{C}^{M\times M}$ is a unitary matrix, and $\mathbf{R} \in \mathbb{C}^{M\times N}$ is upper triangular. The distribution of the norm square of elements, $|[\mathbf{R}]_{ij}|^2$, of $\mathbf{R}$ is given by $$|[\mathbf{R}]_{ij}|^2 \sim \begin{cases} \Gamma(M+1-i,\frac{2}{L}) & \text{if $i = j$} \\ \Gamma(1,\frac{2}{L}) & \text{if $i < j$} \\ 0 & \text{otherwise,} \end{cases}$$ where $i=1,2,\cdots,M$, $j=1,2,\cdots,N$, and $\Gamma(k,\theta)$ denotes the Gamma distribution with probability density function $$f(x; k,\theta) = \frac{1}{\Gamma(k)\theta^k}x^{k-1}e^{-\frac{x}{\theta}}$$ where $\Gamma(k)$ is the complete Gamma function.
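These Gamma laws can be checked by a small Monte Carlo sketch as well, via the mean of $\Gamma(k,\theta)$, which is $k\theta$: so $E\big[|[\mathbf{R}]_{ii}|^2\big] = (M+1-i)\frac{2}{L}$ on the diagonal and $\frac{2}{L}$ above it. The sketch below is pure Python (standard library only, hand-rolled Gram-Schmidt QR) with illustrative parameters $M=4$, $N=2$, $L=4$:

```python
import random, math

def qr_upper(H, M, N):
    """Classical Gram-Schmidt QR of the complex M x N matrix H (list of rows);
    returns the upper-triangular entries of R as {(i, j): r_ij}, 0-based."""
    Q, R = [], {}
    for j in range(N):
        v = [H[i][j] for i in range(M)]
        for i, q in enumerate(Q):
            c = sum(qi.conjugate() * vi for qi, vi in zip(q, v))
            R[(i, j)] = c
            v = [vi - c * qi for vi, qi in zip(v, q)]
        norm = math.sqrt(sum(abs(vi) ** 2 for vi in v))
        R[(j, j)] = norm
        Q.append([vi / norm for vi in v])
    return R

random.seed(3)
M, N, L, trials = 4, 2, 4.0, 20000
s = math.sqrt(1.0 / L)      # per-component std dev of H = G / sqrt(L)
acc = {}
for _ in range(trials):
    H = [[complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(N)]
         for _ in range(M)]
    for ij, v in qr_upper(H, M, N).items():
        acc[ij] = acc.get(ij, 0.0) + abs(v) ** 2

for (i, j), total in sorted(acc.items()):
    k = M - i if i == j else 1      # Gamma shape: M + 1 - i at 1-based i = j
    print(i + 1, j + 1, round(total / trials, 3), k * 2 / L)
```

For these parameters the empirical means of $|r_{11}|^2$, $|r_{12}|^2$ and $|r_{22}|^2$ land close to $2.0$, $0.5$ and $1.5$ respectively.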

The stochastic properties of the diagonal elements of $\mathbf{R}$ are interesting as they can be used to derive insights, amongst others, about mutual information of "layers" (i.e. parallel streams of information, one per row) or the determinant of the covariance matrix of $\mathbf{H}$.

The figure below illustrates the distributions of the elements of $\mathbf{R}$ for $M=4, N=2$, computed numerically using MATLAB. Normalized numerical quantities obtained over 100000 trials are plotted in the blue bar graph, and the theoretical expectation is plotted in red. The numerical results closely match the theoretical expectation.


ARK