Application: Array Processing and Array Gain with Uniform Linear Arrays


Hello, welcome to another module in this massive open online course on probability and random variables for wireless communication. In the previous modules, we looked at Gaussian random variables and their properties, especially the fact that a linear combination of Gaussian random variables is again a Gaussian random variable. Today, let us look at an application of this property in wireless communication, in the context of antenna arrays, array processing, and beamforming.
Let me draw a schematic. Consider a wireless receiver with multiple antennas arranged in a line, with a constant spacing d between adjacent antennas. That is, each antenna is placed at a distance d from the next, so the spacing is uniform. Such an antenna configuration is known as a uniform linear array (ULA). Now, in this uniform linear array, consider a signal arriving at an angle θ with respect to the vertical. This angle θ is known as the angle of arrival of the signal.
Let Y_i denote the signal received at the i-th antenna: Y_1 is the signal at the first antenna, Y_2 the signal at the second antenna, and so on. We consider a scenario with L antennas, so Y_L is the signal at the L-th antenna. To summarize, θ is the angle of arrival, d is the antenna spacing of the uniform linear array, there are L antennas, and Y_1, Y_2, ..., Y_L are the signals received at these L antennas. Now let us write the system model for this uniform linear array.
The system model is as follows. The signal received at the i-th antenna is

Y_i = X e^(-j(i-1)φ) + W_i,

where X is the transmitted signal and W_i is Gaussian noise. Substituting i = 1, 2, ..., L: for i = 1 the phase factor e^(-j(i-1)φ) equals 1, so Y_1 = X + W_1; then Y_2 = X e^(-jφ) + W_2, Y_3 = X e^(-j2φ) + W_3, and so on, up to Y_L = X e^(-j(L-1)φ) + W_L. This is the system model.
If we ignore the noise, the received signal at the first antenna is X, at the second antenna it is X e^(-jφ), and at the third antenna it is X e^(-j2φ). That is, the signal at each successive receive antenna carries an additional phase factor of e^(-jφ) relative to the signal at the previous receive antenna.
Therefore, this uniform linear array is also known as a phased antenna array, or simply a phased array, because the signal at each successive antenna has a phase difference of e^(-jφ) with respect to the signal at the previous antenna. Now, let us look at the signal processing for this phased array: how do we process the received signals Y_1, Y_2, ..., Y_L?
Before writing the vector system model, let me describe what this angle φ is. The phase φ is related to the angle of arrival θ as

φ = 2π f_c d cos θ / c,

where f_c is the carrier frequency and c is the speed of light. Since c / f_c = λ, the carrier wavelength, this can be written more clearly as

φ = 2π d cos θ / λ,

where, as before, θ is the angle of arrival, that is, the angle between the incoming signal and the vertical, for this uniform linear array or phased array.
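As a minimal sketch of this relationship, the snippet below computes φ both ways and confirms the two expressions agree; the carrier frequency, antenna spacing, and angle of arrival are assumed example values, not values from the lecture.

```python
import numpy as np

c = 3e8                     # speed of light (m/s)
fc = 2.4e9                  # assumed carrier frequency (Hz)
lam = c / fc                # wavelength lambda = c / fc
d = lam / 2                 # assumed half-wavelength antenna spacing
theta = np.deg2rad(60.0)    # assumed angle of arrival (radians)

# phi = 2*pi*fc*d*cos(theta)/c, equivalently 2*pi*d*cos(theta)/lambda
phi_1 = 2 * np.pi * fc * d * np.cos(theta) / c
phi_2 = 2 * np.pi * d * np.cos(theta) / lam
print(phi_1, phi_2)         # both give the same phase difference between adjacent antennas
```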
Now let us look at the signal processing for this phased array. Stacking the received signals in a vector gives the vector system model

[Y_1, Y_2, ..., Y_L]^T = [1, e^(-jφ), e^(-j2φ), ..., e^(-j(L-1)φ)]^T X + [W_1, W_2, ..., W_L]^T.

Denote the received vector by Ȳ, the vector multiplying X by H̄(φ), and the noise vector by W̄. Since φ is a function of the angle of arrival θ, H̄ can equally be viewed as a function of θ: a different angle of arrival corresponds to a different vector H̄. This vector H̄(φ) is known as the array steering vector, as if the array were being steered in the direction of the angle of arrival θ. So the system model is

Ȳ = H̄(φ) X + W̄,

where Ȳ is the received signal vector with Y_i the signal received at the i-th antenna, X is the transmitted signal, W̄ is the noise vector, and H̄(φ) is the array steering vector; we have expressed the system model in terms of the array steering vector.
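Here is a minimal sketch of this vector model, assuming illustrative values for the number of antennas L, the phase φ, the transmitted symbol, and the noise power; these particular numbers are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 8            # assumed number of antennas
phi = 1.2        # assumed phase difference between adjacent antennas
sigma2 = 0.1     # assumed noise power sigma^2

# Array steering vector h(phi) = [1, e^{-j phi}, ..., e^{-j (L-1) phi}]^T
h = np.exp(-1j * phi * np.arange(L))

x = 1.0 + 0.0j   # assumed transmitted symbol
# Complex Gaussian noise with E[|W_i|^2] = sigma^2 (variance split over real/imag parts)
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

y = h * x + w    # received vector Y = h(phi) * X + W
print(y)
```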
So we have Ȳ = H̄(φ) X + W̄, where H̄(φ) is the array steering vector. Now let us look at the properties of the noise, which is where the results of the previous module come in. Consider the noise samples W_1, W_2, ..., W_L to be IID Gaussian, that is, independent and identically distributed Gaussian random variables with zero mean: each noise sample W_i has mean 0. Further, assume each noise sample has power, or variance, E[|W_i|²] = σ²; this is the noise power. We also assume that the noise samples at different antennas are uncorrelated; since this is complex noise, we write E[W_i W_j*] = 0 for i ≠ j. In fact these noise samples are independent, which automatically implies that they are uncorrelated, and remember that for Gaussian random variables the converse also holds: uncorrelated jointly Gaussian samples are independent. To summarize, we are considering independent, identically distributed Gaussian noise samples with zero mean, variance σ², and zero covariance between the noise samples at two different antennas.
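A quick numerical check of these noise properties, with assumed values of L, σ², and the number of realizations, is sketched below: the sample covariance of IID zero-mean complex Gaussian noise across antennas is approximately σ² times the identity matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, sigma2 = 4, 200_000, 0.5   # assumed values for the check

# N independent draws of the length-L noise vector, with E[|W_i|^2] = sigma^2
W = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))

# Sample estimate of E[W W^H]; should be close to sigma^2 * identity
R = (W.T @ W.conj()) / N
print(np.round(R, 3))            # ~sigma^2 on the diagonal, ~0 off the diagonal
```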
Now let us consider combining the received samples Y_1, Y_2, ..., Y_L. Consider a combiner, also known as a beamformer, Ā = [A_1, A_2, ..., A_L]^T, a vector of size L, and form

Ỹ = A_1* Y_1 + A_2* Y_2 + ... + A_L* Y_L = [A_1*, A_2*, ..., A_L*] [Y_1, Y_2, ..., Y_L]^T = Ā^H Ȳ,

where Ā^H denotes the Hermitian (conjugate transpose) of Ā. That is, we combine the received samples Y_1, Y_2, ..., Y_L using the weights A_1, A_2, ..., A_L, which can be written compactly as Ā^H Ȳ. This combining vector Ā is known as the beamformer, and the process is termed beamforming.
Now we have

Ỹ = Ā^H Ȳ = Ā^H (H̄(φ) X + W̄) = Ā^H H̄(φ) X + Ā^H W̄,

where Ā^H H̄(φ) X is the signal part and Ā^H W̄ is the noise part. In addition, let the transmitted signal power be P, that is, E[|X|²] = P; remember that the power in the transmitted signal is simply the variance of the signal.
Let us now look at the signal-to-noise power ratio (SNR) of this system. We are applying the beamformer Ā^H Ȳ, and we have isolated the signal part and the noise part. The SNR is the ratio of the power in the signal to the power in the noise:

SNR = |Ā^H H̄(φ)|² E[|X|²] / noise power = |Ā^H H̄(φ)|² P / noise power.

Now let us work out the noise power of this system.
This is where the properties of Gaussian random variables come in handy. The output noise is Ā^H W̄, a linear combination of Gaussian random variables. We have seen previously that if W_1, W_2, ..., W_L are IID Gaussian with zero mean, then the linear combination Ā^H W̄ formed with the coefficients A_1, A_2, ..., A_L is also Gaussian. Further, since each sample W_i has zero mean, the resulting noise Ā^H W̄ is also zero mean, E[Ā^H W̄] = 0. The resulting noise power, or variance, is

E[|Ā^H W̄|²] = σ² ||Ā||².

We saw this in the previous module in the context of real vectors, where we considered Ā^T W̄; here the vectors are complex, so we consider Ā^H W̄. So the output noise is zero mean, and since all the noise samples have identical variance σ², its variance is σ² times the squared norm of Ā.
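This claim, that the combined noise Ā^H W̄ is zero mean with variance σ²‖Ā‖², can be checked by a small simulation; the combining vector, σ², and the number of realizations below are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 0.3
a = np.array([1.0 - 0.5j, 0.2 + 1.0j, -0.7 + 0.3j])   # an arbitrary combining vector
L, N = len(a), 500_000

# IID zero-mean complex Gaussian noise samples with E[|W_i|^2] = sigma^2
W = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))
z = W @ a.conj()                     # z_n = a^H w_n for each noise realization

print(np.mean(z))                    # close to 0 (zero mean)
print(np.mean(np.abs(z) ** 2))       # close to sigma^2 * ||a||^2
print(sigma2 * np.linalg.norm(a) ** 2)
```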
Therefore the SNR becomes

SNR = |Ā^H H̄(φ)|² P / (σ² ||Ā||²),

where |Ā^H H̄(φ)|² P is the signal power, with H̄(φ) the array steering vector and P the transmitted signal power, and σ² ||Ā||² is the noise power. This is the expression for the SNR at the output for a general beamformer Ā.
Now we want to find the beamforming vector Ā, that is, the beamforming weights A_1, A_2, ..., A_L, which maximizes the SNR at the output of this uniform linear array or phased array, so that we enhance the signal-to-noise power ratio. From the Cauchy-Schwarz inequality (sometimes simply called the Schwarz inequality), we have

|Ā^H H̄(φ)|² ≤ ||Ā||² ||H̄(φ)||².

Therefore

SNR ≤ ||Ā||² ||H̄(φ)||² P / (σ² ||Ā||²).

The factor ||Ā||² in the numerator cancels with the factor ||Ā||² in the denominator, so the maximum SNR is

SNR_max = ||H̄(φ)||² P / σ².

This is what we derive from the Cauchy-Schwarz inequality.
Now let us see what this maximum SNR is. Recall that H̄(φ) = [1, e^(-jφ), e^(-j2φ), ..., e^(-j(L-1)φ)]^T, so

||H̄(φ)||² = 1 + |e^(-jφ)|² + |e^(-j2φ)|² + ... + |e^(-j(L-1)φ)|².

All of these entries are phase factors, so each magnitude squared equals 1, and the sum is 1 taken L times, which equals L. Therefore ||H̄(φ)||² = L, and the maximum SNR is

SNR_max = L P / σ².

So we are saying something very interesting: the maximum SNR is ||H̄(φ)||² P / σ², and we have just seen that ||H̄(φ)||² = L.
So the SNR at the output of the beamformer is L P / σ². Remember, without the array, the initial signal-to-noise power ratio is simply P / σ². Using this phased array, or uniform linear array, we are able to multiply the signal-to-noise power ratio by a factor of L, where L is the number of antennas. This factor-of-L gain in the SNR at the output of the uniform linear array is known as the array gain. The uniform linear array is therefore a very important technology, because it yields a gain of a factor of L in the signal-to-noise power ratio at the receiver. That is, the SNR of the phased array, or uniform linear array, equals L times the initial SNR, where the initial SNR P / σ² is the SNR without the array; this factor of L is the array gain of the system.
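A small sanity check of the array gain, with assumed example values of L, P, σ², and φ: since every entry of the steering vector has unit magnitude, ||H̄(φ)||² equals L for any φ, and the maximum SNR is L times the single-antenna SNR P/σ².

```python
import numpy as np

L, P, sigma2, phi = 16, 2.0, 0.5, 0.9            # assumed example values

h = np.exp(-1j * phi * np.arange(L))             # array steering vector
snr_single = P / sigma2                          # SNR without the array
snr_max = np.linalg.norm(h) ** 2 * P / sigma2    # ||h||^2 * P / sigma^2

print(np.linalg.norm(h) ** 2)                    # equals L (each entry has unit magnitude)
print(snr_max / snr_single)                      # array gain: equals L
```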
Now, how do we choose the weights A_1, A_2, ..., A_L? Remember, in the Cauchy-Schwarz inequality

|Ā^H H̄(φ)|² ≤ ||Ā||² ||H̄(φ)||²,

equality occurs only when Ā, the combining vector, is proportional to H̄(φ). One way to achieve equality, and hence the maximum SNR, is therefore to simply set Ā = H̄(φ), that is,

Ā = [1, e^(-jφ), e^(-j2φ), ..., e^(-j(L-1)φ)]^T.

With this choice, the optimal combining is

Ỹ = A_1* Y_1 + A_2* Y_2 + ... + A_L* Y_L = Y_1 + e^(jφ) Y_2 + e^(j2φ) Y_3 + ... + e^(j(L-1)φ) Y_L.
This is the optimal combiner: Ā^H Ȳ = [A_1*, A_2*, ..., A_L*] [Y_1, Y_2, ..., Y_L]^T = Y_1 + e^(jφ) Y_2 + e^(j2φ) Y_3 + ... + e^(j(L-1)φ) Y_L. This optimal combiner, which results in the maximum SNR of L P / σ², is known as the maximal ratio combiner (MRC), since it maximizes the ratio of the signal power to the noise power at the receiver. It is also known as the matched filter, since with Ā = H̄(φ) the receive filter is matched to the array steering vector. So this particular beamformer, Ā = H̄(φ), is known as the maximal ratio combiner, or equivalently the matched filter.
Alright, so this is a unique application of the properties of Gaussian random variables in the context of signal processing for wireless communications. We considered a receiver with L receive antennas with equal antenna spacing, that is, arranged as a uniform linear array or phased array, and a signal arriving at an angle θ with respect to the vertical, the angle of arrival. We considered independent, identically distributed noise samples at the different receive antennas, and we demonstrated that using the maximal ratio combiner, that is, the optimal beamformer or matched filter, one can enhance the signal-to-noise power ratio of this uniform linear array or phased array system by a factor of L, where L is the number of antennas. This factor L is known as the array gain of the system.
This is possible because of the coherent combining of the signals received across the various receive antennas. Remember, we are forming Y_1 + e^(jφ) Y_2 + e^(j2φ) Y_3 + ..., that is, we invert the phase of the received signal on each antenna before combining, so the signal components add coherently, whereas the random Gaussian noise combines incoherently. This is why we are able to enhance the signal-to-noise power ratio by the factor of L, the array gain. This demonstrates an important application of both signal processing in wireless systems and of the principle of combining Gaussian noise samples across the various receive antennas.
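This coherent-versus-incoherent combining argument can also be seen numerically. The sketch below, with assumed parameter values, estimates the output SNR of the maximal ratio combiner over many noise realizations: the signal power grows as L², the noise power only as L, so the output SNR is close to L·P/σ².

```python
import numpy as np

rng = np.random.default_rng(4)
L, phi, P, sigma2, N = 8, 0.7, 1.0, 0.5, 200_000   # assumed example values

h = np.exp(-1j * phi * np.arange(L))                # steering vector
x = np.sqrt(P)                                      # fixed transmitted symbol with |x|^2 = P

W = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))
Y = h * x + W                                       # N received vectors, one per row
Z = Y @ h.conj()                                    # MRC output a^H y for each realization, a = h

signal = np.linalg.norm(h) ** 2 * x                 # h^H h x = L x: signals add coherently
noise_power = np.mean(np.abs(Z - signal) ** 2)      # power of h^H w: noise adds incoherently
print(np.abs(signal) ** 2 / noise_power)            # output SNR, close to L * P / sigma^2
print(L * P / sigma2)                               # array gain of L over the single-antenna SNR
```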
So, with this, let us end this module here; we will look at other aspects in the subsequent lectures. Thank you very much.
