# Discrete Uniform Distribution

So here I am going to list a few standard discrete and continuous random variables. These are the standard distributions we are going to use in our course. The first one is the discrete uniform distribution.
Suppose the random variable X is discrete uniformly distributed on the points x1, x2, ..., xm. That means the random variable takes the possible values x1 to xm, with an equal mass of 1/m at each xi for i varying from 1 to m, and the probability is zero everywhere else. In that case we say the random variable has a discrete uniform distribution. It satisfies the required properties: the sum of the masses over all the xi is one, and P(X = xi) is greater than zero at each xi and zero at every other point. Therefore it satisfies the definition of a probability mass function, and P(X = xi) = 1/m is the probability mass function of the discrete random variable X.
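The discrete uniform PMF is easy to check numerically. Below is a minimal sketch in plain Python; the particular support points are illustrative choices, not values from the lecture.

```python
# Sketch of the discrete uniform PMF on illustrative support points
# x1, ..., xm (these particular values are assumptions for the example).
support = [2, 5, 7, 11]       # x1, ..., xm with m = 4
m = len(support)

def pmf(x):
    """P(X = x) = 1/m if x is one of the xi, and 0 otherwise."""
    return 1.0 / m if x in support else 0.0

# The masses are equal, positive on the support, and sum to one.
print(sum(pmf(x) for x in support))   # 1.0
print(pmf(5))                         # 0.25
print(pmf(3))                         # 0.0
```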
The second discrete distribution is the binomial distribution. We say the random variable X is binomially distributed with parameters n and p when its probability mass function is nCx p^x (1 - p)^(n - x), where x takes the values 0, 1, 2, ..., n. That means the probability mass function of a binomial distribution has n + 1 jump points, at x = 0 to n. If we put n equal to one, we get the Bernoulli distribution. Here p is nothing but the probability of success in each trial, and you can create a binomial random variable from n independent Bernoulli trials, where each trial has probability of success p.
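Both facts above, the PMF formula and the construction from n independent Bernoulli trials, can be sketched in Python. The values n = 10 and p = 0.3 are illustrative assumptions.

```python
import math
import random

# Illustrative parameters (assumptions, not from the lecture).
n, p = 10, 0.3

def binom_pmf(x):
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x) for x = 0, ..., n."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

# The n + 1 masses sum to one.
print(abs(sum(binom_pmf(x) for x in range(n + 1)) - 1.0) < 1e-12)  # True

# A binomial draw is the number of successes in n independent
# Bernoulli(p) trials.
rng = random.Random(0)
def binom_sample():
    return sum(1 for _ in range(n) if rng.random() < p)

samples = [binom_sample() for _ in range(20000)]
sample_mean = sum(samples) / len(samples)   # close to n*p = 3.0
```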
The third discrete random variable we are going to use is the geometric distribution. We say the random variable X is geometrically distributed with parameter p when its probability mass function is (1 - p)^(r - 1) multiplied by p, where r can take the values 1, 2, and so on. That means if any discrete random variable has a probability mass function of this form, we say it is geometrically distributed with parameter p. Here p can be treated as the probability of success in each trial, and P(X = r) is the probability that the r-th trial gives the first success. Since all the trials are independent, you have r - 1 failures followed by the first success on the r-th trial: the r - 1 non-successes contribute the factor (1 - p)^(r - 1), and the first success on the r-th trial contributes the factor p.
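The "r - 1 failures, then one success" reasoning can be checked directly against the PMF formula; p = 0.3 is an illustrative assumption.

```python
# Sketch of the geometric PMF; p = 0.3 is an illustrative value.
p = 0.3

def geom_pmf(r):
    """P(first success on trial r) = (1 - p)^(r - 1) * p, r = 1, 2, ..."""
    return (1 - p) ** (r - 1) * p

# r - 1 independent failures, then a success on the r-th trial:
r = 4
print(abs(geom_pmf(r) - (0.7 * 0.7 * 0.7) * 0.3) < 1e-15)  # True

# The countably infinite masses sum to one (truncating the tiny tail):
print(abs(sum(geom_pmf(r) for r in range(1, 200)) - 1.0) < 1e-12)  # True
```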
Next we are moving into the continuous random variables. The first one is the continuous uniform distribution. We say the random variable X is continuous uniformly distributed on the interval a to b when its probability density function is 1/(b - a) on the interval a to b, and zero everywhere else. That means the probability density function is flat between a and b, with constant height 1/(b - a): if you integrate this height over the range a to b, you get one, and the density is always greater than or equal to zero. Therefore this is a valid probability density function of a continuous random variable. In general, whenever a continuous random variable has a density equal to one divided by the length of the interval on which it takes its values, and zero everywhere else, that random variable is called continuous uniform on the interval a to b.

If you look at the CDF of this random variable, it is zero up to a, it increases after a, and it reaches one at the point b. That means you can draw the converse conclusion: if the CDF of any random variable rises from zero to one as a straight line over an interval a to b, you can read off the points a and b and conclude that the random variable is continuous uniform on the interval a to b.
The second continuous distribution is the exponential distribution. We say the continuous random variable X is exponentially distributed with parameter lambda if its probability density function is lambda times e^(-lambda x) for x greater than zero, and zero otherwise. That means within the range 0 to infinity, f(x) = lambda e^(-lambda x), and everywhere else it is zero. If you plot the probability density function of this continuous random variable, it starts from the value lambda and asymptotically approaches zero. The CDF correspondingly increases from zero and reaches one at infinity. This exponential distribution is going to be used in many of our later problems; I will discuss all its properties when we discuss stochastic processes in detail.
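The two plotted features just described, a density starting at lambda and a CDF climbing toward one, can be verified numerically. The parameter lambda = 2.0 is an illustrative assumption.

```python
import math

# Sketch of the exponential density and CDF; lambda = 2.0 is illustrative.
lam = 2.0

def exp_pdf(x):
    """f(x) = lambda * e^(-lambda x) for x > 0, zero otherwise."""
    return lam * math.exp(-lam * x) if x > 0 else 0.0

def exp_cdf(x):
    """F(x) = 1 - e^(-lambda x) for x > 0; approaches one as x grows."""
    return 1.0 - math.exp(-lam * x) if x > 0 else 0.0

# The density starts (essentially) at lambda and decays toward zero;
# the CDF climbs toward one.
print(abs(exp_pdf(1e-12) - lam) < 1e-6)   # True: near lambda at the origin
print(exp_cdf(20.0) > 0.999999)           # True: close to one
```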
The third continuous distribution is the normal, or Gaussian, distribution. We say the random variable X is normally distributed with parameters mu and sigma squared when its probability density function is 1/(sigma sqrt(2 pi)) times e^(-(1/2)((x - mu)/sigma)^2). Here x can lie between minus infinity and infinity, mu can also lie between minus infinity and infinity, and sigma is a strictly positive quantity. The mu is nothing but the mean of the normal distribution, sigma squared is the variance of the normal distribution, and sigma is the standard deviation, which is always strictly greater than zero. If you plot the probability density function, say with mu equal to zero, it looks like a bell-shaped curve. You can always convert a normal distribution into the standard normal by the substitution z = (x - mu)/sigma. You end up with the standard normal density, 1/sqrt(2 pi) times e^(-z^2/2), where z lies between minus infinity and infinity. This is the standard normal distribution, in which the mean is zero and the variance is one.
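The standardization z = (x - mu)/sigma relates the two densities by f(x) = (1/sigma) phi(z), which a short sketch can confirm. The values mu = 1.5 and sigma = 2.0 are illustrative assumptions.

```python
import math

# Sketch: the normal density and the standardization z = (x - mu)/sigma.
# mu and sigma below are illustrative parameter values.
mu, sigma = 1.5, 2.0

def normal_pdf(x):
    """f(x) = (1/(sigma*sqrt(2 pi))) * e^(-(1/2)((x - mu)/sigma)^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def std_normal_pdf(z):
    """Standard normal density: mean 0, variance 1."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# Substituting z = (x - mu)/sigma relates the two densities:
# f(x) = (1/sigma) * phi((x - mu)/sigma).
x = 3.0
z = (x - mu) / sigma
print(abs(normal_pdf(x) - std_normal_pdf(z) / sigma) < 1e-15)   # True
```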
So far, among the standard discrete distributions, we have discussed the discrete uniform distribution, the binomial distribution, and the geometric distribution. The fourth one, which is very important, is the Poisson distribution. We say the discrete random variable X is Poisson distributed with parameter lambda if its probability mass function is e^(-lambda) lambda^x / x!, where x can take the values 0, 1, 2, and so on. That means this is a discrete random variable with countably infinite masses: these are the jump points, and the mass at x is e^(-lambda) lambda^x / x!. Here lambda is strictly greater than zero. That means if any discrete random variable has a probability mass function of this form, we can say that random variable is Poisson distributed with parameter lambda.
If you plot the probability mass function for the different values of x, then for whatever lambda you have chosen there is some mass at zero, some other mass at one, at two, and so on. That means for a fixed lambda you can draw the probability mass function: it has countably infinite masses, the masses are always greater than zero, all other points get zero, and if you add the masses from zero to infinity the sum is one. This is going to be a very important distribution, because using it we are going to construct one stochastic process called the Poisson process. In the Poisson process each random variable is Poisson distributed, so for that we should know the probability mass function of the Poisson distribution and its properties. Here, if you compute the mean of the Poisson distribution, the mean is lambda, and the variance is also lambda. So this is one particular distribution in which the mean and the variance are both equal to the parameter lambda.
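The claims that the masses sum to one and that the mean and variance both equal lambda can be checked by truncating the (rapidly vanishing) tail. The value lambda = 3.5 is an illustrative assumption.

```python
import math

# Sketch of the Poisson PMF; lambda = 3.5 is an illustrative value.
lam = 3.5

def pois_pmf(x):
    """P(X = x) = e^(-lambda) * lambda^x / x!, for x = 0, 1, 2, ..."""
    return math.exp(-lam) * lam**x / math.factorial(x)

# Countably infinite masses; truncating at 60 leaves a negligible tail.
xs = range(0, 60)
print(abs(sum(pois_pmf(x) for x in xs) - 1.0) < 1e-12)   # True

# Mean and variance are both equal to the parameter lambda.
mean = sum(x * pois_pmf(x) for x in xs)
var = sum((x - mean) ** 2 * pois_pmf(x) for x in xs)
print(abs(mean - lam) < 1e-9, abs(var - lam) < 1e-9)     # True True
```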
So in today's lecture we have covered an introduction to stochastic processes, giving the motivation through four different examples. Then we covered the probability theory knowledge needed: I have covered the probability space, random variables, and the standard discrete random variables as well as the standard continuous random variables. There are some more standard discrete random variables, as well as some more standard continuous random variables, that I have not covered here, because this is a probability theory refresher; if some of those distributions are needed, they will be covered when we explain the stochastic processes themselves. Therefore, having given a few standard discrete random variables and a few standard continuous random variables, I will complete today's lecture. In the next lecture I will cover some of the other probability theory concepts needed for stochastic processes, and from the third lecture onwards I will start on stochastic processes. Thank you.