BS Module 3 Theory

The document provides an overview of probability theory, including definitions of key concepts such as probability, random experiments, events, and sample spaces. It explains different types of probability distributions, including binomial, Poisson, and normal distributions, along with their properties and applications. Additionally, it outlines fundamental rules of probability, such as the rules of addition, multiplication, and subtraction.

Uploaded by

sanithsanju1902
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
9 views9 pages

BS Module 3 Theory

The document provides an overview of probability theory, including definitions of key concepts such as probability, random experiments, events, and sample spaces. It explains different types of probability distributions, including binomial, Poisson, and normal distributions, along with their properties and applications. Additionally, it outlines fundamental rules of probability, such as the rules of addition, multiplication, and subtraction.

Uploaded by

sanithsanju1902
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 9

PROBABILITY DISTRIBUTION MODULE 3

MEANING OF PROBABILITY
Probability is the likelihood that an event will occur. It is calculated by dividing the number of
favourable outcomes by the total number of possible outcomes.
Ex: When a coin is flipped there are only two possible outcomes; the result is either heads or tails.

RANDOM EXPERIMENT
In probability theory, an experiment or trial is any procedure that can be infinitely repeated and has a
well-defined set of possible outcomes, known as the sample space. An experiment is said to be random
if it has more than one possible outcome, and deterministic if it has only one.
Trial
Any particular performance of a random experiment is called a trial. By experiment or trial in the subject
of probability, we mean a random experiment unless otherwise specified.
Each trial results in one or more outcomes.

EVENT or OUTCOME
An event is something that results from, or is caused by, some previous action.
The results, outcomes or observations of an experiment are called events. They are generally
represented by capital letters of the English alphabet. Though we know the possible outcomes of a
random experiment, we cannot predict which of these will occur in a given conduct of the
experiment/trial.
An event is a subset of the sample space.
Events are denoted by A, B, C. For example, when throwing a die,
A = {1, 3, 5} could denote the event "an odd number is rolled".

EXPERIMENT
An operation or an activity which results in a definite outcome is called an experiment.
Ex: Tossing a coin is an experiment, since it shows head (H) or tail (T) on falling.

SAMPLE SPACE
The set of all possible outcomes of a random experiment is the sample space. The sample space is
denoted by S.
The outcomes of the random experiment are called sample points or cases.
Ex: In tossing two coins, each coin shows either Head or Tail.
Hence the sample space is S = {HH, HT, TH, TT}

1 Department of Management Studies, JNNCE, Shivamogga



EXHAUSTIVE CASES / COLLECTIVELY EXHAUSTIVE CASES


The total number of possible outcomes of a random experiment is called the exhaustive cases of the
experiment.

MUTUALLY EXCLUSIVE EVENTS


A set of outcomes is referred to as an event. When two events cannot occur at the same time, the events
are said to be mutually exclusive; otherwise they are not mutually exclusive.
For example, when we flip a coin, either heads or tails can come up, but both cannot be outcomes
simultaneously.

EQUALLY LIKELY EVENTS


If outcomes are equally likely, then the probability of an event occurring is the number of outcomes in
the event divided by the number of outcomes in the sample space.
For example: the probability of rolling a six on a single roll of a die is 1/6, because there is only 1 way
to roll a six out of the 6 ways the die could land.

INDEPENDENT EVENTS
Two events are said to be independent of each other when the probability that one event occurs in no
way affects the probability of the other event occurring.
An example of two independent events: rolling a die and flipping a coin; the die's result does not change
the coin's probabilities.

DEPENDENT EVENTS
If the occurrence of one event does affect the probability of the other occurring, then the events are
dependent.
Ex: Drawing two cards from a deck without replacement; the probability for the second card depends on
which card was drawn first.

RULES OF PROBABILITY

1. RULE OF SUBTRACTION

The probability that event A will occur is equal to 1 minus the probability that event A will not occur:
P(A) = 1 − P(A′)
Example: The probability that James will graduate from college is 0.80. What is the probability that he
will not graduate from college? By the rule of subtraction, the probability that James will not graduate
is 1.00 − 0.80 = 0.20.


2. MULTIPLICATION THEOREM OF PROBABILITY

The rule of multiplication applies when the probability of the intersection of two events is to be known,
that is, the probability that two events (Event A and Event B) both occur.

In general, the probability that Events A and B both occur is equal to the probability that Event A occurs
times the probability that Event B occurs given that A has occurred: P(A ∩ B) = P(A) · P(B|A).

This theorem states that if two events A and B are independent, the probability that they both will occur
is equal to the product of their individual probabilities.

For independent events, P(A ∩ B) = P(A) · P(B)
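As a numeric sketch of the independent-events rule (Python; the die-and-coin pair is only an illustration):

```python
# Multiplication rule for independent events: P(A and B) = P(A) * P(B).
# Rolling a fair die (A = "roll a six") and flipping a fair coin
# (B = "heads") do not influence each other, so the rule applies.
p_a = 1 / 6          # P(roll a six)
p_b = 1 / 2          # P(heads)
p_a_and_b = p_a * p_b
print(p_a_and_b)     # 1/12 ≈ 0.0833...
```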

3. RULE OF ADDITION

The addition theorem of probability states that if two events A and B are mutually exclusive, the
probability of occurrence of either A or B is the sum of the individual probabilities of A and B.

The rule of addition applies when, for two events, the probability that either event occurs is to be
known.

In general, the probability that Event A or Event B occurs is equal to the probability that Event A occurs
plus the probability that Event B occurs minus the probability that both Events A and B occur.

For mutually exclusive events:

P(A ∪ B) = P(A) + P(B)
Similarly, P(A ∪ B ∪ C) = P(A) + P(B) + P(C), and so on.

For events that are not mutually exclusive:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
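The not-mutually-exclusive case can be verified by counting outcomes directly; a minimal Python sketch with illustrative die events:

```python
# Addition rule on one roll of a fair die.
# A = "even number", B = "number greater than 3"; they overlap at {4, 6},
# so the intersection must be subtracted once.
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def prob(event):
    return len(event) / len(sample_space)

p_union = prob(A) + prob(B) - prob(A & B)   # 1/2 + 1/2 - 1/3
print(p_union)                              # 0.666... = 2/3
print(prob(A | B))                          # direct count gives the same value
```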

PROBABILITY DISTRIBUTION

A probability distribution maps out the likelihood of multiple outcomes in a table or an equation. In the
example of flipping a coin, the outcome is heads or tails. But if the coin is flipped twice in a row,
there are four possible outcomes (heads-heads, heads-tails, tails-heads and tails-tails), so
there is a series of potential outcomes, each with its own probability.
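The two-flip coin example can be tabulated directly; a minimal Python sketch counting heads across the four equally likely outcomes:

```python
# Probability distribution of the number of heads in two flips of a
# fair coin. The four equally likely outcomes are HH, HT, TH, TT.
from itertools import product
from collections import Counter

outcomes = list(product("HT", repeat=2))          # [('H','H'), ('H','T'), ...]
counts = Counter(o.count("H") for o in outcomes)  # tally heads per outcome
distribution = {k: v / len(outcomes) for k, v in sorted(counts.items())}
print(distribution)   # {0: 0.25, 1: 0.5, 2: 0.25}
```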

DEFINITIONS

1. Two events are mutually exclusive or disjoint if they cannot occur at the same time.
2. The probability that Event A occurs, given that Event B has occurred, is called a conditional
probability. The conditional probability of Event A, given Event B, is denoted by the symbol
P(A│B).
3. The complement of an event is the event not occurring. The probability that Event A will not
occur is denoted by P (A').


UNIVARIATE PROBABILITY DISTRIBUTIONS


1. The Binomial Distribution
2. Poisson Distribution
3. Normal Distribution

I. BINOMIAL DISTRIBUTION
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete
probability distribution of the number of successes in a sequence of n independent yes/no experiments,
each of which yields success with probability p. A success/failure experiment is also called a Bernoulli
experiment or Bernoulli trial; when n = 1, the binomial distribution is a Bernoulli distribution.
• The binomial distribution is the basis for the popular binomial test of statistical significance.
• The binomial distribution is frequently used to model the number of successes in a sample of size
n drawn with replacement from a population of size N.

The binomial distribution describes the behaviour of a count variable X if the following conditions
apply:
1. The number of observations 'n' is fixed or finite.
2. Each observation is independent.
3. Each observation represents one of two outcomes ("success" or "failure").
4. The probability of "success", 'p', is the same for each observation.
If a group of patients is given a new drug for the relief of a particular condition, then the
proportion p successfully treated can be regarded as estimating the population treatment
success rate π.

The sample proportion p is analogous to the sample mean x̄, in that if we score zero for the s
patients who fail on treatment and unity for the r who succeed, then p = r/n, where n = r + s is the
total number of patients treated. Thus p also represents a mean.

Data which can take only a 0 or 1 response, such as treatment failure or treatment success, follow
the binomial distribution provided the underlying population response rate does not change. The
binomial probabilities are calculated from the following formula. The probability of obtaining
exactly r successes in a given number n of Bernoulli trials is

P(X = r) = nCr · p^r · q^(n−r),  for r = 0, 1, 2, ..., n

where p → probability of success on a single trial
q = 1 − p → probability of failure on a single trial
n → number of (Bernoulli) trials
r → number of successes in n trials.
Mean of the binomial distribution = np
Variance of the binomial distribution (σ²) = npq
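A minimal standard-library sketch of the binomial formula (the coin-toss numbers are only illustrative):

```python
# Binomial probability: P(X = r) = nCr * p**r * q**(n - r).
from math import comb

def binomial_pmf(r, n, p):
    """Probability of exactly r successes in n independent Bernoulli trials."""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

# Example: exactly 3 heads in 5 tosses of a fair coin.
print(binomial_pmf(3, n=5, p=0.5))     # 10 * 0.5**5 = 0.3125

# Sanity checks: the probabilities over r = 0..n sum to 1,
# and the mean works out to np.
n, p = 5, 0.5
probs = [binomial_pmf(r, n, p) for r in range(n + 1)]
mean = sum(r * pr for r, pr in enumerate(probs))
print(sum(probs), mean)                # 1.0 and np = 2.5
```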


[Figure: Binomial distribution curve]

II. POISSON DISTRIBUTION


• The Poisson distribution is a probability distribution of a discrete random variable that stands for
the number (count) of statistically independent events occurring within a unit of time or space.
• The Poisson distribution is actually a limiting case of a binomial distribution when the number of
trials, n, gets very large and p, the probability of success, is small.
• It was derived by the French mathematician Siméon D. Poisson in 1837.

The Poisson distribution arises as a limit under the following conditions:
i) n, the number of trials, is indefinitely large, i.e., n → ∞.
ii) p, the constant probability of success for each trial, is infinitely small, i.e., p → 0.
iii) np = m is finite.
The Poisson distribution is a much better model of binomial probabilities than the normal distribution
when n is large and p is small. The reason is that the normal distribution is symmetric: when p is
small, the distribution is bunched up near 0, and since probabilities are non-negative, the normal
distribution is not a good approximation to binomial probabilities in that case.
Examples of Poisson distribution:
Whether one observes patients arriving at an emergency room, cars driving up to a gas station, decaying
radioactive atoms, bank customers coming to their bank, or shoppers being served at a cash register, the
streams of such events typically follow the Poisson process. The underlying assumption is that the events
are statistically independent and the rate, μ, of these events (the expected number of the events per time
unit) is constant. The list of applications of the Poisson distribution is very long. To name just a few
more:
• The number of soldiers of the Prussian army killed accidentally by horse kick per year
• The number of mutations on a given strand of DNA per time unit
• The number of bankruptcies that are filed in a month
• The number of arrivals at a car wash in one hour


• The number of network failures per day
• The number of file server virus infections at a data center during a 24-hour period
• The number of Airbus 330 aircraft engine shutdowns per 100,000 flight hours
• The number of asthma patient arrivals in a given hour at a walk-in clinic
• The number of hungry persons entering a McDonald's restaurant
• The number of work-related accidents over a given production time
• The number of births, deaths, marriages, divorces, suicides, and homicides over a given
period of time
• The number of customers who call to complain about a service problem per month
• The number of visitors to a Web site per minute
• The number of calls to a consumer hot line in a 5-minute period
• The number of telephone calls per minute in a small business

• The Poisson distribution is used to describe discrete quantitative data, such as counts, in which the
population size n is large, the probability of an individual event π is small, but the expected
number of events, nπ, is moderate (say five or more).
• The Poisson distribution is a discrete probability distribution developed by the French
mathematician S. Poisson. It is used when the chance of any individual event being a success is
small. The distribution is given by the following formula

P(r) = e^(−m) · m^r / r!
where r = 0, 1, 2, 3, 4, .........
e = 2.7183 (the base of natural logarithms)
m = mean of the Poisson distribution = np
(average number of occurrences of an event)
• The Poisson distribution is a discrete distribution with a single parameter 'm'. As 'm' increases,
the distribution shifts to the right. All Poisson probability distributions are skewed to the right.
The Poisson distribution is a distribution of rare events (the probabilities tend to be high for small
numbers of occurrences). Mean = variance = m for a Poisson distribution.
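A minimal standard-library sketch of the Poisson formula (the hot-line rate is only illustrative):

```python
# Poisson probability: P(r) = e**(-m) * m**r / r!.
from math import exp, factorial

def poisson_pmf(r, m):
    """Probability of exactly r occurrences when the mean count is m."""
    return exp(-m) * m**r / factorial(r)

# Example: a hot line averages m = 2 calls per 5-minute period.
print(poisson_pmf(0, m=2))    # P(no calls)   = e**-2 ≈ 0.1353
print(poisson_pmf(2, m=2))    # P(exactly 2)  ≈ 0.2707

# Mean and variance both equal m (summing enough terms numerically):
m = 2
mean = sum(r * poisson_pmf(r, m) for r in range(100))
var = sum((r - mean) ** 2 * poisson_pmf(r, m) for r in range(100))
print(round(mean, 6), round(var, 6))   # 2.0 2.0
```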


III. NORMAL DISTRIBUTION


In probability theory, the normal (or Gaussian) distribution is a very common continuous probability
distribution. Normal distributions are important in statistics and are often used in the natural and social
sciences to represent real-valued random variables whose distributions are not known.
The normal distribution is useful because of the central limit theorem. In its most general form, under
some conditions (which include finite variance), it states that averages of random variables
independently drawn from the same distribution converge in distribution to the normal, that is, become
normally distributed when the number of random variables is sufficiently large. Physical quantities that are
expected to be the sum of many independent processes (such as measurement errors) often have
distributions that are nearly normal. Moreover, many results and methods (such as propagation of
uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the
relevant variables are normally distributed.
The normal distribution is sometimes informally called the bell curve. However, many other
distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions). The terms
Gaussian function and Gaussian bell curve are also ambiguous because they sometimes refer to multiples
of the normal distribution that cannot be directly interpreted in terms of probabilities.

THE STANDARD NORMAL DISTRIBUTION


The standard normal distribution is a normal distribution with a mean of zero and a standard deviation
of one. The standard normal distribution is centred at zero, and the degree to which a given measurement
deviates from the mean is given by the standard deviation. For the standard normal distribution, 68% of
the observations lie within 1 standard deviation of the mean; 95% lie within two standard deviations of
the mean; and 99.7% lie within 3 standard deviations of the mean. To this point, we have been using "X"
to denote the variable of interest (e.g., X=BMI, X=height, X=weight). However, when using a standard
normal distribution, we will use "Z" to refer to a variable in the context of a standard normal distribution.
Since the area under the standard curve = 1, we can begin to more precisely define the probabilities of
specific observations. For any given Z-score we can compute the area under the curve to the left of that
Z-score. A standard normal table lists these probabilities. Note that a Z score of 0.0 lists a probability
of 0.50 or 50%, and a Z score of 1, meaning one standard deviation above the mean, lists a probability
of 0.8413 or 84%. That is because one standard deviation above and below the mean encompasses about
68% of the area, so one standard deviation above the mean represents half of that, or 34%. So, the 50%
below the mean plus the 34% above the mean gives us 84%. The following formula converts an X value
into a Z score, also called a standardized score:

Z = (X − μ) / σ


where μ is the mean and σ is the standard deviation of the variable X.
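The conversion and the table lookup can be reproduced with the standard library; the height figures below are hypothetical:

```python
# Z = (X - mu) / sigma, and the area to the left of Z via the standard
# normal CDF (built from math.erf, standard library only).
from math import erf, sqrt

def z_score(x, mu, sigma):
    return (x - mu) / sigma

def phi(z):
    """Area under the standard normal curve to the left of z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical data: heights with mu = 170 cm, sigma = 10 cm.
z = z_score(180, mu=170, sigma=10)    # one standard deviation above the mean
print(round(phi(z), 4))               # 0.8413, matching the Z table

# The 68 / 95 / 99.7 rule falls out of the same CDF:
print(round(phi(1) - phi(-1), 4))     # ≈ 0.6827
print(round(phi(2) - phi(-2), 4))     # ≈ 0.9545
print(round(phi(3) - phi(-3), 4))     # ≈ 0.9973
```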

PROPERTIES OF NORMAL DISTRIBUTION

a. The normal curve is 'bell-shaped' and symmetrical in its appearance. If the curve were folded along
its vertical axis, the two halves would coincide. The number of cases below the mean is equal to that
above the mean.

b. The height of the normal curve is at its maximum at the mean. Hence, the mean and mode of the
normal distribution coincide. Thus, for a normal distribution, mean = mode = median.

c. There is one maximum point on the curve, which occurs at the mean. The height of the curve declines
as we go in either direction from the mean. The curve approaches nearer and nearer to the base but
never touches it, i.e., the curve is asymptotic.

d. Since there is only one maximum point, the normal curve is unimodal, i.e., it has only one mode.

e. The points of inflexion, i.e., where the change in curvature occurs, are at μ ± σ.

f. The first and third quartiles are equidistant from the median.

g. The variable distributed according to the normal curve is a continuous one.

h. The area under the normal curve is distributed as follows,


a) Mean ± 1σ covers 68.27% area.
b) Mean ± 2 σ covers 95.45% area.
c) Mean ± 3 σ covers 99.73% area.

It is often the case with medical data that the histogram of a continuous variable obtained from a single
measurement on different subjects will have a characteristic bell-shaped distribution known as a Normal
distribution. One such example is the histogram of the birth weight in kilograms of 3,226 newborn
babies.


BAYES' THEOREM

In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) describes
the probability of an event, based on prior knowledge of conditions that might be related to the event.
For example, if cancer is related to age, then, using Bayes' theorem, a person's age can be used to more
accurately assess the probability that they have cancer, compared to an assessment of the probability of
cancer made without knowledge of the person's age.

One of the many applications of Bayes' theorem is Bayesian inference, a particular approach to
statistical inference. When applied, the probabilities involved in Bayes' theorem may have different
probability interpretations. With the Bayesian probability interpretation, the theorem expresses how a
subjective degree of belief should rationally change to account for the availability of related evidence.
Bayesian inference is fundamental to Bayesian statistics.

Bayes' theorem is named after Rev. Thomas Bayes (1701–1761), who first provided an equation
that allows new evidence to update beliefs. It was further developed by Pierre-Simon Laplace, who first
published the modern formulation in his 1812 "Théorie analytique des probabilités". Sir Harold Jeffreys
put Bayes' algorithm and Laplace's formulation on an axiomatic basis. Jeffreys wrote that Bayes'
theorem "is to the theory of probability what the Pythagorean theorem is to geometry".

Bayes' theorem is stated mathematically as the following equation:

P(A | B) = P(B | A) · P(A) / P(B)

where A and B are events and P(B) ≠ 0.


• P(A) and P(B) are the probabilities of observing A and B without regard to each other.
• P(A | B), a conditional probability, is the probability of observing event A given that B is true.
• P(B | A) is the probability of observing event B given that A is true.
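A minimal numeric sketch of the theorem (the screening-test rates are made up for illustration):

```python
# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B).
def bayes(p_b_given_a, p_a, p_b):
    return p_b_given_a * p_a / p_b

# Hypothetical screening test: 1% of people have a condition (A);
# the test is positive (B) for 90% of those who have it and for
# 5% of those who do not (false positives).
p_a = 0.01
p_b_given_a = 0.90
p_b = p_b_given_a * p_a + 0.05 * (1 - p_a)     # total probability of B
posterior = bayes(p_b_given_a, p_a, p_b)
print(round(posterior, 4))                     # ≈ 0.1538
```

Note how the posterior (about 15%) is far below the test's 90% detection rate: the low prior P(A) dominates, which is exactly the kind of update Bayes' theorem captures.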

