# Mutually Exclusive Events in Statistics

These are my notes on mutually exclusive events in statistics.

**Mutually Exclusive Events**

Independence and mutual exclusivity are two relationships that often get

mixed up. Independence describes events that do not affect each

other's probability, while mutual exclusivity describes events with no shared

outcomes. Though these two relationships mean very different things, they are

linked in a way. Suppose we flip a coin. The events heads and tails are clearly

mutually exclusive. If you get one, you cannot get the other at the same time.

That means if you get heads for the flip, you have no chance at getting tails on

the same flip. In other words, since there is no shared outcome, getting one

outcome makes it impossible to get the other. What this implies is that

mutually exclusive events (with nonzero probabilities) are never independent,

which in turn means that independent events (with nonzero probabilities) can

never be mutually exclusive.

**Probability Rules**

Complements: The probability of the complement of an event A is given by

\(P(A') = 1-P(A)\)

Union: Addition rule where the probability of the union of two events A and B is

given by \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\). If the events A and B are

disjoint, then \(P(A \cap B)=0\) and \(P(A \cup B)=P(A)+P(B)\).

Intersection multiplication rule: For events A and B defined in a sample space

S, \(P(A \cap B)=P(A)*P(B|A)=P(B)*P(A|B)\). If the events A and B are independent

of each other, then \(P(A|B)=P(A)\), \(P(B|A)=P(B)\), and \(P(A \cap B)=P(A)*P(B)\).

Conditional Probability (Bayes' Theorem): The probability of A given B is

\(P(A|B)=\frac{P(A \cap B)}{P(B)}\).

Independence: Two events A and B are independent if and only if

\(P(A \cap B)=P(A)*P(B)\).
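These rules can be checked numerically on a small sample space. Below is a minimal sketch in Python for one roll of a fair six-sided die; the events A, B, and C are illustrative choices, not from the notes:

```python
from fractions import Fraction

# Sample space: one roll of a fair six-sided die
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}        # roll is even
B = {1, 2, 3, 4}     # roll is at most 4
C = {1, 3, 5}        # roll is odd (mutually exclusive with A)

def P(event):
    """Probability of an event: favorable outcomes over total outcomes."""
    return Fraction(len(event), len(S))

# Complement rule: P(A') = 1 - P(A)
assert P(S - A) == 1 - P(A)

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
assert P(A | B) == P(A) + P(B) - P(A & B)

# Multiplication rule: P(A and B) = P(B) * P(A|B)
assert P(A & B) == P(B) * (P(A & B) / P(B))

# A and B are independent: P(A and B) = P(A) * P(B) holds
print(P(A & B) == P(A) * P(B))   # True: 1/3 = 1/2 * 2/3

# A and C are mutually exclusive, so they fail the independence test
print(P(A & C) == P(A) * P(C))   # False: 0 != 1/2 * 1/2
```

Note how the disjoint pair A and C fails the independence check: knowing the roll is even rules out it being odd, exactly as argued in the coin-flip example.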

**Random Variables and their Probability**

A variable is a quantity whose value varies from subject to subject.

A probability experiment is an experiment whose possible outcomes may be known

but whose exact outcome is a random event and cannot be predicted with certainty

in advance. If the outcome of a probability experiment takes a numerical value,

then the outcome is a random variable. Random variables are usually denoted

using capital letters. Sometimes two or more variables are denoted using the

same letter but different subscripts.

There are two types of random variables, discrete and continuous.

A discrete random variable is a quantitative variable that takes a countable

number of values.

Note that between any two possible values of a discrete random variable, there is a countable number of possible values.

A continuous random variable is a quantitative variable that can take all the

possible values in a given range. A person's weight is a good example. A person

can weigh 150 pounds or 155 pounds or anything in between. Other examples are:

altitude of a plane, amount of rainfall in a city in a day, amount of gasoline

pumped into a car's gas tank, weight of a newborn baby, or the amount of water

flowing through a dam per hour.

**Probability Distributions of Discrete Random Variables**

A probability distribution of a discrete random variable or a discrete

probability distribution is a table, list, graph, or formula giving all possible

values taken by a random variable and their corresponding probabilities.

**Mean of a Discrete Random Variable**

The mean \(\mu\) of a discrete random variable X is also known as the expected

value. It is denoted by \(E(X)\) and is computed by multiplying each value of

the random variable by its probability and then adding over the sample space.

\(\mu_{X} = E(X) = \sum x_{i} P(x_{i})\)

The variance of a discrete random variable is defined as the sum of the product

of squared deviations of the values of the variable from the mean and the

corresponding probabilities:

\(\sigma^2 = \sum(x_{i}-\mu)^2 P(x_{i})\)
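The mean and variance formulas can be sketched in a few lines of Python; the distribution below is a made-up example for illustration:

```python
# Mean (expected value) and variance of a discrete random variable,
# using a hypothetical four-value distribution.
xs = [0, 1, 2, 3]            # values the variable can take
ps = [0.1, 0.3, 0.4, 0.2]    # their probabilities

assert abs(sum(ps) - 1.0) < 1e-9   # a valid distribution sums to 1

# mu = E(X) = sum of x_i * P(x_i)
mu = sum(x * p for x, p in zip(xs, ps))

# sigma^2 = sum of (x_i - mu)^2 * P(x_i)
var = sum((x - mu) ** 2 * p for x, p in zip(xs, ps))

# Standard deviation is the square root of the variance
sd = var ** 0.5

print(round(mu, 2), round(var, 2))  # 1.7 0.81
```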

Remember that standard deviation is simply the square root of variance. Standard

deviation is our expected value for how much any given data point will vary from

the mean.

**Combinations**

A combination is the number of ways r items can be selected out of n items if

the order of selection is not important. It is denoted by \(\binom{n}{r}\), which

reads as "n choose r", and is computed as \(\binom{n}{r} = \frac{n!}{r!(n-r)!}\).

For any integer \(n\geq0,n!\) is read as "n factorial" and is computed as \(n! =

n(n-1)(n-2)(n-3)...(3)(2)(1)\). For example, \(3! = (3)(2)(1) = 6\) and \(5! =

(5)(4)(3)(2)(1) = 120\). Note that \(0!=1\) and \(1!=1\).
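Both the factorial and the combination formula are available in Python's standard library, which makes the examples above easy to verify:

```python
from math import comb, factorial

# n! examples from the text
print(factorial(3))   # 6
print(factorial(5))   # 120
print(factorial(0))   # 1

# "n choose r" = n! / (r! * (n - r)!)
def n_choose_r(n, r):
    return factorial(n) // (factorial(r) * factorial(n - r))

# math.comb implements the same formula directly
assert n_choose_r(5, 2) == comb(5, 2) == 10
```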

Why do we use combinations? We do not care when in the sequence our x successes

occur. We just want there to be x successes out of n trials. There are

\(\binom{n}{x}\) ways to get x successes out of n trials.

\(p^x\) is the probability of getting x successes. If the probability of getting one

success is p, then the probability of getting two successes is p*p. Similarly,

(1-p) must be the probability of not getting a success. The probability of

getting two failures is (1-p)*(1-p).

\(P(A\text{ and }B) = P(A)*P(B)\) if A and B are independent.

**Binomial Distributions**

One example of a distribution of discrete random variables is the binomial

distribution. A binomial distribution occurs in an experiment that possesses the

following properties:

1. There are n repeated trials, with n fixed in advance.

2. Each trial has two possible outcomes, known as success and failure.

3. All trials are identical and independent, thus the probability for success

remains the same for each trial.

The binomial variable X:

X = the number of successes in n trials = 0,1,2,...n

\(P(X=x)=\binom{n}{x}p^x(1-p)^{n-x}\)

Mean of a binomial random variable (how many times do you expect to succeed?)

\(\mu=np\)

Variance of a binomial random variable (how much do you expect your number of

successes to vary from sample to sample):

\(\sigma^2 = np(1-p)\)
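The binomial formula, mean, and variance can be put together in a short sketch. The numbers here (n = 20, p = 0.1, echoing the quality-control example below) are illustrative assumptions:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = (n choose x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Hypothetical example: n = 20 items inspected, p = 0.1 chance each is defective
n, p = 20, 0.1

# The probabilities over x = 0, 1, ..., n sum to 1
assert abs(sum(binomial_pmf(x, n, p) for x in range(n + 1)) - 1.0) < 1e-9

mu = n * p              # mean: expected number of defectives
var = n * p * (1 - p)   # variance
print(mu, round(var, 6))  # 2.0 1.8
```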

Some examples of binomial random variables:

1. A quality control inspector takes a random sample of 20 items from a large

lot, inspects each item, classifies each as defective or nondefective, and

counts the number of defective items in the sample.

2. A telephone survey asks 400 area residents, selected at random, whether they

support the new gasoline tax increase. The answers are recorded as yes or no.

The number of persons answering yes is counted.

3. A random sample of families is taken, and for each family with three

children, the number of girls out of the three children is recorded.

4. A certain medical procedure is performed on 15 patients who are not related

to each other. The number of successful procedures is counted.

5. A homeowner buys 20 azalea plants from a nursery. The number of plants that

survive at the end of the year is counted.

The shape of the binomial distribution depends on the values of n and p. The

distribution spreads from 0 to n.

**Geometric Distribution**

Another example of a distribution of discrete random variables is the geometric

distribution. The geometric distribution occurs in an experiment where repeated

trials possess the following properties:

1. There are repeated trials; the number of trials is not fixed in advance.

2. Each trial has two possible outcomes, success or failure.

3. Trials are repeated until the first success is observed.

4. All trials are identical and independent, thus the probability for success

remains the same for each trial.

The geometric random variable X:

X = the number of trials required to obtain the first success = 1, 2, 3, ...

P(x trials are needed to obtain the first success) =

\((1-p)^{x-1}p\).

Mean of the geometric random variable: \(\mu = E(X) = \frac{1}{p}\)

Variance of the geometric random variable: \(\sigma^2 = Var(X)=\frac{1-p}{p^2}\)
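These formulas can be sketched the same way as the binomial ones. The success probability p = 0.2 is an illustrative assumption (think of the pearl-hunting example below):

```python
def geometric_pmf(x, p):
    """P(first success occurs on trial x) = (1 - p)^(x - 1) * p, x = 1, 2, ..."""
    return (1 - p) ** (x - 1) * p

# Hypothetical example: p = 0.2 chance of finding a pearl in each oyster
p = 0.2

# The probabilities over x = 1, 2, ... sum to 1 (truncated sum is very close)
total = sum(geometric_pmf(x, p) for x in range(1, 200))
assert abs(total - 1.0) < 1e-9

mu = 1 / p               # mean: expect 5 oysters until the first pearl
var = (1 - p) / p**2     # variance
print(mu, round(var, 6))  # 5.0 20.0
```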

Some examples of the geometric random variable:

1. A worker opening oysters to look for pearls counts the number of oysters he

has to open until he finds the first pearl.

2. A supervisor at the end of an assembly line counts the number of nondefective

items produced until he finds the first defective one.

3. An electrician inspecting cable one yard at a time for defects counts the

number of yards he inspects before he finds a defect.

You can think about the mean of a geometric random variable intuitively. If p

gets bigger, the mean number of trials until the first success goes down. If

something happens often, it is very unlikely that you will have to wait very

long for it to occur.
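This intuition about the mean is easy to check with a quick simulation; p = 0.25 and the number of runs are arbitrary choices for illustration:

```python
import random

# Simulate waiting for the first success: the average number of trials
# should be close to 1/p.
random.seed(42)   # fixed seed so the run is reproducible
p = 0.25

def trials_until_success(p):
    count = 1
    while random.random() >= p:   # failure, so keep trying
        count += 1
    return count

runs = [trials_until_success(p) for _ in range(100_000)]
avg = sum(runs) / len(runs)
print(avg)  # close to 1/p = 4
```

Raising p and rerunning shows the average wait shrink, matching the intuition that frequent events are not waited on for long.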

In the binomial distribution, the number of trials is fixed, and the number of

successes is a random event. In the geometric distribution, the number of

successes is fixed at one, but the number of trials required to get that success is a

random event.