Independence

One of the most important concepts in probability theory is that of “independence.” The events A and B are said to be (stochastically) independent if P(B|A) = P(B), or equivalently if

P(A ∩ B) = P(A)P(B).     (7)
The intuitive meaning of the definition in terms of conditional probabilities is that the probability of B is not changed by knowing that A has occurred. Equation (7) shows that the definition is symmetric in A and B.
It is intuitively clear that, in drawing two balls with replacement from an urn containing r red and b black balls, the event “red ball on the first draw” and the event “red ball on the second draw” are independent. (This statement presupposes that the balls are thoroughly mixed before each draw.) An analysis of the (r + b)² equally likely outcomes of the experiment shows that the formal definition is indeed satisfied.
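As a quick check, here is a minimal Python sketch that enumerates all (r + b)² equally likely ordered outcomes of the two draws and verifies equation (7); the counts r = 3 and b = 2 are illustrative values, not taken from the text:

```python
from fractions import Fraction

# Illustrative counts (not from the text): r red and b black balls.
r, b = 3, 2
balls = ["red"] * r + ["black"] * b

# Enumerate all (r + b)^2 equally likely ordered outcomes of two
# draws with replacement.
outcomes = [(first, second) for first in balls for second in balls]
total = len(outcomes)  # (r + b) ** 2

p_A = Fraction(sum(1 for o in outcomes if o[0] == "red"), total)  # red on draw 1
p_B = Fraction(sum(1 for o in outcomes if o[1] == "red"), total)  # red on draw 2
p_AB = Fraction(sum(1 for o in outcomes if o == ("red", "red")), total)

assert p_AB == p_A * p_B   # equation (7): P(A ∩ B) = P(A)P(B)
assert p_AB / p_A == p_B   # equivalently, P(B|A) = P(B)
print(p_A, p_B, p_AB)      # 3/5 3/5 9/25
```

The same enumeration with the replacement step removed would fail both assertions, since drawing without replacement makes the second draw depend on the first.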

In terms of the concept of independence, the experiment leading to the binomial distribution can be described as follows. On a single trial a particular event has probability p. An experiment consists of n independent repetitions of this trial. The probability that the particular event occurs exactly i times is given by equation (3).
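Equation (3) is presumably the standard binomial formula C(n, i) pⁱ(1 − p)ⁿ⁻ⁱ; a short sketch computing it, with illustrative values n = 10 and p = 0.5 that are not from the text:

```python
from math import comb

def binomial_pmf(n: int, i: int, p: float) -> float:
    """Probability that the event occurs exactly i times in n
    independent trials, each with probability p (equation (3))."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Illustrative values: n = 10 independent trials with p = 0.5.
n, p = 10, 0.5
probs = [binomial_pmf(n, i, p) for i in range(n + 1)]
assert abs(sum(probs) - 1.0) < 1e-12  # the probabilities sum to 1
print(probs[5])                       # P(exactly 5 occurrences) = 0.24609375
```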
Independence plays a central role in the law of large numbers, the central limit theorem, the Poisson distribution, and Brownian motion.
