Random Variables

Set theory :

A set is a collection of well defined objects, either concrete or abstract. In the study of probability we are particularly interested in the set of all outcomes of a random experiment and its subsets.

Definition 1 (Union of sets) :

The Union of two sets $E$ and $F$ is the set of all elements that are in at least one of the sets $E$ or $F$.

\begin{align*} E \cup F = \{ x : x \in E \text{ or } x \in F\} \end{align*}

Definition 2 (Intersection of sets) :

The Intersection of two sets $E$ and $F$ is the set of all elements that are common to both sets $E$ and $F$.

\begin{align*} E \cap F = \{ x : x \in E \text{ and } x \in F\} \end{align*}

Definition 3 (Complement of a set) :

The Complement of a set $E$ with respect to a universe $\Omega$ is the set of all elements of $\Omega$ that are not in $E$.

\begin{align*} E^{C} = \{ x : x \in \Omega \text{ and } x \notin E\} \end{align*} where $\Omega$ is the universe.

Definition 4 (Difference of two sets) :

The difference of two sets $E$ and $F$ is the set of all elements that are in $E$ but not in $F$. It is denoted by $E - F$ or $E \setminus F$.

\begin{align*} E \setminus F = \{ x : x \in E \text{ and } x \notin F\} \end{align*}

Definition 5 (Exclusive-or of two sets) :

The Exclusive-or of two sets $E$ and $F$ is the set of all elements that are either in $E$ or in $F$ but not in both. It is defined as \begin{align*} E \oplus F = (E-F) \cup (F-E) \end{align*}
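
The definitions above map directly onto Python's built-in set type. Below is a minimal illustrative sketch; the universe Omega and the sets E and F are arbitrary choices, not part of the definitions themselves.

```python
# Illustrative example of the set operations defined above, using Python's built-in sets.
Omega = {1, 2, 3, 4, 5, 6}   # an arbitrary universe
E = {1, 2, 3}
F = {2, 4, 6}

union        = E | F                 # E ∪ F = {1, 2, 3, 4, 6}
intersection = E & F                 # E ∩ F = {2}
complement_E = Omega - E             # E^C relative to Omega = {4, 5, 6}
difference   = E - F                 # E \ F = {1, 3}
xor          = (E - F) | (F - E)     # E ⊕ F, the exclusive-or
assert xor == E ^ F                  # Python's symmetric-difference operator agrees
print(union, intersection, complement_E, difference, xor)
```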

Definition 6 (DeMorgan's laws) :

For any two sets $E$ and $F$,

  • $(E \cup F)^{C} = E^{C} \cap F^{C}$

  • $(E \cap F)^{C} = E^{C} \cup F^{C}$
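
As a quick sanity check, De Morgan's laws can be verified by brute force over every pair of subsets of a small, arbitrarily chosen universe:

```python
# Verify De Morgan's laws for every pair of subsets of a small universe.
from itertools import chain, combinations

Omega = {1, 2, 3, 4}
subsets = [set(s) for s in chain.from_iterable(combinations(Omega, r) for r in range(len(Omega) + 1))]

for E in subsets:
    for F in subsets:
        assert Omega - (E | F) == (Omega - E) & (Omega - F)   # (E ∪ F)^C = E^C ∩ F^C
        assert Omega - (E & F) == (Omega - E) | (Omega - F)   # (E ∩ F)^C = E^C ∪ F^C
print("De Morgan's laws hold for all", len(subsets) ** 2, "pairs")
```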

Definition 7 (Partition of a set) :

Given any set $E$, an $n$-partition of $E$ consists of a sequence of sets $E_i$, $i = 1, 2, 3, \cdots, n$, such that \begin{align*} \bigcup_{i=1}^{n} E_i = E \quad \text{and} \quad E_i \cap E_j = \phi \quad \forall i \neq j \end{align*}
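
Whether a collection of sets forms a partition can be checked mechanically: the pieces must cover $E$ and be pairwise disjoint. A minimal sketch (the helper name is_partition and the example pieces are illustrative choices):

```python
from itertools import combinations

def is_partition(pieces, E):
    """Return True if `pieces` is a partition of the set E."""
    covers = set().union(*pieces) == set(E)                               # union of pieces is E
    pairwise_disjoint = all(A.isdisjoint(B) for A, B in combinations(pieces, 2))
    return covers and pairwise_disjoint

E = {1, 2, 3, 4, 5, 6}
print(is_partition([{1, 2}, {3, 4}, {5, 6}], E))   # True
print(is_partition([{1, 2, 3}, {3, 4, 5, 6}], E))  # False: the pieces overlap at 3
```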

Definition 8 (Equality of sets) :

Two sets $E$ and $F$ are said to be equal if every element of $E$ is in $F$ and vice versa. \begin{align*} E = F \text{ if } E \subseteq F \text{ and } F \subseteq E \end{align*}

Definition 9 (Disjoint sets) :

Two sets $E$ and $F$ are said to be disjoint if $E \cap F = \phi$.

Definition 10 (Subset of a set) :

A set $A$ is called a subset of a set $B$, denoted $A \subseteq B$, if every element of $A$ is also an element of $B$. Formally, this can be written as:

\begin{align*} A \subseteq B \iff \forall x (x \in A \rightarrow x \in B) \end{align*}

where $\forall$ denotes "for all" and $\rightarrow$ denotes "implies".

If $A$ is a subset of $B$ but $A$ is not equal to $B$, then $A$ is called a proper subset of $B$, denoted $A \subset B$. This can be formally written as:

\begin{align*} A \subset B \iff (A \subseteq B \wedge A \neq B) \end{align*}

where $\wedge$ denotes "and".
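
Python's set comparison operators mirror Definitions 8, 9, and 10 directly; a brief illustration with arbitrary sets:

```python
A = {1, 2}
B = {1, 2, 3}

print(A <= B)                 # subset:        A ⊆ B  -> True
print(A <  B)                 # proper subset: A ⊂ B  -> True
print(A == {2, 1})            # equality: same elements regardless of order -> True
print(A.isdisjoint({4, 5}))   # disjoint: A ∩ {4, 5} is empty -> True
```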

Probability :

Probability theory is a mathematical framework that allows us to describe and analyze a random experiment whose outcome we cannot predict with certainty. It helps us quantify how likely or unlikely an event of interest is to occur. Let $A$ be an event whose chance of occurring is $p$. The occurrence or non-occurrence of $A$ depends upon a chain of circumstances; this chain is called an experiment or trial, and the result of the experiment is called its outcome.

Any experiment involving randomness can be modelled as a probability space. A probability space is a mathematical model of a random experiment. The space comprises

  • $\Omega$ (Sample space) : Set of possible outcomes of the experiment
  • $\mathcal{F}$ (Sigma algebra) : Set of events
  • $\mathbb{P}$ (Probability measure)

Definition 1 (Sample space) :

The set of all possible outcomes of an experiment is called the sample space, denoted by $\Omega$.

Example 1 : In the scenario of a coin being tossed, $\Omega = \{ H, T\}$

Example 2 : In the scenario of a die being rolled, $\Omega = \{ 1, 2, 3, 4, 5, 6\}$

An event can be defined as a subset of the appropriate sample space $\Omega$. If $\Omega = \{ H, T\}$, then an event $A$ can be $\{H\}$ or $\{H\}^{C}$ or $\{H\} \cap \{T\}$; if instead $\Omega = \{ 1, 2, 3, 4, 5, 6 \}$, then $A$ can be $\{2, 4, 6\}$ or $\{1, 2, 3\}$ or $\{2\}^{C}$.

  • $\phi$ is said to be the impossible event
  • $\Omega$ is said to be the certain event, since some member of $\Omega$ will certainly occur.

Not all subsets of $\Omega$ need be events.

Definition 2 (Field / Sigma Algebra) :

A collection of events, i.e., a subcollection $\mathcal{F}$ of the set of all subsets of $\Omega$, satisfying the following properties

  1. If $A, B \in \mathcal{F}$, then $A \cup B \in \mathcal{F}$
  2. If $A \in \mathcal{F}$, then $A^{C} \in \mathcal{F}$
  3. $\phi \in \mathcal{F}$

is called a Field. From the definition of a Field, if $A_1, A_2, \cdots, A_n \in \mathcal{F}$, then $\bigcup_{i=1}^{n}A_i \in \mathcal{F}$, so $\mathcal{F}$ is closed under finite unions; by property (2) and De Morgan's laws it is closed under finite intersections as well.

When $\Omega$ is infinite, we define a $\sigma$-field or $\sigma$-algebra $\mathcal{F}$ by modifying (1) as

  • If $A_1, A_2, \cdots \in \mathcal{F}$, then $\bigcup_{i=1}^{\infty}A_i \in \mathcal{F}$

Every experiment is associated with a pair $(\Omega, \mathcal{F})$. We call $A$ an event of the experiment if $A \in \mathcal{F}$.
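
For a finite $\Omega$ the $\sigma$-algebra conditions coincide with the field conditions and can be checked by brute force. The sketch below (the helper name is_field is an illustrative choice) verifies the three properties for a candidate collection of events:

```python
from itertools import combinations

def is_field(F, Omega):
    """Check the field axioms for a collection F of subsets of a finite Omega."""
    F = {frozenset(A) for A in F}
    Omega = frozenset(Omega)
    has_empty = frozenset() in F                                    # property 3
    closed_complement = all(Omega - A in F for A in F)              # property 2
    closed_union = all(A | B in F for A, B in combinations(F, 2))   # property 1
    return has_empty and closed_complement and closed_union

Omega = {"H", "T"}
print(is_field([set(), {"H"}, {"T"}, {"H", "T"}], Omega))  # True: the power set of Omega
print(is_field([set(), {"H"}], Omega))                     # False: {"H"}^C = {"T"} is missing
```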

Definition 3 (Probability measure) :

A probability measure $\mathbb{P}$ on $(\Omega, \mathcal{F})$ is a function $\mathbb{P} : \mathcal{F} \to [0, 1]$ satisfying

  1. $\mathbb{P}(\Omega) = 1$
  2. If $A_1, A_2, \cdots$ is a collection of disjoint members of $\mathcal{F}$, that is, $A_i \cap A_j = \phi$ for all pairs $i, j$ satisfying $i \neq j$, then

\begin{align*} \mathbb{P} \left( \bigcup_{i = 1}^{\infty} A_i \right) = \sum_{i = 1}^{\infty} \mathbb{P}(A_i) \end{align*}

This triple $(\Omega, \mathcal{F}, \mathbb{P})$ is called a probability space.

Example 3 : A coin, possibly biased, is tossed once. We can take $\Omega = \{H, T\}$ and $\mathcal{F} = \{\phi, \{H\}, \{T\}, \Omega\}$. A possible probability measure $\mathbb{P} : \mathcal{F} \to [0, 1]$ is given by

  • $\mathbb{P}(\phi) = 0$
  • $\mathbb{P}(\Omega) = 1$
  • $\mathbb{P}(\{H\}) = p$
  • $\mathbb{P}(\{T\}) = 1-p$, where $p \in [0,1]$. If $p = 0.5$, then we say that the coin is fair.
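
This measure can be written out explicitly and checked against Definition 3. The sketch below uses an arbitrary bias p = 0.3 and represents events as frozensets:

```python
p = 0.3  # arbitrary bias; p = 0.5 would be a fair coin

# The probability measure of Example 3, with events as frozensets.
P = {
    frozenset():           0.0,     # impossible event
    frozenset({"H"}):      p,
    frozenset({"T"}):      1 - p,
    frozenset({"H", "T"}): 1.0,     # certain event Omega
}

# Axiom 1: P(Omega) = 1
assert P[frozenset({"H", "T"})] == 1.0
# Axiom 2 (finite additivity here): {H} and {T} are disjoint and their union is Omega
assert abs(P[frozenset({"H"})] + P[frozenset({"T"})] - P[frozenset({"H", "T"})]) < 1e-12
print("valid probability measure for p =", p)
```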

Important properties of a typical probability space :

  • $\mathbb{P}(A^{C}) = 1 - \mathbb{P}(A)$
  • If $A \subseteq B$, then $\mathbb{P}(B) = \mathbb{P}(A) + \mathbb{P}(B \setminus A) \geq \mathbb{P}(A)$
  • $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$
  • More generally, if $A_1, A_2, \cdots, A_n$ are events, then

\begin{align*} \mathbb{P}\left( \bigcup_{i=1}^{n} A_i \right) = \sum_{i} \mathbb{P}(A_i) - \sum_{i<j}\mathbb{P}(A_i \cap A_j) + \sum_{i<j<k}\mathbb{P}(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n+1} \mathbb{P}(A_1 \cap A_2 \cap A_3 \cdots \cap A_n) \end{align*} where, for example, $\sum_{i<j}$ sums over all unordered pairs $(i,j)$ with $i \neq j$.
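
The inclusion-exclusion formula can be verified numerically on a small example; the sketch below assumes a fair die (all outcomes equally likely) and three arbitrarily chosen events:

```python
from fractions import Fraction
from itertools import combinations

Omega = {1, 2, 3, 4, 5, 6}
prob = lambda A: Fraction(len(A), len(Omega))   # equally likely outcomes

A1, A2, A3 = {1, 2, 3}, {2, 4, 6}, {3, 6}       # three arbitrary events
events = [A1, A2, A3]

# Left-hand side: P(A1 ∪ A2 ∪ A3)
lhs = prob(A1 | A2 | A3)

# Right-hand side: alternating sum over all non-empty subcollections of events
rhs = Fraction(0)
for r in range(1, len(events) + 1):
    for combo in combinations(events, r):
        inter = set(Omega)
        for A in combo:
            inter &= A
        rhs += (-1) ** (r + 1) * prob(inter)

print(lhs, rhs, lhs == rhs)   # both sides equal 5/6
```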

An event $A$ is called a null event if $\mathbb{P}(A) = 0$; if $\mathbb{P}(A) = 1$, we say that the event $A$ occurs almost surely. Null events should not be confused with the impossible event $\phi$: the impossible event is null, but null events need not be impossible.

Definition 4 (Conditional Probability) :

If $\mathbb{P}(B) > 0$, then the conditional probability that $A$ occurs given that $B$ occurs is defined as \begin{align*} \mathbb{P}(A|B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)} \end{align*}
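
A small worked example, computed directly from this definition for a fair die (the events A and B are arbitrary choices):

```python
from fractions import Fraction

Omega = {1, 2, 3, 4, 5, 6}
prob = lambda E: Fraction(len(E), len(Omega))   # fair die: equally likely outcomes

A = {6}               # "roll a six"
B = {2, 4, 6}         # "roll an even number"

P_A_given_B = prob(A & B) / prob(B)   # P(A|B) = P(A ∩ B) / P(B)
print(P_A_given_B)                    # 1/3: knowing the roll is even raises P(A) from 1/6 to 1/3
```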

Independence :

In general, the occurrence of some event $B$ changes the probability that another event $A$ occurs, the original probability $\mathbb{P}(A)$ being replaced by $\mathbb{P}(A|B)$. If the original probability remains unchanged, then we say that the two events $A$ and $B$ are independent:

\begin{align*} \mathbb{P}(A|B) = \mathbb{P}(A) \end{align*}

Definition 5 (Independence) :

Events $A$ and $B$ are called independent if $\mathbb{P}(A \cap B) = \mathbb{P}(A) \, \mathbb{P}(B)$. More generally, a family of events $\{A_i : i \in I\}$ is independent if \begin{align*} \mathbb{P}\left(\bigcap_{i \in J} A_i\right) = \prod_{i \in J} \mathbb{P}(A_i), \end{align*} for all finite subsets $J$ of $I$.

Common mistake : If $A$ and $B$ are independent, one might assume that $A \cap B = \phi$. This is the case when $A$ and $B$ are disjoint, not when they are independent.
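
The following sketch illustrates both the definition and the common mistake: on a fair die, the two events below satisfy the product rule yet have a non-empty intersection.

```python
from fractions import Fraction

Omega = {1, 2, 3, 4, 5, 6}
prob = lambda E: Fraction(len(E), len(Omega))   # fair die

A = {2, 4, 6}          # "even"
B = {1, 2, 3, 4}       # "at most four"

print(prob(A & B) == prob(A) * prob(B))  # True: P(A ∩ B) = 1/3 = (1/2)(2/3), so A and B are independent
print(A & B)                             # {2, 4}: independent events need not be disjoint
```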

If the family of events $\{A_i : i \in I\}$ has the property that $\mathbb{P}(A_i \cap A_j) = \mathbb{P}(A_i) \, \mathbb{P}(A_j)$ for all $i \neq j$, then it is called a pairwise independent family of events. Let $C$ be an event with $\mathbb{P}(C) > 0$; then the two events $A$ and $B$ are called conditionally independent given $C$ if

\begin{align*} \mathbb{P}(A \cap B \mid C) = \mathbb{P}(A \mid C) \, \mathbb{P}(B \mid C) \end{align*}
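
A sketch of the distinction (the coin biases and the uniform choice of coin are arbitrary modelling assumptions): pick one of two coins at random and flip it twice; given which coin was chosen, the two flips are independent, but unconditionally they are not.

```python
# Conditional-independence sketch: choose one of two coins uniformly at random, then flip it twice.
heads_prob = {"fair": 0.5, "biased": 0.9}   # arbitrary biases

# Enumerate the finite sample space: outcome = (coin, flip1, flip2) with its probability.
space = {}
for coin, q in heads_prob.items():
    for f1 in "HT":
        for f2 in "HT":
            p1 = q if f1 == "H" else 1 - q
            p2 = q if f2 == "H" else 1 - q
            space[(coin, f1, f2)] = 0.5 * p1 * p2   # 0.5 = probability of picking this coin

def P(event):
    """Probability of an event, given as a predicate on outcomes."""
    return sum(p for w, p in space.items() if event(w))

A = lambda w: w[1] == "H"          # first flip is heads
B = lambda w: w[2] == "H"          # second flip is heads
C = lambda w: w[0] == "biased"     # the biased coin was chosen

# Conditionally independent given C: P(A ∩ B | C) = P(A | C) P(B | C)
lhs = P(lambda w: A(w) and B(w) and C(w)) / P(C)
rhs = (P(lambda w: A(w) and C(w)) / P(C)) * (P(lambda w: B(w) and C(w)) / P(C))
print(abs(lhs - rhs) < 1e-12)      # True: 0.81 on both sides

# But A and B are not (unconditionally) independent:
print(abs(P(lambda w: A(w) and B(w)) - P(A) * P(B)) < 1e-12)   # False: 0.53 vs 0.49
```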

Definition (Complete Space) :

A probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is called a complete space if all subsets of null sets are events.