Random Variables

Set theory :

A set is a collection of well-defined objects, either concrete or abstract. In the study of probability we are particularly interested in the set of all outcomes of a random experiment and its subsets.

Definition 1 (Union of sets) :

The union of two sets $E$ and $F$ is the set of all elements that are in at least one of the sets $E$ or $F$.

\begin{align*} E \cup F = \{ x : x \in E \text{ or } x \in F \} \end{align*}

Definition 2 (Intersection of sets) :

The intersection of two sets $E$ and $F$ is the set of all elements that are common to both sets $E$ and $F$.

\begin{align*} E \cap F = \{ x : x \in E \text{ and } x \in F \} \end{align*}

Definition 3 (Complement of a set) :

The complement of a set $E$ with respect to a universe $\Omega$ is the set of all elements that are not in $E$.

\begin{align*} E^{C} = \{ x : x \in \Omega \text{ and } x \notin E \}, \end{align*}

where $\Omega$ is the universe.

Definition 4 (Difference of two sets) :

The difference of two sets $E$ and $F$ is the set of all elements that are in $E$ but not in $F$. It is denoted by $E - F$ or $E \backslash F$.

\begin{align*} E \backslash F = \{ x : x \in E \text{ and } x \notin F \} \end{align*}

Definition 5 (Symmetric Difference of two sets) :

The symmetric difference of two sets $E$ and $F$ is the set of all elements that are in either $E$ or $F$ but not in both. It is defined as

\begin{align*} E \Delta F = (E \backslash F) \cup (F \backslash E) \end{align*}
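As a quick illustration, Python's built-in `set` type provides each of the operations defined above directly (the sets `E` and `F` below are arbitrary examples, not taken from the text):

```python
# Python's built-in set type implements the operations defined above.
E = {1, 2, 3, 4}
F = {3, 4, 5, 6}

union = E | F                 # E ∪ F  (Definition 1)
intersection = E & F          # E ∩ F  (Definition 2)
difference = E - F            # E \ F  (Definition 4)
symmetric_diff = E ^ F        # E Δ F = (E \ F) ∪ (F \ E)  (Definition 5)

print(union)           # {1, 2, 3, 4, 5, 6}
print(intersection)    # {3, 4}
print(difference)      # {1, 2}
print(symmetric_diff)  # {1, 2, 5, 6}
```

The complement (Definition 3) has no dedicated operator, since it only makes sense relative to a chosen universe $\Omega$; it can be computed as `Omega - E`.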

Definition 6 (De Morgan's laws) :

For any two sets $E$ and $F$,

  • $(E \cup F)^{C} = E^{C} \cap F^{C}$
  • $(E \cap F)^{C} = E^{C} \cup F^{C}$
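A brute-force check of both laws over a small universe is a one-liner with Python sets; `Omega`, `E`, and `F` below are arbitrary examples chosen for illustration:

```python
# Verify De Morgan's laws by exhaustive comparison over a small universe.
Omega = set(range(10))
E = {0, 1, 2, 3}
F = {2, 3, 4, 5}

def complement(S):
    """Complement of S relative to the universe Omega."""
    return Omega - S

law1 = complement(E | F) == complement(E) & complement(F)  # (E ∪ F)^C = E^C ∩ F^C
law2 = complement(E & F) == complement(E) | complement(F)  # (E ∩ F)^C = E^C ∪ F^C
print(law1, law2)  # True True
```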

Definition 7 (Partition of a set) :

Given any set $E$, an $n$-partition of $E$ consists of a sequence of sets $E_i$, $i = 1, 2, 3, \ldots, n$, such that

\begin{align*} \bigcup_{i=1}^{n} E_i = E \quad \text{and} \quad E_i \cap E_j = \emptyset \;\; \forall\, i \neq j \end{align*}
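The two conditions above (the pieces cover $E$, and are pairwise disjoint) translate directly into a checker; the example sets are arbitrary:

```python
from itertools import combinations

def is_partition(parts, E):
    """Check that `parts` is a partition of E: the pieces cover E
    and are pairwise disjoint. (Matching the definition above, empty
    pieces are allowed; some texts additionally require E_i != set().)"""
    covers = set().union(*parts) == set(E)
    disjoint = all(a.isdisjoint(b) for a, b in combinations(parts, 2))
    return covers and disjoint

E = {1, 2, 3, 4, 5, 6}
print(is_partition([{1, 2}, {3, 4}, {5, 6}], E))     # True
print(is_partition([{1, 2, 3}, {3, 4}, {5, 6}], E))  # False: 3 appears twice
print(is_partition([{1, 2}, {3, 4}], E))             # False: 5 and 6 uncovered
```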

Definition 8 (Equality of sets) :

Two sets $E$ and $F$ are said to be equal if every element of $E$ is in $F$ and vice versa.

\begin{align*} E = F \iff E \subseteq F \text{ and } F \subseteq E \end{align*}

Definition 9 (Disjoint sets) :

Two sets $E$ and $F$ are said to be disjoint if $E \cap F = \emptyset$.

Definition 10 (Subset of a set) :

A set $A$ is called a subset of a set $B$, denoted $A \subseteq B$, if every element of $A$ is also an element of $B$. Formally, this can be written as:

\begin{align*} A \subseteq B \iff (\forall x)(x \in A \Rightarrow x \in B) \end{align*}

where $\Rightarrow$ denotes "implies".

If $A$ is a subset of $B$ but $A$ is not equal to $B$, then $A$ is called a proper subset of $B$, denoted $A \subset B$. This can be formally written as:

\begin{align*} A \subset B \iff (A \subseteq B \wedge A \neq B) \end{align*}

where $\wedge$ denotes "and".
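Python's comparison operators on sets mirror this distinction exactly: `<=` tests $\subseteq$ and `<` tests the proper subset relation $\subset$ (the sets below are arbitrary examples):

```python
# Subset vs proper subset with Python set comparison operators.
A = {1, 2}
B = {1, 2, 3}

print(A <= B)  # A ⊆ B : True
print(A < B)   # A ⊂ B (proper subset): True
print(B <= B)  # every set is a subset of itself: True
print(B < B)   # ...but no set is a proper subset of itself: False
```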

Probability :

Probability theory is a mathematical framework that allows us to describe and analyze a random experiment whose outcome we cannot predict with certainty. It helps us quantify how likely or unlikely an event of interest is to occur. Let $A$ be an event whose chance of occurring is $p$. Whether or not $A$ occurs depends on the exact outcome of the random experiment.

Any experiment involving randomness can be modelled as a probability space. A probability space is a mathematical model of a random experiment, and it comprises

  • $\Omega$ (sample space) : the set of possible outcomes of the experiment.
  • $\mathcal{F}$ (event space) : a set of events.
  • $\mathbb{P}$ (probability measure).

Definition 1 (Sample space) :

A sample space is the set of all possible outcomes of an experiment, denoted by $\Omega$.

Example 1 : In the scenario of a coin being tossed, $\Omega = \{H, T\}$.

Example 2 : In the scenario of a die being rolled, $\Omega = \{1, 2, 3, 4, 5, 6\}$.

An event can be defined as a subset of the appropriate sample space $\Omega$. If $\Omega = \{H, T\}$, then an event $A$ can be $\{H\}$, $\{H\}^{C}$, or $\{H\} \cap \{T\}$; if $\Omega = \{1, 2, 3, 4, 5, 6\}$, then $A$ can be $\{2, 4, 6\}$, $\{1, 2, 3\}$, or $\{2\}^{C}$.

  • $\emptyset$ is said to be the impossible event.
  • $\Omega$ is said to be the certain event, since some member of $\Omega$ will certainly occur.

Not every subset of $\Omega$ need be an event.

Definition 2 (Event space) :

An event space is a collection $\mathcal{F}$ of events (subsets of $\Omega$) which satisfies the following properties :

  1. If $A_1, A_2, \ldots \in \mathcal{F}$, then $\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$.
  2. If $A \in \mathcal{F}$, then $A^{C} \in \mathcal{F}$.
  3. $\emptyset \in \mathcal{F}$.

Any collection of subsets satisfying these properties is called a $\sigma$-field.

If $A \in \mathcal{F}$, then $A$ is said to be an event.
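For a finite $\Omega$ the countable unions in property 1 reduce to finite ones, so the $\sigma$-field properties can be checked exhaustively. The sketch below (an illustration, not part of the text) verifies that the power set of a small $\Omega$ is a $\sigma$-field while an arbitrary collection need not be:

```python
from itertools import chain, combinations

def powerset(omega):
    """All subsets of omega, as frozensets."""
    s = list(omega)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

def is_sigma_field(F, omega):
    """For finite omega, countable unions reduce to finite ones, so it
    suffices to check pairwise unions, complements, and the empty set."""
    omega = frozenset(omega)
    return (frozenset() in F
            and all(omega - A in F for A in F)          # closed under complement
            and all(A | B in F for A in F for B in F))  # closed under union

omega = {'H', 'T'}
print(is_sigma_field(powerset(omega), omega))                  # True
print(is_sigma_field({frozenset(), frozenset(omega)}, omega))  # True (trivial σ-field)
print(is_sigma_field({frozenset(), frozenset({'H'})}, omega))  # False: {H}^C missing
```

The collection $\{\emptyset, \Omega\}$ is the smallest possible $\sigma$-field, and the power set is the largest.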

Definition 3 (Probability measure) :

A probability measure $\mathbb{P}$ on $(\Omega, \mathcal{F})$ is a function $\mathbb{P} : \mathcal{F} \to [0, 1]$ satisfying

  1. $\mathbb{P}(\Omega) = 1$.
  2. If $A_1, A_2, \ldots$ is a collection of disjoint members of $\mathcal{F}$, that is, $A_i \cap A_j = \emptyset$ for all pairs $i, j$ with $i \neq j$, then

\begin{align*} \mathbb{P}\left( \bigcup_{i=1}^{\infty} A_i \right) = \sum_{i=1}^{\infty} \mathbb{P}(A_i) \end{align*}

The triple $(\Omega, \mathcal{F}, \mathbb{P})$ is called a probability space.

Example 3 : A coin, possibly biased, is tossed once. We can take $\Omega = \{H, T\}$ and $\mathcal{F} = \{\emptyset, \Omega, \{H\}, \{T\}\}$. A possible probability measure $\mathbb{P} : \mathcal{F} \to [0, 1]$ is given by

  • $\mathbb{P}(\emptyset) = 0$
  • $\mathbb{P}(\Omega) = 1$
  • $\mathbb{P}(\{H\}) = p$
  • $\mathbb{P}(\{T\}) = 1 - p$, where $p \in [0, 1]$. If $p = 0.5$, then we say that the coin is fair.

Important properties of a typical probability space :

  • $\mathbb{P}(A^{C}) = 1 - \mathbb{P}(A)$
  • If $A \subseteq B$, then $\mathbb{P}(B) = \mathbb{P}(A) + \mathbb{P}(B \backslash A) \geq \mathbb{P}(A)$
  • $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$
  • More generally, if $A_1, A_2, \ldots, A_n$ are events, then

\begin{align*} \mathbb{P}\left( \bigcup_{i=1}^{n} A_i \right) = \sum_{i} \mathbb{P}(A_i) - \sum_{i<j} \mathbb{P}(A_i \cap A_j) + \sum_{i<j<k} \mathbb{P}(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n+1} \mathbb{P}(A_1 \cap A_2 \cap \cdots \cap A_n), \end{align*}

where, for example, $\sum_{i<j}$ sums over all pairs $(i, j)$ with $i < j$.
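The inclusion–exclusion formula can be verified numerically for three concrete events on a fair die roll (the events are arbitrary examples, and $\mathbb{P}(A) = |A|/6$ is the uniform measure):

```python
from itertools import combinations
from fractions import Fraction

# Verify inclusion-exclusion for n = 3 events under the uniform
# measure on a fair die, P(A) = |A| / |Omega|.
omega = {1, 2, 3, 4, 5, 6}
def P(A):
    return Fraction(len(A), len(omega))

events = [{2, 4, 6}, {1, 2, 3}, {3, 6}]

# Right-hand side: alternating sum over all non-empty subcollections.
rhs = Fraction(0)
for k in range(1, len(events) + 1):
    for combo in combinations(events, k):
        rhs += (-1) ** (k + 1) * P(set.intersection(*combo))

lhs = P(set.union(*events))  # left-hand side: probability of the union
print(lhs, rhs)  # 5/6 5/6
```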

An event $A$ is called a null event if $\mathbb{P}(A) = 0$, and if $\mathbb{P}(A) = 1$, we say that the event $A$ occurs almost surely. Null events should not be confused with the impossible event $\emptyset$: the impossible event is null, but null events need not be impossible.

Definition 4 (Conditional Probability) :

If $\mathbb{P}(B) > 0$, then the conditional probability that $A$ occurs given that $B$ occurs is defined as

\begin{align*} \mathbb{P}(A|B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)} \end{align*}
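A worked instance of the definition, on a fair die with arbitrary example events $A$ = "roll is even" and $B$ = "roll is greater than 3":

```python
from fractions import Fraction

# Conditional probability on a fair die: P(A|B) = P(A ∩ B) / P(B).
omega = {1, 2, 3, 4, 5, 6}
def P(E):
    return Fraction(len(E), len(omega))

A = {2, 4, 6}   # roll is even
B = {4, 5, 6}   # roll is greater than 3

P_A_given_B = P(A & B) / P(B)
print(P_A_given_B)  # 2/3: of the outcomes {4, 5, 6}, two are even
```

Note that $\mathbb{P}(A|B) = 2/3$ differs from $\mathbb{P}(A) = 1/2$: knowing that $B$ occurred changes the probability of $A$, which motivates the notion of independence below.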

Independence

In general, the occurrence of some event $B$ changes the probability that another event $A$ occurs, i.e. $\mathbb{P}(A)$ and $\mathbb{P}(A|B)$ can differ. If the probability remains unchanged, $\mathbb{P}(A) = \mathbb{P}(A|B)$, then we say that the two events $A$ and $B$ are independent.

Definition 5 (Independence) :

Events $A$ and $B$ are called independent if $\mathbb{P}(A \cap B) = \mathbb{P}(A)\,\mathbb{P}(B)$. More generally, the events $A_i$, $i \in I$, are independent if

\begin{align*} \mathbb{P}\left( \bigcap_{i \in J} A_i \right) = \prod_{i \in J} \mathbb{P}(A_i), \end{align*}

for all finite subsets JJ of II.

If the events $A_i$, $i \in I$, satisfy the weaker property that $\mathbb{P}(A_i \cap A_j) = \mathbb{P}(A_i)\,\mathbb{P}(A_j)$ for all $i \neq j$, then they are called pairwise independent. Pairwise independence does not imply independence.
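The classic counterexample showing that pairwise independence is strictly weaker can be checked directly: toss two fair coins and take $A$ = "first coin is H", $B$ = "second coin is H", $C$ = "both coins match" (an illustration, not part of the text):

```python
from fractions import Fraction
from itertools import product

# Two fair coin tosses: four equally likely outcomes.
omega = set(product('HT', repeat=2))
def P(E):
    return Fraction(len(E), len(omega))

A = {w for w in omega if w[0] == 'H'}   # first coin is H
B = {w for w in omega if w[1] == 'H'}   # second coin is H
C = {w for w in omega if w[0] == w[1]}  # both coins match

# Each pair is independent:
print(P(A & B) == P(A) * P(B))  # True
print(P(A & C) == P(A) * P(C))  # True
print(P(B & C) == P(B) * P(C))  # True

# ...but the triple condition of Definition 5 fails:
print(P(A & B & C), P(A) * P(B) * P(C))  # 1/4 1/8
```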

Let $C$ be an event with $\mathbb{P}(C) > 0$. The two events $A$ and $B$ are called conditionally independent given $C$ if

\begin{align*} \mathbb{P}(A \cap B \,|\, C) = \mathbb{P}(A|C)\, \mathbb{P}(B|C) \end{align*}