Random Variables
Set theory :
A set is a collection of well-defined objects, either concrete or abstract. In the study of probability we are particularly interested in the set of all outcomes of a random experiment and its subsets.
Definition 1 (Union of sets) :
The union of two sets E and F is the set of all elements that are in at least one of the sets E or F.
E ∪ F = {x : x ∈ E or x ∈ F}
Definition 2 (Intersection of sets) :
The intersection of two sets E and F is the set of all elements that are common to both sets E and F.
E ∩ F = {x : x ∈ E and x ∈ F}
Definition 3 (Complement of a set) :
The complement of a set E with respect to a universe Ω is the set of all elements that are not in E.
E^C = {x : x ∈ Ω and x ∉ E}, where Ω is the universe
Definition 4 (Difference of two sets) :
The difference of two sets E and F is the set of all elements that are in E but not in F. It is denoted by E − F or E \ F.
E \ F = {x : x ∈ E and x ∉ F}
Definition 5 (Symmetric Difference of two sets) :
The symmetric difference of two sets E and F is the set of all elements that are either in E or in F but not in both. It is defined as
E Δ F = (E \ F) ∪ (F \ E)
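Definitions 1–5 map directly onto Python's built-in set operators, which gives a quick way to experiment with them; the sets E and F below are arbitrary examples, not taken from the text.

```python
# Set operations from Definitions 1-5 using Python's built-in set type.
# E and F are arbitrary example sets chosen for illustration.
E = {1, 2, 3, 4}
F = {3, 4, 5, 6}

union = E | F            # E ∪ F
intersection = E & F     # E ∩ F
difference = E - F       # E \ F
symmetric_diff = E ^ F   # E Δ F

print(sorted(union))           # [1, 2, 3, 4, 5, 6]
print(sorted(intersection))    # [3, 4]
print(sorted(difference))      # [1, 2]
print(sorted(symmetric_diff))  # [1, 2, 5, 6]

# E Δ F agrees with its definition (E \ F) ∪ (F \ E)
assert symmetric_diff == (E - F) | (F - E)
```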
Definition 6 (DeMorgan's laws) :
For any two sets E and F,
- (E ∪ F)^C = E^C ∩ F^C
- (E ∩ F)^C = E^C ∪ F^C
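Both laws can be checked numerically; the universe Ω and the sets E and F below are arbitrary choices for illustration.

```python
# Numeric check of De Morgan's laws on an arbitrary example universe.
omega = set(range(10))   # universe Ω (an assumed example)
E = {0, 1, 2, 3}
F = {2, 3, 4, 5}

def complement(A):
    """Complement of A with respect to the universe Ω."""
    return omega - A

assert complement(E | F) == complement(E) & complement(F)  # (E ∪ F)^C = E^C ∩ F^C
assert complement(E & F) == complement(E) | complement(F)  # (E ∩ F)^C = E^C ∪ F^C
```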
Definition 7 (Partition of a set) :
Given any set E, an n-partition of E consists of a sequence of sets E_i, i = 1, 2, 3, ⋯, n such that
⋃_{i=1}^{n} E_i = E, and E_i ∩ E_j = ϕ for all i ≠ j
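The two partition conditions, covering and pairwise disjointness, can be verified by a small helper; the function name `is_partition` and the example sets are my own for illustration.

```python
from itertools import combinations

def is_partition(parts, E):
    """Check Definition 7: the parts cover E and are pairwise disjoint."""
    covers = set().union(*parts) == E
    disjoint = all(A & B == set() for A, B in combinations(parts, 2))
    return covers and disjoint

E = {1, 2, 3, 4, 5, 6}
print(is_partition([{1, 2}, {3, 4}, {5, 6}], E))   # True
print(is_partition([{1, 2, 3}, {3, 4, 5, 6}], E))  # False: 3 is in two parts
```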
Definition 8 (Equality of sets) :
Two sets E and F are said to be equal if every element of E is in F and vice versa.
E = F if E ⊆ F and F ⊆ E
Definition 9 (Disjoint sets) :
Two sets E and F are said to be disjoint if E ∩ F = ϕ.
Definition 10 (Subset of a set) :
A set A is called a subset of a set B, denoted A⊆B, if every element of A is also an element of B. Formally, this can be written as:
A⊆B⟺(x∈A⇒x∈B)
where ⇒ denotes "implies".
If A is a subset of B but A is not equal to B, then A is called a proper subset of B, denoted A⊂B. This can be formally written as:
A ⊂ B ⟺ (A ⊆ B ∧ A ≠ B)
where ∧ denotes "and".
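Python's comparison operators on sets correspond exactly to these two relations, which makes Definition 10 easy to experiment with; the example sets are arbitrary.

```python
# Subset (<=) and proper subset (<) on arbitrary example sets.
A = {1, 2}
B = {1, 2, 3}

assert A <= B       # A ⊆ B
assert A < B        # A ⊂ B: subset and A ≠ B
assert B <= B       # every set is a subset of itself...
assert not (B < B)  # ...but never a proper subset of itself
```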
Probability :
Probability theory is a mathematical framework that allows us to describe and analyze a random experiment whose outcomes we cannot predict with certainty. It helps us predict how likely or unlikely it is that an event of interest occurs. Let A be an event, and let p be the chance of A occurring. The occurrence or non-occurrence of A depends on the exact outcome of the random experiment.
Any experiment involving randomness can be modelled as a probability space. A probability space is a mathematical model of a random experiment. The space consists of
- Ω (Sample space): Set of possible outcomes of the experiment.
- F (Event space) : Set of events.
- P (Probability measure).
Definition 1 (Sample space) :
A sample space is the set of all possible outcomes of an experiment, denoted by Ω.
Example 1 : In the scenario of a coin being tossed, Ω={H,T}.
Example 2 : In the scenario of a die being rolled, Ω={1,2,3,4,5,6}.
An event can be defined as a subset of the appropriate sample space Ω. If Ω = {H,T}, then an event A can be {H} or {H}^C or {H} ∩ {T}; or, if Ω = {1,2,3,4,5,6}, then A can be {2,4,6} or {1,2,3} or {2}^C.
- ϕ is said to be the impossible event.
- Ω is said to be the certain event since some member of Ω will certainly occur.
Not all subsets of Ω need be events.
Definition 2 (Event space) :
An event space is a collection F of events (subsets of Ω), which satisfy the following properties
- If A_1, A_2, ⋯ ∈ F, then ⋃_{i=1}^{∞} A_i ∈ F
- If A∈F, then AC∈F
- ϕ∈F
Any collection satisfying these properties is called a σ-field.
If A∈F then A is said to be an event.
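For a finite Ω the power set is always an event space, since countable unions reduce to finite ones; the helper below is a sketch under that assumption, with `power_set` being my own name for it.

```python
from itertools import chain, combinations

def power_set(S):
    """All subsets of S, as frozensets."""
    subsets = chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
    return {frozenset(c) for c in subsets}

omega = frozenset({"H", "T"})
F = power_set(omega)                          # {ϕ, {H}, {T}, Ω}

assert frozenset() in F                       # ϕ ∈ F
assert all(omega - A in F for A in F)         # closed under complement
assert all(A | B in F for A in F for B in F)  # closed under (finite) unions
```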
Definition 3 (Probability measure) :
A probability measure P on (Ω,F) is a function P:F→[0,1] satisfying
- P(Ω)=1
- If A_1, A_2, ⋯ is a collection of disjoint members of F, in that A_i ∩ A_j = ϕ for all pairs i, j satisfying i ≠ j, then
P(⋃_{i=1}^{∞} A_i) = ∑_{i=1}^{∞} P(A_i)
This triple (Ω,F,P) is called a probability space.
Example 3 :
A coin, possibly biased, is tossed once. We can take Ω={H,T} and F={ϕ,Ω,{H},{T}}. A possible probability measure P:F→[0,1] is given by
- P(ϕ)=0
- P(Ω)=1
- P({H})=p
- P({T})=1−p
where p∈[0,1]. If p = 0.5, then we can say that the coin is fair.
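Example 3 can be written out as a lookup table from events to probabilities; the bias p = 0.25 below is an arbitrary choice for illustration.

```python
# The biased-coin probability space of Example 3 as a Python dictionary.
# The bias p = 0.25 is an assumed example value.
p = 0.25
omega = frozenset({"H", "T"})
heads, tails = frozenset({"H"}), frozenset({"T"})

# Probability measure P on F = {ϕ, Ω, {H}, {T}}
P = {frozenset(): 0.0, omega: 1.0, heads: p, tails: 1 - p}

# The defining properties of a probability measure hold:
assert P[omega] == 1.0
assert P[heads] + P[tails] == P[omega]  # {H}, {T} disjoint with {H} ∪ {T} = Ω
```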
Important properties of a typical probability space :
- P(A^C) = 1 − P(A)
- If A ⊆ B, then P(B) = P(A) + P(B \ A) ≥ P(A)
- P(A∪B)=P(A)+P(B)−P(A∩B)
- More generally, if A_1, A_2, ⋯, A_n are events, then
P(⋃_{i=1}^{n} A_i) = ∑_i P(A_i) − ∑_{i<j} P(A_i ∩ A_j) + ∑_{i<j<k} P(A_i ∩ A_j ∩ A_k) − ⋯ + (−1)^{n+1} P(A_1 ∩ A_2 ∩ ⋯ ∩ A_n),
where, for example, ∑_{i<j} sums over all pairs (i, j) with i < j.
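The n = 3 case can be verified by brute force; the uniform measure on a die roll and the three events below are assumptions chosen for illustration.

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def P(E):
    """Uniform probability on a fair die roll (assumed for this example)."""
    return Fraction(len(E), len(omega))

A1, A2, A3 = {1, 2, 3}, {2, 4, 6}, {3, 4, 5}

lhs = P(A1 | A2 | A3)
rhs = (P(A1) + P(A2) + P(A3)
       - P(A1 & A2) - P(A1 & A3) - P(A2 & A3)
       + P(A1 & A2 & A3))
assert lhs == rhs
print(lhs)  # 1: here the three events happen to cover all of Ω
```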
An event A is called a null event if P(A) = 0, and if P(A) = 1, we say that the event A occurs almost surely. Null events should not be confused with the impossible event ϕ: the impossible event is null, but null events need not be impossible.
Definition 4 (Conditional Probability) :
If P(B)>0, then the conditional probability that A occurs given that B occurs is defined as
P(A∣B) = P(A ∩ B) / P(B)
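On a finite sample space with a uniform measure the definition reduces to counting; the events "the roll is even" and "the roll is at least 4" on a fair die are assumed examples.

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def P(E):
    """Uniform probability on a fair die roll (assumed for this example)."""
    return Fraction(len(E), len(omega))

A = {2, 4, 6}  # the roll is even
B = {4, 5, 6}  # the roll is at least 4

P_A_given_B = P(A & B) / P(B)
print(P_A_given_B)  # 2/3: of the outcomes {4, 5, 6}, two are even
```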
Independence
In general, the occurrence of some event B changes the probability that another event A occurs, i.e. P(A) and P(A∣B) can be different. If the probability remains unchanged, P(A) = P(A∣B), then we say that the two events A and B are independent.
Definition 5 (Independence) :
Events A and B are called independent events if P(A ∩ B) = P(A)P(B). More generally, the events A_i, i ∈ I, are independent if
P(⋂_{i∈J} A_i) = ∏_{i∈J} P(A_i),
for all finite subsets J of I.
If the events A_i, i ∈ I, satisfy the property that P(A_i ∩ A_j) = P(A_i)P(A_j) for all i ≠ j, then they are called pairwise independent events.
Let C be an event with P(C) > 0. Then the two events A and B are called conditionally independent given C if
P(A ∩ B ∣ C) = P(A ∣ C) P(B ∣ C)
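A standard example (not from the text) of events that are pairwise independent but not independent uses two fair coin tosses; the uniform measure below is an assumption of that setup.

```python
from fractions import Fraction
from itertools import product

omega = set(product("HT", repeat=2))  # outcomes of two fair coin tosses

def P(E):
    """Uniform probability on two fair coin tosses (assumed)."""
    return Fraction(len(E), len(omega))

A = {w for w in omega if w[0] == "H"}   # first toss is heads
B = {w for w in omega if w[1] == "H"}   # second toss is heads
C = {w for w in omega if w[0] == w[1]}  # the two tosses agree

# Pairwise independent:
assert P(A & B) == P(A) * P(B)
assert P(A & C) == P(A) * P(C)
assert P(B & C) == P(B) * P(C)
# ...but not independent as a triple (take J = {1, 2, 3} in Definition 5):
assert P(A & B & C) != P(A) * P(B) * P(C)  # 1/4 vs 1/8
```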