Signal representation and orthogonality

Vectors and dot products

To understand orthogonality of signals, we recall the familiar notion of vectors in 3D space and their representation. If $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ denote the unit vectors along the three perpendicular axes of the reference coordinate system, any arbitrary vector $\mathbf{v}$ can be represented as

$$\mathbf{v} = a\mathbf{i} + b\mathbf{j} + c\mathbf{k}$$

In the Cartesian coordinate system, the unit vectors $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ correspond to the points $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ respectively. The dot product of any two vectors $\mathbf{v}_1 = (a_1, b_1, c_1)$ and $\mathbf{v}_2 = (a_2, b_2, c_2)$ is defined as

$$\langle \mathbf{v}_1, \mathbf{v}_2 \rangle = \mathbf{v}_1 \cdot \mathbf{v}_2 = a_1 a_2 + b_1 b_2 + c_1 c_2$$

By this definition, any two distinct vectors from the set $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ have zero dot product. In general, any two perpendicular vectors have zero dot product and are said to be orthogonal to each other. Additionally, when orthogonal unit vectors are used as a basis, the representation coefficients are easily found. For example, when $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ is used as the basis,

$$a = \mathbf{v} \cdot \mathbf{i}, \quad b = \mathbf{v} \cdot \mathbf{j}, \quad c = \mathbf{v} \cdot \mathbf{k}$$
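The coefficient-recovery property above is easy to check numerically. The following is a minimal sketch: the vector $\mathbf{v} = 2\mathbf{i} + 3\mathbf{j} - 5\mathbf{k}$ is a made-up example, and `dot` is a small helper, not a library function.

```python
def dot(u, v):
    """Dot product of two 3D vectors given as tuples."""
    return sum(a * b for a, b in zip(u, v))

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Any two distinct basis vectors are orthogonal (zero dot product):
assert dot(i, j) == 0 and dot(j, k) == 0 and dot(i, k) == 0

# For v = 2i + 3j - 5k, the coefficients are recovered by dot products
# with the orthogonal unit basis:
v = (2, 3, -5)
a, b, c = dot(v, i), dot(v, j), dot(v, k)
print(a, b, c)  # 2 3 -5
```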

Dot product of signals and orthogonality

In an analogous way, we can define the notion of dot product (and hence orthogonality) for signals, which can be thought of as infinite-dimensional vectors. For continuous-time signals $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$, the dot product is defined as

$$\langle \mathbf{x}_1(t), \mathbf{x}_2(t) \rangle = \int_{-\infty}^{\infty} \mathbf{x}_1(t)\, \mathbf{x}_2(t)\, dt$$

Two signals $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ are said to be orthogonal if their dot product, as defined above, is zero, i.e., $\langle \mathbf{x}_1(t), \mathbf{x}_2(t) \rangle = 0$. In other words, the product signal $\mathbf{y}(t) = \mathbf{x}_1(t)\, \mathbf{x}_2(t)$ has equal amounts of positive and negative area.

For periodic signals $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ with the same period $T$, orthogonality can be verified by integrating the product over a single period. Thus, periodic signals $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ with period $T$ are orthogonal if

$$\langle \mathbf{x}_1(t), \mathbf{x}_2(t) \rangle = \int_{0}^{T} \mathbf{x}_1(t)\, \mathbf{x}_2(t)\, dt = 0$$

As an example, consider the signals $\mathbf{x}_1(t) = \sin(2\pi t)$ and $\mathbf{x}_2(t) = \cos(4\pi t)$. That their dot product is zero can be seen from the product signal and the cancelling shaded areas in Fig.1 and Fig.2 respectively.


Fig.1: Product of sine and cosine signals


Fig.2: Area getting cancelled due to the product of signals
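The area cancellation shown in the figures can also be checked numerically. Below is a sketch that approximates the dot product of $\sin(2\pi t)$ and $\cos(4\pi t)$ over their common period $T = 1$ with a simple Riemann sum; the sample count `N` is an arbitrary choice.

```python
import math

# Riemann-sum approximation of the dot product of sin(2*pi*t) and
# cos(4*pi*t) over one common period T = 1.
N = 100000            # number of samples (arbitrary, large enough)
dt = 1.0 / N
inner = sum(math.sin(2 * math.pi * n * dt) * math.cos(4 * math.pi * n * dt) * dt
            for n in range(N))
print(abs(inner) < 1e-9)  # True -- the signals are orthogonal
```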

The notion of dot product and orthogonality can be extended to complex signals. If $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ are complex-valued signals, their dot product is defined as

$$\langle \mathbf{x}_1(t), \mathbf{x}_2(t) \rangle = \int_{-\infty}^{\infty} \mathbf{x}_1^{*}(t)\, \mathbf{x}_2(t)\, dt$$

where $\mathbf{x}_1^{*}(t)$ denotes the complex conjugate of $\mathbf{x}_1(t)$. For example, the signals $e^{j\pi t}$ and $e^{j2\pi t}$ are orthogonal signals.
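The orthogonality of $e^{j\pi t}$ and $e^{j2\pi t}$ can be verified over their common period $T = 2$, conjugating the first signal as in the definition above. This is a numerical sketch using a Riemann sum; `N` is an arbitrary sample count.

```python
import cmath

# conj(e^{j*pi*t}) * e^{j*2*pi*t} = e^{j*pi*t}, integrated over T = 2.
N = 100000
dt = 2.0 / N
inner = sum(cmath.exp(-1j * cmath.pi * n * dt) * cmath.exp(2j * cmath.pi * n * dt) * dt
            for n in range(N))
print(abs(inner) < 1e-9)  # True -- the complex exponentials are orthogonal
```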

Discrete-time signals

In a similar fashion, the dot product for discrete-time signals is defined as

$$\langle \mathbf{x}_1[n], \mathbf{x}_2[n] \rangle = \sum_{n=-\infty}^{\infty} \mathbf{x}_1[n]\, \mathbf{x}_2[n]$$

For finite-length signals of length $N$, the dot product is defined as

$$\langle \mathbf{x}_1[n], \mathbf{x}_2[n] \rangle = \sum_{n=0}^{N-1} \mathbf{x}_1[n]\, \mathbf{x}_2[n]$$
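For finite-length sequences the dot product is just a finite sum. As a small illustration (the two length-4 sequences below are an example of my choosing, two rows of a 4-point Walsh-Hadamard set):

```python
def dot(x1, x2):
    """Finite-length dot product of two equal-length real sequences."""
    return sum(a * b for a, b in zip(x1, x2))

x1 = [1, 1, -1, -1]
x2 = [1, -1, -1, 1]
print(dot(x1, x2))  # 0 -> the two sequences are orthogonal
```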

Fourier Series

As with vectors, orthogonal signals are used extensively in the representation of signals; they often form the building blocks for various signal representations. For example, in trigonometric Fourier series analysis, a periodic signal $\mathbf{x}(t)$ is represented using sinusoids as follows:

$$\mathbf{x}(t) = a_0 + \sum_{k=1}^{\infty} \left[ a_k \cos(2\pi k f_0 t) + b_k \sin(2\pi k f_0 t) \right],$$

where $T = \frac{1}{f_0}$ is the period of $\mathbf{x}(t)$. The set of signals $\{1, \cos(2\pi k f_0 t), \sin(2\pi k f_0 t)\}_{k=1,2,\ldots}$, where $1$ denotes the constant signal, forms the building blocks of the Fourier series representation of periodic signals. All the signals in this set have the common period $T$, and we can verify that any two distinct signals in the set are orthogonal.
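The pairwise orthogonality of the Fourier basis signals can be spot-checked numerically. The sketch below takes $f_0 = 1$ (so $T = 1$), approximates each dot product with a Riemann sum, and tests a few representative pairs; the sample count `N` is an arbitrary choice.

```python
import math

N = 100000
dt = 1.0 / N
ts = [n * dt for n in range(N)]

def inner(f, g):
    """Riemann-sum approximation of the dot product over one period T = 1."""
    return sum(f(t) * g(t) * dt for t in ts)

one = lambda t: 1.0                       # the constant signal
c1 = lambda t: math.cos(2 * math.pi * t)  # cos, k = 1
c2 = lambda t: math.cos(4 * math.pi * t)  # cos, k = 2
s1 = lambda t: math.sin(2 * math.pi * t)  # sin, k = 1

# Every distinct pair has (numerically) zero dot product:
for f, g in [(one, c1), (one, s1), (c1, s1), (c1, c2)]:
    assert abs(inner(f, g)) < 1e-6
print("all pairs orthogonal")
```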

Haar Wavelet

Orthogonality is a recurring feature of many other signal representations, for example, wavelet decomposition. In wavelet theory, a mother wavelet function is used to generate the building blocks by scaling and translating this function. The set of functions (wavelets) thus generated forms a pairwise orthogonal set of signals, and these building blocks can be used to represent almost any arbitrary signal with finite support.

As an example, consider the Haar mother wavelet $\psi(t)$ and its scaled and translated versions with scale factor $n$ and shift $k$, given by

$$\psi(t) = \begin{cases} 1, & 0 \leq t < \frac{1}{2} \\ -1, & \frac{1}{2} \leq t < 1 \\ 0, & \text{otherwise.} \end{cases}$$

$$\psi_{n,k}(t) = 2^{n/2}\, \psi(2^{n} t - k), \quad t \in \mathbb{R}.$$


Fig.3: Haar mother wavelet

We can verify that the scaled and shifted versions of the Haar wavelet are orthogonal to each other. As a special case, the orthogonality of the wavelets $\psi_{0,0}(t)$ and $\psi_{1,0}(t)$ can be seen below.


Fig.4: Orthogonality of scaled Haar wavelet
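The orthogonality of $\psi_{0,0}$ and $\psi_{1,0}$ illustrated in Fig.4 can also be checked numerically. The sketch below implements the Haar definitions above directly and approximates the dot product over $[0, 1)$, which contains the support of both wavelets; `N` is an arbitrary sample count.

```python
def psi(t):
    """Haar mother wavelet."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def psi_nk(n, k, t):
    """Scaled and translated Haar wavelet 2^{n/2} * psi(2^n t - k)."""
    return 2 ** (n / 2) * psi(2 ** n * t - k)

N = 100000
dt = 1.0 / N  # both psi_{0,0} and psi_{1,0} are supported within [0, 1)
inner = sum(psi_nk(0, 0, n * dt) * psi_nk(1, 0, n * dt) * dt for n in range(N))
print(abs(inner) < 1e-9)  # True -- psi_{0,0} and psi_{1,0} are orthogonal
```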

Advantages of orthogonality

Finding the representation coefficients becomes easy when a signal is decomposed into a set of orthogonal signals. Consider the trigonometric Fourier series representation given above. We can find the coefficients $\{a_0, a_k, b_k\}_{k=1,2,\ldots}$ by computing the dot product of the given periodic signal $\mathbf{x}(t)$ with each basis signal and normalizing by that signal's energy over one period ($T$ for the constant signal and $T/2$ for each sinusoid). Thus,

$$a_0 = \frac{1}{T} \int_{0}^{T} \mathbf{x}(t)\, dt$$

$$a_k = \frac{2}{T} \int_{0}^{T} \mathbf{x}(t) \cos(2\pi k f_0 t)\, dt$$

$$b_k = \frac{2}{T} \int_{0}^{T} \mathbf{x}(t) \sin(2\pi k f_0 t)\, dt$$

Coefficients for other signal representations can be obtained in a similar way.
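As a sanity check, the coefficient formulas can be applied to a signal whose Fourier coefficients are known by construction. The test signal below, $\mathbf{x}(t) = 1 + 2\cos(2\pi t) + 3\sin(4\pi t)$ with $T = 1$, is a made-up example, and the integrals are approximated with Riemann sums using the usual normalization $2/T$ for the $k \geq 1$ terms.

```python
import math

T = 1.0
N = 100000
dt = T / N
ts = [n * dt for n in range(N)]

def x(t):
    # x(t) = 1 + 2 cos(2*pi*t) + 3 sin(4*pi*t), so a0 = 1, a1 = 2, b2 = 3.
    return 1 + 2 * math.cos(2 * math.pi * t) + 3 * math.sin(4 * math.pi * t)

a0 = (1 / T) * sum(x(t) * dt for t in ts)
a1 = (2 / T) * sum(x(t) * math.cos(2 * math.pi * t) * dt for t in ts)
b2 = (2 / T) * sum(x(t) * math.sin(4 * math.pi * t) * dt for t in ts)
print(round(a0, 6), round(a1, 6), round(b2, 6))  # 1.0 2.0 3.0
```

Orthogonality is what makes each coefficient computable independently of the others: the dot product with one basis signal annihilates every other term of the series.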