
This paper proposes a continuous-time singular value decomposition (SVD) for the impulse response function, a special kind of Green's function, in order to find a set of singular functions and singular values such that the convolutions of the impulse response with those singular functions on a specified domain are the solutions to the corresponding inhomogeneous differential equations. A numerical example is presented to verify the proposed method. Besides the continuous-time SVD, a discrete-time SVD is also presented for the impulse response function, which is modeled with a Toeplitz matrix in the discrete system. The proposed method has broad applications in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, as well as macroeconomic modeling.

The impulse response function (IRF) has been used extensively in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, and macroeconomic modeling. The significance of this function is that for any input, the output can be calculated in terms of the input and the impulse response. Determining an output directly in the time domain requires the convolution of the input with the impulse response, which can be complicated. Therefore, it is usual to first calculate the Laplace transform of a system's output by multiplying the Laplace transform of the IRF with the input's Laplace transform in the frequency domain, and then find the time-domain output through an inverse Laplace transform.

This study proposes a direct method to calculate the system output through the singular value decomposition (SVD) algorithm. As a popular technique for factorizing a matrix, a variety of forms of SVD have been developed recently and applied to a wide range of scientific and engineering problems. For example, Ramirez-Velarde et al. used SVD on the video correlation matrix to perform principal component analysis [

Given an impulse response function (Green’s function) of the form:

h(t, τ) = e^{−(t−τ)}

The analysis below shows how to find a set of "singular" functions f(j, τ) and singular values D_j (j = 0, 1, 2, ⋯) such that:

∫_0^t h(t, τ) f(j, τ) dτ = D_j f(j, T − t),  t ≤ T    (1)

The left-hand side of Equation (1) is Duhamel's integral [

Conjecture 1: f(j, τ) = sin(ω_j τ + ϕ_j)

Proof

If Conjecture 1 is correct, Equation (1) becomes:

∫_0^t e^τ sin(ω_j τ + ϕ_j) dτ = e^t D_j sin[ω_j(T − t) + ϕ_j],  t ≤ T    (2)

The complete solution process for Equation (2) includes three steps: 1) simplifying the left-hand side of Equation (2); 2) simplifying the right-hand side of Equation (2); 3) solving the simplified Equation (2).

Step 1: Simplifying ∫_0^t e^τ sin(ω_j τ + ϕ_j) dτ

The integral on the left-hand side of Equation (2) evaluates to:

∫_0^t e^τ sin(ω_j τ + ϕ_j) dτ = [e^t sin(ϕ_j + ω_j t) − ω_j e^t cos(ϕ_j + ω_j t)]/(1 + ω_j²) − [sin(ϕ_j) − ω_j cos(ϕ_j)]/(1 + ω_j²)    (3)

The first term on the right-hand side of Equation (3) can be simplified as:

[e^t sin(ϕ_j + ω_j t) − ω_j e^t cos(ϕ_j + ω_j t)]/(1 + ω_j²) = e^t sin(ϕ_j + ω_j t − atan(ω_j))/√(1 + ω_j²)    (4)

In order to find ϕ_j, the second term on the right-hand side of Equation (3) can be set to zero:

sin(ϕ_j) − ω_j cos(ϕ_j) = 0    (5)

Solutions of the above equation are:

ϕ_j = atan(ω_j) − nπ  (n is an integer)    (6)

Substituting Equations (4) and (5) into Equation (3), a more compact form is obtained:

∫_0^t e^τ sin(ω_j τ + ϕ_j) dτ = e^t sin(ϕ_j + ω_j t − atan(ω_j))/√(1 + ω_j²)    (7)

Replacing atan(ω_j) with ϕ_j + nπ using Equation (6), the left-hand side of Equation (2) finally simplifies to:

∫_0^t e^τ sin(ω_j τ + ϕ_j) dτ = e^t sin(ω_j t − nπ)/√(1 + ω_j²) = e^t (−1)^n sin(ω_j t)/√(1 + ω_j²)    (8)
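Equation (8) can be spot-checked numerically. The sketch below (in Python, since the paper's own Mathcad worksheets are not reproduced here) compares a Simpson-rule quadrature of the left-hand side against the closed form, for arbitrarily chosen test values of ω and n (assumptions of this sketch, not values from the paper):

```python
import math

def duhamel_lhs(t, omega, phi, steps=20000):
    # Composite Simpson quadrature of ∫_0^t e^τ sin(ω τ + φ) dτ
    h = t / steps
    total = 0.0
    for k in range(steps + 1):
        tau = k * h
        w = 1 if k in (0, steps) else (4 if k % 2 else 2)
        total += w * math.exp(tau) * math.sin(omega * tau + phi)
    return total * h / 3

def closed_form(t, omega, n):
    # Right-hand side of Equation (8): e^t (−1)^n sin(ω t) / √(1 + ω²)
    return math.exp(t) * (-1) ** n * math.sin(omega * t) / math.sqrt(1 + omega ** 2)

omega, n = 1.7, 2                      # arbitrary test values
phi = math.atan(omega) - n * math.pi   # phase from Equation (6)
for t in (0.5, 1.0, 3.0):
    assert abs(duhamel_lhs(t, omega, phi) - closed_form(t, omega, n)) < 1e-6
```

Note that no constraint on ω is needed here: Equation (8) holds for any ω once ϕ is chosen from Equation (6).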

Step 2: Simplifying e^t D_j sin[ω_j(T − t) + ϕ_j]

Substituting Equation (6) into the right-hand side of Equation (2) and using the identity sin(θ − nπ) = (−1)^n sin(θ), we have:

e^t D_j sin[ω_j(T − t) + ϕ_j] = e^t D_j (−1)^n sin[ω_j(T − t) + atan(ω_j)]    (9)

Step 3: Solving Equation (2)

In order to satisfy Equation (2), ϕ_j, ω_j and D_j must meet certain requirements. This step completely solves Equation (2) and discusses the characteristics of ϕ_j, ω_j and D_j from the final solution. Substituting the simplified terms of Equations (8) and (9) into Equation (2), we have:

e^t (−1)^n sin(ω_j t)/√(1 + ω_j²) = e^t D_j (−1)^n sin[ω_j(T − t) + atan(ω_j)]    (10)

The factor e^t (−1)^n can be canceled from both sides, which implies that the integer n does not affect the final results. Equation (10) becomes:

sin(ω_j t)/√(1 + ω_j²) = D_j sin[ω_j(T − t) + atan(ω_j)] = D_j sin[−ω_j t + atan(ω_j) + ω_j T]    (11)

Let φ_j = atan(ω_j) + ω_j T; then the term sin(−ω_j t + atan(ω_j) + ω_j T) in Equation (11) can be expanded as:

sin(−ω_j t + atan(ω_j) + ω_j T) = sin(−ω_j t + φ_j) = cos(ω_j t) sin(φ_j) − sin(ω_j t) cos(φ_j)    (12)

Because the left-hand side of Equation (11) is a constant times sin(ω_j t), the right-hand side must also be a constant times sin(ω_j t) if the equation is to hold. Inspecting Equation (12), we find that it reduces to sin(ω_j t) times a constant if and only if φ_j = jπ (j is an integer). In other words, atan(ω_j) + ω_j T = jπ, which yields:

tan(jπ − ω_j T) = ω_j  →  −tan(ω_j T) = ω_j  for an integer j    (13)

Equation (13) is a transcendental equation with infinitely many roots, so it is reasonable to label the roots in ascending order: ω_0 < ω_1 < ω_2 < ⋯. Thus, Equation (11) can be further reduced to:

sin(ω_j t)/√(1 + ω_j²) = D_j[−sin(ω_j t) cos(jπ)]    (14)

which yields

D_j = (−1)^{j+1}/√(1 + ω_j²)    (15)

for an integer j.

Since atan(ω_j) + ω_j T = jπ, we have atan(ω_j) = jπ − ω_j T, and Equation (6) becomes:

ϕ_j = (j − n)π − ω_j T    (16)

As can be seen from Equations (13) and (15), the value of n does not affect the results for either D_j or ω_j. For convenience, we set n = j to further simplify the solution (Equation (16)) to:

ϕ_j = −ω_j T    (17)

Thus, Equation (2) has been completely solved, with the solutions provided by Equations (13), (15), and (17). Nevertheless, ω_j = 0 is excluded even though it is a solution of Equation (13), because it leads to a trivial solution of Equations (2) and (17). The solved singular functions can therefore be represented as f(j, τ) = sin(ω_j τ − ω_j T), and Conjecture 1 is proved.
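The complete solution can be verified numerically against the original Equation (1). The following Python sketch (an independent check under stated assumptions, not the paper's Mathcad worksheet) finds a few roots of Equation (13) for T = 10, takes D_j = (−1)^{j+1}/√(1 + ω_j²) with j the branch integer from φ_j = jπ, and confirms that the convolution of e^{−(t−τ)} with sin(ω_j(τ − T)) reproduces D_j f(j, T − t):

```python
import math

T = 10.0  # domain length, as in the illustrative example

def find_root(k, iters=200):
    # Bisection for −tan(ωT) = ω on the k-th branch of the tangent, k = 1, 2, …
    g = lambda w: math.tan(w * T) + w
    lo = (2 * k - 1) * math.pi / (2 * T) + 1e-9
    hi = (2 * k + 1) * math.pi / (2 * T) - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def convolve(t, omega, steps=10000):
    # Left-hand side of Equation (1): ∫_0^t e^{−(t−τ)} sin(ω(τ − T)) dτ (Simpson rule)
    h = t / steps
    s = 0.0
    for i in range(steps + 1):
        tau = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * math.exp(-(t - tau)) * math.sin(omega * (tau - T))
    return s * h / 3

for k in (1, 2, 3):
    om = find_root(k)
    j = round((math.atan(om) + om * T) / math.pi)   # branch integer: φ_j = jπ
    D = (-1) ** (j + 1) / math.sqrt(1 + om * om)    # singular value
    for t in (1.0, 4.0, 7.5):
        # Right-hand side of Equation (1): D_j f(j, T − t) = D_j sin(−ω_j t)
        assert abs(convolve(t, om) - D * math.sin(-om * t)) < 1e-6
```

Each root satisfies atan(ω) + ωT = kπ exactly for its branch index k, which is why the branch integer can be recovered by rounding.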

An illustrative example is presented here to validate the developed method.

In Equation (13), we assume T = 10; the roots of that equation can be found using Mathcad as the intersections between the function curves Func1 = −tan(10ω) and Func2 = ω.
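As an alternative to reading the intersections off a plot, the roots can be bracketed and bisected programmatically. The Python sketch below (an assumption of this edit, not the paper's Mathcad worksheet) exploits the fact that exactly one root of −tan(ωT) = ω lies on each branch of the tangent:

```python
import math

T = 10.0

def g(w):
    return math.tan(w * T) + w   # zeros of g solve −tan(ωT) = ω, Equation (13)

def root_on_branch(k, iters=200):
    # One root lies between consecutive asymptotes (2k−1)π/(2T) and (2k+1)π/(2T)
    lo = (2 * k - 1) * math.pi / (2 * T) + 1e-9
    hi = (2 * k + 1) * math.pi / (2 * T) - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# First four positive roots ω_0 < ω_1 < ω_2 < ω_3 (ω = 0 is excluded as trivial)
omegas = [root_on_branch(k) for k in (1, 2, 3, 4)]
```

Bisection is used here because tan(ωT) blows up at the branch endpoints, which makes derivative-based solvers fragile near the asymptotes.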

Equations (15) and (17) provide the values of D_j and ϕ_j. Let j = 0, 1, 2, 3, and substitute j and the corresponding roots ω_j of Equation (13) into:

f1(t, j) = ∫_0^t e^{−(t−τ)} sin(ω_j τ + ϕ_j)/M(j) dτ    (18)

f2(t, j) = D_j sin[ω_j(T − t) + ϕ_j]/M(j)    (19)

where f1(t, j) comes from the left-hand side of the original Equation (2) and f2(t, j) comes from the right-hand side of that equation. M(j) is the magnitude of sin(ω_j τ + ϕ_j) on [0, T]:

M(j) = √(∫_0^T sin²(ω_j τ + ϕ_j) dτ)    (20)

It is worth mentioning that the functions sin(ω_i t) built from distinct roots of Equation (13) are mutually orthogonal on [0, T]; for example, for i = 1 and j = 2 we have ∫_0^T sin(ω_i t) sin(ω_j t) dt = 0. Note that T is not the period of the sine waves.
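The orthogonality claim can be checked in closed form: for roots ω_i ≠ ω_j of Equation (13), the integral equals (ω_j sin(ω_i T) cos(ω_j T) − ω_i cos(ω_i T) sin(ω_j T))/(ω_i² − ω_j²), which vanishes because sin(ωT) = −ω cos(ωT) at every root. A short Python sketch (an assumption of this edit, with T = 10 as in the example):

```python
import math

T = 10.0

def root_on_branch(k, iters=200):
    # Bisection for the k-th positive root of −tan(ωT) = ω (Equation (13))
    g = lambda w: math.tan(w * T) + w
    lo = (2 * k - 1) * math.pi / (2 * T) + 1e-9
    hi = (2 * k + 1) * math.pi / (2 * T) - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def inner(wi, wj):
    # ∫_0^T sin(ω_i t) sin(ω_j t) dt via the product-to-sum identity
    if wi == wj:
        return T / 2 - math.sin(2 * wi * T) / (4 * wi)
    return 0.5 * (math.sin((wi - wj) * T) / (wi - wj)
                  - math.sin((wi + wj) * T) / (wi + wj))

w = [root_on_branch(k) for k in (1, 2, 3, 4)]
# Distinct singular frequencies give orthogonal sine functions on [0, T]
assert all(abs(inner(w[a], w[b])) < 1e-8
           for a in range(4) for b in range(4) if a != b)
```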

It should be noted that the numerical example in this paper was modeled and solved using Mathcad, a math software package for engineering calculations.

As a counterpart of the continuous-time SVD, a discrete-time SVD is also conducted for the Green's function e^{−(t−τ)}. In a discrete system, this function is represented by a Toeplitz matrix. The entire SVD process is presented as follows.

Given a matrix A with the following property:

R∗A∗L^T = L∗A^T∗R^T    (21)

where R and L are suitable orthogonal matrices. The product R∗A∗L^T is clearly a symmetric matrix because R∗A∗L^T = L∗A^T∗R^T = (R∗A∗L^T)^T. Equation (21) can be rewritten as L^T∗R∗A = A^T∗R^T∗L. Let Q = L^T∗R, so that the equation becomes Q∗A = A^T∗Q^T = (Q∗A)^T. This shows that if Equation (21) is satisfied, an orthogonal matrix Q can be found such that Q∗A is symmetric.

It is always possible to transform a square matrix into a symmetric matrix in this way, since the SVD can convert the matrix into a diagonal form, as shown in Equations (22) to (24).

Consider the eigenvalue decomposition of the real symmetric matrix R∗A∗L^T:

R∗A∗L^T = S^T∗Λ∗S    (22)

where the matrix S contains an orthonormal set of eigenvectors of R∗A∗L^T and the diagonal matrix Λ contains the corresponding eigenvalues.

From Equation (22), A can be solved as:

A = R^T∗S^T∗Λ∗S∗L    (23)

Let U = R^T∗S^T, V = L^T∗S^T and D = Λ; then Equation (23) becomes:

A = U∗D∗V^T    (24)

Equation (24) is the SVD of A.

Let P be the permutation matrix that reverses the order of the rows. P is an orthogonal matrix with ones along the main antidiagonal and zeros elsewhere. Thus, if the matrix A is a Toeplitz matrix, the product P∗A is a Hankel matrix. In Equation (22), if we take R = P, A a lower triangular Toeplitz matrix, and L = I (the identity matrix), then according to SVD theory the matrix S is obtained by assembling orthonormal eigenvectors of the Hankel matrix P∗A, and Λ is a diagonal matrix with the corresponding eigenvalues of P∗A along the diagonal. This shows that the singular values and right singular vectors of a lower triangular Toeplitz matrix are just the eigenvalues and eigenvectors of the Hankel matrix obtained by flipping that Toeplitz matrix upside down. However, it should be noted that even though the eigenvalues of P∗A can be positive or negative, the singular values of A must be nonnegative.
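The flip property is easy to confirm with a small numerical sketch (Python is used here as a stand-in for Mathcad; the 5 × 5 matrix is the example matrix given later in this section):

```python
n = 5
col = [5, 4, 3, 2, 1]   # first column of the example lower triangular Toeplitz matrix
A = [[col[i - j] if i >= j else 0 for j in range(n)] for i in range(n)]

# P has ones along the main antidiagonal: left-multiplying reverses the row order
P = [[1 if j == n - 1 - i else 0 for j in range(n)] for i in range(n)]
PA = [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# P∗A has constant antidiagonals (Hankel) and, for a Toeplitz A, is symmetric
assert all(PA[i][j] == PA[j][i] for i in range(n) for j in range(n))
assert all(PA[i][j] == PA[i + 1][j - 1] for i in range(n - 1) for j in range(1, n))
```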

The following process explains how to find U and V in Equation (24) for the lower triangular Toeplitz matrix A.

The eigenvalues λ_i and eigenvectors X_i of the Hankel matrix P∗A satisfy:

(P∗A)X_i = λ_i X_i    (25)

Since λ_i can be either positive or negative but the singular value s_i must be nonnegative, we have:

V_i = X_i/|X_i|,  U_i = sign(λ_i)∗P∗(X_i/|X_i|)  and  s_i = |λ_i|    (26)

For example, if matrix A is:

A = [ 5 0 0 0 0
      4 5 0 0 0
      3 4 5 0 0
      2 3 4 5 0
      1 2 3 4 5 ]

The SVD of that matrix can be completed by finding U, V and D (Λ) from Equation (26) (please see the Appendix for the Mathcad code).
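Since the Mathcad code lives in the Appendix, an equivalent Python sketch is given here. It is an illustration of Equations (25) and (26) under the stated flip-to-Hankel construction, not the paper's own implementation; a hand-rolled Jacobi eigensolver keeps the example self-contained:

```python
import math

def jacobi_eig(M, sweeps=100, tol=1e-12):
    # Eigen-decomposition of a real symmetric matrix by cyclic Jacobi rotations;
    # returns the eigenvalues and the matrix whose columns are unit eigenvectors.
    n = len(M)
    a = [row[:] for row in M]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = math.sqrt(sum(a[i][j] ** 2 for i in range(n) for j in range(n) if i != j))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < tol:
                    continue
                th = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):                       # a ← Rᵀ·a
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k], a[q][k] = c * apk - s * aqk, s * apk + c * aqk
                for k in range(n):                       # a ← a·R
                    akp, akq = a[k][p], a[k][q]
                    a[k][p], a[k][q] = c * akp - s * akq, s * akp + c * akq
                for k in range(n):                       # accumulate V ← V·R
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p], V[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
    return [a[i][i] for i in range(n)], V

n = 5
col = [5.0, 4.0, 3.0, 2.0, 1.0]
A = [[col[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)]
H = [A[n - 1 - i][:] for i in range(n)]      # P∗A: flipping rows gives a symmetric Hankel matrix

lam, X = jacobi_eig(H)                       # Equation (25): (P∗A) X_i = λ_i X_i
s = [abs(l) for l in lam]                    # s_i = |λ_i|, Equation (26)
# U_i = sign(λ_i)·P·X_i; the columns of X are already unit length
U = [[math.copysign(1.0, lam[j]) * X[n - 1 - i][j] for j in range(n)] for i in range(n)]

# Check A = U·D·Vᵀ with V = X (Equation (24))
recon = [[sum(U[i][k] * s[k] * X[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
err = max(abs(recon[i][j] - A[i][j]) for i in range(n) for j in range(n))
assert err < 1e-8 and all(sv >= 0.0 for sv in s)
```

The reconstruction check works because A = P∗(P∗A) = Σ_i λ_i (P∗X_i)X_iᵀ = Σ_i s_i U_i V_iᵀ.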

In this section, the illustrative example solved in Section 3 is resolved by the discrete-time SVD algorithm. For a discrete decomposition, the original continuous function e^{−(t−τ)} is first discretized into 100 discrete values, from which a Toeplitz matrix is formed, and the validated SVD process presented in Equations (22) to (26) is applied to find U, V and D for that Toeplitz matrix. Since the SVD algorithm has been implemented in Mathcad, the entire process is solved using that software and the detailed code is presented below, where the embedded figure shows the discretized function y = e^{−t}, displayed through 100 points.

f1 (Equation (18)) is compared with U⟨i⟩, the ith column extracted from the matrix U, for i from 0 to 3, where U is obtained following the discrete-time SVD process presented above.

In this section, the discrete-time SVD is performed to solve the problem raised in Equation (2) in a discrete system. The solution shows that the presented method accurately yields the results, namely the ith column of the SVD matrix U. The critical technique employed here is to flip the lower triangular Toeplitz matrix upside down, converting it to a Hankel matrix through a permutation matrix. Compared to the Toeplitz matrix, which is usually used to model the IRF, the Hankel matrix is easier to decouple, making the computation simpler and more efficient. The same "flipping upside-down" technique is also applied in the continuous-time SVD (the term f(j, T − t) in Equation (1)) to simplify the solution process.

Section 2 presents the continuous-time SVD for the Green's function e^{−(t−τ)}. In reality, the continuous-time SVD can be used to process a wealth of IRFs, and in this section the method of Section 2 is pushed one step forward by discussing the continuous-time SVD of e^{−a(t−τ)}.

Returning to Equation (1), if h(t, τ) = e^{−a(t−τ)}, where a is a positive real number, Equation (2) becomes:

∫_0^t e^{aτ} f(j, τ) dτ = e^{at} D_j f(j, T − t)    (27)

Let t_p = at and τ_p = aτ; then dτ_p = a dτ and the integration limits become 0 ≤ τ_p ≤ t_p. Substituting these into the integral Equation (27), we obtain:

(1/a) ∫_0^{t_p} e^{τ_p} f(j, τ_p/a) dτ_p = e^{t_p} D_j f(j, (T·a − t_p)/a)    (28)

Let f(j, t) = sin(ωt); Equation (28) then becomes:

∫_0^{t_p} e^{τ_p} sin(ω τ_p/a) dτ_p = a e^{t_p} D_j sin(ω(T·a − t_p)/a)    (29)

Now define ω_p = ω/a, T_p = T·a and D_p = D·a; Equation (29) can then be simplified as:

∫_0^{t_p} e^{τ_p} sin(ω_p τ_p) dτ_p = e^{t_p} D_p sin[ω_p(T_p − t_p)]    (30)

Equation (30) has the same form as Equation (2) and can therefore be solved following the same approach discussed in Section 2 (Equations (3) to (17)).
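The change of variables itself is easy to sanity-check numerically. The Python sketch below (the test values for a, ω and t are arbitrary assumptions) verifies that both sides of the substitution leading to Equation (28) agree:

```python
import math

def simpson(fn, lo, hi, steps=10000):
    # composite Simpson quadrature on [lo, hi]
    h = (hi - lo) / steps
    s = fn(lo) + fn(hi)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * fn(lo + k * h)
    return s * h / 3

a, omega, t = 2.0, 0.7, 1.5                  # arbitrary test values
f = lambda tau: math.sin(omega * tau)        # stand-in for f(j, τ)

left = simpson(lambda tau: math.exp(a * tau) * f(tau), 0.0, t)
# after τ_p = aτ: (1/a) ∫_0^{at} e^{τ_p} f(τ_p / a) dτ_p
right = simpson(lambda tp: math.exp(tp) * f(tp / a), 0.0, a * t) / a
assert abs(left - right) < 1e-8
```

The identity holds for any integrand, which is why the scaled problem inherits the Section 2 solution once ω_p, T_p and D_p absorb the factor a.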

This study presented and validated continuous-time and discrete-time SVDs for the impulse response function, a special kind of Green's function. For both the continuous-time and discrete-time SVD, the "flipping upside-down" strategy is employed to simplify the solution process. An illustrative example confirms the accuracy and efficiency of the presented methods. Even though this paper mainly focuses on the SVD of one particular Green's function, the solution for other Green's functions can follow a similar approach. Compared to existing methods, one advantage of the presented method is that the time-domain output can be determined directly from the convolution of the input with the IRF without going through Laplace and inverse Laplace transforms. Considering the diversity of impulse response functions involved in multidisciplinary engineering problems, the method presented in this study can be extensively developed for broad applications. In the future, the SVD approach can be further modified to absorb some features of the Adomian decomposition method [

The authors declare no conflicts of interest regarding the publication of this paper.

Luck, R. and Liu, Y. (2021) Continuous-Time and Discrete-Time Singular Value Decomposition of an Impulse Response Function. Applied Mathematics, 12, 336-347. https://doi.org/10.4236/am.2021.124024