4.2. Linear Algebraic Methods¶
These inversion algorithms are based on a decomposition of the geometry matrix and an emissivity-independent regularisation matrix \(\mathbf{H}\). This matrix can be computed using the following formula:
where \(\mathbf{D}_{R/z}\) are the discrete derivative matrices normalised by the distance between node centers. Various numerical schemes can be used to compute the derivative matrices. The computation of the regularisation matrix is simpler than in MFR and does not include the nonlinear term. This enables the regularisation parameter to be estimated directly from the decomposed matrices.
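As an illustration, a minimal NumPy sketch of this construction is given below. It assumes the isotropic combination \(\mathbf{H} = \mathbf{D}_R^\mathrm{T}\mathbf{D}_R + \mathbf{D}_z^\mathrm{T}\mathbf{D}_z\), simple forward differences, and a regular grid flattened in row-major order; the function names and grid layout are illustrative only.

```python
import numpy as np


def derivative_matrices(nr, nz, dr, dz):
    """Forward-difference derivative matrices for a regular nr x nz grid,
    with nodes flattened in row-major order (z index fastest). Illustrative."""
    # 1D forward-difference operators normalised by the node spacing
    d_r1 = (np.eye(nr, k=1) - np.eye(nr)) / dr
    d_z1 = (np.eye(nz, k=1) - np.eye(nz)) / dz
    # lift the 1D operators to the flattened 2D grid via Kronecker products
    dr_mat = np.kron(d_r1, np.eye(nz))
    dz_mat = np.kron(np.eye(nr), d_z1)
    return dr_mat, dz_mat


def regularisation_matrix(dr_mat, dz_mat):
    """Emissivity-independent regularisation matrix, assuming the
    isotropic form H = D_R^T D_R + D_z^T D_z."""
    return dr_mat.T @ dr_mat + dz_mat.T @ dz_mat
```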
All methods share the same baseline, the Phillips-Tikhonov regularisation scheme. First, a decomposition using the SVD or GEV scheme is performed, resulting in multiple matrices that can be modified to fit the pattern needed to solve the Phillips-Tikhonov scheme using a series expansion, as shown below. The resulting emissivity \(\mathbf{g}(\alpha)\) depends on the regularisation parameter and can be computed using the following formula for a reconstruction with \(N_\mathrm{n}\) nodes and \(N_\mathrm{c}\) channels [L1]
where \(\mathbf{U} \in \mathbb{R}^{N_\mathrm{n},N_\mathrm{c}}\), \(\mathbf{S} \in \mathbb{R}^{N_\mathrm{c},N_\mathrm{c}}\) is a diagonal matrix, \(\tilde{\mathbf{V}} \in \mathbb{R}^{N_\mathrm{c}, N_\mathrm{c}}\), and \(k_{i}(\alpha)\) are coefficients that depend on the value of the regularisation parameter and are computed by the following equation
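Written out in code, the series expansion can be evaluated as sketched below. The sketch assumes Tikhonov-type filter factors \(k_i(\alpha) = S_{ii}^2/(S_{ii}^2 + \alpha)\) and a convention in which the columns of `u` live in channel space and the columns of `v_tilde` in node space; the exact shapes depend on which matrix is decomposed.

```python
import numpy as np


def series_expansion(alpha, signal, u, s, v_tilde):
    """Evaluate g(alpha) from the decomposed matrices.

    signal  -- measured channel data f, shape (Nc,)
    u       -- channel-space vectors, one per column
    s       -- diagonal of S, shape (Nc,)
    v_tilde -- node-space vectors, one per column
    Assumes filter factors k_i(alpha) = s_i**2 / (s_i**2 + alpha).
    """
    k = s**2 / (s**2 + alpha)        # coefficients k_i(alpha)
    coeffs = k * (u.T @ signal) / s  # k_i(alpha) * (u_i . f) / S_ii
    return v_tilde @ coeffs          # sum of the series over i = 1..Nc
```

Because only the coefficients \(k_i(\alpha)\) change with \(\alpha\), the reconstruction can be re-evaluated for many values of the regularisation parameter without repeating the decomposition.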
4.2.1. Singular Value Decomposition¶
Based on [L1]. As the name suggests, this method uses singular value decomposition to obtain the matrices used in equation (1). The first step is to compute a Cholesky decomposition of the regularisation matrix \(\mathbf{H}\)
A lower triangular matrix \(\mathbf{L}\) is obtained and used together with the geometry matrix to define an auxiliary matrix \(\mathbf{A}\)
The next step is to compute the singular value decomposition of the auxiliary matrix \(\mathbf{A}\).
The decomposed matrix \(\mathbf{V}\) is then modified for use in the series expansion (1).
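The steps above can be sketched with NumPy/SciPy as follows, assuming \(\mathbf{H} = \mathbf{L}\mathbf{L}^\mathrm{T}\), \(\mathbf{A} = \mathbf{T}\mathbf{L}^{-\mathrm{T}}\) and the back-transformation \(\tilde{\mathbf{V}} = \mathbf{L}^{-\mathrm{T}}\mathbf{V}\); decomposing \(\mathbf{A}\) or its transpose only swaps the roles of the factors, so this is one possible convention rather than the package's exact implementation.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular


def svd_decomposition(gmat, hmat):
    """Decomposition step of the SVD-based scheme (illustrative convention).

    gmat -- geometry matrix T, shape (Nc, Nn)
    hmat -- regularisation matrix H, shape (Nn, Nn), positive definite
    """
    # H = L L^T with L lower triangular
    low = cholesky(hmat, lower=True)
    # auxiliary matrix A = T L^{-T}, computed by a triangular solve
    a = solve_triangular(low, gmat.T, lower=True).T
    # economy-size SVD of A
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    # back-transform V for the series expansion: V~ = L^{-T} V
    v_tilde = solve_triangular(low.T, vt.T, lower=False)
    return u, s, v_tilde
```

The returned matrices can be passed directly to the `series_expansion` sketch above.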
4.2.2. Generalised Eigenvalue Decomposition¶
This method is an alternative to the SVD method described above. The computation of the matrices used in equation (1) is based on a generalised eigenvalue decomposition.
An implementation based on [L2] is described below. First, a symmetric matrix \(\mathbf{C}\) is defined from the geometry matrix as
This matrix is then used as the matrix investigated in the eigenvalue problem. The diagonal matrix used in equation (1) is determined directly by solving the generalised eigenvalue problem
Here the matrix \(\mathbf{V}\) contains the eigenvectors in its columns. Attention must be paid to the sorting of the eigenvalues, as the described algorithm expects the eigenvalues and the corresponding eigenvectors to be sorted from largest to smallest. This is stated explicitly because some numerical libraries return eigenvalues sorted from smallest to largest.
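For example, `scipy.linalg.eigh` solves the generalised problem and normalises the eigenvectors so that \(\mathbf{V}^\mathrm{T}\mathbf{H}\mathbf{V} = \mathbf{I}\), but it returns the eigenvalues in ascending order, so a descending reordering is needed (a small sketch; the helper name is illustrative):

```python
import numpy as np
from scipy.linalg import eigh


def sorted_gev(cmat, hmat):
    """Solve C v = lambda H v and return eigenpairs sorted from
    largest to smallest eigenvalue (scipy returns ascending order)."""
    eigvals, eigvecs = eigh(cmat, hmat)   # eigenvectors satisfy V.T @ H @ V = I
    order = np.argsort(eigvals)[::-1]     # reorder from largest to smallest
    return eigvals[order], eigvecs[:, order]
```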
The remaining matrices for use in the series expansion (1) are then calculated as
The exponentiation of the matrix \(\mathbf{D}\) is applied element-wise.
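Putting the pieces together, a possible sketch of the whole GEV decomposition step is shown below. It assumes \(\mathbf{S} = \mathbf{D}^{1/2}\) and \(\mathbf{U} = \mathbf{T}\mathbf{V}\mathbf{D}^{-1/2}\) restricted to the \(N_\mathrm{c}\) largest eigenvalues, and it reuses the `sorted_gev` helper from the previous sketch; as before, the names and conventions are illustrative.

```python
import numpy as np


def gev_decomposition(gmat, hmat):
    """Decomposition step of the GEV-based scheme (illustrative convention).

    gmat -- geometry matrix T, shape (Nc, Nn)
    hmat -- regularisation matrix H, shape (Nn, Nn), positive definite
    """
    nc = gmat.shape[0]
    cmat = gmat.T @ gmat                       # symmetric matrix C = T^T T
    eigvals, eigvecs = sorted_gev(cmat, hmat)  # see the previous sketch
    # at most the Nc largest generalised eigenvalues are nonzero (for Nc < Nn);
    # the remaining terms do not contribute to the series expansion
    eigvals, eigvecs = eigvals[:nc], eigvecs[:, :nc]
    s = np.sqrt(eigvals)                       # S = D^{1/2}, element-wise
    u = gmat @ eigvecs / s                     # U = T V D^{-1/2}
    return u, s, eigvecs                       # eigvecs take the role of V~
```

The result feeds into the same `series_expansion` sketch as the SVD route, with the coefficients \(k_i(\alpha)\) computed from \(\mathbf{S}\) in the same way.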
4.2.3. Implementation¶
The class structure is based on the algorithms proposed by T. Odstrcil in [L1]. However, they are currently implemented only for dense matrices, without sparse optimisation. If properly implemented using sparse matrices, this class of algorithms can provide fast results with reasonable accuracy.
Several decomposition schemes can be used; therefore, a base class for these algorithms called Algebraic was created. The regularisation parameter can be found using a variety of methods, ranging from simple to more complex. Currently, only a fast estimate based on the decomposition is implemented, in the subclass FastAlgebraic.
The singular value decomposition (SVD) and generalised eigenvalue decomposition (GEV) methods are implemented as subclasses of FastAlgebraic.
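An illustrative outline of such a hierarchy is sketched below; only the names Algebraic and FastAlgebraic come from the description above, and all method names, signatures, and the placeholder parameter estimate are hypothetical rather than the package's actual API.

```python
import numpy as np


class Algebraic:
    """Base class: stores the dense geometry and regularisation matrices
    and evaluates the series expansion (hypothetical sketch)."""

    def __init__(self, gmat, hmat):
        self.gmat = np.asarray(gmat)
        self.hmat = np.asarray(hmat)
        self.u = self.s = self.v_tilde = None

    def decompose(self):
        raise NotImplementedError   # supplied by the decomposition subclasses

    def reconstruct(self, alpha, signal):
        # series expansion with Tikhonov-type filter factors (assumed form)
        k = self.s**2 / (self.s**2 + alpha)
        return self.v_tilde @ (k * (self.u.T @ signal) / self.s)


class FastAlgebraic(Algebraic):
    """Adds a fast estimate of the regularisation parameter derived from
    the decomposed matrices (placeholder heuristic, not the package's)."""

    def estimate_alpha(self):
        return float(np.median(self.s) ** 2)


class SvdAlgebraic(FastAlgebraic):
    def decompose(self):
        # Cholesky + SVD route of section 4.2.1 (see the earlier sketch)
        self.u, self.s, self.v_tilde = svd_decomposition(self.gmat, self.hmat)


class GevAlgebraic(FastAlgebraic):
    def decompose(self):
        # generalised eigenvalue route of section 4.2.2 (see the earlier sketch)
        self.u, self.s, self.v_tilde = gev_decomposition(self.gmat, self.hmat)
```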