4.1. Minimum Fisher Regularization¶
The Minimum Fisher information Regularisation (MFR) algorithm is based on [M1], with further improvements described in [M2], and builds on other work such as [M3].
This algorithm relies on Tikhonov regularisation to compensate for the under-determined nature of the tomography problem and consists of two loops. The inner loop searches for the optimal value of the regularisation parameter based on the intermediate result and the measured signal. The outer loop regularises the problem using the optimal regularisation parameter found by the inner loop, which determines the strength of the a-priori information imposed by the smoothing matrix. Because the smoothing matrix is built from the intermediate result, the method is nonlinear.
The inner loop searches for the regularisation parameter \(\alpha\) iteratively. In each iteration the quality of the reconstruction is assessed by a Pearson \(\chi^2\) test. The residuum \(\chi^2\) is determined by the following equation:

\[\chi^2 = \frac{1}{N_\mathrm{c}} \sum_{i=1}^{N_\mathrm{c}} \left( \frac{\left(\mathbf{T} \cdot \mathbf{g} - \mathbf{f}\right)_i}{\sigma_i} \right)^2\]
where \(N_\mathrm{c}\) is the number of channels, \(\sigma_i\) is the estimated noise of the i-th channel, and \(\mathbf{T}\), \(\mathbf{g}\) and \(\mathbf{f}\) denote the geometry matrix, the reconstructed emissivity and the measured signal of the tomography problem base equation [L2]. The optimal value of \(\chi^2\) is considered to be one; in that case the error of the reconstruction is approximately the same as the estimated error of the measured data.
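As an illustration, a minimal sketch of this residuum, assuming the geometry matrix, trial reconstruction, signals and noise estimates are dense NumPy arrays (the names below are illustrative, not the package API):

```python
import numpy as np

def chi2(T, g, f, sigma):
    """Normalised Pearson chi-squared residuum of a trial reconstruction.

    T     : (N_c, N_nodes) geometry matrix
    g     : (N_nodes,) trial emissivity vector
    f     : (N_c,) measured signals
    sigma : (N_c,) estimated noise of each channel
    """
    residual = (T @ g - f) / sigma
    return np.sum(residual ** 2) / f.size
```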
The outer loop uses the following equation for the inversion. It can be obtained by applying Tikhonov regularisation to the tomography problem base equation [L2]:

\[\mathbf{g} = \left(\mathbf{T}^{\mathrm{T}} \cdot \mathbf{T} + \alpha \mathbf{H}\right)^{-1} \cdot \mathbf{T}^{\mathrm{T}} \cdot \mathbf{f} \tag{1}\]
where \(\alpha\) is the regularisation parameter and \(\mathbf{H}\) is the regularisation matrix computed as:

\[\mathbf{H} = c_1 \mathbf{D}_1^{\mathrm{T}} \cdot \mathbf{w} \cdot \mathbf{D}_1 + c_2 \mathbf{D}_2^{\mathrm{T}} \cdot \mathbf{w} \cdot \mathbf{D}_2\]
where \(\mathbf{D}_1\), \(\mathbf{D}_2\) are matrices of numerical derivatives along locally orthogonal directions, \(c_1\), \(c_2\) are anisotropy coefficients and \(\mathbf{w}\) is a diagonal matrix with weights for the individual nodes. Elements of the weight matrix are computed as \(1/\mathbf{g}_0\) for non-zero elements of the intermediate result \(\mathbf{g}_0\); otherwise a predefined value \(w_{\mathrm{max}}\) is used. The sum of the anisotropy coefficients should be equal to 1. Each value of the regularisation parameter is tested using the Pearson \(\chi^2\) test. The optimal value of the regularisation parameter is then used to update the regularisation matrix, and the inner loop repeats until the maximum number of outer cycles is reached.
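A compact sketch of the two nested loops under simplifying assumptions (dense NumPy arrays, a plain bisection on \(\log_{10}\alpha\), non-positive nodes of the intermediate result treated with \(w_{\mathrm{max}}\)); the derivative matrices and all names are illustrative, not the actual implementation:

```python
import numpy as np

def regularisation_matrix(D1, D2, g0, c1=0.5, c2=0.5, w_max=1.0):
    """Build H = c1*D1^T w D1 + c2*D2^T w D2 with node weights 1/g0."""
    # weights: 1/g0 for positive nodes, w_max elsewhere
    # (non-positive nodes are treated like zeros in this sketch)
    w = np.where(g0 > 0, 1.0 / np.where(g0 > 0, g0, 1.0), w_max)
    W = np.diag(w)
    return c1 * D1.T @ W @ D1 + c2 * D2.T @ W @ D2

def mfr_sketch(T, f, sigma, D1, D2, n_outer=4, n_inner=30, bounds=(-15.0, 5.0)):
    """Outer loop: rebuild H from the intermediate result.
    Inner loop: bisect log10(alpha) so that the Pearson chi2 approaches 1."""
    g = np.ones(T.shape[1])                  # flat initial guess for the weights
    for _ in range(n_outer):
        H = regularisation_matrix(D1, D2, g)
        lo, hi = bounds                      # chi2 grows with the smoothing strength
        for _ in range(n_inner):
            alpha = 10.0 ** (0.5 * (lo + hi))
            g = np.linalg.solve(T.T @ T + alpha * H, T.T @ f)
            chi2 = np.sum(((T @ g - f) / sigma) ** 2) / f.size
            if chi2 > 1.0:
                hi = 0.5 * (lo + hi)         # too smooth -> decrease alpha
            else:
                lo = 0.5 * (lo + hi)         # fitting noise -> increase alpha
    return g
```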
4.1.1. Implementation¶
There are two versions of the MFR algorithm currently implemented. One relies on the scipy.sparse library, which is more commonly available but slower. The other uses the CHOLMOD Cholesky decomposition from the scikit-sparse library for solving the regularised problem (1). This method is significantly faster for large grids.
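As a rough sketch of what the two backends amount to (assuming sparse \(\mathbf{T}\) and \(\mathbf{H}\); the function names below are illustrative, not the package interface):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_scipy(T, f, H, alpha):
    """Generic sparse solve of (T^T T + alpha*H) g = T^T f."""
    A = (T.T @ T + alpha * H).tocsc()
    return spsolve(A, T.T @ f)

def solve_cholmod(T, f, H, alpha):
    """Same system solved via the CHOLMOD Cholesky factorisation,
    typically much faster for large grids."""
    from sksparse.cholmod import cholesky
    A = (T.T @ T + alpha * H).tocsc()
    factor = cholesky(A)
    return factor(T.T @ f)
```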
4.1.2. References¶
[M1]
[M2]
[M3]