
Nonconvex Total Generalized Variation Model for Image Inpainting

Abstract

It is a challenging task to prevent the staircase effect while simultaneously preserving sharp edges in image inpainting. For this purpose, we present a novel nonconvex extension model that closely incorporates the advantages of total generalized variation and edge-enhancing nonconvex penalties. This improvement contributes to a more natural restoration that exhibits smooth transitions without penalizing fine details. To efficiently seek the optimal solution of the resulting variational model, we develop a fast primal-dual method combined with the iteratively reweighted $\ell_1$ algorithm. Several experimental results, with respect to visual effects and restoration accuracy, show the excellent image inpainting performance of our proposed strategy over existing powerful competitors.

1. Introduction

Image inpainting is an important and challenging topic in image processing and computer vision. It plays a significant role in artwork restoration, redundant object removal, image segmentation and video processing.

The objective of image inpainting is to reconstruct the missing or damaged portions of an image. To solve this inverse problem, numerous models have emerged based on variational, partial differential equation (PDE), wavelet, and Bayesian methods. The term digital inpainting was initially introduced by Bertalmio et al. (2000), who proposed a third-order nonlinear PDE inpainting approach. Subsequently, Chan and Shen (2001) developed a PDE model based on curvature driven diffusion, followed by the total variation (TV) model (Chan and Shen, 2002). The authors in Masnou and Morel (1998) and Chan et al. (2002) investigated Euler's elastica and curvature based variational inpainting models. Moreover, considering inpainting in the transformed domain, the works (Chan et al., 2006, 2009) discussed TV minimization wavelet domain models for image inpainting. Among these models, one of the remarkable variational solvers, based on Rudin et al. (1992), is the TV inpainting model

(1)
\[
\min_u \Big\{ \mathrm{TV}(u) + \frac{\lambda}{2} \int_{\Omega\setminus D} (u-f)^2 \, dx \Big\},
\]
where $\Omega$ denotes the complete image domain, $D\subset\Omega$ is the missing or damaged region to be inpainted, $f$ and $u$ are the degraded image and the unknown true image respectively, $\mathrm{TV}(u)=\int_\Omega |\nabla u|\,dx$ is the total variation functional, and $\lambda$ is a tunable regularization parameter.
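As a point of reference for the discrete setting used later, the TV term can be sketched with forward differences; the function name and the replicated-boundary convention are our own illustrative choices, not fixed by the paper:

```python
import numpy as np

def total_variation(u):
    """Discrete isotropic TV(u): sum of |grad u| over all pixels,
    using forward differences with replicated (Neumann) boundaries."""
    ux = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    uy = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    return float(np.sqrt(ux**2 + uy**2).sum())
```

For a piecewise-constant step image this counts exactly the jump height per row, which is why TV favours piecewise-constant solutions.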

As demonstrated in various applications, the TV framework (Rudin et al., 1992; Chan and Shen, 2002; Prasath, 2017) preserves geometric features well, but the obtained results often suffer from piecewise-constant artifacts in smooth regions. To eliminate this unexpected staircase effect, several powerful regularization techniques, such as higher-order derivatives (Chan et al., 2000; Lysaker et al., 2003), nonlocal TV (Gilboa and Osher, 2008; Liu and Huang, 2014) and total generalized variation (TGV) (Bredies et al., 2010; Knoll et al., 2011; Liu, 2018, 2019), have been widely researched with great success. The concept of the TGV regularizer was originally introduced by Bredies et al. (2010). In practical applications, since images can be well approximated by affine functions, second-order TGV models are particularly favoured by numerous researchers. Applied to image inpainting, the TGV regularized model can be written as

(2)
\[
\min_u \Big\{ \mathrm{TGV}_\alpha^2(u) + \frac{\lambda}{2} \int_{\Omega\setminus D} (u-f)^2 \, dx \Big\},
\]
where the weight $\alpha=(\alpha_0,\alpha_1)$, with $\alpha_0$, $\alpha_1$ being two positive parameters. This technique reduces blocky artifacts efficiently, but it sometimes causes edge blurring.

With the aim of maintaining sharp and neat edges, the studies (Black and Rangarajan, 1996; Roth and Black, 2009) demonstrate that introducing nonconvex potential functions is the right choice. Thus, building on the TV model, nonconvex TV regularizer methods (Nikolova et al., 2010, 2013; Bauss et al., 2013) have attracted much attention and become a hot research issue. This solver has the advantage of preserving sharp discontinuities, but it leads to serious staircase artifacts in smooth regions, even more so than TV based techniques. In view of the foregoing, the preliminary articles (Ochs et al., 2013, 2015; Zhang et al., 2017), which combine the TGV regularizer with a nonconvex prior, have achieved reasonably smooth denoising results with sharp discontinuities.

As for image inpainting, this paper aims to overcome the shortcomings of existing inpainting models and constructs a novel nonconvex TGV (NTGV) regularization strategy. By closely combining the advantages of the TGV regularizer and an edge-preserving nonconvex function, the developed scheme is formulated in the following concise form

(3)
\[
\min_u \Big\{ \mathrm{NTGV}_\alpha^2(u) + \frac{\lambda}{2} \int_{\Omega\setminus D} (u-f)^2 \, dx \Big\}.
\]
It is noteworthy that the concrete formulation will be detailed in the next section.

The main contributions of this article are as follows. First, we propose a novel nonconvex regularization model that closely integrates the strengths of the TGV regularizer and a nonconvex logarithmic function. The use of nonconvex penalizers in the TGV seminorm helps to obtain a more realistic image with sharp edges and no staircasing. Secondly, to optimize the resulting nonconvex model, this paper presents in detail a modified primal-dual framework combined with the iteratively reweighted minimization algorithm. All numerical simulations consistently illustrate the superiority of the introduced method for image inpainting over related efficient solvers, with respect to both visual and measurable comparisons.

Finally, we give a brief outline of the following sections. Section 2 is devoted to an overview of some basic mathematical preliminaries and the proposal of a new nonconvex inpainting model. In Section 3, we describe in detail the derivation of the designed optimization algorithm: the primal-dual method. Several experimental simulations and comparisons, detailed in Section 4, demonstrate the outstanding performance of the proposed strategy. We end this article with some concluding remarks in Section 5.

2. Proposed Model

In this section, we first give a brief overview of several necessary definitions and notations, and then put forward a new nonconvex image inpainting model. For later convenience, we begin with the definition of total variation.

Let $\Omega\subset\mathbb{R}^d$ denote a bounded open domain, and let $u\in L^1(\Omega)$ be a real-valued function on $\Omega$. Then the total variation of $u$ is defined by

(4)
\[
\mathrm{TV}(u)=\sup\Big\{ \int_\Omega u \,\mathrm{div}\,\vartheta \, dx \;\Big|\; \vartheta\in C_c^1(\Omega,\mathbb{R}^d),\ \|\vartheta\|_\infty\leqslant 1 \Big\}.
\]
As a generalization of TV, the second-order TGV takes the following form
(5)
\[
\mathrm{TGV}_\alpha^2(u)=\sup\Big\{ \int_\Omega u \,\mathrm{div}^2\vartheta \, dx \;\Big|\; \vartheta\in C_c^2(\Omega,S^{d\times d}),\ \|\vartheta\|_\infty\leqslant\alpha_0,\ \|\mathrm{div}\,\vartheta\|_\infty\leqslant\alpha_1 \Big\},
\]
where $\alpha=(\alpha_0,\alpha_1)>0$ stands for a positive weight, and $S^{d\times d}$ denotes the space of all symmetric $d\times d$ tensors. The divergence operators are defined by $(\mathrm{div}\,\vartheta)_i=\sum_{j=1}^d \frac{\partial \vartheta_{ij}}{\partial x_j}$, $1\leqslant i\leqslant d$, and $\mathrm{div}^2\vartheta=\sum_{i,j=1}^d \frac{\partial^2 \vartheta_{ij}}{\partial x_i \partial x_j}$, and the infinity norms by
\[
\|\vartheta\|_\infty=\sup_{x\in\Omega}\Big(\sum_{i,j=1}^d |\vartheta_{ij}(x)|^2\Big)^{1/2},\qquad
\|\mathrm{div}\,\vartheta\|_\infty=\sup_{x\in\Omega}\Big(\sum_{i=1}^d |(\mathrm{div}\,\vartheta)_i(x)|^2\Big)^{1/2}.
\]
More details regarding the concept of TGV are reported in Bredies et al. (2010). The primal formulation of the second-order TGV can be written as
(6)
\[
\mathrm{TGV}_\alpha^2(u)=\min_v \Big\{ \alpha_1 \int_\Omega |\nabla u - v|\,dx + \alpha_0 \int_\Omega |\varepsilon(v)|\,dx \Big\},
\]
where $v=(v_1,v_2)^T$ and $\varepsilon(v)=\frac{1}{2}(\nabla v + \nabla v^T)$ represents the symmetrized derivative. More explicitly, the operators $\nabla u$ and $\varepsilon(v)$ have the forms
\[
\nabla u=\begin{pmatrix} \partial_x u \\ \partial_y u \end{pmatrix},\qquad
\varepsilon(v)=\begin{pmatrix} \partial_x v_1 & \frac{1}{2}(\partial_y v_1 + \partial_x v_2) \\ \frac{1}{2}(\partial_y v_1 + \partial_x v_2) & \partial_y v_2 \end{pmatrix}.
\]
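A minimal discrete sketch of the operators $\nabla u$ and $\varepsilon(v)$ above, using forward differences with replicated boundaries (a convention of ours; the paper does not fix a discretization):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient; returns an array of shape (2, m, n)."""
    return np.stack([np.diff(u, axis=1, append=u[:, -1:]),   # d/dx
                     np.diff(u, axis=0, append=u[-1:, :])])  # d/dy

def sym_grad(v):
    """Symmetrized derivative eps(v) of a field v with shape (2, m, n).
    Returns (e11, e22, e12) stacked, since e21 = e12 by symmetry."""
    v1, v2 = v
    e11 = np.diff(v1, axis=1, append=v1[:, -1:])
    e22 = np.diff(v2, axis=0, append=v2[-1:, :])
    e12 = 0.5 * (np.diff(v1, axis=0, append=v1[-1:, :])
                 + np.diff(v2, axis=1, append=v2[:, -1:]))
    return np.stack([e11, e22, e12])
```

Storing only the three distinct entries of the symmetric matrix keeps the memory cost of $\varepsilon(v)$ at $3mn$ values rather than $4mn$.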
The regularizer (6), together with the fidelity term, leads to the TGV based image inpainting model
(7)
\[
\min_{u,v} \Big\{ \alpha_1 \int_\Omega |\nabla u - v|\,dx + \alpha_0 \int_\Omega |\varepsilon(v)|\,dx + \frac{\lambda}{2} \int_{\Omega\setminus D} (u-f)^2\,dx \Big\}.
\]

Furthermore, choosing a nonconvex potential function $F(|t|)=\log(1+\beta|t|)$ and applying it to the above TGV regularizer results in our nonconvex TGV inpainting model:

(8)
\[
\min_{u,v} \Big\{ \alpha_1 \int_\Omega \log\big(1+\beta|\nabla u - v|\big)\,dx + \alpha_0 \int_\Omega \log\big(1+\beta|\varepsilon(v)|\big)\,dx + \frac{\lambda}{2} \int_{\Omega\setminus D} (u-f)^2\,dx \Big\}
\]
with β being an adjustable weighting parameter.
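To see why the logarithmic potential is edge-preserving, note that $F$ grows sublinearly, so a large jump is penalized far less than under TV. A tiny sketch (the helper name is ours):

```python
import numpy as np

def log_penalty(t, beta):
    """Nonconvex potential F(|t|) = log(1 + beta*|t|) from model (8)."""
    return np.log1p(beta * np.abs(t))
```

Concavity on $[0,\infty)$ gives $F(a)+F(b)\geqslant F(a+b)$: splitting one large jump into two smaller ones is never cheaper, which is what protects sharp edges.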

3. Optimization Algorithm

This section presents our numerical algorithm in detail. It is tailored to the optimization problem (8) and combines the classical iteratively reweighted $\ell_1$ algorithm with the primal-dual technique.

For tackling nonconvex functions, the iteratively reweighted $\ell_1$ algorithm (Candès et al., 2008; Ochs et al., 2015) is a standard solver. With this method, solving our nonconvex model amounts to minimizing the following surrogate convex optimization problem

(9)
\[
\min_{u,v} \Big\{ \alpha_1 \big\| w_1^k (\nabla u - v) \big\|_1 + \alpha_0 \big\| w_0^k \, \varepsilon(v) \big\|_1 + \frac{\lambda}{2} \sum_{(i,j)\in\Omega\setminus D} (u_{i,j}-f_{i,j})^2 \Big\},
\]
where the two weights $w_1^k$ and $w_0^k$ are computed from the latest ($k$-th) iterate via the concise formulas
(10)
\[
w_1^k=\frac{\beta}{1+\beta|\nabla u^k - v^k|},\qquad
w_0^k=\frac{\beta}{1+\beta|\varepsilon(v^k)|}.
\]
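The reweighting step (10) is simple to implement pointwise. In this sketch the magnitudes $|\nabla u^k - v^k|$ and $|\varepsilon(v^k)|$ are taken as plain Euclidean norms over the stacked components, and the helper name is ours:

```python
import numpy as np

def irl1_weights(r, e, beta):
    """Weights of eq. (10): w = beta / (1 + beta*|.|), evaluated at the
    previous iterate. r ~ grad(u^k) - v^k with shape (2, m, n);
    e ~ eps(v^k) with shape (3, m, n)."""
    w1 = beta / (1.0 + beta * np.sqrt((r**2).sum(axis=0)))
    w0 = beta / (1.0 + beta * np.sqrt((e**2).sum(axis=0)))
    return w1, w0
```

Large gradients receive small weights, so the surrogate problem (9) penalizes established edges only weakly.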

To obtain a fast, globally optimal solution of (9), we resort to the popular primal-dual method proposed in Chambolle and Pock (2011) and Esser et al. (2010). This technique has shown superior capability in solving large-scale convex optimization problems in image processing and computer vision. Thanks to the Legendre–Fenchel transform, the nonsmooth problem (9) can be transformed into the following convex-concave saddle-point formulation

(11)
\[
\min_{u,v}\ \max_{p\in P,\ q\in Q} \Big\{ \langle \nabla u - v, p\rangle + \langle \varepsilon(v), q\rangle + \frac{\lambda}{2} \sum_{(i,j)\in\Omega\setminus D} (u_{i,j}-f_{i,j})^2 \Big\},
\]
with the two introduced dual variables $p$ and $q$. Their feasible sets are characterized by
(12)
\[
P=\big\{ p=(p_1,p_2)^T \,\big|\, \|p\|_\infty \leqslant \alpha_1 w_1^k \big\},
\]
(13)
\[
Q=\bigg\{ q=\begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix} \,\bigg|\, \|q\|_\infty \leqslant \alpha_0 w_0^k \bigg\},
\]
where the infinity norm of $p$ is defined as $\|p\|_\infty=\max_{i,j}|p_{i,j}|$ with $|p_{i,j}|=\sqrt{(p_1^{i,j})^2+(p_2^{i,j})^2}$, and a similar definition applies to the infinity norm of $q$.

First, the updates with respect to the dual variables $p$ and $q$ are formulated as
(14)
\[
p^{k+1}=P_P\big(p^k+\delta(\nabla\tilde{u}^k-\tilde{v}^k)\big),\qquad
q^{k+1}=P_Q\big(q^k+\delta\,\varepsilon(\tilde{v}^k)\big),
\]
where $P_P(t)$ is the Euclidean projection of $t$ onto the convex set $P$. For numerical computation, the projection operators $P_P$ and $P_Q$ take the forms
(15)
\[
P_P(\tilde{p}^k)=\frac{\tilde{p}^k}{\max\big(1,\,|\tilde{p}^k|/(\alpha_1 w_1^k)\big)},\qquad
P_Q(\tilde{q}^k)=\frac{\tilde{q}^k}{\max\big(1,\,|\tilde{q}^k|/(\alpha_0 w_0^k)\big)}.
\]
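Both projections in (15) are the same pointwise operation with different radii, so one helper suffices (the name is ours):

```python
import numpy as np

def project(p, radius):
    """Pointwise Euclidean projection onto the ball {|p_ij| <= radius},
    as in eq. (15). p has shape (c, m, n); radius is a scalar or an
    (m, n) array such as alpha1 * w1^k."""
    mag = np.sqrt((p**2).sum(axis=0))
    return p / np.maximum(1.0, mag / radius)
```

Points already inside the ball are left untouched, while points outside are rescaled back onto its boundary.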

Subsequently, we turn our attention to the primal variable $u$. Since the resolvent operator associated with the fidelity term has a simple quadratic structure, we have
(16)
\[
u^{k+1}=\begin{cases}
\dfrac{u^k+\tau\big(\mathrm{div}(p^{k+1})+\lambda f\big)}{1+\tau\lambda}, & \text{if } (i,j)\in\Omega\setminus D,\\[6pt]
u^k+\tau\,\mathrm{div}(p^{k+1}), & \text{if } (i,j)\in D.
\end{cases}
\]
Likewise, the solution for the primal variable $v$ is given by
(17)
\[
v^{k+1}=v^k+\tau\big(\mathrm{div}_h(q^{k+1})+p^{k+1}\big),
\]
where $\mathrm{div}$ and $\mathrm{div}_h$ denote the two divergence operators satisfying $\mathrm{div}=-\nabla^{\ast}$ and $\mathrm{div}_h=-\varepsilon^{\ast}$, with $A^{\ast}$ being the adjoint of $A$. This, together with the definition of divergence, leads to
(18)
\[
\mathrm{div}(p)=\partial_x p_1+\partial_y p_2,\qquad
\mathrm{div}_h(q)=\begin{pmatrix} \partial_x q_{11}+\partial_y q_{12} \\ \partial_x q_{21}+\partial_y q_{22} \end{pmatrix}.
\]
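The divergences in (18) can be discretized with backward differences so that, up to boundary terms, they act as negative adjoints of the forward-difference $\nabla$ and $\varepsilon$; the simplified zero-padded boundary handling below is an assumption of this sketch:

```python
import numpy as np

def div(p):
    """div(p) = dx p1 + dy p2 via backward differences, cf. eq. (18);
    zero-padding at the first row/column approximates the adjoint
    boundary conditions."""
    p1, p2 = p
    return (np.diff(p1, axis=1, prepend=np.zeros_like(p1[:, :1]))
            + np.diff(p2, axis=0, prepend=np.zeros_like(p2[:1, :])))

def div_h(q):
    """Row-wise divergence of a symmetric tensor field q, stored as
    (q11, q22, q12) with q21 = q12."""
    q11, q22, q12 = q
    d1 = (np.diff(q11, axis=1, prepend=np.zeros_like(q11[:, :1]))
          + np.diff(q12, axis=0, prepend=np.zeros_like(q12[:1, :])))
    d2 = (np.diff(q12, axis=1, prepend=np.zeros_like(q12[:, :1]))
          + np.diff(q22, axis=0, prepend=np.zeros_like(q22[:1, :])))
    return np.stack([d1, d2])
```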

Finally, given the relaxation parameter $\theta\in[0,1]$, the updates for $\tilde{u}^{k+1}$ and $\tilde{v}^{k+1}$ read as
(19)
\[
\tilde{u}^{k+1}=u^{k+1}+\theta(u^{k+1}-\tilde{u}^k),
\]
(20)
\[
\tilde{v}^{k+1}=v^{k+1}+\theta(v^{k+1}-\tilde{v}^k).
\]

Putting the above pieces together, we obtain a highly efficient primal-dual method for the resulting objective function. More precisely, starting with the initializations $u^0$, $\tilde{u}^0$, $v^0$, $\tilde{v}^0$, $p^0$, $q^0$, $\delta$, $\tau$ and $\theta$, the optimization problem (9) is solved according to the following framework

(21)
\[
\begin{cases}
p^{k+1}=P_P\big(p^k+\delta(\nabla\tilde{u}^k-\tilde{v}^k)\big),\\
q^{k+1}=P_Q\big(q^k+\delta\,\varepsilon(\tilde{v}^k)\big),\\
u^{k+1}=\dfrac{u^k+\tau\big(\mathrm{div}(p^{k+1})+\lambda f\big)}{1+\tau\lambda}\Big|_{\Omega\setminus D}+\big(u^k+\tau\,\mathrm{div}(p^{k+1})\big)\Big|_{D},\\
v^{k+1}=v^k+\tau\big(\mathrm{div}_h(q^{k+1})+p^{k+1}\big),\\
\tilde{u}^{k+1}=u^{k+1}+\theta(u^{k+1}-\tilde{u}^k),\\
\tilde{v}^{k+1}=v^{k+1}+\theta(v^{k+1}-\tilde{v}^k).
\end{cases}
\]
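The whole scheme, with the reweighting (10) wrapped around the inner primal-dual iterations, can be sketched end to end as below. The step sizes, loop counts, and the simplified forward/backward-difference operators are illustrative assumptions rather than the paper's tuned settings, and the extrapolation uses the standard Chambolle–Pock form:

```python
import numpy as np

def grad(u):
    return np.stack([np.diff(u, axis=1, append=u[:, -1:]),
                     np.diff(u, axis=0, append=u[-1:, :])])

def sym_grad(v):
    v1, v2 = v
    return np.stack([np.diff(v1, axis=1, append=v1[:, -1:]),
                     np.diff(v2, axis=0, append=v2[-1:, :]),
                     0.5 * (np.diff(v1, axis=0, append=v1[-1:, :])
                            + np.diff(v2, axis=1, append=v2[:, -1:]))])

def div(p):
    return (np.diff(p[0], axis=1, prepend=np.zeros_like(p[0][:, :1]))
            + np.diff(p[1], axis=0, prepend=np.zeros_like(p[1][:1, :])))

def div_h(q):
    q11, q22, q12 = q
    return np.stack([
        np.diff(q11, axis=1, prepend=np.zeros_like(q11[:, :1]))
        + np.diff(q12, axis=0, prepend=np.zeros_like(q12[:1, :])),
        np.diff(q12, axis=1, prepend=np.zeros_like(q12[:, :1]))
        + np.diff(q22, axis=0, prepend=np.zeros_like(q22[:1, :]))])

def project(p, radius):
    mag = np.sqrt((p**2).sum(axis=0))
    return p / np.maximum(1.0, mag / radius)

def ntgv_inpaint(f, known, lam=180.0, a0=1.0, a1=0.6, beta=0.8,
                 delta=1 / np.sqrt(12), tau=1 / np.sqrt(12), theta=1.0,
                 outer=3, inner=40):
    """NTGV inpainting sketch of scheme (21). `known` is a boolean mask
    that is True on Omega \\ D. Step sizes and loop counts are
    illustrative, not tuned."""
    u = f.copy()
    v = np.zeros((2,) + f.shape)
    ub, vb = u.copy(), v.copy()
    p = np.zeros_like(v)
    q = np.zeros((3,) + f.shape)
    w1 = np.full(f.shape, beta)
    w0 = np.full(f.shape, beta)
    for _ in range(outer):                     # IRL1 reweighting loop
        for _ in range(inner):                 # inner primal-dual loop
            p = project(p + delta * (grad(ub) - vb), a1 * w1)
            q = project(q + delta * sym_grad(vb), a0 * w0)
            u_old, v_old = u, v
            u = np.where(known,
                         (u + tau * (div(p) + lam * f)) / (1 + tau * lam),
                         u + tau * div(p))
            v = v + tau * (div_h(q) + p)
            ub = u + theta * (u - u_old)       # extrapolation step
            vb = v + theta * (v - v_old)
        # update the weights of eq. (10) from the latest iterate
        w1 = beta / (1 + beta * np.sqrt(((grad(u) - v)**2).sum(axis=0)))
        w0 = beta / (1 + beta * np.sqrt((sym_grad(v)**2).sum(axis=0)))
    return u
```

Note how the masked update mirrors (16): observed pixels are pulled toward the data $f$, while pixels in $D$ evolve purely under the regularizer.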

Regarding computational complexity, the updates of the dual variables $p$, $q$ and the primal variables $u$, $v$ are all linear, i.e. they need $O(mn)$ operations for an $m\times n$ image. Furthermore, similarly to the discussions in Chambolle and Pock (2011), the convergence properties of the proposed algorithm are also guaranteed.

4. Numerical Results

Our purpose in this section is to show visual and quantitative evaluations of the developed nonconvex strategy for image inpainting. We also compare the inpainting performance with several state-of-the-art convex counterparts, in terms of both visual quality and restoration accuracy. Note that the compared models are also solved using the primal-dual algorithm. All experimental simulations are implemented in MATLAB R2011b running on a PC with an Intel(R) Core(TM) i5 CPU at 3.20 GHz and 4 GB of memory under Windows 7.

The steps $\delta$, $\tau$ and the parameter $\theta$ used in our numerical experiments are chosen as $L=12$, $\delta=10/L$, $\tau=0.1/L$ and $\theta=1$; this setting usually results in good convergence. The iterations of all tested methods are terminated when the condition $\|u^{k+1}-u^k\|_2/\|u^k\|_2<3\times 10^{-4}$ is met. After recovering the image, the commonly used peak signal-to-noise ratio (PSNR) index is employed as a measure of restoration quality. The criterion is defined as

(22)
\[
\mathrm{PSNR}=10\log_{10}\bigg(\frac{255^2\,mn}{\|u-\tilde{u}\|_2^2}\bigg),
\]
with $u$ and $\tilde{u}$ representing the clean and recovered images respectively, and $m\times n$ being the size of the image. Meanwhile, we use Pratt's figure of merit (FOM) criterion (Pratt, 2001) to evaluate the edge-preserving ability of the different approaches. Besides, the structural similarity (SSIM) (Wang et al., 2004) and feature similarity (FSIM) (Zhang et al., 2011) indexes are also employed to assess image structure information. Generally speaking, the larger the PSNR, FOM, SSIM and FSIM values, the better the performance.
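Equation (22) in code form, for 8-bit images (the function name is ours):

```python
import numpy as np

def psnr(clean, restored):
    """PSNR of eq. (22): 10*log10(255^2 * m * n / ||u - u_tilde||_2^2)."""
    err = np.asarray(clean, float) - np.asarray(restored, float)
    return 10.0 * np.log10(255.0**2 * err.size / (err**2).sum())
```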

Fig. 1

Inpainting results obtained by using three different models. (a) original image, (b1)–(b2) damaged images with 30%, 50% missing lines, (c1)–(c2) TV model, (d1)–(d2) TGV model, (e1)–(e2) our scheme.


Figure 1 illustrates the efficiency of our model for image inpainting compared to two recently developed methods, i.e. the TV and TGV based convex models. The original Peppers image is 256 × 256 pixels with 8-bit gray levels. Figures 1(b1) and 1(b2) represent the damaged images with 30% and 50% missing lines, where the lost lines have been chosen randomly. The second row of Fig. 1 shows the restorations for 30% lost lines by the three models, while the last row corresponds to the outcomes for 50% missing lines. Both damaged images are processed by our strategy with the same parameters $\lambda=180$, $\alpha_0=1$ and $\beta=0.8$; the parameter $\alpha_1$ is set to 0.6 and 0.8 for the two degraded images respectively. Moreover, we present in Table 1 the quantitative comparisons of the three methods.

To further show the superiority, we take the Lena image of size 256 × 256 pixels as an example for image inpainting. The original image is corrupted by an imposed text and by Gaussian noise with standard deviation $\sigma=10$, resulting in the degraded version shown in Fig. 2(b). As already mentioned, the inpainted results obtained by our proposed strategy are compared with those of the TV and TGV based methods. The visual comparison and measurable evaluation are provided in Fig. 2 and Table 2, respectively. Our strategy is implemented with the parameters $\lambda=19$ and $\alpha_1=0.8$; the values of the other parameters are exactly the same as in the previous experiment.

As far as high-resolution image inpainting is concerned, we select the Man image of size 1024 × 1024 as an example. The damaged image, presented in Fig. 3(b), is deteriorated by an imposed mask and by Gaussian noise with standard deviation 15. For this degradation, the performance of our nonconvex model is demonstrated by comparison with the TV and TGV methods. The visual inpainted outcomes are shown in the second row of Fig. 3, and the quantitative evaluations of the different approaches are detailed in Table 3. All parameters take the same values as in the second simulation, except that the coefficient $\lambda$ is changed to 16.

Table 1

Comparison of the recovered results obtained using three different methods on Peppers image.

Missing lines | Method | Iter | Time (s) | FOM | SSIM | FSIM
30% | TV | 85 | 1.1608 | 0.9572 | 0.9459 | 0.9580
30% | TGV | 142 | 4.2526 | 0.9509 | 0.9513 | 0.9668
30% | Ours | 179 | 5.6633 | 0.9587 | 0.9575 | 0.9701
50% | TV | 130 | 1.6496 | 0.9020 | 0.8985 | 0.9119
50% | TGV | 243 | 7.4789 | 0.8899 | 0.9137 | 0.9353
50% | Ours | 255 | 7.9073 | 0.9036 | 0.9165 | 0.9371
Fig. 2

Inpainting results obtained by using three different models. (a) original image, (b) damaged noisy image with Gaussian noise (σ = 10), (c) TV model, (d) TGV model, (e) our scheme.

Table 2

Comparison of the recovered results obtained using three different methods.

Figure | Method | Iter | Time (s) | PSNR | FOM | SSIM | FSIM
Lena | TV | 77 | 1.0410 | 28.7380 | 0.8943 | 0.8509 | 0.8851
Lena | TGV | 130 | 4.2595 | 28.9033 | 0.8982 | 0.8603 | 0.9062
Lena | Ours | 141 | 4.4141 | 29.4132 | 0.9176 | 0.8742 | 0.9173

Finally, we extend our developed strategy to colour image inpainting. The colour Turtle image has dimensions of 500 × 318 pixels. To generate the two test images, we add an imposed text to the clean image and then corrupt it by Gaussian noise with standard deviation 10 and 20, respectively. Figure 4 displays the inpainting performance of the TV and TGV based convex models and the proposed scheme. In the case of standard deviation 10, the new scheme is run with the experimental setup $\lambda=16$, $\alpha_1=0.6$, $\alpha_0=1$ and $\beta=1.3$; for standard deviation 20, we tune $\lambda$ to 7 and leave the other settings unchanged. Meanwhile, Table 4 shows the measurable comparison between the two efficient competitors and our novel strategy in terms of PSNR, FOM, SSIM and FSIM values.

To summarise, as can be observed from Figs. 1–4, the TV solver preserves edge details well but tends to yield typical staircase artifacts. Although the TGV model can suppress the blocky effect, it sometimes causes undesirable edge blurring. As expected, our results exhibit no staircasing in homogeneous regions while possessing sharp edges. Moreover, the quantitative comparisons listed in Tables 1–4, with their larger PSNR, FOM, SSIM and FSIM values, consistently demonstrate the outstanding performance of our proposed model for image inpainting over the compared methods.

Fig. 3

Inpainting results obtained by using three different models. (a) original image, (b) damaged noisy image with Gaussian noise (σ = 15), (c) TV model, (d) TGV model, (e) our scheme.

Table 3

Comparison of the recovered results obtained using three different methods.

Figure | Method | Iter | Time (s) | PSNR | FOM | SSIM | FSIM
Man | TV | 223 | 42.7678 | 27.1566 | 0.8149 | 0.7393 | 0.9410
Man | TGV | 437 | 217.4351 | 27.6535 | 0.8177 | 0.7595 | 0.9562
Man | Ours | 468 | 226.9293 | 27.7125 | 0.8430 | 0.7679 | 0.9580
Fig. 4

Inpainting results obtained by using three different models. (a) original image, (b1)–(b2) damaged noisy images with Gaussian noise (σ = 10, 20), (c1)–(c2) TV model, (d1)–(d2) TGV model, (e1)–(e2) our scheme.

Table 4

Comparison of the recovered results obtained using three different methods on Turtle image.

Noise level | Model | Iter | Time (s) | PSNR | FOM | SSIM | FSIM
σ=10 | TV | 42 | 2.8932 | 34.1190 | 0.8753 | 0.9096 | 0.9058
σ=10 | TGV | 71 | 10.8608 | 35.2567 | 0.9073 | 0.9280 | 0.9326
σ=10 | Ours | 97 | 15.1365 | 35.5373 | 0.9196 | 0.9315 | 0.9376
σ=20 | TV | 43 | 3.1285 | 32.4641 | 0.8368 | 0.8772 | 0.8821
σ=20 | TGV | 77 | 11.9358 | 32.6675 | 0.8416 | 0.8793 | 0.8927
σ=20 | Ours | 101 | 15.8561 | 32.9568 | 0.8736 | 0.8807 | 0.9039

5. Conclusion

By introducing a nonconvex potential function into the total generalized variation regularizer, this paper constructs a novel nonconvex model for image inpainting, aiming to capture small features and prevent staircase distortion. To optimize the resulting variational model, we develop in detail an efficient primal-dual method integrating the classical iteratively reweighted $\ell_1$ algorithm. Although the inclusion of the nonconvex penalty increases the amount of calculation, it adds little extra computational cost to the minimization. In terms of overcoming the staircase effect, maintaining edges, and improving restoration accuracy, extensive numerical experiments consistently illustrate the competitive superiority of the newly developed model for image inpainting.

References

1. Bauss, F., Nikolova, M., Steidl, G. (2013). Fully smoothed $\ell_1$-TV models: bounds for the minimizers and parameter choice. Journal of Mathematical Imaging and Vision, 48(2), 295–307.
2. Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C. (2000). Image inpainting. In: Computer Graphics, SIGGRAPH 2000, pp. 417–424.
3. Black, M.J., Rangarajan, A. (1996). On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. International Journal of Computer Vision, 19(1), 57–91.
4. Bredies, K., Kunisch, K., Pock, T. (2010). Total generalized variation. SIAM Journal on Imaging Sciences, 3(3), 492–526.
5. Candès, E.J., Wakin, M.B., Boyd, S.P. (2008). Enhancing sparsity by reweighted $\ell_1$ minimization. Journal of Fourier Analysis and Applications, 14(5–6), 877–905.
6. Chambolle, A., Pock, T. (2011). A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1), 120–145.
7. Chan, T., Marquina, A., Mulet, P. (2000). High-order total variation-based image restoration. SIAM Journal on Scientific Computing, 22(2), 503–516.
8. Chan, T.F., Shen, J. (2001). Nontexture inpainting by curvature driven diffusion (CDD). Journal of Visual Communication and Image Representation, 12(4), 436–449.
9. Chan, T.F., Shen, J. (2002). Mathematical models for local nontexture inpaintings. SIAM Journal on Applied Mathematics, 62(3), 1019–1043.
10. Chan, T.F., Kang, S.H., Shen, J. (2002). Euler's elastica and curvature-based inpainting. SIAM Journal on Applied Mathematics, 63(2), 564–592.
11. Chan, T.F., Shen, J., Zhou, H.M. (2006). Total variation wavelet inpainting. Journal of Mathematical Imaging and Vision, 25(1), 107–125.
12. Chan, R., Wen, Y., Yip, A. (2009). A fast optimization transfer algorithm for image inpainting in wavelet domains. IEEE Transactions on Image Processing, 18(7), 1467–1476.
13. Esser, E., Zhang, X., Chan, T. (2010). A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM Journal on Imaging Sciences, 3(4), 1015–1046.
14. Gilboa, G., Osher, S. (2008). Nonlocal operators with applications to image processing. Multiscale Modeling and Simulation, 7(3), 1005–1028.
15. Knoll, F., Bredies, K., Pock, T., Stollberger, R. (2011). Second order total generalized variation (TGV) for MRI. Magnetic Resonance in Medicine, 65(2), 480–491.
16. Liu, X. (2018). A new TGV-Gabor model for cartoon-texture image decomposition. IEEE Signal Processing Letters, 25(8), 1221–1225.
17. Liu, X. (2019). Total generalized variation and shearlet transform based Poissonian image deconvolution. Multimedia Tools and Applications, 78(13), 18855–18868.
18. Liu, X., Huang, L. (2014). A new nonlocal total variation regularization algorithm for image denoising. Mathematics and Computers in Simulation, 97, 224–233.
19. Lysaker, M., Lundervold, A., Tai, X.-C. (2003). Noise removal using fourth order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Transactions on Image Processing, 12(12), 1579–1590.
20. Masnou, S., Morel, J.M. (1998). Level lines based disocclusion. In: Proceedings of IEEE International Conference on Image Processing (ICIP 98), Vol. 3, pp. 259–263.
21. Nikolova, M., Ng, M.K., Tam, C.-P. (2010). Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Transactions on Image Processing, 19(12), 3073–3088.
22. Nikolova, M., Ng, M.K., Tam, C.-P. (2013). On $\ell_1$ data fitting and concave regularization for image recovery. SIAM Journal on Scientific Computing, 35(1), A397–A430.
23. Ochs, P., Dosovitskiy, A., Brox, T., Pock, T. (2013). An iterated $\ell_1$ algorithm for non-smooth non-convex optimization in computer vision. In: IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, pp. 1759–1766.
24. Ochs, P., Dosovitskiy, A., Brox, T., Pock, T. (2015). On iteratively reweighted algorithms for nonsmooth nonconvex optimization. SIAM Journal on Imaging Sciences, 8(1), 331–372.
25. Prasath, V.B.S. (2017). Quantum noise removal in X-ray images with adaptive total variation regularization. Informatica, 28(3), 505–515.
26. Pratt, W.K. (2001). Digital Image Processing, 3rd ed. John Wiley & Sons, New York.
27. Roth, S., Black, M.J. (2009). Fields of experts. International Journal of Computer Vision, 82(2), 205–229.
28. Rudin, L., Osher, S., Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D, 60(1–4), 259–268.
29. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P. (2004). Image quality assessment: from error measurement to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
30. Zhang, L., Zhang, L., Mou, X., Zhang, D. (2011). FSIM: a feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378–2386.
31. Zhang, H., Tang, L., Fang, Z., Xiang, C., Li, C. (2017). Nonconvex and nonsmooth total generalized variation model for image restoration. Signal Processing, 143, 69–85.