Vol. 36 (2016) No. 6                                                              J. of Math. (PRC)

1  Introduction

Let ū ∈ Rⁿ be an unknown sparse signal, let A ∈ R^{m×n} (m < n) be a measurement matrix, and let b ∈ R^m be the observed data, where b = Aū. Compressed sensing seeks to recover the sparse signal ū from this underdetermined linear system. The sparsest solution is characterized by the ℓ₀ minimization problem

    min_{u∈Rⁿ} ‖u‖₀   s.t.  Au = b,                                              (1.1)

where ‖u‖₀ counts the nonzero components of u. Problem (1.1) is combinatorial and NP-hard. Under suitable conditions, however, its solution coincides with that of a convex relaxation [2], namely the ℓ₁ minimization problem

    min_{u∈Rⁿ} ‖u‖₁   s.t.  Au = b,                                              (1.2)

which is the basic model of compressed sensing [3, 4]. Since A is typically large scale, it is expensive to solve (1.2) directly, and one considers instead the ℓ₁-regularized least squares problem (see e.g. [5])

    min_{u∈Rⁿ} λ‖u‖₁ + (1/2)‖Au − b‖²,                                           (1.3)

where λ > 0 is a regularization parameter balancing the sparsity of u against the data fidelity.
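As an aside (not part of the paper's method), problem (1.2) is equivalent to a linear program after the standard split u = p − q with p, q ≥ 0, and small instances can be solved directly; the toy matrix below and the use of SciPy's `linprog` are my own illustration:

```python
# Illustrative sketch: the l1 problem (1.2), min ||u||_1 s.t. Au = b,
# recast as a linear program via u = p - q, p >= 0, q >= 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
m, n = A.shape

# Variables x = [p; q]; minimize sum(p) + sum(q) subject to A(p - q) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
u = res.x[:n] - res.x[n:]
print(np.round(u, 6))  # u recovers the sparsest solution [0, 0, 1]
```

For this instance the ℓ₁ minimizer is unique and coincides with the sparsest solution, illustrating the relaxation (1.1) → (1.2); for large-scale A this LP approach is exactly what the paper avoids.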
Problem (1.3) is convex but nonsmooth, since ‖u‖₁ is not differentiable, so smooth unconstrained optimization methods cannot be applied to it directly. Following [6], Nesterov's smoothing technique replaces ‖u‖₁ by

    f_τ(u) = Σ_{i=1}^{n} H_τ(u(i)),    H_τ(y) = { y²/(2τ),    |y| ≤ τ,
                                                 { |y| − τ/2,  |y| > τ,

where H_τ is the Huber function and τ > 0 is the smoothing parameter [3]. The function f_τ(u) is continuously differentiable with Lipschitz continuous gradient, and

    ∂f_τ(u)/∂u(i) = { u(i)/τ,      |u(i)| ≤ τ,
                    { sign(u(i)),  |u(i)| > τ,     i = 1, 2, …, n,

where sign(x) = 1 if x > 0, 0 if x = 0, and −1 if x < 0. Let

    F(u) = λ f_τ(u) + (1/2)‖Au − b‖².

Instead of (1.3) we solve the smoothed problem

    min_{u∈Rⁿ} F(u).                                                             (1.4)

F is continuously differentiable, and ∇F(u) = λ∇f_τ(u) + Aᵀ(Au − b) is Lipschitz continuous. Throughout, g_k = ∇F(u_k) and y_{k−1} = g_k − g_{k−1} = ∇F(u_k) − ∇F(u_{k−1}).

A conjugate gradient method for (1.4) generates iterates

    u_{k+1} = u_k + α_k d_k,                                                     (1.5)

where α_k > 0 is the step length and the search direction is

    d_k = { −g_k,                k = 1,
          { −g_k + β_k d_{k−1},  k ≥ 2,                                          (1.6)

with β_k a scalar parameter. Classical choices include

    β_k^{HS} = g_kᵀ y_{k−1} / d_{k−1}ᵀ y_{k−1},    β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖²,
    β_k^{DY} = ‖g_k‖² / d_{k−1}ᵀ y_{k−1},          β_k^{PRP} = g_kᵀ y_{k−1} / ‖g_{k−1}‖²;

for their properties see [7, 8]. As observed in [8], the PRP and HS methods often perform better numerically than the FR and DY methods, but they need not generate descent directions.
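The smoothed objective F and its gradient can be written down directly from the formulas above; the following NumPy sketch (function and variable names are my own) implements them:

```python
import numpy as np

def f_tau(u, tau):
    """Nesterov/Huber smoothing of ||u||_1: the sum of H_tau(u(i))."""
    a = np.abs(u)
    return np.where(a <= tau, u**2 / (2.0 * tau), a - tau / 2.0).sum()

def grad_f_tau(u, tau):
    """Componentwise gradient: u(i)/tau if |u(i)| <= tau, else sign(u(i))."""
    return np.where(np.abs(u) <= tau, u / tau, np.sign(u))

def F(u, A, b, lam, tau):
    """Smoothed objective F(u) = lam * f_tau(u) + 0.5 * ||Au - b||^2."""
    r = A @ u - b
    return lam * f_tau(u, tau) + 0.5 * r @ r

def grad_F(u, A, b, lam, tau):
    """Gradient of F: lam * grad f_tau(u) + A^T (Au - b)."""
    return lam * grad_f_tau(u, tau) + A.T @ (A @ u - b)
```

The sandwich property f_τ(u) ≤ ‖u‖₁ ≤ f_τ(u) + nτ/2, which is used later in the convergence analysis, can be checked numerically with these functions.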
2  A modified HS conjugate gradient method

In this section we propose a modified HS method for (1.4). The search direction is defined by

    d_k = { −g_k,                            k = 1,
          { −g_k + β_k d_{k−1} − θ_k g_k,    k ≥ 2,                              (2.1)

where

    β_k = g_kᵀ y_{k−1} / d_{k−1}ᵀ y_{k−1},
    θ_k = (g_kᵀ y_{k−1})(g_kᵀ d_{k−1}) / ((d_{k−1}ᵀ y_{k−1}) ‖g_k‖²),
    y_{k−1} = g_k − g_{k−1} = ∇F(u_k) − ∇F(u_{k−1}).

A direct computation shows that the direction (2.1) satisfies the sufficient descent condition

    g_kᵀ d_k = −‖g_k‖².                                                          (2.2)

Indeed, for k ≥ 2,

    g_kᵀ d_k = −g_kᵀ g_k + (g_kᵀ y_{k−1} / d_{k−1}ᵀ y_{k−1}) g_kᵀ d_{k−1}
               − ((g_kᵀ y_{k−1})(g_kᵀ d_{k−1}) / ((d_{k−1}ᵀ y_{k−1}) ‖g_k‖²)) g_kᵀ g_k
             = −g_kᵀ g_k = −‖g_k‖²,

since the last two terms cancel. Thus d_k is always a descent direction of F at u_k. Moreover, if the line search is exact, then g_kᵀ d_{k−1} = 0, hence θ_k = 0 and the modified method reduces to the standard HS method.

Algorithm 2.1 (modified HS method for (1.4))
Step 0  Choose u₁ ∈ Rⁿ, ε > 0, τ₁ > 0, and 0 < ρ₁ < ρ₂ < 1; set k := 1.
Step 1  Compute g_k = ∇F(u_k). If ‖g_k‖ < ε and τ_k ≤ ε, stop; otherwise go to Step 2.
Step 2  Compute d_k by (2.1).
Step 3  Find a step length α_k satisfying the Wolfe conditions

    F(u_k + α_k d_k) ≤ F(u_k) + ρ₁ α_k ∇F(u_k)ᵀ d_k,                             (2.3)
    ∇F(u_k + α_k d_k)ᵀ d_k ≥ ρ₂ ∇F(u_k)ᵀ d_k.                                    (2.4)

Step 4  Set u_{k+1} = u_k + α_k d_k and τ_{k+1} = τ_k / 4.
Step 5  Set k := k + 1 and go to Step 1.
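Algorithm 2.1 can be sketched as runnable code. This is an illustration, not the authors' implementation: the SciPy Wolfe line search, the Armijo backtracking fallback, the floor on τ, and the division safeguards are my own additions; only the direction (2.1) and the update rules come from the paper.

```python
# Sketch of Algorithm 2.1 (modified HS conjugate gradient with smoothing).
import numpy as np
from scipy.optimize import line_search

def modified_hs_cg(A, b, lam=0.01, tau=1.0, eps=1e-6, max_iter=300,
                   rho1=1e-4, rho2=0.7):
    n = A.shape[1]

    def F(u, t):  # smoothed objective lam * f_tau(u) + 0.5 ||Au - b||^2
        a = np.abs(u)
        huber = np.where(a <= t, u**2 / (2.0 * t), a - t / 2.0).sum()
        r = A @ u - b
        return lam * huber + 0.5 * r @ r

    def gradF(u, t):
        g1 = np.where(np.abs(u) <= t, u / t, np.sign(u))
        return lam * g1 + A.T @ (A @ u - b)

    u = np.zeros(n)
    g = gradF(u, tau)
    d = -g                                   # d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps and tau <= eps:
            break
        # Step 3: Wolfe conditions (2.3)-(2.4); Armijo backtracking fallback.
        alpha = line_search(lambda v: F(v, tau), lambda v: gradF(v, tau),
                            u, d, gfk=g, c1=rho1, c2=rho2)[0]
        if alpha is None:
            alpha, fu, gtd = 1.0, F(u, tau), g @ d
            while F(u + alpha * d, tau) > fu + rho1 * alpha * gtd and alpha > 1e-16:
                alpha *= 0.5
        u_new = u + alpha * d                # Step 4
        tau = max(tau / 4.0, eps)            # tau_{k+1} = tau_k / 4, floored
        g_new = gradF(u_new, tau)
        y = g_new - g
        dty = d @ y
        if abs(dty) > 1e-12 and g_new @ g_new > 1e-20:
            beta = (g_new @ y) / dty
            theta = (g_new @ y) * (g_new @ d) / (dty * (g_new @ g_new))
            d = -g_new + beta * d - theta * g_new    # direction (2.1)
        else:
            d = -g_new                       # safeguarded restart (my choice)
        u, g = u_new, g_new                  # by (2.2), g.T d = -||g||^2
    return u
```

On a small random instance this sketch drives the residual ‖Au − b‖ well below ‖b‖; it is meant only to make the algorithmic steps concrete, not to reproduce the paper's tuned implementation.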
3  Convergence analysis

Denote the level set Ω = {x ∈ Rⁿ : F(x) ≤ F(x₁)}. We make the following assumption.

Assumption H  The level set Ω is bounded, and in some neighborhood N of Ω the function F is continuously differentiable with gradient g(x) = ∇F(x) satisfying, for some constants L > 0 and M > 0,

    ‖g(x) − g(y)‖ ≤ L‖x − y‖,   for all x, y ∈ N,                                (3.1)
    ‖g(x)‖ ≤ M,                 for all x ∈ Ω.                                   (3.2)

Similar assumptions are used in [8, 9].

Lemma 3.1 [9]  Suppose Assumption H holds and α_k satisfies the Wolfe conditions (2.3) and (2.4). Then the sequence {F(x_k)} is decreasing, Σ_k (−α_k g_kᵀ d_k) < +∞, and

    lim_{k→∞} α_k ‖d_k‖ = 0.                                                     (3.3)

Lemma 3.2 [8]  Suppose Assumption H holds, d_k is a descent direction (g_kᵀ d_k < 0), and α_k satisfies (2.3) and (2.4). Then the Zoutendijk condition

    Σ_k (g_kᵀ d_k)² / ‖d_k‖² < +∞                                                (3.4)

holds; combining (3.4) with the sufficient descent property (2.2) gives

    Σ_k ‖g_k‖⁴ / ‖d_k‖² < +∞.                                                    (3.5)

Theorem 3.3  Suppose Assumption H holds and {u_k} is generated by Algorithm 2.1. If there exists a constant r > 0 such that

    d_{k−1}ᵀ y_{k−1} ≥ r   for all k ≥ 2,                                        (3.6)

then lim inf_{k→∞} ‖g_k‖ = 0.

Proof  By (2.1), the Cauchy-Schwarz inequality, (3.1), (3.2) and (3.6),

    ‖d_k‖ ≤ ‖g_k‖ + (‖g_k‖ ‖y_{k−1}‖ / d_{k−1}ᵀ y_{k−1}) ‖d_{k−1}‖ + θ_k ‖g_k‖
          ≤ M + (2ML α_{k−1} ‖d_{k−1}‖ / r) ‖d_{k−1}‖,

where we used ‖y_{k−1}‖ ≤ L α_{k−1} ‖d_{k−1}‖ and θ_k ‖g_k‖ ≤ ‖g_k‖ ‖y_{k−1}‖ ‖d_{k−1}‖ / d_{k−1}ᵀ y_{k−1}. By (3.3), for any t ∈ (0, 1) there exists an index k₀ such that, for all k > k₀,

    2ML α_{k−1} ‖d_{k−1}‖ / r ≤ t.

Hence, for k > k₀,

    ‖d_k‖ ≤ M + t ‖d_{k−1}‖ ≤ M + t(M + t ‖d_{k−2}‖) ≤ ⋯
          ≤ M(1 + t + t² + ⋯ + t^{k−k₀−1}) + t^{k−k₀} ‖d_{k₀}‖
          ≤ M/(1 − t) + ‖d_{k₀}‖.

Setting m = max{‖d₁‖, ‖d₂‖, …, ‖d_{k₀}‖, M/(1 − t) + ‖d_{k₀}‖}, we obtain

    ‖d_k‖ ≤ m   for all k.                                                       (3.7)
Combining (3.5) with (3.7) yields

    Σ_k ‖g_k‖⁴ / m² < +∞,

hence ‖g_k‖ → 0 as k → ∞, which in particular gives lim inf_{k→∞} ‖g_k‖ = 0.

Next we show that, as the smoothing parameter tends to zero, the method solves the ℓ₁ problem min{‖u‖₁ : Au = b}. The argument follows [5].

Theorem 3.4  Let F_k(u) = λ f_{τ_k}(u) + (1/2)‖Au − b‖², let u_k* = arg min_{u∈Rⁿ} F_k(u), and let u_k denote the iterate produced for the smoothing parameter τ_k. Then, as τ_k → 0, every limit point u* of {u_k} is a solution of min{‖u‖₁ : Au = b}.

Proof  By Nesterov's smoothing bounds, f_{τ_k}(u) ≤ ‖u‖₁ ≤ f_{τ_k}(u) + nτ_k/2 for all u ∈ Rⁿ. Hence

    λ‖u_k‖₁ ≤ λ f_{τ_k}(u_k) + λnτ_k/2
            ≤ λ f_{τ_k}(u_k) + (1/2)‖Au_k − b‖² + λnτ_k/2
            = F_k(u_k) + λnτ_k/2.

Since ∇F_k is Lipschitz continuous with some constant L_k, it follows as in [5, Lemma 2.3] that F_k(u_k) ≤ F_k(u_k*) + τ_k L_k, and therefore

    λ‖u_k‖₁ ≤ F_k(u_k*) + τ_k L_k + λnτ_k/2.                                     (3.8)

Letting τ_k → 0 in (3.8) shows that {u_k} is bounded. By [5, Theorem 3.3], every limit point u* of {u_k} satisfies the KKT conditions of (1.2); since (1.2) is convex, u* is a solution of (1.2).

4  Numerical experiments

In the experiments the line search parameters are chosen with ρ₂ = 0.7 (and 0 < ρ₁ < ρ₂), and the tolerance is ε = 10⁻⁸. The measurement matrix A ∈ R^{m×n} is a random Gaussian matrix, the exact signal u_exact ∈ Rⁿ has k nonzero entries, and b = A u_exact. For a computed solution u we report the relative residual ‖Au − b‖/‖b‖ and the relative error ‖u − u_exact‖/‖u_exact‖.

We first fix A to be 512 × 1024 and vary the regularization parameter λ.

Table 4.1  Recovery results for different values of λ.

We then fix λ and vary the problem size m × n over 256 × 512, 512 × 1024, 1024 × 2048 and 2048 × 4096.

Table 4.2  Recovery results for different problem sizes m × n.
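The experimental setup described above (Gaussian measurement matrix, k-sparse ground truth, exact data b = A u_exact, and the two error measures) can be reproduced schematically; the sizes, seed, and column scaling below are my own choices, not the paper's:

```python
import numpy as np

def make_instance(m, n, k, seed=0):
    """Gaussian measurement matrix, k-sparse ground truth, exact data b."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # scaling is my choice
    u_exact = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    u_exact[support] = rng.normal(size=k)
    return A, u_exact, A @ u_exact

def metrics(u, A, b, u_exact):
    """Relative residual ||Au - b||/||b|| and relative error
    ||u - u_exact||/||u_exact||, as reported in the tables."""
    rel_res = np.linalg.norm(A @ u - b) / np.linalg.norm(b)
    rel_err = np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact)
    return rel_res, rel_err

A, u_exact, b = make_instance(256, 512, 32)
print(metrics(u_exact, A, b, u_exact))  # both metrics are 0 at the ground truth
```

Any solver (the modified HS method, HS, or PRP) can then be compared on the same instance by passing its output to `metrics`.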
Table 4.3  Comparison of the modified HS method with the HS and PRP methods for the four problem sizes (relative error, relative residual, and CPU time).

Figure 4.1  Recovered signals for different values of λ.

5  Conclusions

In this paper, Nesterov's smoothing technique is combined with a modified HS conjugate gradient method to solve the large-scale sparse signal recovery problem. The numerical results illustrate the influence of the regularization parameter λ on the recovery quality and show that the method remains effective as the problem size grows.
Figure 4.2  Recovery results for the problem sizes 256 × 512, 512 × 1024, 1024 × 2048 and 2048 × 4096.

Figure 4.3  Objective function value, relative error and relative residual versus the number of iterations and the CPU time for the modified HS, HS and PRP methods.
Compared with the classical HS and PRP conjugate gradient methods, the modified HS method achieves better reconstruction results. Further improvements along the lines of [10, 11] are left for future work.

References

[1] Zhu H, Xiao Y, Wu S Y. Large sparse signal recovery by conjugate gradient algorithm based on smoothing technique [J]. Comput. Math. Appl., 2013, 66(1): 24-32.
[2] Donoho D L. For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution [J]. Commun. Pure Appl. Math., 2006, 59(6): 797-829.
[3] Candes E J, Tao T. Near-optimal signal recovery from random projections: universal encoding strategies? [J]. IEEE Trans. Inform. Theory, 2006, 52(12): 5406-5425.
[4] Donoho D L. Compressed sensing [J]. IEEE Trans. Inform. Theory, 2006, 52(4): 1289-1306.
[5] Aybat N S, Iyengar G. A first-order smoothed penalty method for compressed sensing [J]. SIAM J. Optim., 2011, 21(1): 287-313.
[6] Nesterov Y. Smooth minimization of non-smooth functions [J]. Math. Prog., 2005, 103(1): 127-152.
[7] Al-Baali M. Descent property and global convergence of the Fletcher-Reeves method with inexact line search [J]. IMA J. Numer. Anal., 1985, 5(1): 121-124.
[8] Livieris I E, Pintelas P. Globally convergent modified Perry's conjugate gradient method [J]. Appl. Math. Comput., 2012, 218(18): 9197-9207.
[9] Zhang L, Zhou W, Li D H. A descent modified Polak-Ribiere-Polyak conjugate gradient method and its global convergence [J]. IMA J. Numer. Anal., 2006, 26(4): 629-640.
[10] (In Chinese.) A modified BFGS method [J]. 2015, 35(3): 727-734.
[11] (In Chinese.) [J]. 3(4): 685-694.

THE APPLICATION OF A MODIFIED HS CONJUGATE GRADIENT METHOD FOR LARGE-SCALE SIGNAL RECONSTRUCTION PROBLEMS

CHEN Feng-hua, LI Shuang-an
(Teaching Department of the Public Infrastructure, Zhengzhou Technology and Business University, Zhengzhou 454, China)

Abstract: In this paper, we study the application of compressed sensing to large-scale signal recovery problems. By combining a modified HS conjugate gradient method with a smoothing technique, we obtain an algorithm with good reconstruction performance. Preliminary numerical results show that the algorithm is suitable for solving large-scale sparse signal recovery problems.
Keywords: compressed sensing; modified HS conjugate gradient method; sparse signal; Nesterov's smoothing technique
2010 MR Subject Classification: 90C25; 65K05