Chapter 10

Proximal Splitting Methods in Signal Processing

Patrick L. Combettes and Jean-Christophe Pesquet ∗

Abstract The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.

Key words: Alternating-direction method of multipliers, backward–backward algorithm, convex optimization, denoising, Douglas–Rachford algorithm, forward–backward algorithm, frame, Landweber method, iterative thresholding, parallel computing, Peaceman–Rachford algorithm, proximal algorithm, restoration and reconstruction, sparsity, splitting

P. L. Combettes
e-mail: plc@math….fr

J.-C. Pesquet
Laboratoire d'Informatique Gaspard Monge, UMR CNRS 8049, Université Paris-Est, 77454 Marne la Vallée Cedex 2, France

∗ This work was supported by the Agence Nationale de la Recherche under grants ANR-08-BLAN-0294-02 and ANR-09-EMER-004-03.
10.1 Introduction

With the development of nonlinear analysis in mathematics in the late 1950s and early 1960s (see the bibliographies of [6, 142]) and the availability of faster computers, nonlinear techniques have slowly become prevalent.

Many signal processing problems can in fine be formulated as convex optimization problems of the form

minimize  f1(x) + · · · + fm(x),   (10.1)
x∈RN

where f1, . . . , fm are convex functions from RN to ]−∞, +∞]. In this paper, we describe a class of efficient convex optimization algorithms to solve (10.1). These methods proceed by splitting in that the functions f1, . . . , fm are used individually so as to yield an easily implementable algorithm. They are called proximal because each nonsmooth function in (10.1) is involved via its proximity operator. Although proximal methods, which can be traced back to the work of Martinet [98], have been introduced in signal processing only recently [46, 55], their use is spreading rapidly.

The power and flexibility of proximal methods will be emphasized. In particular, it will be shown that a number of apparently unrelated, well-known algorithms (e.g., iterative thresholding, projected Landweber, projected gradient, alternating projections, alternating-direction method of multipliers, alternating split Bregman) are special instances of proximal algorithms. In this respect, the proximal formalism provides a unifying framework for analyzing and developing a broad class of convex optimization algorithms.

The paper is organized as follows. Proximity operators are introduced in Section 10.2, where we also discuss their main properties and provide examples. In Sections 10.3 and 10.4, we describe the main proximal splitting algorithms, namely the forward–backward algorithm and the Douglas–Rachford algorithm. In Section 10.5, we present a Dykstra-like splitting method. Composite problems involving linear transformations of the variables are addressed in Section 10.6. The algorithms discussed so far are designed for m = 2 functions. In Section 10.7, we discuss parallel variants of these algorithms for problems involving m ≥ 2 functions. Concluding remarks appear in Section 10.8.
10.1.1 Notation

The domain of a function f : RN → ]−∞, +∞] is dom f = {x ∈ RN | f(x) < +∞}. Standard definitions and notation from convex analysis will be used [13, 87, 114]. Γ0(RN) is the class of lower semicontinuous convex functions from RN to ]−∞, +∞] such that dom f ≠ ∅. Let f ∈ Γ0(RN). The conjugate of f is the function f∗ ∈ Γ0(RN) defined by

f∗ : RN → ]−∞, +∞] : u ↦ sup_{x∈RN} x⊤u − f(x),   (10.2)

and the subdifferential of f is the set-valued operator

∂f : RN → 2^RN : x ↦ {u ∈ RN | (∀y ∈ RN) (y − x)⊤u + f(x) ≤ f(y)}.   (10.3)

Let C be a nonempty subset of RN. The indicator function of C is

ιC : x ↦ 0, if x ∈ C;  +∞, if x ∉ C,   (10.4)

the support function of C is

σC = ιC∗ : RN → ]−∞, +∞] : u ↦ sup_{x∈C} x⊤u,   (10.5)

the distance from x ∈ RN to C is dC(x) = inf_{y∈C} ‖x − y‖, and the relative interior of C (i.e., interior of C relative to its affine hull) is the nonempty set denoted by ri C.

10.2 From projection to proximity operators

One of the first widely used convex optimization splitting algorithms in signal processing is POCS (Projection Onto Convex Sets) [31, 42, 141].
This algorithm is employed to recover/synthesize a signal satisfying simultaneously several convex constraints. Such a problem can be formalized within the framework of (10.1) by letting each function fi be the indicator function of a nonempty closed convex set Ci modeling a constraint. This reduces (10.1) to the classical convex feasibility problem [31, 42, 44, 86, 93, 121, 122, 128, 141]

find  x ∈ ∩_{i=1}^m Ci.   (10.6)

The POCS algorithm activates each set Ci individually by means of its projection operator PCi. It is governed by the updating rule

xn+1 = PC1 · · · PCm xn.   (10.7)

When ∩_{i=1}^m Ci ≠ ∅, the sequence (xn)n∈N thus produced converges to a solution to (10.6). Projection algorithms have been enriched with many extensions of this basic iteration to solve (10.6). Variants have also been proposed to solve more general problems, e.g., that of finding the projection of a signal onto an intersection of convex sets [22, 47, 137]. Beyond such problems, however, projection methods are not appropriate and more general operators are required to tackle (10.1). Among the various generalizations of the notion of a convex projection operator, proximity operators are best suited for our purposes.
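The POCS update (10.7) can be sketched numerically. The sets, starting point, and iteration count below are assumed for illustration (a ball and a hyperplane, not data from the chapter); the limit should lie in the intersection of the two sets.

```python
import numpy as np

# POCS sketch for two sets: C1 = closed ball of radius 2 centered at the
# origin, C2 = hyperplane {x | a.x = 1}. Iterate x_{n+1} = P_{C1} P_{C2} x_n.

def proj_ball(x, radius=2.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def proj_hyperplane(x, a, b):
    # Euclidean projection onto {x | a.x = b}
    return x - (a @ x - b) / (a @ a) * a

a = np.array([1.0, 1.0])
x = np.array([5.0, -3.0])          # arbitrary starting point
for _ in range(50):
    x = proj_ball(proj_hyperplane(x, a, 1.0))

in_ball = np.linalg.norm(x) <= 2.0 + 1e-9
on_plane = abs(a @ x - 1.0) <= 1e-6
```

Since the hyperplane meets the ball, the theory quoted above guarantees convergence to a feasible point; numerically both membership tests hold after a few dozen sweeps.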

The projection PC x of x ∈ RN onto the nonempty closed convex set C ⊂ RN is the solution to the problem

minimize  ιC(y) + (1/2)‖x − y‖².   (10.8)
y∈RN

Under the above hypotheses, the function ιC belongs to Γ0(RN). In 1962, Moreau [101] proposed the following extension of the notion of a projection operator, whereby the function ιC in (10.8) is replaced by an arbitrary function f ∈ Γ0(RN).

Definition 10.1 (Proximity operator) Let f ∈ Γ0(RN). For every x ∈ RN, the minimization problem

minimize  f(y) + (1/2)‖x − y‖²   (10.9)
y∈RN

admits a unique solution, which is denoted by prox_f x. The operator prox_f : RN → RN thus defined is the proximity operator of f.

Let f ∈ Γ0(RN). The proximity operator of f is characterized by the inclusion

(∀(x, p) ∈ RN × RN)   p = prox_f x  ⇔  x − p ∈ ∂f(p),   (10.10)

which reduces to

(∀(x, p) ∈ RN × RN)   p = prox_f x  ⇔  x − p = ∇f(p)   (10.11)

if f is differentiable. Proximity operators have very attractive properties that make them particularly well suited for iterative minimization algorithms. For instance, prox_f is firmly nonexpansive, i.e.,

(∀x ∈ RN)(∀y ∈ RN)  ‖prox_f x − prox_f y‖² + ‖(x − prox_f x) − (y − prox_f y)‖² ≤ ‖x − y‖²,   (10.12)

and its fixed point set is precisely the set of minimizers of f. Such properties allow us to envision the possibility of developing algorithms based on the proximity operators (prox_{fi})_{1≤i≤m} to solve (10.1), mimicking to some extent the way convex feasibility algorithms employ the projection operators (PCi)_{1≤i≤m} to solve (10.6). As shown in Table 10.1, proximity operators enjoy many additional properties. One will find in Table 10.2 closed-form expressions of the proximity operators of various functions in Γ0(R) (in the case of functions such as |·|^q, proximity operators implicitly appear in several places, e.g., [3, 4, 35]).

From a signal processing perspective, proximity operators have a very natural interpretation in terms of denoising. Let us consider the standard denoising problem of recovering a signal x ∈ RN from an observation

y = x + w,   (10.13)

where w ∈ RN models noise. This problem can be formulated as (10.9), where ‖· − y‖²/2 plays the role of a data fidelity term and where f models a priori knowledge about x.
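Definition 10.1 can be checked numerically on the scalar function f = λ|·|, whose proximity operator is known to be soft-thresholding. The grid-search minimizer below is a sketch with assumed values of λ and test points; it is not code from the chapter.

```python
import numpy as np

# The proximity operator of f = lam*|.| on R, i.e. the minimizer in (10.9),
# should coincide with soft-thresholding: sign(x) * max(|x| - lam, 0).

lam = 0.4

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_numeric(x, lam):
    # Brute-force minimization of the strictly convex objective (10.9) on a grid.
    ys = np.linspace(-5, 5, 200001)
    obj = lam * np.abs(ys) + 0.5 * (x - ys) ** 2
    return ys[np.argmin(obj)]

checks = [abs(prox_numeric(x, lam) - soft(x, lam)) < 1e-3 for x in (-2.0, -0.2, 0.1, 1.5)]
```

The agreement between the direct minimization and the closed form illustrates the denoising interpretation: prox_f shrinks the noisy observation toward the prior encoded by f.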

10.3 Forward–backward splitting

In this section, we consider the case of m = 2 functions in (10.1), one of which is smooth.
Problem 10.2 Let f1 ∈ Γ0(RN), let f2 : RN → R be convex and differentiable with a β-Lipschitz continuous gradient ∇f2, i.e.,

(∀(x, y) ∈ RN × RN)   ‖∇f2(x) − ∇f2(y)‖ ≤ β‖x − y‖,   (10.14)

where β ∈ ]0, +∞[. Suppose that f1(x) + f2(x) → +∞ as ‖x‖ → +∞. The problem is to

minimize  f1(x) + f2(x).   (10.15)
x∈RN

It can be shown [55] that Problem 10.2 admits at least one solution and that, for any γ ∈ ]0, +∞[, its solutions are characterized by the fixed point equation

x = prox_{γf1}(x − γ∇f2(x)).   (10.16)

Table 10.1 Properties of proximity operators [27, 37, 53–55, 102]: ϕ ∈ Γ0(RN); C ⊂ RN is nonempty, closed, and convex; x ∈ RN. [Closed-form entries omitted.]

This characterization suggests solving (10.15) by iterating

xn+1 = prox_{γn f1}(xn − γn ∇f2(xn))   (10.17)
        (backward step)(forward step)

for values of the step-size parameter γn in a suitable bounded interval. This type of scheme is known as a forward–backward splitting algorithm: it can be broken up into a forward (explicit) gradient step using the function f2, and a backward (implicit) proximal step using the function f1. The forward–backward algorithm finds its roots in the projected gradient method [94] and in decomposition methods for solving variational inequalities [99, 119]. Let us note that, on the one hand, when f1 = 0, (10.17) reduces to the gradient method

xn+1 = xn − γn ∇f2(xn)   (10.18)

for minimizing a function with a Lipschitz continuous gradient [19, 61]. On the other hand, when f2 = 0, (10.17) reduces to the proximal point algorithm

xn+1 = prox_{γn f1} xn   (10.19)

for minimizing a nondifferentiable function. The forward–backward algorithm can therefore be considered as a combination of these two basic schemes.
Algorithm 10.3 (Forward–backward algorithm)
Fix ε ∈ ]0, min{1, 1/β}[, x0 ∈ RN
For n = 0, 1, . . .
  γn ∈ [ε, 2/β − ε]
  yn = xn − γn ∇f2(xn)
  λn ∈ [ε, 1]
  xn+1 = xn + λn (prox_{γn f1} yn − xn).   (10.20)

Proposition 10.4 [55] Every sequence (xn)n∈N generated by Algorithm 10.3 converges to a solution to Problem 10.2.
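As a minimal numerical sketch of the update (10.20), take assumed data (not from the chapter): f1 the indicator of the nonnegative orthant, so that the proximal step is the projection max(·, 0), and f2(x) = ½‖x − c‖², so that ∇f2(x) = x − c is 1-Lipschitz (β = 1).

```python
import numpy as np

# Forward-backward iteration (10.20) on a toy constrained quadratic:
# minimize (1/2)||x - c||^2 subject to x >= 0. The solution is max(c, 0).

c = np.array([1.0, -2.0])
gamma, lam = 1.0, 1.0              # gamma in ]0, 2/beta[, lambda in ]0, 1]

x = np.zeros(2)
for _ in range(100):
    y = x - gamma * (x - c)                    # forward (gradient) step
    x = x + lam * (np.maximum(y, 0.0) - x)     # backward (proximal) step

# x should now equal the projection of c onto the orthant: (1, 0).
```

With these choices the iteration reaches the minimizer immediately, but the same loop applies verbatim to any pair (prox_{γf1}, ∇f2) satisfying the assumptions of Problem 10.2.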

The above forward–backward algorithm features varying step-sizes (γn)n∈N but its relaxation parameters (λn)n∈N cannot exceed 1. The following variant uses constant step-sizes and larger relaxation parameters.

Algorithm 10.5 (Constant-step forward–backward algorithm)
Fix ε ∈ ]0, 3/4[, x0 ∈ RN
For n = 0, 1, . . .
  yn = xn − β⁻¹ ∇f2(xn)
  λn ∈ [ε, 3/2 − ε]
  xn+1 = xn + λn (prox_{β⁻¹ f1} yn − xn).   (10.21)

Proposition 10.6 [14] Every sequence (xn)n∈N generated by Algorithm 10.5 converges to a solution to Problem 10.2.

Although they may have limited impact on actual numerical performance, it may be of interest to know whether linear convergence rates are available for the forward–backward algorithm. In general the answer is negative: even in the simple setting of Example 10.12 below, linear convergence of the iterates generated by Algorithm 10.3 fails [9, 139].

Another type of convergence rate is that pertaining to the objective values (f1(xn) + f2(xn))n∈N. Variants of Algorithm 10.3 have been developed to improve it [15, 16, 84, 104, 105, 131, 136] in the spirit of classical work by Nesterov [106]. The proximal gradient method proposed in [15, 16] assumes the following form.
Algorithm 10.7 (Beck–Teboulle proximal gradient algorithm)
Fix x0 ∈ RN, set z0 = x0 and t0 = 1
For n = 0, 1, . . .
  yn = zn − β⁻¹ ∇f2(zn)
  xn+1 = prox_{β⁻¹ f1} yn
  tn+1 = (1 + √(4tn² + 1))/2
  λn = 1 + (tn − 1)/tn+1
  zn+1 = xn + λn (xn+1 − xn).   (10.22)

While little is known about the actual convergence of sequences produced by Algorithm 10.7, the O(1/n²) rate of convergence of the objective function they achieve is optimal [103], although the practical impact of such property is not always manifest in concrete problems (see Figure 10.2).

Proposition 10.8 [15] Let x be a solution to Problem 10.2. Then every sequence (xn)n∈N generated by Algorithm 10.7 satisfies

(∀n ∈ N \ {0})   f1(xn) + f2(xn) ≤ f1(x) + f2(x) + 2β‖x0 − x‖² / (n + 1)².   (10.23)
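The update (10.22) can be sketched on an assumed toy problem (not from the chapter): f1 = ω‖·‖₁ and f2 = ½‖· − b‖², whose exact minimizer is the soft-thresholded vector.

```python
import numpy as np

# Beck-Teboulle proximal gradient iteration (10.22) with beta = 1:
# minimize omega*||x||_1 + (1/2)||x - b||^2; exact solution soft(b, omega).

omega, beta = 0.5, 1.0
b = np.array([2.0, -0.3, 1.0, -4.0])

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = z = np.zeros_like(b)
t = 1.0
for _ in range(50):
    y = z - (1.0 / beta) * (z - b)        # forward step at the extrapolated point z_n
    x_next = soft(y, omega / beta)        # backward step: prox of f1/beta
    t_next = (1.0 + np.sqrt(4.0 * t * t + 1.0)) / 2.0
    lam = 1.0 + (t - 1.0) / t_next
    z = x + lam * (x_next - x)            # extrapolation z_{n+1}
    x, t = x_next, t_next

err = float(np.max(np.abs(x - soft(b, omega))))
```

The extrapolation sequence (tn) is exactly the one of (10.22); on this separable example the iterate matches the closed-form solution.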

Other variations of the forward–backward algorithm have also been reported to yield improved convergence profiles [20, 70, 97, 134, 135].

Problem 10.2 and Proposition 10.4 cover a wide variety of signal processing problems and solution methods. For the sake of illustration, let us provide a few examples. In the first one, we set λn ≡ 1 in Algorithm 10.3, which reduces the updating rule to (10.17).

Example 10.9 (projected gradient) In Problem 10.2, suppose that f1 = ιC, where C is a closed convex subset of RN such that {x ∈ C | f2(x) ≤ η} is nonempty and bounded for some η ∈ R.
Table 10.2 Proximity operator of φ ∈ Γ0(R): α ∈ R, κ > 0, ω > 0, q > 1, τ ≥ 0 [37, 53, 55]. [The closed-form entries of this table are omitted.] The proximity operator of entry xiii thresholds over an interval and saturates at −∞ and +∞ with asymptotes (see [53]).

Then we obtain the constrained minimization problem

minimize  f2(x).   (10.24)
x∈C

Since prox_{γ ιC} = PC, the forward–backward iteration (10.17) yields the projected gradient method

xn+1 = PC (xn − γn ∇f2(xn)),   ε ≤ γn ≤ 2/β − ε.   (10.25)


Example 10.10 (projected Landweber) In Example 10.9, setting f2 : x ↦ ‖Lx − y‖²/2, where L ∈ RM×N \ {0} and y ∈ RM, yields the constrained least-squares problem

minimize  (1/2)‖Lx − y‖².   (10.26)
x∈C

Since ∇f2 : x ↦ L⊤(Lx − y) has Lipschitz constant β = ‖L‖², (10.25) yields the projected Landweber method [68]

xn+1 = PC (xn + γn L⊤(y − Lxn)),   ε ≤ γn ≤ 2/‖L‖² − ε.   (10.27)

Example 10.11 (backward–backward algorithm) Consider the problem

minimize  f(x) + g̃(x),   (10.28)
x∈RN

where f ∈ Γ0(RN) and g̃ is the Moreau envelope of a function g ∈ Γ0(RN). This is a special case of Problem 10.2 with f1 = f and f2 = g̃. Since ∇f2 : x ↦ x − prox_g x has Lipschitz constant β = 1 [55, 102], Proposition 10.4 asserts that the sequence (xn)n∈N generated by the backward–backward algorithm

xn+1 = prox_f (prox_g xn)   (10.29)

converges to a solution to (10.28). Detailed analyses of this scheme can be found in [1, 14, 48, 108].
Example 10.12 (alternating projections) In Example 10.11, let f and g be respectively the indicator functions of nonempty closed convex sets C and D, one of which is bounded. Then (10.28) amounts to finding a signal x in C at closest distance from D, i.e.,

minimize  (1/2) dD²(x).   (10.30)
x∈C

Moreover, since prox_g = PD, (10.29) yields the alternating projection method

xn+1 = PC (PD xn),   (10.31)

which was first analyzed in this context in [41]. Signal processing applications can be found in the areas of spectral estimation [80], pulse shape design [107], wavelet construction [109], and signal synthesis [140].

Example 10.13 (iterative thresholding) Let (bk)1≤k≤N be an orthonormal basis of RN, let (ωk)1≤k≤N be strictly positive real numbers, let L ∈ RM×N \ {0}, and let y ∈ RM.
Fig. 10.2 Forward–backward versus Beck–Teboulle: as in Example 10.12, the problem (10.30) is to find a point x∞ in C at minimum distance from D. Top: the forward–backward algorithm with γn ≡ 1; as seen in Example 10.12, it reduces to the alternating projection method (10.31). Bottom: the Beck–Teboulle algorithm.


Consider the problem

minimize  Σ_{k=1}^N ωk |x⊤bk| + (1/2)‖Lx − y‖².   (10.32)
x∈RN

This type of formulation arises in signal recovery problems in which y is the observed signal and the original signal is known to have a sparse representation in the basis (bk)1≤k≤N, e.g., [17, 20, 56, 58, 72, 73, 125, 127]. We observe that (10.32) is a special case of (10.15) with

f1 : x ↦ Σ_{k=1}^N ωk |x⊤bk|  and  f2 : x ↦ (1/2)‖Lx − y‖².   (10.33)

Since ∇f2 : x ↦ L⊤(Lx − y) has Lipschitz constant β = ‖L‖², it follows from Proposition 10.4 that the sequence (xn)n∈N generated by the iterative thresholding algorithm

xn+1 = Σ_{k=1}^N ξk,n bk,  where  ξk,n = soft_{[−γn ωk, γn ωk]} ( (xn + γn L⊤(y − Lxn))⊤ bk ),  ε ≤ γn ≤ 2/‖L‖² − ε,   (10.34)

converges to a solution to (10.32); here soft denotes the soft-thresholding operation (see Table 10.2).

10.4 Douglas–Rachford splitting

The forward–backward algorithm of Section 10.3 requires that one of the functions be differentiable, with a Lipschitz continuous gradient. In this section, we relax this assumption.

Problem 10.14 Let f1 and f2 be functions in Γ0(RN) such that

(ri dom f1) ∩ (ri dom f2) ≠ ∅   (10.35)

and f1(x) + f2(x) → +∞ as ‖x‖ → +∞. The problem is to

minimize  f1(x) + f2(x).   (10.36)
x∈RN
What is nowadays referred to as the Douglas–Rachford algorithm goes back to a method originally proposed in [60]. The method was subsequently transformed to handle general nonlinear problems [96]. For further developments, see [48, 49, 66].

Problem 10.14 admits at least one solution and, for any γ ∈ ]0, +∞[, its solutions are characterized by the two-level condition [52]

x = prox_{γf2} y,   prox_{γf2} y = prox_{γf1}(2 prox_{γf2} y − y),   (10.37)

which motivates the following scheme.
Algorithm 10.15 (Douglas–Rachford algorithm)
Fix ε ∈ ]0, 1[, γ > 0, y0 ∈ RN
For n = 0, 1, . . .
  xn = prox_{γf2} yn
  λn ∈ [ε, 2 − ε]
  yn+1 = yn + λn (prox_{γf1}(2xn − yn) − xn).   (10.38)

Proposition 10.16 [49] Every sequence (xn)n∈N generated by Algorithm 10.15 converges to a solution to Problem 10.14.
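A minimal sketch of (10.38) with assumed data (not from the chapter): f1 the indicator of the box C = [0,1]², and f2 = ½‖· − c‖², so that prox_{γf2} y = (y + γc)/(1 + γ) and the minimizer of f1 + f2 is the projection of c onto C.

```python
import numpy as np

# Douglas-Rachford iteration (10.38) for f1 = indicator of [0,1]^2,
# f2 = (1/2)||. - c||^2. Neither function needs a Lipschitz gradient step.

c = np.array([2.0, -0.5])
gamma, lam = 1.0, 1.0

y = np.zeros(2)
for _ in range(200):
    x = (y + gamma * c) / (1.0 + gamma)    # x_n = prox_{gamma f2} y_n
    r = np.clip(2.0 * x - y, 0.0, 1.0)     # prox_{gamma f1}(2 x_n - y_n)
    y = y + lam * (r - x)

expected = np.clip(c, 0.0, 1.0)            # projection of c onto the box
err = float(np.max(np.abs(x - expected)))
```

Note that it is the sequence (xn), not the governing sequence (yn), that converges to the solution, in accordance with Proposition 10.16.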

Just like the forward–backward algorithm, the Douglas–Rachford algorithm operates by splitting since it employs the functions f1 and f2 separately. It does not require that either function have a Lipschitz continuous gradient. However, this observation must be weighed against the fact that it may be more demanding numerically as it requires the implementation of two proximal steps at each iteration, whereas only one is needed in the forward–backward algorithm.

Applications of the Douglas–Rachford algorithm to signal and image processing can be found in [38, 52, 62, 63, 117, 118, 123]. The limiting case λn ≡ 2 corresponds to the Peaceman–Rachford algorithm; its convergence requires additional assumptions (for instance, that f2 be strictly convex and real-valued) [49].
10.5 Dykstra-like splitting

Fig. 10.1 Douglas–Rachford versus forward–backward: as in Example 10.12, let C and D be two closed convex sets and consider the problem (10.30). Set f1 = ιC and f2 = dD²/2. As seen in Example 10.12, the forward–backward algorithm reduces to the alternating projection method (10.31). For the Douglas–Rachford algorithm, Table 10.1.xii yields prox_f1 = PC and Table 10.1.vi yields prox_f2 : x ↦ (x + PD x)/2; hence Algorithm 10.15 reduces to xn = (yn + PD yn)/2 and yn+1 = PC(2xn − yn) + yn − xn = PC(PD yn) + yn − xn.

In this section we consider problems involving a quadratic term penalizing the deviation from a reference signal r.

Problem 10.17 Let f and g be functions in Γ0(RN) such that dom f ∩ dom g ≠ ∅, and let r ∈ RN. The problem is to
minimize  f(x) + g(x) + (1/2)‖x − r‖².   (10.39)
x∈RN

It follows at once from (10.9) that Problem 10.17 admits a unique solution, namely x = prox_{f+g} r. To compute it iteratively, we can observe that (10.39) can be viewed as an instance of (10.36) in Problem 10.14 with f1 = f and f2 = g + ‖· − r‖²/2. However, in this Douglas–Rachford framework, the additional qualification condition (10.35) needs to be imposed. In the present setting we require only the minimal feasibility condition dom f ∩ dom g ≠ ∅.
Algorithm 10.18 (Dykstra-like proximal algorithm)
Set x0 = r, p0 = 0, q0 = 0
For n = 0, 1, . . .
  yn = prox_g (xn + pn)
  pn+1 = xn + pn − yn
  xn+1 = prox_f (yn + qn)
  qn+1 = yn + qn − xn+1.   (10.40)

Proposition 10.19 [12] Every sequence (xn)n∈N generated by Algorithm 10.18 converges to the solution to Problem 10.17.
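The scheme (10.40) can be sketched in the best-approximation setting of Example 10.20 below, with assumed sets (two half-spaces, not from the chapter), so that prox_f and prox_g are the projections PC and PD and the algorithm computes the projection of r onto C ∩ D.

```python
import numpy as np

# Dykstra-like algorithm (10.40) with f, g indicators of
# C = {x | x1 <= 0} and D = {x | x2 <= 0}. The unique solution of (10.39)
# is then the projection of r onto C ∩ D (the nonpositive quadrant).

def P_C(v):
    return np.array([min(v[0], 0.0), v[1]])

def P_D(v):
    return np.array([v[0], min(v[1], 0.0)])

r = np.array([1.0, 2.0])
x, p, q = r.copy(), np.zeros(2), np.zeros(2)
for _ in range(50):
    y = P_D(x + p)            # y_n = prox_g(x_n + p_n)
    p = x + p - y
    x_new = P_C(y + q)        # x_{n+1} = prox_f(y_n + q_n)
    q = y + q - x_new
    x = x_new

# Projection of (1, 2) onto the nonpositive quadrant is the origin.
```

The correction vectors p and q are what distinguish this scheme from naive alternating proximal steps; without them the limit would generally not be prox_{f+g} r.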
Example 10.20 (best approximation) Let f and g be the indicator functions of closed convex sets C and D, respectively, in Problem 10.17. Then the problem is to find the best approximation to r from C ∩ D, i.e., the projection of r onto C ∩ D. In this case, since prox_f = PC and prox_g = PD, Algorithm 10.18 reduces to Dykstra's projection method [22, 64].

Example 10.21 (denoising) Consider the denoising problem (10.13). If f and g are functions in Γ0(RN) promoting certain properties of x, adopting a least-squares data fitting objective leads to the variational denoising problem (10.39).

10.6 Composite problems

We focus on variational problems with m = 2 functions involving explicitly a linear transformation.

Problem 10.22 Let f ∈ Γ0(RN), let g ∈ Γ0(RM), and let L ∈ RM×N \ {0} be such that dom g ∩ L(dom f) ≠ ∅ and f(x) + g(Lx) → +∞ as ‖x‖ → +∞. The problem is to

minimize  f(x) + g(Lx).   (10.41)
x∈RN

Our assumptions guarantee that Problem 10.22 admits at least one solution. To find such a solution, several scenarios can be contemplated.
10.6.1 Forward–backward splitting

Suppose that in Problem 10.22 g is differentiable with a τ-Lipschitz continuous gradient (see (10.14)). Now set f1 = f and f2 = g ∘ L. Then f2 is differentiable and its gradient

∇f2 = L⊤ ∘ ∇g ∘ L   (10.42)

is β-Lipschitz continuous, with β = τ‖L‖². Hence, we can apply the forward–backward splitting method, namely Algorithm 10.3. As seen in (10.20), it operates with the updating rule

  γn ∈ [ε, 2/(τ‖L‖²) − ε]
  yn = xn − γn L⊤ ∇g(Lxn)
  λn ∈ [ε, 1]
  xn+1 = xn + λn (prox_{γn f} yn − xn).   (10.43)

Convergence is guaranteed by Proposition 10.4.


10.6.2 Douglas–Rachford splitting

Suppose that in Problem 10.22 the matrix L satisfies

LL⊤ = νI, where ν ∈ ]0, +∞[,   (10.44)

and (ri dom g) ∩ ri L(dom f) ≠ ∅. Let us set f1 = f and f2 = g ∘ L. As seen in Table 10.1.x, prox_f2 has a closed-form expression in terms of prox_g and we can therefore apply the Douglas–Rachford splitting method (Algorithm 10.15). In this scenario, the updating rule reads

  xn = yn + ν⁻¹ L⊤ (prox_{γν g}(Lyn) − Lyn)
  λn ∈ [ε, 2 − ε]
  yn+1 = yn + λn (prox_{γf}(2xn − yn) − xn).   (10.45)

Convergence is guaranteed by Proposition 10.16.


10.6.3 Dual forward–backward splitting

Suppose that in Problem 10.22 f = h + ‖· − r‖²/2, where h ∈ Γ0(RN) and r ∈ RN. Then (10.41) becomes

minimize  h(x) + g(Lx) + (1/2)‖x − r‖²,   (10.46)
x∈RN

which models various signal recovery problems, e.g., [33, 34, 51, 59, 112, 133]. If (10.44) holds, (10.46) can be solved with the Dykstra-like method of Section 10.5 (using Table 10.1). Otherwise, we can exploit the nice properties of the Fenchel–Moreau–Rockafellar dual of (10.46), solve this dual problem by forward–backward splitting, and recover the unique solution to (10.46) [51].

Algorithm 10.23 (Dual forward–backward algorithm)
Fix ε ∈ ]0, min{1, 1/‖L‖²}[, u0 ∈ RM
For n = 0, 1, . . .
  xn = prox_h (r − L⊤un)
  γn ∈ [ε, 2/‖L‖² − ε]
  λn ∈ [ε, 1]
  un+1 = un + λn (prox_{γn g∗}(un + γn Lxn) − un).   (10.47)

Proposition 10.24 [51] Assume that (ri dom g) ∩ ri L(dom h) ≠ ∅. Then every sequence (xn)n∈N generated by the dual forward–backward Algorithm 10.23 converges to the solution to (10.46).
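A sketch of the dual iteration with assumed data (not from the chapter): h = 0, L = I, and g = ω‖·‖₁, so that (10.46) is ℓ1 denoising with exact solution soft(r, ω). Since g∗ is then the indicator of the ℓ∞ ball of radius ω, prox_{γg∗} is the clipping operation onto that ball.

```python
import numpy as np

# Dual forward-backward (10.47) for min omega*||x||_1 + (1/2)||x - r||^2.
# Primal recovery: x_n = r - u_n; dual step: clip onto the l-inf ball.

r = np.array([2.0, -0.3, 0.8])
omega = 0.5
gamma, lam = 1.0, 1.0          # gamma in ]0, 2/||L||^2[ with ||L|| = 1

u = np.zeros(3)
for _ in range(100):
    x = r - u                                            # x_n = prox_h(r - L^T u_n), h = 0
    u = u + lam * (np.clip(u + gamma * x, -omega, omega) - u)

expected = np.sign(r) * np.maximum(np.abs(r) - omega, 0.0)
err = float(np.max(np.abs(x - expected)))
```

The proximal step here acts on the dual variable only; the primal iterate is obtained for free at each iteration, which is the appeal of this scheme when prox_g∘L has no closed form.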


10.6.4 Alternating-direction method of multipliers

Augmented Lagrangian techniques are classical approaches for solving Problem 10.22 [75, 76] (see also [74, 79]). First, observe that (10.41) is equivalent to

minimize  f(x) + g(y).   (10.48)
x∈RN, y∈RM
Lx=y

The augmented Lagrangian of index γ ∈ ]0, +∞[ associated with (10.48) is the saddle function

Lγ : RN × RM × RM → ]−∞, +∞]
(x, y, z) ↦ f(x) + g(y) + (1/γ) z⊤(Lx − y) + (1/(2γ)) ‖Lx − y‖².   (10.49)

The alternating-direction method of multipliers consists in minimizing Lγ over x, then over y, and then applying a proximal maximization step with respect to the Lagrange multiplier z. We shall assume that

L⊤L is invertible and (ri dom g) ∩ ri L(dom f) ≠ ∅.   (10.50)

By analogy with (10.9), if we denote by prox^L_f the operator which maps a point y ∈ RM to the unique minimizer of x ↦ f(x) + ‖Lx − y‖²/2, we obtain the following implementation.
Algorithm 10.25 (Alternating-direction method of multipliers (ADMM))
Fix γ > 0, y0 ∈ RM, z0 ∈ RM
For n = 0, 1, . . .
  xn = prox^L_{γf} (yn − zn)
  sn = Lxn
  yn+1 = prox_{γg} (sn + zn)
  zn+1 = zn + sn − yn+1.   (10.51)

The convergence of the sequence (xn)n∈N thus produced under assumption (10.50) has been investigated in several places, e.g., [75, 77, 79]. ADMM can be derived from an application of the Douglas–Rachford algorithm to the dual of (10.41); through this connection, convergence of (xn)n∈N to a solution to (10.41) is shown. Variants relaxing the requirements in (10.50) have been proposed [5, 39]. In image processing, ADMM has been applied under the name "alternating split Bregman algorithm." Further applications and connections are found in [2, 69, 117, 143].
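The updates (10.51) can be sketched with assumed data (not from the chapter) and L = I, for which the x-update prox^L_{γf} has a simple closed form: take f = ½‖· − c‖² and g = ω‖·‖₁, so the minimizer of f + g is the soft-thresholded vector.

```python
import numpy as np

# ADMM (10.51) with L = I, f = (1/2)||. - c||^2, g = omega*||.||_1.
# x-update: argmin gamma*f(x) + (1/2)||x - v||^2 = (v + gamma*c)/(1 + gamma).

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

c = np.array([3.0, -0.2, 1.5])
omega, gamma = 1.0, 1.0

y = np.zeros(3)
z = np.zeros(3)
for _ in range(200):
    x = (y - z + gamma * c) / (1.0 + gamma)   # x_n = prox^L_{gamma f}(y_n - z_n)
    s = x                                     # s_n = L x_n with L = I
    y = soft(s + z, gamma * omega)            # y_{n+1} = prox_{gamma g}(s_n + z_n)
    z = z + s - y                             # multiplier update

expected = soft(c, omega)                     # (2, 0, 0.5)
err = float(np.max(np.abs(x - expected)))
```

For a general L, the x-update becomes a regularized least-squares subproblem, which is where the invertibility assumption in (10.50) and efficient linear solvers come into play.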
10.7 Problems with m ≥ 2 functions

We return to the general minimization problem (10.1).

Problem 10.26 Let f1, . . . , fm be functions in Γ0(RN) such that

(ri dom f1) ∩ · · · ∩ (ri dom fm) ≠ ∅   (10.52)

and f1(x) + · · · + fm(x) → +∞ as ‖x‖ → +∞. The problem is to

minimize  f1(x) + · · · + fm(x).   (10.53)
x∈RN

Since the methods described so far are designed for m = 2 functions, we can attempt to reformulate (10.53) as a 2-function problem in the m-fold product space

H = RN × · · · × RN.   (10.54)

To this end, observe that (10.53) can be rewritten in H as

minimize  f1(x1) + · · · + fm(xm).   (10.55)
(x1,...,xm)∈H
x1=···=xm

If we denote by x = (x1, . . . , xm) a generic element in H, (10.55) is equivalent to

minimize  ιD(x) + f(x),   (10.56)
x∈H

where

  D = {(x, . . . , x) ∈ H | x ∈ RN}
  f : x ↦ f1(x1) + · · · + fm(xm).   (10.57)

We are thus back to a problem involving two functions in the larger space H. In some cases, this observation makes it possible to obtain convergent methods from the algorithms discussed in the preceding sections. For instance, the following parallel algorithm was derived from the Douglas–Rachford algorithm in [54] (see also [49] for further analysis and connections with Spingarn's splitting method [120]).
Algorithm 10.27 (Parallel proximal algorithm (PPXA))
Fix ε ∈ ]0, 1[, γ > 0, (ωi)1≤i≤m ∈ ]0, 1]^m such that Σ_{i=1}^m ωi = 1, y1,0 ∈ RN, . . . , ym,0 ∈ RN. Set x0 = Σ_{i=1}^m ωi yi,0
For n = 0, 1, . . .
  For i = 1, . . . , m
    pi,n = prox_{γ fi/ωi} yi,n
  pn = Σ_{i=1}^m ωi pi,n
  λn ∈ [ε, 2 − ε]
  For i = 1, . . . , m
    yi,n+1 = yi,n + λn (2pn − xn − pi,n)
  xn+1 = xn + λn (pn − xn).

Proposition 10.28 [54] Every sequence (xn)n∈N generated by Algorithm 10.27 converges to a solution to Problem 10.26.
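PPXA can be sketched on an assumed three-function toy problem (not from the chapter): f1 the indicator of the nonnegative orthant and f2, f3 two quadratics, so the minimizer of the sum is the projection of (a + b)/2 onto the orthant.

```python
import numpy as np

# PPXA (Algorithm 10.27), m = 3: f1 = indicator of {x >= 0},
# f2 = (1/2)||. - a||^2, f3 = (1/2)||. - b||^2, equal weights.

a, b = np.array([2.0, -3.0]), np.array([0.0, -1.0])
m, gamma, lam = 3, 1.0, 1.0
w = np.full(m, 1.0 / m)

def prox_scaled_quad(v, t, c):
    # prox of t * (1/2)||. - c||^2 at v
    return (v + t * c) / (1.0 + t)

y = [np.zeros(2) for _ in range(m)]
x = sum(wi * yi for wi, yi in zip(w, y))
for _ in range(2000):
    p = [
        np.maximum(y[0], 0.0),                    # prox of (gamma/w1) f1: a projection
        prox_scaled_quad(y[1], gamma / w[1], a),  # prox of (gamma/w2) f2
        prox_scaled_quad(y[2], gamma / w[2], b),  # prox of (gamma/w3) f3
    ]
    pbar = sum(wi * pi for wi, pi in zip(w, p))
    for i in range(m):
        y[i] = y[i] + lam * (2.0 * pbar - x - p[i])
    x = x + lam * (pbar - x)

expected = np.maximum((a + b) / 2.0, 0.0)         # (1, 0)
err = float(np.max(np.abs(x - expected)))
```

The m proximal evaluations inside the loop are independent and could be dispatched to separate workers, which is the point of the product-space construction (10.54)–(10.57).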
Example 10.29 (image recovery) In many imaging problems, we record an observation y ∈ RM of an image z ∈ RK degraded by a matrix L ∈ RM×K and corrupted by noise. A tight frame representation of the images under consideration can be used. This representation is defined through a synthesis matrix F⊤ ∈ RK×N (with K ≤ N) such that F⊤F = νI, for some ν ∈ ]0, +∞[. The original image can thus be written as z = F⊤x, where x ∈ RN is a vector of frame coefficients to be estimated. For this purpose, we consider the problem

minimize  (1/2)‖LF⊤x − y‖² + Φ(x) + tv(F⊤x),   (10.58)
x∈C

where C is a closed convex set modeling a constraint on the coefficients, the quadratic term is the standard least-squares data fidelity term, Φ is a convex regularization on the frame coefficients (e.g., a weighted ℓ1 norm), and tv is a discrete total variation function aiming at preserving piecewise smooth areas and sharp edges. Using appropriate gradient filters in the computation of tv, it is possible to decompose it as a sum of convex functions (tvi)1≤i≤q, the proximity operators of which can be expressed in closed form [54, 113]. Thus, (10.58) appears as a special case of (10.53) with f1 = ιC, f2 = Φ, f3 = ‖LF⊤ · − y‖²/2, and f3+i = tvi(F⊤ ·) for i ∈ {1, . . . , q}. Since a tight frame is employed, the corresponding proximity operators can be deduced from Table 10.1. Thus, the PPXA algorithm is well suited for solving this problem numerically.

The following problem extends Problem 10.17 to m functions.
Problem 10.30 Let f1, . . . , fm be functions in Γ0(RN) such that ∩_{i=1}^m dom fi ≠ ∅, and let r ∈ RN. The problem is to

minimize  Σ_{i=1}^m fi(x) + (1/2)‖x − r‖².   (10.59)
x∈RN

Algorithm 10.31 (Parallel Dykstra-like proximal algorithm)
Fix (ωi)1≤i≤m ∈ ]0, 1]^m such that Σ_{i=1}^m ωi = 1. Set x0 = r, z1,0 = x0, . . . , zm,0 = x0
For n = 0, 1, . . .
  For i = 1, . . . , m
    pi,n = prox_{fi} zi,n
  xn+1 = Σ_{i=1}^m ωi pi,n
  For i = 1, . . . , m
    zi,n+1 = xn+1 + zi,n − pi,n.   (10.60)

Proposition 10.32 [49] Every sequence (xn)n∈N generated by Algorithm 10.31 converges to the solution to Problem 10.30.

Problem 10.33 For every i ∈ {1, . . . , m}, let gi ∈ Γ0(RMi) and let Li ∈ RMi×N. Assume that there exists q ∈ RN such that

L1 q ∈ ri dom g1, . . . , Lm q ∈ ri dom gm,   (10.61)

that g1(L1x) + · · · + gm(Lmx) → +∞ as ‖x‖ → +∞, and that Q = Σ_{1≤i≤m} Li⊤Li is invertible. The problem is to

minimize  g1(L1x) + · · · + gm(Lmx).   (10.62)
x∈RN

Proceeding as in (10.55) and (10.56), (10.62) can be recast as

minimize  ιD(x) + g(y),   (10.63)
x∈H, y∈G
y=Lx

where

  H = RN × · · · × RN,  G = RM1 × · · · × RMm
  L : H → G : x ↦ (L1x1, . . . , Lmxm)
  g : G → ]−∞, +∞] : y ↦ g1(y1) + · · · + gm(ym).   (10.64)

In turn, a solution to (10.62) can be obtained as the limit of the sequence (xn)n∈N constructed by the following algorithm, which can be derived from the alternating-direction method of multipliers of Section 10.6.4.
Algorithm 10.34 (Simultaneous-direction method of multipliers (SDMM))
Fix γ > 0, y1,0 ∈ RM1, . . . , ym,0 ∈ RMm, z1,0 ∈ RM1, . . . , zm,0 ∈ RMm
For n = 0, 1, . . .
  xn = Q⁻¹ Σ_{i=1}^m Li⊤ (yi,n − zi,n)
  For i = 1, . . . , m
    si,n = Li xn
    yi,n+1 = prox_{γ gi} (si,n + zi,n)
    zi,n+1 = zi,n + si,n − yi,n+1.   (10.65)

This algorithm was derived from a slightly different viewpoint in [118] with a connection with the work of [71]. The computation of xn in (10.65) requires the solution of a positive-definite symmetric system of linear equations. Efficient methods for solving such systems can be found in [82].

In the above algorithms, the proximal vectors, as well as the auxiliary vectors, can be computed simultaneously at each iteration. This parallel structure is useful when the algorithms are implemented on multicore architectures. A parallel proximal algorithm is also available to solve multicomponent signal processing problems [27]. Let us add that an alternative splitting framework applicable to (10.53) was recently proposed in [67].


10.8 Conclusion

We have presented a panel of convex optimization algorithms sharing two main features. First, they employ proximity operators, a powerful generalization of the notion of a projection operator. Second, they operate by splitting the objective to be minimized into simpler functions that are dealt with individually. These methods are applicable to a wide class of signal and image processing problems ranging from restoration and reconstruction to synthesis and design. Finally, let us note that the variational problems described in (10.39), (10.46), and (10.59) consist of computing a proximity operator. Therefore the associated algorithms can be used as a subroutine to compute approximately proximity operators within a proximal splitting algorithm, provided the latter is error tolerant (see [48, 49, 51, 66, 115] for convergence properties under approximate proximal computations).


References

1. Acker, F., Prestel, M.-A.: Convergence d'un schéma de minimisation alternée. Ann. Fac. Sci. Toulouse V. Sér. Math. 2, 1–9 (1980)
2. Afonso, M.V., Bioucas-Dias, J.M., Figueiredo, M.A.T.: Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process. 19, 2345–2356 (2010)
3. Antoniadis, A., Fan, J.: Regularization of wavelet approximations. J. Amer. Statist. Assoc. 96, 939–967 (2001)
4. Antoniadis, A., Leporini, D., Pesquet, J.-C.: Wavelet thresholding for some classes of non-Gaussian noise. Statist. Neerlandica 56, 434–453 (2002)
5. Attouch, H., Soueycatt, M.: Augmented Lagrangian and proximal alternating direction methods of multipliers in Hilbert spaces – applications to games, PDE's and control. Pacific J. Optim. 5, 17–37 (2009)
6. Aubin, J.-P.: Optima and Equilibria – An Introduction to Nonlinear Analysis, 2nd edn. Springer-Verlag, New York (1998)
7. Aujol, J.-F., Aubert, G., Blanc-Féraud, L., Chambolle, A.: Image decomposition into a bounded variation component and an oscillating component. J. Math. Imaging Vision 22, 71–88 (2005)
8. Aujol, J.-F., Chambolle, A.: Dual norms and image decomposition models. Int. J. Comput. Vision 63, 85–104 (2005)
9. Bauschke, H.H., Borwein, J.M.: Dykstra's alternating projection algorithm for two sets. J. Approx. Theory 79, 418–443 (1994)
10. Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
11. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248–264 (2001)
12. Bauschke, H.H., Combettes, P.L.: A Dykstra-like algorithm for two monotone operators. Pacific J. Optim. 4, 383–391 (2008)
13. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer-Verlag, New York (2011)
14. Bauschke, H.H., Combettes, P.L., Reich, S.: The asymptotic behavior of the composition of two resolvents. Nonlinear Anal. 60, 283–301 (2005)
15. Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18, 2419–2434 (2009)
16. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
17. Bect, J., Blanc-Féraud, L., Aubert, G., Chambolle, A.: A ℓ1-unified variational framework for image restoration. Lecture Notes in Comput. Sci. 3024, 1–13 (2004)
18. Benvenuto, F., Zanella, R., Zanni, L., Bertero, M.: Nonnegative least-squares image deblurring: improved gradient projection approaches. Inverse Problems 26, 18 pp. (2010). Art. 025004
19. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont, MA (1997)
20. Bioucas-Dias, J.M., Figueiredo, M.A.T.: A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 16, 2992–3004 (2007)
21. Bouman, C., Sauer, K.: A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Trans. Image Process. 2, 296–310 (1993)
22. Boyle, J.P., Dykstra, R.L.: A method for finding projections onto the intersection of convex sets in Hilbert spaces. Lecture Notes in Statist. 37, 28–47 (1986)
23. Bredies, K.: A forward–backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space. Inverse Problems 25 (2009). Art. 015005
24. Bredies, K., Lorenz, D.A.: Linear convergence of iterative soft-thresholding. J. Fourier Anal. Appl. 14, 813–837 (2008)
25. Brègman, L.M.: The method of successive projection for finding a common point of convex sets. Soviet Math. Dokl. 6, 688–692 (1965)
26. Brézis, H., Lions, P.-L.: Produits infinis de résolvantes. Israel J. Math. 29, 329–345 (1978)
27. Briceño-Arias, L.M., Combettes, P.L.: Convex variational formulation with smooth coupling for multicomponent signal decomposition and recovery. Numer. Math. Theory Methods Appl. 2, 485–508 (2009)
28. Cai, J.-F., Chan, R.H., Shen, L., Shen, Z.: Convergence analysis of tight framelet approach for missing data recovery. Adv. Comput. Math. 31, 87–113 (2009)
29. Cai, J.-F., Chan, R.H., Shen, L., Shen, Z.: Simultaneously inpainting in image and transformed domains. Numer. Math. 112, 509–533 (2009)
30. Cai, J.-F., Chan, R.H., Shen, Z.: A framelet-based image inpainting algorithm. Appl. Comput. Harmon. Anal. 24, 131–149 (2008)
31. Censor, Y., Zenios, S.A.: Parallel Optimization: Theory, Algorithms and Applications. Oxford University Press, New York (1997)
32. Chaâri, L., Pesquet, J.-C., Bénazza-Benyahia, A., Ciuciu, P.: An iterative method for parallel MRI SENSE-based reconstruction in the wavelet domain. Med. Image Anal. 15, 185–201 (2011)
33. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vision 20, 89–97 (2004)
34. Chambolle, A.: Total variation minimization and a class of binary MRF models. Lecture Notes in Comput. Sci. 3757, 136–152 (2005)
35. Chambolle, A., DeVore, R.A., Lee, N.-Y., Lucier, B.J.: Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7, 319–335 (1998)
36. Chan, R.H., Setzer, S., Steidl, G.: Inpainting by flexible Haar-wavelet shrinkage. SIAM J. Imaging Sci. 1, 273–293 (2008)
37. Chaux, C., Combettes, P.L., Pesquet, J.-C., Wajs, V.R.: A variational formulation for frame-based inverse problems. Inverse Problems 23, 1495–1518 (2007)
38. Chaux, C., Pesquet, J.-C., Pustelnik, N.: Nested iterative algorithms for convex constrained image recovery problems. SIAM J. Imaging Sci. 2, 730–762 (2009)
39. Chen, G., Teboulle, M.: A proximal-based decomposition method for convex minimization problems. Math. Programming 64, 81–101 (1994)
40. Chen, G.H.-G., Rockafellar, R.T.: Convergence rates in forward–backward splitting. SIAM J. Optim. 7, 421–444 (1997)
41. Cheney, W., Goldstein, A.A.: Proximity maps for convex sets. Proc. Amer. Math. Soc. 10, 448–450 (1959)
42. Combettes, P.L.: The foundations of set theoretic estimation. Proc. IEEE 81, 182–208 (1993)
43. Combettes, P.L.: Inconsistent signal feasibility problems: Least-squares solutions in a product space. IEEE Trans. Signal Process. 42, 2955–2966 (1994)
44. Combettes, P.L.: The convex feasibility problem in image recovery. In: P. Hawkes (ed.) Advances in Imaging and Electron Physics, vol. 95, pp. 155–270. Academic Press, New York (1996)
45. Combettes, P.L.: Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections. IEEE Trans. Image Process. 6, 493–506 (1997)
46. Combettes, P.L.: Convexité et signal. In: Proc. Congrès de Mathématiques Appliquées et Industrielles SMAI'01, pp. 6–16 (2001)
47. Combettes, P.L.: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process. 51, 1771–1782 (2003)
48. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53, 475–504 (2004)
49. Combettes, P.L.: Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 16, 727–748 (2009)
50. Combettes, P.L., Bondon, P.: Hard-constrained inconsistent signal feasibility problems. IEEE Trans. Signal Process. 47, 2460–2468 (1999)
51. Combettes, P.L., Dũng, Đ., Vũ, B.C.: Dualization of signal recovery problems. Set-Valued Var. Anal. 18, 373–404 (2010)
52. Combettes, P.L., Pesquet, J.-C.: A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Selected Topics Signal Process. 1, 564–574 (2007)
53. Combettes, P.L., Pesquet, J.-C.: Proximal thresholding algorithm for minimization over orthonormal bases. SIAM J. Optim. 18, 1351–1376 (2007)
54. Combettes, P.L., Pesquet, J.-C.: A proximal decomposition method for solving convex variational inverse problems. Inverse Problems 24, 27 pp. (2008). Art. 065014
55. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
56. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 57, 1413–1457 (2004)
57. Daubechies, I., Teschke, G., Vese, L.: Iteratively solving linear inverse problems under general convex constraints. Inverse Probl. Imaging 1, 29–46 (2007)
58. De Mol, C., Defrise, M.: A note on wavelet-based inversion algorithms. Contemp. Math. 313, 85–96 (2002)
59. Didas, S., Setzer, S., Steidl, G.: Combined ℓ2 data and gradient fitting in conjunction with ℓ1 regularization. Adv. Comput. Math. 30, 79–99 (2009)
60. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two or three space variables. Trans. Amer. Math. Soc. 82, 421–439 (1956)
61. Dunn, J.C.: Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 53, 145–158 (1976)
62. Dupé, F.-X., Fadili, M.J., Starck, J.-L.: A proximal iteration for deconvolving Poisson noisy images using sparse representations. IEEE Trans. Image Process. 18, 310–321 (2009)
63. Durand, S., Fadili, J., Nikolova, M.: Multiplicative noise removal using L1 fidelity on frame coefficients. J. Math. Imaging Vision 36, 201–226 (2010)
64. Dykstra, R.L.: An algorithm for restricted least squares regression. J. Amer. Stat. Assoc. 78, 837–842 (1983)
65. Eckstein, J.: Parallel alternating direction multiplier decomposition of convex programs. J. Optim. Theory Appl. 80, 39–62 (1994)
66. Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Programming 55, 293–318 (1992)
67. Eckstein, J., Svaiter, B.F.: General projective splitting methods for sums of maximal monotone operators. SIAM J. Control Optim. 48, 787–811 (2009)
68. Eicke, B.: Iteration methods for convexly constrained ill-posed problems in Hilbert space. Numer. Funct. Anal. Optim. 13, 413–429 (1992)
69. Esser, E.: Applications of Lagrangian-based alternating direction methods and connections to split Bregman (2009). ftp://ftp….ucla….pdf
70. Fadili, J., Peyré, G.: Total variation projection with first order schemes. IEEE Trans. Image Process. 20, 657–669 (2011)
71. Figueiredo, M.A.T., Bioucas-Dias, J.M.: Deconvolution of Poissonian images using variable splitting and augmented Lagrangian optimization. In: Proc. IEEE Workshop Statist. Signal Process. Cardiff, UK (2009). http://arxiv.org/abs/0904…
72. Figueiredo, M.A.T., Nowak, R.D.: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 12, 906–916 (2003)
73. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Selected Topics Signal Process. 1, 586–597 (2007)
74. Fornasier, M.: Domain decomposition methods for linear inverse problems with sparsity constraints. Inverse Problems 23, 2505–2526 (2007)
75. Fortin, M., Glowinski, R. (eds.): Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. North-Holland, Amsterdam (1983)
76. Gabay, D.: Applications of the method of multipliers to variational inequalities. In: M. Fortin, R. Glowinski (eds.) Augmented Lagrangian Methods, pp. 299–331. North-Holland, Amsterdam (1983)
77. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite elements approximations. Comput. Math. Appl. 2, 17–40 (1976)
78. Glowinski, R., Marrocco, A.: Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. RAIRO Anal. Numér. 2, 41–76 (1975)
79. Glowinski, R., Le Tallec, P. (eds.): Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM, Philadelphia (1989)
80. Goldburg, M., Marks II, R.J.: Signal synthesis in the presence of an inconsistent set of constraints. IEEE Trans. Circuits Syst. 32, 647–663 (1985)
81. Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)
82. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore, MD (1996)
83. Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
84. Güler, O.: New proximal point algorithms for convex minimization. SIAM J. Optim. 2, 649–664 (1992)
85. Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM J. Optim. 19, 1107–1130 (2008)
86. Herman, G.T.: Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd edn. Springer-Verlag, London (2009)
87. Hiriart-Urruty, J.-B., Lemaréchal, C.: Fundamentals of Convex Analysis. Springer-Verlag, New York (2001)
88. Huang, Y., Ng, M.K., Wen, Y.-W.: A fast total variation minimization method for image restoration. Multiscale Model. Simul. 7, 774–795 (2008)
89. Johansson, B., Elfving, T., Kozlov, V., Censor, Y., Forssén, P.-E., Granlund, G.: The application of an oblique-projected Landweber method to a model of supervised learning. Math. Comput. Modelling 43, 892–909 (2006)
90. Kiwiel, K.C., Łopuch, B.: Surrogate projection methods for finding fixed points of firmly nonexpansive mappings. SIAM J. Optim. 7, 1084–1102 (1997)
91. Lemaire, B.: The proximal algorithm. In: J.P. Penot (ed.) New Methods in Optimization and Their Industrial Uses, International Series of Numerical Mathematics, vol. 87, pp. 73–87. Birkhäuser, Basel (1989)
92. Lemaire, B.: Itération et approximation. In: Équations aux Dérivées Partielles et Applications. Gauthiers-Villars, Paris (1998)
93. Lent, A., Tuy, H.: An iterative method for the extrapolation of band-limited functions. J. Math. Anal. Appl. 83, 554–565 (1981)
94. Levitin, E.S., Polyak, B.T.: Constrained minimization methods. U.S.S.R. Comput. Math. Math. Phys. 6, 1–50 (1966)
95. Lieutaud, J.: Approximation d'Opérateurs par des Méthodes de Décomposition. Thèse, Université de Paris (1969)
96. Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
97. Loris, I., Bertero, M., De Mol, C., Zanella, R., Zanni, L.: Accelerating gradient projection methods for ℓ1-constrained signal recovery by steplength selection rules. Appl. Comput. Harm. Anal. 27, 247–254 (2009)
98. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Rech. Opér. 4, 154–158 (1970)
99. Mercier, B.: Topics in Finite Element Solution of Elliptic Problems (Lectures on Mathematics, no. 63). Tata Institute of Fundamental Research, Bombay (1979)
100. Mercier, B.: Inéquations Variationnelles de la Mécanique (Publications Mathématiques d'Orsay, no. 80.01). Université de Paris-XI, Orsay, France (1980)
101. Moreau, J.-J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 255, 2897–2899 (1962)
102. Moreau, J.-J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 93, 273–299 (1965)
103. Nemirovsky, A.S., Yudin, D.B.: Problem Complexity and Method Efficiency in Optimization. Wiley, New York (1983)
104. Nesterov, Yu.: Smooth minimization of non-smooth functions. Math. Program. 103, 127–152 (2005)
105. Nesterov, Yu.: Gradient methods for minimizing composite objective function. CORE Discussion Paper, Université Catholique de Louvain (2007)
106. Nesterov, Yu.E.: A method of solving a convex programming problem with convergence rate o(1/k²). Soviet Math. Dokl. 27, 372–376 (1983)
107. Nobakht, R.A., Civanlar, M.R.: Optimal pulse shape design for digital communication systems by projections onto convex sets. IEEE Trans. Communications 43, 2874–2877 (1995)
108. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383–390 (1979)
109. Pesquet, J.-C., Combettes, P.L.: Wavelet synthesis by alternating projections. IEEE Trans. Signal Process. 44, 728–732 (1996)
110. Pierra, G.: Éclatement de contraintes en parallèle pour la minimisation d'une forme quadratique. Lecture Notes in Comput. Sci. 41, 200–218 (1976)
111. Pierra, G.: Decomposition through formalization in a product space. Math. Programming 28, 96–115 (1984)
112. Potter, L.C., Arun, K.S.: A dual approach to linear inverse problems with convex constraints. SIAM J. Control Optim. 31, 1080–1092 (1993)
113. Pustelnik, N., Chaux, C., Pesquet, J.-C.: Parallel proximal algorithm for image restoration using hybrid regularization. IEEE Trans. Image Process. http://arxiv.org/abs/0911.1536
114. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, NJ (1970)
115. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
116. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)
117. Setzer, S.: Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage. Lecture Notes in Comput. Sci. 5567, 464–476 (2009)
118. Setzer, S., Steidl, G., Teuber, T.: Deblurring Poissonian images by split Bregman techniques. J. Vis. Commun. Image Represent. 21, 193–199 (2010)
119. Sibony, M.: Méthodes itératives pour les équations et inéquations aux dérivées partielles non linéaires de type monotone. Calcolo 7, 65–183 (1970)
120. Spingarn, J.E.: Partial inverse of a monotone operator. Appl. Math. Optim. 10, 247–265 (1983)
121. Stark, H. (ed.): Image Recovery: Theory and Application. Academic Press, San Diego, CA (1987)
122. Stark, H., Yang, Y.: Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics. Wiley, New York (1998)
123. Steidl, G., Teuber, T.: Removing multiplicative noise by Douglas-Rachford splitting methods. J. Math. Imaging Vision 36, 168–184 (2010)
124. Thompson, A.M., Kay, J.: On some Bayesian choices of regularization parameter in image restoration. Inverse Problems 9, 749–761 (1993)
125. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc. B 58, 267–288 (1996)
126. Titterington, D.M.: General structure of regularization procedures in image reconstruction. Astronom. and Astrophys. 144, 381–387 (1985)
127. Tropp, J.A.: Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Inform. Theory 52, 1030–1051 (2006)
128. Trussell, H.J., Civanlar, M.R.: The feasible solution in signal restoration. IEEE Trans. Acoust., Speech, Signal Process. 32, 201–212 (1984)
129. Trussell, H.J., Civanlar, M.R.: The Landweber iteration and projection onto convex sets. IEEE Trans. Acoust., Speech, Signal Process. 33, 1632–1634 (1985)
130. Tseng, P.: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 29, 119–138 (1991)
131. Tseng, P.: On accelerated proximal gradient methods for convex-concave optimization (2008). http://www.math….edu/~tseng/papers/apgm.pdf
132. Varga, R.S.: Matrix Iterative Analysis, 2nd edn. Springer-Verlag, New York (2000)
133. Vese, L.A., Osher, S.J.: Image denoising and decomposition with total variation minimization and oscillatory functions. J. Math. Imaging Vision 20, 7–18 (2004)
134. Vonesch, C., Unser, M.: A fast thresholded Landweber algorithm for wavelet-regularized multidimensional deconvolution. IEEE Trans. Image Process. 17, 539–549 (2008)
135. Vonesch, C., Unser, M.: A fast multilevel algorithm for wavelet-regularized image restoration. IEEE Trans. Image Process. 18, 509–523 (2009)
136. Weiss, P., Aubert, G., Blanc-Féraud, L.: Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput. 31, 2047–2080 (2009)
137. Yamada, I., Ogura, N., Yamashita, Y., Sakaniwa, K.: Quadratic optimization of fixed points of nonexpansive mappings in Hilbert space. Numer. Funct. Anal. Optim. 19, 165–190 (1998)
138. Youla, D.C.: Generalized image restoration by the method of alternating orthogonal projections. IEEE Trans. Circuits Syst. 25, 694–702 (1978)
139. Youla, D.C.: Mathematical theory of image restoration by the method of convex projections. In: H. Stark (ed.) Image Recovery: Theory and Application, pp. 29–77. Academic Press, San Diego, CA (1987)
140. Youla, D.C., Velasco, V.: Extensions of a result on the synthesis of signals in the presence of inconsistent constraints. IEEE Trans. Circuits Syst. 33, 465–468 (1986)
141. Youla, D.C., Webb, H.: Image restoration by the method of convex projections: Part 1 – theory. IEEE Trans. Medical Imaging 1, 81–94 (1982)
142. Zeidler, E.: Nonlinear Functional Analysis and Its Applications, vols. I–V. Springer-Verlag, New York (1985–1990)
143. Zhang, X., Burger, M., Bresson, X., Osher, S.: Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3, 253–276 (2010)
144. Zhu, C.Y.: Asymptotic convergence analysis of the forward-backward splitting algorithm. Math. Oper. Res. 20, 449–464 (1995)