About the book Splitting Algorithms, Modern Operator Theory, and Applications
Title: Splitting Algorithms, Modern Operator Theory, and Applications
Persian title: الگوریتم های تقسیم، تئوری اپراتورهای مدرن و کاربردها
Series:
Editors: Bauschke, Heinz H.; Burachik, Regina S.; Luke, D. Russell (eds.)
Publisher: Springer
Year of publication: 2019
Number of pages: 500
ISBN: 9783030259389, 1761761781
Language: English
File format: PDF
File size: 5 MB
Table of Contents:
Preface......Page 6
Contents......Page 7
Contributors......Page 14
1 Convergence Rate of Proximal Inertial Algorithms Associated with Moreau Envelopes of Convex Functions......Page 17
1.1.1 Introducing (RIPA) from a Dynamic Perspective......Page 18
1.1.2 The Sequence (tk)......Page 21
1.1.3 A Model Result......Page 22
1.2 A Regularized Inertial Proximal Algorithm......Page 23
1.2.1 Preliminary Lemmas......Page 24
1.2.2 Fast Convergence of the Values......Page 29
1.2.3 Faster Convergence......Page 31
1.2.4 Convergence of the Iterates......Page 33
1.3.2 Proximal Versus Gradient Approach......Page 34
1.3.3 Link with the General Maximally Monotone Case......Page 35
1.4 The Impact of Geometry on the Rates of Convergence......Page 36
1.5 Stability with Respect to Perturbations, Errors......Page 43
1.6 A Regularized Inertial Proximal-Gradient Algorithm......Page 50
Some Properties of the Moreau Envelope......Page 55
Auxiliary Results......Page 57
References......Page 59
2 Constraint Splitting and Projection Methods for Optimal Control of Double Integrator......Page 61
2.1 Introduction......Page 62
2.2 Minimum-Energy Control of Double Integrator......Page 63
2.3 Projections......Page 68
2.4 Best Approximation Algorithms......Page 71
2.4.2 Douglas–Rachford Algorithm......Page 72
2.4.3 Aragón Artacho–Campoy Algorithm......Page 73
2.5.1 The Algorithms......Page 74
2.5.2 Numerical Experiments......Page 75
2.6 Conclusion and Open Problems......Page 82
References......Page 83
3 Numerical Explorations of Feasibility Algorithms for Finding Points in the Intersection of Finite Sets......Page 85
3.1 Introduction......Page 86
3.2 The Four Constellations......Page 87
3.3 The Four Feasibility Algorithms......Page 89
3.4 Setting Up the Numerical Explorations......Page 90
3.5 Determining the “best” Parameter λbest......Page 92
3.6 Tracking Orbits......Page 93
3.6.1 Few Sets with Few Points......Page 94
3.6.2 Few Sets with Many Points......Page 95
3.6.3 Many Sets with Few Points......Page 96
3.6.4 Many Sets with Many Points......Page 97
3.7 Local and Global Behaviour......Page 98
3.7.1 Few Sets with Few Points......Page 99
3.7.2 Few Sets with Many Points......Page 100
3.7.3 Many Sets with Few Points......Page 101
3.7.4 Many Sets with Many Points......Page 102
3.9 Concluding Remarks......Page 103
References......Page 105
4.1 Introduction......Page 107
4.2 Notation and Preliminaries......Page 109
4.3.1 Problem Formulation and Algorithm......Page 111
4.3.2 Convergence Analysis......Page 115
4.4 Convergence Rates in the Case When A+C Is Strongly Monotone......Page 125
References......Page 127
5.1 Introduction......Page 129
5.2 Preliminaries......Page 131
5.3.1 Morozov–Entropy Regularization......Page 133
5.3.2 Tikhonov–Entropy Regularization......Page 134
5.3.3 Tikhonov–Kullback–Leibler Regularization......Page 136
5.3.4 Nonquadratic Data Misfit......Page 137
5.3.5 Measure Space Solutions......Page 138
5.3.7 Ivanov Regularization......Page 139
5.4 Iterative Methods......Page 140
5.4.1 Projected Landweber Method for Non-negative Solutions of Linear Ill-Posed Equations......Page 141
5.4.2 EM Method for Integral Equations with Non-negative Data and Kernel......Page 143
5.4.3.1 EM Algorithms with Smoothing Steps......Page 146
5.4.3.2 EM-Kaczmarz Type Algorithms......Page 147
References......Page 149
6 Characterizations of Super-Regularity and Its Variants......Page 152
6.2 Normal Cones and Clarke Regularity......Page 153
6.3 Super-Regularity and Subsmoothness......Page 157
6.4 Regularity of Functions......Page 160
6.4.1 Lipschitz Continuous Functions......Page 162
6.4.2 Non-Lipschitzian Functions......Page 163
References......Page 166
7.1 Introduction......Page 168
7.2 Hildebrand–Graves Theorem......Page 171
7.3 The Lyusternik–Graves Theorem......Page 174
7.4 Bartle–Graves Theorem......Page 176
References......Page 178
8 Block-Wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-Block Convex Programming......Page 179
8.1 Introduction......Page 180
8.2.1 Variational Inequality Characterization......Page 187
8.2.2 Some Properties......Page 189
8.3.1 The New Algorithm......Page 190
8.3.2 Some Remarks......Page 192
8.4.1 Some Matrices......Page 193
8.4.2 A Prediction-Correction Reformulation of (8.3.3)......Page 197
8.4.3 An Illustrative Example of Lemma 8.4.3......Page 201
8.4.4 Convergence Proof......Page 203
8.5.1 Convergence Rate in the Ergodic Sense......Page 205
8.5.2 Convergence Rate in a Nonergodic Sense......Page 207
8.6 Some Special Cases......Page 210
8.6.1 The ADMM-GBS in HTY-SIOPT......Page 213
8.6.2 The Splitting Method in HTY-IMA......Page 214
8.7 A Refined Version of Algorithm 1 with Calculated Step Sizes......Page 218
8.8 A Linearized Splitting Block-Wise ADMM with Gaussian Back Substitution......Page 220
8.8.1 Algorithm......Page 221
8.8.2 Convergence Analysis......Page 222
8.9.1 Convergence of Algorithm 1......Page 227
8.9.2 Convergence of Algorithm 1 with Iteratively Calculated Step Sizes......Page 231
8.9.3 Convergence of Algorithm 2......Page 232
8.10 Conclusions......Page 236
References......Page 238
9.1 Introduction......Page 241
9.2 Notation and Background......Page 242
9.3 Main Convergence Result......Page 244
9.4 Applications to the Forward-Backward Algorithm......Page 248
9.5 A Composite Monotone Inclusion Problem......Page 253
References......Page 256
10.1 Introduction......Page 257
10.2 Dynamics......Page 258
10.2.1 Why Basic Assumptions?......Page 260
10.3 Asymptotic Stability......Page 261
10.4 Pointwise Asymptotic Stability: Some Examples......Page 264
10.4.2 Steepest Descent for Convex Functions, and Beyond......Page 265
10.4.2.1 Differential Inclusions Given by Maximal Monotone Mappings......Page 266
10.4.2.2 Steepest Descent for a Convex Function......Page 267
10.4.2.3 Saddle-Point Dynamics for a Convex-Concave Function......Page 269
10.4.3 Consensus Algorithms......Page 270
10.5.1 Sufficient Conditions......Page 272
10.5.2 Reachable Sets and Limits of Solutions......Page 274
10.5.3 Converse Set-Valued Lyapunov Results and Robustness......Page 275
References......Page 278
11.1 Introduction......Page 282
11.2 Preliminaries......Page 285
11.3 The Original Proximal Point Type Method for Vector Optimization Problems......Page 289
11.4.1 Algorithms with Bregman-Type Distances......Page 295
11.4.2 Algorithms with Viscosity Functions and Tikhonov Type Regularizations......Page 297
11.4.3 Algorithms with Lyapunov-Type Distances......Page 300
11.4.4 Algorithms with Hybrid and Inertial Steps......Page 302
11.5 Proximal Point Type Algorithms for Other Vector Optimization Problems......Page 308
11.5.1 Vector-Minimization of Sums of Vector Functions......Page 309
11.5.2 Vector-Minimization of Differences of Cone-Convex Vector Functions......Page 312
11.6 Conclusions and Further Research Directions......Page 313
Appendix: Proof of Theorem 11.17......Page 315
References......Page 318
12.1 Introduction......Page 322
Notations......Page 323
12.2 Frank-and-Wolfe Sets......Page 324
12.3 Frank-and-Wolfe Theorems for Restricted Classes of Quadratic Functions......Page 326
12.4 Motzkin Type Sets......Page 331
12.5 Invariance Properties of Motzkin FW-Sets......Page 336
12.6 Parabolic Sets......Page 339
References......Page 341
13.1 Introduction......Page 343
13.2 Three Techniques......Page 344
13.3 ADMM and Douglas–Rachford Method......Page 348
13.4 ADMM and Peaceman–Rachford Method......Page 350
13.5 Chambolle–Pock and Douglas–Rachford Methods......Page 353
Appendix 1......Page 356
Appendix 3......Page 358
Appendix 4......Page 359
References......Page 360
14.1 Introduction......Page 362
14.2 Banach Space Notation and Definitions......Page 363
14.3 Quasidensity......Page 364
14.4 Monotone Multifunctions: Basic Results......Page 366
14.5 Quasidensity and the Classification of Maximally Monotone Multifunctions......Page 368
14.6 The Hilbert Space Case......Page 369
14.7 The Reflexive Banach Space Case......Page 370
14.8 The Nonreflexive Banach Space Case......Page 371
References......Page 372
15.1 Introduction......Page 374
15.1.2 Related Work......Page 376
15.2.1 Notation and Known Facts......Page 377
15.2.2 Generalized Differentiability......Page 380
15.3.1 Proximal Point and Moreau Envelope......Page 382
15.3.2 Forward-Backward Splitting......Page 385
15.3.3 Error Bounds and Quadratic Growth......Page 386
15.4 Forward-Backward Envelope......Page 388
15.4.1 Basic Properties......Page 389
15.4.2 Further Equivalence Properties......Page 394
15.4.3 Second-Order Properties......Page 395
15.5 Forward-Backward Truncated-Newton Algorithm (FBTN)......Page 399
15.5.1 Subsequential and Linear Convergence......Page 402
15.5.2 Superlinear Convergence......Page 403
15.6 Generalized Jacobians of Proximal Mappings......Page 408
15.6.1 Properties......Page 409
15.6.2 Indicator Functions......Page 412
15.6.3 Norms......Page 416
15.7 Conclusions......Page 417
Appendix: Auxiliary Results......Page 418
References......Page 419
16.1 Introduction......Page 424
16.2 Preliminary......Page 435
16.2.1 Selected Elements of Convex Analysis and Optimization......Page 436
16.2.2 Selected Elements of Fixed Point Theory of Nonexpansive Operators for Application to Hierarchical Convex Optimization......Page 440
16.2.3 Proximal Splitting Algorithms and Their Fixed Point Characterizations......Page 444
16.2.4 Hybrid Steepest Descent Method......Page 447
16.3 Hierarchical Convex Optimization with Proximal Splitting Operators......Page 449
16.3.1 Plugging DRS Operators into Hybrid Steepest Descent Method......Page 450
16.3.2 Plugging LAL Operator into Hybrid Steepest Descent Method......Page 457
16.3.3 Conditions for Boundedness of Fixed Point Sets of DRS and LAL Operators......Page 461
16.4.1 Support Vector Machine......Page 462
16.4.2 Optimal Margin Classifier with Least Empirical Hinge Loss......Page 465
16.4.3 Numerical Experiment: Margin Maximization with Least Empirical Hinge Loss......Page 468
16.5.1 TREX: A Nonconvex Automatic Sparsity Control of Lasso......Page 469
16.5.2 Enhancement of Generalized TREX Solutions with Hierarchical Optimization......Page 474
16.5.3 Numerical Experiment: Hierarchical TREX2......Page 476
16.6 Concluding Remarks......Page 477
B: Proof of Proposition 16.10(a)–(d)......Page 480
C: Proof of Theorem 16.15......Page 483
D: Proof of Theorem 16.17......Page 484
E: Proof of Theorem 16.19......Page 485
F: Proof of Theorem 16.23......Page 486
G: Proof of Lemma 16.27......Page 489
H: Proof of Theorem 16.28......Page 490
References......Page 494