1. Convex Optimization Models: An Overview
1.1. Lagrange Duality
1.1.1. Separable Problems – Decomposition
1.1.2. Partitioning
1.2. Fenchel Duality and Conic Programming
1.2.1. Linear Conic Problems
1.2.2. Second Order Cone Programming
1.2.3. Semidefinite Programming
1.3. Additive Cost Problems
1.4. Large Number of Constraints
1.5. Exact Penalty Functions
1.6. Notes, Sources, and Exercises
2. Optimization Algorithms: An Overview
2.1. Iterative Descent Algorithms
2.1.1. Differentiable Cost Function Descent – Unconstrained Problems
2.1.2. Constrained Problems – Feasible Direction Methods
2.1.3. Nondifferentiable Problems – Subgradient Methods
2.1.4. Alternative Descent Methods
2.1.5. Incremental Algorithms
2.1.6. Distributed Asynchronous Iterative Algorithms
2.2. Approximation Methods
2.2.1. Polyhedral Approximation
2.2.2. Penalty, Augmented Lagrangian, and Interior Point Methods
2.2.3. Proximal Algorithm, Bundle Methods, and Tikhonov Regularization
2.2.4. Alternating Direction Method of Multipliers
2.2.5. Smoothing of Nondifferentiable Problems
2.3. Notes, Sources, and Exercises
3. Subgradient Methods
3.1. Subgradients of Convex Real-Valued Functions
3.1.1. Characterization of the Subdifferential
3.2. Convergence Analysis of Subgradient Methods
3.3. ε-Subgradient Methods
3.3.1. Connection with Incremental Subgradient Methods
3.4. Notes, Sources, and Exercises
4. Polyhedral Approximation Methods
4.1. Outer Linearization – Cutting Plane Methods
4.2. Inner Linearization – Simplicial Decomposition
4.3. Duality of Outer and Inner Linearization
4.4. Generalized Polyhedral Approximation
4.5. Generalized Simplicial Decomposition
4.5.1. Differentiable Cost Case
4.5.2. Nondifferentiable Cost and Side Constraints
4.6. Polyhedral Approximation for Conic Programming
4.7. Notes, Sources, and Exercises
5. Proximal Algorithms
5.1. Basic Theory of Proximal Algorithms
5.1.1. Convergence
5.1.2. Rate of Convergence
5.1.3. Gradient Interpretation
5.1.4. Fixed Point Interpretation, Overrelaxation, and Generalization
5.2. Dual Proximal Algorithms
5.2.1. Augmented Lagrangian Methods
5.3. Proximal Algorithms with Linearization
5.3.1. Proximal Cutting Plane Methods
5.3.2. Bundle Methods
5.3.3. Proximal Inner Linearization Methods
5.4. Alternating Direction Methods of Multipliers
5.4.1. Applications in Machine Learning
5.4.2. ADMM Applied to Separable Problems
5.5. Notes, Sources, and Exercises
6. Additional Algorithmic Topics
6.1. Gradient Projection Methods
6.2. Gradient Projection with Extrapolation
6.2.1. An Algorithm with Optimal Iteration Complexity
6.2.2. Nondifferentiable Cost – Smoothing
6.3. Proximal Gradient Methods
6.4. Incremental Subgradient Proximal Methods
6.4.1. Convergence for Methods with Cyclic Order
6.4.2. Convergence for Methods with Randomized Order
6.4.3. Application in Specially Structured Problems
6.4.4. Incremental Constraint Projection Methods
6.5. Coordinate Descent Methods
6.5.1. Variants of Coordinate Descent
6.5.2. Distributed Asynchronous Coordinate Descent
6.6. Generalized Proximal Methods
6.7. ε-Descent and Extended Monotropic Programming
6.7.1. ε-Subgradients
6.7.2. ε-Descent Method
6.7.3. Extended Monotropic Programming Duality
6.7.4. Special Cases of Strong Duality
6.8. Interior Point Methods
6.8.1. Primal-Dual Methods for Linear Programming
6.8.2. Interior Point Methods for Conic Programming
6.8.3. Central Cutting Plane Methods
6.9. Notes, Sources, and Exercises
Appendix A: Mathematical Background
A.1. Linear Algebra
A.2. Topological Properties
A.3. Derivatives
A.4. Convergence Theorems
Appendix B: Convex Optimization Theory: A Summary
B.1. Basic Concepts of Convex Analysis
B.2. Basic Concepts of Polyhedral Convexity
B.3. Basic Concepts of Convex Optimization
B.4. Geometric Duality Framework
B.5. Duality and Optimization
References
Index