Semidefinite Relaxations Approach to Polynomial Optimization and Its Extensions PDF Download

Semidefinite Relaxations Approach to Polynomial Optimization and Its Extensions

Semidefinite Relaxations Approach to Polynomial Optimization and Its Extensions PDF Author: Li Wang
Publisher:
ISBN: 9781321084412
Category :
Languages : en
Pages : 119

Book Description
The goal of this thesis is to study a special class of nonlinear programming problems, namely polynomial optimization, in which both the objective and the constraints are polynomials. This kind of problem is NP-hard in general, even if the objective is a nonconvex quadratic and all constraints are linear. The semidefinite programming (SDP) relaxation approach, based on sum-of-squares representations, provides strong tools for globally solving polynomial optimization problems with finitely many constraints. We first review two SDP relaxation methods for such problems: the classic Lasserre SDP relaxation and the Jacobian SDP relaxation. In general, these methods relax the polynomial optimization problem to a sequence of SDPs whose optima are lower bounds on the global minimum and converge to it under certain assumptions. We also prove that the nonsingularity assumption in the Jacobian SDP relaxation method can be weakened to allow finitely many singularities. Then we study the problem of minimizing a rational function. We reformulate the problem via homogenization; the original and reformulated problems are shown to be equivalent under some generic conditions. The constraint set of the reformulated problem may not be compact, and Lasserre's SDP relaxation may not converge in finitely many steps, so we apply the Jacobian SDP relaxation to solve the reformulated polynomial optimization problem. Numerical examples are presented to show the efficiency of this method. Next, we consider semi-infinite polynomial programming (SIPP). We propose an exchange algorithm with SDP relaxations to globally solve SIPP problems with a compact index set, and we extend the proposed method to SIPP problems with noncompact index sets via homogenization; the reformulated problem is equivalent to the original SIPP problem under some generic conditions. Finally, we study the problem of finding best rank-1 approximations of both symmetric and nonsymmetric tensors. For symmetric tensors this is equivalent to optimizing homogeneous polynomials over unit spheres; for nonsymmetric tensors it is equivalent to optimizing multi-quadratic forms over multi-spheres. We use the semidefinite relaxation approach to solve these polynomial optimization problems. Extensive numerical experiments show that this approach is practical for computing best rank-1 approximations.
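
To make the basic mechanism concrete, here is a minimal sketch of the simplest sum-of-squares relaxation mentioned above: maximize a constant gamma such that f(x) - gamma is a sum of squares, which becomes an SDP in the Gram matrix Q. The instance f(x) = x^4 - 3x^2 + x and the use of the cvxpy modeling package with the SCS solver are illustrative assumptions, not part of the thesis.

```python
import cvxpy as cp
import numpy as np

# Coefficients of f(x) = x^4 - 3x^2 + x, ordered from degree 0 up to degree 4.
c = np.array([0.0, 1.0, -3.0, 0.0, 1.0])
n = 3  # monomial basis [1, x, x^2], so the Gram matrix Q is 3 x 3

Q = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable()

constraints = [Q >> 0]  # Q positive semidefinite  <=>  f - gamma is a sum of squares
for k in range(2 * (n - 1) + 1):  # match coefficients of x^0, ..., x^4
    coeff_k = sum(Q[i, k - i] for i in range(n) if 0 <= k - i < n)
    constraints.append(coeff_k == float(c[k]) - (gamma if k == 0 else 0))

prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve(solver=cp.SCS)
# For univariate polynomials, nonnegative = SOS, so this bound equals the global minimum.
print("SOS lower bound on min f:", gamma.value)
```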

Moments, Positive Polynomials and Their Applications

Moments, Positive Polynomials and Their Applications PDF Author: Jean-Bernard Lasserre
Publisher: World Scientific
ISBN: 1848164467
Category : Mathematics
Languages : en
Pages : 384

Book Description
Contents:
1. The generalized moment problem. 1.1. Formulations. 1.2. Duality theory. 1.3. Computational complexity. 1.4. Summary. 1.5. Exercises. 1.6. Notes and sources
2. Positive polynomials. 2.1. Sum of squares representations and semi-definite optimization. 2.2. Nonnegative versus s.o.s. polynomials. 2.3. Representation theorems: univariate case. 2.4. Representation theorems: multivariate case. 2.5. Polynomials positive on a compact basic semi-algebraic set. 2.6. Polynomials nonnegative on real varieties. 2.7. Representations with sparsity properties. 2.8. Representation of convex polynomials. 2.9. Summary. 2.10. Exercises. 2.11. Notes and sources
3. Moments. 3.1. The one-dimensional moment problem. 3.2. The multi-dimensional moment problem. 3.3. The K-moment problem. 3.4. Moment conditions for bounded density. 3.5. Summary. 3.6. Exercises. 3.7. Notes and sources
4. Algorithms for moment problems. 4.1. The overall approach. 4.2. Semidefinite relaxations. 4.3. Extraction of solutions. 4.4. Linear relaxations. 4.5. Extensions. 4.6. Exploiting sparsity. 4.7. Summary. 4.8. Exercises. 4.9. Notes and sources. 4.10. Proofs
5. Global optimization over polynomials. 5.1. The primal and dual perspectives. 5.2. Unconstrained polynomial optimization. 5.3. Constrained polynomial optimization: semidefinite relaxations. 5.4. Linear programming relaxations. 5.5. Global optimality conditions. 5.6. Convex polynomial programs. 5.7. Discrete optimization. 5.8. Global minimization of a rational function. 5.9. Exploiting symmetry. 5.10. Summary. 5.11. Exercises. 5.12. Notes and sources
6. Systems of polynomial equations. 6.1. Introduction. 6.2. Finding a real solution to systems of polynomial equations. 6.3. Finding all complex and/or all real solutions: a unified treatment. 6.4. Summary. 6.5. Exercises. 6.6. Notes and sources
7. Applications in probability. 7.1. Upper bounds on measures with moment conditions. 7.2. Measuring basic semi-algebraic sets. 7.3. Measures with given marginals. 7.4. Summary. 7.5. Exercises. 7.6. Notes and sources
8. Markov chains applications. 8.1. Bounds on invariant measures. 8.2. Evaluation of ergodic criteria. 8.3. Summary. 8.4. Exercises. 8.5. Notes and sources
9. Application in mathematical finance. 9.1. Option pricing with moment information. 9.2. Option pricing with a dynamic model. 9.3. Summary. 9.4. Notes and sources
10. Application in control. 10.1. Introduction. 10.2. Weak formulation of optimal control problems. 10.3. Semidefinite relaxations for the OCP. 10.4. Summary. 10.5. Notes and sources
11. Convex envelope and representation of convex sets. 11.1. The convex envelope of a rational function. 11.2. Semidefinite representation of convex sets. 11.3. Algebraic certificates of convexity. 11.4. Summary. 11.5. Exercises. 11.6. Notes and sources
12. Multivariate integration. 12.1. Integration of a rational function. 12.2. Integration of exponentials of polynomials. 12.3. Maximum entropy estimation. 12.4. Summary. 12.5. Exercises. 12.6. Notes and sources
13. Min-max problems and Nash equilibria. 13.1. Robust polynomial optimization. 13.2. Minimizing the sup of finitely many rational functions. 13.3. Application to Nash equilibria. 13.4. Exercises. 13.5. Notes and sources
14. Bounds on linear PDE. 14.1. Linear partial differential equations. 14.2. Notes and sources

An Introduction to Polynomial and Semi-Algebraic Optimization

An Introduction to Polynomial and Semi-Algebraic Optimization PDF Author: Jean Bernard Lasserre
Publisher: Cambridge University Press
ISBN: 1316240398
Category : Mathematics
Languages : en
Pages : 355

Book Description
This is the first comprehensive introduction to the powerful moment approach for solving global optimization problems (and some related problems) described by polynomials (and even semi-algebraic functions). In particular, the author explains how to use relatively recent results from real algebraic geometry to provide a systematic numerical scheme for computing the optimal value and global minimizers. Indeed, among other things, powerful positivity certificates from real algebraic geometry allow one to define an appropriate hierarchy of semidefinite (SOS) relaxations or LP relaxations whose optimal values converge to the global minimum. Several extensions to related optimization problems are also described. Graduate students, engineers and researchers entering the field can use this book to understand, experiment with and master this new approach through the simple worked examples provided.
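
As a small illustration of the hierarchy the book describes, the sketch below writes down the first-order moment relaxation (the Shor relaxation) of minimizing x1*x2 over the unit disk; the moment matrix is indexed by the monomial basis [1, x1, x2]. The choice of instance and the use of cvxpy with the SCS solver are assumptions for illustration only; on this particular instance the first relaxation already attains the global minimum -1/2.

```python
import cvxpy as cp

# Order-1 moment (Shor) relaxation for:  min x1*x2  s.t.  x1^2 + x2^2 <= 1
# The moment matrix M is indexed by [1, x1, x2]; its entries are relaxed moments.
M = cp.Variable((3, 3), symmetric=True)

constraints = [
    M >> 0,                    # moment matrix is positive semidefinite
    M[0, 0] == 1,              # normalization: the moment of the constant 1
    M[1, 1] + M[2, 2] <= 1,    # relaxed constraint: moments of x1^2 + x2^2
]
prob = cp.Problem(cp.Minimize(M[1, 2]), constraints)  # relaxed objective: moment of x1*x2
prob.solve(solver=cp.SCS)
print("order-1 lower bound:", prob.value)  # ~ -0.5, the true global minimum here
```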

Sparse Polynomial Optimization: Theory And Practice

Sparse Polynomial Optimization: Theory And Practice PDF Author: Victor Magron
Publisher: World Scientific
ISBN: 1800612966
Category : Mathematics
Languages : en
Pages : 223

Book Description
Many applications, including computer vision, computer arithmetic, deep learning, entanglement in quantum information, graph theory and energy networks, can be successfully tackled within the framework of polynomial optimization, an emerging field with growing research efforts over the last two decades. One key advantage of these techniques is their ability to model a wide range of problems using optimization formulations. Polynomial optimization relies heavily on the moment-sums of squares (moment-SOS) approach proposed by Lasserre, which provides certificates for positive polynomials. On the practical side, however, there is 'no free lunch', and such optimization methods usually suffer from severe scalability issues. Fortunately, for many applications, including those mentioned above, one can exploit the inherent structure arising from the cost and constraints describing the problem. This book presents several research efforts to resolve this scientific challenge with important computational implications. It develops alternative optimization schemes that scale well in terms of computational complexity, at least for some identified classes of problems. It also features a unified modeling framework to handle a wide range of applications involving both commutative and noncommutative variables, and to solve large-scale instances concretely. Readers will find a practical section dedicated to the use of available open-source software libraries. This interdisciplinary monograph is essential reading for students, researchers and professionals interested in solving optimization problems with polynomial input data.
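
As a toy illustration of the kind of structure exploitation discussed here, the snippet below builds the correlative-sparsity graph of a polynomial (two variables are linked when they appear in a common term); sparsity-adapted moment-SOS hierarchies then work with smaller blocks indexed by the cliques of a chordal extension of this graph. The term encoding and the helper itself are hypothetical and not taken from the book.

```python
from itertools import combinations

# f(x) = x0^2*x1 + x1*x2^2 + x3^4, encoded as a list of terms,
# each term a dict {variable index: exponent}.
terms = [{0: 2, 1: 1}, {1: 1, 2: 2}, {3: 4}]

# Link every pair of variables that occurs together in some term.
edges = set()
for t in terms:
    for i, j in combinations(sorted(t), 2):
        edges.add((i, j))

print("correlative-sparsity edges:", edges)  # {(0, 1), (1, 2)}; x3 is isolated
```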

A New Relaxation Technique for Polynomial Optimization and Spectrahedral Geometry Problems

A New Relaxation Technique for Polynomial Optimization and Spectrahedral Geometry Problems PDF Author: Christian Trabandt
Publisher:
ISBN:
Category :
Languages : en
Pages : 94

Book Description


Optimization of Polynomials in Non-Commuting Variables

Optimization of Polynomials in Non-Commuting Variables PDF Author: Sabine Burgdorf
Publisher: Springer
ISBN: 3319333380
Category : Mathematics
Languages : en
Pages : 118

Book Description
This book presents recent results on positivity and optimization of polynomials in non-commuting variables. Researchers in non-commutative algebraic geometry, control theory, systems engineering, optimization, quantum physics and information science will find the unified notation and the mixture of algebraic geometry and mathematical programming useful. Theoretical results are matched with algorithmic considerations, and several examples, together with information on how to use the NCSOStools open-source package, show how to obtain the results in practice. Results are presented on detecting eigenvalue and trace positivity of polynomials in non-commuting variables using the Newton chip method and the Newton cyclic chip method, on relaxations for constrained and unconstrained optimization problems, on semidefinite programming formulations of these relaxations and the finite convergence of the corresponding hierarchies, and on the practical efficiency of the algorithms.

Contributions to the Moment-SOS Approach in Global Polynomial Optimization

Contributions to the Moment-SOS Approach in Global Polynomial Optimization PDF Author: Thanh Tung Phan
Publisher:
ISBN:
Category :
Languages : en
Pages : 119

Book Description
Polynomial optimization is concerned with optimization problems of the form (P): f* = min { f(x) : x in K }, where K is a basic semi-algebraic set in R^n defined by K = { x in R^n : gj(x) <= 0 for all j }, and f is a real polynomial in the n variables x = (x1, x2, ..., xn). In this thesis we are interested in problems (P) where symmetries and/or structured sparsity are not easy to detect or to exploit, and where only a few (or even no) semidefinite relaxations of the moment-SOS approach can be implemented. The issue we investigate is: how can the moment-SOS methodology still be used to help solve such a problem (P)? We provide two applications of the moment-SOS approach to help solve (P) in two different contexts.

* In a first contribution we consider MINLP problems on a box B = [xL, xU] of R^n and propose a moment-SOS approach to construct polynomial convex underestimators for the objective function f (if nonconvex) and for -gj whenever, in the constraint gj(x) <= 0, the polynomial gj is not concave. We work in the context where one wishes to find a convex underestimator of a nonconvex polynomial f of a few variables on a box B of R^n. The novelty with respect to previous works on this topic is that we compute a polynomial convex underestimator p of f that minimizes an important tightness criterion, namely the L1 norm of (f - h) on B, over all convex polynomials h of fixed degree d. In previous works for computing a convex underestimator L of f, this tightness criterion is not taken into account directly. It turns out that the moment-SOS approach is well suited to computing a polynomial convex underestimator p that minimizes this tightness criterion, and numerical experiments on a sample of non-trivial examples show that p outperforms L not only with respect to the tightness score but also in terms of the resulting lower bounds obtained by minimizing p and L, respectively, on B. Similar improvements also occur when we use the moment-SOS underestimator instead of the αBB one in refinements of the αBB method.

* In a second contribution we propose an algorithm that also uses an optimal solution of a semidefinite relaxation in the moment-SOS hierarchy (in fact a slight modification of it) to provide a feasible solution for the initial optimization problem, but with no rounding procedure. In the present context, we treat the first variable x1 of x = (x1, x2, ..., xn) as a parameter in some bounded interval Y of R. Notice that f* = min { J(y) : y in Y }, where J is the optimal value function J(y) := inf { f(x) : x in K, x1 = y }. That is, one has reduced the original n-dimensional optimization problem (P) to an equivalent one-dimensional optimization problem on an interval. Of course, determining the optimal value function J is even harder than solving (P), as one has to determine a function (instead of a point in R^n), an infinite-dimensional problem. The idea is to approximate J(y) on Y by a univariate polynomial p(y) of degree d; fortunately, computing such a polynomial is possible by solving a semidefinite relaxation associated with the parametric optimization problem. The degree d of p(y) is related to the size of this semidefinite relaxation: the higher the degree d, the better the approximation of J(y) by p(y), and in fact one may show that p(y) converges to J(y) in a strong sense on Y as d increases. Of course, the resulting semidefinite relaxation becomes harder (or impossible) to solve as d increases, so in practice d is fixed to a small value. Once the univariate polynomial p(y) has been determined, one computes x1* in Y that minimizes p(y) on Y, a convex optimization problem that can be solved efficiently. The process is iterated to compute x2* in a similar manner, and so on, until a point x* in R^n has been computed. Finally, as x* is not feasible in general, we use x* as a starting point for a local optimization procedure to find a final feasible point x in K. When K is convex, the following variant is implemented: after having computed x1* as indicated, x2* is computed with x1 fixed at the value x1*, then x3* is computed with x1 and x2 fixed at the values x1* and x2* respectively, and so on, so that the resulting point x* is feasible, i.e., x* is in K. The same variant applies to 0/1 programs for which feasibility is easy to detect, such as MAXCUT, k-CLUSTER or 0/1-KNAPSACK problems.
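
As a small companion to the second contribution, here is a sketch of the inner step in which the univariate surrogate p(y) is minimized over the interval Y once its coefficients are available: it simply compares the real stationary points of p inside Y with the endpoints. The helper name, the example coefficients and the use of numpy are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np

def minimize_univariate(coeffs, lo, hi):
    """Minimize a univariate polynomial on [lo, hi].

    coeffs: polynomial coefficients, highest degree first (numpy convention).
    """
    p = np.poly1d(coeffs)
    stationary = p.deriv().roots  # roots of p'(y)
    candidates = [lo, hi] + [r.real for r in stationary
                             if abs(r.imag) < 1e-9 and lo <= r.real <= hi]
    best = min(candidates, key=p)  # evaluate p at each candidate, keep the smallest
    return best, p(best)

# Example: p(y) = y^4 - 3y^2 + y on Y = [-2, 2]
y_star, p_star = minimize_univariate([1, 0, -3, 1, 0], -2.0, 2.0)
print("minimizer:", y_star, "value:", p_star)
```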

Semidefinite Optimization and Convex Algebraic Geometry

Semidefinite Optimization and Convex Algebraic Geometry PDF Author: Grigoriy Blekherman
Publisher: SIAM
ISBN: 1611972280
Category : Mathematics
Languages : en
Pages : 487

Book Description
An accessible introduction to convex algebraic geometry and semidefinite optimization. For graduate students and researchers in mathematics and computer science.

Handbook of Semidefinite Programming

Handbook of Semidefinite Programming PDF Author: Henry Wolkowicz
Publisher: Springer Science & Business Media
ISBN: 1461543819
Category : Business & Economics
Languages : en
Pages : 660

Book Description
Semidefinite programming (SDP) is one of the most exciting and active research areas in optimization. It has attracted, and continues to attract, researchers with very diverse backgrounds, including experts in convex programming, linear algebra, numerical optimization, combinatorial optimization, control theory, and statistics. This tremendous research activity has been prompted by the discovery of important applications in combinatorial optimization and control theory, the development of efficient interior-point algorithms for solving SDP problems, and the depth and elegance of the underlying optimization theory. The Handbook of Semidefinite Programming offers an advanced and broad overview of the current state of the field. It contains nineteen chapters written by leading experts on the subject. The chapters are organized in three parts: Theory, Algorithms, and Applications and Extensions.
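
For readers new to the area, the following toy example fixes what a semidefinite program looks like in standard primal form: minimize trace(C X) subject to a linear trace constraint and X positive semidefinite. The data and the use of cvxpy with the SCS solver are made up for illustration and are not taken from the handbook.

```python
import cvxpy as cp
import numpy as np

# Illustrative problem data (not from the handbook).
C = np.array([[2.0, 1.0], [1.0, 3.0]])
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = 1.0

X = cp.Variable((2, 2), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(A @ X) == b, X >> 0])
prob.solve(solver=cp.SCS)
# With A = I and b = 1, the optimal value equals the smallest eigenvalue of C.
print("optimal value:", prob.value)
print("optimal X:", X.value)
```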

Polynomial Optimization, Moments, and Applications

Polynomial Optimization, Moments, and Applications PDF Author: Michal Kočvara
Publisher: Springer Nature
ISBN: 3031386590
Category : Mathematics
Languages : en
Pages : 274

Book Description
Polynomial optimization is a fascinating field of study that has revolutionized the way we approach nonlinear problems described by polynomial constraints. The applications of this field range from production planning processes to transportation, energy consumption, and resource control. This introductory book explores the latest research developments in polynomial optimization, presenting the results of cutting-edge interdisciplinary work conducted by the European network POEMA. For the past four years, experts from various fields, including algebraists, geometers, computer scientists, and industrial actors, have collaborated in this network to create new methods that go beyond traditional paradigms of mathematical optimization. By exploiting new advances in algebra and convex geometry, these innovative approaches have resulted in significant scientific and technological advancements. This book aims to make these exciting developments accessible to a wider audience by gathering high-quality chapters on these hot topics. Aimed at both aspiring and established researchers, as well as industry professionals, this book will be an invaluable resource for anyone interested in polynomial optimization and its potential for real-world applications.