Convex Optimization (culled from http://en.wikipedia.org/wiki/Convex_programming)
Convex minimization, a subfield of optimization, studies the problem of minimizing convex functions (real-valued functions whose graph lies on or below the line segment joining any two points of the graph) over convex sets. Convexity can make an optimization problem in some sense "easier" than the general case: for example, any local minimum must be a global minimum.
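As a minimal numerical sketch of this property (assuming NumPy and SciPy are available; the quadratic objective and the starting points are invented for illustration), local searches on a convex function reach the same global minimizer from any starting point:

```python
import numpy as np
from scipy.optimize import minimize

# A convex objective: any local minimum is automatically global.
f = lambda x: (x[0] - 3.0) ** 2

# Local searches started far apart all converge to x = 3.
for x0 in (-100.0, 0.0, 50.0):
    res = minimize(f, x0=np.array([x0]))
    print(f"start = {x0:7.1f}  ->  minimizer = {res.x[0]:.6f}")
```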
Given a real vector space $X$ together with a convex, real-valued function

$$f : C \to \mathbb{R}$$

defined on a convex subset $C$ of $X$ (i.e., a set such that the straight line segment joining any two of its points lies entirely within the set), the problem is to find a point $x^\ast$ in $C$ for which the number $f(x^\ast)$ is smallest, i.e., a point $x^\ast$ such that

$$f(x^\ast) \le f(x) \quad \text{for all } x \in C.$$
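As a concrete instance (an illustrative example, not from the source): take $X = \mathbb{R}$, $C = [1, 3]$, and $f(x) = x^2$. Since $f$ is convex and increasing on $C$,

$$x^\ast = 1, \qquad f(x^\ast) = 1 \le x^2 = f(x) \quad \text{for all } x \in [1, 3].$$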
An optimization problem (also referred to as a mathematical programming problem or minimization problem) of finding some $x^\ast \in \mathcal{X}$ such that

$$f(x^\ast) = \min \{ f(x) : x \in \mathcal{X} \},$$

where $\mathcal{X} \subset \mathbb{R}^n$ is the feasible set and $f(x) : \mathbb{R}^n \to \mathbb{R}$ is the objective, is called convex if $\mathcal{X}$ is a closed convex set and $f(x)$ is convex on $\mathbb{R}^n$.[1][2]
Alternatively, an optimization problem of the form

$$\begin{aligned} &\text{minimize} && f(x) \\ &\text{subject to} && g_i(x) \le 0, \quad i = 1, \dots, m \end{aligned}$$

is called convex if the functions $f, g_1, \dots, g_m : \mathbb{R}^n \to \mathbb{R}$ are all convex.[3]
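For illustration, here is a minimal sketch of a problem in exactly this form (assuming the CVXPY library; the particular objective and constraint are invented for the example):

```python
import numpy as np
import cvxpy as cp

# Convex objective f(x) = ||x - (2, 1)||^2 and one convex
# inequality constraint g(x) = x1 + x2 - 1 <= 0.
x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - np.array([2.0, 1.0])))
constraints = [cp.sum(x) - 1 <= 0]

problem = cp.Problem(objective, constraints)
problem.solve()
print(problem.status, x.value)  # optimal, approximately [1. 0.]
```

The minimizer is the Euclidean projection of $(2, 1)$ onto the half-plane $x_1 + x_2 \le 1$, namely $(1, 0)$.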
The following statements are true about the convex minimization problem:
- if a local minimum exists, then it is a global minimum;
- the set of all (global) minima is convex;
- if the objective is strictly convex and a minimum exists, then the minimum is unique.
Standard form consists of the following three parts:
- a convex function $f(x) : \mathbb{R}^n \to \mathbb{R}$ to be minimized over the variable $x$;
- inequality constraints of the form $g_i(x) \le 0$, where the functions $g_i$ are convex;
- equality constraints of the form $h_i(x) = 0$, where the functions $h_i$ are affine.

A convex minimization problem is thus written as

$$\begin{aligned} &\text{minimize} && f(x) \\ &\text{subject to} && g_i(x) \le 0, \quad i = 1, \dots, m \\ &&& h_i(x) = 0, \quad i = 1, \dots, p. \end{aligned}$$
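A numerical sketch of the standard form (assuming SciPy; all functions here are invented for the example). Note that SciPy's `'ineq'` constraints use the convention $c(x) \ge 0$, so the convex constraint $g(x) \le 0$ is passed as $-g$:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2     # convex objective
g = lambda x: 1.0 - x[0]                # convex inequality: g(x) <= 0
h = lambda x: x[0] + x[1] - 3.0         # affine equality:   h(x) = 0

res = minimize(
    f,
    x0=np.zeros(2),
    method="SLSQP",
    constraints=[
        {"type": "ineq", "fun": lambda x: -g(x)},  # SciPy wants c(x) >= 0
        {"type": "eq", "fun": h},
    ],
)
print(res.x)  # approximately [1.5, 1.5]
```

The equality constraint $h$ here is affine, as the standard form requires; the solution $(1.5, 1.5)$ is the closest point to the origin on the line $x_1 + x_2 = 3$ satisfying $x_1 \ge 1$.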
Note that every equality constraint $h(x) = 0$ can be equivalently replaced by the pair of inequality constraints $h(x) \le 0$ and $-h(x) \le 0$. Following from this fact, it is easy to understand why $h(x)$ has to be affine (linear plus a constant) as opposed to merely convex: if $h(x)$ is convex, the constraint $h(x) \le 0$ is a convex constraint, but $-h(x) \le 0$ is a convex constraint only when $-h(x)$ is convex, i.e., when $h(x)$ is concave. Therefore, the only way for the constraint $h(x) = 0$ to be convex is for $h(x)$ to be both convex and concave, that is, affine.
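A concrete counterexample (illustrative, not from the source): $h(x, y) = x^2 - y$ is convex but not affine, and its equality set

$$\{ (x, y) : h(x, y) = 0 \} = \{ (x, y) : y = x^2 \}$$

is the parabola itself, which is not convex: the points $(-1, 1)$ and $(1, 1)$ lie in the set, but their midpoint $(0, 1)$ does not. For affine $h$, by contrast, $\{ x : h(x) = 0 \}$ is a hyperplane, which is always convex.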
Consider a convex minimization problem given in standard form by a cost function $f(x)$ and inequality constraints $g_i(x) \le 0$ for $1 \le i \le m$. Then the domain $\mathcal{X}$ is:

$$\mathcal{X} = \{ x \in X : g_1(x) \le 0, \dots, g_m(x) \le 0 \}.$$

The Lagrangian function for the problem is

$$L(x, \lambda_0, \lambda_1, \dots, \lambda_m) = \lambda_0 f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).$$
For each point $x$ in $\mathcal{X}$ that minimizes $f$ over $\mathcal{X}$, there exist real numbers $\lambda_0, \dots, \lambda_m$, called Lagrange multipliers, that satisfy these conditions simultaneously:

1. $x$ minimizes $L(y, \lambda_0, \lambda_1, \dots, \lambda_m)$ over all $y$ in $X$;
2. $\lambda_0 \ge 0, \lambda_1 \ge 0, \dots, \lambda_m \ge 0$, with at least one $\lambda_k > 0$;
3. $\lambda_1 g_1(x) = 0, \dots, \lambda_m g_m(x) = 0$ (complementary slackness).
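To make the conditions concrete, here is an illustrative check (not from the source; the functions and multiplier values are invented for the example): for $f(x) = (x - 2)^2$ with the single constraint $g_1(x) = x - 1 \le 0$, the minimizer is $x = 1$, and $\lambda_0 = 1$, $\lambda_1 = 2$ satisfy all three conditions.

```python
from scipy.optimize import minimize_scalar

f  = lambda x: (x - 2.0) ** 2      # convex cost
g1 = lambda x: x - 1.0             # convex constraint g1(x) <= 0

x_star, lam0, lam1 = 1.0, 1.0, 2.0

# Condition 1: x_star minimizes L(y) = lam0*f(y) + lam1*g1(y) over all y.
L = lambda y: lam0 * f(y) + lam1 * g1(y)
assert abs(minimize_scalar(L).x - x_star) < 1e-6

# Condition 2: multipliers are nonnegative, with at least one positive.
assert lam0 >= 0 and lam1 >= 0 and (lam0 > 0 or lam1 > 0)

# Condition 3 (complementary slackness): lam1 * g1(x_star) = 0.
assert abs(lam1 * g1(x_star)) < 1e-12
print("All three Lagrange-multiplier conditions hold at x =", x_star)
```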
If there exists a "strictly feasible point", i.e., a point $z$ satisfying

$$g_1(z) < 0, \dots, g_m(z) < 0,$$

then the statement above can be strengthened to assert that $\lambda_0 = 1$.

Conversely, if some $x$ in $X$ satisfies conditions 1-3 for scalars $\lambda_0, \dots, \lambda_m$ with $\lambda_0 = 1$, then $x$ is certain to minimize $f$ over $X$.
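For the illustrative example above (minimizing $f(x) = (x - 2)^2$ subject to $g_1(x) = x - 1 \le 0$), the point $z = 0$ is strictly feasible, since $g_1(0) = -1 < 0$; this strict-feasibility requirement is known as Slater's condition, and it is consistent with the multiplier value $\lambda_0 = 1$ used there.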
Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming, Series A (Berlin, Heidelberg: Springer) 90 (1): 1-25. doi:10.1007/PL00011414. ISSN 0025-5610. MR 1819784. Kiwiel acknowledges that Yuri Nesterov first established that quasiconvex minimization problems can be solved efficiently.