List of numerical analysis topics: Optimization




1 Optimization

1.1 Basic concepts
1.2 Linear programming
1.3 Convex optimization
1.4 Nonlinear programming
1.5 Optimal control and infinite-dimensional optimization
1.6 Uncertainty and randomness
1.7 Theoretical aspects
1.8 Applications
1.9 Miscellaneous





Optimization

Mathematical optimization — algorithms for finding maxima or minima of a given function


Basic concepts

Active set
Candidate solution
Constraint (mathematics)

Constrained optimization — studies optimization problems with constraints
Binary constraint — a constraint involving exactly two variables


Corner solution
Feasible region — contains all solutions that satisfy the constraints but may not be optimal
Global optimum and local optimum
Maxima and minima
Slack variable
Continuous optimization
Discrete optimization

Linear programming

Linear programming (also treats integer programming) — the objective function and constraints are linear
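A minimal sketch of solving a small LP numerically with scipy.optimize.linprog (the problem data below is illustrative, not from the source):

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    # linprog minimizes, so the objective coefficients are negated.
    from scipy.optimize import linprog

    c = [-3, -2]                    # negated objective
    A_ub = [[1, 1], [1, 3]]         # coefficients of the <= constraints
    b_ub = [4, 6]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)          # optimal point (4, 0) and value 12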



Algorithms for linear programming:

Simplex algorithm

Bland's rule — rule to avoid cycling in the simplex method
Klee–Minty cube — perturbed (hyper)cube; the simplex method has exponential complexity on such a domain
Criss-cross algorithm — similar to the simplex algorithm
Big M method — variation of the simplex algorithm for problems with both 'less than' and 'greater than' constraints


Interior point method

Ellipsoid method
Karmarkar's algorithm
Mehrotra predictor–corrector method


Column generation
k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)


Linear complementarity problem
Decompositions:

Benders' decomposition
Dantzig–Wolfe decomposition
Theory of two-level planning
Variable splitting


Basic solution (linear programming) — solution at a vertex of the feasible region
Fourier–Motzkin elimination
Hilbert basis (linear programming) — set of integer vectors in a convex cone that generate all integer vectors in the cone
LP-type problem
Linear inequality
Vertex enumeration problem — list all vertices of the feasible set

Convex optimization

Convex optimization



Quadratic programming

Linear least squares (mathematics)
Total least squares
Frank–Wolfe algorithm
Sequential minimal optimization — breaks up large QP problems into a series of smallest possible QP problems
Bilinear program


Basis pursuit — minimize the L1-norm of a vector subject to linear constraints (a small sketch follows this block)

Basis pursuit denoising (BPDN) — regularized version of basis pursuit

In-crowd algorithm — algorithm for solving basis pursuit denoising
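Basis pursuit can be recast as an LP via the variable splitting x = u − v with u, v ≥ 0, so that ‖x‖₁ = Σᵢ(uᵢ + vᵢ); a minimal sketch with made-up data:

    # min ||x||_1 subject to Ax = b, rewritten as an LP in (u, v) with x = u - v.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 12))            # underdetermined system (toy sizes)
    x_true = np.zeros(12); x_true[[2, 7]] = [1.5, -2.0]
    b = A @ x_true

    n = A.shape[1]
    c = np.ones(2 * n)                          # objective: sum of u and v entries
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * n))
    x = res.x[:n] - res.x[n:]                   # recover x = u - v
    print(np.round(x, 3))                       # typically recovers the sparse x_true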




Linear matrix inequality
Conic optimization

Semidefinite programming
Second-order cone programming
Sum-of-squares optimization
Quadratic programming (see above)


Bregman method — row-action method for strictly convex optimization problems
Proximal gradient method — uses a splitting of the objective function into a sum of possibly non-differentiable pieces (see the sketch after this list)
Subgradient method — extension of steepest descent for problems with a non-differentiable objective function
Biconvex optimization — generalization where the objective function and the constraint set can be biconvex
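A minimal proximal gradient (ISTA) sketch for the lasso objective ½‖Ax − b‖² + λ‖x‖₁, whose L1 part has soft thresholding as its proximal operator (toy data; the step size 1/‖A‖² is a standard safe choice):

    # ISTA: gradient step on the smooth term, then the prox of the L1 term.
    import numpy as np

    def ista(A, b, lam=0.1, iters=500):
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
        for _ in range(iters):
            z = x - step * (A.T @ (A @ x - b))      # gradient step on the smooth part
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

    A = np.random.default_rng(1).standard_normal((20, 10))
    b = A[:, 0] * 2.0                               # target depends on one column only
    print(np.round(ista(A, b), 3))                  # first coefficient dominates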

Nonlinear programming

Nonlinear programming — the most general optimization problem in the usual framework



Special cases of nonlinear programming:

See Linear programming and Convex optimization above
Geometric programming — problems involving signomials or posynomials

Signomial — similar to polynomials, but the exponents need not be integers
Posynomial — a signomial with positive coefficients


Quadratically constrained quadratic program
Linear-fractional programming — objective is a ratio of linear functions, constraints are linear

Fractional programming — objective is a ratio of nonlinear functions, constraints are linear


Nonlinear complementarity problem (NCP) — find x such that x ≥ 0, f(x) ≥ 0 and xᵀ f(x) = 0
Least squares — the objective function is a sum of squares

Non-linear least squares
Gauss–Newton algorithm

BHHH algorithm — variant of Gauss–Newton in econometrics
Generalized Gauss–Newton method — for constrained nonlinear least-squares problems


Levenberg–Marquardt algorithm (see the sketch after this block)
Iteratively reweighted least squares (IRLS) — solves a weighted least-squares problem at every iteration
Partial least squares — statistical techniques similar to principal components analysis

Non-linear iterative partial least squares (NIPALS)
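A minimal nonlinear least-squares sketch using scipy.optimize.least_squares with its Levenberg–Marquardt backend (the exponential model and data are made up):

    # Fit y = a * exp(b * t) to synthetic data with method='lm' (Levenberg-Marquardt).
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 1, 20)
    y = 2.0 * np.exp(-1.3 * t)                  # noise-free synthetic data

    def residuals(p):                           # residual vector: model minus data
        a, b = p
        return a * np.exp(b * t) - y

    fit = least_squares(residuals, x0=[1.0, 0.0], method='lm')
    print(fit.x)                                # close to the true (2.0, -1.3)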




Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities
Univariate optimization:

Golden section search (see the sketch after this list)
Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
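A minimal golden section search sketch for a unimodal function on an interval (illustrative code, not from the source):

    # Shrink [a, b] while keeping the two interior points in the golden ratio,
    # caching function values so only one new evaluation is needed per step.
    import math

    def golden_section(f, a, b, tol=1e-8):
        invphi = (math.sqrt(5) - 1) / 2         # 1/phi, about 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc < fd:                         # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - invphi * (b - a)
                fc = f(c)
            else:                               # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + invphi * (b - a)
                fd = f(d)
        return (a + b) / 2

    print(golden_section(lambda x: (x - 2) ** 2, 0, 5))   # about 2.0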




General algorithms:

Concepts:

Descent direction
Guess value — the initial guess for a solution with which an algorithm starts
Line search

Backtracking line search
Wolfe conditions




Gradient method — method that uses the gradient as the search direction

Gradient descent (see the sketch after this block)

Stochastic gradient descent


Landweber iteration — mainly used for ill-posed problems
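A minimal gradient descent sketch with a fixed step size on a quadratic toy objective (the step size is assumed small enough for convergence):

    # Gradient descent on f(x) = 0.5 x^T Q x - b^T x, whose gradient is Qx - b.
    import numpy as np

    def gradient_descent(Q, b, step=0.1, iters=200):
        x = np.zeros(len(b))
        for _ in range(iters):
            x = x - step * (Q @ x - b)          # step against the gradient
        return x

    Q = np.array([[3.0, 1.0], [1.0, 2.0]])      # symmetric positive definite
    b = np.array([1.0, 1.0])
    print(gradient_descent(Q, b))               # approaches the minimizer
    print(np.linalg.solve(Q, b))                # exact minimizer, for comparison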


Successive linear programming (SLP) — replace the problem by a linear programming problem, solve that, and repeat
Sequential quadratic programming (SQP) — replace the problem by a quadratic programming problem, solve that, and repeat (see the SLSQP sketch below)
Newton's method in optimization

See under Newton algorithm in the section Finding roots of nonlinear equations
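As a concrete SQP-flavoured example, a minimal sketch using SciPy's SLSQP method (sequential least-squares programming; the problem below is illustrative):

    # Minimize (x - 1)^2 + (y - 2)^2 subject to x + y = 1 and x >= 0.
    from scipy.optimize import minimize

    objective = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2
    cons = [{'type': 'eq', 'fun': lambda p: p[0] + p[1] - 1}]
    res = minimize(objective, x0=[0.0, 0.0], method='SLSQP',
                   bounds=[(0, None), (None, None)], constraints=cons)
    print(res.x)    # projection of (1, 2) onto x + y = 1, i.e. about (0, 1)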


Nonlinear conjugate gradient method
Derivative-free methods

Coordinate descent — move in one of the coordinate directions (see the sketch after this list)

Adaptive coordinate descent — adapt coordinate directions to the objective function
Random coordinate descent — randomized version


Nelder–Mead method
Pattern search (optimization)
Powell's method — based on conjugate gradient descent
Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence
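A minimal coordinate descent sketch, minimizing exactly along one coordinate at a time for a convex quadratic, where the one-dimensional minimizer has a closed form (illustrative):

    # Coordinate descent on f(x) = 0.5 x^T Q x - b^T x: solve df/dx_i = 0
    # for x_i while holding the other coordinates fixed, and sweep repeatedly.
    import numpy as np

    def coordinate_descent(Q, b, sweeps=100):
        x = np.zeros(len(b))
        for _ in range(sweeps):
            for i in range(len(b)):
                x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
        return x

    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    print(coordinate_descent(Q, b), np.linalg.solve(Q, b))   # both near (0.2, 0.4)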


Augmented Lagrangian method — replaces constrained problems by unconstrained problems with a term added to the objective function
Ternary search
Tabu search
Guided local search — modification of search algorithms which builds up penalties during a search
Reactive search optimization (RSO) — the algorithm adapts its parameters automatically
MM algorithm — majorize-minimization, a wide framework of methods
Least absolute deviations

Expectation–maximization algorithm

Ordered subset expectation maximization




Nearest neighbor search
Space mapping — uses coarse (ideal or low-fidelity) and fine (practical or high-fidelity) models



Optimal control and infinite-dimensional optimization

Optimal control



Pontryagin's minimum principle — infinite-dimensional version of Lagrange multipliers

Costate equations — equation for the Lagrange multipliers in Pontryagin's minimum principle
Hamiltonian (control theory) — the minimum principle says that this function should be minimized


Types of problems:

Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic (see the sketch after this list)
Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic

Optimal projection equations — method for reducing the dimension of the LQG control problem
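A minimal continuous-time LQR sketch using scipy.linalg.solve_continuous_are for the Riccati solution, with the optimal gain K = R⁻¹BᵀP (the system matrices are illustrative):

    # LQR for dx/dt = Ax + Bu, cost = integral of x^T Q x + u^T R u; u = -Kx.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (toy system)
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)                            # state cost
    R = np.array([[1.0]])                    # control cost

    P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati equation solution
    K = np.linalg.solve(R, B.T @ P)          # optimal feedback gain
    print(K)
    print(np.linalg.eigvals(A - B @ K))      # closed-loop poles (stable)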




Algebraic Riccati equation — matrix equation occurring in many optimal control problems
Bang–bang control — control that switches abruptly between two states
Covector mapping principle
Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions
DNSS point — initial state for optimal control problems with multiple optimal solutions
Legendre–Clebsch condition — second-order condition for a solution of an optimal control problem
Pseudospectral optimal control

Bellman pseudospectral method — based on Bellman's principle of optimality
Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind)
Flat pseudospectral method — combines the Ross–Fahroo pseudospectral method with differential flatness
Gauss pseudospectral method — uses collocation at the Legendre–Gauss points
Legendre pseudospectral method — uses Legendre polynomials
Pseudospectral knotting method — generalization of pseudospectral methods in optimal control
Ross–Fahroo pseudospectral method — class of pseudospectral methods including Chebyshev, Legendre and knotting


Ross–Fahroo lemma — condition to make discretization and duality operations commute
Ross' π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability
Sethi model — optimal control problem modelling advertising

Infinite-dimensional optimization



Semi-infinite programming — infinite number of variables and finite number of constraints, or the other way around
Shape optimization, topology optimization — optimization over a set of regions

Topological derivative — derivative with respect to changing the shape


Generalized semi-infinite programming — finite number of variables, infinite number of constraints

Uncertainty and randomness

Approaches to deal with uncertainty:

Markov decision process
Partially observable Markov decision process
Robust optimization

Wald's maximin model


Scenario optimization — the constraints are uncertain
Stochastic approximation
Stochastic optimization
Stochastic programming
Stochastic gradient descent


Random optimization algorithms:

Random search — choose a point randomly in a ball around the current iterate
Simulated annealing (see the sketch after this list)

Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation
Great deluge algorithm
Mean field annealing — deterministic variant of simulated annealing
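A minimal simulated annealing sketch with a geometric cooling schedule (every parameter here is an illustrative choice):

    # Accept worse moves with probability exp(-delta/T); cool T geometrically
    # so the search becomes gradually greedier.
    import math, random

    def anneal(f, x0, step=0.5, T=1.0, cooling=0.995, iters=5000):
        x, fx = x0, f(x0)
        for _ in range(iters):
            y = x + random.uniform(-step, step)    # random neighbour of x
            fy = f(y)
            if fy < fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy                      # accept the move
            T *= cooling
        return x, fx

    # Multimodal 1-D test function; global minimum near x = -0.3.
    print(anneal(lambda x: x * x + 3 * math.sin(5 * x), x0=3.0))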


Bayesian optimization — treats the objective function as a random function and places a prior over it
Evolutionary algorithm

Differential evolution
Evolutionary programming
Genetic algorithm, genetic programming

Genetic algorithms in economics


MCACEA (multiple coordinated agents coevolution evolutionary algorithm) — uses an evolutionary algorithm for every agent
Simultaneous perturbation stochastic approximation (SPSA)


Luus–Jaakola
Particle swarm optimization
Stochastic tunneling
Harmony search — mimics the improvisation process of musicians
See the section Monte Carlo method



Theoretical aspects

Convex analysis — the study of convex functions f, which satisfy f(tx + (1 − t)y) ≤ t f(x) + (1 − t) f(y) for t ∈ [0,1]

Pseudoconvex function — function f such that ∇f(x) · (y − x) ≥ 0 implies f(y) ≥ f(x)
Quasiconvex function — function f such that f(tx + (1 − t)y) ≤ max(f(x), f(y)) for t ∈ [0,1]
Subderivative
Geodesic convexity — convexity for functions defined on a Riemannian manifold


Duality (optimization)

Weak duality — the dual solution gives a bound on the primal solution
Strong duality — the primal and dual solutions are equivalent
Shadow price
Dual cone and polar cone
Duality gap — the difference between the primal and dual solutions
Fenchel's duality theorem — relates minimization problems to maximization problems of convex conjugates
Perturbation function — any function which relates the primal and dual problems
Slater's condition — sufficient condition for strong duality to hold in a convex optimization problem
Total dual integrality — concept of duality for integer linear programming
Wolfe duality — for when the objective function and constraints are differentiable


Farkas' lemma
Karush–Kuhn–Tucker conditions (KKT) — first-order optimality conditions; necessary under constraint qualifications and sufficient for convex problems

Fritz John conditions — variant of the KKT conditions


Lagrange multiplier

Lagrange multipliers on Banach spaces


Semi-continuity
Complementarity theory — study of problems with constraints of the form ⟨u, v⟩ = 0

Mixed complementarity problem

Mixed linear complementarity problem
Lemke's algorithm — method for solving (mixed) linear complementarity problems




Danskin's theorem — used in the analysis of minimax problems
Maximum theorem — the maximum and maximizer are continuous as functions of the parameters, under some conditions
No free lunch in search and optimization
Relaxation (approximation) — approximating a given problem by an easier problem by relaxing some constraints

Lagrangian relaxation
Linear programming relaxation — ignoring the integrality constraints in a linear programming problem


Self-concordant function
Reduced cost — the cost for increasing a variable by a small amount
Hardness of approximation — the computational complexity of getting an approximate solution

Applications

In geometry:

Geometric median — the point minimizing the sum of distances to a given set of points
Chebyshev center — the centre of the smallest ball containing a given set of points


In statistics:

Iterated conditional modes — maximizing the joint probability of a Markov random field
Response surface methodology — used in the design of experiments


Automatic label placement
Compressed sensing — reconstruct a signal from knowledge that it is sparse or compressible
Cutting stock problem
Demand optimization
Destination dispatch — an optimization technique for dispatching elevators
Energy minimization
Entropy maximization
Highly optimized tolerance
Hyperparameter optimization
Inventory control problem

Newsvendor model
Extended newsvendor model
Assemble-to-order system


Linear programming decoding
Linear search problem — find a point on a line by moving along the line
Low-rank approximation — find the best approximation, subject to the constraint that the rank of the matrix is smaller than a given number
Meta-optimization — optimization of the parameters in an optimization method
Multidisciplinary design optimization
Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision
Paper bag problem
Process optimization
Recursive economics — individuals make a series of two-period optimization decisions over time
Stigler diet
Space allocation problem
Stress majorization
Trajectory optimization
Transportation theory
Wing-shape optimization

Miscellaneous

Combinatorial optimization
Dynamic programming

Bellman equation (see the value-iteration sketch after this block)
Hamilton–Jacobi–Bellman equation — continuous-time analogue of the Bellman equation
Backward induction — solving dynamic programming problems by reasoning backwards in time
Optimal stopping — choosing the optimal time to take a particular action

Odds algorithm
Robbins' problem
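A minimal value iteration sketch for the Bellman equation on a tiny finite MDP (the transition and reward numbers are made up):

    # Repeatedly apply V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
    import numpy as np

    P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P[a][s][s'] for action 0
                  [[0.5, 0.5], [0.1, 0.9]]])   # ... and action 1 (toy numbers)
    R = np.array([[1.0, 0.0],                  # rewards R[s][a]
                  [0.0, 2.0]])
    gamma = 0.9

    V = np.zeros(2)
    for _ in range(1000):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)   # action values Q[s][a]
        V_new = Q.max(axis=1)                          # Bellman update
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new
    print(V, Q.argmax(axis=1))                 # optimal values and greedy policy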




Global optimization:

BRST algorithm
MCS algorithm


Multi-objective optimization — there are multiple conflicting objectives

Benson's algorithm — for linear vector optimization problems


Bilevel optimization — studies problems in which one problem is embedded in another
Optimal substructure
Dykstra's projection algorithm — finds a point in the intersection of two convex sets
Algorithmic concepts:

Barrier function
Penalty method
Trust region


Test functions for optimization:

Rosenbrock function — two-dimensional function with a banana-shaped valley (see the sketch after this list)
Himmelblau's function — two-dimensional function with four local minima, defined by f(x, y) = (x² + y − 11)² + (x + y² − 7)²
Rastrigin function — two-dimensional function with many local minima
Shekel function — multimodal and multidimensional
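A minimal sketch minimizing the Rosenbrock and Himmelblau test functions with the derivative-free Nelder–Mead method from SciPy (starting points are arbitrary):

    from scipy.optimize import minimize

    def rosenbrock(p):                # minimum f = 0 at (1, 1)
        x, y = p
        return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

    def himmelblau(p):                # four local minima, all with f = 0
        x, y = p
        return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2

    print(minimize(rosenbrock, x0=[-1.2, 1.0], method='Nelder-Mead').x)  # ~(1, 1)
    print(minimize(himmelblau, x0=[0.0, 0.0], method='Nelder-Mead').x)   # one of the minima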


Mathematical Optimization Society





