sparseLP
- computeMutualCoherence(A)
Computes the mutual coherence of a matrix
- USAGE
result = computeMutualCoherence(A)
- INPUT
A – input matrix
- OUTPUT
result – mutual coherence
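- EXAMPLE
A minimal usage sketch; the random 20 x 50 matrix below is a placeholder chosen only to make the call runnable.
A = randn(20, 50);                    % placeholder input matrix
result = computeMutualCoherence(A);   % mutual coherence of the columns of A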
- evalObj(x, theta, pNeg, pPos, epsilonP, alpha, approximation)
Computes the value of the sparseLP objective function
obj = evalObj(x,theta,pNeg,pPos,epsilonP,alpha,approximation);
- INPUTS
x – current solution vector
theta, pNeg, pPos, epsilonP, alpha – parameters of the approximations
approximation – approximation type of the zero norm. Available approximations:
‘cappedL1’ : Capped-L1 norm
‘exp’ : Exponential function
‘log’ : Logarithmic function
‘SCAD’ : SCAD function
‘lp-’ : L_p norm with p < 0
‘lp+’ : L_p norm with 0 < p < 1
‘l1’ : L1 norm
- OUTPUT
obj – Current value of the objective function
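- EXAMPLE
A minimal sketch of a call; all parameter values below are placeholders (assumptions), since their exact roles depend on the chosen approximation, and only the capped-L1 approximation is evaluated here.
x = rand(10, 1);            % placeholder current solution vector
theta = 0.5;                % approximation parameter (the default used by sparseLP)
pNeg = -1; pPos = 0.5;      % placeholder exponents for the 'lp-' and 'lp+' approximations
epsilonP = 1e-6; alpha = 3; % placeholder approximation parameters
obj = evalObj(x, theta, pNeg, pPos, epsilonP, alpha, 'cappedL1');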
- Authors: Hoai Minh Le, 20/10/2015; Ronan Fleming, 2017
- optimizeCardinality(problem, param)
DC programming for solving the cardinality optimization problem. The l0 norm is approximated by a capped-L1 function.
\(min c'(x, y, z) + lambda_0*k.*||x||_0 + lambda_1*o.*||x||_1 - delta_0*d.*||y||_0 + delta_1*o.*||y||_1 + alpha_1*o.*||z||_1\)
s.t. \(A*(x, y, z) <= b\)
\(l <= (x, y, z) <= u\)
\(x in R^p, y in R^q, z in R^r\)
- USAGE
solution = optimizeCardinality(problem, param)
- INPUT
problem – Structure containing the following fields describing the problem:
.p - size of vector x OR a size(A,2) x 1 boolean indicating columns of A corresponding to x (min zero norm).
.q - size of vector y OR a size(A,2) x 1 boolean indicating columns of A corresponding to y (max zero norm).
.r - size of vector z OR a size(A,2) x 1 boolean indicating columns of A corresponding to z.
.A - s x size(A,2) LHS matrix
.b - s x 1 RHS vector
.csense - s x 1 Constraint senses, a string containing the constraint sense for each row in A ('E', equality, 'G' greater than, 'L' less than).
.lb - size(A,2) x 1 Lower bound vector
.ub - size(A,2) x 1 Upper bound vector
.c - size(A,2) x 1 linear objective function vector
- OPTIONAL INPUTS
problem – Structure containing the following fields describing the problem:
.osense - Objective sense for problem.c only (1 means minimise (default), -1 means maximise)
.k - p x 1 OR a size(A,2) x 1 strictly positive weight vector on minimise ||x||_0
.d - q x 1 OR a size(A,2) x 1 strictly positive weight vector on maximise ||y||_0
.o - size(A,2) x 1 strictly positive weight vector on minimise ||[x;y;z]||_1
.lambda0 - global parameter on minimise ||x||_0
.lambda1 - global parameter on minimise ||x||_1
.delta0 - global parameter on maximise ||y||_0
.delta1 - global parameter on minimise ||y||_1
.alpha1 - global parameter on minimise ||z||_1
param – Parameters structure:
.printLevel - greater than zero to receive more output
.nbMaxIteration - stopping criterion - maximal number of iterations (Default value = 100)
.epsilon - stopping criterion (Default value = feasTol)
.theta - starting parameter of the approximation (Default value = 0.5)
For a sufficiently large parameter, the capped-L1 approximate problem and the original cardinality optimisation problem have the same set of optimal solutions. However, starting with a smaller theta seems to avoid getting stuck in a local minimum. Local minima can be detected by checking whether running the algorithm multiple times gives different solutions; if so, try reducing theta to e.g. 0.1.
.thetaMultiplier - at each iteration: theta = theta*thetaMultiplier
.eta - Smallest value considered non-zero (Default value feasTol)
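- EXAMPLE
A minimal sketch of a call on a small random problem; all data below are placeholders. With p = 5, q = 5, r = 0, the variables are ordered (x, y, z), so size(A,2) = 10.
problem.p = 5;                          % number of x variables (minimise ||x||_0)
problem.q = 5;                          % number of y variables (maximise ||y||_0)
problem.r = 0;                          % no z variables
problem.A = randn(3, 10);               % 3 x 10 placeholder constraint matrix
problem.b = zeros(3, 1);
problem.csense = repmat('E', 3, 1);     % all equality constraints
problem.lb = -10*ones(10, 1);
problem.ub = 10*ones(10, 1);
problem.c = zeros(10, 1);               % no linear objective term
param.printLevel = 1;
param.theta = 0.5;
solution = optimizeCardinality(problem, param);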
- sparseLP(model, approximation, params)
DC programming for solving the sparse LP \(min ||x||_0\) subject to linear constraints. See Le Thi et al., DC approximation approaches for sparse optimization, European Journal of Operational Research, 2014; http://dx.doi.org/10.1016/j.ejor.2014.11.031
- USAGE
[solution, nIterations, bestApprox] = sparseLP(model, approximation, params)
- INPUTS
model – Structure containing the following fields describing the linear constraints:
.A - m x n LHS matrix
.b - m x 1 RHS vector
.lb - n x 1 Lower bound vector
.ub - n x 1 Upper bound vector
.csense - m x 1 Constraint senses, a string containing the constraint sense for each row in A ('E', equality, 'G' greater than, 'L' less than).
- OPTIONAL INPUTS
approximation – approximation type of the zero norm. Available approximations:
‘cappedL1’ : Capped-L1 norm
‘exp’ : Exponential function
‘log’ : Logarithmic function
‘SCAD’ : SCAD function
‘lp-’ : L_p norm with p < 0
‘lp+’ : L_p norm with 0 < p < 1
‘l1’ : L1 norm
‘all’ : try all approximations and return the best result
params – Parameters structure:
.nbMaxIteration - stopping criterion - maximal number of iterations (Default value = 1000)
.epsilon - stopping criterion (Default value = 10e-6)
.theta - parameter of the approximation (Default value = 0.5)
- OUTPUT
solution – Structure containing the following fields:
.x - n x 1 solution vector
.stat - status:
1 = Solution found
2 = Unbounded
0 = Infeasible
-1 = Invalid input
nIterations – Number of iterations
bestApprox – Best approximation
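- EXAMPLE
A minimal sketch searching for a sparse solution of an underdetermined linear system; the data below are placeholders and the capped-L1 approximation is selected explicitly.
model.A = randn(5, 20);                 % placeholder 5 x 20 constraint matrix
xTrue = zeros(20, 1); xTrue(1:3) = 1;   % a sparse point used to build a feasible b
model.b = model.A * xTrue;
model.lb = -10*ones(20, 1);
model.ub = 10*ones(20, 1);
model.csense = repmat('E', 5, 1);       % all equality constraints
params.nbMaxIteration = 1000;
params.epsilon = 1e-6;
params.theta = 0.5;
[solution, nIterations, bestApprox] = sparseLP(model, 'cappedL1', params);
if solution.stat == 1
    fprintf('%d non-zeros after %d iterations\n', nnz(abs(solution.x) > 1e-6), nIterations);
end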
- updateObj(x, theta, pNeg, pPos, epsilonP, alpha, approximation)
Updates the linear objective for the variables (x, t)
c = updateObj(x,theta,pNeg,pPos,epsilonP,alpha,approximation);
- INPUTS
x – current solution vector
theta, pNeg, pPos, epsilonP, alpha – parameters of the approximations
approximation – approximation type of the zero norm. Available approximations:
‘cappedL1’ : Capped-L1 norm
‘exp’ : Exponential function
‘log’ : Logarithmic function
‘SCAD’ : SCAD function
‘lp-’ : L_p norm with p < 0
‘lp+’ : L_p norm with 0 < p < 1
‘l1’ : L1 norm
- OUTPUT
c – New objective function
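- EXAMPLE
A minimal sketch with the same placeholder parameters as in the evalObj example above; within sparseLP's DC iterations a call of this form presumably refreshes the linear objective of the next LP subproblem.
x = rand(10, 1);                        % placeholder current solution vector
theta = 0.5; pNeg = -1; pPos = 0.5;     % placeholder approximation parameters
epsilonP = 1e-6; alpha = 3;
c = updateObj(x, theta, pNeg, pPos, epsilonP, alpha, 'cappedL1');
% c is the new linear objective over the variables (x, t)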
- Authors: Hoai Minh Le, 20/10/2015; Ronan Fleming, 2017