Nonsmooth optimization is the minimization of a nonsmooth function, optionally subject to nonsmooth constraints gi(x) and hi(x) (in addition to traditional box and linear constraints).
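Written formally - and assuming the convention used by ALGLIB's other nonlinearly constrained solvers, where gi(x) denote equality constraints and hi(x) inequality constraints - the problem is:

    \[
    \begin{aligned}
    \min_{x \in \mathbb{R}^N}\ & f(x) \\
    \text{s.t.}\ & g_i(x) = 0, \quad h_i(x) \le 0, \\
                 & l \le x \le u, \quad A x \le b,
    \end{aligned}
    \]

where f(x), gi(x) and hi(x) may be nonsmooth.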
Nonsmooth optimization is computationally expensive, so if you can find an equivalent smooth formulation of your problem, it is better to do so: smooth algorithms run much faster and come with better convergence guarantees. Sometimes, however, you have no choice but to solve a problem with nonsmoothness either in the target function or in the constraints. In this case ALGLIB provides an optimized implementation of the adaptive gradient sampling algorithm - a robust solver with convergence guarantees.
1 Nonsmooth optimization overview
2 AGS - adaptive gradient sampling method
Overview
Solver API
Examples
3 Other questions
Scale of the variables
4 Downloads section
There exist several families of algorithms for nonsmooth optimization, each with its own benefits and drawbacks. We review them briefly before proceeding to the ones implemented in ALGLIB:
The AGS (adaptive gradient sampling) solver is provided by the minns subpackage of the Optimization package. It is intended for the solution of nonsmooth problems with box, linear and nonlinear (possibly nonsmooth) constraints. The AGS solver includes several important features:
The most important limitation of the AGS solver is that it is not intended for high-dimensional problems. One AGS step requires roughly 2·N gradient evaluations at randomly sampled locations (for comparison, L-BFGS requires O(1) evaluations per step). Typically you will need O(N) iterations to converge, which results in O(N²) gradient evaluations per optimization session - for example, a problem with N=100 variables needs roughly 200 evaluations per step and thus on the order of 20,000 gradient evaluations in total.
On the other hand, this solver requires very little tuning. Once you select the initial sampling radius (almost any sufficiently large value will work; start with 1.0) and the penalty parameter for nonlinear constraints, it will work like a bulldozer - slowly but robustly moving towards the solution. The algorithm is very tolerant of its tuning parameters, so you won't have to make numerous trial runs to find good settings for your optimizer.
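As a sketch, in the C++ API both parameters are passed through a single call to minnssetalgoags (the penalty value below is purely illustrative, not a recommendation):

    // Activate the AGS algorithm: initial sampling radius 1.0 (a good
    // default, see above) and a penalty coefficient for nonlinear
    // constraints. 50.0 is an illustrative placeholder value.
    alglib::minnssetalgoags(state, 1.0, 50.0);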
Like all ALGLIB optimizers, the minns subpackage is built around the minnsstate object. You should use it for everything optimization-related - from problem specification to retrieval of results:
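Below is a minimal sketch of a full session in C++ (the objective f(x0,x1) = 2|x0| + |x1| and all numeric settings are illustrative only; the call sequence follows the common pattern shared by ALGLIB optimizers):

    #include "optimization.h"
    #include <cmath>
    using namespace alglib;

    // Callback: fi[0] receives the (nonsmooth) target, jac[0][*] its
    // (sub)gradient. Rows 1.. would hold nonlinear constraints, if any.
    void nsfunc_jac(const real_1d_array &x, real_1d_array &fi,
                    real_2d_array &jac, void *ptr)
    {
        fi[0] = 2*std::fabs(x[0]) + std::fabs(x[1]);
        jac[0][0] = 2*(x[0] >= 0 ? 1.0 : -1.0);  // subgradient of 2|x0|
        jac[0][1] =   (x[1] >= 0 ? 1.0 : -1.0);  // subgradient of |x1|
    }

    int main()
    {
        real_1d_array x = "[1,1]";   // initial point
        real_1d_array s = "[1,1]";   // variable scales
        minnsstate state;
        minnsreport rep;

        minnscreate(2, x, state);          // problem specification
        minnssetalgoags(state, 1.0, 0.0);  // sampling radius, penalty
        minnssetscale(state, s);           // variable scales
        minnssetcond(state, 1.0e-5, 0);    // stopping criteria (epsx, maxits)
        minnsoptimize(state, nsfunc_jac);  // run the solver
        minnsresults(state, x, rep);       // retrieve results
        return 0;
    }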
The ALGLIB Reference Manual includes several examples for the minns subpackage.
Before you start using the optimizer, we recommend setting the scale of the variables with the minnssetscale function. Scaling is essential for the correct work of the stopping criteria (and sometimes for the convergence of the optimizer). You can use the optimizer without scaling if your problem is well scaled. However, if some variables differ by up to 100 times in magnitude, we recommend telling the solver about their scale - and we strongly recommend setting the scale when the difference in magnitudes is even larger.
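For instance, in a hypothetical problem where x0 has magnitude around 1 and x1 around 10^6 (purely illustrative values), the scales would be set as follows:

    // Scales are rough magnitude estimates, not exact bounds
    alglib::real_1d_array s = "[1.0,1.0e6]";
    alglib::minnssetscale(state, s);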
We also recommend the separate article on variable scaling, which is worth reading unless you are solving a simple toy problem.
This article is licensed for personal use only.
ALGLIB Project offers you two editions of ALGLIB:
ALGLIB Free Edition:
+delivered for free
+offers full set of numerical functionality
+extensive algorithmic optimizations
-no multithreading
-non-commercial license
ALGLIB Commercial Edition:
+flexible pricing
+offers full set of numerical functionality
+extensive algorithmic optimizations
+high performance (SMP, SIMD)
+commercial license with support plan
Links to download sections for Free and Commercial editions can be found below: