There are a lot of promising global optimization methods I learned about on this site (especially evolutionary algorithms), and I would like to apply them to my quadratic programming task, which has linear and bounding-box constraints. However, not all optimizers handle linear constraints, for example. Is it still possible to use them? The main idea that comes to mind is to modify the fitness/goal function to return a near-infinite value whenever the current solution violates the constraints, but I suspect it would be better to supply the optimizer with custom mutation/crossover functions that respect the constraints. Are there well-known functions of this kind for linear and bounding-box constraints?
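To make the penalty idea concrete, here is a minimal sketch of a penalized fitness function for a QP with linear and box constraints. All the problem data (`Q`, `c`, `A`, `b`, `lo`, `hi`) are illustrative placeholders, not from the question; a graded penalty proportional to the violation usually guides an evolutionary search better than a flat "infinity":

```python
import numpy as np

# Hypothetical QP: minimize 0.5 x'Qx + c'x  subject to  A x <= b, lo <= x <= hi.
# All data below are illustrative placeholders, not from the question.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 2.0]])
b = np.array([2.0])
lo = np.array([0.0, 0.0])
hi = np.array([3.0, 3.0])

def penalized_fitness(x, penalty=1e6):
    """QP objective plus a large penalty proportional to constraint violation.

    Returning obj + penalty * violation (rather than a flat near-infinite
    value) leaves the optimizer a gradient back toward the feasible region.
    """
    obj = 0.5 * x @ Q @ x + c @ x
    lin_viol = np.maximum(A @ x - b, 0.0).sum()        # A x <= b violation
    box_viol = (np.maximum(lo - x, 0.0).sum()
                + np.maximum(x - hi, 0.0).sum())       # box violation
    return obj + penalty * (lin_viol + box_viol)
```

Any evolutionary optimizer can then minimize `penalized_fitness` directly, with no constraint support required.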
Some unconstrained optimization implementations assume that the underlying function is smooth and may behave erratically if that is not the case. It may be possible to relax some of your constraints by reworking your formulation; as Alexandre and Jonathan mentioned, Lagrange multipliers may help. If you cannot relax or eliminate the constraints, you will likely need an optimizer that can handle constraints directly; this OpenOpt QP page is one place to start. FWIW, I was able to use an unconstrained optimization implementation (conjugate gradients) on an SDP problem by optimizing a bound on the objective; see the paper "Fast Maximum Margin Matrix Factorization for Collaborative Prediction". 'course, YMMV.
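One common way to "rework the formulation" for box constraints specifically is a change of variables: map an unconstrained variable onto the box with a sigmoid, and hand the composed function to any unconstrained optimizer. A minimal sketch, with made-up bounds and a stand-in objective (neither is from the answer above):

```python
import numpy as np

lo = np.array([0.0, -1.0])   # illustrative box bounds, not from the answer
hi = np.array([2.0, 1.0])

def to_box(z):
    """Map unconstrained z in R^n onto the open box (lo, hi) via a sigmoid."""
    return lo + (hi - lo) / (1.0 + np.exp(-z))

def f(x):
    """Stand-in for the real objective (here a simple smooth quadratic)."""
    return np.sum((x - 1.0) ** 2)

def g(z):
    """Unconstrained surrogate: minimizing g over all of R^n automatically
    keeps x = to_box(z) inside the box, so no constraint handling is needed."""
    return f(to_box(z))
```

The transform only removes box constraints; general linear constraints still need one of the other approaches in this thread.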
You might be interested in Lagrange multipliers: http://en.wikipedia.org/wiki/Lagrange_multiplier
You could just kill any solution that violates the constraints; the surviving solutions will then all stay within the feasible region. Of course, complex constraints will make this approach work poorly, since most candidate solutions may end up rejected.
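A common compromise between outright killing and the custom mutation operators the question asks about is a repair-plus-rejection mutation: clip the child back into the box (cheap repair) and resample until the linear constraints hold, falling back to the parent. A minimal sketch with made-up bounds and constraint data:

```python
import numpy as np

rng = np.random.default_rng(0)

lo = np.array([0.0, 0.0])     # illustrative bounds and linear constraint,
hi = np.array([3.0, 3.0])     # not taken from this thread
A = np.array([[1.0, 2.0]])
b = np.array([4.0])

def mutate(x, sigma=0.3, max_tries=100):
    """Gaussian mutation that respects both constraint types:
    box constraints via clipping (repair), linear constraints via
    rejection sampling (resample until A @ child <= b)."""
    for _ in range(max_tries):
        child = np.clip(x + rng.normal(0.0, sigma, x.shape), lo, hi)
        if np.all(A @ child <= b):
            return child
    return x  # no feasible child found; keep the parent
```

With such an operator every individual the optimizer ever evaluates is feasible, so the fitness function needs no penalty term at all.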
I think this may give you some ideas as well. As Alex mentioned, Lagrange multipliers would probably work. http://web.mit.edu/dongs/www/publications/projects/2006-05-23_18.086_ConstrainedOptimization.pdf