When we discuss nonlinear optimization, we usually settle for a local solution rather than a global one. This week in class I learned a way to find a global solution for quadratic programming, even when the matrix is indefinite. The main idea is branch and bound, similar to what we do for an IP problem. In IP, we relax the integrality of the variables. In QP, we relax each nonconvex inequality: a concave piece is bounded from above and a convex piece is bounded from below, so the relaxed feasible set becomes convex. Then we solve the relaxed problem; if its solution happens to lie on the boundary where the relaxation is tight, we are done. Otherwise, we branch and solve the subproblems. Just as choosing which variable to branch on is an important decision in IP, how you split the region is a key question here.
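The bound-and-branch loop above can be sketched on a tiny example. This is only an illustrative toy, not the lecture's algorithm: it globally minimizes the nonconvex quadratic f(x, y) = -x*y subject to x + y <= 1 on the box [0, 1]^2 (global minimum -0.25 at x = y = 0.5), and instead of the convex/concave envelope relaxation it uses the cruder constant lower bound -xu*yu on each box, which still underestimates f and still drives the same loop. All names are my own.

```python
# Toy spatial branch-and-bound sketch (illustrative names, not from the
# lecture): minimize f(x, y) = -x*y  s.t.  x + y <= 1  on [0, 1]^2.
# Lower bound per box [xl, xu] x [yl, yu]: since x*y <= xu*yu there,
# f >= -xu*yu, a (crude) constant underestimator of f on the box.
import heapq

def f(x, y):
    return -x * y

def branch_and_bound(eps=1e-4, max_nodes=200_000):
    best_val, best_pt = float("inf"), None
    # priority queue of (lower_bound, xl, xu, yl, yu), best bound first
    heap = [(-1.0, 0.0, 1.0, 0.0, 1.0)]
    nodes = 0
    while heap and nodes < max_nodes:
        lb, xl, xu, yl, yu = heapq.heappop(heap)
        nodes += 1
        if lb >= best_val - eps:   # bound: box cannot improve the incumbent
            continue
        if xl + yl > 1.0:          # box entirely cut off by x + y <= 1
            continue
        # Incumbent update: the box midpoint, scaled back onto the
        # constraint if needed, is feasible for the ORIGINAL problem,
        # so it always gives a valid global upper bound.
        x, y = (xl + xu) / 2.0, (yl + yu) / 2.0
        if x + y > 1.0:
            x, y = x / (x + y), y / (x + y)
        if f(x, y) < best_val:
            best_val, best_pt = f(x, y), (x, y)
        # Branch: bisect the longest edge of the box.
        if xu - xl >= yu - yl:
            m = (xl + xu) / 2.0
            kids = [(xl, m, yl, yu), (m, xu, yl, yu)]
        else:
            m = (yl + yu) / 2.0
            kids = [(xl, xu, yl, m), (xl, xu, m, yu)]
        for cxl, cxu, cyl, cyu in kids:
            heapq.heappush(heap, (-cxu * cyu, cxl, cxu, cyl, cyu))
    return best_val, best_pt
```

A tighter relaxation (e.g. the envelope construction from the lecture) would prune far more aggressively; the crude interval bound just keeps the sketch self-contained.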
Also, any IP problem can be reformulated as a binary problem, and any binary problem can be reformulated as a quadratic problem, since the constraint x ∈ {0, 1} is equivalent to the quadratic equation x(1 - x) = 0.
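The two reductions can be checked numerically. In this sketch (the helper names are mine, purely for illustration), an integer z ∈ {0, ..., 7} is written in binary as z = b0 + 2·b1 + 4·b2, and each binary condition b ∈ {0, 1} becomes the quadratic equation b(1 - b) = 0:

```python
# Sketch of the IP -> binary -> quadratic chain (illustrative names).

def binary_expansion(z, n_bits=3):
    """Binary variables b0, b1, ... representing the integer z."""
    return [(z >> i) & 1 for i in range(n_bits)]

def quad_residual(b):
    """b*(1 - b), which is 0 exactly when b is 0 or 1."""
    return b * (1 - b)

# Every integer in {0,...,7} maps to bits whose quadratic residuals vanish:
for z in range(8):
    bits = binary_expansion(z)
    assert sum(b * 2**i for i, b in enumerate(bits)) == z
    assert all(quad_residual(b) == 0 for b in bits)

# A fractional value violates the quadratic equation, so it is excluded:
assert quad_residual(0.5) == 0.25
```

So an integer program turns into a continuous problem with quadratic equality constraints, which is exactly where the QP branch and bound above applies.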
It is interesting that you can connect two distinct optimization areas together.
1 comment:
I agree. That was one of the greatest lectures ever.