For convenience, I wrote my question down in a .doc file and attached it. Please refer to the .doc file.
The question is about convex functions and convex sets.
Thanks very much!
Hey baiyang11.
A convex set is one with the property that t*a + (1-t)*b is also in the set for all a, b in the set and all t in [0,1]. This assumes that a and b are elements of some vector space (which has addition and scalar multiplication); if they aren't, you first need to put them in one.
A function f is convex if f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) for all a, b in its domain and all t in [0,1].
Basically the intuitive way to picture a convex set is that it has no "bumps". If it had a bump, then you could draw a line from the bump to another point on the boundary of the set, and the line would pass through some points not contained in the set.
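You can spot-check this definition numerically. Here is a minimal Python sketch (the unit disk and all the helper names are just my own illustrative choices, and random sampling only spot-checks convexity, it does not prove it):

```python
import random

def in_unit_disk(p):
    # membership test for the set S = {(x, y) : x^2 + y^2 <= 1}
    x, y = p
    return x * x + y * y <= 1.0

def sample_point():
    # rejection-sample a point from the unit disk
    while True:
        p = (random.uniform(-1, 1), random.uniform(-1, 1))
        if in_unit_disk(p):
            return p

def looks_convex(in_set, sampler, trials=10_000):
    # spot-check the definition: t*a + (1-t)*b must stay in the set
    for _ in range(trials):
        a, b = sampler(), sampler()
        t = random.random()
        c = (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])
        if not in_set(c):
            return False
    return True

print(looks_convex(in_unit_disk, sample_point))  # True: the disk passes the spot-check
```

A set with a "bump" (say, the disk with a wedge cut out) would fail this check as soon as a sampled chord crosses the cut.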
Thanks for your answer!
But I think my questions are about when a set is "locally convex" even though it is not convex as a whole. In particular, as shown in the .doc file, if the set is the feasible region defined by the constraint functions g(x) and h(x), what condition on g and h makes a neighborhood of a feasible point "locally convex", as in picture 2 of the .doc file?
I'm looking forward to your answer for this.
By the way, I created a new thread to put my question in text form, since I didn't find any option to edit my post. Please reply there if possible. Thank you very much!
Locally convex means that you are looking at a neighborhood around a point just like when you look at continuity around a point.
Think in terms of some epsilon distance around a point of the set (i.e. the points of the set within distance epsilon of that point): if convexity holds for that subset of the whole set, then you have "local" convexity at that point.
Thanks! This is getting close to exactly what I want to know, which is: how do I know whether that subset is convex? My guess is that the Hessian at the point is involved. But what exactly?
Especially since the set is defined by the g(x) and h(x), as described in the .doc file.
If you want to test whether a subset is convex, you just look at the definition of convexity for that subset.
The Hessian matrix collects the second partial derivatives in the multivariate setting, and you can use it to show whether a function is convex near a point by checking that the Hessian is positive definite there, for example via its eigenvalues or via Sylvester's criterion on the leading principal minors (the determinant alone is not enough in more than one dimension), much in the same way you check the sign of the second derivative in the one-dimensional case.
This notion of curvature is the intuitive understanding of the connection between convexity and the Hessian.
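As a concrete sketch (the function here is my own toy example, not from the .doc file), here is how you would check positive definiteness of a Hessian numerically via eigenvalues; note that at (0, 0) the determinant is positive even though the matrix is negative definite, which is why the determinant alone can mislead:

```python
import numpy as np

def hessian_f(x, y):
    # analytic Hessian of f(x, y) = x**4 + y**4 - x**2 - y**2 (a non-convex example)
    return np.array([[12 * x**2 - 2, 0.0],
                     [0.0, 12 * y**2 - 2]])

def is_positive_definite(H):
    # a symmetric matrix is positive definite iff all its eigenvalues are > 0;
    # the determinant alone is not enough in more than one dimension
    return bool(np.all(np.linalg.eigvalsh(H) > 0))

print(is_positive_definite(hessian_f(1.0, 1.0)))  # True: f curves upward near (1, 1)
print(is_positive_definite(hessian_f(0.0, 0.0)))  # False: Hessian there is diag(-2, -2),
                                                  # yet its determinant is +4
```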
Thanks! I think I understand the relationship between convexity and the hessian. How about I put my questions like this:
(1) Given a twice continuously differentiable function f: R -> R, it may happen that f''(x) is not always positive over the whole domain. However, if f''(x_0) > 0 at some point x_0, is f ("locally") convex within some epsilon distance of x_0? (As shown in the 1st picture in the .doc file.)
(2) Given a twice continuously differentiable function f: R^n -> R, it may happen that the Hessian matrix of f is not always positive definite over the whole domain. However, if the Hessian of f at x_0 is positive definite, is f ("locally") convex in some epsilon neighborhood of x_0?
(3) Given a region S defined by g_i(x) <= 0 and h_j(x) = 0 (usually S is the feasible region of a general constrained optimization problem), where every g_i and h_j is twice continuously differentiable. Here the g_i are not all convex and the h_j are not affinely linear, so S is not a convex set "as a whole". But for a feasible point x_0, under what condition (I would like a condition on g and h, not just the "t*a + (1-t)*b" definition of a convex set) is a neighborhood of x_0 in S ("locally") convex? (As shown in the 2nd picture in the .doc file.)
As to this question, if such a condition exists, the Hessians of the g_i and h_j are probably involved, as I guessed.
I believe these three questions make it easier for you to answer precisely. Thanks very much!
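To make question (1) concrete, here is a numerical spot-check I can run (the function x**4 - x**2 and the interval sizes are just my own illustrative choices; sampling can only suggest, not prove, convexity):

```python
import random

def f(x):
    # f''(x) = 12*x**2 - 2 is negative near 0 but positive at x_0 = 1,
    # so f is not convex globally, yet should be convex near x_0
    return x**4 - x**2

def convex_on_interval(f, lo, hi, trials=10_000):
    # spot-check f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) on [lo, hi]
    for _ in range(trials):
        a = random.uniform(lo, hi)
        b = random.uniform(lo, hi)
        t = random.random()
        if f(t * a + (1 - t) * b) > t * f(a) + (1 - t) * f(b) + 1e-12:
            return False
    return True

x0, eps = 1.0, 0.25                               # f'' > 0 on all of [0.75, 1.25]
print(convex_on_interval(f, x0 - eps, x0 + eps))  # True
print(convex_on_interval(f, -0.25, 0.25))         # False: f'' < 0 around 0
```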
For (1) and (2) the answer should be yes, provided those conditions hold. Basically the argument should be the same as the ones for local maxima or minima when the first derivative equals 0, or for when an inverse function exists in a neighborhood (take a look at the inverse function theorem for more on this).
I'll have to think about (3), but with regard to the first two you should look at the inverse function theorem and then adapt the argument, with the Hessian positive-definiteness condition holding throughout some neighborhood.
The argument with regard to looking at neighborhoods should be the same: the difference is that you are looking at convexity instead of the existence of an inverse function.
Thanks! Question (3) is actually what I wanted to ask. I think a sufficient condition is that the Hessian of every g_i and h_j is positive definite at x_0, which would mean every g_i and h_j is locally convex around x_0. I haven't demonstrated that, but I strongly feel it is true. Do you think so? See also page 231 of the pdf (page 218 of the book) of Practical Methods of Optimization, Second Edition by Roger Fletcher, the paragraph starting with "It can be seen from the above theorems that a convexity assumption is in the nature of...".
However, if that is a sufficient condition, is that necessary?
I'm looking forward to your complete answer for (3). If I make any progress, I'll let you know. Thanks very much!
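In the meantime, here is a small numerical sketch I put together that suggests why the curvature of g matters (a toy example of my own, not from the .doc file): with a concave constraint g, the feasible set {g <= 0} fails a local-convexity spot-check at a boundary point.

```python
import random

def g(x, y):
    # g(x, y) = 1 - x**2 - y**2: the feasible set S = {g <= 0} is the
    # OUTSIDE of the unit disk; g is concave (its Hessian is diag(-2, -2))
    return 1.0 - x * x - y * y

def feasible(p):
    return g(*p) <= 0.0

def sample_near(x0, eps):
    # rejection-sample feasible points from a square neighborhood of x0
    while True:
        p = (x0[0] + random.uniform(-eps, eps),
             x0[1] + random.uniform(-eps, eps))
        if feasible(p):
            return p

def locally_convex(x0, eps, trials=20_000):
    # spot-check whether S restricted to the neighborhood is convex
    for _ in range(trials):
        a, b = sample_near(x0, eps), sample_near(x0, eps)
        t = random.random()
        c = (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])
        if not feasible(c):
            return False
    return True

# x0 sits on the boundary of the unit circle; chords between nearby feasible
# points can dip back inside the infeasible disk, so the check fails
print(locally_convex((1.0, 0.0), 0.3))  # False
```

If g were convex instead (so {g <= 0} is a convex set), the same check would pass everywhere; of course passing a sampled check is only evidence, not a proof.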