Optimization using Lagrange multipliers

Hello. I am trying to teach myself the Lagrangean method for constrained optimization, and I've had a couple of shots at a problem which I haven't been able to solve, mostly because it degenerates into algebraic nonsense. I would appreciate it if someone took a look at the first part of my solution, so I can be sure that I am not messing it up before I start solving the equation system it yields, which can obviously be messed up easily.

I have an objective function (which is to be minimized)

and a constraint function

I make a Lagrangean:

which obviously turns into

,

and I proceed taking partial derivatives, putting them equal to zero and thus creating a system of equations:

Is there anything obviously messed up so far? When I put the system of equations into a CAS, it gives "*false*" as the solution, so I guess that I am messing something up.

Re: Optimization using Lagrange multipliers

Hi JacobOCD, have you tried using the ''Jacobian''...

Sorry, I really thought it was funny...

I kind of feel bad now...

I'll try to help you then!

I think the problem is your ''Lagrangean''. (Well, I haven't seen it written that way before; I can't say it's wrong, but I have no idea what that is.)

The method for Lagrange multipliers in my school book says something like this (it's in French, so I'll translate):

To find the minimum and maximum values of f(x,y,z) subject to a constraint g(x,y,z)=k (there are some conditions you have to check first, like that the derivatives exist):

find every value of x, y, z, and λ such that

∇f(x, y, z) = λ∇g(x, y, z) and g(x, y, z) = k,

then evaluate f at those points.

Note:

∇f = λ∇g

is the same as the component equations

f_x = λg_x,  f_y = λg_y,  f_z = λg_z.

Now, this is from the student book I have.

This is how I learned Lagrange multipliers.
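The recipe above can be sketched in code. Here's a minimal sympy sketch on a made-up example (minimize f = x² + y² subject to x + y = 1) — the actual functions from the original post aren't shown, so this problem is purely illustrative:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Hypothetical example (NOT the problem from the thread):
# minimize f = x^2 + y^2 subject to g = x + y = 1.
f = x**2 + y**2
g = x + y

# grad f = lam * grad g, component by component, plus the constraint g = k:
eqs = [sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),
       sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),
       sp.Eq(g, 1)]

sol = sp.solve(eqs, [x, y, lam], dict=True)
# Single candidate: x = y = 1/2 with lam = 1; f there equals 1/2.
```

Evaluating f at each candidate point (here just one) and comparing the values is the final "evaluate f at those points" step.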

Does it help a bit more?

Note: you said, "and I proceed taking partial derivatives, putting them equal to zero and thus creating a system of equations".

Taking the partial derivatives is a step in the Lagrange multiplier method, although in the version above you never set them equal to zero; the equations you have to solve are ∇f = λ∇g together with the constraint.

If that wasn't clear enough, let me know and I'll try to explain more.
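For what it's worth, the two formulations agree: if you build the Lagrangean L = f − λg (as the original poster did) and set all of its partials to zero, ∂L/∂x = 0 and ∂L/∂y = 0 reproduce ∇f = λ∇g, and ∂L/∂λ = 0 reproduces the constraint. A quick sympy check on the same made-up example (minimize x² + y² subject to x + y = 1; not the thread's actual problem):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Made-up example again: minimize x^2 + y^2 subject to x + y = 1.
f = x**2 + y**2
g = x + y - 1                  # constraint rewritten as g = 0

# Lagrangean formulation: set every partial derivative of L to zero.
L = f - lam * g
system = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(system, [x, y, lam], dict=True)
# Same critical point as with grad f = lam * grad g: x = y = 1/2, lam = 1.
```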

Re: Optimization using Lagrange multipliers

This is the method I was taught:

Objective function:

Constraint:

Set up the system:

which is:

The second equation may be written:

This implies:

i) but the constraint precludes this value of .

ii)

Now, substitute for into the constraint:

Hence, we find the objective function is minimized by:
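The actual equations from this reply didn't survive, but the case-splitting pattern it describes ("the constraint precludes this value, so take the other branch") can be illustrated on a made-up problem, say minimizing f = x² + y² subject to xy = 1 (again, not the thread's problem):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Hypothetical problem: minimize f = x^2 + y^2 subject to xy = 1.
# The system 2x = lam*y, 2y = lam*x splits into cases: x = 0 (or y = 0)
# is precluded by the constraint xy = 1, which leaves y = x with lam = 2.
f = x**2 + y**2
eqs = [sp.Eq(2 * x, lam * y),
       sp.Eq(2 * y, lam * x),
       sp.Eq(x * y, 1)]

# Keep only real solutions (the y = -x branch forces x^2 = -1).
sols = [s for s in sp.solve(eqs, [x, y, lam], dict=True) if s[x].is_real]
# Real candidates: (1, 1) and (-1, -1), both with lam = 2 and f = 2.
```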

Re: Optimization using Lagrange multipliers

Guys, thanks a lot. I actually tried doing what MarkFL did, quite exactly, but I made some mistakes along the way and couldn't solve the equation system as efficiently.

Barioth, if you evaluate my expressions with the partials, you will see that they can be rewritten into the expressions Mark wrote, as well as the constraint expression we end up with. So it doesn't make any difference, just like writing 2x − 3y = 0 versus 2x = 3y: it is still the same equation. I've now done a few more problems using the Lagrangean method (this time successfully), and I've found a good resource with a sort of geometric proof of the method, so I think I am starting to make a whole lot of sense of it. It's actually quite simple, it seems.

In any case: thanks a lot for the help, again. My next project is deriving Taylor polynomials, so I guess I'll be back with more questions. Maybe I should take a few proper calculus courses instead of building my learning on random PDF notes and YouTube videos (Rofl)