Ok... I'm currently attempting this question...

You are faced with the problem of choosing a portfolio of stocks from a group of n stocks to hold for a year, within a given budget M (i.e. M is the amount of money you have to spend). Suppose that (from historical records) the returns on the stocks are known to have a multivariate normal distribution with mean vector μ and variance-covariance matrix V, i.e. if X is the (n × 1) vector of returns on the stocks in one year's time then

X ~ N(μ,V)
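
If you want to play with this distribution numerically, here is a minimal NumPy sketch of simulating the return vector X; the values of μ and V below are made up purely for illustration:

    import numpy as np

    # Made-up example values -- in practice mu and V come from historical estimates.
    mu = np.array([0.05, 0.08, 0.12])            # mean one-year returns
    V = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.020, 0.005],
                  [0.001, 0.005, 0.060]])        # variance-covariance matrix

    rng = np.random.default_rng(seed=0)
    X = rng.multivariate_normal(mu, V, size=10_000)  # simulated return vectors
    print(X.mean(axis=0))                            # should be close to mu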

If you purchase s_i units of stock i, then your total return will be the random variable

R = S^T X = s_1 X_1 + s_2 X_2 + ... + s_n X_n

where S^T = (s_1, s_2, ..., s_n). First, determine both the expected value (return) and variance (risk) of your return in terms of the s_i values (i.e. as a function of the vector S).

Obviously the bigger the mean return the better; however, the bigger the variance, the more risk is attached to your portfolio. Economists often consider that the utility attached to a portfolio like this can be expressed in the form

U = (Expected Return) - δ(Risk)

where δ is a constant which represents your individual aversion to risk. Note that if δ = 0, your utility depends only on expected return and you don't care how risky your investment is (so, e.g., you put everything into offshore oil exploration with a 90% chance of losing all your money and a 10% chance of becoming an overnight millionaire). The larger δ is, the lower your utility will be if your portfolio contains dodgy shares, so you invest in nice, steady, low-risk but lower-expected-return shares like banks (oops!) and blue-chip industrials. Of course, the value of δ is between you and your maker.
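
For a concrete feel for the trade-off, here is a small sketch evaluating U for one candidate portfolio S. It uses the standard facts that a linear combination S^T X of a multivariate normal vector has mean S^T μ and variance S^T V S (deriving these is exactly the first part of the question); the values of S and δ are made up:

    import numpy as np

    mu = np.array([0.05, 0.08, 0.12])
    V = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.020, 0.005],
                  [0.001, 0.005, 0.060]])
    S = np.array([40.0, 35.0, 25.0])   # hypothetical holdings
    delta = 1.0                        # arbitrary risk-aversion value

    expected_return = S @ mu           # E[S^T X] = S^T mu
    risk = S @ V @ S                   # Var(S^T X) = S^T V S
    U = expected_return - delta * risk
    print(expected_return, risk, U)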

Now do it scientifically: maximise U, which is a function of the s_i values, subject to the constraint

s_1 + s_2 + ... + s_n = M

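To see roughly what that maximisation looks like numerically, here is a sketch using scipy.optimize.minimize, assuming the budget constraint above and re-using the made-up numbers; note that nothing here rules out short positions (negative s_i):

    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.05, 0.08, 0.12])
    V = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.020, 0.005],
                  [0.001, 0.005, 0.060]])
    delta = 1.0
    M = 100.0                                    # the budget

    def neg_utility(S):
        return -(S @ mu - delta * (S @ V @ S))   # minimise -U to maximise U

    budget = {"type": "eq", "fun": lambda S: S.sum() - M}
    res = minimize(neg_utility, x0=np.full(3, M / 3), constraints=[budget])
    print(res.x)                                 # utility-maximising allocation
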
--------------------------

I found the expected value to be S^T μ, and the variance to be E[(S^T X)^2] - (S^T μ)^2, where ^T denotes transpose.

I think I'm wrong here, because I'm not sure how to maximise U = (Expected Return) - δ(Risk) when there's an E in it.
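
(My guess is that once the expectation is evaluated, e.g. E[S^T X] = S^T μ, there is no E left, so U is an ordinary deterministic function of the s_i, but I'd like to check.) For what it's worth, here is a quick numerical sanity check of my formulas, again with made-up values for μ, V and S:

    import numpy as np

    rng = np.random.default_rng(seed=0)
    mu = np.array([0.05, 0.08, 0.12])
    V = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.020, 0.005],
                  [0.001, 0.005, 0.060]])
    S = np.array([40.0, 35.0, 25.0])             # hypothetical holdings

    R = rng.multivariate_normal(mu, V, size=100_000) @ S  # samples of S^T X
    print(R.mean(), S @ mu)                      # sample mean vs S^T mu
    print(R.var(), S @ V @ S)                    # sample variance vs S^T V S

If the printed pairs agree, then the variance should simplify to S^T V S, which would leave U free of any E.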

Thanks for any help.