# Thread: MLE of amplitude rescaling

1. ## MLE of amplitude rescaling

if $y = ax + n$, where $y$ is a d-dimensional vector, $a$ is a scalar (amplitude rescaling factor), and $n$ is a d-dimensional vector drawn from a zero-mean gaussian, what is the MLE of $a$ given $x$ and $y$?

the way i see it, this is equivalent to minimizing the sum of squared errors
$\sum_{i=1}^d(y_i - ax_i)^2$. taking the derivative with respect to $a$ and setting it equal to zero gives $-2\sum_{i=1}^d x_i(y_i - ax_i) = 0$, so i end up with $a = \frac{x^Ty}{x^Tx}$. for some reason this seems too simple, and intuitively it doesn't make a whole lot of sense. does this look right?
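a quick numerical sanity check (a sketch in NumPy; the simulation setup and names are mine, not part of the question): simulate $y = ax + n$ and compare the closed-form estimate against a brute-force grid minimization of the squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
a_true = 2.5
x = rng.normal(size=d)
n = 0.1 * rng.normal(size=d)      # zero-mean Gaussian noise
y = a_true * x + n

# closed-form MLE: a = x^T y / x^T x
a_hat = (x @ y) / (x @ x)

# brute-force check: the SSE is quadratic in a,
# SSE(a) = y^T y - 2a x^T y + a^2 x^T x, so scan a grid of candidates
grid = np.linspace(0.0, 5.0, 100001)
sse = (y @ y) - 2.0 * grid * (x @ y) + grid**2 * (x @ x)
a_grid = grid[np.argmin(sse)]

print(a_hat)   # close to a_true, and agrees with the grid minimizer
```

the two estimates agree to within the grid spacing, which is what the closed form predicts.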

edit: $n$ is drawn from a zero-mean gaussian with $vI$ covariance ($v$ is a scalar, $I$ is the identity matrix).

i think it's the same, except the log-likelihood now puts a $\frac{1}{2v}$ in front of the sum of squares, so you're minimizing $\frac{1}{2v}\sum_{i=1}^d(y_i - ax_i)^2$, which at first made me think $v$ would show up in the answer as $a = v\frac{x^Ty}{x^Tx}$

edit 2: $v$ is just a positive constant scaling the whole objective, so it cancels when the derivative is set to zero: $a = \frac{x^Ty}{x^Tx}$ as before
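writing out the log-likelihood makes the cancellation explicit (standard gaussian algebra, just filling in the step):

```latex
% log-likelihood for y = a x + n, with n ~ N(0, vI)
\log p(y \mid x, a)
  = -\frac{1}{2v}\sum_{i=1}^d (y_i - a x_i)^2 - \frac{d}{2}\log(2\pi v)

% setting the derivative with respect to a to zero:
\frac{\partial}{\partial a}\log p(y \mid x, a)
  = \frac{1}{v}\sum_{i=1}^d x_i (y_i - a x_i) = 0
\quad\Longrightarrow\quad
a = \frac{x^T y}{x^T x}
```

the $\frac{1}{v}$ multiplies the entire derivative, so it drops out of the stationarity condition and the MLE doesn't depend on the noise variance.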

2. Hello,

I think you're correct. But I've never applied the MLE to a linear model...

You may be interested in this part of a Wikipedia article: Linear model - Wikipedia, the free encyclopedia, which seems to confirm your result.

The only difference is that the article requires the $X$ there to be of full rank (for an $n \times m$ matrix, "full rank" means that its rank is $\min(n,m)$). But I'm not quite sure what $X$ corresponds to here.

3. cool, thanks

in my case $x$ and $y$ are just vectors, so $x^Ty$ and $x^Tx$ are scalars and there's no need to worry about rank. I got good results with this solution (on a nearest-neighbor classifier), so I'm assuming it's correct.
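in case it helps anyone, here's roughly how I used it (a sketch; `rescaled_distance`, `nearest_neighbor`, and the example data are hypothetical, not my actual classifier): fit the MLE $a$ for each stored template against the query, then compare residual distances.

```python
import numpy as np

def rescaled_distance(x, y):
    """Squared distance between y and the best amplitude-rescaled
    version of x, using the MLE a = x^T y / x^T x."""
    a = (x @ y) / (x @ x)
    r = y - a * x
    return r @ r

def nearest_neighbor(templates, y):
    """Index of the template closest to y, up to amplitude rescaling.
    (hypothetical helper, just to illustrate the idea)"""
    return min(range(len(templates)),
               key=lambda i: rescaled_distance(templates[i], y))

templates = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 0.0, -1.0])]
y = np.array([2.0, 4.0, 6.0])          # a scaled copy of the first template
print(nearest_neighbor(templates, y))  # 0
```

since $y$ is exactly $2\times$ the first template, its rescaled distance is zero and it wins regardless of the amplitude difference.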

thanks again.
