Hi again,


This one will probably be very simple for those of you well-versed in statistics.

I am looking into discriminant analysis at the moment (as per my recent thread in this same sub-forum), and from there I got onto logistic regression. I came across the following example:

$L(\boldsymbol\beta \mid x_1, x_2, \ldots, x_m) = \prod_{j=1}^{k} \left[\frac{1}{1+\exp(-\boldsymbol{x}_j^{\mathrm{T}}\boldsymbol\beta)}\right]^{y_j} \left[1-\frac{1}{1+\exp(-\boldsymbol{x}_j^{\mathrm{T}}\boldsymbol\beta)}\right]^{1-y_j}$

Where:

$L(\boldsymbol\beta \mid x_1, x_2, \ldots, x_m)$ = the likelihood function

$\boldsymbol\beta$ = a vector of scalar weight parameters. My understanding is that logistic regression chooses the values in $\boldsymbol\beta$ that maximise this likelihood, so as to best discriminate between my cases.

$\prod$ = the product symbol, i.e. I take the product of the respective probabilities from $j = 1$ to $k$.

$\boldsymbol{x}_j$ = the vector of predictor values for point $j$, upon which the probability depends.

$y_j$ = a dichotomous variable (0 or 1) indicating whether point $j$ belongs to class 1 or class 2, the two classes I am trying to discriminate between.
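To make sure I'm reading the formula correctly, here is a small Python sketch of how I understand it (the names `X`, `beta`, and `y` are my own; I'm treating each row of `X` as one point $\boldsymbol{x}_j$):

```python
import numpy as np

def likelihood(beta, X, y):
    """Likelihood L(beta | x_1, ..., x_m) as I read the formula above.

    X    : (k, m) array -- one row per point j, one column per predictor
    beta : (m,) vector of weight parameters
    y    : (k,) vector of 0/1 class labels
    """
    # p_j = 1 / (1 + exp(-x_j^T beta)) for each point j
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    # product over j = 1..k of p_j^y_j * (1 - p_j)^(1 - y_j)
    return np.prod(p**y * (1.0 - p)**(1 - y))
```

If this reading is right, fitting the model means choosing `beta` to maximise this value (in practice one usually maximises its logarithm, for numerical stability).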

__What I'm stuck on is what point j is supposed to be (aside from an index running from 1 to k). Moreover, what is k?__

The purpose of the $y_j$ and $1-y_j$ exponents seems to be to remove one of the two terms in the product so that only one goes through. That is:

[left term]$^{y_j}$ [right term]$^{1-y_j}$

...turns into:

[left term]$^1$ [right term]$^0$

... when $y_j = 1$. In this case the above equals:

[left term]

... this then gets multiplied by the next probability until we reach j = k.
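To convince myself of this, a tiny Python check (where `p` stands for the left term, i.e. the modelled probability for point $j$):

```python
def term(p, y_j):
    # y_j = 1 keeps the left factor:  p**1 * (1-p)**0 = p
    # y_j = 0 keeps the right factor: p**0 * (1-p)**1 = 1 - p
    return p**y_j * (1 - p)**(1 - y_j)
```

So each factor in the product is just the probability the model assigns to the class that point $j$ actually belongs to.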

So, again: what is $j$, and how does it link to the other parameters?

Thanks again for any help explaining this. I'm sure I must have missed some piece of basic understanding on this.
