# Variation on Recursive Bayesian Estimation

• Jun 15th 2013, 07:45 AM
dudyu
Variation on Recursive Bayesian Estimation
Hi,
I'm doing research on a variation of recursive Bayesian estimation: a Bayesian update with a 'time smoothing' term:

${\hat{P}}_{k+1}(x) = \eta\,\cfrac{{\hat{P}}_{k}(x)\,p(z_{k+1}|x)}{\int_{-\infty}^{\infty}{\hat{P}}_{k}(x)\,p(z_{k+1}|x)\,dx} + (1-\eta)\,{\hat{P}}_{k}(x)$

where:
${\hat{P}}_{k}(x)$ is the estimated prior density at time $k$,
$z_{k} \in \mathbb{R}$ is the measurement presented at time $k$,
$\eta \in [0,1]$ is the learning-rate constant, and
$p(z|x)$ is the likelihood function: $p(z|x) = \cfrac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\cfrac{(z-x)^{2}}{2\sigma^{2}}\right\}$
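To make the recursion concrete, here is a minimal numerical sketch of one update step on a discretized grid. The grid bounds, initial prior, true value, and parameter choices are all my own illustrative assumptions, not part of the model above:

```python
import numpy as np

def smoothed_bayes_update(P, x, z, sigma, eta):
    """One step of the time-smoothed recursive Bayesian update.

    P     : current estimated prior, sampled on the grid x
    z     : new scalar measurement
    sigma : likelihood standard deviation
    eta   : learning-rate constant in [0, 1]
    """
    # Gaussian likelihood p(z | x)
    lik = np.exp(-(z - x) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    post = P * lik
    post /= np.trapz(post, x)          # normalize the Bayesian posterior
    return eta * post + (1 - eta) * P  # blend with the previous estimate

# Illustrative run: standard normal initial prior, noisy measurements of x = 1.0
x = np.linspace(-5, 5, 1001)
P = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
rng = np.random.default_rng(0)
for _ in range(50):
    z = 1.0 + 0.5 * rng.standard_normal()
    P = smoothed_bayes_update(P, x, z, sigma=0.5, eta=0.3)
```

Note that the update preserves normalization: if $\hat{P}_k$ integrates to 1, so does the convex combination $\eta \cdot 1 + (1-\eta) \cdot 1$.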

My question is not regarding the math itself, but regarding its context:
This recursive relation does not work well as an estimator, but it does display some interesting behavior (clustering, bifurcations as a function of sigma and eta).
Can anyone suggest a field or a case where this recursive relation is, or could be, manifested? Perhaps a natural or artificial system where recursive Bayesian estimation is applied, but the system fails to completely "forget" its previous state, and therefore a time-smoothing term is present.

Thank you
• Jun 15th 2013, 06:09 PM
chiro
Re: Variation on Recursive Bayesian Estimation
Hey dudyu.

Have you considered something like an artificial neural network?
• Jun 23rd 2013, 09:19 AM
dudyu
Re: Variation on Recursive Bayesian Estimation
Quote:

Originally Posted by chiro
Hey dudyu.

Have you considered something like an artificial neural network?

Hi, thanks for your reply. I haven't considered that, actually. How can this model be applied to a neural network?
• Jun 23rd 2013, 11:36 PM
chiro
Re: Variation on Recursive Bayesian Estimation
Basically what happens is that the network updates itself with probability information, so that the weights in the network reflect the best model given the training data and the information used to tune the network itself.

An artificial neural network is just a way of calculating weights that take inputs and create a set of weighted outputs that are used to model some system (black box).

If the weights are well calibrated, then the neural network represents a good model of the phenomenon.

You can also update the weights just like you update the posterior distribution given some theoretical prior.