Challenge yourself - Help a cognitive science student decipher Dynamic logic(neural)
My name is Kim and I'm currently studying cognitive science in Gothenburg, Sweden.
I have a problem I was hoping someone would be kind enough to help me with. Before I describe the problem you have to know that my mathematical knowledge isn't very advanced... at all. The "challenge" is to explain it in a way that I can understand despite my lack of background knowledge.
I'm interested in the mathematical description of mechanisms of the mind called "Neural modeling fields", developed by Leonid Perlovsky, but the mathematical formulation of the mind's mechanism (the knowledge instinct) that Perlovsky calls "Dynamic logic" is impossible for me to understand! I understand that DL maximizes a similarity measure (L) between top-down signals (M) (mental concept-models) and bottom-up signals (X) (sensory input) in a process 'from vague to crisp'. But the details of how this is actually achieved are lost on me.
Sure, I can look up the meaning of mathematical symbols like "∈" or a capital Pi, but I have no idea why you would write n∈N under a capital Pi.
Neural modeling fields - Wikipedia, the free encyclopedia
This is the Wikipedia page describing Neural modeling fields and Dynamic logic. I have a couple of journal articles that explain DL, and they're all slightly different (why?); I could send them as attachments (I think) if the need should arise. But I would be forever grateful if someone would try to explain it to me in plain English and/or simple math (I know some logic, so that would be fine too - anything but this nonsensical assembly of letters and symbols, really!).
Re: Challenge yourself - Help a cognitive science student decipher Dynamic logic(neural)
I just read the wiki and the basic idea is that you are using probabilistic information to estimate the quantities Sm (which, according to the wiki entry, are the parameters of the concept-models Mm, estimated from the signal data and other information).
The way it estimates this is by Maximum Likelihood Estimation. The basic idea is that you pick, as your estimates of the variables (Sm in this case), the values that make the observed data as probable as possible under the assumed distribution.
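Here's a toy illustration of that idea (my own example, not from the NMF papers): estimating the mean of noisy sensor readings by trying candidate values and keeping the one that makes the data most probable. It also answers your capital-Pi question: the likelihood is a product over every signal n in the set N, which is exactly what a capital Pi with "n∈N" underneath means; taking logs turns that product into a sum.

```python
import numpy as np

# Toy maximum-likelihood estimation (my own example, not from the NMF papers):
# estimate the mean mu of Gaussian sensor readings X(n) by picking the mu
# that makes the observed data most probable.

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=1.0, size=1000)  # simulated sensor signals X(n)

def log_likelihood(mu, x, sigma=1.0):
    # log of the product (capital Pi over n) of Gaussian densities
    # = sum of the log densities
    return np.sum(-0.5 * ((x - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

candidates = np.linspace(0, 10, 1001)
best_mu = candidates[np.argmax([log_likelihood(mu, data) for mu in candidates])]

print(best_mu)        # close to the sample mean ...
print(np.mean(data))  # ... which is the exact MLE of a Gaussian mean
```

The grid search is just for transparency; in practice you would maximize analytically or with an optimizer, but the principle is the same: the estimate is whichever parameter value wins the "most probable data" contest.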
Maximum-likelihood estimators are often among the best estimators you can get in statistics (and even when they are biased, they are usually still very good).
In short, it takes the signal data X(n), the concept-models M, and the parameters Sm that you are trying to estimate, and it specifies a probability distribution (also known as the likelihood); the estimates are the values of Sm at which that likelihood is highest.
When you maximize a function in mathematics you take its derivative, set it to zero, and solve for the input values. You also have to check that the second derivative is negative there for the point to be a local maximum, and if you get multiple solutions you need to evaluate each one to see whether it is the global maximum or just a local one.
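A worked toy example of that recipe (my own function, nothing to do with NMF): for f(x) = -(x - 3)² + 4, the derivative f'(x) = -2(x - 3) is zero at x = 3, and f''(x) = -2 < 0 confirms it's a maximum. The code below checks this numerically by finding where the derivative crosses zero:

```python
# Worked toy example (my own, not from the papers): maximize
# f(x) = -(x - 3)**2 + 4 by setting the derivative to zero.
#
#   f'(x)  = -2*(x - 3)  -> zero at x = 3
#   f''(x) = -2          -> negative, so x = 3 is a maximum

def f(x):
    return -(x - 3) ** 2 + 4

def f_prime(x):
    return -2 * (x - 3)

# Find the zero of f'(x) by bisection as a numerical sanity check.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f_prime(mid) > 0:  # derivative still positive: maximum is to the right
        lo = mid
    else:
        hi = mid

print(mid)     # close to 3.0
print(f(mid))  # close to 4.0, the maximum value
```

For the NMF likelihood the same recipe applies in principle, but the equations usually can't be solved in closed form, which is why the iterative dynamic-logic procedure is used instead.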