
Math Help - Neural Networks

  1. #1
    Junior Member (Joined: Nov 2007, Posts: 30)

    Neural Networks

    Hi guys, I watched a video on YouTube called "Next Generation Neural Networks" by Geoffrey Hinton, and I have some questions about the presentation.



    At 8:30, we are shown a formula involving Gibbs sampling. He describes the first term in the brackets, <v_i * h_i>^0, as "statistics measured with the data", but I'm just not quite sure what is meant by that. Also, is it a dot product between the two vectors, or some other operation?

    Also, what is meant by the term to the left of the brackets? He mentions that the weights determine the energies linearly, that the probabilities are exponential functions of the energies, and that the log probabilities are therefore linear functions of the weights. I just can't draw a picture in my head of what all this means. (Does it have anything to do with multivariate distributions?)
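    To make the question concrete, here is my best transcription of what I think is on the slide (I may have the notation or indices slightly off, so please correct me):

    ```latex
    E(v,h) = -\sum_{i,j} v_i \, h_j \, w_{ij}, \qquad
    p(v,h) \propto e^{-E(v,h)}, \qquad
    \Delta w_{ij} = \varepsilon \left( \langle v_i h_j \rangle^{0} - \langle v_i h_j \rangle^{\infty} \right)
    ```

    My current guess is that the angle brackets denote averages of the product of the two unit states over samples (so element-wise products averaged, not a dot product over whole vectors), and that the term on the left is the change to a single weight w_ij scaled by a learning rate. Is that right?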

    He explains the concept behind the formula well; I just don't understand what the individual parts of the formula mean exactly.



    I'm also somewhat confused by neural networks in general. In particular, I don't quite understand exactly how they work. I've tried to understand them for quite some time now and have read a bunch of articles on backprop, perceptrons and Hopfield nets, and although I get the general gist of what is going on, I don't understand it well enough to start playing around with these various techniques. In the scope of the presentation on digit recognition, I would like to clarify a few things that will hopefully fill in some of my missing puzzle pieces.

    The first thing we need to do is train the network, which was done using the RBM. My question is this: in MATLAB, I got the MNIST files and converted them into a format that MATLAB recognises (all the code is given on his site). As each digit is 28x28 pixels, we are essentially creating an array (which represents the image in binary form) with one row of 784 values for each digit in the training data. This I understand, more or less. The next step is where I get confused...

    After we take the first number in the training set, how do we input it into the RBM so that it actually starts to learn? The RBM shown at 6:50 has 2 layers, 1 hidden and 1 visible. Does that mean we have 784 visible neurons, each with the activation function outlined at 5:50 (this somehow doesn't make sense, as there's really nothing to activate for each neuron if it just receives a 0 or a 1 from the array in question)? Or does each neuron simply represent x_i, where i is the i-th binary element of the 784-dimensional array? i.e.

    () () () Hidden Units

    (0) (1) Visible units (there will be 784 of these)

    If this is indeed the case, how do we determine the number of hidden units needed? At 18:43, he uses only 500 hidden neurons for a 784-dimensional array.
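    To test my reading of the diagram, here's how I'd compute the hidden activations in Python/NumPy (I don't have his MATLAB code in front of me, so all the names here are my own, and the 784/500 sizes are just the ones from the talk):

    ```python
    import numpy as np

    def sigmoid(x):
        # the logistic activation function I think he shows at 5:50
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    v = rng.integers(0, 2, size=784)             # one binarised 28x28 digit, flattened
    W = 0.01 * rng.standard_normal((784, 500))   # visible-to-hidden weights (small random init)
    b = np.zeros(500)                            # hidden biases

    p_h = sigmoid(v @ W + b)                     # P(h_j = 1 | v) for each hidden unit
    h = (rng.random(500) < p_h).astype(int)      # stochastic binary hidden states
    ```

    So my guess is that each visible unit just holds its pixel's 0/1 value, and the "activation" only happens going up into the hidden layer. Is that the right picture?
    
    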

    I just can't picture how the information flows from your 28x28 image into a binary state, and then on to a further binary state.


    Next...

    Considering that the formula at 9:30 was obviously too slow, the formula at 10:39 was proposed. If we use that, does that mean we take the 784x1 vector and do the two steps, i.e. up to the hidden layer -> back to the visible layer -> back up to the hidden layer, then move on to the second vector in the training data, and simply repeat until we have gone through all the training data? I assume the weights are not reset after each training vector is used? (In which case, does that mean we need to train the network on each class of data separately, i.e. 0's first, then save the results, do 1's next, etc.? But that seems to defeat the purpose and goes against the idea of trying to find p(image).)
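    Writing out the procedure I have in mind as a Python/NumPy sketch, in case it makes my question clearer (this is my guess at the up/down/up update from 10:39, not his actual code; the random data is a stand-in for the MNIST rows, and the learning rate is arbitrary):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 784, 500, 0.1
    W = 0.01 * rng.standard_normal((n_vis, n_hid))
    a = np.zeros(n_vis)   # visible biases
    b = np.zeros(n_hid)   # hidden biases

    data = rng.integers(0, 2, size=(10, n_vis)).astype(float)  # stand-in for binarised digits

    for v0 in data:  # all classes mixed in one stream, weights carried over
        # up: sample hidden states from the data vector
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(n_hid) < ph0).astype(float)
        # down: reconstruct the visible vector
        pv1 = sigmoid(h0 @ W.T + a)
        v1 = (rng.random(n_vis) < pv1).astype(float)
        # up again: hidden probabilities from the reconstruction
        ph1 = sigmoid(v1 @ W + b)
        # update from the difference of the two sets of statistics;
        # note the weights are NOT reset between training vectors
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)
    ```

    Is this roughly what's intended, with all the digit classes interleaved rather than trained separately?
    
    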

    Well, I think that's enough for now. I know I kind of jumped the gun with the first question; it really should have been asked after the clarifications about the workings of neural networks in general. Anyway, I've written way too much. Hopefully one of you will be kind enough to go through my uneducated drivel and come up with a response.

    Thanks.
    Last edited by Atomic_Sheep; October 8th 2009 at 03:59 AM.

  2. #2
    Junior Member (Joined: Nov 2007, Posts: 30)
    Got another question... decided to plonk it in this thread...

    Must the inputs into neural networks be binary, or is it possible to have very different sets of data, with very different ranges, as inputs?

  3. #3
    Junior Member (Joined: Nov 2007, Posts: 30)
    Hi guys, still struggling with these neural networks. I've got a basic RBM (Restricted Boltzmann Machine), and I have data vectors of, let's say, 500 values. How do I go about training the first layer? My data is not binary and doesn't lie in the range 0-1, so I don't understand how you feed in a vector of such information. The subsequent RBMs used in a greedy learning network I think I'll manage, since there you train binary units with binary data, but I just don't understand what data you are meant to use initially and how to implement it.
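    In case it helps anyone answer: the workaround I've been considering is to min-max scale each vector into [0,1] and then either treat the scaled values as activation probabilities for the visible units, or sample binary states from them. A rough Python/NumPy sketch of what I mean (the normal random data is just a stand-in for my real-valued vectors; I have no idea if this is the correct approach):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(50.0, 10.0, size=500)   # stand-in for one real-valued data vector

    # min-max scale into [0, 1]
    x_scaled = (x - x.min()) / (x.max() - x.min())

    # option 1: feed x_scaled in directly as visible-unit probabilities
    # option 2: sample a binary visible vector from those probabilities
    v = (rng.random(500) < x_scaled).astype(int)
    ```

    Is something like this acceptable, or do real-valued inputs need a different kind of visible unit altogether?
    
    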

