> Multi-Layer Perceptron
nd00jan
post Aug 16, 2006, 07:34 AM
Post #1


Newbie
*

Group: Basic Member
Posts: 1
Joined: Aug 16, 2006
Member No.: 5504



I'm new to Perceptrons and things like that.
For instance on this page:
http://diwww.epfl.ch/mantra/tutorial/engli...html/index.html
...at the end there is a question like: "How often does the network find a solution?".

When do we know it has found a solution? What indicates that a solution has been found?
Can't really get a grip on this...
lucid_dream
post Aug 16, 2006, 07:56 AM
Post #2


God
******

Group: Admin
Posts: 1711
Joined: Jan 20, 2004
Member No.: 956



Single-layer perceptrons are only capable of learning linearly separable patterns. However, feedforward neural networks with three or more layers (i.e., multilayer perceptrons) have far greater processing power.
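To make "finding a solution" concrete, here is a minimal sketch (plain NumPy; the function name and the AND example are my own, not from the tutorial you linked) of the classic perceptron learning rule. The network has found a solution when a full pass over the training set produces no misclassifications, so the weights stop changing. On a problem that is not linearly separable (e.g. XOR) that never happens, no matter how long you train.

[code]
import numpy as np

def train_perceptron(X, y, lr=0.1, max_epochs=100):
    """Perceptron learning rule; stops once every sample is classified correctly."""
    w = np.zeros(X.shape[1])   # weights
    b = 0.0                    # bias
    for epoch in range(max_epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            if pred != target:
                # update only on mistakes
                w += lr * (target - pred) * xi
                b += lr * (target - pred)
                errors += 1
        if errors == 0:
            return w, b, epoch   # a solution has been found
    return None                  # no solution within max_epochs (e.g. XOR)

# Linearly separable toy problem: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
print(train_perceptron(X, y))
[/code]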

The universal approximation theorem for neural networks states that every continuous function mapping intervals of real numbers to an output interval of real numbers can be approximated arbitrarily closely by a multi-layer perceptron with just one hidden layer, provided the hidden layer has enough units and a suitable non-linear activation function.
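For reference, the network the theorem refers to has this simple form (a minimal sketch; the shapes and the tanh activation are my choice): a single hidden layer of non-linear units followed by a linear output. The claim is that, with enough hidden units and suitable weights, this function can get arbitrarily close to any continuous function on a closed interval.

[code]
import numpy as np

def mlp_one_hidden(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: y = W2 * sigma(W1 x + b1) + b2."""
    h = np.tanh(W1 @ x + b1)   # non-linear hidden layer
    return W2 @ h + b2         # linear output layer

# e.g. a net with 8 hidden units mapping R -> R
W1 = np.random.randn(8, 1); b1 = np.random.randn(8)
W2 = np.random.randn(1, 8); b2 = np.random.randn(1)
print(mlp_one_hidden(np.array([0.5]), W1, b1, W2, b2))
[/code]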

Multi-layer networks use a variety of learning techniques, the most popular being back-propagation. Here the output values are compared with the correct answers to compute the value of some predefined error function. The error is then fed back through the network, and the algorithm uses this information to adjust the weights of each connection so as to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles, the network will usually converge to a state where the error of the calculations is small; in that case one says the network has learned a certain target function.

To adjust the weights properly, one applies a general method for non-linear optimization called gradient descent: the derivative of the error function with respect to the network weights is calculated, and the weights are then changed so that the error decreases (i.e., going downhill on the surface of the error function). For this reason back-propagation can only be applied to networks with differentiable activation functions.
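A minimal sketch of that training loop (NumPy, sigmoid activations, a squared-error function, and XOR as the toy task; all of these choices are mine, not taken from the tutorial linked above): compute the error, take its derivative with respect to the weights, and step the weights downhill.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a single-layer perceptron cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)            # hidden activations
    y = sigmoid(h @ W2 + b2)            # network outputs
    error = 0.5 * np.sum((y - t) ** 2)  # predefined error function

    # backward pass: derivatives of the error w.r.t. the weights
    dy = (y - t) * y * (1 - y)          # output-layer delta (sigmoid derivative)
    dh = (dy @ W2.T) * h * (1 - h)      # hidden-layer delta, error fed back

    # gradient-descent step: move the weights downhill on the error surface
    W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(axis=0)

print(error, y.round(3))
[/code]

With full-batch gradient descent like this, the outputs usually drift toward (0, 1, 1, 0) over a few thousand training cycles, i.e. the network converges to a state where the error is small, as described above; a different random seed or learning rate may need more epochs or occasionally stall in a poor local minimum.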

Learn more here