BrainMeta · Connectomics

> Evolving intelligence.
psychEE
post Nov 07, 2013, 08:04 PM
Post #1


Newbie
*

Group: Basic Member
Posts: 2
Joined: Nov 07, 2013
Member No.: 36292



Hi,

I have a little hobby project: I'd like to make an artificial neural-based pet and share video footage of what it does here online as I develop it. I intend to develop it to evolve on its own as much as possible and grow in complexity over time. This is going to be more complicated than mere "genetic" algorithms, and eventually the pet could possibly become complicated enough to simulate a real animal "lab rat" quite realistically on its own.

What I have already done:

To get started I made the pet's virtual world, which will need to be redone/expanded as the project progresses.
I have built a *very* basic/crude 2D simulation world (pygame), with walls, rewards, and punishments, and a simulation body which can navigate around in this world by my pressing keys. The body supplies localized senses of smell, touch, and a subset of screen pixels (black and white) for "vision", along with tank-track body motion for moving the body right or left, or moving the eyesight location.

The world has an interface function which returns the status of the senses and allows commanding the body to do various tasks. Right now I am issuing those commands manually, but eventually they should come from an AI brain.
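
Roughly, the interface works something like the Python sketch below. This is only an illustration; the names (PetWorld, get_senses, command) and the details are made up for the example, not copied from my actual code:

CODE
import random

class PetWorld:
    """Toy 2D world with walls, rewards, and punishments (illustrative only)."""

    ACTIONS = ("forward", "turn_left", "turn_right", "eye_near", "eye_far")

    def __init__(self, width=20, height=20):
        self.width, self.height = width, height
        self.pet_pos = [width // 2, height // 2]
        self.reward = 0.0

    def get_senses(self):
        """Return the current sensory snapshot for whatever is driving the body."""
        return {
            "vision": [random.randint(0, 1) for _ in range(32)],  # stub: thresholded pixels
            "smell": random.random(),                             # stub: localized smell
            "touch": False,                                       # stub: wall contact
            "reward": self.reward,   # positive = reward, negative = punishment
        }

    def command(self, action):
        """Apply one motor command and advance the world one clock tick."""
        assert action in self.ACTIONS
        if action == "forward":
            self.pet_pos[0] = min(self.width - 1, self.pet_pos[0] + 1)
        # walls, rewards, and the other tank-track motions would be handled here
        self.reward = 0.0            # rewards are consumed once sensed

world = PetWorld()
for _ in range(3):
    senses = world.get_senses()      # right now I read these and press keys myself
    world.command("forward")         # eventually the ANN brain issues these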

I have developed a neuron which can learn arbitrary sequential or parallel boolean logic, and which can be run in a very fast Xilinx Virtex FPGA. The neuron is my own unique design; it learns based on negative feedback alone, and is designed to become progressively more resistant to changes which conflict with earlier learning.
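
To give a flavor of the idea in software (the real neuron is an FPGA design and differs in detail, so treat this strictly as a rough analogue I'm sketching here, not the actual circuit):

CODE
import random

class NegativeFeedbackNeuron:
    """Boolean neuron that only changes when punished, and whose weights get
    harder to change the longer they survive without punishment."""

    def __init__(self, n_inputs):
        self.weights = [0] * n_inputs        # each input excites (+1), inhibits (-1), or is ignored (0)
        self.resistance = [0] * n_inputs     # how "settled" each weight is
        self.threshold = 1

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total >= self.threshold else 0

    def punish(self, inputs):
        """Negative feedback: perturb only the least-settled weight among the active inputs."""
        candidates = [k for k, x in enumerate(inputs) if x] or list(range(len(inputs)))
        i = min(candidates, key=lambda k: self.resistance[k])
        self.weights[i] = random.choice((-1, 0, 1))
        self.resistance[i] = 0               # a freshly changed weight is fragile again

    def age(self):
        """No punishment this tick: every weight becomes a little more resistant."""
        self.resistance = [r + 1 for r in self.resistance]

# Tiny demo: punish the neuron whenever it disagrees with OR(a, b).
neuron = NegativeFeedbackNeuron(2)
for _ in range(2000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    if neuron.fire([a, b]) != (a | b):
        neuron.punish([a, b])
    else:
        neuron.age()
print(neuron.weights)   # usually settles at [1, 1]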

I have already figured out how to interface the neurons to the pet world, so that the neural processor can run the pet, watch which keys I press, and receive all sensory data including positive rewards and punishments. I can also program special keys to communicate with the pet directly, if so desired, for manual training, etc.

What I would like to do:

I would like to develop an ANN brain similar to what a real animal has, able to recognize rewards, punishments, and walls, but based on evolutionary principles. What I am most interested in is the design of algorithms to automatically evolve a neural net pet when it is operated with a pre-programmed body that has preferences I have selected in advance, such as becoming hungry, full, hurt, etc.

Right now, that means I want to write some kind of algorithm to "grow" a brain by randomly/semi-randomly connecting neurons, then benchmark the resulting brains for various kinds of success at tasks, and keep the more successful solutions from parallel runs (competitive trials). The goal is ultimately to automate the evolutionary ANN process, so that the pet becomes more effective, in various worlds, at doing whatever it takes to keep its body "happy".
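
A bare-bones sketch of the loop I have in mind; evaluate() is a placeholder for actually running a brain against the pet world and scoring how "happy" the body stayed, and the wiring representation is just for illustration:

CODE
import random

def random_brain(n_neurons=64, n_synapses=256):
    """'Grow' a brain as a random wiring list of (source neuron, target neuron) pairs."""
    return [(random.randrange(n_neurons), random.randrange(n_neurons))
            for _ in range(n_synapses)]

def mutate(brain, n_neurons=64, rate=0.05):
    """Semi-random growth step: rewire a small fraction of the synapses."""
    return [(random.randrange(n_neurons), random.randrange(n_neurons))
            if random.random() < rate else synapse
            for synapse in brain]

def evaluate(brain):
    """Placeholder fitness: run the pet in the world and score its 'happiness'."""
    return random.random()

population = [random_brain() for _ in range(16)]          # parallel competitive trials
for generation in range(100):
    ranked = sorted(population, key=evaluate, reverse=True)
    survivors = ranked[:4]                                # keep the more successful runs
    population = survivors + [mutate(random.choice(survivors)) for _ in range(12)]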

This is to be low-level evolution, at or near the neuron level... sub-symbolic, ANN-capable evolution.

To do everything randomly is impractical, because it would take too much time (I don't have millions of years :) ), so I want to start by designing part of the brain manually, with an idea I have, and see if it inspires suggestions on how I might go about designing a semi-random evolution algorithm, or on the problems/advantages of various things you might know about that I don't. (I have several other ideas, but I'm interested in a conversation... not teaching a course... :) )

---------------------
An example, manually done: the visual perception for the pet...

The "eye" of the pet is a linear array of binary values in the virtual world; Each time the world clocks, the array is updated with whatever the virtual pet "sees" whether close at hand, or if the eye is focused far -- what is farther away.

The values in the array represent pixels from the pet's screen, thresholded so that a black-and-white image of an item's outline can be discerned in the array's data values.
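
In code terms, filling the eye array each world clock amounts to something like this (the threshold value is arbitrary for the example):

CODE
def eye_array(pixels, threshold=128):
    """pixels: grayscale values (0-255) along the eye's line of sight."""
    return [1 if p < threshold else 0 for p in pixels]    # dark pixel = part of an outline

print(eye_array([250, 240, 30, 20, 25, 245]))             # -> [0, 0, 1, 1, 1, 0]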

The general purpose of vision is to categorize whatever is being seen by the pet, in all its various poses, so that the pet can decide what it wants to do about the "thing" in front of its eyes (visual taxonomy).

There are many complications I intend to naively ignore, such as multiple objects being in the field of view and "where" they are in that field; I don't know how to solve all that, so I want to discuss a reduced problem: identify one thing in the picture focused on by the "eye", and how that might be done by a neural net, to give you an idea of what kind of networks I'm thinking I'd like to have automatically created.

For my pet, I think there will probably be fewer than 15 items on the screen which it needs to know about,
so I want to take all of the vision pixels in the array, no matter how many there are, and turn them into a number between 0 and 15; 0 will be a catch-all code for "unknown", and the others are arbitrary numbers for the "different" things the pet notices.

In this first pet, I figure what I would do is make an ANN of 32 inputs and 4 outputs: basically, a tiny perceptron that sees a small rectangle's worth of pixels as inputs and produces a 4-bit number (symbol) as an output.

So I'll name this ANN the "perceptron" with the 4 outputs collectively called the "item" symbol.
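
As a plain illustration (the real version would be built from my boolean neurons; the random weights here are only placeholders), the perceptron is something like:

CODE
import random

class Perceptron:
    """32 thresholded pixels in, one 4-bit 'item' symbol out (0 = unknown)."""

    def __init__(self, n_inputs=32, n_outputs=4):
        self.weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]

    def item_code(self, eye):
        bits = [1 if sum(w * x for w, x in zip(row, eye)) > 0 else 0
                for row in self.weights]
        return bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]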

Now, I don't want to manually train the net, but to have it designed generically so as to be able to evolve to solve useful problems automatically; so I think what I really want is unsupervised learning whenever possible.

Therefore, I think what I might do is create a second network called the "imagitron", and have it take 4 inputs
and create an output for every pixel of the eye. The perceptron categorizes the item (a taxonomic system), and the imagitron re-creates it, so that whenever the "item" number from the perceptron is connected to the imagitron's inputs, the imagitron should produce a nearly identical picture to what the "eye" is seeing, or at least one close enough to be recognizable by the perceptron again.
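
And the imagitron is the mirror image, again only as an illustrative sketch:

CODE
import random

class Imagitron:
    """4-bit 'item' symbol in, a full eye-sized pixel pattern out."""

    def __init__(self, n_inputs=4, n_pixels=32):
        self.weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_pixels)]

    def reconstruct(self, item_code):
        bits = [(item_code >> b) & 1 for b in (3, 2, 1, 0)]   # same bit order as the perceptron
        return [1 if sum(w * x for w, x in zip(row, bits)) > 0 else 0
                for row in self.weights]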

The item number produced by the neural network can then become a kind of SOFM (self-organizing feature map), if I'm using the terminology right, and I'm not sure I am. Correct me if I'm wrong.

Now:
Since the two ANNs are inverses of each other, they are detectably in error whenever the image produced by the imagitron is "different" from the pixel pattern of the eye that the perceptron received. So training, or negative feedback, could be done two ways: by comparing the image from the imagitron against the eye, pixel by pixel, and deciding on some kind of metric to determine "sufficiently wrong" for negative feedback; or by taking the regenerated image from the imagitron and plugging it BACK into the perceptron, where any change in the output of the perceptron automatically means there was a mismatch. Either way, a mismatch is an error and should be punished (unless the code is 0, "unknown").
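
Using the sketch classes above, the two checks would look roughly like this:

CODE
def pixel_error(eye, reconstruction, tolerance=4):
    """Method 1: compare pixel by pixel; punish if 'sufficiently wrong' (tolerance is arbitrary)."""
    differences = sum(1 for a, b in zip(eye, reconstruction) if a != b)
    return differences > tolerance

def reclassification_error(perceptron, eye, reconstruction):
    """Method 2: feed the reconstruction back in; any change of item code means a mismatch."""
    original = perceptron.item_code(eye)
    again = perceptron.item_code(reconstruction)
    return original != 0 and original != again    # code 0 ('unknown') is never punished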

Both methods of training have problems, which might be overcome by external networks that I haven't thought of -- or which will naturally evolve later....

What are the advantages or disadvantages you might see in these two approaches?

It seems clear that, ultimately, the importance of the "item" code is to identify poison vs. food, etc., all of which are important by the definition of something outside the vision system.

So misidentification is to be detected/punished by an even more external and remote ANN, which perhaps ties the stomach to past events which were seen. (The hippocampus in animals appears to perform this function, at least partially.)

What are your thoughts? comments? questions?
Am I on the right kind of track, or am I overlooking something fundamental?
evita123
post Nov 10, 2013, 10:09 AM
Post #2


Newbie
*

Group: Basic Member
Posts: 36
Joined: Jun 28, 2013
From: Sweden
Member No.: 35194



I'm not an expert by any means, but this seems like an interesting project. The best of luck!
richardberezewski
post Nov 29, 2013, 03:31 AM
Post #3


Newbie
*

Group: Basic Member
Posts: 2
Joined: Nov 29, 2013
Member No.: 36594



QUOTE(evita123 @ Nov 10, 2013, 10:09 AM) *

I'm not an expert by any means, but this seems like an interesting project. The best of luck!


Me neither!
haohao
post Mar 23, 2016, 11:08 PM
Post #4


Awakening
***

Group: Basic Member
Posts: 176
Joined: Mar 18, 2016
Member No.: 38120



AI improvement is possible, even through automatic optimization. It involves advances both in the data processing and in the physical (hardware) side of the AI.
Çağın Çevik
post Apr 09, 2016, 02:52 PM
Post #5


Newbie
*

Group: Basic Member
Posts: 3
Joined: Apr 09, 2016
Member No.: 38157



Sorry for not helping; I'm not an expert by any means, but I wish you success in your work.
haohao
post Apr 09, 2016, 07:21 PM
Post #6


Awakening
***

Group: Basic Member
Posts: 176
Joined: Mar 18, 2016
Member No.: 38120



I think you could first map out your focus points and key parts. For example, you need a data-flow channel system, and you should determine the arrangement of your controlling part and your function database.
