BrainMeta · Connectomics

> The Neural Abacus, Mechanistic model of brain function
wan
post Mar 05, 2010, 02:42 PM
Post #1


Newbie
*

Group: Basic Member
Posts: 44
Joined: Mar 03, 2010
Member No.: 32643



This first post will mostly be limited to articulating a basic mechanistic model, but will include an outline of how memory and related functions match basic empirical facts in neuroscience very well. The advantage of this model is that it demonstrates how these functions can arise spontaneously, without the need for predefined structure. I'll end with only a hint about how consciousness can be embedded in this network, to be followed up with proper treatment in subsequent posts. Qualia, instinct, intelligence, and consciousness must be defined in a consistent and hierarchical manner to be meaningful. This is a toy model using nothing more than a plane of metronomes and springs to illustrate brain function. Its purpose is to intuitively illustrate Hebbian learning in a purely mechanistic, self-organizing model.

Consider two out-of-phase metronomes swinging at different rates. If you place these two metronomes on a single movable plate, they will spontaneously sync up. This same resonance effect has caused more than one suspension bridge to collapse. Here (video link) is a TED Talk by Steven Strogatz on spontaneous synchronization. At 11:40 he demonstrates the spontaneous synchronization of metronomes that I'll be using as the basis of this toy model of brain function. However, instead of a movable platform, this model will use springs connecting the base of each metronome to its neighbors. So how does this become a brain that provides an understanding of the world? To illustrate, let's make a very basic list of empirical facts to explain:
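The spontaneous sync-up of two coupled oscillators can be sketched numerically. Below is a minimal simulation in the style of the Kuramoto model; the frequencies, coupling strength, and function names are my own illustrative choices, not anything from the post or the talk.

```python
import numpy as np

# Minimal coupled-oscillator sketch (illustrative numbers, not from the post):
# two oscillators with different natural rates, each pulled toward the
# other's phase. With strong enough coupling K they phase-lock.
def simulate(K, steps=20000, dt=0.001):
    w = np.array([2.0, 2.4])       # natural frequencies (rad/s), deliberately unequal
    theta = np.array([0.0, 2.5])   # initial phases, far apart
    for _ in range(steps):
        # each oscillator is nudged toward the other's phase
        coupling = K * np.sin(theta[::-1] - theta)
        theta = theta + (w + coupling) * dt
    # final phase difference, wrapped to [-pi, pi]
    d = (theta[1] - theta[0] + np.pi) % (2 * np.pi) - np.pi
    return abs(d)

print(simulate(K=0.0))   # uncoupled: phases drift apart
print(simulate(K=2.0))   # coupled: phase difference settles near zero (locked)
```

Running it shows the uncoupled pair drifting while the coupled pair locks, which is the "movable plate" effect the model builds on.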

[1] Memory is not stored in particular brain locations.
[2] Electrical stimulation of individual neurons can produce memories, actions, etc., as if that was the brain location of the memory, skill, etc.
[3] Recalling a memory increases the rate the memory degrades.
[4] Recalling or observing something activates related information not previously related through any observation (inventiveness).
[5] Memory consolidation, such that memories get overlaid and entangled with other memories, making us prone to false memories.

These springs follow a very simple rule. When a spring stretches between out-of-phase metronomes, its tension decreases, like fatigued metal. When two metronomes are in phase, so that the spring connecting them doesn't stretch, its tension increases. The metronomes have two states, excited and ground, represented by two different periodicities. Two metronomes simultaneously in the excited state are in phase. Now, each sensory input, neurons in our eyes excited by photons, on our skin by touch, etc., has connections to neurons in our brain, metronomes in the analogy.
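The spring rule above is Hebbian learning in disguise: co-excited pairs tighten, mismatched pairs loosen. A minimal sketch, assuming binary excited/ground states (the matrix form and learning rate are my own, not from the post):

```python
import numpy as np

# Hebbian reading of the spring rule (my sketch):
# states[i] = 1 if metronome i is excited, 0 if at ground state.
# Springs between co-excited pairs tighten; springs between an excited
# and a ground-state metronome loosen, like fatigued metal.
def update_tensions(W, states, lr=0.1):
    s = np.asarray(states, dtype=float)
    co_excited = np.outer(s, s)                          # both endpoints excited
    mismatch = np.outer(s, 1 - s) + np.outer(1 - s, s)   # one excited, one not
    W = W + lr * co_excited - lr * mismatch
    np.fill_diagonal(W, 0.0)   # no spring from a metronome to itself
    return W

W = np.zeros((4, 4))
W = update_tensions(W, [1, 1, 0, 0])   # metronomes 0 and 1 excited together
print(W[0, 1])   # tightened: 0.1
print(W[0, 2])   # loosened: -0.1
```

Springs between two ground-state metronomes are left unchanged here, which matches the rule as stated: only stretching or co-excitation alters tension.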

Now what happens when you increase the periodicity of a certain set of metronomes (neurons) but not others? The spring tension (connection strength) between the excited metronomes increases, and loosens between those metronomes that are not excited at the same time. With the spring tensions adjusted this way by an experience (sensory input), all it takes to remember that experience is to excite one of the metronomes that was part of the excited group when the experience took place. The spring tensions automatically sync up and excite the entire group of metronomes corresponding to that experience, but not the others, for the same reason suspension bridges can collapse through resonance and metronomes on a movable base sync up.
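This "excite one member and the whole group lights up" behavior is essentially pattern completion, as in a Hopfield network. A sketch under that reading (the +/-1 encoding and threshold dynamics are my assumption, not the post's mechanics):

```python
import numpy as np

# Pattern completion sketch (Hopfield-style reading of the recall claim).
pattern = np.array([1, 1, 1, 0, 0, 0])   # the "experience": which metronomes fired
s = 2 * pattern - 1                      # map 0/1 to -1/+1 for the weight rule
W = np.outer(s, s).astype(float)         # tighten within the group, loosen across
np.fill_diagonal(W, 0.0)

cue = np.array([1, -1, -1, -1, -1, -1])  # excite only metronome 0
for _ in range(5):                       # let the network settle
    cue = np.sign(W @ cue)

recalled = (cue > 0).astype(int)
print(recalled)   # the whole stored group re-activates: [1 1 1 0 0 0]
```

A single cue recovers the full stored group, which is the toy-model account of recall from a fragment.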

[1] Explained by a distribution of spring tensions.
[2] Explained by the self syncing of the metronomes in accordance with their connectivity.
[3] By the same mechanism that created the memory. When you remember in the absence of the stimulus that produced it, the spring tension between the more and less excited metronomes begins to loosen, by the same rule that loosened connectivity between metronomes in different states to allow the memory to form in the first place.
[4] When two experiences excite a subset of the metronomes from previous experiences, one experience can induce a memory of another. Wow, that butterfly is flapping its wings like that bird I saw the other day. This overlap recognition is our intelligence (though not necessarily consciousness).
[5] By [3], as the memory degrades with recollection, the act of remembering then becomes a memory in itself that can supersede the original. By [4], this memory of remembering then contains related information which can be mistaken as part of the original memory. Thus false memories are born.
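Point [3], that each recollection degrades the trace, can be sketched by re-running the tension rule on an imperfect recall: the weights drift away from the original pattern a little each time. The noise model and learning rate here are my own illustration.

```python
import numpy as np

# Sketch of point [3]: recalling without the original stimulus re-applies
# the tension rule to a slightly-wrong activation, so the stored weights
# drift from the original trace with every recollection.
rng = np.random.default_rng(0)
pattern = np.array([1, 1, 1, 0, 0, 0], dtype=float)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)
original = W.copy()

for _ in range(10):                    # ten recollections
    noisy = pattern.copy()
    flip = rng.integers(len(pattern))  # recall is imperfect: one unit misfires
    noisy[flip] = 1 - noisy[flip]
    W += 0.05 * (np.outer(noisy, noisy) - W)   # tension rule applied to the recall
    np.fill_diagonal(W, 0.0)

drift = np.abs(W - original).sum()
print(drift)   # nonzero: the trace has degraded through recollection alone
```

The misfiring units are then themselves written into the trace, which is the mechanism point [5] invokes for false memories.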

This explains a lot about brain function in general and why it is so plastic: memories are imprinted via the distribution of excited metronomes, irrespective of any particular initial distribution or structure in the metronomes' connectivity. It falls short at this point on what consciousness is. To fully remedy that takes a more complete description of consciousness, instinct, qualia, etc., which I will follow up with in subsequent posts. For now I will provide a simplified construct of what consciousness is.

In the above toy model we have metronomes that correspond to sensory input and some that correspond to motor output. As is, it can be trained to output motor functions in response to very generalized sensory inputs. For consciousness we need a third set of sensory-input metronomes, whose inputs come not from external stimuli but from the excitation patterns of the metronomes that do receive external sensory input. The same learning capacity in this third set thus creates what psychologists call "theory of mind". This third set can then recognize from experience (memory) the excitation overlap between present sensory input and prior memories, as in [4]. By looping through and activating various related memories, similar to the way an electric probe activates memories and actions, new knowledge can be acquired from thought alone through overlap recognition. This integration of the various parts of self in the mind is what we call consciousness, and it provides what we perceive as our intelligence.

It gets even more interesting when taken in the context of evolution, with the roles of qualia and of overcoming the limitations of instinct taken into account. I didn't include the references I had in mind, but anybody who wishes to object or has reservations about the functional validity of the stated mechanisms is welcome to ask for the evidence. Here are a few articles that may answer most such questions, though:

(Link): Researchers discover how old memories are re-saved and changed
(Link): Brain quirk makes eyewitnesses less reliable

I'll follow up soon with a more detailed outline of various concepts.
wan
post Mar 07, 2010, 10:02 PM
Post #2





The toy model given here has its limitations, but these limitations were necessary to demonstrate plasticity. Generally speaking, we know from neuroscience that specific areas of the cerebral cortex are specialized for various tasks. We also know that if these areas are damaged early enough in development, the same function can develop elsewhere. The toy model described here was limited to a single qualitative memory trace, or experience, and the effects of overlaying a new memory trace. In general it is necessary to prevent many brain functions from being overlaid and consolidated by other functions; that would place undue training difficulties on new skills and limit the complexity of learnable skills. To overcome this it becomes necessary to group like functions into nearby sets of neurons, and then to have neural connections between these functional sets. Complex new skills can then be learned by training at the level of functional blocks, rather than training all neurons individually for each new skill. The above model assumed a randomization of sensory and action neurons to demonstrate ubiquity and plasticity, but to get skills as complex as a human's, this approach would create synesthesia on a massive scale. Judicious use of grouping and developmental stages can thus achieve far more complex skill sets, learned in far less time than starting fresh with each new skill.
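The computational payoff of grouping can be made concrete with a connection count: wiring within blocks plus wiring between blocks is far smaller than all-to-all wiring over the same neurons. The block sizes below are arbitrary toy numbers of my own.

```python
# Toy count for the grouping argument (my illustrative numbers):
# compare pairwise connections in a flat all-to-all network versus
# a network organized as B blocks of k neurons each.
def pairwise_connections(n):
    return n * (n - 1) // 2

blocks, per_block = 10, 100
flat = pairwise_connections(blocks * per_block)       # every neuron to every neuron
grouped = (blocks * pairwise_connections(per_block)   # wiring inside each block
           + pairwise_connections(blocks))            # block-to-block wiring
print(flat, grouped)   # 499500 vs 49545: roughly 10x fewer connections to train
```

Training a new skill over block-to-block connections alone touches only the last term, 45 connections in this toy case, which is the "train the functional blocks, not every neuron" point above.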

To place consciousness and qualia in proper context, they must be viewed hierarchically in evolutionary terms. Qualia are defined as the introspectively accessible phenomenal aspects of our mental lives; this definition includes intellectual qualia as well as sensory qualia. So far we have a self-organized neural toy model that in principle can take a singular experience and learn an effective response. This scheme can be replicated for a litany of sensory responses as the neural network grows. As skill complexity increases, it becomes necessary to group skill sets within specific neural regions. To confer survival skills on offspring, it becomes necessary to 'predefine' choice sets in the connectivity between these regions, forming the basis of what we call instincts. Here's where I'm going to argue that qualia develops before consciousness, or the property we call 'self-awareness'. For complex instincts to evolve, instincts must operate on a limited set of states of groups of skill-set blocks, not on states determined by the entire set of all neurons. Thus groups of neurons containing primitive skill sets are treated 'as if' they were a single neuron. This grouping of neural sets defines the world model on which instincts operate, and treating skill sets 'as if' they were produced by the state of a single neuron massively reduces the computational complexity of defining instinctual response sets in complex situations.

It's rather trivial to fool our sense of qualia when it's understood how our mind constructs it. You may see a box over in the corner, but your eyes don't actually see a box. Rather your mind reconstructs a box from what it sees, based on previous experience, and feeds this qualia of a box to your representational model. Yet we perceive this model as the reality of the situation. But what you see as reality is not the reality of what you see. Yet it is simply too computationally complex to hold every possible representational model in your mind at once, especially when those models appear incongruent. Here is a Ted Talk video that demonstrates a range of representational errors.
Al Seckel says our brains are mis-wired: http://www.ted.com/talks/lang/eng/al_seckel_says_our_brains_are_mis_wired.html
Another example would be to wear a pair of glasses that makes you see the world upside down. After wearing these glasses for a few weeks, the world will start to appear perfectly normal, until you take the glasses off again and have to readjust to not wearing them. It should be clear to most here that qualia is a perceptual shortcut to feed us an expected reality, independent of what the reality actually is. Truthers do the same thing in the intellectual arena, where our intellectual model is itself a qualia construct. They select a singular intellectual litmus test as the one 'true' test of what is real and what is not. Yet we know that no such single test is universally valid irrespective of context. It's also true that more than one 'valid' representational model exists for any given actual physical state. The fact that whole classes of valid models of a single physical state exist should not be mistaken as a justification for defending an 'invalid' model, but I digress. The notion that our intellectual model is itself a qualia construct is interesting in itself and helps explain a lot about our ideological differences.

So we're up to the level of instinct, yet instincts are severely limited in that only a predefined selection of responses is available for any given sensory input. As this network continues to evolve, the learning capacity of the network can be repeated again, treating neural blocks as individual neurons. It had to begin as instincts, to make the qualia grouping useful early on, yet the neural connections that define the instincts are themselves capable of learning, like a neural network of neural networks. Thus those individuals learning to make exceptions to instinctual responses have a huge survival advantage, an advantage that is compounded when this emergent network can loop through experiential memories and construct novel responses to future events in the safety of hiding. So I'm arguing that the mechanism of instinct is what evolved to become what we report to ourselves as consciousness, but only after it developed the capacity to selectively revisit memories and relate them to skill sets, such that thought alone becomes a source of new experiences and learning.

So what evidence is there that this model is in fact sufficient to fully describe consciousness as we perceive it? The best evidence is the empirical limits and function of our consciousness. We are all aware that our conscious attention is limited to a few variables at a time, though sensory input can redirect that attention very fast. Around 7 variables seems awfully pitiful, but consider how it was described in the opening post. Consciousness must loop through and activate sets of memories and knowledge, much like the neuroscientist's electric probe, while keeping unrelated activations to a minimum. From this comparison, concepts found to be related get activated. A numerical model of this activation scheme has been shown to correctly predict the general size of our working memory: http://www.physorg.com/news178220995.html. If we tried to divide consciousness into separate networks, like the neural groups for different skill sets, to increase its computational power, then consciousness couldn't maintain full awareness of what it's supposed to be conscious of, i.e., itself. Consciousness must have a complexity limit for the same reason the brain had to section off skills into regions, or else be incapable of full self-consciousness, like a person with a lobotomy. This 'meaning' map of qualia that we continually refine through our 'conscious' efforts takes the role of instincts in certain other animals, and allows us to systematically model the world we live in. Yet at present we don't have a very complete understanding of the qualia model we use to define our models of the world.

I would like to hear any objections as to why this general model lacks the depth to explain consciousness, or fails to provide an outline of the broad technical milestones needed to get there. I have shortchanged emotions in this model, but emotions are simply another form of qualia, one that can be very useful for decision making under uncertainty and for placing soft constraints on allowable solutions. These technical milestones certainly need to be articulated in more detail. There's no reason why our information technology can't be implemented as a 'co-processor' to our own mind: an Internet that retrieves or posts what you want simply because you wanted it to, without lifting a finger. Having an intelligence at least on par with our own in that system may sound scary to many, but in the long run no law can possibly prevent it.
AleeBaBa
post Dec 26, 2016, 07:04 PM
Post #3


Newbie
*

Group: Basic Member
Posts: 7
Joined: Nov 16, 2016
Member No.: 38431



QUOTE (wan @ Mar 07, 2010)
what evolved to become what we report to ourselves as consciousness. But only after it developed the capacity selectively revisit memories and relate them to skill sets, thus thought alone becomes a source of new experiences and learning.