
Salience Theory: Joined at the mind.




Each child has a fully structured brain, two cerebral hemispheres, a fully formed brain stem, cerebellum and spinal cord. There is also a bridge of tissue through which neurological information might be shared; within days of their birth, it became apparent that if one twin was pricked with a needle, the other would cry.




Years ago, the existence of conjoined twins like Tatiana and Krista proved several things to me about consciousness.

1) It can be distributed.

2) It is substrate dependent.

Some would read those two conclusions as contradictory: if consciousness is bound to a singular substrate (one brain), as asserted in 2), how can it also be distributed across brains, as asserted in 1)?

The answer is that consciousness emerges from a piecing together of interactions between different cognitive modules that don't distinguish strongly what their sensory input drivers are. The particular piece of brain that unifies the hemispheres, and serves as a multiplexer of sorts for all the sensory data being processed in the neocortex, would do its job whether there were two sets of eyes feeding it data or four; in truth the processing task would differ only in the density of the information being stored and compared. The same is true if we think of multiple sensory inputs across modalities: vision, olfaction and, in the case of these girls, the deeper somatosensory processing, which is itself mostly distributed throughout the embodiment of the body.
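
To make that concrete, here is a minimal sketch in Python. The function names and numbers are mine, purely illustrative, and not a claim about the actual neural wiring; the point is only that the merge-and-compare logic never asks how many eyes feed it, just how much data it has to fuse.

```python
# A minimal sketch (not the twins' actual neuroanatomy): a modality-agnostic
# "multiplexer" whose logic is identical whether it is fed by one pair of eyes
# or two. Only the volume of data to store and compare changes.
from statistics import mean

def multiplex(streams):
    """Merge any number of input streams of the same modality.

    Each stream is just a list of scalar intensities here; the merge
    never inspects how many streams there are or where they originate.
    """
    return [mean(samples) for samples in zip(*streams)]

def compare_to_memory(signal, memory):
    """Return a crude novelty score: how far the merged signal sits
    from what memory predicts."""
    return sum(abs(s - m) for s, m in zip(signal, memory)) / len(signal)

# Two eyes or four eyes: same code path, just more data to fuse.
two_eyes  = [[0.2, 0.8, 0.5], [0.3, 0.7, 0.6]]
four_eyes = two_eyes + [[0.25, 0.75, 0.55], [0.35, 0.65, 0.45]]
memory    = [0.3, 0.6, 0.5]

print(compare_to_memory(multiplex(two_eyes), memory))
print(compare_to_memory(multiplex(four_eyes), memory))
```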

So here these girls sit: two bodies, two brains sharing one common input highway, with the proximal intensity of wiring to a given body dominating signal processing in the brain associated with the body feeding it. The input signal from one set of eyes is strong because it is fed by the proximal neocortical pathway for vision processing in the brain connected directly to those eyes via the optic nerves, while the other set of eyes is connected only distally, across the bridge. So what's going on?

How can one girl see through the eyes of the other by simply thinking about it? First she must reduce the sensory load coming in through her own eyes by closing them. With that signal attenuated, she can tune in to the firing coming from her sister's visual system across the brain bridge and "see," in her mind's eye (literally), what her sister sees.
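
A toy way to picture that attenuation, with invented gain values rather than anything measured: each brain blends a strong proximal visual signal with a weaker one arriving over the bridge, and closing the eyes drops the proximal gain so the distal signal wins.

```python
# A toy illustration (purely hypothetical weights, not measured data) of the
# "close your eyes to see through your sister's" description: each brain blends
# a strong proximal visual signal with a weaker distal one arriving over the
# bridge, and attenuating the proximal signal lets the distal one dominate.
def blended_percept(proximal, distal, proximal_gain=1.0, bridge_gain=0.3):
    """Weight the two incoming signals and report which one dominates."""
    p = proximal * proximal_gain
    d = distal * bridge_gain
    return "own eyes" if p >= d else "sister's eyes"

print(blended_percept(proximal=0.9, distal=0.8))                    # own eyes dominate
print(blended_percept(proximal=0.9, distal=0.8, proximal_gain=0.0)) # eyes closed: sister's signal
```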



But what does this have to do with consciousness?

Last year I put forward a theory that consciousness is emergent (not new), that it is substrate dependent (not new), but that it is also salience dependent (new!). In this "salience theory," the dynamic cognition of the mind is enabled by a roiling comparison over time between sensory input, stored memory and an associated import or salience tag, grounded in the autonomic and emotional factors that at base *drive* cognition.
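
For the programmers reading, here is a minimal sketch of that loop, with hypothetical names and numbers of my own choosing: each tick, input is compared against memory and the mismatch is weighted by autonomic and emotional salience tags to decide what wins attention.

```python
# A minimal sketch of the comparison loop the theory describes, with invented
# names and numbers: each tick, incoming sensation is compared against memory,
# and the resulting delta is weighted by an autonomic and an emotional salience
# tag to decide what cognition attends to next.
def salience(delta, autonomic_weight, emotional_weight):
    """Combine the input/memory mismatch with the drives that tag it."""
    return delta * (autonomic_weight + emotional_weight)

def cognitive_tick(inputs, memory, tags):
    """Return the percept that wins attention on this tick."""
    scored = {}
    for name, value in inputs.items():
        delta = abs(value - memory.get(name, 0.0))
        autonomic, emotional = tags.get(name, (0.1, 0.1))
        scored[name] = salience(delta, autonomic, emotional)
    return max(scored, key=scored.get)

inputs = {"chair_pressure": 0.4, "text_on_screen": 0.9, "stomach": 0.2}
memory = {"chair_pressure": 0.4, "text_on_screen": 0.3, "stomach": 0.2}
tags   = {"stomach": (0.9, 0.2), "text_on_screen": (0.1, 0.6)}
print(cognitive_tick(inputs, memory, tags))   # the text wins while hunger is satisfied
```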

The theory was the culmination of several years of thought on the matter and of research into the latest neuroscience results, particularly brain imaging studies illuminating the parameters of consciousness. These thoughts came on the heels of my beginning the implementation of the Action Delta Assessment (ADA) algorithm, which extends the Action Oriented Workflow paradigm I started working on in 2003 to enable autonomous work routing.

These form an invariant, general set of algorithms for encoding business processes and workflows for application development into a social system that gets work actions performed as efficiently as possible across an entire organization, holistically.

The similarities between these workflow ideas and the brain should be familiar to anyone who has studied some of the neuroscience of consciousness and the deeper neuroanatomy of sensory input, neocortical processing and memory formation.

I was struck by the similarity between the two, and in 2011 I asked myself: what factors would need to come together in the brain in order to create dynamic cognition? Thought? Consciousness? The salience theory is my attempt to answer those questions. In it, consciousness is an emergent phenomenon; as mentioned before, that is not a new idea, but how it is driven remained a mystery. Salience theory proposes that autonomic and emotional factors drive consciousness, but only as fueled by sensory input and the processing that compares memory to input.

Consciousness thus emerges as a more and more refined ability to dance across comparisons of this sort, across however many sensory modalities a given living agent (or, soon, an artificial one) is able to span as independent dimensions. For example, a pigeon and you have five primary sensory modalities in common, but a pigeon has at least one more that you don't: it can "see" magnetic fields.

The brain takes these sensory inputs and preferentially shuttles the signal data to particular areas of the neocortex for comparison and processing. What is interesting is that there is very little specialization for a given sensory input type in the neocortex itself: sound processing layers look basically identical to vision processing layers, which look identical to taste processing ones, save for interesting differences in the organization of neuronal sublayers.

A few years ago I saw this invariance across layer types as a strong clue that the cognitive algorithm is common across all sensory input types, but also that there must be some kind of time-associated integration across processing actions within any given sensory type, and that such integration would need a metronome of sorts to determine how it was proceeding and why.
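
Here is a rough sketch of what I mean, again with invented data: a single generic layer routine reused for every modality, paced by a shared clock that integrates each modality's comparisons over time.

```python
# A sketch of the invariance claim (assumed, simplified): one generic layer
# routine is reused for every modality, and a shared clock paces the
# integration of each modality's results over time.
import itertools

def generic_layer(samples, memory):
    """The same comparison routine regardless of modality:
    measure the mismatch between input and stored expectation."""
    return sum(abs(s - m) for s, m in zip(samples, memory))

def run(modalities, memories, ticks=3):
    clock = itertools.count()            # the "metronome" pacing integration
    integrated = {name: 0.0 for name in modalities}
    for _ in range(ticks):
        t = next(clock)
        for name, stream in modalities.items():
            integrated[name] += generic_layer(stream[t], memories[name])
    return integrated

modalities = {
    "vision":  [[0.2, 0.4], [0.3, 0.5], [0.1, 0.6]],
    "hearing": [[0.7, 0.1], [0.6, 0.2], [0.8, 0.1]],
}
memories = {"vision": [0.2, 0.4], "hearing": [0.5, 0.2]}
print(run(modalities, memories))
```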

If I ask you right now what you are sitting on, your mind immediately shifts focus to the object in question; your skin immediately relays to you how hard or soft it is, whether it is itchy or smooth. Yet prior to my asking the question you were focused on reading this text, the sensory reality of the chair you may be sitting on skipped over and muted from conscious examination.

How does the brain do that? How does it mute sensory inputs that are coming in, in parallel? My answer is that it has to be salience: every little experience is constantly being judged for its importance. Primarily that importance is sentimental, and thus tied to the feeling associated with the thing in question, but sentiment is often a proxy for the deeper reasons we do things, which are purely autonomic.
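
As a toy illustration (thresholds invented for the example), the muting can be pictured as a salience gate over parallel inputs: everything arrives, but only what clears the threshold reaches conscious report, and a question like the one above simply re-tags the chair.

```python
# A toy gate (thresholds invented for illustration): parallel inputs arrive
# continuously, but only those whose salience clears a threshold reach
# conscious report; the rest stay muted until a question or event re-tags them.
def conscious_report(signals, salience_tags, threshold=0.5):
    """Return only the inputs whose tagged importance exceeds the threshold."""
    return {name: value
            for name, value in signals.items()
            if salience_tags.get(name, 0.0) >= threshold}

signals = {"chair_texture": 0.7, "room_hum": 0.4, "this_text": 0.9}
tags    = {"chair_texture": 0.05, "room_hum": 0.02, "this_text": 0.95}
print(conscious_report(signals, tags))        # only the text is attended to

tags["chair_texture"] = 0.8                   # the question re-tags the chair
print(conscious_report(signals, tags))        # now the chair enters awareness
```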



Would you be reading this passage, digesting these ideas efficiently, if you hadn't eaten in three days? Could you do the same if the room you were sitting in were unbearably cold and you had no clothes on?

The prioritization of autonomic need above sentiment with regard to whatever ideas we happen to be evaluating is all the clue we need to realize a) how important it is in driving cognition and b) that without it very little would get done.

Think about it: here we are devoting our time to thinking about (in my case) and reading (in your case) this write-up because we have *leisure*, enabled by the prior satisfaction of autonomic requirements. You likely would not sit down to read an article in a room set to -20 F in your shorts, nor would you do so after three days without solid food. Autonomic drivers become dominant factors that completely short-circuit our ability to submit to leisure activities.
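
A small sketch of that priority ordering, with made-up thresholds: leisure tasks only get scheduled while autonomic needs sit below a tolerable level; past that level they pre-empt everything else.

```python
# A sketch of the priority claim (invented thresholds): leisure activities are
# only scheduled once autonomic needs fall below a tolerable level; past that
# level they pre-empt everything else.
def next_activity(hunger, cold, leisure_task="read this article"):
    """Pick the activity that wins, with autonomic needs pre-empting leisure."""
    needs = {"find food": hunger, "get warm": cold}
    name, level = max(needs.items(), key=lambda kv: kv[1])
    return name if level > 0.6 else leisure_task

print(next_activity(hunger=0.2, cold=0.1))   # read this article
print(next_activity(hunger=0.95, cold=0.1))  # find food: three days without eating
print(next_activity(hunger=0.2, cold=0.9))   # get warm: -20 F in your shorts
```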

So what about emotion? Sentiment sits atop autonomic modulation at a finer resolution of assessment; emotion inherently implies a sense of choice that response to autonomic variation does not. If you suddenly found yourself sitting on hot coals you would not consider whether you should get up rather than continue reading this article; you would *unconsciously* jump out of the seat as the pain signals from the coals stimulating your skin override your cognitive processing circuits. The dynamics of your cognition would be biased by the pain signals even over meaning. In fact, at that point meaning would be irrelevant; you'd just want to stop the pain at all costs.

Under lower levels of autonomic stress, emotional modulation helps make decisions that can be tolerated one way or another based on how those choices' outcomes panned out *in the past*. You may, for example, be crossing the street and notice a dog further down the road. Past experience with dogs on the road may lead you to reverse course and take another street, routing around the path, or it may lead you to give the dog a wide berth but stay on the same road. An emotional salience factor, fear, coupled with the cognitive exercise of finding a way to eliminate or reduce that fear, gives you a range of choices. You don't have to reverse course and you don't have to keep going; the fear salience level determines which you do, and it does so based on the experience you had in the past.
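
Sketched as code, with numbers invented for the example: the fear level is learned from how past encounters panned out, and it selects among a range of choices rather than forcing a single reflex.

```python
# A toy version of the dog-in-the-road example, with made-up numbers: fear
# salience is learned from how past encounters panned out, and the resulting
# level selects among a range of choices rather than forcing a single reflex.
def fear_level(past_outcomes):
    """Average badness of remembered encounters, 0 (fine) to 1 (bitten)."""
    return sum(past_outcomes) / len(past_outcomes) if past_outcomes else 0.0

def choose_route(fear):
    if fear > 0.7:
        return "reverse course and take another street"
    if fear > 0.3:
        return "stay on the road but give the dog a wide berth"
    return "keep walking as planned"

print(choose_route(fear_level([0.1, 0.2])))        # mild memories: keep walking
print(choose_route(fear_level([0.6, 0.5])))        # wary: wide berth
print(choose_route(fear_level([0.9, 1.0, 0.8])))   # bitten before: reroute
```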

It is known that some people have no sense of fear, or rather a radically reduced sense of it that is foreign to many of us. You might think such people are like superheroes, but it turns out they are prone to getting into accidents because their brains are not attaching to experience the very healthy skepticism that should attend certain life-endangering activities. Fear is good in this regard, not only because it aids survival but because, when evaluated as a contribution to salience determination on sensory input compared against past memory, it fuels cognitive dynamism. The brain moves on to other ideas on how to navigate away from the dog: you stop in your path, you evaluate escape routes, and so on.

If I were to take you off the street and instead put you at the helm of a simulation where you are walking on a treadmill keyed to a virtual street, and I told you that your virtual body was impervious to dog bites, you wouldn't care about going around the dog; you'd plot your course and walk. Absent the real consequences (in terms of pain), and buffered from the emotional correlate (fear) normally associated with walking down a street as a dog approaches, your behavior would be modulated.

One at a time, we could take away autonomic consequences (burning if you walk into a flame, freezing if you jump into an ice lake, starvation if you fail to eat) and the scope of our choices would balloon, all the while our reasons for engaging in choice dwindle!

Isn't that an interesting reality: in the limit, if we take away all consequence, we end up with no reason to do anything at all. Imagine a video game constructed this way. You'd likely play it for a few minutes and then simply stop, as none of your actions would have consequences and, as a result, you'd have an apathetic (emotional) response to every interaction in that virtual space.

What does this have to do with the conjoined twins' cognitive state?

Everything. If it is true that the dynamism of the mind is enabled by salience determination in the body and the emotional centers, then the hypothesis that the conscious state of one twin could be affected by the body state of the other is valid. An interesting experiment to test this hypothesis would be to stagger the twins' eating periods. I'd imagine they are fed at the same time; staggering their eating times could reveal hunger induced from one body in the other through the connection of their conjoined brains, and therefore their minds. It would be as if, in the video game example, I were able to strap you into a machine that could transmit a pain response to you if the virtual dog bit your character. Doing so, one would all of a sudden recover the emotional import factor associated with memories of dogs, and possibly of being bitten, because the physical consequences would be present. If one were able to engineer experiments to test salience-associated responses to other dimensions of stimuli, I'd predict very similar leaky assessments between the twins.
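
A purely speculative toy model of that staggered-feeding prediction (the leak coefficient is invented; no such measurement exists): if some fraction of one twin's autonomic signal crosses the bridge, the fed twin should report hunger she has no bodily reason to feel.

```python
# A purely speculative toy model of the staggered-feeding prediction: each
# twin's felt hunger is her own autonomic signal plus a fraction that leaks
# across the bridge from her sister. The leak coefficient is invented; the
# point is only that a nonzero leak would show up as induced hunger in the
# fed twin when feeding times are staggered.
def felt_hunger(own, sisters, leak=0.4):
    return own + leak * sisters

# Twin A just ate (own hunger low); Twin B's feeding is delayed (hunger high).
twin_a_own, twin_b_own = 0.1, 0.9
print("Twin A feels:", felt_hunger(twin_a_own, twin_b_own))  # elevated despite having eaten
print("Twin B feels:", felt_hunger(twin_b_own, twin_a_own))
```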

One's cognition would need to dial through the permutations of possible evasion methods rather than marching down the road as if one were Superman, as when no such signal was connected. Salience theory simply asserts that, to a finer and finer degree, we do things as driven by these physical and emotional cues in response to the comparison between our incoming sensations and our past memory.

Tatiana and Krista stand as two minds, fed by a double set of sensory input sources but salience modulated by two bodies, one distal and one proximal. The distal body always contributes signal modulation to the proximal and vice versa, and thus their mutual salience modules are (I assert) homogenized. They are together but still separated, having individual experiences while sharing a common mutual one. This presents an opportunity for another hypothesis: they likely feel the same way about the same things (what is your favorite color? do you like the taste of custard? does this music please you?), and as they age the aspects of individuality that would present in non-conjoined twins simply will not present in them, since the unique way their bodies are joined has also ensured that their minds are joined in a dynamic cognitive dance of experiences that play as one music only the two of them can hear. The article indicates that they do have distinct preferences despite this, but those are subjective assessments, not double-blind ones; more rigor could be used to probe out interesting connections between their interests.

A unique opportunity for testing the limits of joined minds, at least so far as their particular connection is concerned, can be had here.

Links:


http://sent2null.blogspot.com/2011/12/how-does-idea-form-autonomics-memory.html

http://sent2null.blogspot.com/2012/02/with-completion-of-ada-action-delta.html

http://sent2null.blogspot.com/2012/02/when-your-smart-phone-comes-alive.html

http://sent2null.blogspot.com/2013/05/ada-on-road-to-dynamic-cognition-how-is.html

http://sent2null.blogspot.com/2013/02/on-consciousness-there-is-no-binding.html

http://sent2null.blogspot.com/2013/02/emotions-identity-crisis-in-our-brain.html


http://sent2null.blogspot.com/2012/03/integrated-information-does-not-equate.html

