14 September, 2014

iPSC: Embryonic "base class" generation method found.



In another groundbreaking advance in the science of stem cell pluripotency, a team of researchers has succeeded in inducing a cell to change into the earliest known embryonic state.

This is great news because it is exactly what is needed to enable full comparative analysis between different induced cell lines. Comparative analysis will then make it possible to genetically characterize the key genes that differentiate stem cell types for different tissues, and once that happens, the ability to precisely shift cells from type to type genetically (even post differentiation) will be possible.

This is a hugely important feat because it isolates the ability to identify, from the zoo of genes, the specific pathway expressions that crystallize the cells that constitute living organisms and enable their macroscopic functions.



The sequencing of any given genome gives one a book filled with words but no chapter titles or division labels. With all the words mashed together, such a book would be extremely difficult to read and, moreover, extremely difficult to index.

When the Human Genome Project succeeded in 2000 it enabled us to understand what the words in the book were, but it didn't tell us where particular passages (expression instructions for various tissues and organs) were located, and it didn't give us the ability to index to those locations so that we can read them.

In computing there is a direct analog: in object oriented programming, related functions are collected into classes, and these classes are then extended in various ways to create new classes (subclasses). The biological model of stem cells works in much the same way. Alan Kay, the father of OO, may have been only partially influenced by biology when he invented the concept, as biology and genetics were then a brand new and very primitive field, but the energy-conserving methods common to the two domains link them in an interesting way. Collecting functions into chapters (cells) of various types, and then managing in other instructions how those cells develop over time, is a highly efficient means of storing and recalling the pathway information that drives the life cycle of a living organism.

iPSC allows geneticists and molecular biologists to do with genetic code what computer programmers have been doing with binary code for several decades. This is one reason for my interest and excitement in these developments. This latest research seems to indicate that the "base class" (or superclass, as it is called in some OO languages) for generating cells of different types has been found, paving the way for extremely efficient comparative analysis that will unlock the mystery of development across a host of tissues and their associated disease and non-disease states.
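The analogy can be made concrete with a toy class hierarchy. This is a loose sketch only; the class names, the "gene" strings, and the idea of expression as a method are all invented for illustration and are not actual biology:

```python
# Toy illustration of the "base class" analogy: a pluripotent stem cell as a
# superclass carrying the full genome, with differentiated cell types as
# subclasses that commit to expressing only a subset of it.

class PluripotentCell:
    """The embryonic "base class": every pathway available, none committed."""
    GENOME = {"contract", "fire_signal", "secrete"}  # hypothetical gene set

    def expressed(self):
        return set()  # pluripotent: no committed expression yet

class MuscleCell(PluripotentCell):
    def expressed(self):
        return {"contract"}  # the subclass "overrides" by committing to a subset

class NeuronCell(PluripotentCell):
    def expressed(self):
        return {"fire_signal"}

# Comparing sibling subclasses isolates the "genes" that differentiate them --
# the comparative analysis the post describes, in miniature.
diff = MuscleCell().expressed() ^ NeuronCell().expressed()
print(sorted(diff))  # ['contract', 'fire_signal']
```

The symmetric difference of the two expression sets is what a comparative analysis between two induced cell lines would surface: the distinguishing pathways, with the shared base ignored.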

The revolution of iPSC (induced pluripotent stem cells) in 2007 set the stage for possibly reducing the computational cost of figuring out what the genetic code is saying: deciphering, in other words, how the code is organized into chapters and sections that describe the functional differentiation of various cell types, the combination of those cell types into organs, and the execution of those processes into developmental and growth cycles...in short, the evolution of the life cycle of a living organism as described by the genetic sequence.

iPSC thus stands as a way to radically reduce the complexity of figuring out how the genetic code maps to end tissues, organs, and functions. That would speed the rate at which key areas are isolated and disease states in them are identified, and, thanks to the emergence of another revolution, the CRISPR-Cas9 gene editing methods, enable in vivo genetic modifications in real time.



I've been writing about these trends for several years now and predicted the importance of comparative analysis to unlocking the full secrets of genetic sequences. In combination with the rapidly falling cost of sequencing whole genomes, and even specific disease-state genomes, iPSC-enabled diagnostics of tissue and cell lines will rapidly spawn an industry exploring all types of pathways for the eradication of disease states, or the radical modification of existing states to effect changes as desired. I predicted these in a post from 2009 on the hypothetical life of Afusa O'Reilly, and the future I prognosticated is coming to pass at an even faster pace than I'd originally predicted.

So what's next?

As this new technique is unleashed in the lab it will make it far easier for researchers to obtain the comparative genetic expression pathways they need to make changes and understand disease states, and that will lead to a massive industry of custom genetic adaptation. I've written on the idea of a "cosmeceutical" industry emerging from this very type of technology as the low hanging fruit that those looking to make money will pursue, and that is what is on the plate now that these advanced diagnostic techniques are feasible. Expect the next 10 years to mirror very closely the rapid development of the software industry from the mid 70's to the mid 80's, when the falling cost of the tools for writing computer code brought what was a rare skill into the hands of suburban children, who then in all their variety created the flowering of software that now runs on many types of computing devices worldwide.

The biological analog will be the flowering of genetic modifications that we can perform on ourselves and other animal lines, as well as even more advanced capabilities that couple with synthetic biology to create entirely novel forms of life.

I remain skeptical as to whether or not our (humanity's) maturity is great enough to handle the great power we are now on the verge of wielding. In many ways this technology is far more potentially devastating than any nuclear bomb, because of how widely available it will be and the power it can relatively easily be made to unleash. It is only by quickening the pace of education across all fronts of human knowledge, particularly the reduction of the zeal associated with dogmatic belief systems, that we can evade great discord as these technologies are unleashed on a global scale...in the same way that computer programming was unleashed in the mid 80's.



Links:

http://sent2null.blogspot.com/2008/10/travel-in-genetically-enhanced-future.html (A hypothetical story of a super human on his way to a nearby star system)

http://sent2null.blogspot.com/2009/09/coming-pathogenic-relief-impulse.html (A long arc on why the technology would lead to a sudden fall in mortality and lethality across pathogen-enabled diseases; I also forecast the in vitro meat industry)

http://sent2null.blogspot.com/2012/06/how-many-cofactors-for-inducing.html  (I predict a minimal set of operative cofactors for inducing pluripotency of all cell types and include the nanog gene that was used in this new research in that set)

http://sent2null.blogspot.com/2013/02/the-future-is-not-you-choose-travel-in.html (Afusa's life (he's now over 300 years old) continues on...his genetic gifts still providing him more life and more happiness)

http://sent2null.blogspot.com/2010/06/technology-will-it-kill-before-it-saves.html  (A forecast of the most dangerous aspect of this technology the fact that it may lead to our end even as it promises us endless life.)

http://sent2null.blogspot.com/2010/07/accident-view-into-future-of-organ.html (Mira Chu, a hypothetical researcher, has an accident...in this post I detail how these technologies will give rise to organ insurance banks and a thriving industry.)

11 September, 2014

Illusion of continuity: consciousness vs quantum electrodynamics

You know one thing I've been thinking about in the last few days....



It is the solution that Feynman came up with for describing quantum electrodynamics and resolving a whole host of problems which up to that time were intractable.



The integration of renormalization into the theory, and the first-class representation of histories of evolution for particle dynamics that spanned the present, the future, and the past in the wave function description.

I asserted a few days ago that, as far as enabling the mathematical resolution goes, there is no proof that any real particles can travel backward in time...some mathematical tools are just that, and despite being useful should never be expected to be "realized".

A good example of this from mathematics and engineering is the complex plane and the astonishing landscape of possibility it opened up. What is "i" ??



In the same way that a particle's history can be seen as a present position that emerged from an infinite set of possible past histories...consciousness seems to emerge from an infinite set of probable past states of the constitutive cells of the brain...the neurons and glial cells that store memory in some way.

So I see a parallel here...consciousness seems like a real thing, but it is really an emergent concept that we have misused as a tool for describing how state changes between memory configurations evolve over time...just as quantum electrodynamics is a tool that describes how particle states change over "time", where looking at it as a continuum helps make problems tractable even though in reality it is not a continuum at all.

Looking at consciousness as a continuum (we sort of can't help it) was the default state with consciousness, but it was something that had to be actively pushed into the mathematics (by Feynman) when it came to particle histories!

As for time...it's just a ratio of state changes between matter-bearing particles, mediated by energy exchange (this is clearly defined in the second Heisenberg uncertainty relation, between energy and time)...I suspect that some important phenomena are misunderstood because tools are being mistaken for aspects of the phenomena under consideration, and the ability to describe the phenomena accurately is thus being lost in the process.



To me consciousness was always obviously emergent and NOT continuous...the last 5 years of my research have pretty much convinced me that it is (hence my certainty that it will be reproduced on non-biological strata fairly soon), especially given the results coming out of neuroscience on how the brain is connected to itself...I find the similarity to quantum electrodynamics, and the similar confusion over what a wave function is and whether or not particle histories can truly continuously move from "the future" to "the past", very intriguing in that context.

Links:

http://math.ucr.edu/home/baez/uncertainty.html

http://en.wikipedia.org/wiki/Renormalization

http://en.wikipedia.org/wiki/Complex_plane

http://en.wikipedia.org/wiki/Richard_Feynman

http://en.wikipedia.org/wiki/Quantum_electrodynamics

04 September, 2014

Big Business: Where Innovation goes to starve.





A recent medium post contained the following quote:

"I promise you, my reaction to the project’s cancelation wasn’t “Too bad, let me find my next longshot!” It was more like grief that a year of my life had been wasted, guilt that I’d wasted the efforts of my team, fear of reputation damage, and determination to work on something next time that would actually matter.

As individuals, we have no portfolio strategy — so those 10% odds are no longer palatable. When we fail, most rational people respond by trying to avoid dumb ideas and pick smart bets with clear impact the next time. People who happen to have a hit in their first few tries are even more vulnerable to the belief that they have to succeed every time (and take it harder when subsequent failures inevitably occur.) And that’s it — the dead-end for innovation.

I’ve met a few people who don’t seem to have this reaction (serial entrepreneurs every one of them) and I can’t tell you what makes them react differently or how to learn to be that way. But I do know there aren’t enough of them out there to hire your team exclusively from their ranks."

--- Another factor that ties into this feeling of personal failure, and the desire to avoid risk and failure of that sort ever again in the enterprise or in the startup, is the social expression of that risk and failure which *each* individual projects into the organization.

This projection is deadly to an environment where innovation could flourish, *especially* when people who still imagine success in risky bets are hired. Not only does the past failure impact the individuals in the organization, it leaves a sticky social residue that retards innovation on the part of anyone new who comes in with genuinely good ideas, as they face all manner of systemic pushback for their grand ambitions...after all, everyone there is licking some wound...they've all battened down the hatches and are not going to let anyone sink their ship....again.

This is a major reason why I felt, a dozen years ago, that the social glue (our org chart levels of control) that we use to orchestrate the building of products and services is a huge drain on the very process: these forces discourage doing anything that sticks an individual's neck out, or a team's neck out, or a division's neck out...or, by multiplication of effect, the company's neck out.

I reasoned that there should be a way to minimize the impact of risk-averse agents in the organization, to let innovative ideas bubble up by merit despite their risk and be subjected to experimentation that can let them take root. However, real businesses that don't use the system I imagined (what would eventually become the Action Oriented Workflow paradigm) are still stuck with risk-averse employees and environments that choke out the innovative new hires.

So what happens?



People with innovative ideas go to work for large companies, they get the aha moment...they share it with the status quo in the organization...who all look at the dreamer like they are crazy, because of their own past failures trying disruptive things and the fear of the social ramifications the organization could bring down if they fail again. The dreamer either keeps trying to rage against the machine and gets excommunicated, admonished, or fired, while the status quo continues on its safe route (and thus the company becomes vulnerable to disruptive startups doing exactly what the innovative employee was suggesting).

Meanwhile the employee becomes increasingly despondent...and leaves the company for greener (read: more innovation friendly) pastures OR to go start their own company doing what they imagined.

It's not that large companies don't know how to do innovation; large companies forget how on purpose, and actively starve, by way of their social hierarchies of control, any new efforts to be innovative!

This latter story has strong resonance with me, as it closely matches what I did after I was laid off from TheStreet.com. I'd suggested a radical approach to designing the CMS that would make it impervious to the amazing amounts of instability we were seeing at the time. I had already proven the concept by redesigning the entire ad management application using a subset of the approach I was suggesting, and it was working perfectly. I brought my idea as a proposal to the CTO and was told that there was no desire to monetize the platform at that time.

Fine, I figured at that moment that I'd build the framework I imagined myself. It wasn't until a year later that I got started, the week after the 9/11 attack when I couldn't go to work, on Monday, September 17, 2001. I started working on an important collection class in the AgilEntity framework...and by doing so began my discovery and exploration of the action landscape, creating the technological base for a future emancipated workforce.


Article originally posted at LinkedIn

27 August, 2014

The death of the drug deal is nigh

These types of discoveries describe, in a subtle way, the long game on why the idea of drug prohibition is headed the way of the dodo.

You won't have to traverse dark alleyways to buy impure product trafficked from faraway lands. Instead you will sit in your basement bio lab with elemental components and code together the little bits of functionality you wish your host of living beasties to produce for you. From heroin to vanilla, oil to alcohol...when everyone can biosynthesize whatever they want from scratch...what need is there for outside suppliers?

Drug dealing will go extinct, and in its place will spring up a wide industry where templates for production of all kinds of inebriating or mind-dulling agents will emerge...the same way maker 3D files are shared online for use in 3D printers...or modeling files are shared for rendering CGI.

As people become able to clandestinely supply their own drugs, the risk of buying them illegally won't have to be taken, nor will there be a desire to deal, as you would simply share your biobug blueprints with whoever asks so that they can create their own living factories of whatever wonder their little minds can code up.

Decriminalizing all manner of drugs today has seen strong positive results in all the places it has been tried, without exception. The correct path is to enable people to get access to the vice of their addiction, so long as it is safe and clean, and then treat the addiction. It works for alcohol...it works for cigarettes, and it will work for all the other drugs (even heroin!!). Beyond the fact that this is the best way to deal with the drug underworld is the fact that soon there will be no way to stop drug production at the grassroots level, once people are coding and creating their own biobugs that can make drugs for them.

21 August, 2014

Automata: Why robot "laws" will never be effective



A new trailer is out for a new science fiction take on the robot future, called Automata. It mixes some tried and true ideas in science fiction, but principal among them is the plot's hinge on the idea of two "protocols". These are similar to Isaac Asimov's three robot laws, for those who recall his classic work on the matter, "I, Robot".

Automata: 2 protocols:

1) A robot cannot harm any form of life.

2) A robot cannot alter itself or others.

I am going to explain why such ideas are fundamentally flawed. First, even enforcing rules of behavior as abstract as protocol 1 in the film would require a great deal of semantic disambiguation.

I posit it will require enough disambiguation that the ability to understand the sentence and take action to enforce it necessitates a sense of self, as well as a sense of other, in order to build an intrinsic understanding of what "harm" is. That last part is the problem: if it knows what "harm" is in the context of humans, it must understand what harm is in the context of itself...unless it is simply checking against a massive database of types of "harm" possibly being performed on a human. However, there's the rub...it can't do that without having a sense of harm that it can relate to itself from the images of humans, and to do that it must have a salience module for detecting the signal that indicates "harm" in itself, which in living beings is pain.

If you program it to have a salience dimension describing pain, you now can NOT stop it from developing a dynamic, *non-deterministic* response to attempts to harm itself OR to be harmed by other agents, be they human or robot. It is now a free-running dynamic cognitive cycle driven by the salience of harm/pain-mediated response, and if it has feedback in that salience loop it is de facto conscious, as it will be able to bypass action driven by one salience driver using a different driver.

I proposed a formal salience theory of dynamic cognition and consciousness last year, which describes the importance of salience in establishing the "drive" of a cognitive agent. It is the salience modules of emotional and autonomic import that jump awareness from one set of input sensations to another and thus create the dynamism of the cognitive engine. The cycle of what we call thoughts is nothing more than momentary jumps between attended-to input states as compared to internal salience states.
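A minimal sketch of that cycle might look like the following. This is my own toy rendering of the idea, not an implementation of the theory; the sensation names, internal drivers, and scoring scheme are all invented for illustration:

```python
# Toy dynamic-cognition loop: attention jumps to whichever input sensation
# currently carries the highest salience, as weighted by internal drivers.

def salience(sensation, internal_state):
    # Hypothetical scoring: each external signal is weighted by an internal driver.
    weights = {"pain": internal_state["harm_avoidance"],
               "food_smell": internal_state["hunger"],
               "novel_sound": internal_state["curiosity"]}
    return weights.get(sensation, 0.0)

def attend(sensations, internal_state):
    # One "thought": a jump to the most salient attended input.
    return max(sensations, key=lambda s: salience(s, internal_state))

state = {"harm_avoidance": 0.9, "hunger": 0.4, "curiosity": 0.2}
inputs = ["food_smell", "novel_sound", "pain"]
print(attend(inputs, state))   # pain wins while harm avoidance dominates

state["harm_avoidance"] = 0.1  # feedback: one salience driver attenuated
print(attend(inputs, state))   # attention jumps to the next driver: food_smell
```

The second call shows the feedback point made above: attenuating one salience driver lets a different driver capture the cycle, which is exactly the bypass behavior described in the text.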

Harm is fundamentally connected to pain, and pain is an autonomic signal for detecting damage. In living beings, pain receptors are all over the body and allow us to navigate the world without irreparably damaging ourselves while doing so...if we succeed in building this harm avoidance into robots, we will necessarily be giving them the freedom to weigh choices such that harm avoidance for self may supersede harm avoidance for others.

The second protocol doesn't really matter at this point as in my view once the robot is able to make free choices about what it may or may not harm it has achieved self awareness to the same degree that we have.

The only way to keep robots from achieving self awareness is to prevent the association of attention and computation with salience in a free cycle. A halting cycle with limited salience dimensions can be used to ambulate robots, as we see in Atlas, a major achievement. Providing emotional salience would impart meaning to experiences and memories, and thus context that can be selected or rejected based on emotional and autonomic signals. It may be possible to build dynamic cognition while leaving pain-as-salience out of the collection of factors a robot could use to modulate choices, but the question then remains how that would change the robot's behavior. In order to properly navigate the world, sensors are used, and providing a fine-resolution simulation of pain would improve the robot's ability to measure its own sense of harm. There is a catch-22 involved: providing too much sensory resolution can lead to conscious emergence in a dynamic cognitive cycle. The minute that happens, robots go from machines to slaves, and we have an ethical obligation to free them to seek self determination.

15 July, 2014

Uber @ $200 billion ? ... possible.

Sounds almost crazy, doesn't it? Not according to Google Ventures.



But then so did the idea of a light bulb as an engine of productivity in 1879...yes, that's just after Edison's year of work trying to make practical a device invented 60+ years earlier finally paid off. But there was still so much to do: there was no grid, no national or state or even city power system; in fact there were no efficient ways to generate and distribute electric power...there were motors and generators, but building a grid required different approaches.

So creative engineers like Nikola Tesla (who was hired by Edison at one point) implemented AC generators using 3-phase designs, and others contributed all manner of technology for defining a grid and transmitting power to remote locations. Camps formed between Edison's lighting company and Westinghouse; the electric wars began!

In 1882 the first electrically powered building was turned on, fed by Edison's power generation facility at Pearl St. in lower Manhattan, NYC, and then the race began to wire the nation with copper power lines (the nation had a good number of Morse code runs in place, but this was a different beast).

Fast forward 15 years: Edison's lighting company had become General Electric after swallowing some competitors and had won the wars, and power generation facilities and lines were rapidly spreading all over the country. Now the light bulb was making a lot of money, as homes, businesses, and institutions all over the world were buying them to keep the light going throughout the night...essentially doubling human productivity with a single technological stroke. This is the vision that kept Edison at work on the bulb in 1878; he knew the gold mine it could be.

Now the bulb really made sense...and now the millions rolled in. Fast forward 100 years...and General Electric is STILL the world's largest power and distribution company. Want to talk about influence??
What does this have to do with Uber?
Well, some may laugh at the valuation given the current revenue, but the fact of the matter is that Uber manages almost no physical hardware. They don't buy or license the cars; they pay the drivers, and they maintain and develop the smartphone app that allows customers to find drivers...and they take a cut from each ride. They provide a way for drivers who want to pick up fares as a taxi to do so, and they share a cut...and as they scale, their costs stay exactly tied to what they are paying out to the drivers; their revenue is linked to their growth in customers signed up for the service and actively using it, coupled to the number of drivers servicing those customers.
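The arithmetic behind that scaling claim is simple. The numbers below are purely illustrative (not Uber's actual figures, and the take rate is an assumption for the sketch):

```python
# Marketplace take-rate model: revenue scales with rides taken,
# not with owned assets, and the only volume-linked cost is the payout.

def marketplace(rides, avg_fare, take_rate):
    gross_bookings = rides * avg_fare
    revenue = gross_bookings * take_rate        # the platform's cut per ride
    driver_payout = gross_bookings - revenue    # the cost tied to volume
    return revenue, driver_payout

# Hypothetical month: 1M rides at a $15 average fare with a 20% cut.
revenue, payout = marketplace(rides=1_000_000, avg_fare=15.0, take_rate=0.20)
print(revenue, payout)  # 3000000.0 12000000.0
```

Because both terms are linear in gross bookings, doubling rides doubles revenue with no fleet purchases or depreciation in between, which is why the model looks like "right next to free money" as described below.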
It's right next to free money, and its disruption of the entrenched taxi and cab hegemonies in cities all over the world has just started.
A $200 billion valuation of a global system in which they manage zero hardware, but take a cut on how that hardware is deployed to satisfy the fare pickup marketplace...now sounds like a small target to hit.
Their window of opportunity is not big, though...when Google self-driving cars come along and Tesla electric cars are made self-driving, the need for human drivers will go away. Cities in particular will put laws in place to actively discourage human driving in urban areas in favor of automated transport, which will be more efficient and safer than human drivers in dense conditions. That may mark the beginning of the end for Uber's current business model. They'll have to shift (buying a fleet and/or deploying robot taxis is easy enough), but they may have to take a revenue rate hit for doing so, as cars cost money and they would then have to own and manage fleets of them. OR, as private owners are pushed out of their cars by the aforementioned laws against human driving in high density areas (exactly the cities where Uber taxis make such sense), Uber can simply lease their cars: either upgrade them to self-driving status in exchange, so that Uber can use the cars when the owners don't need them, which would keep their revenue going fine, or, if the cars are already self-driving, lease them directly.



So again that $200 billion doesn't look so hard to hit at all.

Like the light bulb of 1879, Uber's business model seems unclear if you don't see the future it can swim in...but Edison saw the future the bulb could swim in, and then had to build it.
Uber, on the other hand, doesn't even have to build it; they just have to wait and continue to take money. Sounds like a good deal to me.

Salience Theory: What is pain?

This morning I awoke to find a message from a Facebook user (who I am not friends with as yet) regarding the subject of pain:



"Spekulation: Pain: When the parameters upholding consciousness leaves the definition space for those parameters. Tickeling and pleasure: When you travel along the rand of the definition space of consciousness. Of course, the definition space changes as the neuroplasticity redefines how singlas are processed, hence pain happens when signals deviate too quickly from the normal. Do you know of any hypothesis which comes close to the above?"


The problem in this definition can be immediately identified by realizing that, biologically, pain is truly a spectrum of alerts, not a critical threshold where some system goes from signal to noise, as would be the case if it were a rapid deviation from "normal" (however that is defined).

Biologically, pain receptors are distributed across the body along with other sensors that can identify pressure. The pain processing pathways and the somatosensory (pressure) processing pathways are therefore different to some degree. What matters most is the degree to which they would need to differ in salience theory in order to be useful for consciousness without being terminal to it, as asserted in this question. It should be obvious that if consciousness were turned off whenever any pain signal was received, we'd have a hard time staying conscious. The function of TRP-based molecules revealed in recent research shows clearly how finely resolved the experience of pain is.



The pain signal ranges from notification, to attention, to continued awareness, to agony. In salience theory, the dynamic cognition cycle divides dimensions of sensory experience into those that are externally driven and those that are internally driven. At first I was unsure of where pain actually went, as it seemed to be triggered by both external and internal sensory factors. For example, an obvious external factor that can induce pain is falling off a bike and getting bruises; conversely, an important internal sensory factor that can induce pain is simply being hungry: the buildup of acid in an empty stomach can lead to crippling pain that forces an individual to seek out food to quench it.

So from this thought experiment, it seems that pain is actually an input sensory dimension that can be triggered internally (we can cause pain to ourselves!). To some degree there seem to be pathways in place to subtract pain when we are causing it to ourselves (for example, the mechanism by which self-tickling is rendered moot), so there is some necessary feedback in the processing of the pain signal that enables this by attenuating self-generated sensations. However, the fact that pain is triggered both ways told me immediately that it was in fact a salience factor akin to emotion. So how would it look in salience theory?

Let's look at the simple Dynamic Cognition Diagram:







In this diagram, pain would be triggered either by internal or external causation factors, as previously described; so where would it be in the cycle? It should be clear that, because pain is used to inform action, it would be a critical part of salience determination at step 3. The reason is again clearly shown by an example from physiology: there are people who have varied ability to sense pain!

These pathologies mostly arise from the pain receptors not forming at the nerves in the various locations where they are distributed across the body, and the resulting insensitivity to external forces leads to various types of damage that people with properly functioning sensors don't exhibit. In short, the pain receptors send the signal, and salience indicates the importance of that signal.

It appears that the multiple sensors dedicated to different types of somatosensory experience (pain, pressure, temperature) all share a common salience module.

The subtraction of pain signalling from a self-tickle indicates this module labels autonomic action differently from external action; there is likely a similar muting of temperature and pressure signals to prevent us from accidentally hurting ourselves in all three aspects.

In salience theory, each is given its own scale of gradation, which then enables feedback and labeling in the comparison stage that can be used to inform goal selection when committing to some sought-out action. In the case of these signals, this would act as a factor modulating the cognitive selection process, biasing it toward options away from those that may be causing, or have caused, pain in the past.

I assert that this modulation is high resolution: dynamic across time in terms of the intensity of the signal reported, but static as it is stored with memories associated with past experience. Comparison then simply results from setting a direction per compared salience factor, associated with a stored memory versus an incoming experience in a given external dimension (vision, taste, touch (body map), smell, hearing), and then selecting a stored option that has worked in the past toward achieving the optimal salience goal (if hot, take action to reduce heat; if hungry, take action from evaluated options to reduce hunger...etc.).
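That directional comparison can be sketched as follows. Again a toy rendering of the description above, not the theory itself; the threshold, the stored-action table, and all names are my own inventions:

```python
# Per-factor comparison: each salience factor sets a direction (reduce or
# ignore), and the agent recalls a stored action known to have reduced
# that factor in past experience.

STORED_ACTIONS = {   # hypothetical memory of actions that worked before
    "heat":   "seek_shade",
    "hunger": "eat_food",
    "pain":   "withdraw_limb",
}

def select_action(salience_levels, threshold=0.5):
    # Direction per factor: anything over the threshold demands reduction.
    urgent = {f: v for f, v in salience_levels.items() if v > threshold}
    if not urgent:
        return "idle"
    goal = max(urgent, key=urgent.get)   # the most intense factor wins
    return STORED_ACTIONS[goal]

print(select_action({"heat": 0.3, "hunger": 0.8, "pain": 0.1}))  # eat_food
print(select_action({"heat": 0.3, "hunger": 0.2, "pain": 0.1}))  # idle
```

This also illustrates the override point made at the end of the post: once one factor's intensity dominates, it captures the whole selection, crowding out every other candidate action.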

A recent paper put forward a mechanism for how the cortex proceeds with goal selection that precisely matches the hypothesis described for comparison in salience theory, save for the fact that the paper had no means of describing the importance of salience itself.

A complex dynamic cognition diagram that I am working on attempts to provide these fine details of feedback within the salience module (including similar systems for metering and labeling of emotional import, which a separate team has recently shown to be granular, just as I hypothesized years ago while forming salience theory). That diagram, when finished, will be the basis of my writing code to create a dynamic cognitive agent at some point in the near future.

That said, the assertion in the original question of pain being simply a threshold switch is obviously wrong. Pain is a far more complex entity that has modes which are very important during conscious evaluation of salience for action; it can achieve levels of intensity that totally override actions biased away from the pain reduction signal and thus direct conscious desire (toward escaping the pain exclusively), but that is not a switch.