October 29, 2011

Facilitated Variation (FV): a random (walk) tour

I have always been interested in the interface between complexity theory and evolutionary theory. In particular, there is a lot to be learned from characterizing evolution as a complex, dynamic system. One central theme in the study of evolving systems is the ability of diverse populations to adapt to new environments. According to contemporary evolutionary theory, this should occur in a non-goal-directed (e.g. random) fashion, which seemingly presents a dilemma: how does variation get “shaped” towards adaptive ends when there is no centralized or deterministic control process? If evolutionary processes involve small, incremental changes that occur by chance, even in the presence of natural selection or genetic drift, then one might not expect the highly-adaptive outcomes that contribute to the evolution of complex traits.

Facilitated variation (hereafter FV) is a rather obscure concept proposed to explain how randomly-occurring variation in the genotype can end up producing adaptive, specialized phenotypes of high fitness. One of its proponents, the cell biologist Marc Kirschner, has advanced this idea through publications [1], books [2], and conference talks. Kirschner's view is largely physiological, and so focuses on examples from biology (as opposed to computation). In the book “Plausibility of Life”, an example of FV is given that involves the rapid evolution of a double dewclaw morphology in dogs [2].

Figure 1. Figure 35, from Chapter 7 in “Plausibility of Life” [2]. Note that highly-specialized changes in the phenotype are determined by large-scale changes in the protein sequence.

FV depends on another concept called evolvability [3]. Evolvability, besides having a potentially recursive definition [4], refers to the degree to which an organism or population can evolve towards new phenotypes given its current set of constraints and genetic background. According to the criteria of [1, 5], organisms and their phenotypes are evolvable if they exhibit one or more of the following characteristics: weak linkage, modularity, and exploratory behavior. In Gerhart and Kirschner [1], facilitated variation is conceptualized as a force of innovation in evolution responsible for events such as the emergence of RNA transcription, the specialization of cells towards differentiated phenotypes, and the origins of morphogenetic patterning. I will now use examples from the evolution of nervous systems to demonstrate how these characteristics manifest themselves in the phenotype.

Weak linkage refers to the relaxed coupling of the specific components that make up a trait. In the mammalian brain [6], linkage (or covariance) between structures has been relaxed. This is the mechanism behind the enlargement of certain structures relative to overall brain size. For example, structures such as the mammalian isocortex are able to expand (or shrink) according to the demands of behavior, information processing, and/or energetics. However, it is important to note that linkage is never entirely disrupted, a point that connects directly to modularity [7].

Modularity is also an important ingredient in the evolution of structures such as the mammalian isocortex. The partitioning of information processing in a brain structure allows computations to proceed in parallel, which can facilitate “on-the-fly” adaptation as well as adaptive change over evolutionary timescales. This distinction between evolutionary plasticity and life-history plasticity, and the ability to possess both, is a by-product of extensive modularity. In a more general phenotypic context, modularity allows an organism to evolve localized evolutionary solutions. For example, if there is selective pressure on an organism for a certain type of locomotion, it does not make much sense for parts of the organism not directly involved in locomotion to be part of the solution.

Exploratory behavior involves the relaxation of the “hard wiring” of the brain or other phenotypes, whether that relates to connectivity or to changes in structure. Again, isocortex and related structures seem to match this criterion. The combination of dense local connectivity with selective, sparse long-range connectivity (referred to in the literature as “scale-free” [8] connectivity) seems to have been a major evolutionary innovation in isocortex [9]. This type of architecture, seen in many highly adaptable complex networks (from social networks to protein-protein interaction networks), has allowed for myriad changes in the size and location of cortical maps across species.

Figure 2. Examples of brain connectivity. Nodes represent neurons or neural structures, while the arcs represent connections between nodes. A: whole-brain structural (synaptic) connectivity [8], B: example of a network with random connectivity [10], C: example of a network with scale-free (quasi-random) connectivity [10].

Another component of this facilitation mechanism concerns the conditions under which FV acts on a population during the course of evolution. In their contribution to the literature on FV, Parter, Kashtan, and Alon [11] have characterized this in terms of RNA evolution and logical circuitry (e.g. a computational approach). These authors argue that many studies of evolution and evolvability examine the range and diversity of variation without regard for the usefulness of the novel phenotypes produced in the course of evolution. Figure 3 demonstrates how this is also a matter of mapping between genotype and phenotype (across examples as diverse as animals, RNA structure, and boolean networks).

Figure 3. Animal, RNA structure, and boolean network example from Figure 1 as shown in [11].

Taking a more computational view than do Gerhart and Kirschner, Parter, Kashtan, and Alon characterize FV as an emergent phenomenon (e.g. the sum of a process being greater than its parts). Like Gerhart and Kirschner [1], they equate FV with the characteristics of evolvability. Moreover, their view of FV has parallels with learning and memory mechanisms, particularly an idea originating at the turn of the last century called Baldwinian evolution [2]. This idea is based on the assumption that natural selection can be guided in part by the interaction between prior experience and current innovations [12]. While not usually a feature of mainstream evolutionary theory, it is often used in the evolutionary computation community (e.g. genetic algorithm design, behavioral simulations). However, it might also play a role in the evolvability of specific traits, allowing complex solutions to emerge from an otherwise random process.

This largely hypothetical idea (Baldwinian evolution) is the inspiration for Parter, Kashtan, and Alon’s conception of rapid adaptation, which results in the enhanced generation of novelty. In theory, an organism would learn (e.g. acquire mutations) from explorations of past environments, traces of which would then be retained in a population’s genomic diversity. In the context of subsequent environmental challenges, these genomic “memories” (likely instantiated as neutral mutations) would allow for rapid adaptation to seemingly novel environments. Dividing up the genotype and/or phenotype into modules, each of which performs a different task and evolves towards a different fitness optimum, seems to give the best results among their Boolean network examples. It is worth noting that this same research group (Uri Alon’s laboratory) has done work on a topic called environmental switching [13], or the rapid alternation between two environmental contexts. In bacterial and computational models, it has been observed that populations subject to switching are able to evolve solutions more rapidly than those remaining in a single environment.
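To make the switching idea concrete, below is a minimal Python sketch of evolution under alternating goals that share a common module. This is my own toy example, not the model from [11] or [13]; the bitstring encoding, goal definitions, and all parameters are arbitrary choices for illustration.

import random

# Toy sketch (not the model of refs [11] or [13]): bitstring evolution under
# two goals that share a "module" (the first 8 bits), switching every few generations.
GOAL_A = [1]*8 + [0]*8
GOAL_B = [1]*8 + [1]*8

def fitness(genome, goal):
    # Fraction of bits that match the current goal.
    return sum(g == t for g, t in zip(genome, goal)) / len(goal)

def mutate(genome, rate=0.05):
    return [1 - b if random.random() < rate else b for b in genome]

def evolve(switch_every=20, generations=200, pop_size=50):
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for gen in range(generations):
        goal = GOAL_A if (gen // switch_every) % 2 == 0 else GOAL_B
        ranked = sorted(pop, key=lambda g: fitness(g, goal), reverse=True)
        parents = ranked[:pop_size // 5]              # truncation selection
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(fitness(g, GOAL_A) for g in pop)

print(evolve())

In the spirit of the modularly-varying-goals result, the shared module tends to be conserved across switches while the context-specific bits are rediscovered quickly; the parameters here are purely illustrative.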

I would now like to present my own ideas regarding potential mechanisms behind FV, based on insights from complexity theory. Let’s consider the aggregate effect of mutations across individuals in a population as a diffusive process (e.g. driven by noise). Mutations, being largely random, should be distributed across a population in a way that favors no particular portion of the phenotype space. In other words, changes to the variety of phenotypes due to mutations across a population should result in solutions that are distributed in a uniform manner around the current solution (mean phenotype). While this may or may not be a fair assumption, we can envision this as a random walk across the phenotype space. A random walk [14] is a model typically used to describe the fluctuation of particles from a central point as they explore a given space both incrementally and randomly. These step sizes (e.g. fluctuations) are drawn from a Gaussian distribution (i.e. Gaussian white noise, the increments of Brownian motion), which has a finite, constant variance about the mean value (see Figure 4).

Figure 4. Left: an instance of a Gaussian random walk. Right: the distribution of step lengths about the mean.
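For concreteness, a Gaussian random walk like the one in Figure 4 can be generated in a few lines of Python (a minimal sketch using NumPy; the seed and step scale are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
steps = rng.normal(loc=0.0, scale=1.0, size=1000)   # Gaussian step sizes (white noise)
walk = np.cumsum(steps)                             # cumulative sum = Brownian-like walk
print(walk[-1], np.abs(steps).max())                # endpoint; the largest single step stays modest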

However, random walks and their associated noise need not be based on a Gaussian distribution. There is a class of random walks called Levy flights [15], which draw their step lengths from a heavy-tailed (power law-like, 1/f) distribution and may allow for large-scale jumps across phenotype space using a random (e.g. noise-driven) mechanism (see Figure 5). Levy flights behave like avalanches and other power-law phenomena in that a few very large magnitude events are embedded in a vast number of smaller, more uniform events. In terms of phenotype space, these large jumps towards phenotypes that exhibit novel adaptations of high fitness may occur due to mutations in genes of large effect, or mutations in the regulatory regions of a gene. In addition, Levy flights (e.g. heavy-tailed random walks) seem to have the right timescale characteristics -- as major evolutionary innovations are quite rare [16] -- to serve as a driving mechanism for evolutionary innovation.

Figure 5. Left: an instance of the Levy flight (1/f random walk). Right: the distribution of step lengths about the mean.
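A Levy-style walk differs only in where its step lengths come from: a heavy-tailed distribution rather than a Gaussian. The sketch below is again a toy example (the Pareto exponent is arbitrary), but it shows how a handful of very large jumps dominate the trajectory:

import numpy as np

rng = np.random.default_rng(0)
n = 1000
lengths = rng.pareto(a=1.5, size=n) + 1.0       # heavy-tailed (power-law) step lengths
angles = rng.uniform(0, 2*np.pi, size=n)        # isotropic step directions in 2-D
x = np.cumsum(lengths * np.cos(angles))
y = np.cumsum(lengths * np.sin(angles))
print(lengths.mean(), lengths.max())            # a few steps are orders of magnitude larger than the rest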

I will not attempt to reconcile all of these ideas here with one grand statement. Nevertheless, I will say that FV is a potentially powerful mechanism for explaining the open-ended nature of evolutionary innovation. Ideas from complexity theory may also help us resolve this ongoing dilemma, in addition to sojourns into high-throughput data. References for further reading can be found below.

References
[1] Gerhart, J. and Kirschner, M. (2007). The theory of facilitated variation. PNAS, 104(Suppl. 1), 8582–8589.

[2] Kirschner, M. and Gerhart, J. (2005). The Plausibility of Life: Resolving Darwin's Dilemma. Yale University Press, New Haven, CT.

[3] Wagner, A. (2005). Robustness and Evolvability in Living Systems. Princeton University Press, Princeton, NJ.

[4] Pigliucci, M. (2008). Is evolvability evolvable? Nature Reviews Genetics, 9, 75-82.

[5] Striedter, G.F. (2005). Principles of Brain Evolution. Sinauer Associates, Sunderland, MA.

[6] Finlay, B.L. and Darlington, R.B. (1995). Linked regularities in the development and evolution of mammalian brains. Science, 268, 1578-1584.

[7] Schlosser, G. and Wagner G. (2004) Modularity in Development and Evolution. Harvard University Press, Cambridge, MA.

[8] Bullmore, E. and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10, 186-198.

[9] Jehee, J.F.M. and Murre, J.M.J. (2008). The scalable mammalian brain: emergent distributions of glia and neurons. Biological Cybernetics, 98, 439–445.

[10] Sporns, O. (2011). Networks of the Brain. MIT Press, Cambridge, MA.

[11] Parter, M., Kashtan, N., and Alon, U. (2008). Facilitated Variation: How Evolution Learns from Past Environments To Generalize to New Environments. PLoS Computational Biology, 4(11), e1000206.

[12] Baldwin, J.M. (1896). A New Factor in Evolution. American Naturalist, 30(354), 441-451.

[13] Kashtan, N., Noor, E., and Alon, U. (2007). Varying environments can speed up evolution. PNAS USA, 104, 13711–13716.

[14] Antonelli, P.L. and Sammarco, P.W. (2009). Evolution Via Random Walk on Adaptive Landscapes. Open Systems & Information Dynamics, 6(1), 47-68.

[15] Reynolds, A.M. and Rhodes, C.J. (2009). The Lévy flight paradigm: random search patterns and mechanisms. Ecology, 90, 877–887.

[16] Lowe, C.B., Kellis, M., Siepel, A., Raney, B.J., Clamp, M., Salama, S.R., Kingsley, D.M., Lindblad-Toh, K., and Haussler, D. (2011). Three Periods of Regulatory Innovation During Vertebrate Evolution. Science, 333, 1019-1024.

Additional References (or, things that shaped my thinking when writing this post that are not directly cited)
Hayden, E.J., Ferrada, E., and Wagner, A. (2011). Cryptic genetic variation promotes rapid evolutionary adaptation in an RNA enzyme. Nature, 474, 92-95.

Rutherford, S.L. (2000). From genotype to phenotype: buffering mechanisms and the storage of genetic information. BioEssays, 22, 1095-1105.

Wagner, G. P. and Altenberg, L. (1996). Complex adaptations and the evolution of evolvability. Evolution, 50, 967–976.

Weinreich, D.M., Delaney, N.F., Depristo, M.A., and Hartl, D.L. (2006). Darwinian evolution can follow only very few mutational paths to fitter proteins. Science, 312, 111–114.

October 23, 2011

Papers from Bioinspiration and Biomimetics

Every so often, I post to this blog my reviews of selected articles from recent issues of specialized academic journals to get a sense of current thinking and emerging themes in these fields. This time, I am posting on three articles from the latest issue (circa September 2011) of Bioinspiration and Biomimetics.


Akanyeti, O., Venturelli, R., Visentin, F., Chambers, L., Megill, W.M. and Fiorini, P. (2011). What information do Karman streets offer to flow sensing? Bioinspiration and Biomimetics, 6, 036001.


In Akanyeti et al., the authors use a sensor array that mimics the lateral line of fish to sense the complex hydrodynamic flows known as Karman vortex streets (KVS). According to the authors, flow information can be used to understand dipole fields, which are created by the flapping tail of a fish while swimming. Within this field, unsteady flows can be created that must be sensed and analyzed by swimming animals and machines alike to maximize performance [1, 2].


KVS structures form due to the presence of objects in a flow (such as prey or obstacles), and take the shape of a columnar array of vortices that propagate downstream over time. KVS are turbulent, but can also be predictable. In this way, some fishes may exploit the information embedded in such flows to reduce energy expenditure. This has clear benefits for engineered systems that can sense and process these flows in real time.

The authors remove the fluid-body interaction for purposes of establishing a baseline in building a predictive model for KVS structure and ultimately establishing a series of design principles. This is where the disembodied lateral line comes into play. The vortex profile of a KVS (Figure 3 in the paper) is shown below. A KVS consists of three regions: suction, vortex formation, and the vortex street. The street is shown in this diagram as the light blue region trailing from the area of yellows and reds which define vortex formation. The vortex detachment point (rightmost "X") is the point where the vortex formation region ends, and vortices travel off to the right as discrete packets of turbulence.

The authors also computed the power spectrum of their generated KVSs to evaluate periodicity in space and time, using Fourier-based decomposition and filtering of deterministic noise to recover the predominant frequencies from the KVS signature. A KVS can be characterized by two dominant frequency components, but the boundary of a KVS requires many more.
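As a rough illustration of how dominant frequencies can be pulled out of a flow signal, here is a generic FFT-based sketch; this is not the authors' processing pipeline, and the sampling rate and shedding frequency are invented for the example:

import numpy as np

fs = 100.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1/fs)
# Toy stand-in for a lateral-line signal: one dominant shedding frequency plus noise.
signal = np.sin(2*np.pi*2.0*t) + 0.3*np.random.default_rng(0).normal(size=t.size)
power = np.abs(np.fft.rfft(signal))**2        # power spectrum
freqs = np.fft.rfftfreq(t.size, d=1/fs)
print("dominant frequency (Hz):", freqs[np.argmax(power[1:]) + 1])   # skip the DC bin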


Calisti, M., Giorelli, M., Levy, G., Mazzolai, B., Hochner, B., Laschi, C., and Dario, P. (2011). An octopus-bioinspired solution to movement and manipulation for soft robots. Bioinspiration and Biomimetics, 6(3), 036002.


In Calisti et al., the authors review the state of the art in soft robotics. Soft robots are a subset of continuum robots: they bend easily (e.g. have a compliant physical structure) and have no rigid joints or other rigid components. The "softness" of these machines is achieved by using strong, flexible materials for the structure and specialized actuators for the joints. One consequence of building soft robots is that the biomimetic template cannot be explicitly based on an animal design with a rigid skeleton. While this may seem to reduce the number of possible templates for engineering applications, there are actually several highly useful designs of this kind in nature.

What kind of compliant yet highly-functional structures has the animal kingdom provided as a template? One example is the "muscular hydrostat" [3]. In the article, this is exemplified by the octopus arm (the inspiration for the OctArm robot) and the elephant trunk (the Active Hose robot). The authors also characterize soft robots as either locomotors or manipulators. The muscular hydrostat is capable of performing both of these functions: its linearly-coupled muscle groups can be coordinated globally to produce a semi-rigid locomotory structure, or locally to enable manipulation and fine motor control.

The authors also offer a candidate control algorithm, examples of how the soft robotic structure deforms during movement and when interacting with objects (Figure 8 in the paper), and a dictionary of robotic movements that map to behaviors observed in nature. Finally, the authors relate their work on soft robots to the principle of embodiment [4], which suggests that there is a complex interplay between movement control, morphology, and the environment.


From left: algorithm proposed in paper, dictionary of robotic actions mapped to natural behaviors, and Figure 8 from paper (geometric deformation during movement).


Peterson, K., Birkmeyer, P., Dudley, R., and Fearing, R.S. (2011). A wing-assisted running robot and implications for avian flight evolution. Bioinspiration and Biomimetics, 6(3), 046008.


Peterson et al. have examined the design of flapping wings by testing a range of wing morphologies. These experiments were conducted in a wind tunnel, where lift and drag forces were measured to evaluate each design. As background, the authors point out that airflow instability causes gliding microrobots to fail catastrophically, while ground-reaction force interactions can destabilize ground-based locomotory robots in a similar manner [5]. Consider that many of the animals that glide (birds, flying squirrels, etc.) also use a second mode of locomotion.

From left: running/hopping, gliding flight, and flapping flight

Flapping flight, in which wingbeats are governed by a highly-complex set of regulatory mechanisms that offset the effects of air turbulence, is superior to gliding flight in terms of minimizing the possible modes of failure. Yet flapping flight still does not totally remove the possibility of failure. Therefore, a hybrid robot capable of both flapping flight and ground locomotion is an optimal design [6]. This design principle is demonstrated by the authors' DASH robot (featured in this paper).

In theory, one mode should take over when the other fails. Therefore, various wing designs were tested in a static context to evaluate possible failure modes. One test involved understanding how the presence of flapping wings affects the stability of ground locomotion. Not only does this additional morphology make the robot heavier and slower, it also selectively stiffens the body. This lack of compliance seems to negatively impact the overall stability of the gait, which was compensated for by adding simple polyester (e.g. compliant) feet to each leg.


To summarize some of their key experimental results, the authors compare two different modes of flight (flapping vs. gliding) across a range of wind speeds (see below). Based on the results of their experiments, the authors finish with a discussion of the optimal mode for flying locomotion.

References:

[1] Liao, J.C., Beal, D.N., Lauder, G.V., and Triantafyllou, M.S. (2003). The Karman gait: novel body kinematics of rainbow trout swimming in a vortex street. Journal of Experimental Biology, 206, 1059-1073.

[2] Muller, U.K., van den Heuvel, B., Stamhuis, E.J., and Videler, J.J. (1997). Fish footprints: morphology and energetics of the wake behind a continuously swimming mullet (Chelon labrosus Risso). Journal of Experimental Biology, 200, 2893-2906.

[3] Gutfreund, Y., Flash, T., Fiorito, G., and Hochner, B. (1998) Patterns of arm muscle activation involved in octopus reaching movements. Journal of Neuroscience, 18(15), 5976–5987.

[4] Pfeifer, R., Iida, F., and Bongard, J. (2005) New robotics: design principles for intelligent systems. Artificial Life, 11, 99–120.

[5] Holmes, P., Full, R., Koditschek, D., and Guckenheimer, J. (2006). The dynamics of legged locomotion: models, analyses, and challenges, SIAM Review, 48, 207-304.

[6] Ijspeert, A.J., Crespi, A., Ryczko, D., and Cabelguen, J.M. (2007). From swimming to walking with a salamander robot driven by a spinal cord model. Science, 315, 1416-1420.

October 19, 2011

Artificial Life XIII is coming

Artificial Life XIII will be held in East Lansing, MI from July 19-22, 2012. It will be hosted by the BEACON Center, the NSF-funded "evolution in action" project based at Michigan State.

Tracks include: evolution in action, behavior and intelligence, collective dynamics, synthetic biology, and art, music, and philosophy. The paper lineup should resemble previous Alife conferences [1, 2].

[1] Alife XII proceedings

[2] Alife XI proceedings

October 17, 2011

Frontiers of Rapid Prototyping

I have been seeing an increasing number of articles and proof-of-concept examples using 3-D printers to make increasingly complicated objects. Scientists and engineers have been using rapid prototyping technology for years to make models and other specialized objects. Below are some 3-D printing highlights that the Creative Machines Lab at Cornell has been working on.



One of their projects, spun off as the Endless Forms website, allows you to "evolve" various objects (such as a screwdriver or lamp design) using a genetic algorithm driven by personal selection and randomly-generated variation. The goal is to iteratively design objects and then use a 3-D printer to instantiate them in the physical world. An example of this is shown below.
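The loop underneath this kind of interactive evolution is quite simple. Here is a minimal sketch in Python, which is my own toy version rather than the Endless Forms code; the "design" is just a vector of shape parameters, and the human chooser is replaced by a stand-in function:

import random

# Hypothetical "design" = a vector of shape parameters (not the Endless Forms encoding).
def mutate(design, sigma=0.1):
    return [p + random.gauss(0, sigma) for p in design]

def evolve_interactively(seed_design, choose, rounds=10, brood=8):
    current = seed_design
    for _ in range(rounds):
        candidates = [mutate(current) for _ in range(brood)]
        current = choose(candidates)    # on the website, a human clicks a favorite
    return current

# Stand-in for the human: prefer designs whose parameters sum closest to 1.0.
auto_pick = lambda designs: min(designs, key=lambda d: abs(sum(d) - 1.0))
print(evolve_interactively([0.5, 0.5, 0.5], auto_pick))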



In more of a mass-production context, the vision of the future was put forth several years ago by Neil Gershenfeld in his book "Fab". According to this vision, tools and objects could be printed out with industrial-grade specs at any physical location on the planet given schematics stored on the internet.



While the vision presented in "Fab" is typical Media Lab/futurist fare, there are now a number of cheap, open-source solutions for 3-D printing. The key to printing complex objects is to provide a digital template for the model, which can be done using a CAD program. Fortunately, there are free browser-based tools such as Tinkercad for mocking up designs, along with open-source hardware from companies like MakerBot Industries.

Tinkercad: CAD-based program for mocking up 3D designs for printing.

The applications are also quite interesting. Below are lock and key models that have been made using the Tinkercad software and d.i.y. hardware.





There are also some interesting applications to virtual reality research, particularly in the area of tangible computing. Tangible computing allows physical models to be placed in the context of a virtual space. For example, an architect can model the effects of wind and airflow on their building designs by placing a physical model on a digitization pad. The physical model is then integrated with a virtual model of the space, allowing for key parameters to be manipulated.

Advanced 3-D printing in the context of virtual world design would allow tighter integration between physical models and virtual worlds through iterative design and other refinements that account for real-world physics and processes.

In a related article from Maker Blog, work has been done on modeling biological phenomena, particularly things at the micro- and nano-scale. Below is an example of 3D DNA models that are used in the field of nanotechnology for scaffolding and other ultra-small structures.



The 3D DNA models above were not made using the MakerBot technology. Instead, a program called caDNAno [1] was used to design DNA structures using the principles of DNA origami. Physically, the designs are instantiated in DNA not by "printing" them out; rather, the DNA is annealed (heated and then slowly cooled) so that it folds into the desired shape.

Yet 3D printing can be used to better understand molecular biology. In this case, the interaction between actual molecular structures, virtual models, and physical models might be used to design new molecules for therapy and other applications using iterative design. This is an open (and seemingly fertile) area of research for both d.i.y.-minded engineers and biotechnologists alike.

References:

[1] Douglas, S.M. et al. (2009). Rapid prototyping of 3D DNA-origami shapes with caDNAno. Nucleic Acids Research, 37(15), 5001–5006.

October 11, 2011

The "Growth and Form" of Pasta

This weekend, I found a really interesting book called "Pasta by Design", which approaches pasta with the eye of an industrial designer or engineer. Thinking it was a novelty at first, I later found out that there is a related book called "The Geometry of Pasta". I'm not sure if books like this are released to compete with each other, but there is apparently an interest in this topic.




I find the links between the quantification of pasta and the growth/development of living systems interesting. In "Pasta by Design", each type of pasta is cataloged and quantified, each having its own geometry as determined by artificial selection (in this case, human preference). Are these simply the most "elegant" forms, or are they the most efficient forms given the need to shape the pasta by hand and incorporate it into cuisine? In the end, I was most intrigued by the functions that describe the geometry of each shape. This method for quantifying form goes all the way back to D'Arcy Thompson and his seminal book "On Growth and Form".



Thompson's book featured many different conceptualizations of form, especially as they relate to development and variation observed in animal and plant morphology. Most importantly, he observed that growth trajectories can be distilled to a series of recursive, geometric functions. His examination of shell accretion in marine invertebrates is a direct parallel with pasta.
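Thompson-style accretionary growth can be captured by something as compact as the logarithmic spiral r = a*exp(b*theta). The short sketch below (with arbitrary parameters) shows the property that makes it shell-like: the growth ratio over equal angular intervals is constant, so each new increment is a scaled copy of the last.

import numpy as np

a, b = 1.0, 0.15
theta = np.linspace(0, 6*np.pi, 600)
r = a * np.exp(b * theta)                     # logarithmic spiral: self-similar accretion
x, y = r * np.cos(theta), r * np.sin(theta)   # coordinates of the "shell" outline
print(r[200] / r[0], r[400] / r[200])         # equal growth ratios over equal angular intervals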


Picture from http://www.darcythompson.org/about.html

Of course, the most famous example of using simple geometric functions to produce complex geometries is Benoit Mandelbrot's book "The Fractal Geometry of Nature". This work was extended in the direction of discrete dynamical simulation by Stephen Wolfram in his book "A New Kind of Science". What is intriguing about both of these books is that they assign something called intrinsic randomness the role of self-organizer. While the equation or ruleset guides the system, random processes produce the fine structure. Is pasta partially the product of intrinsic randomness? That sounds like an Ig Nobel prize-winning question.
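Wolfram's canonical example of intrinsic randomness is elementary cellular automaton Rule 30: a fixed, deterministic rule that nonetheless produces a disordered pattern from a single seeded cell. A minimal sketch (the width and number of steps are arbitrary):

# Elementary cellular automaton Rule 30: neighborhood -> next state.
RULE30 = {(1,1,1): 0, (1,1,0): 0, (1,0,1): 0, (1,0,0): 1,
          (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

width, steps = 64, 32
row = [0]*width
row[width//2] = 1                              # single seeded cell
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = [RULE30[(row[(i-1) % width], row[i], row[(i+1) % width])]
           for i in range(width)]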




Finally, a book called "On Growth, Form, and Computers" (2003) more explicitly extended the ideas of Thompson to evolutionary algorithms and other such simulations (e.g. artificial life). Such simple geometric functions can be used to render complex computer graphics. As in the case of biological development, these relatively simple equations provide a mechanism for self-organizing processes.



So, is making pasta a combination of self-organization and artificial selection? Probably not in the conventional way of thinking about these two things, especially by themselves. Yet by making comparisons with living complex systems, I think the answer could be "yes".

October 10, 2011

Clearly I need to tag these...

Ah, the foibles/joys of pattern recognition... Pictures from the ground at Oval Beach (Saugatuck), Lake Michigan. But apparently, there is a face in the sand.



Hopefully in the future Facebook can fare better than the illusion of Cydonia.

1976, low resolution.


UPDATE: April 9, 2014.

With the new pattern-recognition algorithm DeepFace [1], perhaps Facebook faces will be more distinguishable from merely face-like visual patterns from now on.



[1] Taigman, Y., Yang, M., Ranzato, M.A., and Wolf, L. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

October 3, 2011

The Curse of Orthogonality

This is an idea I developed after reading a paper [1] on detecting pluripotency in different cell populations. The theory was developed in terms of cell biology, but may also be applicable to a broader range of biological systems and even social systems.

One of the goals in analyzing biological and social systems is to gain a complete understanding of a given system or context. One assumption people make is that the more variables you have, the more complete your understanding will be. This is the idea behind the phrase "two heads are better than one". For example, combining different types of data (e.g. sensor output, images) or different indicators gives one multiple perspectives on a process. This is similar to several people blindly feeling different parts of an object and coming to a consensus as to its identity [2].

This model of consensus is popular because it is consistent with normative statistical models. In other words, the more types of measurement we have, the closer to an "average" description we will get. Furthermore, variables of different classes (e.g. variables that describe different things) should be separable (e.g. should not interact with one another). However, a situation may arise whereby multiple measurements of the same phenomenon yield contradictory results. While "two heads are better than one", we also intuitively know that there can be "too much of a good thing". By contradictory, I mean that these variables will strongly interact with one another. Not only do these inconsistent variables interact, they also exhibit non-normative behaviors with respect to their distributions. This is consistent with the idea of "deviant" effects which cannot be easily understood using a normative model [3, 4]. This is what I am calling the "curse of orthogonality", inspired by the "curse of dimensionality" identified by Bellman and others [5, 6] in complex systems.

Orthogonality is usually defined as "mutually exclusive, or at a right angle to". While it is not equivalent to statistical independence, there is an unexplored relationship between the two. This is particularly true of underspecified systems, a category for which most biological and social systems qualify. The "curse" refers to orthogonality of a slightly different definition: for any two variables that on their own have predictive power, the combined predictive power is subadditive because that predictive power is oriented in different directions. In cases where there are many such variables, adding variables to an analysis will dampen the increase in predictive power.
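A quick way to see the sub-additivity I have in mind is to simulate two indicators that both track the same latent process and compare their individual and combined explanatory power. This is a toy sketch with arbitrary noise levels, not an analysis of any real dataset:

import numpy as np

rng = np.random.default_rng(0)
n = 5000
shared = rng.normal(size=n)                    # latent factor both indicators track
A = shared + 0.5*rng.normal(size=n)            # indicator A
B = shared + 0.5*rng.normal(size=n)            # indicator B
y = shared + 0.5*rng.normal(size=n)            # the process being predicted

def r2(predictors, y):
    # Ordinary least squares R^2 for a set of predictors plus an intercept.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(r2([A], y), r2([B], y), r2([A, B], y))   # combined R^2 < sum of individual R^2 values

Because A and B carry largely overlapping information about the process, the combined R-squared falls well short of the sum of the two individual values.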

Non-additivity is a common attribute of epistatic systems (where multiple genes interact [7] during expression), drug synergies (e.g. where drugs interact [8] when administered together), and sensory reweighting (e.g. when multisensory cues are dynamically integrated in the brain [9] during behavior). However, there is no general principle that explains tendencies involving such nonlinear effects.

As an example, let us take two indicators of the same process, called A and B. Taken separately, A and B correlate well with the process in question. However, most processes in complex systems have many moving parts and undercurrents that interact. Therefore, the combination of A and B will not be additive. We can see this in the two figures (1 and 2) below.


Figure 1



Figure 2


In Figure 1, the existence or lack of co-occurrence between A and B can be classified according to their sensitivity and specificity. In an actual data analysis, this gives us a subset of true positives whose proportion varies with the actual data. In this example, true positives are cases where both A and B correctly predict a given state of the system under study. Seen as a series of pseudo-distributions (Figure 2), the overlap between A and B provides a region of true positives. What can be learned from this theoretical example? The first thing is that even though co-occurrence relations between A and B predict each category, one variable (variable B in Figure 2) might exhibit more commonly-expressed variance than the other (e.g. more support in the center of the distribution). By extension, the other variable (variable A in Figure 2) might exhibit longer tails (e.g. variance that contributes to rare events). It is predicted that the more heterogeneity there is between the distributions (e.g. different variables with different distributions), the less discriminant the result will be (e.g. a higher percentage of false positives).
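The heterogeneity argument can also be made concrete with a small simulation: below, indicator A has light (Gaussian) tails while indicator B has heavy (t-distributed) tails, and a positive is only called when the two co-occur. The thresholds and distributions are arbitrary choices for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 10000
state = rng.random(n) < 0.5                      # true state of the system
# Indicator A: light tails; indicator B: heavy tails (more "rare-event" variance).
A = rng.normal(loc=np.where(state, 1.0, 0.0), scale=1.0)
B = rng.standard_t(df=3, size=n) + np.where(state, 1.0, 0.0)

call_positive = (A > 0.5) & (B > 0.5)            # require co-occurrence of A and B
tp = np.mean(call_positive & state)              # fraction of true positives
fp = np.mean(call_positive & ~state)             # fraction of false positives
print("true positives:", tp, "false positives:", fp)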

In cases such as this, we can say that A and B are quasi-independently distributed. That is, even though their distributions are overlapping, observations of A and B's behavior in the same context can often mimic the behavior of two independent variables. This is because while A and B are part of the same process, they are not closely integrated in a functional sense. In other words, using a small number of variables to predict a highly complex system will often yield a sub-additive explanation of the observed variance.

Unlike the curse of dimensionality, the curse of orthogonality depends more on the effects of interactions than on the number of dimensions or variables. Therefore, overcoming the curse of orthogonality is tied to a better understanding of sub- and super-additive interactions and the effects of extreme events in complex systems. While this "curse" does not affect all complex systems, it is worth considering when dealing with highly complex systems in which different measurements disagree and multivariate analyses yield poor results.

References:
[1] Hough et al. (2009). A continuum of cell states spans pluripotency and lineage commitment in human embryonic stem cells. PLoS ONE, 4(11), e7708.

[2] Aitchison, J. and Schwikowski, B. (2003). Systems Biology and Elephants. Nature Cell Biology, 5, 285.

[3] Samoilov, M.S. and Arkin, A.P. (2006). Deviant effects in molecular reaction pathways. Nature Biotechnology, 24(10), 1235-1240.

[4] Evans, M., Hastings, N., and Peacock, B. (2000). Statistical Distributions. Wiley, New York.

[5] Bellman, R.E. (1957). Dynamic programming. Princeton University Press, Princeton, NJ.

[6] Donoho, D.L. (2000). High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality. Analysis, 1-33. American Mathematical Society.

[7] Cordell, H.J. (2002). Epistasis: what it means, what it doesn’t mean, and statistical methods to detect it in humans. Human Molecular Genetics, 11, 2463–2468.

[8] Tallarida, R.J. (2001). Drug Synergism: Its Detection and Applications. Journal of Pharmacology and Experimental Therapeutics, 298(3), 865–872.

[9] Carver, S., Kiemel, T., Jeka, J.J. (2006). Modeling the dynamics of sensory reweighting. Biological Cybernetics, 95, 123–134.

October 2, 2011

"Shocking" genomic mechanisms wanted

In the course of reviewing the literature on transposons, I stumbled upon a classic article [1] by Barbara McClintock based on her Nobel Prize lecture. Finding this article (McClintock did some of the first studies of transposable elements in Zea mays) got me thinking about the broader context of transposons and the architecture of the genome in general.

Even though McClintock's article is almost 30 years old, it features some fundamental ideas that the systems biology community would do well to reflect upon. For example, in the article she talks about genomic "shocks" of two classes: perturbations for which the genome has a pre-programmed response, such as heat shock and DNA repair, and novel perturbations for which there is no pre-programmed genomic response. Examples of the latter might include exposure to mutagenic agents or a large-scale injury.

While it is not mentioned in the article, this has been the basis for much of the work done in the area of mutational and organismal robustness. Yet current models of robustness [2] are still contingent upon specific mechanisms that confer the ability to adapt to or recover from one of these shocks. What McClintock's article suggests (at least to me) is that there is another side to the robustness coin. The alternate questions to be asked are why generalized genomic mechanisms evolved in the first place, why/how they are maintained in evolution, and how they differ from more inducible, contextual responses.

This idea of being prepared for shocks could provide explanatory power for our understanding of regenerative capacity across animal species. Regeneration is widespread in marine invertebrate species, including totipotency in response to bodily injury. Experiments and more informal observations among vertebrates have suggested that fishes and amphibians have a more extensive set of regeneration mechanisms than do mammals. But what accounts for these differences? A formal mechanism has yet to be proposed, but the idea of a pre-programmed response for regeneration evolving in the fish/amphibian ancestor, or perhaps in the common ancestor of invertebrates and vertebrates, is alluring.

This would not explain the loss of this response in mammalian genomes, but my guess would be that the mammalian common ancestor traded off this mechanism for something else. And while the idea of regenerative capacity as an evolutionary tradeoff [3] is quite speculative and perhaps controversial, it is still worth considering regeneration in the context of a given genome's shock-absorbing capacity.

References:
[1] McClintock, B. (1984). The Significance of Responses of the Genome to Challenge. Science, 226(4676), 792-801.

[2] Kitano, H. (2004). Biological Robustness. Nature Reviews Genetics, 5, 826-837.

[3] Weinstein, B.S. (2009). Evolutionary trade-offs: emergent constraints and their adaptive consequences. PhD Dissertation, University of Michigan.
