Remember way back, when I mentioned genetic algorithms in the course of criticizing Michael Egnor? I described, conceptually, a way of using the mechanisms of mutation and selection to discover the structure of DNA, given X-ray crystallography data.
Let a “gene” in the computer’s memory be the spatial locations of molecular units: sugars, phosphates, purines, pyrimidines — the small molecules which Franklin, Pauling et al. knew were the constituents of DNA. We create a “gene pool” of random variations, and then we iterate the genetic algorithm (GA), using as fitness function a comparison between a calculated X-ray diffraction pattern and the X-ray images taken experimentally.
Imagine tossing out a thousand random guesses about what DNA looks like. For each guess, we could calculate what the X-ray diffraction pattern would look like given that particular molecular structure. Most of the time, it won’t look anything like the X-ray pictures we take in the laboratory, but a few of them will by happy accident look a little more like the real thing. This slight preference becomes the starting point for selection. We let our ideas breed, giving favor to those which perform best. The irresistible logic of Darwin goes to work.
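The scheme above can be sketched in a few dozen lines. Everything here is a toy: the “molecule” is just a list of 2-D points, and `diffraction()` is a crude stand-in for a real diffraction calculation (which would involve a Fourier transform of the electron density). The names, population size, and mutation scheme are all my own illustrative choices, not anyone’s published method.

```python
import math
import random

random.seed(0)

N_ATOMS = 8       # molecular units in our toy "molecule"
POP_SIZE = 50
GENERATIONS = 200

def diffraction(points):
    """Toy stand-in for a diffraction pattern: sample the magnitude of a
    crude 1-D structure factor |F(q)| at a few scattering vectors q."""
    qs = [0.5 * k for k in range(1, 9)]
    pattern = []
    for q in qs:
        re = sum(math.cos(q * x) for x, y in points)
        im = sum(math.sin(q * x) for x, y in points)
        pattern.append(math.hypot(re, im))
    return pattern

# Pretend this is the experimentally measured pattern of the "true"
# structure (here, a helix-like arrangement we are trying to recover).
true_structure = [(math.cos(i), i * 0.5) for i in range(N_ATOMS)]
target = diffraction(true_structure)

def fitness(points):
    # Higher is better: negative squared error between the calculated
    # pattern and the "measured" one.
    calc = diffraction(points)
    return -sum((a - b) ** 2 for a, b in zip(calc, target))

def random_structure():
    return [(random.uniform(-2, 2), random.uniform(-2, 2))
            for _ in range(N_ATOMS)]

def mutate(points):
    # Nudge one unit's position a little.
    i = random.randrange(N_ATOMS)
    x, y = points[i]
    child = list(points)
    child[i] = (x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
    return child

pop = [random_structure() for _ in range(POP_SIZE)]
initial_best = max(fitness(p) for p in pop)

for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]          # selection
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP_SIZE - len(survivors))]

best = max(pop, key=fitness)
print("initial best fitness:", initial_best)
print("final best fitness:  ", fitness(best))
```

Because the best structure always survives into the next generation, the top fitness can only improve; the interesting question (as with real crystallographic search) is whether mutation alone can escape local optima, which is where crossover and smarter encodings earn their keep.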
(This is a bit of a perspective shift. Instead of thinking like sane people do about DNA carrying genes, we’re considering an abstract sort of gene which defines the shape of a hypothetical DNA structure.)
If we’d invented fast and cheap computers before we knew about DNA — say, in some parallel Sliders world or steampunk fantasy where computers happened five decades sooner — this might well be how scientists would have tried to solve DNA. It requires much less cleverness, and correspondingly more computer time. As I mentioned before, people have applied this method to figuring out molecule shapes, although to my knowledge nobody has tried to “re-solve” DNA this way.
An interesting point: if our “genes” are molecular positions and our “phenotypes” are X-ray diffraction images, then it looks like we’ve got a non-trivial “morphology” between the two. Some “development” has to take place, although in this case it happens “all at once.” It might be interesting to look at a GA in which structures are generated by a really non-trivial development process.
How about, say, the growth of neurons?
This is a 2005 talk by Terry Sejnowski of the Salk Institute, which he chose to title “Dendritic Darwinism.” Personally, I would have gone for “Dendritic Evo-Devo,” and not just because creationists have gotten me antsy about the word “Darwinism”: as Sejnowski says, the big step which his post-doc Klaus Stiefel made was to establish a “morphological” step in between genotype and phenotype. The method of choice was to build a neuron’s dendritic structure via a Lindenmayer system.
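To make the L-system idea concrete, here is a minimal Lindenmayer-system sketch. These are not Stiefel and Sejnowski’s actual rules; this is just the classic scheme where a string of symbols is rewritten in parallel each generation and brackets encode branching. Here “F” stands for a segment of dendrite and “[…]” for a side branch, and the single rule is a hypothetical one of my own.

```python
# Hypothetical rule: each dendrite segment sprouts one side branch
# per rewriting step. A GA could mutate rules like this one.
rules = {"F": "F[F]F"}

def grow(axiom, steps):
    """Rewrite every symbol in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

dendrite = grow("F", 3)
print(dendrite)
print("branch points:", dendrite.count("["))  # -> branch points: 13
```

The point is that a huge branching tree is compressed into a tiny rewriting rule, so the “genotype” a GA mutates is a handful of rule parameters rather than thousands of segment coordinates, and a genuine development step separates genotype from phenotype.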
The researchers’ dendritic GA was able to hit on structures which perform some pretty interesting tasks. For example, by combining a thick dendrite with lots of thin ones, the GA made neurons which could tell which of two inputs arrived first. They also evolved “coincidence detector” neurons, which have considerable biological significance. In the barn owl, for example, there is a brain part called the inferior colliculus, which the owl uses to process sound. We can identify places in the inferior colliculus where neurons act like AND gates: they have two inputs and only produce an output when both inputs fire simultaneously.
For the paper version, see K. M. Stiefel and T. J. Sejnowski, “Mapping Function Onto Neuronal Morphology” (2007) J Neurophysiol 98: 513–526.
Neurons have a wide range of dendritic morphologies the functions of which are largely unknown. We used an optimization procedure to find neuronal morphological structures for two computational tasks: first, neuronal morphologies were selected for linearly summing excitatory synaptic potentials (EPSPs); second, structures were selected that distinguished the temporal order of EPSPs. The solutions resembled the morphology of real neurons. In particular the neurons optimized for linear summation electrotonically separated their synapses, as found in avian nucleus laminaris neurons, and neurons optimized for spike-order detection had primary dendrites of significantly different diameter, as found in the basal and apical dendrites of cortical pyramidal neurons. This similarity makes an experimentally testable prediction of our theoretical approach, which is that pyramidal neurons can act as spike-order detectors for basal and apical inputs. The automated mapping between neuronal function and structure introduced here could allow a large catalog of computational functions to be built indexed by morphological structure.
(Video via RBH at Good Math, Bad Math.)
UPDATE: I should have said, video via Adam Ierymenko courtesy RBH.