Saturday, July 31, 2010
Caught a pic of the bee
My sunflowers don't follow the sun. I have been collecting bee data for the Great Sunflower Project this summer. The first four flowers produced by my plants have all faced East, so that they are backlit by the sun in the afternoon when I can do my data gathering. I've wanted to photograph the variety of small bee that is the most frequent visitor, but I couldn't get a good picture until today.
A flower opened a couple days ago on the West side of a plant, and I have been able to take usable photographs. I think this is a variety of mason bee, but I'm not sure yet. It is smaller (less than half the weight) than a honey bee, about the size of a sweat bee or even smaller, but not as small as the tiny all-green bees that show up on occasion.
Visible at the top of the image is some damage to the flower's seed head. Goldfinches have been eating the immature seeds. That is OK. I have seldom had a chance to observe goldfinches before. Another side benefit to growing sunflowers is that they draw the occasional hummingbird. Even though this variety, Lemon Queen, has no nectar, it is big, bright and bold, so hummingbirds will come, buzz the flower for no more than a second, then zoom away.
I have observed mostly this kind of green-and-striped bee and bumble bees at my sunflowers, plus a few of the tiny green ones, and just two (so far) very small bees that are colored like a honey bee. Not a single honey bee as yet. In fact, I've looked for bees in lots of places this Spring and Summer, and have seen no more than three honey bees all season.
Friday, July 30, 2010
More galloping
I posted just two days ago on error propagation in the Logistic Equation
X_(n+1) = r · X_n · (1 - X_n)

I used an r of 3.625, which is well within the chaotic region and has an exact representation in floating-point variables, and a starting X of 0.5. The equation, for any value of r less than 4.0 (and a starting X between 0 and 1), produces a series of numbers between 0 and 1. The factor X(1-X) makes this recursion well suited to minimizing error propagation, because the error in X is balanced by an equal and opposite error in (1-X).
I crafted a rational-number method for use in Excel that has double the precision of raw Excel variables. I used this to prepare a set of "standard" results from the Logistic recursion. I compared those results with ordinary calculations, and with reduced-precision calculations at four ROUND settings, of 15, 12, 9 and 6 digits. Though Excel displays up to 15 digits of a number, which implies about 50 bits of precision, internally, it does calculations to a higher precision, providing "guard digits" and making for fewer infelicities like 1/(1/3) producing 3.00000000000001.
As I expected, the column of raw Excel calculations subtracted from the "standard" showed that the internal representation is actually about 19 digits, or 64 bits of precision. This squares with a report or two I have read that claim Excel uses IEEE 80-bit floating point values, though I don't find any Microsoft publications that confirm this.
This chart shows the accumulating errors of each data set.
I truncated the lines for 6, 9, and 12 digit rounding at the point just beyond where the errors "saturate", or become equivalent in size to the data. Two issues are instantly visible. Firstly, 15-digit rounding has nearly no effect on total rounding error accumulation. Secondly, the 9-digit rounded sequence has fortuitously produced a highly accurate value at about step 75, which delayed the onset of ruinous error accumulation by about 20 steps.
I have observed in the past, for many recursive calculations, that rounding errors tend to double in absolute magnitude with each step. Only a well-crafted algorithm plays errors against one another to slow down this phenomenon. Such is the case with the Logistic recursion. The next chart shows the absolute values of the errors for each sequence, in semi-log coordinates.
Here we can see more clearly what is happening. Each data set has a characteristic trajectory. At step 75, the 9-digit-rounded sequence has been "reset" to a level characteristic of about twenty steps earlier. And though the Raw and 15-digit-rounded sequences start a little differently, they follow nearly identical paths after the tenth iteration.
I did a regression on the central 80% of each of these sequences (except for 9-digits, which I truncated at step 75), which confirmed that the errors take about ten steps to increase by a factor of ten; that works out to one doubling about every 3.3 steps. However, the details show an interesting phenomenon, shown in the third chart:
These are about the first thirty errors (steps 3-32; steps 1 and 2 were error-free) for the 9-digit and 12-digit sequences. The errors grow rapidly for four or five steps, then drop by an order of magnitude, then repeat. I looked at the numbers. In all the sequences, whenever the resulting X is close to 0.5, the next step produces a much more accurate value. The steep slope of these mini-sawteeth corresponds to a doubling of the error at every step. This periodic resetting of the error propagation accounts for the ability of these calculations to proceed for so long (more than 100 steps for 15-digit rounding and for raw, unrounded values) with so little error.
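One way to see why values near 0.5 reset the error (a quick sketch of my own, in Python, not part of the spreadsheet work): to first order, an error e in X becomes r(1 - 2X)e at the next step, because the derivative of rX(1 - X) with respect to X is r(1 - 2X). That factor is largest when X is near 0 or 1, and nearly zero when X is near 0.5.

# Sketch only: first-order error amplification factor for one logistic step.
# d/dX [r*X*(1-X)] = r*(1 - 2X); an error e in X becomes about r*(1-2X)*e.
r = 3.625
for x in (0.10, 0.30, 0.48, 0.50, 0.52, 0.70, 0.90):
    print(f"X = {x:.2f}   next-step error factor = {r * (1 - 2 * x):+.3f}")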
I expect that my rational algorithm has a precision of 30-40 digits, depending on the way Excel is handling the guard digits. I'll use these data to predict how far it can take the sequence without being swamped by error. The regressions I performed can be extrapolated to predict the point at which |error| equals 1. For the 6, 9, 12, and 15-digit algorithms, they are at step 63, 90, 116, and 156. These indicate that each increase by one digit in the accuracy of the calculations gains nine or ten steps of meaningful results. That means that a 30-digit-accurate calculation sequence can go about 300 steps in the Logistic recursion, at which point the values are no longer meaningful.
This points up the value of doing analytical math before starting any calculation that relies on a recursion, such as a numerical integration. It also underscores the mantra I heard from teachers in my advanced math courses: use the highest-order method, with the longest steps, that you can. If, as I have frequently found, many algorithms experience a doubling in error with each step, you really have to plan for a method that needs no more than about 50 steps for an initial precision of 50 bits (15 digits). The Mandelbrot set is notorious for such ruinous error propagation. Many of the pretty pictures you see are highly magnified noise that has converged on an aesthetically pleasant configuration. More to the point, the wisest programmers have used better algorithms and even "bignum" calculations (my rational method is a simple bignum algorithm).
Some nitty gritty for those who are interested. My rational algorithm worked as follows. I used the ratio of two Excel variables to represent r, X and (1-X). The numerator and denominator (n and d) for r were 1.92578125 and 0.53125. The n value was chosen to be close to the square root of 3.625, but with both n and d exactly representable by Excel floating-point values. The starting n and d for X were 1 and 2, representing 0.5.
At every step (1-X) was calculated by subtracting the n for X from its d and using the same d. All three n's and all three d's were multiplied to get the n and d for the next X. Though X stays between 0 and 1, its n and d can grow very large or very small, in synchrony. Whenever the exponents for the n and d values approached 100 or -100, I normalized the current X (multiplying its n and d by an appropriate value) and continued. That means that running this method needed manual intervention about every eight to ten steps. However, doing this manually gave me a number of good ideas about programming such a method in C or Perl should I want to investigate further.
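For anyone who would rather automate this than babysit a spreadsheet, here is a rough Python sketch of the same numerator-and-denominator idea (my illustration, not the workbook I actually used). Instead of manual normalization, it rounds the two integers back to a fixed number of digits whenever they grow too long.

# Sketch: limited-precision rational iteration of X' = r*X*(1-X),
# with r = 29/8 = 3.625 exactly and X0 = 1/2. The rounding step below
# plays the role of the manual normalization described above.
DIGITS = 40                      # working precision, in decimal digits

def normalize(n, d, digits=DIGITS):
    # Round n and d by the same power of ten so neither exceeds `digits` digits.
    excess = max(len(str(n)), len(str(d))) - digits
    if excess > 0:
        scale = 10 ** excess
        n, d = (n + scale // 2) // scale, (d + scale // 2) // scale
    return n, d

rn, rd = 29, 8                   # r = 3.625
xn, xd = 1, 2                    # X = 0.5

for step in range(1, 101):
    # (1 - X) has numerator xd - xn over the same denominator xd.
    xn, xd = rn * xn * (xd - xn), rd * xd * xd
    xn, xd = normalize(xn, xd)
    if step % 10 == 0:
        print(f"step {step:3d}: X = {xn / xd:.15f}")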
Thursday, July 29, 2010
A coffee table math book
Tic Tac Toe is a solved game. That means the number of possible games is small enough that a complete analysis has been performed, and the optimal strategy has been determined. If both players are experienced and make no mistakes, every game will end in a draw.
In the process of analyzing the game, the following graphic was produced, which summarizes every possible game, even the stupid ones, ignoring symmetry. There are about a quarter million of them.
This illustration is found on page 39 of The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics by Clifford A. Pickover. Before discussing the book further, let's look closer at the illustration. Click on the images for a larger version. The closeup below shows all games that begin with X in the upper-left corner.
This allows us to see a little further into the matrix, but there is quite a lot of detail here. I scanned at full resolution for the following little clip, which shows another level or two, but now we've reached the limitations of the printed page.
Although there are about a quarter million possible games, this can be reduced a lot by symmetry. There are about 27,000 actual games possible, but only just over 1,400 if players at least have the wit to block whenever possible. See the Mathematical Recreations discussion for details.
Back to the book. Dr. Pickover has traced mathematical skills back to the ability of ants to count: they find their way back to the nest by counting their steps. And we thought it took a mammalian brain to count! But we know little of animal abilities, so three essays suffice to dispose of the nonhuman animals.
I was a little taken aback by the fourth essay, on knots. Tying a knot to keep a bead from slipping hardly seems like a mathematical activity to me. I think the use of knots as a data-storage medium by the Incas some 5,000 years ago better fits the bill.
Though this is a fairly large book, fully half is full-page photographs or graphics; there are 250 short essays and 250 images. A lot of care went into selecting them. In a few cases, such as a pixieish image of Martin Gardner with his books, the mathematician in question is pictured, particularly for contributions that don't lend themselves to meaningful visualization. Or, in the case of Gardner, which of his thousands of Scientific American columns do we pick an image from?
It would be nice to have a smooth brass loopy ring like this to dip into soap solution and produce this minimal surface. This one (page 193) was computer-generated, and you can be sure the computer took lots longer to fine-tune the surface than a soap bubble film would. This leads me to think of the pronouncements of some (such as Max Tegmark, discussed on page 516), that we live in a universe that is mathematics. The universe and the things in it just do what they do. Mathematically modeling what they do is frequently a memory-hogging, time-consuming enterprise.
Like this bubble film: produce any un-knotted closed loop you like, and dip it in soap solution (add a little glycerine to strengthen the film). Presto! The surface that results will be a minimal surface, slightly modified by the Earth's gravitational attraction (So, to get a true minimal surface, do the experiment on the ISS, though "dipping" there has complications not found down here). Consider the "shortest path" problem. It is computationally difficult to find the exactly shortest, or quickest, path from here to, say, the Toronto Zoo. But if you tie a bunch of strings together with the exactly scaled knot-to-knot distances (or time-durations), for all the likely roads between here and there, you can solve the conundrum by simply picking up the knot that represents where you are, and the shortest route will be the one that goes straight to the destination without sagging. Then just write down all the routes. By the way, Google Maps and your GPS use a near-optimal route finder that is a lot quicker than one that could find the exactly optimal route.
This looks like another minimal surface, but in fact it represents an inside-out sphere (Page 255). Ordinary spheres have a constant, positive curvature, by definition. This item has a constant negative curvature. The kicker is that it has to go to infinity in two directions, but does so in a way that it has a finite volume.
A lot of math is like that, dealing with infinite things that aren't infinite when looked at another way. This may be why many people shy away from it. As a working mathematician, I've had to do some problems that work out like this: Add up everything Outside some curve of interest, then subtract the Universe, to get the sum of what is Inside. The ability of certain mathematical techniques to either add or subtract the Universe is both fascinating and appalling!
Another thing that puts a lot of people off of math is that it takes something like five years of advanced education to understand why the expression e^(iπ) + 1 = 0 is in any way significant. Just consider this: π, pi, is the ratio of a circle's circumference to its diameter, while e, Euler's number, is the base of natural logarithms, which express things like our eye's adaptability: No matter how bright the illumination is, if it is cut in half, our response is the same. In a room dimly lit by a 10-candlepower light, reducing its brightness to 5 candlepower is quite noticeable. Go outside, where the illumination may be 10,000 candlepower. Shade half of it so there is now 5,000 candlepower, and the perceived reduction in light feels the same. This is a logarithmic response. So what do circles have to do with logarithms? Nobody really knows, but I do understand the math that makes that equation work.
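A two-line check of that logarithmic response (my own example, not the book's): halving the light is the same-sized logarithmic step no matter the starting level.

# Sketch: equal ratios give equal logarithmic steps, whatever the absolute level.
import math
print(math.log(10) - math.log(5))          # 0.693... = ln 2
print(math.log(10_000) - math.log(5_000))  # 0.693... exactly the same step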
The book, as any good history, is arranged in time order. Though the history of mathematics goes back at least 5,000 years, most of the discoveries have come in more recent times. As in any discipline, as new tools are discovered, it is possible to discover even more. The halfway essay, #125 (page 266), discusses a discovery from 1875, a mere 135 years ago: the Reuleaux Triangle, a curved, three-pointed shape that can be rotated inside a square such that it goes exactly into each corner in turn. It is the basis of drill bits that can cut a square hole. More recently, its ability to rotate in a potato-shaped cavity is the basis of the Wankel engine (which the author does not discuss; I've owned a Mazda car with such an engine, a lovely machine).
Just for fun, I did another half-jump, to the two essays that straddle 187.5. They are dated 1946 and 1947. So one quarter of the mathematical discoveries discussed date from after World War II and the development of electronic computing machinery. The computer is such a great tool for mathematicians, allowing us to see things we could only think about before. Not all of us are equally good at mental visualization. But the work of John Von Neumann, discussed in the 1946 essay, and the Gray Code (it is used for making communications less prone to error) of the 1947 essay are two of many breakthroughs that led to the computer-intensive world we have today. The lovely illustrations above are just two examples of things we can see that before we had to imagine.
Not only is the computer useful to show us things it is hard to draw, it excels in running simple, recursive processes that can produce quite complex patterns. Patterns like the ones on this Cone snail occur in cellular automata, which have been studied since the 1940s, before there were computers to help out. If you know the algorithm to use, you can draw patterns like this on graph paper, just as Conway's "game of Life" can be done by hand on graph paper (yes, I'm just obsessive enough that, before the PC was affordable, I used to play Life on graph paper).
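Here is a small sketch (mine, not from the book) of the sort of one-dimensional cellular automaton often compared to cone-snail shell patterns; the update rule is the well-known Rule 30, in which each new cell depends only on the three cells above it.

# Sketch: a one-dimensional, two-state cellular automaton (Wolfram's Rule 30).
WIDTH, STEPS = 61, 30
row = [0] * WIDTH
row[WIDTH // 2] = 1                              # start with a single live cell

RULE = 30                                        # encodes the 8 neighborhood outcomes
for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]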
Today, the computer is used throughout mathematics, and there is a tendency to use it as a crutch. We need to remember that, to a mathematician, the "real" numbers used by a computer are actually rational numbers of a particular type. The computer can represent, in the "double precision" conventions in use now, either 2^64 (about 1.8×10^19) or 2^96 (about 7.9×10^28) values. That is "all" that can be expressed. That is a lot, true, but between any two of them there are infinitely many values that cannot be expressed. That means a calculation such as the tangent of ten million radians cannot be calculated with as much precision as we might like. You have to divide the ten million by a value of π that is accurate to either 15 or 24 decimal digits, and subtract the integer part. That integer is a 7-digit value, leaving only 8 or 17 digits after the decimal with which to do further calculations. Things like this lead to our continual need to improve analytical techniques that do not depend on digital methods. Most of the last quarter of the book presents problems that were solved primarily using digital methods.
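A small sketch (my own, not from the book) of the argument-reduction problem just described: reducing 10,000,000 radians modulo π with a 16-digit value of π throws away about seven leading digits of accuracy.

# Sketch: naive argument reduction with the 16-digit math.pi, compared against a
# reduction done with a 50-digit value of pi carried in exact rational arithmetic.
import math
from fractions import Fraction

x = 10_000_000.0
n = round(x / math.pi)                        # about 3,183,099 half-turns: a 7-digit integer

r_naive = x - n * math.pi                     # reduction using ~16-digit pi
PI_50 = Fraction("3.14159265358979323846264338327950288419716939937511")
r_better = float(Fraction(x) - n * PI_50)     # reduction using a 50-digit pi

print(f"naive reduction : {r_naive:.16f}")
print(f"better reduction: {r_better:.16f}")
print(f"difference      : {abs(r_naive - r_better):.1e}")  # on the order of 1e-9 to 1e-10
print(f"tan of each     : {math.tan(r_naive):.9f}   {math.tan(r_better):.9f}")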
So it is good to see John Horton Conway, as late as 1986, coming up with the "audioactive sequence", which makes sense only when spoken aloud. It begins 1, 11, 21, 1211, 111221; it is pronounced "one", "one one", "two ones", "one two, one one", and so forth. It has a sister sequence in which you don't count the digits in runs, but simply tally them up: 1, 11, 21, 1211, 3112, 211213, 312213; and a 4 has to show up before the items get longer than six digits. While one can do a lot of things with this sequence using a computer, it is quite a bit of fun to fool with "by ear". You get other sequences by starting with something other than 1. Try it!
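The first of those sequences is easy to play with on a computer as well; here is a minimal look-and-say step in Python (my sketch, not Conway's or the book's).

# Sketch: one look-and-say step; '1211' is read "one 1, one 2, two 1s" -> '111221'.
from itertools import groupby

def look_and_say(term):
    return "".join(f"{len(list(group))}{digit}" for digit, group in groupby(term))

seq = ["1"]
for _ in range(6):
    seq.append(look_and_say(seq[-1]))
print(seq)   # ['1', '11', '21', '1211', '111221', '312211', '13112221']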
There's lots of room for mathematics to grow. A book like this brings a variety of the familiar and the obscure to a more level playing field, and just may convince a few folks that they aren't so bad at math after all.
Wednesday, July 28, 2010
Galloping errors
I had occasion to consider the Logistic Equation today, recalling an analysis I began to do some years ago. At the time I had a DEC VAX 7600 available to me, and its FORTRAN language supported quad precision calculations. I am presently confined to ordinary "double precision", the 53-bit-plus-exponent reals used in Excel and most implementations of C or Perl. So I took a new approach.
I was interested in error propagation. Fifty-three bits of precision seems like a lot, and it approximates 15 to 16 decimal digits. But equations like the Logistic Equation can be pathological because you are subtracting quantities in the range [0,1] from 1.0, over and over. The equation has the form:
X_(n+1) = r · X_n · (1 - X_n)

As long as you choose a starting X between 0 and 1 and an r between 0 and 4, the equation won't blow up, because X(1-X) will be of the order of 0.25 or less.
This equation is famous for displaying chaotic behavior when r exceeds 3.57. For smaller values, it has various "attractors" that either settle down to an asymptote, or jump between two, four, or more numbers. But in the chaotic region, it never settles down (See this Wikipedia article for a full explanation).
When I began this analysis, I was wondering to what extent the chaotic behavior is a function of rounding error. As I wrote above, fifteen digits of precision is a lot. But in a recursive process, rounding errors accumulate. To see how quickly they might show up, and how quickly they would dominate the process, I started by calculating 100 recursions using r = 3.625 and X0 = 0.5, using Excel 2010. Then I did the same thing using the ROUND function, first rounding every calculation to seven digits, then rounding to twelve digits. An example function, for those who want to check, is
=$B$6*ROUND(E12,12)*ROUND(1-E12,12)

B6 is where r was, and E12 was the prior X. I subtracted the results from the "full precision" column to produce this chart:
I had originally planned to increase the table size as needed, but found that 100 rows was plenty. The 7-digit-precision rounding errors become 1% of the data values at the 55th iteration, and the 12-digit-precision errors get that large at the 98th iteration. By the 66th iteration, rounding error has taken over the 7-digit calculation, and there is little doubt that the 12-digit calculation will be essentially taken over at about the 110th iteration.
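For anyone who wants to reproduce the experiment outside Excel, here is a rough Python equivalent (my sketch, not the original spreadsheet); round(x, k) stands in for Excel's ROUND().

# Sketch: full-double logistic iteration versus the same iteration with every
# X rounded to 7 or 12 decimal digits, mimicking the ROUND() formulas above.
r, x_full, x_7, x_12 = 3.625, 0.5, 0.5, 0.5

for step in range(1, 101):
    x_full = r * x_full * (1 - x_full)
    x_7 = r * round(x_7, 7) * round(1 - x_7, 7)
    x_12 = r * round(x_12, 12) * round(1 - x_12, 12)
    if step % 10 == 0:
        print(f"step {step:3d}: 7-digit error {abs(x_7 - x_full):.2e}, "
              f"12-digit error {abs(x_12 - x_full):.2e}")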
Can there be any doubt that the 15-digit calculation has little chance of remaining reasonably error-free as long as 150 iterations? Yet I frequently see graphs showing the logistic equation's results for thousands of iterations. The majority of the results on such charts are clearly bogus. We need better ways of handling such calculations. I chose 0.5 and 3.625 carefully, since both are exactly representable in binary real variables. Yet the equations were swamped by rounding rather quickly. I suspect even a 16x precision (~480 bits) calculation using traditional styles of "real" arithmetic will not survive to the 1000th iteration.
I am thinking about ways to resolve the dilemma, but for the moment, I can do little more than present it.
Tuesday, July 27, 2010
My silly time
I just watched a rerun of the June 1, 2010 episode of Wipeout on ABC, one I had missed. Wipeout is not quite my favorite show, but it is close. The show I most hate to miss is America's Funniest Videos. Besides these two, the only other show I seldom miss is Nature on PBS.
I've never been much of a TV watcher. I am more of a reader, as this weblog demonstrates. But I began watching AFV almost ten years ago as laugh therapy while undergoing chemotherapy. I got hooked, and when Wipeout began a couple years ago, it provided a mid-week "fix" for me. My work is so serious that I really relish some silly time.
Monday, July 26, 2010
Gotta love this molecule
I am very susceptible to athlete's foot. For years I used Desenex®, with the active ingredient Miconazole, to keep it at bay and to (barely) drive it out when I got an infestation. Some months back, I tried an ointment, Neosporin AF®, which also contains Miconazole. Equally slow, but also effective.
Then, we picked up some Lamisil AF® spray, which contains Tolnaftate (shown). I used it for my most recent infestation, beginning a little less than two weeks ago. Boy, is it fast! It took 24 hours for the raw skin to de-inflame, and about a week to clear the infestation up completely. That compares with at least four weeks of diligent use needed for Miconazole to work. Great stuff.
Sunday, July 25, 2010
The farthest possible object
There was some excitement not long ago, among cosmologists, that a quasar with a redshift parameter Z=10 had been observed. However, this is not yet verified, so the highest redshifts currently accepted are in the range around Z=7.
Z is the wavelength ratio: 1+Z = λ_obs/λ_emit. This is related to relativistic velocity thus: (1+Z)^2 = (1+v/c)/(1-v/c). By a little algebra, we find: v/c = [(1+Z)^2 - 1]/[(1+Z)^2 + 1]. Now, for an object with Z=10, v/c works out to 120/122, or approximately v = 0.984c. That's getting pretty close to the observable limit.
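A quick check of that arithmetic (my own snippet, not part of the original post):

# Sketch: recession velocity as a fraction of c for a given redshift Z.
def beta(z):
    s = (1 + z) ** 2
    return (s - 1) / (s + 1)

for z in (7, 10):
    print(f"Z = {z:2d}: v/c = {beta(z):.4f}")   # Z=7 -> 0.9692, Z=10 -> 0.9836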
The point I was thinking of, however, is that the actual limit of observation is not much beyond the point where Z=10. This is my reasoning:
- The universe is filled with dilute hydrogen. It forms 99% of the interstellar medium, at densities between 0.1 and 1 atom/cc.
- Hydrogen is ionized by radiation with an energy of 13.6eV or greater. That corresponds to a wavelength of 91.2nm (400nm is the bluest light the eye can see).
- Hydrogen thus absorbs all photons with wavelengths shorter than this. It also absorbs selected wavelengths of the Lyman series starting near 122nm. Spectral features with shorter wavelengths will not be observed at any distance greater than a few light years.
- Of the 1% of the interstellar medium that is not hydrogen, much is water, and much is also carbon dioxide. Water, in particular, has a series of broad absorption bands in the infrared beginning about 800nm, and absorbs everything beginning at 11,000nm (11 microns), until the very far infrared with wavelengths of a few mm.
- Over millions of light years, there is enough water to absorb most radiation longer than 800nm, and carbon dioxide will "get" much of what water misses.
- There happens to be a gap in water's absorption bands in the 900-1000nm range.
- For Z=10, you need the wavelength ratio to be 11. 91.2nm times 11 is 1003.
- While it is thus possible to observe light that began as Lyman series emissions, that has been stretched into the 1000nm range, this is the Redshift limit, for practical purposes.
Saturday, July 24, 2010
A lyric of less-peopled places
People have two opposed yearnings, to be comfortable, and to be one with nature. The title of Susan Hand Shetterly's book indicates she's trying to have the best of both worlds: Settled in the Wild: Notes from the Edge of Town. Nearly forty years ago, she and her husband moved to a cabin in Maine, with the stated desire to "get back to the land." She notes being amused by the word back; they hadn't been there before. And anyway, if you're "settled", is it "wild"?
She writes little of the struggles and privations of "the simple life". Indeed, as she and her husband both had "day jobs" that they could pursue from anywhere, they were not wholly dependent on their (slender) farming abilities. Ah, the age of telecommunications. Our author can write for Maine Times or Audubon Magazine while living anywhere at all. So she writes primarily of relationships, with both animals and people.
Two essays reveal her evolving relationships and accommodations with difficult neighbors. I don't mean bad neighbors, just good-hearted people with widely differing points of view. One of them, Ray, is a trapper, and the author's dog was once caught in one of his traps. Though this got them started on rough footing, much of that essay is about Ray helping her clear her woodlot. She doesn't mention whether he worked for pay or was being neighborly. I suppose it doesn't matter. He and she were able to speak frankly of their opposed views on wetlands preservation, and of a growing accommodation that resulted.
For some years she was active in bird rescue, and she writes of raising this robin or that raven to self-sufficiency. She writes also of answering a call about a young bird "mourning" its dead mother. Upon arriving on the scene, she is able to instruct the onlookers that this is a Cooper's hawk, which is eating a larger bird that probably died from hitting a car.
There are hints of difficult times. The stresses of the lifestyle ended the marriage, and she writes a little about the extra planning one must do to cope with a Maine winter. But she primarily writes of enjoyment. There is much to see, and she wants to see it all. She writes of putting her children to bed when they were young, then throwing on a shawl to step outside to see the stars. There are few places left that you can do that and see a sky worth standing in the cold wind.
The reading is easy, and I finished the book all too soon. I am re-reading portions. Books like this make me thankful for the gift of written language. I could not long endure "going back to the land." But I can enjoy the experiences of one who has and could write such lyrical essays about it.
Thursday, July 22, 2010
A book to die for
What can I say, the book is about death. It is about attitudes toward it, particularly as expressed by philosophers and theologians, but not just Kant, Heidegger, Tillich and the rest. Your definition of philosopher must also include Woody Allen and Groucho Marx. You get them all, and more, in Heidegger and a Hippo Walk Through Those Pearly Gates: Using Philosophy (and Jokes!) to Explore Life, Death, the Afterlife, and Everything in Between by Thomas Cathcart and Daniel Klein.
The book is in seven sections, though four of the sections are a chapter apiece. The authors really do present succinct, witty accounts of many philosophers' attitudes toward death and any afterlife they may have believed in. They also tickle our funnybone with jokes, stories and asides, and explore the attitudes of the rest of us. Can anyone react with more indifference to death than Lena?
Ole the fisherman had died, and his wife Lena went to the newspaper office to post an obituary. She told the clerk, "Just put, 'Ole died.'" The clerk expressed his condolences, but asked, "Is that all? You were married fifty years. There are children and grandchildren. Isn't there more you want to say? If cost is an issue, the first five words are free." Lena thought a moment, and said, "OK, put, 'Ole died. Boat for sale.'"

PS: Any jokes I quote are paraphrased. I use a briefer delivery than the authors.
Would we live differently if the only way any of us could die is by accident? Would we get a bunker mentality? Or would we live like teenagers, who can't really believe even an accident can kill them? A movie like The Bucket List would garner no viewers, and probably couldn't even be made. Who would write the screenplay?
The authors, who both admit to being "a bit" beyond the Biblical threescore and ten, had plenty of background to scour as they gathered the best (put that in big quotes in a few cases) of past writings. The Heidegger and Hippo joke you'll have to look up for yourself; it comes near the end of the book. It turns on Martin Heidegger's inability to write a sentence anyone else can decipher. He is popular with a certain set of folks who can make free with almost any interpretation, because nobody knows what he meant anyway.
But Marty H. comes in the middle. Exploring the soul, for example, they start with Plato and Socrates, cruise through Descartes, and land squarely in the camp of Otto Rank, rare among modern philosophers in thinking that the soul really is more than some "mind" program running on the brain's hardware (or wetware). Which leads to a survey of Heaven, or of different concepts about it. Two genres of Heaven emerge: the leafy glade (Eden) and the cloudy castles, sometimes gated by "pearly gates". (Another PS: Look at your book of Revelation again: The holy city with pearls for gates is clearly stated as coming down from heaven to reside on Earth. I guess before that happens, Revelation's author would side with the castle in the sky).
Face it, though, we're mostly just scared of death, and do our best not to think about it. Christians, such as I, might say, "Death doesn't scare me, but I am afraid of dying." Considering that dying is usually painful, it makes sense. I'd like to find a way into resurrection life that doesn't involve a trip through the death-and-resurrection process. This aspect didn't make its way into the book, but what did was Woody Allen's comment, "I don't want to achieve immortality through my work. I want to achieve immortality through not dying."
But what if we (or our doctors and scientists) achieve genuine physical immortality? If you think there is a population problem now, what about then?! This is the subject of the next-to-last chapter of the book (The subject of the last, and thirteenth, is "The End", appropriately enough). The authors have a preface to the chapter, called "Stop the Presses!". It refers to the requirement to reduce the birth rate to match the death rate. This has been the subject of many pieces of fiction: nobody is allowed to have a child until someone dies to "make room", and all kinds of political infighting takes place to determine who has the opportunity to have that child.
But if we were really immortal, barring accident, would we care that much? Aren't all our children "hostages to fortune," our hope to send something of ourselves into a future we won't, personally, inhabit? I am really into genealogy, and have tracked down about 1,000 ancestors. I have only one living ancestor, my father. He has six grandchildren. It seems likely he'll have living descendants at least through the 21st Century. What hopes and dreams are deposited in the living by their ancestors! Suppose all those 1,000 were living, then what? Some would be 600-700 years old. Now that would sure screw up the Social Security system!
The doctor said, "Which do you want first, the good news of the bad news?"
The patient said, "The good news."
"The tests indicate you have 24 hours to live."
"That's the good news??? What's the bad news?"
"I forgot to call you yesterday."
Wednesday, July 21, 2010
Bad excuse
I really oughta blog daily. I took on a time-consuming pastime: breaking in a new computer, beginning with this post on the 14th. I am a finicky sort, so I spent lots of time getting the right pictures for my wallpaper (Windows 7 "Personalize" lets you have multiple desktop pictures), and even more pictures for my "personal photos" screensaver. There went yesterday.
I did one post a few days ago using Internet Explorer 8, but I don't like some of the things it does in Compose mode, so I have just downloaded Google Chrome, and I am trying that out. I am used to Mozilla Firefox, and will probably install that also.
Meantime, I like Windows 7. My son had Windows Vista on one of his computers until recently, when he upgraded to Windows 7. I have a laptop running Windows XP, which I will probably soon have to upgrade to Windows 7. That is a bigger task, because going from XP to 7 requires reformatting the hard drive, so I have to prepare to reload the full backup. Even if it didn't need reloading, I'd do a full backup before changing operating systems.
I have also installed MS Office 2010, and am getting used to using that. I had Office 2002 on the old machine, and Office 2003 at work. I like them so far, but have just barely got my toes wet.
The dust is dying down, so maybe I can get back to a regular posting schedule now. Stay tuned.
Monday, July 19, 2010
Make only photons you are going to use
I had a memory freeze yesterday while using a wind-up flashlight I have that uses LED's. I also have a hand-squeeze generator flashlight that uses a tiny incandescent bulb, but it has to be squeezed constantly to make any light. The LED one is wound up for a half minute or a minute, then it works for about ten minutes.
I am glad for at least one kind of technological progress that makes better use of energy. I tracked down the luminous efficacy Wikipedia article while searching "lumens per watt", which contained the figures I needed to see historical progress, and speculate on the future.
First, a Lumen is a measure of the effective brightness of a light source. Its definition includes the spectral sensitivity of the human eye, which is at a maximum at a wavelength of 555nm, a yellowish-green. An ideal source, one that produced only 555nm photons, and made them with 100% efficiency, would have a luminous efficacy of 683 lumens per watt. By contrast, that now-obsolete 100 watt incandescent bulb that produces 1,400 lumens is producing 14 lumens per watt (lm/w), a total efficiency of just over 2%. Older office fluorescent tubes are 2.5 times as efficient, which is why the "standard" fluorescent tube has been the 40 watt size. Newer fluorescent tubes and compact fluorescent lamps (CFL's) are 5-6 times as efficient as the 100W bulb.
But I prefer a different standard of efficiency. The most efficient lamp available so far is the high-pressure sodium arc lamp, at 150 lm/w, or 22% total efficiency. But its strong yellow narrow-band spectrum produces a very ugly look to a scene, which is why they are only used for street lighting. The eye prefers broad-spectrum light, preferably full-spectrum white, that fills the 400-700nm visibility window. The most efficient possible "white", which contains only photons in the 400-700nm range, would yield 251 lumens per watt. That is 37% of the efficiency of a 555nm monochromatic source, but would be a lot easier on the eyes. Let us treat this source as the 100% standard in the discussion that follows.
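The percentages that follow are simple enough to check in a few lines (my own snippet); here, efficiency means luminous efficacy divided by that 251 lm/W ideal-white figure.

# Sketch: efficacy figures quoted in the text, as a percent of the 251 lm/W standard.
IDEAL_WHITE = 251.0                      # lm/W for an equal-energy 400-700nm source

for name, lm_per_w in [("100W incandescent", 14),
                       ("halogen", 19),
                       ("high-pressure sodium", 150)]:
    print(f"{name:22s} {lm_per_w:3d} lm/W -> {100 * lm_per_w / IDEAL_WHITE:4.1f}% of ideal white")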
By this standard, the old 100W incandescent bulb is 5.6% efficient. Although about 7% of the energy goes into visible photons, most of those are red, orange and some yellow, with very little being green and blue, so the overall efficiency is less than if it somehow produced an "equal energy white" spectrum. Incandescent technology got a small boost from the development of Halogen bulbs, which are filled not with vacuum (hmmm, kind of an oxymoron, that) but with argon and a little iodine. This allows the lamp to have a useful life while burning a little hotter, and the bulbs have 19 lm/w, an efficiency of 7.6%. That is almost 1.4 times as efficient as a bare 100W bulb. Now for some history.
The first light-producing technology was the campfire, soon followed by the candle. Both kinds of light source, converting BTU's to watts, produce about 0.3 lm/w, an efficiency near 0.1%. Two discoveries improved upon this, after the year 1800. First, the discovery of acetylene in the 1830s was followed within a generation by the development of gaslight for houses. It is three times as efficient as a candle or kerosene lamp. Then the gas mantle was developed in the 1880s, which is twice as efficient yet. Until Edison came along with the first carbon filament bulb (marginally more efficient), that was it. Tungsten began to be used for filaments in the 1920s, and the "early modern" incandescent bulb was on its way. Though fluorescent tubes were used in offices (similar time frame), they never became popular in homes because of their tendency to flicker. We just lived with lamps that wasted 94+% of their power.
CFL's of various shapes have about a 20-year history, in practical terms. It was about 1990 that they burgeoned into widespread use. They vary from 4-5 times as efficient as 100W incandescents. I have very few tungsten bulbs left in my house.
It is six years since I bought my first LED flashlight. It is a police-sized model with four D cells powering it, and 15 LED's. It is brighter than my old 2W flashlight, and the same four cells are still in it. The LED's are interesting. "White" LED's utilize a blue LED and a yellow phosphor that is efficiently excited by the blue wavelength. The yellow, consisting of moderately broadband red and green phosphors, mixes with the blue to produce a blue-white light. Newer screw-in LED bulbs in sizes from 4W to 8W, at prices of $70 or so, use the same technology.
There is a built-in inefficiency here, which I hope someone addresses. The conversion of blue to red and green entails a loss of about half the energy in the original blue light. I'd like to see a LED source that contained five or six LED's (or a multiple thereof), producing five or six wavelengths spread through the 400-700nm range. In other words, don't convert any photons, but produce only photons you are going to use. I suspect a nicely white lamp produced this way would have nearly twice the efficiency of the current LED's, perhaps 120-140 lm/w. This is in the range of 10x as efficient as a 100W bulb. How 'bout that? Light up a room using only 8-10 watts!
Then there is a further refinement. Just as most of the light from the 100W bulb is red and orange, the light from an LED source could be adjusted, but with a peak in the yellow-green instead of in the red. Such a "modulated" source might approach or exceed 150 lm/w, an efficiency of 60% or better. That is probably close to the ultimate that can be achieved, for a white-looking light. I can hardly wait.
Sunday, July 18, 2010
Getting Darwin wrong
I can just hear all the biologists and paleontologists saying, "For pity's sake, don't publish that! We've got trouble enough in the trenches against the Creationists!!" But no, they just had to do it: Enter What Darwin Got Wrong by Jerry Fodor and Massimo Piattelli-Palmarini. To jump to the chase, the statement that is sure to get everyone's attention is their conclusion, that "there isn't any theory of evolution."
So what did Darwin get so wrong? It wasn't proposing that evolution is happening and that new species are produced from existing ones. It wasn't even his lyrical descriptions of the spreading diversity of living things, as exemplified, for example, by the finches that had "obviously" radiated from a founder population that came to the Galapagos Islands not too many thousands of years ago. Rather, it was his proposal of a mechanism for evolutionary change and innovation, for which he proposed the name natural selection, as somehow analogous to the artificial, human-directed selection that has produced many dog breeds from the primordial wolf, many pigeon breeds from the rock dove, and many breeds of cattle from the aurochs.
There genuinely is a serious problem with this analogy: artificial selection is mindful, but natural selection is mindless. In fact, to be a viable theory of evolutionary change, it is required to be so mindless that it is very, very hard for us mindful creatures to imagine it. We mostly don't. Instead of doing the hard cognitive work to discern the natural history sequence that may have led to a particular hummingbird species, for example, most writers produce "just-so stories" filled with intentional language. The hummingbird's ability to nearly stop its heart overnight when it can't feed is typically described as a reaction to the bird's "need" to conserve energy. I have yet to see a depiction that begins with a pre-hummingbird, probably larger and with plenty of energy to survive the night, and then shows how, by various stages, as a population of nectar-feeding opportunists became one of nectar specialists, the entire physiology of the birds changed, one part of that change being the development of a more variable metabolism.
Generalists becoming specialists. Is such specialization some kind of natural law? It is sometimes stated as such, as though it were as infallible as the law of gravity. But this kind of language sounds like there is some mind directing things behind the scenes. Perhaps Darwin did not make his argument sufficiently clear. He proposed an entirely mindless process, which nevertheless produced a result (many, many results) that we, with our pattern-detecting and -generating minds, think of as "progress".
I could go very long here, and I gathered a lot of quotes from the book, with some intention to take up the cudgels on behalf of natural selection. I think the theory of natural selection is quite valid, but I have to agree with this book's authors that Darwinists and Neo-Darwinists are guilty of a great deal of sloppy thinking. So I will do my best to keep this short, and merely present a few points worth considering. For a longer discussion of related points, see my posts of July 13, July 15, and July 17.
First and foremost, if a creature is alive, it is taking advantage of a flow of energy from some source to some sink. Most life on earth lives courtesy of Solar energy, but some lives instead on energy released from Earth processes, or stored chemical energy left over from the formation of Earth. For simplicity, I'll consider some animal in the food chain that begins with plants that photosynthesize Solar energy and ends with the decomposition of the animal, or its predator, by fungi and bacteria. The requirements for this animal's sustenance are: something to eat; sufficient water, either for it to drink or such that it won't desiccate (maybe it lives in water); a range of temperature in which its proteins work right; a low enough population of predators that it can live a while; and safe places to hide when predators are around. But this animal was born, and if its species is to continue to exist, there are a few more requirements: other members of the species (or the population) sufficiently close by that it can find a mate (we're assuming a sexual species here) and a level of stability in its environment that is tolerable, during this animal's productive and reproductive life span. There may be requirements I haven't thought of, but this is a sufficient set to make the point.
That collection of requirements describes what is often called the "niche" for a species. This is useful shorthand, but we must use care to avoid circular language. Creatures tend to create the niches in which we find them. There is no pre-existing niche that yawns expectantly until just the right critter comes along. Consider rabbits and Australia.
There were no rabbits in Australia until some well-meaning humans brought a number of breeding pairs there in 1859. The conditions over a large part of the continent are ideal for rabbits. Accounts abound of the awful consequences of their spread there, to the point that two million could be killed yearly and have no noticeable effect on the population. Was there a niche for rabbits, awaiting their arrival? No, there were other grazing animals that had been making full use of the native plants. Their populations waned as rabbits spread. Though this is natural selection in action, no new species have so far been produced, though a few may have gone extinct. In time, perhaps the local predators will get better at hunting rabbits, and a new species of marsupial rabbit hound may arise. Unless, of course, humans find an appropriate rabbit disease with which to eliminate them.
Let us consider a longer span of time into the future, say 100,000 years. That is a lot of rabbit generations. If Australia still has rabbits, it is likely that a few species of more specialist rabbits will have evolved. Some might eat mostly grassy plants and their seeds (as the rabbit in my yard does); some may eat more woody fare; some may get better at swimming and eat marsh plants, though they'll have to watch out for crocs. Why is this? When an environment is "too rich", as Australia is, in rabbit terms, the development of specialist species is favored because the total of their populations exceeds the population of a more generalist species that is less efficient at metabolizing a wider range of foods.
This, by the way, is the kind of prediction that natural selection allows us to make. It is no empty theory, as our two philosophers would have us believe. Theories have two principal uses: to explain what we see, and to predict what might happen as a consequence. Some theories, such as Newton's theory of gravity, or Einstein's general theory of relativity, are sufficiently exact that they can be used to make very exacting predictions of such things as the dissipation rates of globular clusters of stars, or of the position of the planets and their moons for many centuries into the future.
The theories of natural history, and most particularly evolution's theory, natural selection, deal with much more complex systems, and their predictions are correspondingly less precise. Natural history and paleontology show us that mammal species tend to survive for between one and four million years, before either evolving into new species or going extinct. However, we know of mammal species that are quite a bit older than four million years, and we are finding out how easy it is to drive even young, thriving species to extinction by hunting or habitat destruction. So the "one to four million years" species lifetime is no more than a historical range, and cannot be taken as a natural law. But it allows us to say that, were we to invent a time machine and jump twenty million years into the future, very few of today's species of mammals would still be in existence, and while a similar total number of mammal species would likely be found (unless humans are still around and have really messed things up), most of them would be just one or two million years old.
Finally, I must agree with the authors in decrying the tendency of many to say that natural history can produce anything. They ask, "Will pigs fly?" and they answer, only if their weight, musculature, and number of limbs change (they also posit feathers, but bats fly without feathers). Let's see, reduce the weight so wings don't have to be 747-sized; instead of adding limbs, reconfigure the front limbs; change the forelimb and forehoof into some kind of flying surface...well, I can see the direction this is going, and that "niche" is already filled, with the flying foxes, the large tropical fruit bats.
But the point I'll make, enlarging on one the authors make, is that natural selection has to work with the changes that mutations can produce from already-living creatures. You can't have a hand without an arm (well, you can, if your mother took Thalidomide during pregnancy, but it isn't a very useful hand). It seems that natural selection hasn't deviated from the humerus-ulna-radius scheme for arm bones of mammals and birds, but the number of "finger" bones has varied. Some humans have six fingers, and I know one man with no thumbs, just five fingers on each hand. Equids have one "finger" in each foot. While one can imagine an extra bone accompanying the humerus in the upper arm, the present arrangement is "good enough" and has never been improved upon.
The authors do not propose a new theory of evolutionary mechanisms. They say there isn't one, and simply propose that we stick to descriptive natural history. I think a point or two that I have made, in an elementary way perhaps, show that natural selection is a useful theory. It gives us language with which to describe how populations change through time. It accompanies the fact of evolution and provides explanatory power. It is easy to misuse, and Drs. Fodor and Piattelli-Palmarini have done us good service to point out the many ways it has been misused. But let us not discard the theory just because it has been misused. We don't discard our hammers, just because a hammer is occasionally used as a murder weapon.
Saturday, July 17, 2010
Mutation is no fairy tale
kw: observations, natural history
From time to time science fiction will go through a wave of "Mutation" stories in which unsuspecting persons will suddenly develop new organs or limbs, gain new skills, or become superheroes. The current popularity of the Fantastic Four and the X-Men, a nostalgic relapse from the early 1960s, is a case in point. The X-Men are specifically called mutants in the Marvel comic series, while the FF superheroes are said to have experienced "changes to their DNA" due to a solar flare. In 1931, “The Man Who Evolved” by Edmond Hamilton depicts a man becoming first Homo superior, then devolving to an amoeba, from the influence of concentrated cosmic rays. Such literature goes back to about the year 1800.
Most people know innately that such sudden "evolution" doesn't happen. There is actually a greater body of literature, such as that by Olaf Stapledon, about more gradual evolutionary processes. The theory of Natural Selection proposed by Charles Darwin in 1859 is entirely gradualistic. At a time when the mechanisms of inheritance were entirely unknown, Darwin proposed that an unnamed agent of change introduces small variations in every newborn creature, that some of each generation will be more "fit" than their fellows, and thus that there will be a winnowing such that the more fit leave more progeny than the others.
The discovery of Mendelian "digital" (all-or-nothing) inheritance in the early 1900s provided the first hint that the stuff of inheritance came in discrete units. The discovery of DNA and the double helix by Watson and Crick in the 1950s gave us an actual data-recording molecule that can record and play back a creature's inherited characteristics. Tons of subsequent research have begun to show us just how it is all done.
For the purposes of this essay, only the following is needed:
- DNA consists of discrete "bases", four in number, called A, C, G, and T, that can be arranged in any order. The human genome consists of about three billion bases.
- These bases are translated by threes into amino acids. A 3-base code is called a codon.
- 4^3 = 64, so the "genetic code" could support 62 amino acids plus "start" and "stop". In reality there are 20 amino acids that are actually used to make proteins, so the code is degenerate; most amino acids have more than one corresponding codon.
- The codons in the DNA are copied to an intermediate RNA string, from one Start to one Stop codon.
- A mechanism that is unimportant here translates the RNA codons into amino acids and strings them together into proteins.
- There is a mechanism that assures that proteins fold correctly so as to be properly active.
- And finally, all this happens in the context of a living cell, which arose by a process we have yet to discover, but which is essential for the DNA to be decoded into proteins, and for those proteins to do their work or build what they need to build.
In its raw form, a mutation is a change to the DNA. Such a change might be in the form of a change from one base to another, such as from A to G. It might be the deletion of one or more bases, or a "stutter" that inserts a new base (or more) between existing bases. It might be that a chunk of DNA of arbitrary size is duplicated and inserted into an arbitrary location.
Some of these changes will change a protein, but not all. In particular, a change of a single base has a fair chance (roughly one in four, mostly from changes in a codon's third position) of leaving the amino acid unchanged, because of the degeneracy of the code, and only a much smaller chance of changing an amino acid codon to a Start or Stop codon. These single-base changes, or "point" mutations, are by far the most likely, and are called Single-Nucleotide Polymorphisms, or SNPs, pronounced "snips". SNPs that don't change the amino acid are called Silent mutations.
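As a concrete illustration (my own example, using a small excerpt of the standard genetic code), here is what the nine possible single-base changes to the glycine codon GGC do; the three third-position changes are silent, the other six are not.

# Sketch: all nine single-base changes to the codon GGC, with a partial codon table
# (RNA alphabet). Third-position changes are silent because glycine is 4-fold degenerate.
CODE = {"GGA": "Gly", "GGC": "Gly", "GGG": "Gly", "GGU": "Gly",
        "AGC": "Ser", "CGC": "Arg", "UGC": "Cys",
        "GAC": "Asp", "GCC": "Ala", "GUC": "Val"}

codon = "GGC"
for pos in range(3):
    for base in "ACGU":
        if base != codon[pos]:
            mutant = codon[:pos] + base + codon[pos + 1:]
            tag = "silent" if CODE[mutant] == CODE[codon] else "changes the protein"
            print(f"{codon} -> {mutant}: {CODE[codon]} -> {CODE[mutant]}  ({tag})")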
There is a second level of silence possible. By a kind of 90-10 rule, about 90% of most proteins is scaffolding, things like alpha helices and beta sheets that hold an active site in an appropriate position, and about 10% comprises the actual active site. For structural proteins, the active site is simply a very small region that allows proteins to link up into larger structures. Most changes to the scaffolding amino acids don't cause any change in the activity of the protein, so these are also silent. A change to the active site is more likely to change the functioning of the protein.
Grosser changes, mainly insertions and deletions, are more likely to cause major disruption, because they could change the entire amino acid sequence of a protein. These are rare, and most of those that occur cause the cell to die, unless it can get along without the affected protein, or can tolerate the entirely new protein that was produced. But the production of a new protein is an opportunity for a new function to arise.
It is also necessary to consider that the 24,000 or so known genes in humans (and quite variable numbers in other organisms) take up only about 2% of the genome: 60 million out of 3 billion total bases. The other 98% has been called "junk DNA", but the term is dropping out of use. It includes about 100 total virus genomes, some of them containing sequences brought in from other animals. It includes long strings of repetitious DNA (like the "TATATATATA" strings which are so hard for genome sequencing methods to figure out). It also includes areas that are thought to encode "catalytic RNA" and "regulatory sequences", which are the subject of lots of study, but about which not much is yet known. This "non-gene" DNA is a vast sea of possibilities; until many of them are figured out, we won't be able to give much of an answer to, "Why is it there?".
Let us first focus on SNP mutations that are not silent. They occur in a gene, they cause a different amino acid to be strung into the protein, or they encode "Stop" and cut it short, and they change the activity of the protein or eliminate it altogether. Then we must ask, what was that protein going to do if unchanged, and what, if anything, will it do now? Here, things can get complicated.
Perhaps the protein is insulin. If it is cut short or totally disabled, the cell will die; it can't metabolize glucose. If the activity is reduced, the cell will have a lower energy level. If this is an ovum or sperm cell that gets "used" to make a baby, that person, if viable, would be less energetic. If the activity is instead increased, the energy level might be higher, though this may reduce life span due to greater levels of free radical oxidants.
Suppose the protein is only expressed when making melanin for the iris of the eye: eliminate its activity, and the eye will be blue. I once knew a blue-eyed Sioux Indian. He said he thought it was a mutation. The other Indians thought his mother had a secret. Blue eyes are a recessive trait, so there had to be at least two secrets, if that were the case!
We could multiply examples, but we need to look at another level. Some genes are regulatory; they either produce proteins that regulate the expression of genes, or they produce an intermediate RNA product that does so. In multicellular creatures, there are special "homeotic" genes that control the shape of the body. In animals this is the Homeobox; in plants there is a homeotic sequence that is less specific, but equally crucial. A non-silent SNP in a homeobox gene is almost always fatal.
There are, in between the homeotic genes and the single-protein ones, several levels of regulation, perhaps as many as five or six. A non-silent SNP in any of these can cause major changes in the organism: extra or missing limbs or eyes or heads – well, suffice it to say, all kinds of things certain museums put on display as "monsters".
Now, let's step back to a high altitude. One-third, perhaps more, of fertilized human ova do not come to term; at some stage the embryo or fetus dies in utero. I don't know of anybody who has tried to sequence the genome of an early-miscarried fetus, but it is thought that most of these deaths trace to mutations that were not viable. Yet we all carry 50-100 SNPs that are not present in either of our parents. We, the living, were the lucky ones. Most of those SNPs are silent, and the rest at least caused little or no harm.
Let's now focus on another kind of mutation. Sometimes an entire gene is duplicated. Since only one copy is needed for the cell to function normally, a later change to one of them, no matter how disastrous, will seldom harm the cell. Let this happen to an ovum or sperm, and it frequently does, and you have an organism with a new chunk of DNA that can be changed in its descendants, perhaps into something good, but more likely into another piece of non-coding DNA (formerly "junk DNA"). Key point: we are all mutants, just not mutants "very much".
Traditional, or classic, natural selection is the name for a process that is otherwise hard to describe. It operates by differential death among the members of a population who are all mutants "not very much", each carrying their 50-100 SNPs and perhaps one or two other kinds of DNA changes as compared with their parents. They all, at least, survived gestation and birth, or seed dispersal and sprouting. Some will die when very young. They will die one after another until all have died. In the meantime, some will have no offspring, some one, and some will have more.
Even having several offspring is no guarantee of anything. Abraham Lincoln had four children, yet he has no living descendants. In the language of evolutionary theory, his line was "selected against." In detail, some stayed single and never had children, and most of the rest either died too young to have any or had children to whom the same thing happened. It took four generations for the family to die out.
Classic natural selection is wholly gradualistic. Changes to the way organisms grow and reproduce, however, all themselves produced by natural selection, have led to mechanisms that increase the likelihood of less gradual changes. The homeobox in animals is a case in point. It must have arisen more than one billion years ago. It is different sizes (different numbers of genes) in different animals, which indicates that whole sections of it were duplicated and re-spliced, many times, in the past. A worm whose "baby worms" have a double-size homeobox will simply produce longer worms with more segments. With more complex animals it gets trickier, but it is the genes in the intermediate levels that determine which segments become a head, or torso, and so forth.
A billion years is a long, long time. Most animals produce one or more generations per year. When times are good (lots of food, comfortable temperature and pressure, plenty of water, few or no predators), they multiply until there is no longer lots of food. Suppose you have a population of a million lemmings, and they remain rather stable for a thousand years. They reproduce twice yearly. That's two billion lemming babies, each a tiny bit different from their parents…except for the few hundred who are a lot different from their parents, due to changes in genes with more influence. How much is a lot? Not enough to change a lemming into a dog, or cat, or horse, or even mouse or vole. But enough so it can run faster, or is bigger and more aggressive, or can digest that bad-tasting flower that is taking over the over-grazed fields. There could then be a significant change in future generations.
In the literature, such mutations are called "saltations" meaning "jumps", and they are forbidden by classic theory. But they happen. The more radical a change is, the less likely it is to be helpful, so perhaps those few hundred lemmings are just not enough for any saltations to be useful in the long term. Now give those lemmings, not 1,000 years, but one million. A few hundred saltations became a quarter million. It is now much more likely that some few will be greatly beneficial, and lead to quite a shift in that population of lemmings.
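Since I like to check arithmetic like this, here is a minimal Python sketch of the lemming numbers. The population size and breeding schedule come from the paragraphs above; the per-birth chance of a "large" change is purely an assumed figure, chosen only so that a thousand years yields a few hundred of them.

```python
# Back-of-the-envelope lemming arithmetic.
# Population size and breeding schedule are from the text; the per-birth
# rate of large-effect changes is an ASSUMED figure, picked so that a
# thousand years gives "a few hundred" of them.

POPULATION = 1_000_000       # stable lemming population (from the text)
LITTERS_PER_YEAR = 2         # they reproduce twice yearly (from the text)
LARGE_EFFECT_RATE = 1.5e-7   # assumed chance per birth of a big change

def births_and_big_mutants(years):
    births = POPULATION * LITTERS_PER_YEAR * years
    return births, int(births * LARGE_EFFECT_RATE)

for years in (1_000, 1_000_000):
    births, big = births_and_big_mutants(years)
    print(f"{years:>9,} years: {births:>17,} births, ~{big:,} large-effect mutants")

# 1,000 years     -> 2,000,000,000 births, ~300 large-effect mutants
# 1,000,000 years -> 2,000,000,000,000 births, ~300,000 (a quarter million-ish)
```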
Multiply by thousands of small mammal species, and by more millions of years, and it is certain that saltation has had a lot, a LOT, to do with generating new species. We're not talking X-Men level changes, of course. That's not a saltation, that needs a new word! Compared with that, these mutated lemmings still represent a comparatively gradual change. But it is just this kind of change that can drive the "punctuated equilibrium" that S.J. Gould and his colleagues described a few decades ago.
A mistake we all make with natural selection is to think of it in anthropomorphic terms, as though it were intentional. Natural selection isn't a "thing"; it is a term we use to describe a process which has operated since life began. Lots of babies are born, but not all grow up and have babies of their own. Combinations of factors that we describe as "fitness" are used to explain why certain individuals reproduced and certain ones did not. We can only do this after the fact, because there is no way to predict what kind of a difference leads to greater fitness.
For example, consider that lemming that is faster, a lot faster. Is that beneficial? We might think so. But consider that one defense of the lemmings against foxes is that the fox gets confused by a field full of them, and can't pick a target. Then young Speedy the Superlemming shows up, and the fox finds it easy to pick him out. "Snap!" Young Speedy is no more. He just got selected against. If only he'd been a whippet instead of a lemming; he'd be famous and make lots of little whippets. Natural selection can't tell "If only" stories. It just describes the process.
Mutations, of whatever magnitude, present numerous hostages to fortune. Some prosper and some do not. Those that prosper determine the complexion of the next generation. We can only describe what happened in hindsight.
The process of natural selection has actually resulted in a very creative tension between stasis and change. Those sections of the genome that are most strongly "conserved", that typically cannot be changed, even a little, without fatal damage to the organism, can only be changed by saltations. If Speedy the Superlemming is fast enough, he can outrun the fox. Then, as long as he can find a girl lemming that likes a fast mate, he just might sire a line of superfast lemmings, which might be enough to produce a new species. We can't predict things like this, and I don't think we'll ever have a theory that can, but we can say confidently that they have happened in the past, and are likely to happen again. We'll always be surprised.
Thursday, July 15, 2010
Adapted for what?
I am part way through a difficult book, and reading it spins off ideas every which way. The current chapter makes much of the concept of "selection-for", in the sense that physical traits manifest a direction imposed on natural selection so as to make a creature optimally adapted to its environmental niche. The notion of selection-for is advanced as a weakness of Darwinian (or Neo-Darwinist) evolutionary theory.
This illustrates a profound misunderstanding of natural selection. Darwin invoked artificial selection, which has produced hundreds of breeds from a few species of wild stock (such as Dobermans, Mastiffs and Pekingese from wolves), and used it as the basis for his introduction of natural selection. The problem is, one is purposeful, the other purposeless. So Darwin, if he leaned too hard on this analogy, as many say he did, left himself open to criticism. The core concept is this: artificial selection is purposeful, driven by conscious entities, and is indeed a series of selections for variations in traits that are desired by those entities; natural selection is selection against variations in traits that are less advantageous to the reproduction or existence of creatures as they cope with their environment.
The heart is used as an example. Two characteristics we might point out are that it pumps blood, and that it makes "lub-dub" sounds. Extreme adaptationism is held up as a straw man, one that insists both characteristics must have been selected for, because they both exist. I don't know anybody who believes the heart sounds were selected for explicitly. Natural selection is not influenced by possible futures; it cannot have "known", half a billion years ago, that one day doctors would listen to heart sounds to discern the health of their patients! The sounds are a side effect of the heart's action, just as the slapping sounds of your feet on the pavement are side effects of running or walking.
As a matter of fact, I suspect that loud heart sounds were selected against, because they'd make an animal more easily detected by a predator with good hearing. The present level of heart sound is a compromise between the energy needed to circulate blood and the risk of a predator hearing it happening. Perhaps this is why, though fight-or-flight terror causes the heart to race and get louder, total shock makes its beat weak, fluttery and almost silent.
A key point I have very rarely seen addressed, which is so far not mentioned in the book either (I'm halfway through), is that every creature is a work in progress. Evolution isn't finished with us yet. A few items in human evolution that I can think of off the cuff:
- Wisdom teeth. Many people have a mouth too small for them to fit, and have them removed (at least in the developed world; in poor places they just eat poorly and cry a lot). And some people are born with three or fewer, or even none. So if there were no dentists, never fear. In 50,000 years few people will have wisdom teeth anyway.
- Lower back pain. The back has been changing for the 2-3 million years that hominids and humans have walked upright. It is still changing. People 50,000-100,000 years from now will have a very different posture than we do, and there won't be any more chiropractors.
- The appendix. It serves an immune function in most mammals, but primates have a more efficient immune system and it is not needed. Its tendency to become infected, if there were no antibiotics or surgeons, would lead to its gradual elimination. As it is, however, future monkeys and apes are likely to have no appendices, but humans still will.
Introduction of a species into a relatively empty environment leads to adaptive radiation, like that seen for Darwin's finches. At one time, there were no birds on the Galapagos Islands. At some point a breeding colony of finches was established. There was variation in this population, and the plethora of kinds of food available led to variation in the finches. But it worked like this: pre-existing variations in the strength of the beak made some finches prefer grass seeds, while others could eat tougher seeds. Perhaps at first there was so much "easy food" that they all ate that. But of course the population outgrew the easy food source. The birds that could survive on the different foods gradually became several species, each adapted to a diet different from the others.
There were side effects to this. The finches that needed the strongest beaks, which were of course the thickest beaks, became physically larger. It takes a bigger body to support the heavier, larger skull needed to energize a real nutcracker of a beak. Larger body size wasn't "selected for" all by itself. It was part of the package needed to exploit the largest and hardest seeds.
It can be said that life is the ultimate energy filter. Energy from the Sun mostly just radiates to the edge of the Universe. Some strikes the Earth, energizing its biosphere. Left to itself, the Earth would absorb the radiation and re-radiate it at a longer wavelength, mostly to the edge of the Universe. But Earth is not a passive, vacuum-immersed globe. It has an atmosphere full of greenhouse gases such as water (by far the most efficient one). Just the presence of this wet atmosphere changes the energy flow, shifting the temperature of Earth's surface to a new equilibrium, some 30°C warmer on average, but with a much smaller total variation, compared to the vacuum-wrapt moon.
Then life gets involved. Chlorophyll converts some of the light energy into electric charges, enough to promote a chemical reaction that converts water and carbon dioxide into carbohydrate, releasing oxygen. It incidentally reduces the planet's temperature a couple of degrees in the process. Chlorophyll-bearing plants, algae and cyanobacteria are eaten by animals (and protists and certain bacteria). And, of course, animals eat all of the above, up to top predators that are typically eaten only by scavengers and "decomposers" when they die of elderly infirmities.
Consider an early Earth, with lots of energy flow but little life, yet. The environment seems poor, but is actually rich for the creatures that exist. There is more energy available than they can make use of. Easy sources are used first. The population grows until all of the easiest source is taken. Those creatures that, because of chance variation, can exploit less easy sources, gradually become new species. Over time, at any trophic level, this leads to broad, rich niches becoming narrower and narrower, and species multiplying. This picture is complicated by all the side products of these burgeoning species, which provide energy to creatures that could not have existed before (lots of these become parasites).
No matter where you start looking at the situation, whether today, or during the age of dinosaurs, or a billion or two billion years ago, the central fact is that this is a situation of profound disequilibrium. If all humans were to leave the Galapagos Islands and keep them quarantined for a million years (we should exist so long), there would then be more finch species than there are today, each zealously exploiting a narrow niche defined by food source. There would also be more species of most kinds of the creatures there, perhaps even of the giant tortoises.
Every one of those species would be a work in progress: well adapted to its niche, seemingly optimally so. But look beneath the surface. Finch species A's population is expanding, putting pressure on Finch species B, which doesn't quite eat the same sort of seed, but there is some overlap of food source. Over time, Finch species B could change, maybe just a little, so it can better withstand the competition from A…or it faces extinction. More likely, a sub-population from either A or B may begin to specialize on the "overlap" food source, and eventually lead to a new species, muscling out both A and B from it, until both of them quit using it (At that point, A and B may have become new species, replacing their former selves; there's more going on than just food preferences here).
Well, I've come far afield with this selection-for rant. This is the take-away. Every species is a work in progress, no environment is totally stable, and tomorrow will be different from today. Natural selection is our name for the differential death rate between variations that can either more easily or less easily take advantage of the resources provided by the present environment. Its partner, mutation, requires a rant of its own (Oh, goody! Now I have a subject for tomorrow or the next day).
Wednesday, July 14, 2010
The computer is dead, long live the computer
Here's where the time went:
- Saturday afternoon: boxes arrived, built computer, with much help from son (MB MSI P55-GD80, Intel I5-750 CPU). Got it to start booting, but CPU would not complete initialization sequence.
- Sunday afternoon: played around, finally gave up.
- Monday midday: took to a technician.
- Tuesday afternoon: picked it up, paid technician. I'd found the exact one way to mis-locate a jumper and get a partial bootup.
- Tuesday evening: failed to load the operating system for a few hours, until I took out the DVD to check it. It was upside down! Loaded the operating system (Win 7 64-bit).
- Wednesday evening: remove old computer from workstation; move new computer to workstation. Obtain antivirus software and get it set up. Load printer/scanner drivers and control utility for scanning. Start getting familiar with Win 7 (old computer ran XP).
- Now it is almost 10 PM. Will crash for the night. Must arise early tomorrow, as usual.
Tuesday, July 13, 2010
Steps on a spiral staircase
By an interesting coincidence, both the book I most recently reviewed and the one I am now reading place a certain emphasis on epigenetics. The word means "beyond genetics" and refers to influences on the development of a creature that depend on something more than pure genetics. One example given is of elk that inhabit the Yellowstone ecosystem, which is nearly ideal elk habitat, with lots of open space (they can see predators from far away) and plenty of forage. They grow big there. By contrast, elk that live in more forested areas such as the Black Hills are smaller, though genome sequencing shows they are the same species, even the same subspecies. Offspring of elk from a forested area taken to Yellowstone do not grow as big as Yellowstone "natives", which is taken as evidence for epigenetic inheritance.
I'd want to see how the grandchildren do, because a mother's nutrition affects the development of her offspring. We see this in humans. Children of a poorly nourished mother are smaller and develop later than those of a better-fed mother. Conversely, many immigrants to the US who bear children at least a few years after immigrating find that those children are larger and develop more rapidly than siblings born in the "home country". They frequently grow to tower over their parents. This is not evolution in action; it is the expression of potential that is either released or repressed by environmental influences.
In the current book, much is made in one chapter of the spiral patterns seen in many plants and animals, and their relationship—not always so faithful—to the Fibonacci Series. This famous mathematical series is produced by successive addition of terms: 1, 1, 2, 3, 5, 8, 13, and so forth, each term being the sum of the prior two. The ratio of successive terms converges on the value Φ (Phi), the Golden Mean or Golden Ratio, which is, to eight decimals, 1.61803399. Phi is also equal to one plus the square root of five, all divided by two, (1+√5)/2.
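A minimal Python sketch shows how quickly the ratio of successive terms settles onto that value:

```python
# Generate Fibonacci terms and watch the ratio of successive terms
# converge on Phi = (1 + sqrt(5)) / 2, about 1.61803399.
from math import sqrt

PHI = (1 + sqrt(5)) / 2

a, b = 1, 1
for n in range(3, 16):
    a, b = b, a + b
    print(f"F({n:2d}) = {b:4d}   ratio = {b / a:.8f}   error = {b / a - PHI:+.2e}")
```

The errors alternate in sign and shrink rapidly; by the 15th term the ratio matches Φ to six decimal places.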
Do this: take off your shoes, and mark with a pencil on the wall the level of the base of your rib cage. This is your waist height. If you don't know your height accurately, mark that also. Measure. Divide your height by your waist height. For me, the two values are 72" and 44". 72/44 = 1.64. For most people, this ratio will be between 1.55 and 1.7, and averaged over many people it comes out near 1.62, so it is often cited as an example of the Golden Ratio in human growth.
I guess Michael Phelps, the swimmer, is a big exception. His waist is 4" (10 cm) closer to the ground than it should be. In one article it was stated this way: He has the torso of a 6'-8" man, and the legs of a six-footer. He is 6'-4" (193 cm). From this I infer that his waist height is, rather than the 47" we'd expect, about 43", and his height/waist-height ratio must be about 1.77. This departure from the norm, however, makes him "built for swimming", with longer arms and stronger legs than anyone else his size.
Plants exhibit more exact Fibonacci Series values than animals. This sunflower bloom has spirals formed by the ovules (soon to be seeds). If you print this image and count the spirals, you'll find there are 55 that sweep to the left as they spiral outward, and 34 that sweep to the right. These are the 10th and 9th Fibonacci numbers, a Fibonacci pair. Different flower species have different numbers of spirals, but always, barring damage during growth, they form a Fibonacci pair.
We could say the flower head has a problem to solve: to develop seeds that pack together so as to fill the space without wasting any. The solution developed over time by natural selection is to have the seeds gradually increase in size and plumpness from center to edge, and to segment the seed head according to chemical gradients based on Fibonacci pairs. Underlying this segmentation is a mechanism that begins with 2 and 3, the 3rd and 4th Fibonacci numbers, and advances stepwise to the values used to place the ovules, each of which develops into one seed.
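The chemical gradients themselves are beyond a blog post, but the geometric result is easy to sketch. The snippet below uses a commonly cited phyllotaxis formula (Vogel's model, in which ovule n sits at angle n × 137.5° and radius proportional to √n); it is a geometric illustration of the packing, not the developmental mechanism the plant actually uses. Plot the points and you can count spiral families that form a Fibonacci pair such as 34 and 55.

```python
# A geometric sketch of seed placement in a sunflower head, using
# Vogel's model: ovule n at angle n * 137.5 degrees (the golden angle)
# and radius proportional to sqrt(n).  This illustrates the packing
# only, not the chemical-gradient mechanism described in the text.
from math import sqrt, cos, sin, radians

GOLDEN_ANGLE = 137.50776  # degrees, equal to 360 / Phi^2

def seed_positions(count, scale=1.0):
    positions = []
    for n in range(1, count + 1):
        theta = radians(n * GOLDEN_ANGLE)
        r = scale * sqrt(n)
        positions.append((r * cos(theta), r * sin(theta)))
    return positions

# Plotting 600 or so of these points shows the familiar interlocking
# spirals; counting them gives a Fibonacci pair.
pts = seed_positions(600)
print(pts[:3])
```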
The series doesn't have to proceed nearly as far to generate the spirals seen in the base of a pine cone. This one has 8 and 13 spirals. Even though the sunflower plant is an angiosperm and the pine is a gymnosperm, similar genetic mechanisms underlie the development of these structures. The use of Fibonacci pairs to pack things into spherical or circular spaces was developed very early.
It reminds me of the principle of a knurler. This is a machine tool used to put the crosshatch grooves on a metal knob. I learned to use one many years ago. It has two smooth rollers and one with the crosshatch knife pattern, and works like a pair of pliers. The piece to be knurled, typically an aluminum knob, is centered in a lathe and rotated at about 1-2 turns per second. Beginning with a light touch, the knob is grasped with the knurler and the machinist looks at the pattern to see how well it fits around the knob. It helps to start with a knob whose dimensions will be a good fit, but the fit will change as the grooves are deepened, so you have to start with a slightly oversize knob.
The knurler's handles can be twisted to make the tool bite at a little different angle, to improve the fit. When there is a near match, the machinist squeezes hard, and the pattern entrains and deepens. Four or five seconds is enough to get a good knurl. When it is done right, you get a nicely knurled knob without chatter marks caused by misfit.
The pattern of a sunflower blossom or pine cone is determined very early. The growth of the entity then fixes the pattern and retains the fit. This is how natural systems work. The sunflower ovules or cone scales, as they grow, are controlled by their surrounding fellows, so all grow evenly and the final product is very uniform.
But the most important learning from these patterns is this: They result naturally from simple rules followed when you have a finite number of items to pack into a space. They are digital items, and only pack well when they are arranged according to a Fibonacci pair. You are welcome to try to make a sunflower head that contains 9 and 17 spirals, or 40 and 60, for example. Of course such things can be drawn, but they'll look rather odd. More to the point, they can't be generated by simple addition from starting numbers like 2 and 3.
Plants like this succulent look a lot like an unopened pine cone when they are very small. They express Fibonacci numbers in another way. The problem to be solved here is to give each leaf maximum sun exposure. This is done by using the square of Φ, which is about 2.618 (in exact terms, Φ² = Φ+1). 360°/Φ² ≈ 137.5°. If you start with the top leaf, the next one down is about 137.5° around the plant, and so forth. For this particular plant, I counted round and round until I found a leaf that is just below the small one that points down and to the right. It is the 13th leaf after that first one, and the number of turns around the plant stem is five. Five and 13 are not an immediate Fibonacci pair, but a pair with a skip.
Many plants have similar arrangements. Sunflower leaves tend to repeat after eight leaves and three turns. This leads to an angle of 135° from leaf to leaf. If any plant were to have exactly 360°/Φ² from leaf to leaf, then no leaf would ever be directly below any other. However, nature really isn't that exact. A 3-for-8 or 5-for-13 ratio is good enough.
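Here is that arithmetic in a few lines of Python: each "turns per leaves" fraction built from a Fibonacci pair with a skip gives a divergence angle that lands near the golden angle, alternating above and below it as the numbers grow.

```python
# Leaf-to-leaf divergence angles implied by Fibonacci "turns per leaves"
# ratios, compared with the ideal golden angle of 360 / Phi^2.
from math import sqrt

PHI = (1 + sqrt(5)) / 2
GOLDEN_ANGLE = 360 / PHI**2   # about 137.5 degrees

for turns, leaves in [(1, 3), (2, 5), (3, 8), (5, 13), (8, 21)]:
    angle = 360 * turns / leaves
    print(f"{turns} turns / {leaves:2d} leaves -> {angle:7.2f} deg "
          f"(off by {angle - GOLDEN_ANGLE:+.2f})")
```

The 3-for-8 sunflower arrangement is off by about 2.5°, and 5-for-13 by less than 1°, which is indeed good enough.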
If you make squares of dimension 1, 1, 2, 3, 5, 8 and so forth, they will pack together as shown in this diagram. The smooth curve is the Fibonacci spiral, and it is very close to a logarithmic spiral with a parameter of Φ⁴ (the parameter is the ratio between one turn's width and that of the prior turn). The turn-to-turn distance increases by a factor of Φ with each quarter turn.
Curves that closely approximate logarithmic spirals abound in nature, because many creatures grow according to their current size, adding a certain percent each month, for example. This is different from the pre-patterning that leads to sunflower head spirals, but has a similar result: spirals that open up as they go out. If you compare the Fibonacci spiral here with the spiral patterns on the sunflower picture above, it is clear that the sunflower spirals of both directions are more open; they have larger parameters.
The chambered nautilus shell is a great example of a logarithmic spiral, as exact as nature is able to produce. Though it has often been stated that nautilus shells follow a Fibonacci spiral, this is not so. The blue line in the image is a Fibonacci spiral. Its parameter is Φ⁴, or about 6.85. The nautilus shell's spiral has a parameter nearer to 3. Snails also have spirals with parameters in the range around 3, which seems to be the optimum for an animal to increase in size and keep a living chamber deep enough to retreat into, without the shell getting too heavy to lug around. No small-number addition here, just growing by proportion. Fibonacci ratios don't seem to have much to do with animal growth.
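To put numbers on that comparison, here is a minimal sketch of a logarithmic spiral's radius, quarter turn by quarter turn, for two parameters: Φ⁴ for the Fibonacci spiral, and 3 for a nautilus-like shell (3 being the rough figure mentioned above, not a measured value).

```python
# Radius of a logarithmic spiral after each quarter turn, for two
# growth parameters: Phi^4 (the Fibonacci spiral, ~6.85 per turn)
# and ~3 (roughly what a chambered nautilus shows).
from math import sqrt

PHI = (1 + sqrt(5)) / 2

def radius(turns, per_turn_factor, start=1.0):
    """Radius after 'turns' revolutions, growing by 'per_turn_factor' per turn."""
    return start * per_turn_factor ** turns

for name, factor in [("Fibonacci spiral", PHI**4), ("nautilus-like", 3.0)]:
    radii = [round(radius(t / 4, factor), 2) for t in range(9)]  # two full turns
    print(f"{name:17s} (x{factor:.2f} per turn): {radii}")
```

The Fibonacci spiral's radius grows by a factor of Φ every quarter turn, so after two turns it is nearly 47 times its starting size, while the nautilus-like curve has grown only about 9-fold; that is why the nautilus looks so much more tightly wound.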
These examples are considered to express "laws of form" that nature imposes. I guess you can call them that. They emerge naturally from the growth and development of a creature subject to the constraints of its environment: gravity, nutrient availability and space. I suspect a chambered nautilus that experienced a period of near-starvation would show some kind of variation in the curve of its shell. Nothing stays the same, and just as one can grow a cubical watermelon by growing it in a Plexiglas box, we find that genetics provides the potential, and environment either nurtures or hinders it. But it would take one heck of an environmental upset to produce a sunflower that had straight radial lines and concentric circles instead of spirals, if it were possible at all!