kw: book reviews, science fiction, multiple genres
I read through most of this book on an airplane from Phoenix to Philadelphia. Sometimes when I fly I work puzzles the whole time in the air, first whatever is in the airline's magazine, then in a puzzle book. I like the books with a great variety of different kinds of puzzles, not just crosswords or Sudoku. This time I began to read right after push-back, and read pretty steadily through most of the flight.
Do you remember Apple's "Think Different" motto of about 20 years ago? They were criticized for not using "Differently", but the word was not intended as an adverb; it was a noun: "Think [things that are] Different". When I first saw it I recalled the century-old NCR/IBM motto "Think". But that one meant "Think [because nobody else is doing it]".
Well, Hugh Howey thinks Different. Though he has published more than 20 novels and novellas, and a passel of short stories, reading the collection Machine Learning was my first exposure to him. I'll make sure it is not the last.
The volume contains short stories and at least one novella made up of short story-length vignettes, in a few SciFi genres. The author supplied endnotes for the stories, describing what he was thinking at the time. He'll think inside a character: What is going through the mind of a truly bug-eyed, tentacled alien in a force bent on attacking Earth? ("Second Suicide"). He riffs on his friend Kevin Kelly's statement that when a machine first becomes self-aware, the first thing it will do is hide ("Glitch"). He considers the consequences of love between human and robot (Algorithms of Love and Hate, a 3-story sequence). This last reminded me a little of The Bicentennial Man by Isaac Asimov, but with a very different take on societal reactions. Finally, "Peace in Amber" is the author's memoir of going through 9/11 in the actual shadow of the twin towers (until they fell), interspersed with a truly weird alien zoo story. Based on his endnotes, I think the zoo story was needed to "spread out" the memoir so he could handle the flood of emotions.
The word "gripping" comes to mind. Read the book and see what word it evokes in you.
Tuesday, December 26, 2017
Sunday, December 24, 2017
Take Tyson's tour
kw: book reviews, nonfiction, science, astrophysics, popular treatments
What's not to like about Neil deGrasse Tyson? He has become the public face of science today. I love his updated Cosmos series. I have privately studied astrophysics and cosmology enough that perhaps I could have passed by his new book, but I couldn't pass by the enjoyable way he treats his subject. Astrophysics for People in a Hurry is well worth anyone's time, whether you know anything about the subject or not...particularly if not!
This is a rather small book, on purpose. Dr. Tyson knows that today's young adults want everything fast, they want it now, and they want it without fuss. If anyone can deliver up a basic survey of astrophysics and cosmology that meets these requirements, he can. He does so in 12 chapters.
When I think of astrophysics, I think mostly of stellar interiors, but there is much more to it than that. Clearly, from the flow of the book, astrophysics includes cosmology in its purview; probably 2/3 of the book's content is cosmological. But he really does cover all the bases, from the reasons for roundness (gravity wins), to the shapes of galaxies (the tug-of-war between gravity and angular momentum), to the reasons for modern cosmological theory to include both "dark matter" and "dark energy". Chapters 5 and 6 present these mysteries as well as I have ever seen, and explain why they seem to be required for the universe to work the way we observe it working.
I had the great pleasure to encounter a professional cosmologist on an airplane flight four days ago, and we had the chance to talk a little (he wasn't in my row, so our time was limited by physical endurance of turning heads rather sharply). I asked him a question I'd have asked Tyson if I had the chance, "If a unified quantum theory requires a quantum of gravity, how can a graviton get out of a black hole so as to interact with the rest of the universe? What is the emitting surface for a graviton?" He admitted that he hadn't thought of that before. After we talked a while of other things, then broke off for a while, he nudged me, saying, "Consider this. A black hole has three qualities: gravity, angular momentum, and electric charge, right?" I agreed. He continued, "The electric charge is carried by virtual photons, the bosons of electromagnetic force. Real photons cannot escape a black hole; that is why it is black. But the electric charge remains in effect anyway. Thus, the virtual photons do escape—and return to—the black hole to keep the electric charge in place." I thanked him for providing a marvelous "hole" in my considerations of gravitons and black holes. I suspect this is the same answer Tyson would give. Now, upon further thought, I wonder if the electric charge is held within the black hole, or remains attached somehow to the event horizon. From there (or very slightly above it), even real photons could escape if needed. But if virtual photons can indeed escape a black hole, then virtual gravitons could also.
This matter doesn't enter into the book. What does enter in, is how all the pieces fit together. Tyson gives us plenty of food for thought. One of my favorites is playing a numbers game with molecules and time. Here is my version of "Whose air are we breathing?":
Part 1
- The air above 1 cm² of Earth weighs about 1 kg.
- The average molecular weight of air is about 29.
- Thus each kg of air contains about 34.5 gm-moles.
- 1 gm-mole contains 6.02×10²³ molecules (or atoms) of any substance.
- That comes to just over 2×10²⁵ air molecules above each cm².
- The surface area of Earth is 510 million km², or 5.1×10¹⁸ cm².
- Thus the atmosphere contains a bit more than 10⁴⁴ molecules.
- Our total lung capacity is around 6 liters (with a rather wide range).
- Our "tidal" capacity, the amount we usually take in with each breath, is about half a liter.
- That is about 0.022 gm-moles, or 1.3×10²² molecules.
- An average person breathes about 23,000 times daily, when not exercising a lot, or about 8.4 million breaths yearly.
- Napoleon Bonaparte lived 51 years, but let's be generous and count a 60-year span of breathing.
- In a 60-year span, the number of breaths comes to about 500 million.
- All those breaths add up to about 6.6×10³⁰ air molecules.
- All the air Napoleon breathed amounts to about 1/15-trillionth of the atmosphere.
- 1/15-trillionth of one tidal breath is about 880 million air molecules.
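The arithmetic in the list above is easy to check with a few lines of Python. The inputs (1 kg of air per cm², 22.4 liters per mole, 23,000 breaths a day, a 60-year span) are the rounded figures from the list, not precise measurements, so the outputs are order-of-magnitude estimates:

```python
# Back-of-envelope check of "Whose air are we breathing?"
AVOGADRO = 6.02e23          # molecules per gram-mole
MOLAR_MASS_AIR = 29.0       # g/mol, average for air

# The column of air above 1 cm^2 weighs about 1 kg at sea level
moles_per_cm2 = 1000.0 / MOLAR_MASS_AIR           # ~34.5 gm-moles
molecules_per_cm2 = moles_per_cm2 * AVOGADRO      # ~2.1e25

earth_surface_cm2 = 510e6 * 1e10                  # 510 million km^2 in cm^2
atmosphere_molecules = molecules_per_cm2 * earth_surface_cm2   # ~1e44

# One tidal breath: ~0.5 L at ~22.4 L per mole (standard conditions)
breath_molecules = 0.5 / 22.4 * AVOGADRO          # ~1.3e22

breaths_60_years = 23_000 * 365 * 60              # ~5e8 breaths
breathed_molecules = breaths_60_years * breath_molecules       # ~6.6e30

fraction = breathed_molecules / atmosphere_molecules
per_breath = fraction * breath_molecules  # "his" molecules in each of ours

print(f"Atmosphere: {atmosphere_molecules:.2e} molecules")
print(f"Fraction of atmosphere breathed: 1 part in {1/fraction:.2e}")
print(f"His molecules in each tidal breath you take: {per_breath:.2e}")
```

Run it and the last number lands near 9×10⁸, in agreement with the "880 million" in the list: on these assumptions, every breath you take contains hundreds of millions of molecules that once passed through Napoleon's lungs.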
A particular aim of Dr. Tyson, in everything he writes and says in his programs, is to impress us with the power of the scientific method. We don't learn "how the world works" by guessing. We observe, make tentative conclusions based on observations, argue with others about them, eventually turn the conclusions into a hypothesis that we can test, and then repeat as needed. Now, in cosmology, a "test" would take billions of years. This isn't chemistry, in which you can mix a few things in a jar and take a measurement in a matter of seconds or minutes. Neither is it biology; we have no cosmological Gregor Mendel, crossbreeding stars as though they were peas. But we can work out the math and see how it squares with the things we see.
In science, more than in any other endeavor, "No man is an island." No woman either. The popular trope of the loner in a stained lab coat making a major discovery is simply unknown to real science. Even a few centuries ago, when chemistry was emerging from alchemy and astronomy was emerging from astrology, a "lonely genius" was really a highly social being, surrounded by helpers, colleagues, opponents, and many others. The quintessential scientific loner, Isaac Newton, spent much more time discussing his findings and theories with members of the Royal Society, including friends, "frenemies", and enemies, than he did carrying out observations or even thinking out his theories. Without a helpful gadfly-friend to prod him, he'd never have finished writing his Principia. So although Newton was famously antisocial, he still had to interact socially for his science to have usefulness and meaning. But that's the beauty of science. It is our great, collaborative enterprise of looking back at the Universe that birthed us, to see how it was done, and a great many more things of interest also.
This isn't a textbook. It provides not an education in the subject but a vision of what astrophysics is. If you treat it sort of like a textbook, and write down ideas that interest you as you go along, you'll gather fodder for any further studies you might wish to carry out. That's the kind of thing I've done all my life.
Monday, December 18, 2017
Growing up unique
kw: book reviews, science fiction, space opera, child prodigies
Fiction authors frequently write to explore. I first recognized this while reading Isaac Asimov's Robot stories, in which he explored the boundaries of the Three Laws of Robotics. He had hinted at them in "Robbie" and first stated them explicitly in "Runaround", both collected in I, Robot. Years later I realized he was also exploring the boundaries of neurosis. As I learned of his life, including what he wrote in several memoirs, I understood that he was profoundly neurotic, and that he used his characters, the ever-more-perfect and godlike robots in contrast to the all-too-faulty humans, to work through the ramifications of neurosis in himself.
I have read novels by Orson Scott Card for about thirty years, beginning with Ender's Game. I don't know if I have read all the Ender series books. I did read all of the Homecoming books, and it is more than clear that in those Card is exploring the boundaries of morality and altruism. His character Nafai is pathologically altruistic.
When I read Ender's Game I wasn't ready for it. I was a mere 40-year-old. I took it at face value, as a coming-of-age novel in a space opera setting. Speaker for the Dead and other Ender series books also left me bemused. Now, just this year, more than thirty years later, Children of the Fleet adds another layer to the Ender saga, and I think I am beginning to understand.
The children in this novel, including the protagonist, Dabeet Ochoa, resemble those in earlier books in that they think rather consistently at an adult level, and perform certain adult tasks, though with some limitations because they are, after all, mostly pre-teens. None has yet hit the pubertal growth spurt, so they wear child-sized space suits, for example.
I was forcibly struck in this novel (and in retrospect, in Ender's Game) that Ender and Dabeet are victims of profound child abuse. Each is massively distorted from what he might have been in a more usual environment. Ender completed his mission, one supplied by others without his knowledge, by becoming the "Xenocide", the one responsible for annihilating the Formics, an insectile alien species. Dabeet's mission is only partly concealed, and he initially conceals it from others. In carrying it out, he brings life, not death (except indirectly, to a couple of all-too-human evildoers), and he prevents massive death.
Rather than dig further into the novel, I want to riff on the meaning of intelligence. We all think we know what intelligence is, but if asked to describe it, none can do so. For a few generations, tests of IQ (Intelligence Quotient) were thought to measure it, but they really tend to measure a small collection of cognitive and memory feats that are more machinelike than I care for. I wonder how the supercomputer Watson would fare on a Stanford-Binet test.
Further, the meaning of IQ has changed over the years. Originally, an IQ test was used with children ten years old and under, to compare their performance with sixteen-year-olds. I don't know how the test was normed (normalized), but apparently youngsters of ages between six and sixteen were tested to establish the "normal" performance of each year cohort. Then higher or lower performance could be compared with these norms to establish an IQ score: 100 for "normal for one's age". Based on the scatter displayed within each cohort, a Gaussian distribution was fitted and a standard deviation of 16 (later 15) was applied. So, when I was given an IQ test in third grade, at age 7, and my IQ score was 170, that supposedly meant that, in the memory and cognitive skills that were measured, I was performing at the level of a 12-year-old (11.9 to be precise). All I knew at the time was that, having begun to learn to read on my own when I began first grade as a 5-year-old (I turned 6 three months later), as a third grader I was indeed reading books usually seen in the book bags of seventh graders.
But how do you measure the IQ of an adult? When I was 20 did I have the "smarts" of a 34-year-old? Does such a question even have meaning? I think not. Others who considered cognitive psychology their calling thought about this quite deeply, and re-normed the test, making the standard deviation (σ) meaningful as a measure of scarcity. Thus, in any Gaussian distribution, the p statistic for ±2σ is 0.9545, or about 21/22. With σ = 15 and a mean of 100, the range ±2σ is from 70 to 130. So if you have a "normal" group of 44 people, one is likely to have an IQ of 70 or less, and one is likely to have an IQ of 130 or more.
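The σ arithmetic above can be checked with nothing more than the standard library's error function. A quick sketch, using the re-normed values just described (mean 100, σ = 15):

```python
# How rare is a given IQ score under Gaussian norming with mean 100, sigma 15?
from math import erf, sqrt

def fraction_within(z):
    """Fraction of a normal population within +/- z standard deviations."""
    return erf(z / sqrt(2))

def one_in(iq, mean=100.0, sigma=15.0):
    """Roughly how many people per person scoring at or above this IQ."""
    z = (iq - mean) / sigma
    upper_tail = (1.0 - erf(z / sqrt(2))) / 2.0   # area above z
    return 1.0 / upper_tail

print(f"Within ±2σ: {fraction_within(2):.4f}")      # ~0.9545, about 21 in 22
print(f"IQ 130 (+2σ): one in {one_in(130):,.0f}")   # ~ one in 44
print(f"IQ 160 (+4σ): one in {one_in(160):,.0f}")   # ~ one in 31,600
```

The ±2σ figure of 0.9545 and the one-in-roughly-31,000 rarity of a 4σ score both fall straight out of the error function, matching the numbers in the text.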
I can tell you from experience, though, that IQ has little relation to street smarts. As an adult, my IQ has settled to 160, or 4σ above "average", a level achieved by one person in about 31,000. As a pre-teen and early teen, I finally realized I was not very likable. I began to work toward fixing that. I felt that if I did not have good social reactions automatically, as my age-mates did, I would have to observe, learn, and calculate those reactions. I did so. I used to look at that 170-to-160 shift as "giving up 10 IQ points for a better SQ" (Sociability Quotient). Thus, this paragraph found near the end of Children of the Fleet hit me with special resonance:
"Maybe making and keeping friends will always require me to think through the steps of it … Maybe it will never be natural for me, never reflexive, never easy. So be it. I can't live without it, can't accomplish anything without it, so I will become adequate at forcing myself, against my inclinations, to be a friend to my friends. If I'm good at it, they'll never guess the effort that it requires."

Dabeet's musings match mine at just about the same age. Now I'll tell you what happened after I was 40. No details, just this: I had occasion to learn, through a personality test, that my "calculated person" was pretty good; but also, because a part of the test elicited reactions that had to be too fast for my calculations, I learned that a "natural" personality was truly there, and it was also pretty good! I came away with a proverb, "You cannot build a tree." I had found out, after a few decades of tree construction and maintenance, that a perfectly adequate tree had grown up beneath my notice and could be relied upon to be a "me" that didn't need all the effort. I am happier and calmer as a result.
If Dabeet is a reflection of Card's view of himself, as I suspect, maybe he is in the midst of learning, or will soon learn, the same thing. Let's see where the next of Card's novels takes us, and him.
Thursday, December 14, 2017
Bill Nye the Climate Guy
kw: book reviews, nonfiction, scientific method, climate change, polemics
Bill Nye is one of my all-time favorite people. The fact that I was dismayed by some aspects of his recent book doesn't diminish my admiration for him. He is a top-notch science educator and a writer I enjoy reading.
Bill Nye's new book, Everything All At Once: How to Unleash Your Inner Nerd, Tap into Radical Curiosity, and Solve Any Problem, is ostensibly about that middle phrase: "Release your inner nerd." It is primarily an evangelical work, aimed at anyone on the fence between those who "believe" in climate change and the climate-change "deniers". Along the way, though, he offers great examples and advice for many folks who may be a bit tech-averse, to see how humans are by nature technical beings, and that solving problems is what we do best—or we can, if we go about it right.
I hope a great many people will indeed read this book. It is very well written. The author manages to press his pro-climate change case pretty hard without becoming entirely disagreeable. I will address my concerns in a moment.
Let me first state my background in the matter; it is a subject I have followed for nearly sixty years.
When I was a child I heard about the "Greenhouse Effect". It was already old news: in 1896 Svante Arrhenius calculated that a doubling of the CO2 concentration in the atmosphere would raise average global temperature by about 5°C (9°F to us Americans). At the age of twelve I was able to learn enough math to reproduce Arrhenius's result.
In actuality, "greenhouse effect" is not an entirely accurate metaphor. In a greenhouse, the glass physically traps air warmed by the sun, and its spectral emissivity (transparent to visible light, opaque to infrared) enhances the effect. A "greenhouse gas" cannot physically trap warm air; it causes extra heating solely through spectral emissivity.
The terms "Global Warming" and "Climate Change" began to be used by some in about 1975, and their use ramped up greatly after 1985. "Greenhouse Effect" also took off about that time, when the atmospheric effects they all refer to became a political football. Then a funny thing happened. Looking at the Google Ngram Viewer, I find that since 1992 "Greenhouse Effect" rapidly fell out of favor, "Climate Change" became the term of choice, with "Global Warming" running a rather distant second.
The problem with all this is that "Greenhouse Effect" denotes a possible cause, while the other two terms refer to effects. So now let us back up and examine the term I threw in earlier, "spectral emissivity". For solid materials, this refers to a departure from the spectral behavior of a blackbody or graybody. To be specific, a perfect blackbody surface, with an emissivity of 1, will heat up to a temperature that depends only on the energy being radiated to it; by the Stefan–Boltzmann law, the power it radiates away is proportional to T⁴. A perfect reflector, with an emissivity of 0, will not be heated at all. What about a perfectly gray paint, with an emissivity of, say, 0.5 at every wavelength? A surface coated with it absorbs only half the incoming sunlight, but it also radiates only half as efficiently, so the two effects cancel and it settles at nearly the same temperature as the blackbody. The interesting behavior appears when the emissivity differs between the wavelengths at which a surface absorbs and those at which it re-radiates.
Now, consider a "step-spectral" surface. Suppose it has an emissivity of 1 for visible light, and an emissivity of 0 for infrared light. Let's put the cutoff at 700 nm. A surface with this characteristic, in a vacuum so air will not carry off any heat, and with only visible light shined upon it, would heat up until it was hot enough to radiate away that same amount of radiant energy. In visible light it would appear black. It absorbs light, but if it is cool, emits nearly none. Thus it must heat up. You might know from experience that the heating element in an oven gets to about 600°C before it begins to glow reddish, and at 800°C it is getting orange-red. The great majority of its radiation, however, is at infrared wavelengths longer, much longer, than the 700 nm radiation we call "deep red". If it is prevented by the step-spectral emissivity from radiating at those longer wavelengths, it must, perforce, heat up until it is radiating a lot of visible light, to balance the incoming light. Thus a step-spectral surface tends to get very hot indeed, hotter than an oven element.
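The radiative balance just described can be sketched numerically with the Stefan–Boltzmann law, for a surface facing the sun in vacuum. The 0.05 infrared emissivity below is an illustrative number I chose for the step-spectral case, not a real material:

```python
# Radiative equilibrium of a sunlit surface in vacuum:
# absorbed flux = (IR emissivity) * sigma * T^4
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0    # solar constant at Earth's distance, W m^-2

def equilibrium_T(absorbed_fraction, ir_emissivity):
    """Steady-state temperature where absorption balances IR emission."""
    return (absorbed_fraction * SOLAR / (ir_emissivity * SIGMA)) ** 0.25

# A perfect blackbody facing the sun: ~394 K
print(f"Blackbody:    {equilibrium_T(1.0, 1.0):.0f} K")
# A uniform graybody (0.5 everywhere) reaches the SAME temperature,
# because absorption and emission scale down together:
print(f"Graybody 0.5: {equilibrium_T(0.5, 0.5):.0f} K")
# A step-spectral surface: absorbs visible fully but emits IR poorly,
# so it must run far hotter to shed the same energy:
print(f"Step-spectral (IR emissivity 0.05): {equilibrium_T(1.0, 0.05):.0f} K")
```

With full absorption but only 5% infrared emissivity, the equilibrium temperature climbs past 800 K, which is the "hotter than an oven element" behavior described above.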
Now we can consider gases. Oxygen and nitrogen hardly absorb any light at any wavelength of interest to us as we consider the heat balance of our atmosphere. There is a common gas, however, that does absorb a lot of light, at a range of wavelengths that make it a strong greenhouse gas. That is water vapor. Surprised? We will look at some spectra in a moment. First, qualitatively, we find that water vapor absorbs a lot of ultraviolet light, but absorbs even more strongly in several ranges throughout the infrared, with narrow absorption bands at about 1.2 and 1.9 microns, a wider band from 2.5-3 microns, and a wide, almost total absorption feature from 5 to 7.5 microns. The result of this is that if Earth had no atmosphere it would be 32°C (about 60°F) cooler than it is. A perpetual ice age without the ice. So water vapor is by far the strongest greenhouse gas, and is responsible for life being able to exist on earth.
"Climate Change" is all about carbon dioxide (CO2). What does this gas do? It also has spectral emissivity, with an absorption band at about 2.7 microns, a stronger one near 4.2 microns, and a third between 12-16 microns. This last one is of primary interest. It is perfectly placed to absorb about 10% of the thermal radiation from warm dirt, meaning that the dirt has to get a little warmer to radiate that extra energy at other wavelengths. And that is what is behind Arrhenius's greenhouse effect calculation.
Greenhouse gases operate a little differently from painted surfaces. Dirt and other stuff on Earth's surface has spectral emissivity, of course, but not nearly with the perfection of the step-spectral material discussed earlier. So it reflects a lot of light, absorbs some, and gets warm enough to radiate some infrared. In a vacuum, dirt with sunlight shining on it would have some specific temperature. Now put a layer of greenhouse gas above it, an atmosphere containing water vapor. The incoming sunlight is not affected much. But the outgoing infrared from the warm dirt is partly absorbed by the water vapor, which heats up and radiates also, with half going up and half going down. This causes the dirt to get warmer, until it is able to radiate enough to balance its thermal outflow with the radiative inflow from sunlight and also the re-radiated infrared from the warm air above it. How does CO2 modify this picture? It absorbs a little more infrared radiation, in portions of the spectrum in which water is rather transparent. So CO2 strengthens the greenhouse effect. Now, here are the spectra:
I don't know the original source of this graph. It is found all over the place. It also shows a tiny contribution from oxygen and ozone, but we won't consider those here (in the "ozone layer" the temperature goes up significantly, however).
The blue line is for water vapor. The curve marked 255K shows the thermal radiation from a piece of ice at -18°C or 0°F. "Room temperature" is close to 300K or 27°C (81°F). Its radiation curve would be a little to the left of the one shown.
The point is, water vapor reflects back a lot of the radiation from the earth and even from glaciers. Yes, glaciers radiate infrared also. The blue line is for water vapor with a content near 0.3% of the atmosphere, or near saturation (100% relative humidity) at ice temperature. The CO2 curve is for a few hundred ppm; the sources I read didn't state exactly. The result of increasing the amount of CO2 would be to widen the bands, as their "wings" absorbed more and more. This shows what happens when these two gases lead to greenhouse warming.
Now it is a separate issue, whether this is actually causing climate change. "Deniers" say not so, proponents of the idea that CO2 is a "pollutant" say it is. I won't get into that. We have measured that, from the time I was a little child and there was less than 300 ppm CO2 in the atmosphere, and today, when the amount is 400 ppm, global atmospheric average temperature has risen just under 1°C.
Is that a lot, one degree C? Let's look at one factor. Water expands when heated. Heating water by 1°C yields an expansion of 0.000214, or 0.0214%. The ocean averages four km in depth. If the entire ocean were warmed by 1°C, it would be 0.000214x4,000m = 0.856m deeper (33.7 inches). That is enough to force the evacuation of some low-lying areas and certain island nations such as Tuvalu. "Climate evacuation" has already started. But has the whole ocean heated by that much? Not yet. Give it time. The early evacuations were the result of less than one-third of this figure.
I'll stop there. These are not easy points to make with a public that largely doesn't care. Thus, Bill Nye's passion. He wants to make everyone care. But as I read I took careful note: will he mention water vapor? He does not, except for a throwaway phrase in a late chapter. We can't ignore water, for another reason. Trapping a little more heat means adding energy to the system. That means more water could evaporate. Whether it will or not is a huge area of controversy in the climate modeling arena. Water is complex. It might be the most complex substance there is. It is possible that the added energy will yield a net drying rather than adding more water. We might see more rain, or less rain, overall, and nobody yet has a good handle on which areas might experience greater or reduced rainfall. Oh, I've seen a few predictions, but none is well supported by robust evidence.
I agree with Bill Nye, though, that we need to be reducing our dependence on "convenient" energy from burning stuff (mainly fossil fuels), and toward solar, wind and other "alternatives". A generation ago the oil companies began calling themselves energy companies. But they are really still oil and coal and gas companies, with only tiny amounts being spent on non-carbon energy production. They could become the heroes of the 22nd Century. But I fear they will more likely be the goats. I just don't know who else has money enough to do the research to make solar and wind as ubiquitous as they need to become. And there, I think the Science Guy might agree. Read the book. Agree with Bill Nye or not, you're in for a fun ride.
Bill Nye is one of my all-time favorite people. The fact that I was dismayed by some aspects of his recent book doesn't diminish my admiration for him. He is a top-notch science educator and a writer I enjoy reading.
Bill Nye's new book, Everything All At Once: How to Unleash Your Inner Nerd, Tap into Radical Curiosity, and Solve Any Problem, is ostensibly about the first phrase of that subtitle: "Unleash your inner nerd". It is primarily an evangelical work, aimed at anyone on the fence between those who "believe" in climate change and the climate-change "deniers". Along the way, though, he offers great examples and advice for folks who may be a bit tech-averse, showing how humans are by nature technical beings, and that solving problems is what we do best, or at least what we can do best if we go about it right.
I hope a great many people will indeed read this book. It is very well written. The author manages to press his pro-climate change case pretty hard without becoming entirely disagreeable. I will address my concerns in a moment.
Let me first state my background in the matter; it is a subject I have followed for nearly sixty years.
When I was a child I heard about the "Greenhouse Effect". It was already old news: Svante Arrhenius calculated in 1896 that a doubling of the CO2 concentration in the atmosphere would raise average global temperature by about 5°C (that is 9°F to us Americans). At the age of twelve I was able to learn enough math to reproduce Arrhenius's result.
In actuality, "greenhouse effect" is not an entirely accurate metaphor. In a greenhouse, the glass physically traps air warmed by the sun, while its spectral emissivity (transparent to visible light, opaque to infrared) enhances the effect. A "greenhouse gas" cannot physically trap warm air, but causes extra heating solely via spectral emissivity.
The terms "Global Warming" and "Climate Change" began to be used by some in about 1975, and their use ramped up greatly after 1985. "Greenhouse Effect" also took off about that time, when the atmospheric effects they all refer to became a political football. Then a funny thing happened. Looking at the Google Ngram Viewer, I find that since 1992 "Greenhouse Effect" has rapidly fallen out of favor and "Climate Change" has become the term of choice, with "Global Warming" running a rather distant second.
The problem with all this is that "Greenhouse Effect" denotes a possible cause, while the other two terms refer to effects. So now let us back up and examine the term I threw in earlier, "spectral emissivity". For solid materials, this refers to a departure from the spectral behavior of a blackbody or graybody. If we could produce a paint that was perfectly gray, at any level of grayness, throughout the electromagnetic spectrum, we could paint it on a surface and it would cause an amount of heating, when the sun shone upon it, directly correlated to the total emissivity. To be specific, a perfect blackbody surface will heat up to a temperature that depends only on the energy being radiated to it. It has an emissivity of 1. A perfect reflector will not be heated at all. It has an emissivity of 0. A perfect graybody surface with an emissivity of 0.5 will heat up to an intermediate temperature, according to a proportionality constant times the Stefan-Boltzmann factor T⁴.
Now, consider a "step-spectral" surface. Suppose it has an emissivity of 1 for visible light, and an emissivity of 0 for infrared light. Let's put the cutoff at 700 nm. A surface with this characteristic, in a vacuum so air will not carry off any heat, and with only visible light shined upon it, would heat up until it was hot enough to radiate away that same amount of radiant energy. In visible light it would appear black. It absorbs light, but if it is cool, emits nearly none. Thus it must heat up. You might know from experience that the heating element in an oven gets to about 600°C before it begins to glow reddish, and at 800°C it is getting orange-red. The great majority of its radiation, however, is at infrared wavelengths longer, much longer, than the 700 nm radiation we call "deep red". If it is prevented by the step-spectral emissivity from radiating at those longer wavelengths, it must, perforce, heat up until it is radiating a lot of visible light, to balance the incoming light. Thus a step-spectral surface tends to get very hot indeed, hotter than an oven element.
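The balance just described takes only a few lines of arithmetic to sketch. In this illustration the solar flux of 1361 W/m² and the infrared emissivity of 0.01 are assumed values chosen to make the point, not figures from the text:

```python
# Radiative equilibrium of a surface in vacuum: absorbed visible light
# must equal emitted infrared, eps_ir * sigma * T**4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp_K(flux_w_m2, vis_absorptivity=1.0, ir_emissivity=1.0):
    """Temperature at which emitted IR balances absorbed visible light."""
    return (vis_absorptivity * flux_w_m2 / (ir_emissivity * SIGMA)) ** 0.25

# A blackbody facing full sunlight settles near 394 K (about 120 C).
t_black = equilibrium_temp_K(1361.0)

# A "step-spectral" surface that absorbs visible light fully but emits
# infrared with emissivity of only 0.01 must run far hotter to shed
# the same energy:
t_step = equilibrium_temp_K(1361.0, ir_emissivity=0.01)
```

With those assumed numbers the step-spectral surface reaches roughly 1245 K (about 970°C), hotter than a glowing oven element, which is the behavior the paragraph above describes.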
Now we can consider gases. Oxygen and nitrogen hardly absorb any light at any wavelength of interest to us as we consider the heat balance of our atmosphere. There is a common gas, however, that does absorb a lot of light, at a range of wavelengths that make it a strong greenhouse gas. That is water vapor. Surprised? We will look at some spectra in a moment. First, qualitatively, we find that water vapor absorbs a lot of ultraviolet light, but absorbs even more strongly in several ranges throughout the infrared, with narrow absorption bands at about 1.2 and 1.9 microns, a wider band from 2.5 to 3 microns, and a wide, almost total absorption feature from 5 to 7.5 microns. The result is that if Earth had no atmosphere it would be 32°C (about 60°F) cooler than it is: a perpetual ice age without the ice. So water vapor is by far the strongest greenhouse gas, and is responsible for life being able to exist on Earth.
"Climate Change" is all about carbon dioxide (CO2). What does this gas do? It also has spectral emissivity, with an absorption band at about 2.7 microns, a stronger one near 4.2 microns, and a third between 12 and 16 microns. This last one is of primary interest. It is perfectly placed to absorb about 10% of the thermal radiation from warm dirt, meaning that the dirt has to get a little warmer to radiate that extra energy at other wavelengths. And that is what is behind Arrhenius's greenhouse effect calculation.
Greenhouse gases operate a little differently from painted surfaces. Dirt and other stuff on Earth's surface has spectral emissivity, of course, but not nearly with the perfection of the step-spectral material discussed earlier. So it reflects a lot of light, absorbs some, and gets warm enough to radiate some infrared. In a vacuum, dirt with sunlight shining on it would have some specific temperature. Now put a layer of greenhouse gas above it, an atmosphere containing water vapor. The incoming sunlight is not affected much. But the outgoing infrared from the warm dirt is partly absorbed by the water vapor, which heats up and radiates also, with half going up and half going down. This causes the dirt to get warmer, until it is able to radiate enough to balance its thermal outflow with the radiative inflow from sunlight and also the re-radiated infrared from the warm air above it. How does CO2 modify this picture? It absorbs a little more infrared radiation, in portions of the spectrum in which water is rather transparent. So CO2 strengthens the greenhouse effect. Now, here are the spectra:
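That "half going up and half going down" bookkeeping is the classic one-layer greenhouse model, which is simple enough to compute. As a sketch it idealizes the layer as absorbing all of the surface's infrared (the real atmosphere absorbs only part of it), and the 240 W/m² value for Earth's average absorbed sunlight is my round figure, not the author's:

```python
# One-layer greenhouse model: an IR-opaque layer above the surface
# re-radiates half of what it absorbs upward and half downward.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def surface_temp_K(absorbed_sunlight_w_m2):
    """Surface temperature with one fully IR-absorbing layer above it.

    Layer balance:   2 * sigma * T_layer**4 = sigma * T_surface**4
    Surface balance: sunlight + sigma * T_layer**4 = sigma * T_surface**4
    which combine to T_surface = 2**0.25 * T_bare.
    """
    t_bare = (absorbed_sunlight_w_m2 / SIGMA) ** 0.25
    return 2 ** 0.25 * t_bare

t_bare = (240.0 / SIGMA) ** 0.25      # ~255 K for a bare airless surface
t_greenhouse = surface_temp_K(240.0)  # ~303 K with the layer in place
```

The bare-surface figure of about 255 K is the same -18°C that labels one curve in the spectra discussed below; a single fully absorbing layer lifts the surface by about 48°C, so the real atmosphere's partial absorption landing at 32°C is plausible.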
I don't know the original source of this graph. It is found all over the place. It also shows a tiny contribution from oxygen and ozone, but we won't consider those here (in the "ozone layer" the temperature goes up significantly, however).
The blue line is for water vapor. The curve marked 255K shows the thermal radiation from a piece of ice at -18°C or 0°F. "Room temperature" is close to 300K or 27°C (81°F). Its radiation curve would be a little to the left of the one shown.
The point is, water vapor absorbs and re-radiates back a lot of the radiation from the earth, and even from glaciers. Yes, glaciers radiate infrared also. The blue line is for water vapor with a content near 0.3% of the atmosphere, or near saturation (100% relative humidity) at ice temperature. The CO2 curve is for a few hundred ppm; the sources I read didn't state exactly. Increasing the amount of CO2 would widen its bands, as their "wings" absorbed more and more. This shows how these two gases produce greenhouse warming.
Now it is a separate issue whether this is actually causing climate change. "Deniers" say not so; proponents of the idea that CO2 is a "pollutant" say it is. I won't get into that. What we have measured is this: between the time I was a little child, when there was less than 300 ppm of CO2 in the atmosphere, and today, when the amount is about 400 ppm, the global average atmospheric temperature has risen just under 1°C.
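As a back-of-envelope check on those round numbers: Arrhenius's rule is that the warming grows with the logarithm of the concentration ratio, ΔT = S × log2(C/C0), where S is the warming per doubling of CO2. This sketch treats the observed rise as entirely due to CO2 and ignores ocean lag, other gases, and aerosols, so take it as an illustration only:

```python
# Warming per CO2 doubling implied by an observed temperature rise,
# using the logarithmic forcing rule DeltaT = S * log2(C / C0).
from math import log2

def implied_sensitivity(delta_t_C, ppm_now, ppm_then):
    """Degrees C per CO2 doubling implied by a rise of delta_t_C."""
    return delta_t_C / log2(ppm_now / ppm_then)

# ~1 C of warming while CO2 went from ~300 to ~400 ppm:
s = implied_sensitivity(1.0, 400.0, 300.0)  # ~2.4 C per doubling
```

That crude 2.4°C per doubling lands well below Arrhenius's original 5°C, but the simplifications above are large, so agreement in order of magnitude is the only real takeaway.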
Is that a lot, one degree C? Let's look at one factor. Water expands when heated. Heating water by 1°C yields an expansion of 0.000214, or 0.0214%. The ocean averages four km in depth. If the entire ocean were warmed by 1°C, it would be 0.000214 × 4,000 m = 0.856 m deeper (33.7 inches). That is enough to force the evacuation of some low-lying areas and certain island nations such as Tuvalu. "Climate evacuation" has already started. But has the whole ocean heated by that much? Not yet. Give it time. The early evacuations were the result of less than one-third of this figure.
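The arithmetic in the paragraph above can be made explicit; the 0.000214 per °C volumetric expansion coefficient is the figure used there (in reality it varies with temperature and salinity):

```python
# Sea-level rise from thermal expansion of a uniformly warmed column.
def thermal_rise_m(depth_m, delta_t_C, beta_per_C=0.000214):
    """Extra depth when a water column of depth_m warms by delta_t_C."""
    return depth_m * delta_t_C * beta_per_C

rise_m = thermal_rise_m(4000.0, 1.0)   # 0.856 m for the whole ocean
rise_inches = rise_m / 0.0254          # about 33.7 inches
```

Warming only the top 1,000 m by the same 1°C, for instance, gives about 0.21 m, consistent with the "less than one-third of this figure" behind the early evacuations.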
I'll stop there. These are not easy points to make with a public that largely doesn't care. Thus, Bill Nye's passion. He wants to make everyone care. But as I read I took careful note: will he mention water vapor? He does not, except for a throwaway phrase in a late chapter. We can't ignore water, for another reason. Trapping a little more heat means adding energy to the system. That means more water could evaporate. Whether it will or not is a huge area of controversy in the climate modeling arena. Water is complex. It might be the most complex substance there is. It is possible that the added energy will yield a net drying rather than adding more water. We might see more rain, or less rain, overall, and nobody yet has a good handle on which areas might experience greater or reduced rainfall. Oh, I've seen a few predictions, but none is well supported by robust evidence.
I agree with Bill Nye, though, that we need to reduce our dependence on "convenient" energy from burning stuff (mainly fossil fuels), and move toward solar, wind and other "alternatives". A generation ago the oil companies began calling themselves energy companies. But they are really still oil and coal and gas companies, with only tiny amounts being spent on non-carbon energy production. They could become the heroes of the 22nd Century. But I fear they will more likely be the goats. I just don't know who else has money enough to do the research to make solar and wind as ubiquitous as they need to become. And there, I think the Science Guy might agree. Read the book. Agree with Bill Nye or not, you're in for a fun ride.
Friday, December 08, 2017
The most popular snails
kw: species summaries, natural history, natural science, museums, research, photographs
For the current series of projects at the Delaware Museum of Natural History, I have worked through several families of terrestrial gastropods (land snails and tree snails). Many of these are quite inconspicuous, being small and not colorful, though they are in general a little more varied than the little brown "mud snails" (freshwater gastropods) I worked with for most of 2016.
You know, in any group of creatures, most are rather inconspicuous and poorly known. The "typical" mammal is a "little brown furry thing" such as a mouse, vole, shrew or lemming. The "typical" bird is a "little brown feathered thing" such as a wren or sparrow. The "typical" insect is a "little dark beetle" about the size of a grain of rice. The world is full of little brown things and we hardly notice them.
But we really like the colorful "charismatic" ones. Among the land snails, that would be the tree snails of Florida and the Caribbean, of the genus Liguus.
This is part of a drawer of "unidentified" lots of Liguus fasciatus, the poster child for pretty tree snails. Though these have been identified as to species, L. fasciatus has many "forms" or "varieties", which we provisionally catalog as subspecies, but they probably aren't really subspecies. We usually call them color forms. They hybridize freely, but a particular color form is usually physically separated from most others, being endemic to a few "hammocks", as small patches of raised and heavily vegetated ground are known in the area.
These are mostly from an area of the Everglades called Pinecrest, named for a ghost town tucked away in the middle of a couple of hundred hammocks. You can clearly see that most of these lots are in need of splitting into their color forms. Any particular hammock may be inhabited by a few color forms. A collector in a hurry will gather a couple of dozen shells, put them in a box or bag with a label (date, provisional ID, and location, at the least, to be a useful specimen lot), and move on to the next hammock a few minutes' walk away in the dry season, or a short airboat ride away the rest of the year.
Here is the prettiest of the color forms, in my opinion:
On your computer screen this may be a bit larger than life size. The paper label is 3 inches long, so these shells are about 2 inches long, a little bigger than the average for the species. Liguus fasciatus splendidus Frampton, 1932, must have been Henry Frampton's favorite also. These are indeed splendid! This lot was collected by Erwin Winte a few years after Frampton described them, and in the 1980s it wound up at DMNH.
These shells are so sought after that, though they are prolific and widespread, many color forms are getting hard to find. In the southeast U.S. and the Caribbean, a whole subset of shell collectors are called "Liguus collectors". We are loving them to death!
This only serves to introduce these lovely shells. I hope soon to gather pictures of several color forms, and also to compare L. fasciatus with its sister species in the genus.
Friday, December 01, 2017
Drones fly - monsters die
kw: book reviews, nonfiction, memoirs, soldiers, drones
Brett Velicovich passed by a protest a few years ago. People were wearing or waving mockups of the Predator drone and chanting, "Drones fly, babies die." The colossal ignorance they displayed got through his post-traumatic apathy like nothing had since he returned from five combat tours over more than a decade, in the elite Delta unit, the one that flies the Predator, Reaper and other military drones in Iraq, Afghanistan and other places where the most vicious terrorists operate. He knew what really happens, who really dies, and more importantly, who really doesn't die (that would be most of us! Babies included). He got help from Christopher S. Stewart to write a book about the reality of drone warfare, Drone Warrior: An Elite Soldier's Inside Account of the Hunt for America's Most Dangerous Enemies.
Brett V. was the intelligence specialist on a drone team. He led the work of gathering information and deciding how and when to arrest, or, if needed, kill an enemy. After President Obama was elected, the autonomy of the drone teams was reduced, and the President mandated that he must personally authorize each kill, whether by drone or by a raid. I saw a video from late in the Obama presidency in which he was discussing the more than 3,000 killed at his say-so. He said, "It turns out I am really good at killing. I never thought that would be an item on my résumé."
There were, and are, several drone teams. It doesn't become clear in the book how many kills and arrests occurred under the author's purview. But certain numbers stand out, and this one is primary: for every kill there were twenty arrests, and most of them led to useful intelligence. So the 3,000 terrorist leaders whose death was authorized by President Obama are accompanied by the arrest and interrogation of about 60,000 others. That is the key to a war against ISIS and similar enemy groups.
No matter what you think about the use of military drones, you have to read this book. Furthermore, the author portrays unblinkingly what was happening to him. This kind of work leads to estrangement from everyone, from all of us who can never know what it is really like. Every returned warrior is changed. These are still the early days of drone warfare, which changes someone even more than traditional warfare does. I hope that can be improved upon.
Mr. Velicovich nearly lost his way after returning to civilian life. He has found something productive to do with his skills. The book ends at the beginning of this new beginning for him. I wish him success in using drone-intel skills for positive things in the civilian sector.
Saturday, November 25, 2017
Media Schmedia
kw: book reviews, nonfiction, media, social media, news, fake news, sharing
Everything has a life cycle. I can't recall what I expected when I saw All Your Friends Like This: How Social Networks Took Over News. It has three authors, Hal Crawford, Andrew Hunter, and Domagoj Filipovic, who were colleagues at nine.com.au, formerly ninemsn, an Australian online news website that is a lot like what Google News might be if it were on its own.
Folks in the vaunted Northern hemisphere pay little attention to what goes on "down under", but these fellows appear to have gotten a finger on the pulse of a generation, learned what it means, and run with it. How do you measure the relative effectiveness of a new style of media? There is the obvious metric: newspapers are going broke, broadcast media are scrambling to keep from dropping off the ratings chart, newsroom staffs are shrinking, and even mediocre podcasts are apparently reaching larger audiences than large TV networks.
These guys wanted something more, and hit upon measuring Shares on Facebook and related vehicles (I used to think the "News Feed" at FB was a bit of a joke, but I've noticed that its news content is growing). They produced a site (or method?) called Share Wars, and Mr. Filipovic developed a software system, Likeable, that scrapes social media news feeds to gather sharing statistics. It was available for public access until mid-2016, but now operates in the background, feeding the trends they report.
The book chronicles these aspects of the replacement of "push" media with "personal push" media, driven by the Share buttons we find on every web site purporting to convey newsworthy items. Publishing is now so easy and pervasive, it has of course greatly increased the production and distribution of lies and scams including "fake news" (which isn't news at all: a lie by any other name is still a lie). When one of the authors spoke of his "War of the Worlds" moment, I realized that "fake news" has been around as long as "real news".
I don't know what else to say. It is a very interesting book, but didn't resonate with me the way I'd hoped. Buggy whips are still being manufactured, but as a specialty item for history buffs and collectors of horse-drawn vehicles. The Times (of wherever) will be with us for a long time, but the introduction of Sharing has changed the landscape of all media, forever, or at least until something even more compelling arrives. Maybe Crawford, Hunter and Filipovic can help us see the next big change coming.
Sunday, November 19, 2017
Lamp spectra - first try
kw: analysis, spectroscopy, lighting
In the past few years we have tried several lower-wattage "bug lights" as an alternative to the yellow 40-watt incandescent bulbs we've used before in our porch light fixture. When the one we had 4 years ago burnt out we got a 13 watt, yellow compact fluorescent spiral lamp by Sylvania. Though it was not marketed as a bug light, it worked pretty well, though some insects came to it. The next year I saw a 6 watt LED bug light, marketed as such by Feit, so we got that. It worked about equally well. Then I went looking for something that might be a little bit better, and got a 3 watt amber bug light, also by Feit. It doesn't draw insects, but it is pretty dim.
I decided to find out whether a little blue light is getting out of these lamps, so I made a crude spectroscope from a piece of diffraction grating and a short length of PVC pipe plus some odds and ends. In this photo it is on a tripod aimed at a test lamp. I aim a camera with a telephoto lens at the black aperture at the left, where the spectrum emerges.
I cut the end of the PVC for the grating at an angle so the spectrum would exit at right angles to the grating. It has the added benefit that, for visual use, looking the "back way" yields a spectrum about twice as wide. But the focal plane is strongly tilted, making it a poor choice for photography (though I tried!). The instrument has a number of shortcomings, but I think I know how to produce a better next version. For one thing, I'll use a different exit angle, so the diffraction grating doesn't reflect the camera and photographer! (see below)
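For anyone curious about the geometry, the exit angle follows the standard grating equation, d·sin θ = m·λ. Here is a quick sketch in Python; note that the 500 lines/mm grating pitch is my assumption, since the post does not state the actual grating used:

```python
import math

# Grating equation at normal incidence: d * sin(theta) = m * lambda
lines_per_mm = 500          # assumed grating pitch; not specified in the post
d_nm = 1e6 / lines_per_mm   # groove spacing in nanometers (2000 nm here)

def first_order_angle(wavelength_nm, m=1):
    """Diffraction angle in degrees for order m at normal incidence."""
    return math.degrees(math.asin(m * wavelength_nm / d_nm))

for wl in (405, 546, 615):  # Hg violet, Hg green, phosphor red-orange
    print(f"{wl} nm -> {first_order_angle(wl):.1f} deg")
```

With that assumed pitch, the visible spectrum spans roughly 11 to 20 degrees in first order, which is why the exit cut has to be angled to keep the spectrum centered.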
I photographed the spectra of nine lamps: the three test bug lights, and several others included either as spectrum references or to show the spectral coverage of both incandescent and non-incandescent lamps. Eight of the lamps are shown here, and their spectra are tagged in the next image, followed by some explanation.
These are in the order listed in the spectra image.
The first three spectra are for reference. The 4000K (cool white) CFL shows a combination of spectral lines for mercury (Hg) and for the phosphors used to "whiten" the harsh blue-green light of raw Hg lamps. Mercury has a strong green spectral line at 546 nm, seen in both this lamp and the 13W yellow CFL (a nearby strong green line is from a phosphor), and a strong blue-violet line at 405 nm, which excites some of the fluorescence; a stronger near-UV line at 365 nm does most of that work. The strong red-orange line at or near 615 nm is from a phosphor, as are the yellow-orange-green and green-blue-violet bands. The 40W incandescent lamp shows the smooth spectrum characteristic of a thermal source. The near-lack of yellow in this spectrum arises because a camera's sensor sees colors differently from our eyes, but this is only evident when photographing spectra! The 60W "Reveal" lamp has a filter that cuts out most of the yellow and yellow-orange, making the light appear bluer and closer to daylight.
The next three spectra are for the bug lights. The 13W yellow CFL has the same spectrum as the white CFL from green through red, but with extra yellow and orange, and the green-blue-violet phosphor is left out. Also, a filter removes the blue and violet lines of Hg. The two LED's have nearly identical spectra. The blue-violet light from the fluorescence-exciting blue LED is filtered out, leaving only light from the broad band phosphors. The 3W lamp has a little more red-orange than the 6W lamp, and this is visible when they are lit side-by-side; the 3W lamp's color is amber. In the photo of the lamps above, the filter is inside the 3W lamp's envelope, which is white. For these three spectra, the brownish features seen below the green band are reflections of either me or the camera off the diffraction grating film.
The 8.5W LED is the kind of "warm white" bulb we have begun to use around the house. It has a spectrum very similar to incandescent; it just has a dip in the mid-blue range, and a bright band in the blue-violet range, which is from the fluorescence-exciting LED. The UV CFL is a "black light", very similar to old black light fluorescent tubes used at parties, but in spiral form. Most of the visible light is filtered out. The green and violet lines at 546 and 405 nm are a little visible anyway, and the camera is barely able to record the 365 nm line that does all the work of making fluorescent things glow. I am puzzled by the line in between, at about 385 nm. I don't know what it could be from. However, I know that these lamps use a phosphor that responds to a strong Hg line at 254 nm and converts it to longer-wave UV, to get more "black light". Perhaps it is the source of the 385 nm line and other faint features in that space, but I think it mainly adds more 365 nm light.
Finally, the 40W fluorescent tube is of the kind that has been in use for nearly my whole life (7 decades), now mostly supplanted by CFL's and LED's. The two lines of Hg in blue-violet and green come through, but broad-band phosphors fill out the light making these pretty good for most uses. They actually have better color rendering values than CFL's, at the cost of using nearly twice the power: a 40W "tube" and a 23W CFL both emit about 1,600 lumens, but strongly colored items may look a little odd with the CFL.
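Using the lumen and wattage figures above, the efficiency gap works out to roughly 40 versus 70 lumens per watt:

```python
# Luminous efficacy (lumens per watt) for the two lamps compared above:
# a 40W fluorescent tube and a 23W CFL, both rated about 1,600 lumens.
lamps = {
    "40W fluorescent tube": (1600, 40),
    "23W CFL": (1600, 23),
}
for name, (lumens, watts) in lamps.items():
    print(f"{name}: {lumens / watts:.0f} lm/W")
# -> 40 lm/W for the tube, 70 lm/W for the CFL
```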
As crude as it is, this simple spectroscope helped me understand these lamps better. I think the reason that some insects still come to the three non-incandescent bug lights is that they can see the green light. I don't have an incandescent bug light, but I suspect it to have less green light than the CFL or the LED's. This has been an instructive exercise.
Wednesday, November 15, 2017
Libraries - Don't even try to live without 'em!
kw: book reviews, nonfiction, libraries, librarians
In 2014 Kyle Cassidy was invited to a librarians' conference, where he photographed and interviewed a number of the attendees. A project was born. He went to more conferences and eventually obtained portraits and quotes from more than 300 librarians. This is What a Librarian Looks Like: A Celebration of Libraries, Communities, and Access to Information couples the photos and quotes with ten essays about specific libraries by Mr. Cassidy and a baker's dozen remembrances by authors and others who share how their lives were made or molded by libraries.
Spoiler (I suppose): Librarians look like everybody else. It is how they think that makes them different. Random quotes:
"Without librarians and instructors teaching students how to do research, many students never learn that there is a better way to do and learn things." —Lindsay Davis, University of California, Merced
"I want to nurture curiosity, feed knowledge, lay a foundation for information." —Katie Lewis, Drexel University
"Everything comes down to information. Librarians know how to use it, find it, and share it with the world, and they're ready to help everyone else do the same." —Topher Lawton, Old Dominion University.
"In the morning, I'm a rock star to a room full of preschoolers; midday, I'm a social worker assisting a recently unemployed patron in finding resources; in the afternoon, I'm an educator leading kids through an after-school science workshop. Librarians serve so many purposes and wear so many hats, but all of them change lives." —Sara Coney, San Diego County Library

My favorite quote about libraries is by Jorge Luis Borges: "I have always imagined that paradise will be a kind of library." In case, dear reader, you haven't run across an earlier mention of this: the Polymath at Large blog would not exist without libraries. To date I have written 2,075 posts. 55% of them are book reviews. Other than the ongoing series of presenting The Collected Works of Watchman Nee, I own no more than a dozen of the books I have reviewed. The rest were borrowed from one of a handful of local libraries.
When I was 19 years old, having just moved from Ohio to California, I went to the nearest library and began checking out books. At that time of my life I needed escape and I needed it badly. The library had all the science fiction books in one section, a large shelf section seven feet high. I took the first five books at the upper left and checked them out. A few days later I returned them and checked out the next five. For the next year or more I continued this until I had read the entire section of about 500 sci-fi novels and short story collections. Thereafter I slowed down and branched out. When the library began to mix fantasy in with sci-fi, and then horror (Lovecraft was popular at the time), I backed off the fiction and began reading mostly nonfiction, primarily in science (Dewey Decimal numbers 500-599).
These days, even though I am retired, I am sufficiently busy that I seldom finish more than one book weekly. Looking back at recent blog posts (more than 70% are book reviews for the past four years), I find that I average about five books monthly.
I don't use the library only to check out books, though that is behind 90% of my visits. I have attended lectures and programs; I took a guitar to their Poetry Night several years ago and sang one of my songs, which led to a special program featuring my music; the genealogy club meets there and I have attended from time to time.
During the last ten years of my career at Dupont, I was a kind of librarian. I transferred from IT to IS (info science) and I was put in charge of upgrading the software used to index and retrieve technical documents in the Electronic Document Library (EDL). For the final couple of years, we had an upper manager who thought "Google can do anything," and cut way back on the indexing staff. Indexing is the highly specialized craft of determining the major themes of an article or report, and devising an appropriate set of key terms to attach to it in a Metadata portion of its electronic version. Professional indexers (I became one) also determine when a new key term is needed in the controlled vocabulary we were using. Human indexing is still the gold standard, and no "search engine" can yet extract the right set of key terms from any document substantial enough to warrant storing in an electronic library.
When Dupont was "only" a chemical company, the term "rust" was unequivocal. It referred to an oxidation process that corroded metals, particularly iron and some similar metallic elements. But someone who was creating the earliest controlled vocabulary for Dupont was wise enough to realize that "rust" could have wider meaning, and thus an entry in the list is:
rust USE corrosion

Also, two companion entries can be found:

corrosion USE FOR rust
corrosion USE FOR oxidative decay

Sure enough, if you look up "oxidative decay" you will find:

oxidative decay USE corrosion

Wouldn't you know it: Several decades ago Dupont began producing crop protection chemicals, and some of its anti-fungal chemicals were aimed at dealing with various fungi called "rust", such as "wheat rust". Thus, some newer terms referring to fungi were added to the controlled vocabulary.
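Cross-references like these amount to a small synonym map: USE entries point a non-preferred term at its preferred term, and USE FOR is simply the inverse. A minimal sketch (the term names come from the examples above; the code itself is mine, not Dupont's system):

```python
# Minimal controlled-vocabulary lookup: "USE" maps a non-preferred term
# to its preferred term; "USE FOR" is derived as the inverse index.
from collections import defaultdict

use = {  # non-preferred term -> preferred term
    "rust": "corrosion",
    "oxidative decay": "corrosion",
}

use_for = defaultdict(list)  # preferred term -> non-preferred terms
for nonpreferred, preferred in use.items():
    use_for[preferred].append(nonpreferred)

def preferred_term(term):
    """Resolve any term to the one an indexer should actually apply."""
    return use.get(term, term)

print(preferred_term("rust"))        # -> corrosion
print(sorted(use_for["corrosion"]))  # -> ['oxidative decay', 'rust']
```

A real thesaurus also carries broader/narrower and related-term links, but even this tiny inverse-index trick is enough to answer both the USE and USE FOR questions from one table.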
That is one illustration of a phenomenon that is common in human languages. Words have multiple usages, and their context may be clear to us but not so to software. Even now, no Google search, not even using the Advanced Search page (if you can find it), is able to robustly distinguish articles about rusting of metals from agricultural rusts.
A growing problem today goes by the misleading moniker Fake News (If it is Fake, it isn't News; it's just a Lie). Things on the Internet were bad enough when the main issue with material was ignorance on the part of the writers, the "creators of content". I think nearly any random adult knows that advertising is biased. Gather all the ads you can on toothpaste, for example, and it seems that there are at least five brands that are "recommended" by more than half of all dentists. No toothpaste ad will mention that the surveys used to gather such recommendations consisted of questions of this form:
Which of the following brands of dentifrice would you recommend (Check all that apply)?
□ Beaver Brite
□ DentiGood
...
□ Sani-Kleen
...
□ Yello-Gone!

There may be 10 or 20 on the list. So, of course, if you're selling DentiGood and 64% of dentists happened to check it, along with five or eight others, you can claim, "2/3 of dentists recommend DentiGood!", thinking that nobody will mind if you round 64% up to 2/3. Of course, you would never, ever mention that 3/4 or more of those same dentists also "recommend" Sani-Kleen!
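With a check-all-that-apply ballot, several brands can each be "recommended" by a majority at the same time. A toy illustration, using the mock brand names above with invented responses:

```python
# Toy check-all-that-apply survey: each dentist checks every brand they
# would recommend, so several brands can each clear a 50% "majority".
surveys = [
    {"DentiGood", "Sani-Kleen"},
    {"DentiGood", "Beaver Brite", "Sani-Kleen"},
    {"Sani-Kleen"},
    {"DentiGood", "Yello-Gone!"},
    {"DentiGood", "Sani-Kleen", "Yello-Gone!"},
]
n = len(surveys)
for brand in ("Beaver Brite", "DentiGood", "Sani-Kleen", "Yello-Gone!"):
    share = sum(brand in s for s in surveys) / n
    print(f"{brand}: {share:.0%} of dentists 'recommend' it")
```

Here both DentiGood and Sani-Kleen come out at 80%, so two competing ads could each truthfully claim a large majority of the same dentists.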
But what do we do when a larger and larger proportion of the "news" is truly a pack of lies? When I was young it was clear that the news media were biased to the left. Now the majority of them are left-leaning with actual malice. So what can we do? I suggest: Ask a librarian how to do your own research, how to track down the source of a story. That will take more than just looking it up on snopes.com (staffed by a very busy couple who are really, really good at research).
One of the most helpful humanities courses I ever took, with a title I no longer remember, taught us how to determine the bias in any publication. We read a very wide variety of journals, from Commonweal and The Wall Street Journal to National Review and The New York Times. We were to find diverse articles about the same recent event and compare them. It was the best course in critical thinking I've encountered.
I'll avoid digging further into the fake news conundrum. We need librarians' expertise and tool set to learn how to know what we know and how to know if what we know is worth knowing. 'Nuff said.
Wednesday, November 08, 2017
A dreadfully posh mystery
kw: book reviews, mysteries, aristocrats
Strangely enough, this book was next to a "Sneaky Pie Brown" mystery that I reached for rather absent-mindedly, but I didn't notice I'd mis-aimed until I got to checkout. I decided to consider it an adventure and see what was in store.
On Her Majesty's Frightfully Secret Service by Rhys Bowen shares only its take-off title with any Ian Fleming novel. The mandatory secret agent character is decidedly secondary to Lady Georgiana Rannoch, who has all the adventures, solves mysteries, and generally just avoids becoming another victim. Other titles by the author typically take off from sundry titles and tropes (e.g., Her Royal Spyness, The Twelve Clues of Christmas).
Lady Georgie is a poor relation to the royal family, cousin to Queen Mary (grandmother of Elizabeth II). Thus, although she has hardly any family money to go on, she gets pulled into aristocratic intrigues. In this volume, in the spring of 1935 she goes to Italy to care for an ailing friend, but also spends a few days at a house party in a large villa, one attended by the crown prince (her cousin David, who would later abdicate as Edward VIII) and his intended, Wallis Simpson, and a number of Italian and German grandees. She has a special reason for being at the house party, sent by the Queen to spy on the prince and Mrs. Simpson. Pre-WWII intrigue forms the backdrop.
Initially I found the aristocratic milieu rather tiring, but warmed to it in time. It is a fun sort of lingo to imitate, as fans of Jane Austen well know. I was also a bit taken aback by the characterization of the prince and his intended. If Mrs. Simpson was the spoiled harridan portrayed in the book, one wonders what the prince could possibly have seen in her, though he is portrayed as utterly devoted to her, albeit anxiously. I don't have sufficient knowledge to judge how accurate this may be.
The plot is a classically structured closed-door mystery, solved at the last moment and pretty much by accident by Lady Georgie. A bit of idle fun to read, a break from my usual diet of nonfiction.
"Rhys" is the Welsh version of "Reese" (as in Witherspoon); Ms Bowen is of the British Isles and knowledgeable enough about aristocratic habits of nearly a century ago to pull this off.
Saturday, November 04, 2017
Relating for communicating
kw: book reviews, nonfiction, communication, improvisation
An actor who is any good must become an expert at relating to an audience. This usually means inducing people to care about the character. The best actors may not win all the Oscars, but they are the ones people care about the most. This is distinct from the odd quality of being a "celebrity".
If people watching a play or movie empathize with the character, does that mean that the actor portraying that character also has a lot of empathy? Sometimes, maybe most of the time. Of course, some actors are totally faking empathy, having learned to induce sympathetic feelings in a cynical way, even a psychopathic way (psychopaths are frequently very charming, but it is surface only).
Alan Alda learned to act what he feels, and became the host of Scientific American Frontiers and several other series because of his unparalleled ability to genuinely relate to the people in the episodes and to the audiences. In his book If I Understood You, Would I Have This Look on My Face?: My Adventures in the Art and Science of Relating and Communicating, Alda relates that it was not always so. Even after a successful career in improv, stage, and screen acting, when he first interviewed a scientist, he made at least five blunders that he never would have made had he made the connection between how an actor projects a character to an audience, and how an interviewer relates to the subject of the interview and to the audience who will watch it. (At 23 words, the book's title is one of the longest on record, and it has my personal "Bravo!" for projecting clarity in a title that long!) The book describes many of the tools, borrowed primarily from improvisational theater, and the "games" used by improv coaches, that Alda and his colleagues at his Center for Communicating Science (now at Stony Brook University) use to improve the communications skills of those least likely to have developed any: working scientists.
I was in drama club in high school, and acted in a repertory company my first two years of college, but I never learned improv. I was strictly a "by the script" actor. But as I read I gradually learned how to relate to the stories Alda tells, and the principles they embody.
For most of us, breakdown of communication has one source: FEAR. I once took a "Business Writing" class my company sponsored, and the pre-assignment was to "improve" a badly-written business letter. I turned in two versions. One was a re-write based on principles of business writing that I knew already. The second was much shorter: brief, to the point, and totally forthright; to it I attached a note, "Here is how we would write if we didn't fear one another."
The games Alda describes and the other methods he uses for breaking down barriers between any two people who want to communicate to one another, all drive out fears in one way or another. For example, one of the first "games", Mirroring, gradually shows the participants that they are not so different. The better the "follower" gets at following the actions of the "leader", even learning to anticipate and thus mirror without delay, the more both learn how similar they are. An advanced version, "leaderless mirroring", drives the point even deeper.
I am such a purist that I had a harder time than most will to "get" what the author is sharing. Finally, though, the message on one significant point became clear to me: most "lecturing" answers questions that have not been asked, just as most "help" is presented so as to help the helper (or the way the helper imagines needing help). Effective communication instead requires knowing, or learning, enough about the other party to elicit the right questions, spoken or not; then the other is ready to receive the "answers". This solidified a realization I had about the "Golden Rule", which grew into several steps of increasing value:
Alda writes much about empathy and Theory of Mind, which allow us to, in part, "read" others' minds. If we know how to listen, though, nothing beats a well-crafted question.
Though I feel quite dull of senses, in an emotional sense at least, I got much from this book, so I think practically anyone can gain much.
An actor who is any good must become an expert at relating to an audience. This usually means inducing people to care about the character. The best actors may not win all the Oscars, but they are the ones people care about the most. This is distinct from the odd quality of being a "celebrity".
If people watching a play or movie empathize with the character, does that mean that the actor portraying that character also has a lot of empathy? Sometimes, maybe most of the time. Of course, some actors are totally faking empathy, having learned to induce sympathetic feelings in a cynical way, even a psychopathic way (psychopaths are frequently very charming, but it is surface only).
Alan Alda learned to act what he feels, and became the host of Scientific American Frontiers and several other series because of his unparalleled ability to genuinely relate to the people in the episodes and to the audiences. In his book If I Understood You, Would I Have This Look on My Face?: My Adventures in the Art and Science of Relating and Communicating, Alda relates that it was not always so. Even after a successful career in improv, stage, and screen acting, when he first interviewed a scientist he made at least five blunders he would never have made had he connected the way an actor projects a character to an audience with the way an interviewer relates to the subject of the interview, and to the audience who will watch it. (At 23 words, the book's title is one of the longest on record, and it has my personal "Bravo!" for projecting clarity in a title that long!) The book describes many of the tools, borrowed primarily from improvisational theater, and the "games" used by improv coaches, that Alda and his colleagues at his Center for Communicating Science (now at Stony Brook University) use to improve the communication skills of those least likely to have developed any: working scientists.
I was in drama club in high school, and acted in a repertory company my first two years of college, but I never learned improv. I was strictly a "by the script" actor. But as I read I gradually learned how to relate to the stories Alda tells, and the principles they embody.
For most of us, breakdown of communication has one source: FEAR. I once took a "Business Writing" class my company sponsored, and the pre-assignment was to "improve" a badly-written business letter. I turned in two versions. One was a re-write based on principles of business writing that I knew already. The second was much shorter: brief, to the point, and totally forthright; to it I attached a note, "Here is how we would write if we didn't fear one another."
The games Alda describes, and the other methods he uses for breaking down barriers between any two people who want to communicate with one another, all drive out fear in one way or another. For example, one of the first "games", Mirroring, gradually shows the participants that they are not so different. The better the "follower" gets at following the actions of the "leader", even learning to anticipate and thus mirror without delay, the more both learn how similar they are. An advanced version, "leaderless mirroring", drives the point even deeper.
I am such a purist that I had a harder time than most readers will in "getting" what the author is sharing. Finally, though, the message on one significant point became clear to me: most "lecturing" is answering questions that have not been asked, just as most "help" is presented so as to suit the helper (or how the helper imagines needing to be helped). Effective communication requires knowing, or learning, enough about the other party that we elicit the right questions, spoken or not; then the other is ready to receive the "answers". This solidified a realization I had about the "Golden Rule", which grew into several steps of increasing value:
- The SILVER rule (attributed to Confucius and others): "Do not do to another anything that you don't want done to you."
- The GOLDEN rule (from the sayings of Jesus in the Bible): "Whatever you wish that others would do to you, do also to them."
- The PLATINUM rule: "Do unto others as they wish to have done to them."
- The DIAMOND rule: "Ask first".
Alda writes much about empathy and Theory of Mind, which allow us to, in part, "read" others' minds. If we know how to listen, though, nothing beats a well-crafted question.
Though I consider my emotional senses rather dull, I got much from this book, so I think practically anyone can gain much.
Friday, October 27, 2017
One Tree, One Year, One Book
kw: book reviews, nonfiction, natural history, trees, forests, phenology
A woman once wrote of a talk with her daughter, who was being affected by the licentious, free-love, promiscuous atmosphere of the late 1970's. The daughter had asked, "How can you be satisfied to be with just one person for a lifetime?" The woman reminded her daughter of her youthful experience living near the sea shore: "Every day that you could, you went to the little cove. You explored it over and over. You never tired of it. If we still lived there, don't you think you would still enjoy it, just that one little cove, so full of things to see, that changed a bit every day, but was always the same?"
These are wise words. They explain how a couple can, with a bit of imagination, remain fascinated with one another for 40 or 50 years or more (my parents were married 58 years; so far for me, 42 years). I thought of this story when I began to read Witness Tree: Seasons of Change with a Century-Old Oak by Lynda V. Mapes. She spent a year living in visitors' quarters at Harvard Forest near Petersham, Massachusetts, lending much of her attention to a single tree, a red oak about 100 years old and more than 80 feet tall.
The Harvard Forest is home to dozens, perhaps hundreds of experiments in forestry, botany, climatology and a number of other disciplines. Many have been going on for decades, though it is unlikely any have continued since the founding of the Forest in 1907. The map pin in this image is the approximate location of the Witness Tree.
One of the author's mentors has been walking the same route at least weekly, sometimes twice weekly, recording his observations of selected plants—trees and shrubs, mostly—and has compiled a record of more than 25 years of the phenology of those plants and that bit of the forest.
Phenology! I had to look it up (I'd seen the word before and could guess its meaning, but…):
Phenology is the study of periodic plant and animal life cycle events and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors. (from Wikipedia)

A phenological record for a simple annual plant such as a sunflower might include when the seed first sprouted, when the first true leaf emerged, the height and spread of the plant on various dates, when each flower appeared, when the seeds ripened, when goldfinches began to eat the seeds and when they were all finished off, the date of the first killing frost, and when the stem fell over.
Such a record for a perennial plant, particularly a shrub or tree, would include numerous events throughout the year, for year after year, not only of what the plant is doing but what significant weather and other environmental events occurred, and the plant's response to them. Disease or locust invasion or hailstorm? It all goes in the record.
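Such a running log maps naturally onto a small data structure. Here is a minimal sketch in Python; the class and field names, and the two sample entries, are my own invention for illustration, not anything from the book:

```python
from dataclasses import dataclass, field

@dataclass
class PhenologyEvent:
    date: str          # ISO date of the observation
    event: str         # e.g. "budburst", "first flower", "leaf drop"
    note: str = ""     # weather, damage, animal activity, etc.

@dataclass
class PhenologyRecord:
    species: str
    location: str
    events: list = field(default_factory=list)

    def log(self, date, event, note=""):
        """Append one dated observation to the record."""
        self.events.append(PhenologyEvent(date, event, note))

    def in_year(self, year):
        """All observations made in a given year, for year-to-year comparison."""
        return [e for e in self.events if e.date.startswith(str(year))]

# Hypothetical entries for a red oak like the Witness Tree:
oak = PhenologyRecord("Quercus rubra", "Harvard Forest")
oak.log("2016-05-02", "budburst")
oak.log("2016-10-24", "peak fall color", "first hard frost two nights earlier")
```

The point of keeping every event, including weather and damage, in one dated stream is that year-over-year queries (like `in_year`) fall out for free.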
The author makes clear throughout the book her interest in climate change and how this tree and those around it are responding, and have responded over the past half century or so. A core sample taken at the beginning of "her" year showed that for the past decade the tree has been doing very well, adding thicker rings than at any similar period in its past. In a sense, the changing climate has been good for this tree. Also, the tree is doing what it can to absorb carbon dioxide and make wood out of it, which mitigates the rapidity of climate change.
I want to deal with a quibble before going on: In Chapter 9 the author does her best to explain the greenhouse effect by which carbon dioxide helps warm the earth. A good biologist is not necessarily a good physicist, and the following statement needs amendment:
"As it enters our atmosphere, the radiant shortwave energy of the sun is transformed to long-wave radiation – heat. Molecules of carbon dioxide in the atmosphere absorb this heat and vibrate as they warm, creating even more heat." (emphasis mine)

It takes a moment to understand what she is saying here, because heat-induced warming of any gas does not create more heat. What is happening is that the molecules absorb radiation of medium wavelengths (near infrared), which induces vibration in the molecules so that they re-radiate longer wave energy, a broad spectrum of medium-to-far infrared. No "heat" is created. Infrared radiation is not specifically "heat" radiation, because all radiation heats up anything that absorbs it, in equal measure. A beam consisting of one watt of green, blue, ultraviolet, or whatever radiation will cause just as much heating as a beam consisting of one watt of infrared radiation. So, more accurately, and specifically related to the greenhouse mechanism:
…the shortwave energy of the sun is absorbed by the earth's surface—dirt, plants, pavement, water—which warms them so that they emit long-wavelength infrared. Some gases in the atmosphere, primarily water vapor, carbon dioxide, and methane, absorb a lot of infrared, which warms them so their molecules vibrate and re-radiate infrared. Half of it is directed generally back down, and half generally outward into space. This redirection is a barrier to the infrared being radiated directly outward, so the earth's surface must get a little warmer and radiate more infrared, bringing about a balance between all the light that was originally absorbed and what is radiated back outward.

Nowhere does she mention that the primary greenhouse gas is water vapor. Though she does say that without the greenhouse effect the earth would be 33 degrees C cooler, and thus mostly frozen, nearly all of that warming to temperatures we consider "comfortable" is because of water vapor. The 280 ppm (0.028%) of carbon dioxide in the pre-industrial atmosphere added about 2°C. Now its level is about 400 ppm, and this has added another degree C.
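That 33-degree figure is easy to check with the standard zero-dimensional energy-balance calculation. The solar constant and albedo below are common textbook values, not numbers from the book:

```python
# Zero-dimensional energy balance: absorbed sunlight, averaged over the
# whole sphere, must equal the infrared emitted by a blackbody at the
# planet's effective temperature.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0         # solar constant at Earth's distance, W/m^2
ALBEDO = 0.3       # fraction of incoming sunlight reflected away

# Sunlight is intercepted on a disk but emitted from a sphere: divide by 4.
absorbed = S * (1 - ALBEDO) / 4          # ~238 W/m^2

# Effective temperature with no greenhouse gases at all:
T_no_greenhouse = (absorbed / SIGMA) ** 0.25   # ~255 K, well below freezing

T_observed = 288.0                              # mean surface temperature, K
greenhouse_warming = T_observed - T_no_greenhouse   # ~33 K
```

The gap between the 255 K "bare rock" temperature and the observed 288 K is the total greenhouse effect she cites, most of it owed to water vapor.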
About a century ago Svante Arrhenius determined, using calculations so simple that I have done them myself, that if atmospheric carbon dioxide were doubled from 300 ppm to 600 ppm, global average temperature would rise by about 4°C (7°F). However, the conclusion that doubling it again to 1,200 ppm (0.12%) would cause an 8°C rise is false. It is not a linear relationship. Better calculations show that carbon dioxide cannot drive warming beyond a level of about 5.5°C (10°F), even with several percent of the atmosphere being composed of carbon dioxide. At that point we would find our breathing affected! Also, the Arrhenius calculations don't take weather into account. When energy is added to the system, some of it goes into stronger winds and more frequent extreme weather events. These can reduce the extra warming by about half. This is a good-news-bad-news situation: global temperature rise is limited to about 3°C, but insurance companies are going to be paying out more claims related to floods, tornadoes and hurricanes.
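The non-linearity is easy to see in a commonly used simplified expression for CO₂ radiative forcing, ΔF = 5.35·ln(C/C₀) W/m². This is only a sketch of the logarithmic shape, with an assumed sensitivity of 0.75 K per W/m² (one commonly quoted value), not a reproduction of Arrhenius's or the author's own calculation:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2, logarithmic in concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def warming(c_ppm, sensitivity=0.75, c0_ppm=280.0):
    """Equilibrium temperature response in K for an assumed sensitivity (K per W/m^2)."""
    return sensitivity * co2_forcing(c_ppm, c0_ppm)

# Each doubling adds the same increment: 280 -> 560 ppm warms exactly as
# much as 560 -> 1120 ppm, so doubling twice does not quadruple the warming.
first_doubling = warming(560)
second_doubling = warming(1120) - warming(560)
```

Because the relationship is logarithmic, every further doubling buys the same temperature increment, which is why extrapolating linearly from the first doubling overstates the effect.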
Now, back to the wonderful tree in Harvard Forest. You can follow it day by day here. It is currently the last in a list of 13 "phenology cameras" (this image was captured an hour or two before I began writing this post).
Ms Mapes, with the help of several colleagues, measured the tree, studied the animals and plants that lived nearby, in, or on it, and climbed it a few times. Such a tree is no easy climb. The lowest branches begin about 40 feet above the ground. Just throwing a bean bag on a string over a branch to start hauling up a climbing rope is no easy feat. Once she learned to do that, her tree-climbing mentor pulled out a large slingshot that can rather accurately place a bean bag over a limb of choice!
Dozens of insects and other small animals depend on forest trees. A "trail camera" also showed her the various animals, from badgers and skunks to deer and coyotes, that passed by the tree, usually without paying attention to it. Trees are remarkable, having coping mechanisms of many kinds because they don't have the option of going indoors when it snows, or of packing up and moving elsewhere when threatened. They must just sit and take it. Two chapters on the way trees "talk", including the way they send signals to one another about new insect depredations, and how trees that receive such signals change their leaf chemistry to discourage the attackers, show that they are far from passive receivers of whatever nature dishes up.
We have, most of us, a certain affinity for trees. We like them in our yards: A house on a quarter-acre lot that hosts 20-30 trees sells for 20% more than one with a tree-free lawn. Eight of the ten most-visited National Parks are forested (see here). While I love the desert, even there I most enjoy areas with "large vegetation", such as at Joshua Tree in the Mojave or the Saguaro-studded areas around Tucson.
In pre-electronic days, a Witness Tree was a landmark used by surveyors, from which the survey of a neighborhood-sized area was conducted. Even in an age of GPS and ubiquitous cell phone towers (which are sometimes camouflaged as rather odd pine trees), it is no waste to give time to observe the changes of a single tree, or a small stand, as they respond to the seasons of the year, and the changes from year to year. When we slow down to not just "smell the roses" but to truly see what is going on, we are all natural-born phenologists.
Sunday, October 22, 2017
Spider, spider on the wwweb
kw: blogging, blogs, spider scanning
I checked the blog stats today. Had I waited a day or two I'd have missed another love note from Russia. All quiet since some time on October 17th, though.
Thursday, October 19, 2017
Seven edges of knowledge
kw: book reviews, nonfiction, science, theories, knowledge
Can we (collectively) know everything, or will some things remain forever beyond our ken? The answer depends on how much there is to know. If knowledge is, quite literally, infinite, then given the universe as we understand it, there is no possibility that everything can be known. But there is another way to look at the question, one taken by Professor Marcus du Sautoy in The Great Unknown: Seven Journeys to the Frontiers of Science: are some things forever unknowable by their very nature? His "Seven Journeys" are studies of the cutting edge of seven scientific and socio-scientific disciplines, explorations of what can, and perhaps cannot, ever be known in each.
The seven disciplines are simply stated: Chaos (in the mathematical sense), Matter, Quantum Physics, The Universe, Time, Consciousness, and Infinity (again, in the mathematical sense). Six of these are related to the "hard" sciences, while Consciousness is considered a "soft" problem by many, but in reality, it may be the hardest of all!
I knew beforehand of the three great theoretical limits to the hard sciences that were gradually elucidated in the past century or so: Heisenberg Uncertainty, Schrödinger Undecidability, and Gödel Incompleteness. Each can be considered from two angles:
- Heisenberg Uncertainty (H.U.) is the principle that momentum and position can be known in combination only to a certain level of precision, but no further. It primarily shows up in the realm of particle physics. Thus, if you know with very great accuracy where a particle is or has been (for example, by letting it pass through a very small hole), you cannot be very certain of its momentum, in a vector sense. In the realm of things on a human scale, diffraction of light expresses this. If you pass a beam of light through a very small hole, it fans out into a beam that is wider the smaller the hole is. This has practical applications for astronomers: the large "hole" represented by the 94-inch (2.4 meter) aperture of the Hubble Space Telescope prevents the "Airy circle" of the image of a distant star from being smaller than about 0.04 arc seconds in visible light, and about 0.1 arc seconds in near-infrared light. The mirror of the James Webb Space Telescope will be 2.7 times larger, and its images therefore 2.7 times sharper. But no telescope can be big enough to produce images of "infinite" sharpness, for the aperture would need to be infinite. All that aside, the two interpretations of H.U. are:
- The presence of the aperture "disturbs" the path of the particle (in the case of astronomy, each photon), which can somehow "feel" it and thus gets a random sideways "kick".
- The Copenhagen Interpretation: the particle is described by a wave equation, devised by Schrödinger, that has some value everywhere in space, but the particle's actual location is not determined until it is "observed". The definition of "observer" has never been satisfactorily stated.
- Schrödinger Undecidability, proposed originally as a joke about a cat that might be both dead and alive at the same moment, is the principle that the outcome of any single quantum process cannot be known until its effect has been observed. The "cat" story places a cat in a box with a flask of poison gas that has a 50% chance of being broken open in the next hour by some quantum event such as the radioactive decay of a radium nucleus. Near the end of the hour, you are asked, "Is the cat dead or alive?" You cannot decide. Again that pesky "observer" shows up. But nowhere have I read that the cat is also an observer! Nonetheless, the principle illustrates that, while we can know with a certain accuracy the average number of quantum events of a certain kind that might occur, we have no way to know whether "that nucleus over there" will be the next to go. Two ways of interpreting this situation are given, similar to the above: first, that the event sort of "decides itself"; second (also part of the Copenhagen Interpretation), that only when an outcome has been observed can you know anything about the system and what it has done.
- Gödel Incompleteness is described in two theorems that together proved mathematically that in any given algorithmic system, questions can be asked, and their truth even described, that cannot be proven within that system. Most examples you'll find in the literature are self-referential things such as a card that reads on one side, "The statement on the other side of this card is true" and on the other, "The statement on the other side of this card is false." Such bogeys are models for thinking about the Incompleteness theorems without really getting to their kernel. A great many of them were discussed in gory detail by Doug Hofstadter in his book Gödel, Escher, Bach: an Eternal Golden Braid, without getting to the crux of the matter: Is our own consciousness an algorithmic system? It seems we can always (given time) develop a larger system in which previously uncrackable conundrums are solvable. But then, of course, we find there are "new and improved" conundrums that the tools of the new system cannot handle. An example given in The Great Unknown is the physics of Newton being superseded and subsumed into the two theories of Relativity developed by Einstein. Again, there are two ways this principle is thought of: first, that given time and ingenuity we will always be able to develop another level of "meta system" and solve the old problems; but second, we get into the realm of the "hard-soft" problem of consciousness: Is consciousness algorithmic? For if it is, we will one day run out of meta systems and can go no further.
The only way we know to study consciousness is to study our own and that of a small number of animals that seem to be self-aware. Some would posit that we can create conscious artificial intelligence (AI), but this is questionable because all known methods in the sphere of AI studies are algorithmic, even if the algorithm is "hidden" inside a neural network. Since we do not yet know if natural intelligence (NI) is algorithmic, we cannot compare AI to NI in any meaningful sense!
One consequence of a possibly infinite universe is that everything we see around us might be duplicated an endless number of times, right down to the atomic and subatomic level. Thus there could be infinite numbers of the Polymath at Large typing this sentence, right now, in an infinite number of places, though very widely separated, to be sure (say, by a few trillions or quadrillions of light years, or perhaps much, much more). But, if I understand the proposition correctly, that is only possible if space is quantized. Quantization of space is based on the discovery of the Planck length and the Planck time about a century ago. They are the smallest meaningful units of length and time known. The Planck length is about 1.62×10⁻³⁵ m, or about 10⁻²⁰ the size of a proton. If space is quantized, it is most likely quantized on this scale. The Planck time is the time it takes a photon to travel a Planck length, or about 5.4×10⁻⁴⁴ sec.
If space is quantized with the space quantum being a Planck length, that means that positions can be represented by very large integers, and that those positions will be not just very precise, but exact. How large an integer? If we consider only the visible universe, which has a proper radius of about 75 billion light years, or 7.1×10²⁶ m, you'd need a decimal integer of 35+26+1 = 62 digits, or a binary word (for the computer) containing about 205 bits, which fits in 26 bytes.
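The arithmetic takes only a couple of lines to check; the two constants below are the values quoted above:

```python
import math

# How big an integer is needed to index positions across the visible
# universe in units of the Planck length?
PLANCK_LENGTH = 1.62e-35     # m
UNIVERSE_RADIUS = 7.1e26     # m (about 75 billion light years)

positions = UNIVERSE_RADIUS / PLANCK_LENGTH        # ~4.4e61 Planck lengths
digits = math.floor(math.log10(positions)) + 1     # decimal digits needed
bits = math.ceil(math.log2(positions))             # binary bits needed
bytes_needed = math.ceil(bits / 8)                 # whole bytes
```

This comes out to a 62-digit decimal integer, about 205 bits, or 26 bytes per coordinate.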
The trouble comes when you want to learn positions to this kind of precision/exactitude. To learn a dimension to an accuracy of one micron you need to use light (or another sort of particle such as an electron) with a wavelength of a micron, or smaller, to see it. To see the position of a silicon atom in a crystal, you need x-ray wavelengths smaller than 0.2 nm (200 pm), which comes to 6,200 eV per photon. X-rays of that energy are a little on the mild side. But to "see" a proton, you are getting into the sub-femtometer range, which requires gamma ray photons of more than a billion eV each. Twenty orders of magnitude smaller yet, to be able to distinguish a Planck length, would require such energetic gamma rays (some 10²⁹ eV each) that two of them colliding would probably trigger a new Big Bang.
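All of these energies follow from the photon relation E = hc/λ. Plugging in the three length scales from the text, using hc ≈ 1240 eV·nm:

```python
# Photon energy needed to resolve a given length scale: E = hc / wavelength.
HC_EV_M = 1.23984e-6  # hc in eV*m (i.e., 1240 eV*nm)

def photon_energy_ev(wavelength_m):
    """Energy in eV of a photon with the given wavelength in meters."""
    return HC_EV_M / wavelength_m

e_xray = photon_energy_ev(0.2e-9)      # silicon-atom spacing: ~6,200 eV
e_proton = photon_energy_ev(1e-15)     # proton scale: over 1e9 eV (GeV gammas)
e_planck = photon_energy_ev(1.62e-35)  # Planck length: ~8e28 eV
```

The steepness is the point: every factor-of-ten improvement in resolution demands a factor-of-ten more energy per photon.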
By the way, photon energies of billions to trillions of eV would be needed to pin down the locations of the quarks inside nucleons, which is what would actually be needed to get a "Star Trek Transporter" to work, at both the scanning and receiving ends. Each such photon carries roughly the kinetic energy of a flying mosquito: trifling on our scale, but enormous for a single photon. You would need several per quark of the sample you wish to transport. Maybe that's why the transporter hasn't been invented yet, and probably never could be…even if Dilithium and Rubindium get discovered one day.
Also, just by the bye, in a quantized universe there would be no irrational numbers, not truly. I am not sure how lengths "off axis" could be calculated, but they would somehow have to be jiggered to the next quantum of space. There goes Cantor's Aleph-1 infinity!
OK, I got so wrapped up in all of this that I hardly reviewed the book. It's a great read, so get it and read it and go do your own rant about the limits of knowledge!
Can we (collectively) know everything, or will some things remain forever beyond our ken? The answer depends on how much there is to know. If knowledge is, quite literally, infinite, then given the universe as we understand it, there is no possibility that everything can be known. But there is another way to look at the question, one taken by Professor Marcus du Sautoy in The Great Unknown: Seven Journeys to the Frontiers of Science: are some things forever unknowable by their very nature? His "Seven Journeys" are studies of the cutting edge of seven scientific and socio-scientific disciplines; they are explorations into what can be known accordingly.
The seven disciplines are simply stated: Chaos (in the mathematical sense), Matter, Quantum Physics, The Universe, Time, Consciousness, and Infinity (again, in the mathematical sense). Six of these are related to the "hard" sciences, while Consciousness is considered a "soft" problem by many, but in reality, it may be the hardest of all!
I knew beforehand of the three great theoretical limits to the hard sciences that were gradually elucidated in the past century or so: Heisenberg Uncertainty, Schrödinger Undecidability, and Gödel Incompleteness. Each can be considered from two angles:
- Heisenberg Uncertainty (H.U.) is the principle that the combination of momentum and position can be known to a certain level of precision, but no further. It primarily shows up in the realm of particle physics. Thus, if you know with very great accuracy where a particle is or has been (for example, by letting it pass through a very small hole), you cannot be very certain of its momentum, in a vector sense. In the realm of things on a human scale, diffraction of light expresses this. If you pass a beam of light through a very small hole, it fans out into a beam with a width that is wider the smaller the hole is. This has practical applications for astronomers: the large "hole" represented by the 94-inch (2.4 meter) aperture of the Hubble Space Telescope prevents the "Airy circle" of the image for a distant star from being smaller than about 0.04 arc seconds in visible light, and about 0.1 arc seconds in near-infrared light. The mirror for the James Webb Space Telescope will be 2.7 times larger, and the images will therefore be 2.7 times sharper. But no telescope can be big enough to produce images of "infinite" sharpness, for the aperture would need to be infinite. All that aside, the two interpretations of H.U. are
- The presence of the aperture "disturbs" the path of the particle (in the case of astronomy, each photon), which can somehow "feel" it and thus gets a random sideways "kick".
- The Copenhagen Interpretation, that the particle is described by a wave equation devised by Schrödinger that has some value everywhere in space, but the particle's actual location is not determined until it is "observed". The definition of "observer" has never been satisfactorily stated.
- Schrödinger Undecidability, proposed originally as a joke about a cat that might be both dead and alive at the same moment, is the principle that the outcome of any single quantum process cannot be known until its effect has been observed. The "cat" story places a cat in a box with some poison gas in a flask which has a 50% chance of being broken open in the next hour according to some quantum event such as the radioactive decay of a radium nucleus. Near the end of the hour, you are asked, "Is the cat dead or alive?" You cannot decide. Again that pesky "observer" shows up. But nowhere have I read that the cat is also an observer! Nonetheless, the principle illustrates that, while we can know with a certain accuracy the average number of quantum events of a certain kind that might occur, we have no way to know if "that nucleus over there" will be the next to go. Two ways of interpreting this situation are given, similar to the above, firstly that the event sort of "decides itself", and the other, also part of the Copenhagen Interpretation, that only when an outcome has been observed can you know anything about the system and what it has done.
- Gödel Incompleteness is described in two theorems that together proved mathematically that in any given algorithmic system, questions can be asked, and even their truth can be described, but those questions' veracity cannot be proven within that algorithmic system. Most examples you'll find in the literature are self-referential things such as a card that reads on one side, "The statement on the other side of this card is true" and on the other, "The statement on the other side of this card is false." Such bogeys are models of ways of thinking about the Incompleteness theorems, without really getting to their kernel. A great many of them were discussed in gory detail by Doug Hofstadter in his book Gödel, Escher, Bach: The Eternal Golden Braid, without getting to the crux of the matter: Is our own consciousness an algorithmic system? because it seems we can always (given time) develop a larger system in which previously uncrackable conundrums are solvable. But then of course, we find there are "new and improved" conundrums that the tools of the new system cannot handle. An example given in The Great Unknown is the physics of Newton being superseded and subsumed into the two theories of Relativity developed by Einstein. Again, there are two ways this principle is thought of. Firstly, that given time and ingenuity we will always be able to develop another level of "meta system" and solve the old problems. But secondly, we get into the realm of the "hard-soft" problem of consciousness: Is consciousness algorithmic? for if it is, we will one day run out of meta systems and can go no further.
The only way we know to study consciousness is to study our own and that of a small number of animals that seem to be self-aware. Some would posit that we can create conscious artificial intelligence (AI), but this is questionable because all known methods in the sphere of AI studies are algorithmic, even if the algorithm is "hidden" inside a neural network. Since we do not yet know if natural intelligence (NI) is algorithmic, we cannot compare AI to NI in any meaningful sense!
One consequence of a possibly infinite universe is that everything we see around us might be duplicated an endless number of times, right down to the atomic and subatomic level. Thus there could be infinite numbers of the Polymath at Large typing this sentence, right now, in an infinite number of places, though very widely separated, to be sure (say, by a few trillions or quadrillions of light years, or perhaps much, much more). But, if I understand the proposition correctly, that is only possible if space is quantized. Quantization of space is based on the discovery of the Planck length and the Planck time about a century ago. They are the smallest meaningful units of length and time known. The Planck length is about 1.62×10⁻³⁵ m, or about 10⁻²⁰ times the size of a proton. If space is quantized, it is most likely quantized on this scale. The Planck time is the time it takes a photon to travel a Planck length, or about 5.4×10⁻⁴⁴ sec.
If space is quantized with the space quantum being a Planck length, that means that positions can be represented by very large integers, and that those positions will be not just very precise, but exact. How large an integer? If we consider only the visible universe, which has a comoving radius of about 46 billion light years, or 4.4×10²⁶ m, you'd need a decimal integer of 35+26+1 = 62 digits, or a binary word (for the computer) containing about 205 bits, which rounds up to 26 bytes.
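The size of that "cosmic address" integer is easy to check. This is a back-of-envelope sketch, using the CODATA Planck length and the commonly quoted comoving radius of the observable universe (assumed values, not figures from the book):

```python
# How big an integer is needed to give a position in Planck-length units?
import math

planck_length = 1.616e-35            # meters (CODATA)
light_year = 9.461e15                # meters
radius = 46e9 * light_year           # observable-universe radius, in meters

units = radius / planck_length       # Planck lengths, center to edge
digits = math.floor(math.log10(units)) + 1
bits = math.ceil(math.log2(units))
nbytes = math.ceil(bits / 8)

print(f"{units:.2e} Planck lengths: {digits} digits, {bits} bits, {nbytes} bytes")
```

About 2.7×10⁶¹ Planck lengths, so 62 decimal digits, 205 bits, 26 bytes per coordinate.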
The trouble comes when you want to measure positions to this kind of precision/exactitude. To learn a dimension to an accuracy of one micron you need to use light (or another sort of particle such as an electron) with a wavelength of a micron, or smaller, to see it. To see the position of a silicon atom in a crystal, you need x-ray wavelengths smaller than 0.2 nm (or 200 pm), which comes to 6,200 eV per photon. X-rays of that energy are a little on the mild side. But to "see" a proton, you are getting into the sub-femtometer range, which requires gamma ray photons of more than a billion eV each. Twenty orders of magnitude smaller yet, to be able to distinguish a Planck length, would require such energetic gamma rays (nearly 10²⁹ eV each, tens of octillions) that two of them colliding would probably trigger a new Big Bang.
By the way, photon energies of billions to trillions of eV would be needed to pin down the locations of the quarks inside nucleons, which is what would actually be needed to get a "Star Trek Transporter" to work, at both the scanning and receiving ends. Each such photon carries roughly the kinetic energy of a flying mosquito (a trillion eV is about 1.6×10⁻⁷ joule), and you would need several per quark of your sample to transport. Maybe that's why the transporter hasn't been invented yet, and probably never could be…even if Dilithium and Rubindium get discovered one day.
Also, just by the bye, in a quantized universe there would be no truly irrational lengths. I am not sure how lengths "off axis" could be calculated (the diagonal of a square is √2 times its side, for instance), but they would somehow have to be jiggered to the nearest quantum of space. There goes Cantor's Aleph-1 infinity!
OK, I got so wrapped up in all of this that I hardly reviewed the book. It's a great read, so get it and read it and go do your own rant about the limits of knowledge!
Monday, October 09, 2017
The die of a trillion faces
kw: analysis, radioactivity, quantum physics, chaos
I'm halfway through a book about the edges of scientific knowledge, which I'll review anon. In the meantime, two of the chapters got me thinking: one on mathematical chaos and the other on quantum randomness as it relates to radioactivity.
Mathematical chaos does not refer to utter randomness, but to mathematical processes that are completely deterministic yet "highly sensitive to initial conditions." Such systems are typically studied by running computer simulations, which brings out an amusing feature: many such systems are also prone to amplifying rounding errors in the calculations. For example, numerically solving a set of stiff differential equations frequently results in the solution "blowing up" after a certain point, because accumulated rounding errors overwhelm the result.
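The classic textbook demonstration of this sensitivity (my example, not the book's) is the logistic map. It is fully deterministic, yet two starting points differing by one part in a trillion end up on completely different trajectories within a few dozen steps:

```python
# The logistic map x -> r*x*(1-x) at r = 4 (the fully chaotic regime).
# Two starts differing by 1e-12 diverge after a few dozen iterations.
r = 4.0
a, b = 0.3, 0.3 + 1e-12
diverge_step = None

for step in range(1, 101):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if diverge_step is None and abs(a - b) > 0.1:
        diverge_step = step           # first step where they clearly disagree

print(f"trajectories disagree by more than 0.1 after {diverge_step} steps")
```

The tiny initial difference roughly doubles every iteration, so by around step 40 the two runs have nothing to do with each other, which is exactly the weather-forecasting problem in miniature.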
Natural systems, being analog and not digital, can be described by sets of differential equations. Digital simulations of such systems can proceed only so far before descending into nonsense. The most famous of these is forecasting the weather. Many computer scientists and meteorologists have labored for decades to produce weather models that run longer and longer, farther and farther into the future, before "losing it." So now we have modestly reliable seven-day forecasts (and Accuweather.com has the temerity to show 90-day forecasts); a decade ago or so, no forecast beyond three or four days was any good.
Quantum randomness is a beast of another color, indeed, of a different spectrum of colors! These days the classic illustration is the ultra-low-power two-slit interference pattern. You can produce a visible (and thus moderate-power) pattern with a laser pointer, a pinhole or lens, and a little piece of foil with two narrow slits a short distance apart. The pinhole or lens will spread the beam so you can see it hit both slits. On a screen a few inches behind, a pattern of parallel bright and dark lines will appear.
The ultra-low-power version is to set this up with the lens/pinhole, the slits, and the laser held in stands, and the screen replaced by sensitive photographic film. Then a strong filter is put at the laser's output, calculated to make the beam so weak that no more than one photon will be found in the space between the laser and the film at any one time. Such an arrangement requires an exposure of a few hours to get the beginnings of a record, and several days to get a fully developed fringe pattern. Whereas the experiment with strong light seems to show the wave nature of light, the ultra-low-power version shows that a photon has a wave nature all by its lonely self!
A "short" exposure of an hour or less will show just a few dots where single photons were captured by the emulsion. They appear entirely random. The longer the exposure, the more a pattern seems to emerge, until a very long exposure will produce a clear pattern. The pattern shows that you can predict with great precision what the ensemble of many photons will do, but you cannot predict where the next photon to pass through the apparatus will strike the film.
Radioactivity also obeys certain quantum regularities (I hesitate to write "laws"). Half-life expresses the activity of a radioactive material in reciprocal terms: a long half life indicates low activity. In the book I was reading, the author wrote of a little pot of uranium 238 (U-238) he bought, which contains just enough of the element to experience 766 alpha decays per minute. My first thought was to work out how much U-238 he had bought. U-238 has a half life of 4.468 billion years. Working out the math, I determined that he had just over one milligram of uranium. That is so close to exactly one milligram that I suspect a typo: if he had bought exactly one milligram, the activity would be 746 decays per minute…and that may be the true figure.
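The arithmetic is easy to check. A sketch using the half life quoted above, Avogadro's number, and the 238 g/mol atomic weight:

```python
# What mass of U-238 yields 766 alpha decays per minute,
# and what would exactly one milligram yield?
import math

half_life_yr = 4.468e9
half_life_min = half_life_yr * 365.25 * 24 * 60
lam = math.log(2) / half_life_min             # decay probability per atom per minute

atoms_needed = 766 / lam                      # N = activity / lambda
mass_mg = atoms_needed * 238 / 6.022e23 * 1000
print(f"766 decays/min requires {mass_mg:.3f} mg of U-238")   # just over 1 mg

atoms_in_1mg = 1e-3 / 238 * 6.022e23
print(f"1.000 mg gives {atoms_in_1mg * lam:.0f} decays/min")  # 746
```

So 766 decays per minute corresponds to about 1.03 mg, and a round milligram gives 746 per minute, which is where the typo suspicion comes from.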
What is happening inside a uranium nucleus that leads a certain one to emit a helium (He-4) nucleus (and thus turn into thorium 234, Th-234)? Scattering experiments carried out decades ago showed that although the atomic nucleus is incredibly tiny, it is mostly empty space! I learned this as a physics student in the late 1960's. I had found it hard enough to wrap my mind around the view of an atom as a stadium with a few gnats buzzing around the periphery, centered on a heavy BB. So the protons and neutrons, while not being effectively "dimensionless" like electrons, are still much tinier than the space they can "run around" in. The propensity of proton-heavy elements such as U-238 to decay by emitting helium nuclei indicates that the protons and neutrons "run around" in subgroups.
The standard explanation is that at some point one of the He-4 nuclei "tunnels" through the "strong force barrier", finds itself outside the effective range of the force, and thus is accelerated away by electromagnetic repulsion to an energy of 4.267 MeV. What determines when it tunnels through?
Back in the chapter on chaos, the author spoke of dice with various numbers of faces, though he illustrated the randomness of a die's fall using a "normal" 6-sided die he got in Las Vegas. I guess they make them more accurate there, where large stakes are wagered on their "fairness". But dice with various numbers of faces are produced for board-based role playing games: sets sold for such games (like one pictured on aliexpress.com) include ten different kinds of dice, ranging from 4 to 20 faces.
Put two thoughts together, and you can get some interesting products. Can the randomness of alpha decay be related to the randomness of a tumbling die? We can set up a model system with a box of cubical, 6-sided dice, perhaps 100. Here are the steps:
- Cast the dice on a table top (with raised sides so none fall off, perhaps).
- Remove each die that shows a 6.
- Return the rest to the box.
- Repeat from step 1.
Two trial runs gave these counts of remaining dice after each throw:
100, 81, 69, 58, 49, 41, 35, 30, 24, 21, 18, 14, 11, 9, 8, 6, 5
100, 90, 78, 64, 53, 46, 37, 31, 26, 22, 18, 15, 12, 10, 9, 8, 7
The calculated half life of these dice, with "activity" of 1/6 per throw, is 6·ln 2 ≈ 4.16 throws (that is the continuous approximation; the exact discrete value, ln 2/ln(6/5), is 3.80 throws). As seen above, small-number statistics cause a certain variation, so that after four throws, 49 and 53 are left; after 8 throws, 24 and 26; and so forth. If instead you use 20-sided dice, the half life would be about 13.9 throws.
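The steps above take seconds to simulate; a sketch (the seed is arbitrary, so any particular run will wobble just like the two trials above):

```python
# Throw 100 six-sided dice, remove the sixes, repeat until few remain.
import math
import random

random.seed(1)                        # fixed seed so the run is repeatable
counts = [100]
while counts[-1] > 4:
    survivors = sum(1 for _ in range(counts[-1]) if random.randint(1, 6) != 6)
    counts.append(survivors)
print(counts)

print(f"continuous approximation: {6 * math.log(2):.2f} throws")
print(f"exact discrete half life: {math.log(2) / math.log(6 / 5):.2f} throws")
```

Each simulated run produces its own jittery sequence of counts, which is the same small-number statistics seen in the two trials.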
This led me to think of the He-4 (alpha particle) "cores" bouncing around inside the strong-force boundary of a U-238 nucleus as being governed by a die with an immense number of faces, perhaps a trillion. Rather than numbers from one to a trillion on the faces, the only thing that matters is the "get out of here" face, which we might consider to be green (for "go"), the rest being red. On average, once per trillion "bounces" the die momentarily has its green face at the boundary, and the alpha particle flies free. Since the decay constant for U-238 is ln(2) divided by the 4.468-billion-year half life, or one decay yearly per 6.45 billion nuclei, a trillion-sided die would imply a "bounce" time of about two days. The actual transit time for an "orbiting" He-4 is closer to 10⁻¹⁸ sec, which implies a die with vastly more than a trillion faces: on the order of 10³⁵ of them.
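The bounce arithmetic can be checked directly; a sketch:

```python
# Mean lifetime of U-238 divided by the time per "bounce" gives the
# average number of bounces, i.e. the face count of a fair "escape die."
import math

SECONDS_PER_YEAR = 3.156e7
mean_lifetime_s = 4.468e9 / math.log(2) * SECONDS_PER_YEAR   # tau = t_half / ln 2

# A trillion-faced die implies one bounce every few days:
bounce_days = mean_lifetime_s / 1e12 / 86400
print(f"trillion-face die: one bounce every {bounce_days:.1f} days")

# A realistic ~1e-18 s transit time instead implies this many faces:
faces = mean_lifetime_s / 1e-18
print(f"faces needed: {faces:.1e}")
```

The mean lifetime is about 2×10¹⁷ seconds, so a trillion faces gives the roughly two-day bounce time, and a 10⁻¹⁸-second bounce time demands a die of roughly 2×10³⁵ faces.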
Can it be that quantum randomness and mathematical chaos are related? Could one cause the other … in either direction?!?
That is as far as I have taken these ideas. I don't know (does anyone?) whether the internal, dynamic structure of a large nucleus is dominated by lone nucleons, by clusters such as He-4 and others, or something else. Except in cases of spontaneous fission, proton-rich nuclei emit no decay products larger than alpha particles, which indicates that any clusters inside the nucleus don't exceed the He-4 nucleus in size (and beta decay is a subject for another time!).
Sunday, October 01, 2017
Learn all about fat. Get depressed.
kw: book reviews, nonfiction, physiology, fat, weight loss
I have known for a long time that for many of us in affluent countries, weight management is a fierce challenge. We can see this from the very existence of Weight Watchers, Nutrisystem, Jenny Craig and literally hundreds of other clinics, systems, and plans, and the $60 to $150 billion that Americans spend on weight loss proves it. If weight management were easy it would be cheap, and we wouldn't need all those clinics and "life coaches" and the rest.
Now we can learn in great detail just what we are up against…if we really want to know. I suspect many folks don't want to! I am not sure if I am happy about knowing, either. Like it or not, I just finished reading The Secret Life of Fat: The Science Behind the Body's Least Understood Organ and What it Means For You, by Sylvia Tara, PhD.
You read right: Dr. Tara calls our fat system an organ. It is the largest and most complex endocrine organ in our body, except perhaps for a very few people who cannot deposit fat and as a consequence must eat tiny meals two to four times hourly to stay alive and comparatively pain-free. Do you think you'd like to be truly fat free? Without a system of depositing fat, which our liver and other organs produce continually, the blood gets milky with circulating lipids that just go 'round and 'round until they are used up by metabolic processes. Heart attack at a young age is the typical fate. People with this affliction who try to eat "more normally" wind up with painful lipid deposits in the skin, rather than normal layers of healthy fat, and in either case they look like walking skeletons, like it or not.
Fat does a lot more than regulate our energy stores. As an endocrine organ, it communicates with the rest of the endocrine system, regulates appetite and metabolism, determines our fertility (or its lack), and stands ready to help us stave off a famine. In babies, the "brown" and "beige" varieties of fat produce extra energy to keep the little body warm. When you have the weight-to-skin area ratio of a house cat, but no fur, you need to produce a lot more energy per pound to keep from freezing to death at "normal" temperatures in the 70's (or the low 20's in Celsius). A strange therapy that turns some "white" fat to "beige" fat is being tested to shift people's energy balance for weight loss. It promises to be even more costly than staying at a Mayo Clinic Weight Loss residence.
You may have heard of ghrelin, leptin and adiponectin. These are just three of the signaling molecules that make us hungry, or not. Leptin turns down our "appestat"; the others turn it up. Several other signals shift our cravings here and there. Others "tell" fat to deposit itself in our subcutaneous layer ("safe" fat) or viscerally ("dangerous" fat). Guess what can shift all of these in a healthy direction? Exercise. Lots of it. Nothing else is as effective.
Also, as we are only recently learning (partly because of a genuine conspiracy carried out 50 years ago), sugar is much more of a culprit than dietary fat in making us fatter and making that fat less healthy than we used to think. To be clear, trans fat is truly evil (and all of us who grew up eating margarine rather than butter must shed a tear here), and while saturated fat is a little better, and some is actually necessary, it has to be balanced with the mono- and poly-unsaturated varieties or it does cause problems. But excess sugar is the worst, and sugar substitutes, oddly enough, are almost as bad, because the insulin system kicks in when we taste sweetness, regardless of source. An insulin spike causes fat to be deposited.
During my last ten years working, I got in the habit of drinking about a liter of sugar-free cola daily (Pepsi Max had the best taste). Upon retirement I stopped drinking soda almost entirely, and lost 15 pounds. At first I thought the weight loss was because I was under much less stress; chronic stress also causes weight gain. But now I think it is probably at least half due to stopping my soda pop habit.
After nine chapters of the science of fat—and fascinating science it is—the last four chapters are the "how to" section. The author is a woman descended from an ethnic group in India that endured repeated famines for millennia; both her sex and that heritage work to make her metabolically fitted to gain weight and hold it, waiting for the next famine. She has also done a certain amount of yo-yo dieting. Guess what? If you have never been overweight, you have a metabolism that matches the calculations at sites such as the Basal Metabolic Rate Calculator. There I find that my basal metabolism is about 1,750 Cal/day, with a dietary intake need of between 2,400 and 3,000, depending on how active I am. Were I female, these numbers would be 1,550, 2,050 and 2,650 (Note: I rounded the numbers from the overly-exact calculations. Also, when I write Calorie, I refer to kilocalories. The calorie of physics is 1/1000 of a Calorie).
What if you have dieted, and regained your weight? Your fat system changes, permanently, so that maintaining the weight now requires fewer Calories, a lot fewer (20-30%). So, you struggled with a 1,200 Calorie per day diet and lost 25 pounds. You used to eat 2,400 Calories daily. If you go back to a 2,400 Cal/day diet, you'll gain it all back, and then some. You'll even gain it back, more slowly, at 2,000 Cal/day! If the BMR Calculator says your dietary need at the weight you want to maintain is 2,250 Calories, you'll actually now barely be able to hold the new weight at 1,700 Cal/day. That is a cruel fact of weight loss-and-regain.
Chapter 12 is titled "Fat Control II: How I Do It". Dr. Tara eats no dinner. Ever (hardly ever!). She chronicled, almost pound-by-pound, how she lost a certain amount of weight over about a year, and how she did it using a "partial fast" of no food intake for 18 of the 24 hours a day, and small high-fiber meals in that 6-hour "eating window". She also boosted her activity level, mightily. She recommends 5 workouts per week of 45 minutes' duration, sufficiently vigorous to make us sweat and have a hard time talking (none of this treadmill-walking while holding a conversation on the phone!).
I decided to check something. I used the short-form Longevity Calculator at Wharton twice, making one change between runs. The first time, I put "1-2 workouts per week", the second "5+ workouts per week". My life expectancy in the first instance is 91, and in the second 92. In either case, the tool reports that I have a 75% chance to live beyond age 84. Going back and changing activity to "rarely" returned a life expectancy of 90. Hmm. I am 70 now. If I hold up, and am able to do those vigorous workouts 5x/week, I'd spend an extra three hours weekly working out. That is 156 hours yearly or, in 15 years (until age 85, when I'd probably have to slow down!), 2,340 hours. My waking hours in one year are 6,570 (I sleep 6 hours on average, in spite of trying to stay abed longer).
So I can gain another year of life if I spend about a third of it working out. Would I be healthier? Certainly, as long as I don't tear up my body doing all those workouts. I'd have to get into it gradually. So it is likely that those 15 years would be pretty good ones. On the other hand, it would have to go hand-in-hand with less eating, meaning I'd be living with being chronically hungrier. That is not an easy choice, but this is the kind of cost/benefit analysis we need to do. Unless the FDA approves an economical form of Leptin treatment to help us manage appetite, it's the best hope I have of being svelte again. That's mildly depressing.
I have known for a long time that for many of us in affluent countries, weight management is a fierce challenge. We can see this from the very existence of Weight Watchers, Nutrisystem, Jenny Craig and literally hundreds of other clinics, systems, and plans, and the $60 to $150 billion that Americans spend on weight loss proves it. If weight management were easy it would be cheap, and we wouldn't need all those clinics and "life coaches" and the rest.
Now we can learn in great detail just what we are up against…if we really want to know. I suspect many folks don't want to! I am not sure if I am happy about knowing, either. Like it or not, I just finished reading The Secret Life of Fat: The Science Behind the Body's Least Understood Organ and What it Means For You, by Sylvia Tara, PhD.
You read right: Dr. Tara calls our fat system an organ. It is the largest and most complex endocrine organ in our body, except perhaps for a very few people who cannot deposit fat and as a consequence must eat tiny meals two to four times hourly to stay alive and comparatively pain-free. Do you think you'd like to be truly fat free? Without a system of depositing fat, which our liver and other organs produce continually, the blood gets milky with circulating lipids that just go 'round and 'round until they are used up by metabolic processes. Heart attack at a young age is the typical fate. People with this affliction who try to eat "more normally" wind up with painful lipid deposits in the skin, rather than normal layers of healthy fat, and in either case they look like walking skeletons, like it or not.
Fat does a lot more than regulate our energy stores. As an endocrine organ, it communicates with the rest of the endocrine system, regulates appetite and metabolism, determines our fertility (or its lack), and stands ready to help us stave off a famine. In babies, the "brown" and "beige" varieties of fat produce extra energy to keep the little body warm. When you have the weight-to-skin area ratio of a house cat, but no fur, you need to produce a lot more energy per pound to keep from freezing to death at "normal" temperatures in the 70's (or the low 20's in Celsius). A strange therapy that turns some "white" fat to "beige" fat is being tested to shift people's energy balance for weight loss. It promises to be even more costly than staying at a Mayo Clinic Weight Loss residence.
You may have heard of ghrelin, leptin and adiponectin. These are just three of the signaling molecules that make us hungry, or not. Leptin turns down our "appestat", the others raise it. Several other signals shift our cravings here and there. Others "tell" fat to deposit itself in our subcutaneous layer ("safe" fat) or viscerally ("dangerous" fat). Guess what can shift all of these in a healthy direction? Exercise. Lots of it. Nothing else is as effective.
Also, as we are only recently learning (partly because of a genuine conspiracy carried out 50 years ago), sugar is much more of a culprit in making us fatter and making that fat less healthy than we used to think, as compared to dietary fat. To be clear, trans fat is truly evil (and all of us who grew up eating Margarine rather than butter must shed a tear here), and also, while saturated fat is a little better and some is actually necessary, saturated fat has to be balanced with the mono- and poly-unsaturated varieties or it does cause problems. But excess sugar is the worst, and sugar substitutes, oddly enough, are almost as bad, because the insulin system kicks in when we taste sweetness, regardless of source. An insulin spike causes fat to be deposited.
During my last ten years working, I got in the habit of drinking about a liter of sugar-free cola daily (Pepsi Max had the best taste). Upon retirement I stopped drinking soda almost entirely, and lost 15 pounds. At first I thought the weight loss was because I was under much less stress; chronic stress also causes weight gain. But now I think it is probably at least half due to stopping my soda pop habit.
After nine chapters of the science of fat—and fascinating science it is—the last four chapters are the "how to" section. The author is a woman, she is descended from an ethnic group in India that endured repeated famines for millennia, and both of these work to make her metabolically fitted to gain weight and hold it, waiting for the next famine. She has also done a certain amount of yo-yo dieting. Guess what? If you have never been overweight, you have a metabolism that matches the calculations at sites such as the Basal Metabolic Rate Calculator. There I find that my basal metabolism is about 1,750 Cal/day, with a dietary intake need of between 2,400 and 3,000, depending on how active I am. Were I female, these numbers would be 1,550, 2,050 and 2,650 (Note: I rounded the numbers from the overly-exact calculations. Also, when I write Calorie, I refer to kilocalories. The calorie of physics is 1/1000 of a Calorie).
What if you have dieted, and regained your weight? Your fat system changes, permanently, so that maintaining the weight now requires fewer Calories, a lot fewer (20-30%). So, you struggled with a 1,200 Calorie per day diet and lost 25 pounds. You used to eat 2,400 Calories daily. If you go back to a 2,400 Cal/day diet, you'll gain it all back, and then some. You'll even gain it back, more slowly, at 2,000 Cal/day! If the BMR Calculator says your dietary need at the weight you want to maintain is 2,250 Calories, you'll actually now barely be able to hold the new weight at 1,700 Cal/day. That is a cruel fact of weight loss-and-regain.
Chapter 12 is titled "Fat Control II: How I Do It". Dr. Tara eats no dinner. Ever (hardly ever!). She chronicled, almost pound-by-pound, how she lost a certain amount of weight over about a year, and how she did it using a "partial fast" of no food intake for 18 of the 24 hours a day, and small high-fiber meals in that 6-hour "eating window". She also boosted her activity level, mightily. She recommends 5 workouts per week of 45 minutes' duration, sufficiently vigorous to make us sweat and have a hard time talking (none of this treadmill-walking while holding a conversation on the phone!).
I decided to check something. I used the short-form Longevity Calculator at Wharton twice, changing one answer between runs. The first time, I put "1-2 workouts per week", the second "5+ workouts per week". My life expectancy in the first instance is 91, and in the second 92. In either case, the tool reports that I have a 75% chance to live beyond age 84. Going back and changing activity to "rarely" returned a life expectancy of 90. Hmm. I am 70 now. If I hold up, and am able to do those vigorous workouts 5x/week, I'd spend an extra three hours weekly working out. That is 156 hours yearly or, in 15 years (until age 85, when I'd probably have to slow down!), 2,340 hours. My waking hours in one year are 6,570 (I sleep 6 hours on average, in spite of trying to stay abed longer).
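Spelled out, the arithmetic is just this (nothing here beyond the numbers already in the paragraph):

```python
extra_hours_per_week = 3            # 5 vigorous workouts vs. my current 1-2
years = 15                          # age 70 until 85
hours_per_year = extra_hours_per_week * 52
total_hours = hours_per_year * years
waking_hours_per_year = 18 * 365    # sleeping 6 hours a night

print(hours_per_year)                        # 156
print(total_hours)                           # 2340
print(waking_hours_per_year)                 # 6570
print(total_hours / waking_hours_per_year)   # ~0.36: about a third of a waking year
```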
So I can gain another year of life if I spend about a third of it working out. Would I be healthier? Certainly, as long as I don't tear up my body doing all those workouts. I'd have to get into it gradually. So it is likely that those 15 years would be pretty good ones. On the other hand, it would have to go hand-in-hand with less eating, meaning I'd be living with being chronically hungrier. That is not an easy choice, but this is the kind of cost/benefit analysis we need to do. Unless the FDA approves an economical form of leptin treatment to help us manage appetite, it's the best hope I have of being svelte again. That's mildly depressing.