Monday, September 18, 2017

Fake news isn't new

kw: book reviews, nonfiction, essays, science, sociology

A headline or tweet is too small to convey anything useful. When Dr. Feynman received the Nobel Prize in Physics half a century ago, a reporter asked him, "Can you briefly tell me what you did?" He replied, "If I could tell that to you in one minute, it wouldn't be worth a Nobel Prize."

If everyone had to learn the basics of the sciences to graduate from high school, as was once the case, and learn the kind of critical thinking required to "do science", maybe the American populace would be harder for quacks, charlatans, and a dishonest media machine to manipulate. And, just maybe, the purported "national leaders" we call Senators and Representatives wouldn't make so many utterly boneheaded decisions whenever a scientific fact is involved. The "critical thinking" taught in this generation's high schools would be laughed out of Plato's Academy.

Into the fray wades Joe Schwarcz, a chemist who writes best-selling books, one after another, that frequently discuss how most Americans and most purveyors of the news get science wrong nearly all the time. His latest is Monkeys, Myths, and Molecules: Separating Fact from Fiction, and the Science of Everyday Life.

There is no useful way to summarize 65 essays as diverse as those I find between these covers. I'll just touch a point here and there:
  • "A Tale of Two Cantaloupes" in the section "Swallow the Science" discusses first an outbreak of Listeria in 2011, carried by feces-contaminated cantaloupes, that eventually killed 35 people. Cantaloupes sit on the soil at they ripen, making them particularly prone to harboring infectious bacteria if "natural" (manure-based) fertilizers are used. The second cantaloupe saved lives: during the research to turn penicillin from a laboratory curiosity into an industrial scale medicine, in 1941 a particular strain of Penicillium that makes cantaloupes rot was found to produce the antibiotic with a concentration ten times that of other strains of the mold. The principle developed here is that context matters.
  • "Capturing Carbon Dioxide" in the section "Chemistry Here and There" looks beyond the technologies of snagging the gas from smokestacks and such "emitters". The technology is well known. Its costs are inescapable, about 20% of total energy production. Than what do you do with millions of tons of this gas? The author discusses numerous things that we can do with CO2, such as making soda pop, using it to feed algae for biofuel production, and making chemical intermediates. But these don't add up to enough "uses" for the stuff to use up the supply. We burn a lot of fuel! Pumping it into the ground has its own problems. Besides the difficult matter of ensuring it won't just leak back out, the recent rash of earthquakes in places where "fracking" for oil is being carried out show that shoving anything into the earth in large quantities can have wide-spread and possibly devastating effects. Y'gotta think things through.
  • Several of the essays discuss the trouble folks sometimes cause by taking the results of tests done on mice and extrapolating them to humans. "Of Mice and Men" in the section "Stretching the Truth" is an example. A study had shown that intense sessions of treadmill running made changes to muscular and molecular structures that were not produced by longer sessions of less intense activity…at least for mice. Mice are convenient. You can work them half to death for a week or so, and then kill them and dissect their muscles to figure out what might have changed. Can't do that with people. As it happens, there appears to be a threshold in this effect, and it is probably so high that very few people ever work out with sufficient intensity. This essay takes a side trip into the possible effects of chocolate on Alzheimer's disease. This was also based on mouse work. But it was even more indirect: nobody fed cocoa to mice or to people. They dosed mouse nerve cells with various cocoa extracts. I guess there was enough of an effect that exciting headlines and review blurbs could be written. But nothing is yet known about whether the chemicals so tested will cross the blood-brain barrier when you EAT chocolate, rather than injecting it directly into the brain. Oh, well. The point here is, you need to determine what was actually found out, before concluding it is anything useful in the real world.
Dr. Schwarcz has wide-ranging interests, and his essays cover topics from acupressure to vitamin deficiencies (and how they were discerned). I hope many folks read this book, and at the very least learn a little more caution about news headlines that tout "discoveries" that soon drop out of sight.

Sunday, September 10, 2017

A must-read if you care about your internet presence

kw: book reviews, nonfiction, internet, privacy, security, self help

The subtitle of the book should catch your attention: "Are you naked online?". Are you? If so, what might you do about it? Ted Claypoole and Theresa Payton drew on broad experience in the online security arena, including Ms. Payton's service in the White House, to write Protecting Your Internet Identity: Are You Naked Online?. Start by reading this book.

Privacy as we once knew it is a thing of the past. The security of everything online is similarly threatened. It doesn't have to be that way. I took certain pieces of the authors' advice, and before noting the results, I should perhaps explain something that may be a little uncommon about me.

I decided from the time I started this weblog that I would go to certain lengths to divorce its identity from my own. There are a few clues here and there in my posts, so a persistent and diligent person could track me down. I tend to trust in "security by obscurity", as compared to more technical means. So these results might indicate how successful I have been.

Of course, the authors recommend that we all fire up a browser window in Private (or Incognito or Stealth or whatever) mode, and search our name. In my case, for historical reasons, there are six variants of my name, and two variants of this blog's name. But the search was not nearly as arduous as I expected. From the longest to the shortest ways to look up my name, always putting the entire thing in quotes (but none of those ways using a first initial rather than a first name):
  • 4 text hits, 3 of them reporting my first marriage in various newspapers.
  • 1 text hit.
  • Though Google initially said 279, there were 17 actual hits, and 12 were about me. There was also one picture of an ancestor of mine in an Image search, but none of me. More briefly for the rest...
  • 164 at first, then 19 of 21 text hits.
  • 1,330 at first, then 31 of 35 text hits, plus 3 of about 170 images are actually me.
  • 2 of 532 text hits are actually about me.
And for the blog names:
  • "polymath07": 3,930 at first, but only 33 if you look at returns pages. Hundreds of pictures in the Image search, most from this blog, in which I frequently post pictures.
  • "polymath at large": 7,590 at first, but only 26 actual hits. Many, many pix, nearly all from this blog.
All in all, the pages Google finds about me are nearly all positive, and of course the blog post returns are as I wish. I'd call that success.

Now, from an open browser page, I went to the Open Data Partnership "choices" page, which immediately ran a status check. It returned a long list of entities that either are or are not modifying ad choices based on my browsing behavior. Those who are, that I recognized, include Adobe Marketing Cloud, Amazon, Experian, Google and Microsoft. Some that are not doing so include LinkedIn and Ziff Davis (publisher of many magazines including PC Magazine, to which I once subscribed). One may easily opt out of all of their chicanery, but I have learned instead to do most searches for "stuff" in Private mode.

Private mode isn't perfect, so if you want to avoid, or at least confuse, advertisers and their tracking gimmicks, page 78 has a list of nine suggestions such as the Blur feature by Abine. You can also analyze your own online profile—the real one, not any of the ones you created yourself—using Spokeo, for example.

Why should all this be necessary? If you are old like me, it doesn't matter as much, but think of a teenager whose entire online presence is rife with teen attitude, complaints about parental restrictions, kidding and teasing (and worse) of "friends" and others, and the general sort of things you'd expect from kids who don't yet realize they are mortal. Fast-forward five years, when they are applying to a college, or ten years, when they are applying for a job. Colleges and prospective employers track down all the social media you've been using, and they are better at it than you think. Just changing your name on Facebook or Twitter isn't enough. If your likeness appears anywhere, a single well-composed image of your face can turn up a lot through Image Search in Google, at the very least.

Or maybe you are a 35-year-old trying to build a business who has attracted the ire of a competitor. Will the competitor create an account somewhere in your name and use it to publish inflammatory and defamatory material that would drive away customers who stumble across it and think it is you? An entire industry of Reputation Management has arisen to address just such scenarios. Even if your competitor didn't do you dirt, maybe your teenage self did already, and unpleasant traces remain of someone you once were, but no longer are. The internet has a longer memory than a jilted spouse! To many folks, what you were then, perhaps you still are, under that polished veneer.

This naturally leads to a section on guiding your children through their early years as digital natives. It will be hard work to keep them from shooting their future selves in the foot, but it is necessary.

Another notion occurred to me: Phase your life, and use a different online avatar and screen name for each phase. Upon entering Middle School, a preteen might post a sign-off in a soon-to-be-unused Facebook account, saying "Goodbye, I'll go silent now. Catch everyone later, and elsewhere." Then she can use a new version of her name to start a new account, and gradually import old friends, but only after they have undergone a similar transition. Hard as it may seem, it is best to discard "friends" who don't see the value in this approach. The end of High School is a good time for a similar changeover. Other phases come to mind. Think it through.

Also, such a time is a good one to go through the abandoned account and delete posts that will cause trouble to the "new you". Of course, the Internet Archive will still have them in its "wayback machine", but you cannot cleanse everything. That's why it is best to keep your most private thoughts out of the ether entirely. When you are musing darkly, the Cloud is not your friend! The Google Docs app isn't totally secure; nothing online is.

A lot of this is like getting a better lock for the front door of your house and a better security system. It doesn't guarantee the house won't be robbed, but it makes you a harder target, so most thieves will pick someone less diligent to burgle. And that's the best advice for anyone concerned about their online privacy and security. Take a little forethought to be a harder-than-average target. It is worthwhile, and these authors are good guides to doing just that.

Monday, September 04, 2017

Toothbrush support Life Hack

kw: life hacks, toothbrush support, travel

Here is a little item I haven't seen in any of the "life hack" viral videos:


When staying in a hotel, we never see a toothbrush stand. We had been using those flimsy plastic cups to hold the toothbrushes off the sink surface, but they tip easily. Then we hit upon an easy solution: half fill the cup with water. Now the cups are stable.

By the way, we also keep the plastic bags the little soaps come in. (They are behind the left cup). If we like the soap (some brands are too sticky to wash off) we let it dry a little and put it back in the bag to bring home. If we don't like the soap I get out a small bar I carry in the suitcase. The hotel throws the soap away if you don't take it.

I'll often try out the shampoo they provide and if it is good I'll take that also. They are small enough to carry in carry-on luggage for plane flights.

We had a holiday weekend but the spiders didn't

kw: blogging, blogs, spider scanning

I didn't log in for several days because of the Labor Day weekend. Here is what the Stats look like for the past week:


I wonder what the Russians are so curious about this time? Not that it matters much…

Thursday, August 31, 2017

Student Loans Exceed Credit Card Debt

kw: book reviews, nonfiction, lending, student loans, debt

A good friend of mine was "killing himself by degrees" prior to age 30: he has four degrees, the familiar BS - MS - PhD in science, and an MS in Computer Science. He married a woman with two degrees in Fine Arts (and she is indeed a fine artist). They entered upon a new marriage and new careers with loads of student debt. However, he landed a job at the company I worked for, in a well-paying position, so in addition to making payments on their student loans, after a couple of years they were able to buy a house. It took them until their mid-forties to pay off all their student loans. I count them lucky.

Equally lucky are those who at least partly "work their way through". They finish a BS or BA degree holding debt that is no more than half a year's pay at a reasonably good job, say, $25,000 or less (the median household income in 2015 was about $56,500). Over ten years at low interest, the payment would be about $220 monthly. That's only about twice what many folks pay for cable TV.
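For the curious, that monthly figure follows from the standard loan-amortization formula. A minimal worked example, assuming a 10-year term and a nominal annual rate of about 1.25% (a rate I chose only to reproduce the ballpark figure above, not anything a lender quotes):

$$M \;=\; P\,\frac{r}{1-(1+r)^{-n}} \;=\; 25{,}000 \times \frac{0.0125/12}{1-\left(1+0.0125/12\right)^{-120}} \;\approx\; \$220\ \text{per month}$$

At a more typical federal rate near 4.3%, the same loan runs closer to $257 a month, which is still quite manageable; that is the point.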

A growing number would not consider themselves lucky in any way. They borrowed $30,000 to $80,000 to fund an education that prepared them for years of unemployment or a "desperation job" (a McJob) that doesn't pay them enough to rent a tiny apartment with three roomies that is "only" a 10-minute walk from a bus stop. Car? You gotta be kidding! They live in their parents' house and borrow their car.

Let's face it. The job market for English and History and Anthropology majors, not to mention majors in Women's Studies, Social Science, Art, and almost any other "liberal art" is next-to-invisible. A few hundred college professors in "humanities" departments, at most, retire yearly. Not all are replaced (the student body is shrinking). If you have a degree in History, nearly the only jobs in History are that tiny pool of college professorships, for which you need to get a PhD anyway, at even greater expense. Or you can get an MS in Education and try for a teaching job at a high school or middle school. That job market is pretty small also, and shrinking.

OK. So you're lucky. You got a degree in a STEM discipline: Geology, Physics, Engineering, Math, even Industrial Engineering. You're marketable. Whew! The book probably isn't for you anyway.

Let's take a side tack for a moment. I am about to have a pair of roofers fix some squirrel damage to our church's roof. They'll probably work most of a day. The estimator's bid is $875. Considering that some of that money goes to the company and some to the boss, still, each worker will get around $250-$300 for the day's work. That comes to $60,000 to $72,000 for a year, as long as their company can keep them fully employed. And you know what? Nobody in India or China or Mexico or Vietnam can take that job. You can't "outsource" roofing! Nor plumbing, painting, carpentry, electrical work, landscaping, paving, and a host of other "trades". None of them require a college degree. Most of them pay better than teaching school, which these days requires two degrees (the low pay for most teachers is an injustice I'll take up on some other occasion).

For me, the hero of America's prosperity is not the college professor in the ivory tower, but the people interviewed by Mike Rowe for the Dirty Jobs series on the Discovery Channel. Mike is an actor, but the people doing the work sure aren't. To me they are heroes. And it would be a good idea if people had to work at a tough trade for several years before they were admitted to college!

This is all a riff on recently finishing the book Game of Loans: The Rhetoric and Reality of Student Debt by Beth Akers and Matthew M. Chingos. I don't really have much to say about the book itself. It saddened me, but not for the usual reason. I know enough already to be sad about abusive student debt, which is why we struggled to get our son through college debt-free. No, I am saddened by the lengths to which the authors go to minimize the reality, and they all-too-frequently "blame the victim".

I am a political conservative, though that term is losing its meaning these days. But I am also a social liberal, in the old sense of making people free, of giving them a hand up but not a handout. "Handout" politics is actually socialism, and there are darn few non-Socialists in today's Democratic party. I am not sure I would count Toni Morrison as a socialist. She is definitely a social liberal, and when I heard her speak at the commencement for our son's BA degree, she said this (not an exact quote; it has been a few years), "Will the day come that people will look back on our generation with astonishment that we required the best among us to pay for their own educations?" I agree with this, in part. Our system needs to re-gear itself toward having every student exit the "halls of academia" debt free. But I think it is healthy for a student to have some skin in the game. That means doing some work to pay for part of their education.

It is unhealthy for someone to finish college with a degree or three or four, having never worked at a job people were willing to pay them to do. It is unhealthy for massively unprepared 17-year-olds to be dropped into a super high-school environment with no parental oversight, and with no understanding of the source of the funds hidden behind the meal card they swipe at the all-you-can-eat buffet many colleges now have in place of the "food service" I "enjoyed" at Kent State in the 1960's. (P.S., There was no "freshman 15" then. Most Frosh lost weight their first year of college.) It is unhealthy for students to find themselves faced with 20 or 100 options for "student aid", most of which involve debt under terms they haven't been educated to read with any understanding, and confusing qualifications that waste their time when they apply for things they can't get anyway.

I say "unhealthy". Debt-ridden college graduates are sick. Job-unprepared graduates are sick. We need a culture shift and I am not sure how to even describe it.

The authors of Game of Loans decry the difficulty of finding information about the cost-benefit ratio of most college degrees. This is true: Congress has passed laws specifically forbidding the gathering of such information! Guess which lobbyists supported those laws? But we don't really need information in such detail. Just look at the job market. Go to any employment agency and ask for a breakdown by job type.

Oh, I forgot for a moment: If you have read this far you are not likely to be a Millennial, but if you are, you never "go to" any such place as an employment agency. You want everything online. OK, go to the Bureau of Labor Statistics website, the Occupational Outlook Handbook (OOH) (https://www.bls.gov/ooh/). Dig around and find out whether there is much future for the kind of job you "sorta want" (I know, Millennials are practically free of passions). Here are a couple of examples I found by digging around:
  • Fine Arts or Crafts. About 50,000 jobs held in 2014. Growth rate less than 2% (or about 1,000 new jobs per year). Half of the jobs are "self employed". Median pay is under $49,000/year. Digging elsewhere we find that about 100,000 new BA's in Fine Arts graduate yearly. Only 1% will be able to make it pay, and half of them will spend more of their time running the business than "making art".
  • Environmental Scientist. About 95,000 jobs held in 2014. Growth rate 11% (~10,000 new jobs yearly). Median pay $69,000/year. I didn't find stats on graduation rates. But there are about 20 times as many jobs available as there are for artists.
  • Carpenter. Just under 1,000,000 jobs in 2014. Growth rate 6%, or about 60,000 new openings yearly. No college is needed but it usually takes 3-7 years in an apprenticeship program to become a Journeyman and earn pay in the mid-$40's or more.
I picked the first item because I have a young friend who is quite a good artist and illustrator. He wants to work in animation, even Animé. That's a smaller field than fine arts in general. There is little hope that he will ever be anything beyond "self employed" (struggling/starving artist working nights at a McJob to pay rent and buy beans to eat).

The biggest and most important educational innovation that could be performed for America would be to teach our young people the meaning of "employable": You must be able to do something people are willing to pay for. Period. If you need college credentials to get such a job, dig around in the OOH web site above for a dose of reality. Is it worth $30,000 in student loans to get a ½% chance of paid employment as an artist? Or is that four or more years (5-6 is common now) better spent in an apprenticeship program for carpentry, or as an electrician (two-thirds as many jobs as carpentry, but better pay)? Get a part-time job teaching art and making art at a private school of the arts (like I did with music).

I am in favor of programs advocating trades. I read that three million jobs in the trades are just waiting for competent workers to fill them. People go to college for many reasons, but I suspect for many of them, it is that they don't want to sweat on the job. The Bible has two things to say about that:
By the sweat of your brow you will eat your food until you return to the ground, since from it you were taken; for dust you are and to dust you will return. — Genesis 3:19
For even when we were with you, we gave you this rule: “The one who is unwilling to work shall not eat.” — 2 Thess. 3:10
These two passages are the basis of social conservatism. But they also highlight a problem with the education-versus-work culture in America today. Our youngsters are told (as my generation also was told) that they need a college education to get a "good job." But what is a "good job?" One that doesn't involve bodily sweat? When a plumber's hourly pay is greater than that of the accounting clerk going blind at a "desk job", which one has the better job?

Colleges charge 'way too much because their "services" are in demand. That demand is part of the problem. I have been sorry to see the demise of most of the nation's Vo-Tech and Trade Tech institutions, while we churn out tens of millions of unemployable college graduates who think they "deserve" a better job than driving a backhoe. Backhoe operator is a pretty skilled occupation! It frequently requires problem solving skills that would surprise you.

Parents of students entering high school, and high school students: Think about what you really want to do. Find out how likely it is that someone will pay good money for that to be done. The fastest growing occupations for the next decade or so will be in personal care and health care for all of us aging Boomers! Think about that. Construction trades are big right now but they may enter a period of decline, because Millennials, today's twenties and thirties, aren't so much into buying McMansions, compared to Generation X. And do yourself a favor. Unless you have both the love and the talent for a top profession such as medicine (which Obamacare is destroying), take off a year or three from education and work in a trade before deciding on a college major, if any. Examples from my experience:

I am one of four brothers, and our life arc has been thus:

  1. Me, the eldest. Majored in Chemistry, switched to Physics, finished in Geology after 7 years. Two of those years I worked full time to pay my way, and I worked part time during the rest of college. Then I worked as a Draftsman who also did computer coding. Returned to graduate school at 32, getting an MS at 38 (another 7 years), working my way through with teaching and consulting. Worked as a coder until retirement at age 66, and was never paid a nickel to do any geology. I work part time in retirement, more by choice, but the added income is nice.
  2. Majored in Physics and Art History, graduating in 4 years. Worked his way through school as an electrician's apprentice. A calligrapher and carver, he worked as a "starving artist" for 20+ years, making ends meet as an occasional coder. Returned to school and got an MA in History and a PhD in Archaeology by age 50, working his way through as a book illustrator. Now a college professor. Age 66, with no end in sight.
  3. Majored in Mechanical Engineering. Worked in Environmental Equipment design until company folded when he was 60. Now works as a Maintenance Tech.
  4. Didn't finish college. Tried various "management training" type jobs with friends, but best pay has always been handyman and home remodeler. He is good at it. Age 62, with plenty of work and quite good pay.

None of us had college loans. We would have floundered had we had such debt to pay off.

You may wonder why I didn't really review the book. That is because it misses the point so badly.

Tuesday, August 22, 2017

Hi, Russian spiders. I'm still watching!

kw: blogging, blogs, spider scanning

The various tools on the Stats page in Blogger just showed me that these 181 hits from Russia all occurred in the 5:00 AM hour. I presume that is Pacific Daylight Time. That would be about 3:00 PM in Moscow and 7:00 PM in Novosibirsk, a more likely source.

If I decide to pay for the more precise analytics I could pinpoint the city, but I'll leave that to others. Considering that this blog has quite low popularity, I wonder whether the same spiders are making a big impact on really popular sites like the Freakonomics guys'. Is anyone else out there even checking their stats?

Sunday, August 20, 2017

Your English isn't your grandfather's English

kw: book reviews, nonfiction, language, words, linguistics, historical linguistics

I find John McWhorter fascinating: he digs out so many lovely examples of language usage, and writes about them so engagingly… In a prior book I reviewed in 2009 (Our Magnificent Bastard Tongue) he brought to our gaze the numerous chunks of other languages that were dragged together almost wholesale to produce what we today call "English". Now in Words on the Move: Why English Won't—and Can't—Sit Still (Like, Literally), he provides an antidote to the amount of energy some of us "seasoned citizens" give to decrying the trends of change in language usage (Like, you know, gag me with a spoon if I have to keep hearing that!).

That last string of phrases caused much angst in my generation when "Valley Girl" (Val Gal) talk sprawled across the nation like a lanky teen on a love seat. In particular, "like" has gone from a word meaning (as a verb) "to desire or feel affinity to" or (adjective, adverb, etc.) "similarity", into a "piece of grammar", no longer really a word, but a functional sound that has morphed from the "similarity" end of things to at least three or four uses, most particularly a kind of bullet point, such as an example on page 215:
"So we're standing there and there were like grandparents and like grandkids and aunts and uncles…"
"Like" has become more a signal than a word, and this isn't new, it started almost a century ago, some 30 years before the Beatniks began to say, "Like, wow, man!". The new "like" has gathered new uses to the extent that McWhorter touches on it in three different chapters and spends a dozen pages on it in his last chapter, "This is your brain on writing." This word is an example of several he discusses, that are grammatical markers and have become very hard to explain as words. They are "grammaticalized." Consider what "well" or "so" might mean when used to begin a sentence. Could you explain them to an inquisitive five-year-old? Thought not!

Gliding back to the first chapter, "The FACEs of English", we find a long discussion of the acronym FACE, used to describe the uses of grammaticalized words such as "well" or "so", which a linguist would call "Modal Pragmatic Markers" or MPM's. Here "pragmatic" most closely means "personal". Our author states that a multitude of such words are needed so that we don't just speak English, we can talk.

This brings us to a major theme of the book, the difference between written and spoken (or "talked") English. Firstly, of course, we use fewer grammaticalizations when writing. I tend to write at full speed as though I were having a conversation with you, so I almost began this paragraph with, "Now, …". Were you and I really talking together, that's how I would have said it. But even writing full speed at 50wpm or so, I edit as I go and make the written form a little more compact, and, I hope, readable. (Those who find me long-winded are saying, "Oh, really!")

He dwells much more on spelling. For example, written English has a rule for the silent, terminal "e": it makes the vowel before the preceding consonant into a long vowel. Thus we have "mad", meaning crazy or angry, in which the "a" is pronounced as flatly as possible and is often called "short A"; and we have "made", meaning constructed or produced, in which the "a" is pronounced almost like "eh-ee" and is called "long A". The author tells us that nobody would design such a system from scratch, and that it had to arise from some process. Indeed it did. He discusses the "Great Vowel Shift" on pages 152-159, using a map of the placement of vowels in our mouth to show how the "short A" of 5 to 9 centuries ago morphed into a longer "E" sound and then into the "long A", and that a final "eh" sound at the end of many words was gradually dropped. Thus, "made" was once pronounced "mah-deh", as the spelling suggests, shifted through "meh-də", with a much shorter final syllable, shown by the schwa (ə), which is more of a tiny grunt than a vowel, and then into the one-syllable word of today. The Great Vowel Shift moved all the vowels about, so certain words that once rhymed no longer do. "Water" and "after", in "Jack and Jill", used to rhyme perfectly. No longer.

Dictionaries began to be written for English very early in the Great Vowel Shift. While they didn't exactly entomb all the spellings in stone, they did tend to hold things back, and today, dictionaries of "modern English" have to trot to keep up, having been rendered out of date by our movable language just in the time needed to research, typeset, and publish them. By the way, usage of the words "typeset" and "typesetting" is dropping, having peaked in the 1980's; they are being overtaken by "key in" and "keying in". As computers get better at speech recognition, those will drop off also.

Here is a side point that I enjoyed. Do you ever hear the expression "willy-nilly"? I figured out long ago that it came from "will I, or nill I", but I wasn't sure just what "nill" meant. Dr. McWhorter has the answer. A millennium or so ago, negating words was done by adding the prefix "ne-", so to "not will", or not desire, something was to "ne-will" it. To say you don't have something, you would say, "I ne-have it", but by Chaucer's time it would have been "I nave it", with "nave" pronounced "nah-veh" or even "nah-və". And Chaucer spelled it næbbe. It seems the consonants have shifted as well, but the author has left that for a future book, I reckon.

I'll forbear further nerdifying. It is a delightful book, and an incredibly informative one. I am thinking of giving a copy to a friend who is a linguist, but primarily of Chinese, not English, to see what similar trends might have occurred in Mandarin, which the Chinese acknowledge is not a written language at all: the "written Chinese" language is one that nobody speaks, but they all know how to interpret it into whatever dialect they grew up speaking.

Thursday, August 10, 2017

Amidst the hype, an Eclipse book of value

kw: book reviews, nonfiction, science, history, eclipses

OK, let's get the ooh-and-aah stuff out of the way first. This image shows the eclipsed Sun in an intermediate state: a medium amount of corona and several prominences are visible. The solar prominences are the red bits around the rim of the Moon. The image was enhanced by unsharp masking to show more of the corona, which has a sharp drop-off of intensity with distance from the solar photosphere (the "surface").

Viewing an eclipse without magnification, you are unlikely to see the prominences, so it helps to have a telescope set up ahead of time, its clock drive running, ready for action the instant that second contact occurs. A magnification of 30-to-60x is sufficient. This is about how the Sun would look at 30x.

Perhaps you know that the Sun has an 11-year cycle of activity. During periods of low activity, it is more likely to look like this (and this photo was enhanced also). This is an older, black-and-white photo, but I suspect few prominences would have been visible in a color image.

Interestingly, even in quiet years the corona may be quite extended, though it tends to be smoother. 2009 was a very quiet year: according to records at spaceweather.com, the Sun's face was free of sunspots on 260 days, or 71% of the year.

At the peak of a sunspot cycle, sunspots are typically visible every single day, or very nearly. Sunspots are evidence of the "wound-up" condition of the magnetic fields inside the Sun. Prominences and flares are triggered by magnetic reconnection events.

A large, active corona is seen here. Looking carefully (click on the image for a larger version), you can see prominences. The rather bright blob at right might be a coronal mass ejection. When one of these occurs in the center of the Sun's face, we can expect a magnetic storm on Earth in 2-3 days' time.

To see what the outer corona looks like at any time, look at the LASCO images in the Solar and Heliospheric Observatory (SOHO) satellite's image and video gallery. One cannot see the close-in corona because the coronagraph's occulting disk is about two solar diameters across. Sometimes I've looked at a video of the past week or so and been able to watch a comet "auger in".

Now, to the book. John Dvorak is an exceptionally good writer, with much of value to say, and in a time of extraordinary hype about the solar eclipse that will occur across the entire U.S. in just 11 days, he has produced a valuable book of lore, history, and scientific explanations: Mask of the Sun: The Science, History, and Forgotten Lore of Eclipses.

While most people through history have viewed eclipses of both Sun and Moon as dramatic omens of misfortune, there have always been a few wiser folk who realized that, infrequent as they are, they are subject to natural laws. While a total solar eclipse is visible over a small area, a swath no more than 112 km across, partial eclipses can be seen as far as about the diameter of the Moon (3,473 km) on either side of the central path…or a bit farther because of the curvature of the Earth's surface. Thus, if there is a solar eclipse going on, the majority of people on the sunlit side of Earth at the time will be able to witness at least a partial eclipse.

Since the sky doesn't darken much during a partial solar eclipse, how were they noticed in antiquity? Think pinholes. The crescents seen here were in shadows cast by leaves of a tree. If you are used to seeing the round dots on the ground or a wall in a tree's shadow, then you'll likely be drawn to the view when they change shape. Pinhole viewing of partial solar eclipses has been recorded over at least the past 2,400 years.

So, although an average location on Earth experiences one total solar eclipse about every 330 years, a partial eclipse is likely to be seen about every 2-3 years from almost anywhere. With a bit greater frequency, almost anywhere you live you'll be able to see an eclipse of the Moon almost every year, because they are visible from an entire hemisphere at once.

In classical times, one of the seven required subjects of a classical education was Astronomy, which actually meant learning to gather naked-eye observations and make the calculations to determine the motion of the Moon and the naked-eye visible planets (Mercury, Venus, Mars, Jupiter and Saturn), primarily for astrological purposes and to (very roughly) predict eclipses. Much of Mask of the Sun discusses the ebb and flow of lore and superstition about eclipses, both lunar and solar. Kings and emperors employed skilled mathematicians to predict eclipses, because unfriendly (or hype-engrossed) persons were making the same predictions, and then predicting the likely demise of whomever was in power at the time. A leader with better advance knowledge could then take advantage of public magical ceremonies intended to stave off the disaster and survive the eclipse, which really meant to stave off the likelihood of a revolt.

Eclipses gained great practical value during the "age of sail": they can be used to determine longitude. It isn't easy, but it was too valuable an aid to navigation not to perform. First, one must have a good (relatively speaking) time-measurement device. The water clocks and other mechanical timekeeping devices in use before the pendulum clock was invented in 1656 (by Huygens) were better than counting heartbeats, but not by much. You, the seafaring captain intent on determining the location of some distant port, would contract with an astronomer at home to determine the time at which certain critical events occurred, and their location in the sky, usually during a lunar eclipse. This requires a bit of explanation.

The shadow of a planet or satellite has two parts, the Umbra and the Penumbra. When you see a total solar eclipse, during the time of totality you are standing inside the Umbra. Before and after totality, and in any place where a partial eclipse is witnessed, that is in the Penumbra. There are thus four contacts that delimit a total solar eclipse:

  1. The Moon first impinges on the edge of the Sun.
  2. The Moon fully covers the whole Sun.
  3. The Sun first begins to exit from behind the Moon.
  4. The last bit of the Moon exits the edge of the Sun.

The same four contacts pertain to a total lunar eclipse, except they refer to the impingement of first the penumbra of Earth's shadow, then the umbra, shading the Moon, and then the Moon's exit from first the umbra and then the penumbra.

By taking readings with a sextant or octant of the Moon's position in the sky when each contact occurs, and noting the time of each as exactly as possible, both you and the astronomer back at home gather data that can be used to calculate the longitude difference between the place you were and your home port. Of course, latitude is much easier to measure in the Northern hemisphere by sighting the north star. Seeing the orientation of the Big Dipper lets you correct for the star's offset from the actual pole, which is presently about one degree (Because of Earth's precession, Thuban in the constellation Draco was the star nearest the pole 5,000 years ago, when the pyramids were a-building in Egypt). Prior to the late 1700's, when very accurate marine chronometers were invented, it took months to learn "where" you had been! And then you might still be off by a few degrees (each degree is 60 nautical miles, that is 69 mi or 111 km).
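The arithmetic behind the method is simple, even if the observations were not. Earth turns 360° in 24 hours, so a difference in local time between two observers translates directly into a difference in longitude:

$$\Delta\lambda \;=\; 15^{\circ}\,\text{per hour} \times \Delta t, \qquad 1^{\circ} \approx 60\ \text{nautical miles} \approx 111\ \text{km at the equator}$$

A timing error of just four minutes therefore shifts the computed position by a full degree, which is why eclipse-based reckonings could still leave you off by a few degrees.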

During a total solar eclipse stars become visible. In 1919, this photo was taken and the two stars marked with little dashes were among those used to verify Einstein's general theory of relativity.

Spectroscopy of the solar corona was first done in the 1860's, and led to a paradox that has not yet been resolved. The spectroscope had revealed that the Sun's photosphere is at a temperature of about 5800K (about 10,000°F), and later that the middle part of the chromosphere, a thin pinkish layer just above it, is at about 3800K (about 6,400°F). But the corona had a puzzling spectrum that wasn't figured out until the 1930's and 1940's: its temperature ranges from one to three million kelvins! That's two to more than five million °F.
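For anyone who wants to check those conversions, the relation between kelvins and degrees Fahrenheit is:

$$T_{\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{K}} - 459.67, \qquad \text{so } 5800\ \mathrm{K} \approx 9{,}980\,^{\circ}\mathrm{F} \text{ and } 1{,}000{,}000\ \mathrm{K} \approx 1.8\ \text{million}\ ^{\circ}\mathrm{F}$$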

Before I close I must mention the two central solar eclipses I have seen. The first was July 20, 1963, when I was not quite 16. The Moon's shadow crossed from northwestern Canada to Maine. My family took a vacation starting nearly two weeks earlier, to Montreal and Quebec, and then on the 20th we crossed into Maine at a spot where the highway would be right at the center of the umbra. I had fitted a telescope with a projection screen, with which we watched from just prior to first contact until second contact. Then we looked at the sky to see the Sun and its corona. The hillside had a view to the northwest, and we saw the umbra racing toward us just before second contact. Seeing something, even a shadow, approach at 2,000 mph is amazing! Seeing the "hole in the sky" surrounded by a large corona was amazing! In just over a minute, it ended and third contact occurred. We saw the "diamond ring", the first bright ray of sunlight peeking through a mountain pass on the Moon.

The second was the annular eclipse that passed through Ponca City, Oklahoma, May 10, 1994, when I worked for Conoco. This picture shows the projection screen attached to my telescope, and the eyepiece is visible at the right edge. This is the same telescope I used in 1963, and I still use it. Annular eclipses occur when the Moon is in a farther part of its orbit, near apogee, and doesn't cover the entire Sun.

Conoco management gave everyone half the day off. School groups and others were invited on-site. A filtered video camera was used to broadcast the eclipse inside the buildings on TV monitors usually used for executive communications. At least twelve telescopes were brought onsite by Conocoans and a few others, and used, usually by projection, to show the Sun to groups of people. One friend of mine brought a large telescope fitted with a full-aperture solar filter, so you could look through his wide-angle eyepiece at a 100x view of the whole Sun. Now, that was an amazing view!

While the publication of Mask of the Sun was timed to take advantage of public interest in the solar eclipse that will be seen all across North America on August 21, 2017, it is not hyping the eclipse, but instead giving us a primer on the past and continuing importance of eclipses. For example, eclipses on Earth and elsewhere (notably, shadows of Jupiter's moons on that planet's cloud tops) are still one of the key ingredients for measuring planetary distances in the solar system. I have deliberately touched on only a few of the many delightful matters covered in the book. It is well worth reading by anyone with any level of scientific education.

Saturday, August 05, 2017

To survive, dig in

kw: book reviews, nonfiction, science, paleontology, zoology, burrowing, mass extinctions

Shortly after we moved to our house 22 years ago we bought some flat stepping stones for high-traffic areas in our yard, such as the path through a "gate" in a hedge. I dug these in to be an inch or so above ground level, a little lower than the mower blade at its lowest setting. Now, nearly all of them have sunk to ground level or below. Two examples are shown here. Is this just soil compaction from the stones being walked on? Not entirely. Wherever I dig in my yard, I encounter several earthworms in every shovelful.

Charles Darwin spent about 20 years studying earthworms, and using "worm stones" plus an ingenious measuring device attached to bedrock beneath, determined that bioturbation (the modern term) of the subsoil by earthworms caused the stones to sink by an average of 2.2 mm/year. Darwin's earthworms must have been very energetic. The "sink rate" for my stepping stones is closer to 1.0-1.5 mm/year.
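My own figure is just the rough arithmetic of that first paragraph:

$$\frac{\approx 25\ \text{mm (an inch or so)}}{22\ \text{years}} \;\approx\; 1.1\ \text{mm/year}$$

which is why I quote a range rather than a precise number.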

One of Darwin's worm stones is pictured in The Evolution Underground: Burrows, Bunkers, and the Marvelous Subterranean World Beneath Our Feet by Anthony J. Martin. Dr. Martin's thesis is simple: burrowing and other means of living below ground at least part of the time is so beneficial that many animals are burrowers. I don't know if you could say "most animals", but that might be true (he doesn't say). Also, burrowers provide homes for other species that share their spaces. The author makes a good case, with numerous examples, that living at least part time underground enabled many animal species to survive the various nastinesses we call "mass extinctions".

The "big five" mass extinctions had such profound effects on both biology and geology that they mark geological boundaries (the abbreviation "mya" means "million years ago"):

  • Ordovician-Silurian boundary, about 444 mya. About half of species vanished, and about 85% of all animals died.
  • Late Devonian, 364 mya. About 75% of species became extinct.
  • Permian-Triassic boundary, 251 mya. The baddest of the bad, this one drove 96% of species extinct. All living things today are descended from the remaining 4%.
  • Triassic-Jurassic series, between 214 and 199 mya. By the end of this 15-million-year period, more than half of species had been eliminated.
  • End-Cretaceous, 65 mya. This is the best known, because it centers on an asteroid impact and led to the demise of the dinosaurs…or, at least, the non-avian dinosaurs. It is now known that birds are dinosaurs, or, if you prefer, birds are descended from theropod dinosaurs. 76% of species went extinct.

Many cases show that animals that were underground during the big smash, or whatever happened, were the most likely to survive in numbers sufficient to restore their populations afterward and become the ancestors of modern life. But before the first of the mass extinctions, there were big changes as animal life arose and developed, including the development of the first burrowing creatures. An odd group of animal species called the Ediacara Fauna did just a little burrowing, but were followed by the "Small Shelly Fauna" that burrowed more and deeper, and then the proliferation of hard shells that marks the beginning of the Cambrian period also marks the beginning of rather thorough bioturbation of ocean floor sediments.

The author shows the history of animal life from the perspective of an Ichnologist, a scientist who studies trace fossils. This picture, a 6"x8" section of a rock about 15" square, shows trace fossils on a rock I picked up from a sandstone bed near the base of the Morrison Formation in South Dakota, so it is about 150 million years old. This is a bottom cast; we are "looking up" at sediment that settled into tracks and shallow burrows in the late Jurassic sea bed.

Somewhat visible are ripples crossing from top right towards bottom left, showing that this was in rather shallow water. At least three kinds of tracks are visible, though I don't know what animal made any of them. Other dug-in structures are seen, or rather, their casts. Dr. Martin and his colleagues are experts in discerning the meaning of such traces.

Before digging into his subject, however, the author discusses "A brief history of humans underground." If you've heard of Cappadocia, you may know of the underground homes dug into the soft volcanic rock. That has been going on for several thousand years! Long before that, humans utilized natural caves, not only for shelter and burials but even for their art (think of the amazing art in the caves at Altamira and Lascaux).

While we tend to denigrate "cave men", thinking only Neanderthals lived in caves, the "art gallery" caves were painted by our species. When there were only a few humans worldwide, it makes sense to consider that many or most of them used caves and sometimes stayed in them for extended periods, not just during bad weather or extreme seasons. A cave is easier to defend from predators. And just as the burrows of gopher tortoises permit them to thrive in areas with tough winters, so caves shield those who dwell in them from climatic extremes. Indian Echo Caverns, in Pennsylvania about two hours from where I live, was the home of William Wilson from 1802-1821. The "Pennsylvania Hermit" stayed pretty well wrapped up most of the time, because the cave stays a nice, chilly 54°F (12°C) all the time.

There just aren't enough caves to go around, so now we build artificial caves we call "houses". One of the professors at South Dakota Tech had an "underground house" when I was there in the 1980's. It was technically a house built into a tight place between two rock outcrops. An underground house is nearly free to heat or cool, if it is in the "temperate band" across the world where average temperatures are between about 60°F and 75°F (16°C-24°C). The below-ground temperature near Rapid City, SD is closer to 47°F (8°C), so my professor had to insulate the excavation, pour concrete for the dwelling, and insulate more. South of Oklahoma in the U.S.A. an underground house would not need heating or cooling (just moisture control, perhaps!); in Europe, think Spain, Italy, Greece and Turkey, including Cappadocia.

This may become more pertinent in another generation, if the climate continues to warm. It will be even more pertinent when the "Holocene warming" that began about 12,000 years ago comes to an end and another 100,000-year Ice Age begins! Today's "global warming" caused by "carbon pollution" (an oxymoron; we are made of carbon and its oxy- and hydro-derivatives!) may actually delay an ice age by a century or so.

The most ubiquitous burrowers and tunnelers, humans aside, are invertebrates. Earthworms don't leave open tunnels; their burrows fill in behind them with the excreted feces from which they've digested key organic materials. But ants and termites produce long-lasting tunnels. Some of these have been studied by pouring in plaster or even molten aluminum. This cast of an ant nest is from leaf-cutter ants of Central America.

There is a surprising array of vertebrate burrowers, however. We are familiar with gophers and voles, perhaps, but certain birds burrow, such as kiwis, bee-eaters, and some penguins. The gopher tortoise, as its name suggests, is quite a digger, and its burrows shelter at least 400 species that are enabled to live in otherwise inhospitable places because of a tortoise's "hospitality".

The author also discusses the most amazing tunneler of all prehistory, the giant ground sloth. You might not think of an animal the size of a 4-door sedan as a burrower, but in southernmost Brazil there are hundreds, perhaps thousands, of burrows you could literally drive a truck through! The tunnels are 4-4.5 m wide (13-15 ft) and 2-2.5 m high (6.5-8 ft).

The last Brazilian ground sloths died (probably eaten by early Brazilians) about 12,000 years ago. They had used their strong claws to dig through soft, semi-cemented sandstone. The various species of giant sloth lived through numerous ice ages, having evolved about 23 million years ago, or perhaps earlier. Great bulk is itself helpful for surviving great cold, but burrowing confers an added advantage.

Biologists and paleontologists in general pay most of their attention to animals that lived above ground. True, finding and recognizing the fossil of an animal that died underground is more difficult. But there is so much going on beneath our feet, and so much of prehistory that took place underground, that we must realize that the livability of our environment is largely a result of these hidden lives. Scientists of all stripes would do well to take note.

Are we the cause of a great extinction being called, by some, the Anthropocene? If we are, it is mainly affecting the critters above ground. If we should drive ourselves extinct at some point, the "rulers of the underworld" will remain, and may hardly notice much difference. They will continue their ecosystem services as before, keeping a significant percentage of the subsurface a nice place to make a home.

Tuesday, July 25, 2017

How tech is changing business

kw: book reviews, nonfiction, business, technology, artificial intelligence, trends

My, my, what a long time it took me to work my way through this book! It goes to show that I still have a poor mind for business. During the latter half of my career in IT, the managers and even some supervisors would speak of the "business reasons" for doing one thing or another. One day I asked a manager named Carol, "What is a 'business reason'?" She replied, "It's something people are willing to pay for." The thought had never entered my head. I have always done things for reasons such as "it is interesting", "it will make this or that task easier", "it does things in a more excellent way" and so forth. Getting paid was nice, but it wasn't my focus. When I heard a new company president speak of having a "passion for profits", I sent him an e-mail explaining how I had always had a passion for excellence, and that profits seemed always to follow. His response was so disturbing, revealing such abysmal blindness to everything I find meaningful, that I immediately sought work in a different company among the DuPont family of companies, and luckily found one within a few months.

I am not sure what I expected once I saw the cover of Machine, Platform, Crowd: Harnessing Our Digital Future by Andrew McAfee and Erik Brynjolfsson. Something more techie than what it delivered, certainly. But the authors' application of technological trends to present and future business was sufficiently appealing that I read it all.

The three words that begin the title emphasize the subjects of the book, which is a follow-on to their book The Second Machine Age. These words outline three dichotomous trends that are driving businesses:

  • Mind and Machine
  • Product and Platform
  • Core and Crowd

The trends are toward the right, and it is uncertain how far each will proceed. I debated with myself whether to use "versus" rather than "and". But these pairs are not truly at odds; rather they are synergistic and supplementary to each other. For example, I built much of my career as a scientific programmer and systems analyst on discerning the appropriate tasks for the Machine to do, so as to free up people's Minds to do the things that we do better. From the beginning of the Computer Century (now about 70 years along), computational machinery has been called "mechanical brains", and the term "artificial intelligence" was coined barely a decade after ENIAC's tubes first lit up.

We now have pocket phones and nearly-affordable wristwatches that are millions of times as computationally powerful as ENIAC (this article includes notes on its speed of computation). But only within the past decade have "AI applications" begun to carry out tasks that are still – usually – done better by people and many animals. Many Sci-Fi stories bring us ideas of giant computers somehow becoming conscious more-or-less by accident (e.g., "Colossus" and "The Moon is a Harsh Mistress"). There is a reason for that. Nobody yet has the slightest idea how to define consciousness in any unambiguous way, and therefore, no idea how to write appropriate code to "do consciousness". To repeat myself, I define "genuine artificial intelligence" thus:
That a mechanism, electronic or electromechanical, carries out its own investigation, does its own research, and obtains a patent or at the very least has its patent application accepted by the U.S. Patent Office.
For the time being, the next generation or two at least, there will remain numerous "real world" tasks that minds will perform better than machines. The authors contend that nearly any repetitive task, including many now deemed "too creative" for a machine to carry out, will over time become the province of machine work, and that humans will be squeezed out. Will the day arrive when humans are no longer permitted to pilot an automobile? Cook their own meals?

The discussion of Product and Platform was harder for me to follow. Having a viable Product is the essence of a Business Reason for doing something. People pay for products, including those more squishy "products" we call "services." For example, technically, nursing care is a "service", but in the context of business, it is a product, delivered as a series of "service tasks" by a skilled person on behalf of another. Where does that fit into the notion of a "platform"? I think I understand that a platform packages products and services to make them easier for a producer to deliver and for a consumer to order and obtain. Will there one day be a platform like Uber for nursing care? I am almost afraid to look; it may already be out there. But there is still the need for the nurse-person (one day, a nurse-machine?) to physically do something to or for the person receiving nursing care.

Then, Core and Crowd. Hmm. I look on this as an expansion of Mind and Machine, where the "machine" has become a human-machine synergy we call the Crowd. I love the Citizen Science efforts out there, 73 of which (to date) are available under the Zooniverse umbrella. I have participated in about a dozen of them, and am currently active in the three that interest me most. A few years ago I classified more than 6,000 galaxies in one of the early Zooniverse projects. The machine part is the image delivery and questionnaire system. I and thousands of others (many minds) do the crowd part. The designers build in lots of redundancy, so as to spot errors and the occasional troll. The key to such projects is good planning and curation.

The authors focus on more business-oriented crowd projects. Their aim is to show that many untutored folks find innovative ways to solve problems that the "experts" would never think of. Very frequently, various "out of discipline" methods come together synergistically to do something ten or 100 times as well as the best that the "experts" had produced.

This principle comes home for me. Although I long aspired to be a scientist, because I was someone who nearly always wrote software for other scientists I had little occasion to publish; I wrote software to support the work that other scientists published. But the key paper of mine that made it into a peer-reviewed journal (Computers and the Geosciences) applied some sideways thinking to the numerical analysis of stiff differential equations used to simulate complex chemical reaction networks. I mixed principles used by astronomers in orbital mechanics with methods devised originally by civil engineers. In my dissertation, I used, and described, another numerical method that applied descending reciprocals to Runge-Kutta methods so that linear equations (linear in the "Diff Eq" sense) could be solved to any order desired. It was just a little part of my research, but crucial for certain computations that were otherwise too lengthy to carry out on the mainframes of the late 1970's.
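
For readers who have not met a "stiff" system, the sketch below is my own illustration (not code from that paper or from the book): it sets up the classic Robertson chemical-kinetics problem, whose reaction rates span about nine orders of magnitude, and hands it to an implicit solver.

    # The Robertson problem: three species, rate constants spanning ~9 orders
    # of magnitude. Implicit methods (here, BDF) handle it comfortably; an
    # explicit solver such as RK45 is forced into absurdly small steps.
    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        a, b, c = y
        return [-0.04 * a + 1.0e4 * b * c,
                 0.04 * a - 1.0e4 * b * c - 3.0e7 * b * b,
                 3.0e7 * b * b]

    sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0], method="BDF",
                    t_eval=np.logspace(-5, 5, 11), rtol=1e-6, atol=1e-10)

    for t, (a, b, c) in zip(sol.t, sol.y.T):
        print(f"t = {t:10.3e}   A = {a:.6f}   B = {b:.3e}   C = {c:.6f}")

It shows only the kind of problem involved; the orbital-mechanics and civil-engineering hybrids mentioned above are not reproduced here.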

So, I have rambled a lot into technical areas, mainly to cover up my difficulties "getting" the business focus of the book. It is written as a self-help text, with summaries and guiding questions following each chapter. It is written for business managers and executives. It is well enough written to hold my interest, even where I was in over my head.

Not to end on a downer, but I must quibble: on page 271 it is stated that the "amino acids" are strings of the genetic bases A, C, G and T. Those who know how wrong this is, just take comfort in "the old college try" that McAfee and Brynjolfsson gave it, when they were even more out of their depth than I am in their realm of expertise. (Hint to others: ACGT make genes, which are translated into proteins, composed of amino acids that do NOT include ACGT. That is why it is called translation.)

Thursday, July 13, 2017

The most comprehensive course ever

kw: book reviews, nonfiction, science, astrophysics, cosmology, physical universe, galaxies

As a student of geophysics, I occasionally remarked that the subject's bailiwick was "from the center of the Earth to the end of the Universe." The same could be said for astrophysics. Geophysics and astrophysics are a kind of tag team, covering the same realm from different perspectives. Astrophysics deals in part with how stars forge the elements that wind up in planets, while geophysics deals in the main with what happens to those elements once they form a solid or semisolid body (e.g. a gas giant planet).

I have great interest in both subject areas, so it was a real treat to read Welcome to the Universe: An Astrophysical Tour by Neil deGrasse Tyson, Michael A. Strauss, and J. Richard Gott. The book is a distillation of material from a course taught by these three men at Princeton University, to non-astronomy students.
  • Part I: Stars, Planets and Life, was written (and I presume taught) primarily by Dr. Tyson with certain sections by Dr. Strauss.
  • Part II: Galaxies, was written (and presumably taught) entirely by Dr. Strauss.
  • Part III: Einstein and the Universe, was written (and presumably taught) entirely by Dr. Gott.
You could say that Tyson deals with stellar and condensed matter, Strauss with galaxies and their formation, and Gott with the gamut of cosmological theories. For me, given my lifelong love of reading astrophysical books, popular treatments as well as texts and monographs, there was little I would call "new to me." But these scientists are writing at the top of their form, and present their subjects in a most enjoyable way. I had certain takeaways from each author:
  • Chapters 7 and 8 [Tyson], "The Lives and Deaths of Stars", parts I and II, are a good summary of the different types of stars based on their masses, certain features of their internal dynamics that are a result of their mass, and the fate of each type. I did not note a discussion of the first stars, those that were entirely metal-free (Astronomers call all elements heavier than helium "metals", which is understandable from a statistical viewpoint: of the 88 natural elements beginning with lithium, and also the two synthetic elements among the first 92, all but 18 are metals). Perhaps it would have been confusing, because such "zero-metallicity stars" could not have had "careers" that fit well into the Hertzsprung-Russell Diagram that does such a good job classifying all known stars in the present universe.
  • Chapter 16 [Strauss], "Quasars and Black Holes", provides a clear summary of the spectral evidence that led firstly to the discovery that quasars are receding at phenomenal rates and are thus very distant (up to more than 90% of the way to the Big Bang some 13.8 billion years ago) and thus extremely luminous; and secondly that they must be powered by matter streaming into enormous black holes at the centers of galaxies. Nearly all quasars are more distant than a few billion light years. The closest is 600 million l-y. Quasars are the highest energy "active galactic nuclei" (AGN's), and since it seems that every galaxy hosts a supermassive black hole (from millions to billions of solar masses), any galaxy could host an AGN whenever a clump of matter finds its way to the galactic center.
  • Chapter 24 [Gott], "Our Future in the Universe", discusses what has happened to the whole universe since the Big Bang, and what is expected to happen, according to current theories. It is on a sort of super-logarithmic scale, highlighting 15 events ranging from the first 10^-44 second to (very approximately) 10^100 years in the future. In the text other possible events are mentioned, and one is as far off as a number of years described by a number with 10^34 zeroes! That number of zeroes equals the number of hydrogen atoms in about 17 million kilos (some 17,000 tonnes) of hydrogen, as the quick check following this list shows. There will never be enough paper to "write" it down.
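
Here is that quick check, using nothing more than Avogadro's number; the little script and its figures are mine, not the book's:

    # Back-of-the-envelope check: how much hydrogen contains 1e34 atoms?
    AVOGADRO = 6.022e23        # atoms per mole
    H_MOLAR_MASS_G = 1.008     # grams per mole of hydrogen atoms

    zero_count = 1e34          # zeroes in a number of the form 10**(10**34)
    mass_g = zero_count / AVOGADRO * H_MOLAR_MASS_G
    print(f"{mass_g:.2e} g = {mass_g / 1e3:.2e} kg = {mass_g / 1e9:.0f} kilotonnes")
    # Prints about 1.67e+10 g, i.e. roughly 17 million kg of hydrogen.
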
I was eager to see how Dr. Gott discussed Dark Energy and the (alleged) accelerating expansion of the universe. In the seven chapters he wrote, from time to time he discusses one or another mathematical principle that seems to require cosmic inflation (near the very beginning) or accelerating expansion (ongoing). I have yet to see an explanation of accelerating expansion that makes sense to me. The "evidence" for such acceleration is the anomalous faintness of some very distant supernovae. I have read recent articles that question both the data and the interpretation.

For my own part, I have yet to see an analysis of Type Ia supernovae that originate with a C-O white dwarf that accretes material of very low metallicity, as we would expect of very ancient objects at very great distances. Accretion, however, is not certain as a mechanism; WD-WD collisions are thought to produce the more prevalent type of supernova. The mass limit that must be crossed to yield a supernova is 1.44 solar masses. Thus the product of a collision will momentarily have a mass in the range 1.44 (plus a little) to 2.88 (minus a little). So, how "standard" is the standard candle known as a Type Ia supernova?

Well, that question did not get addressed, but for now that is OK. Astrophysicists and cosmologists are not a single "voting bloc" in this regard, and I continue to read with interest the work being reported in this area.

Fascinating subjects, excellent writing: I expect this book to become a classic in its field.

Wednesday, July 05, 2017

A millennial in space

kw: book reviews, science fiction, near-future, space aliens

Caution: the book reviewed was written in the language of many millennials and late Gen-Xers, including the casual cussin' my generation calls "potty mouth." It's not suitable for youngsters you wish to shelter from such language.

I wonder why space aliens are so frequently imagined as having magical attributes. In Spaceman of Bohemia by Jaroslav Kalfař, a Czech astronaut on a solo flight of 8 months' duration, to a mysterious purple cloud between Earth and Venus, spends a lot of time with a spider-like being that apparently talks to him in his language, but soundlessly, in his mind. It also rifles through his memories.

The real thrust of the story is, what is real? what is imaginary? How does the ill-starred astronaut return to Earth after the destruction of his space capsule, from a distance of tens of millions of miles? I was reminded of The Life of Pi (reviewed in 2015), and the long trip the young man Pi takes in a lifeboat with a tiger as his companion. The same ambiguity fills both stories.

In its wider sense the story is one of someone cycling back to the beginning to restart with a wiser outlook. Yet the protagonist is full of obsessions, and not all have been resolved at the end. Was his experience more delusion than fact, and is he still delusional? Probably.

About half the chapters are flashbacks to the astronaut's formative experiences, from the Velvet Revolution to the "Capitalist Invasion" of Prague. Assuming the history is accurate, there are a few things one can learn about the development of Czechoslovakia into the new nations that succeeded it after 1989, and a few things to learn about peasant life pretty much anywhere in Eastern Europe in those years.

I wonder how much astronomy and cosmology the author has been exposed to. The purple cloud is supposedly emitted by a "comet … from the Canis Major galaxy." There actually is a dwarf galaxy well behind the Canis Major constellation. It is about 25,000 light-years away. All known comets are members of our solar system, and perhaps a very few originate as far away as half a light year. So this is a book for the astronomically illiterate.

The book jacket blurbs treat the book as a great feat of humor. I found nothing funny in it. I wonder what joke I have been left out of. I'll chalk that up to a generational thing, and remark only that, if this is humor, I tremble for the generation now entering middle age.

Monday, July 03, 2017

Russian spiders at it again

kw: blogging, blogs, spider scanning

Late last evening I went in to add a post to the blog and noticed heavy traffic from Russia again. We'll see how long it lasts this time. The activity is not as regular as before (though the Russians are not as regular as the Americans), and began on June 30. That tall peak just over a day ago (as I write this) represents 96 hits in one hour. When the spiders aren't active, I seldom exceed 96 hits in two days.


Sunday, July 02, 2017

What might one learn from having cancer?

kw: book reviews, nonfiction, self help, cancer

When I saw The Cancer Whisperer: Finding Courage, Direction, and the Unlikely Gifts of Cancer by Sophie Sabbage, I wasn't sure what I would find, but I was hoping for a practical self help book. I think that is what this book is, but let me confess at the outset that I did not read the whole book: I read the Introduction and the first and last chapters in their entirety, and skipped here and there within the other 8 chapters.

I am certain this book is worth at least beginning to read, by anyone facing a new cancer diagnosis. You will know soon enough whether it suits your needs. I had cancer 17 years ago, died in the recovery room and had to be resuscitated, and fought a series of very different battles from those that Ms Sabbage describes. This was one reason I could not connect with the book's message.

The other primary reason I could not connect is the writing: though cast in a self-help style that is quite popular, it simply puts me off. Sorry, Ma'am!

It is worthwhile to introduce the Compass concept, the subject of Chapter 1. In a diagram of an 8-point compass, the first item (the subject of Chapter 2) is at the top, and the subjects proceed clockwise around. They are, in order:

  1. Coming to Terms – a matter of balancing feelings and facts, and setting the boundaries you wish to preserve (such as those around work and relationships).
  2. Understanding Your Disease – learning all you can: more facts; the more, the better. And here the author wisely tells (most of) us to avoid statistics, but I'll touch on this later.
  3. Knowing Your Purpose – to decide what you want and why, and establish a plan toward obtaining it.
  4. Stabilizing Your Body – prioritize actions such as changing eating habits.
  5. Clearing Your Mind – including building the support network you need when your own control slips, as it will from time to time.
  6. Directing Your Treatment – learn from your doctors, set your own priorities, and preserve your own integrity as a person not a disease. You may need help from your support network to lead your healing team, not just blindly following "what the doctors want". I'll have more on this below.
  7. Dancing With Grief – embrace grief; there are automatic losses, including the possible loss of your future. 
  8. Breaking the Shell – I am not totally sure, but this seems to entail "making friends" with your cancer to learn from it. Here we part ways. I am quite comfortable learning all I can from an enemy, all the while planning the most efficient way to totally eliminate it!

For many of us, the first in time will be 4…if we have time. In my case, I was working toward stabilizing a deteriorating situation for about two months before I had a cancer diagnosis. Once that occurred, I had no more than 8 days from diagnosis (Nov 22, 2000; the day before Thanksgiving!) to major surgery (Nov 30). I entered the hospital on Nov 27, and they took care of the stabilizing, because the doctor was not sure I could survive surgery. The bare facts:

  • Stage 3+ colon cancer, with a major mass visible in the colonoscope, about the size of my fist (I have big hands).
  • Nearly two months of enforced fasting due to intestinal blockage.
  • Loss of 25 pounds during 2 months.
  • Blood count of 8.5 and falling (15 is normal).

On Nov 27 I was placed in the hospice ward, and they began intravenous feeding. The normal "dose" is one 1-Liter bag of "lion milk" daily. I was given three bags daily. I was allowed a little walking around, steering my IV pole. I realized I was in the hospice when the message board outside all the other rooms said, "Comfort", while mine said "Comfort and Feed 3x". How many people do you know who spent 3-4 days in a hospice, and came out alive?

What led up to my diagnosis? I had a rather passive doctor. When I went to him with persistent pain that seemed to be near my stomach, he spent more than a month trying ulcer remedies and then an antibiotic. One day he said something like, "Maybe it would be a good idea to get a colonoscopy…at some point."—Appalling! At that point, I silently took charge (in the book's terms, I began directing my own treatment). I had been in the ER twice already with violent vomiting and bloody stools, and had overheard the ER doctor say, "There is a very high white blood count, but we can't find an organism." I was thinking, "Sounds more like cancer than an infection." Inside me I already had my diagnosis.

The next day, after the doctor had expressed puzzlement and made his immensely stupid statement, I went to the receptionist and innocently asked her, "He said something about seeing a gastroenterologist. Is there one he prefers?" She gave me a name. I had a fleeting thought that my inept doctor might have inept friends, but decided to give the man a try. In those days you needed a referral, so I faked one. After I talked with that doctor's receptionist, she got me an appointment three weeks out. I'm not sure why I didn't immediately call some other GI doctors, but I didn't.

I made it through the 3 weeks (now it was 2 months since I had effective nourishment), and saw him on a Monday. He asked, "3 weeks? How'd you get in here so fast? My backlog is 3 months! Did you tell her you are bleeding?" I said, "Of course!" He said, "You're very pale" and took me right downstairs to a clinic that drew blood and determined my blood count was 8.5. He said, "Go to such-and-such a hospital at 7:00 AM on Wednesday and I'll meet you there." And on Wednesday the cancer was seen by my wife and me via the 'scope. But I was on Demerol and the memory didn't "take"; I had to be told about it after I came around.

Thanksgiving Weekend! What a time to suffer through telling my dear friends of my disease. They prayed for me. My wife and I had planned to go to a church conference for two days, so we went. It was just 2 hours away. There I told certain ones, who took the news to their churches so they could pray for me.

Early Monday I called my doctor. He called back saying he had a surgeon who would see me for "consultation" on Thursday. I hung up without a word, thought it over (chronic pain level had reached 8 and I had to think very slowly and thoroughly). I called him back and said, "I won't live that long." He said, "Go to the ER now. I'll call ahead that you are coming." Thus began 3 nights in a hospice, nine days' worth of IV feeding packed into 3 days, and an operation on the same Thursday that was going to be a "consultation." I was in the OR 5 hours. In the recovery room they put in an epidural to administer Morphine. It turns out I am over-sensitive to Morphine and I stopped breathing. My heart slowed to about 30/minute (any slower and it'll simply stall and stop). A nurse stood by with defibrillator paddles as another gave me mouth-to-mouth and then oxygen. Once the morphine wore off, they tapered off the oxygen and let my wife see me. After that I suppose I recovered as normally as one can.

That's enough on such a subject in this much detail. I followed up with chemotherapy. The GI doctor was frank enough to give me accurate statistics. In my case, being a mathematician, I knew exactly what they were telling me and what they were not telling me. He said, after the operation, I had a 15% chance of living for one year. After the "gold standard" chemotherapy for six months, that chance would improve to 35%. "Gold standard" is leucovorin plus 5-FU. 5-FU was originally developed as a "weapon of mass destruction", but was found, rather accidentally, to cure many cases of colon cancer. Leucovorin helps it work better.

And what does 35% mean? Survival rates in such cases follow the same statistics as failure rates in a transistor factory. Technically, it is a type of Weibull distribution. At some time 65% of the devices will have failed. The doctor's prediction put that point at one year, when 35% are still alive. Such a distribution has a very long tail such that, for example, about 10% survive for five years. In the case of colon cancer, there is very little chance of recurrence after five years, and different statistics come into play. Most folks who live for five years after colon cancer surgery will die of something besides colon cancer, 10, or 20, or 30-40 years later, depending on their original life expectancy. In my case, I was 53 at the time of my operation (pretty young for this kind of cancer), and now I am just a couple of months shy of being age 70. My father is alive, so I have some chance of living into my 90's, at least medically speaking. The last time I saw the GI doctor (he does a follow-up colonoscopy every 3 years), he called me "a trophy".
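
To make that "long tail" concrete, here is a toy calculation: a Weibull survival curve pinned to the two numbers quoted above, 35% at one year and (illustratively) 10% at five years. The shape and scale are solved from just those two points; this is my own sketch, not a fit to any real clinical data.

    # Toy Weibull survival curve, S(t) = exp(-(t/lam)**k), pinned to
    # S(1 yr) = 0.35 and S(5 yr) = 0.10. Purely illustrative.
    import math

    t1, s1 = 1.0, 0.35
    t2, s2 = 5.0, 0.10

    # Solve the two pinned points for the shape k and scale lam.
    k = math.log(math.log(s2) / math.log(s1)) / math.log(t2 / t1)
    lam = t1 / (-math.log(s1)) ** (1.0 / k)

    def survival(t):
        return math.exp(-((t / lam) ** k))

    print(f"shape k = {k:.3f}, scale lam = {lam:.3f} years")
    for t in (0.5, 1, 2, 5, 10, 17):
        print(f"S({t:>4} yr) = {survival(t):.3f}")

With a shape parameter below 1 the hazard falls as time passes, which is the mathematical face of what the doctors mean when they say the risk of recurrence fades after the first few years.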

Looking back at the list above, I think I covered most of the bases of the Compass. The one thing I'd have added, perhaps as a part of "Dancing With Grief", or perhaps as a ninth point: "Laugh as much as possible". For some reason, the six months of my chemotherapy were the longest sustained period of great happiness of my life. Perhaps 5-FU has a side effect of being a superb anti-depressant (too bad about losing your hair if you are young; I didn't lose any). I also stumbled on AFV (America's Funniest Home Videos) on ABC, and have watched it pretty regularly ever since. My kind of humor.

Considering that this is not a very popular blog, I conclude that few people think the way I do or like many of the things I like. So, while I was not so enamored by this book, I think it can help a great many people either to become cancer survivors, or to muddle their way through their cancer experience better than they might have done if left totally to their own devices.

Monday, June 26, 2017

When the math you used could mean life or death

kw: book reviews, nonfiction, mathematics, geometry, analysis, renaissance

Who would have thought that, for a period of decades, a student's adherence to certain mathematical methods could get him into trouble with the Inquisition, imprisoned, or even burnt at the stake? Galileo was placed under house arrest for the last nine years of his life, not only for advocating the motion of the Earth, but also for the kind of mathematical analyses he published!

Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World, by Amir Alexander, chronicles the development of a "new" kind of mathematics, one that had actually existed alongside Euclidean geometry for centuries, but had been little used and was denigrated by Aristotle and others. It flowered along with the Italian Renaissance, but ran afoul of the reactionary politics of the Jesuits.

To most mathematicians of the early Renaissance, mathematics was geometry, and all proofs and analyses that proceeded by any method other than straightedge-and-compass derivation from first principles were suspect. It is rather amazing to read how the Society of Jesus, originally rather blind to mathematics because of the proclivities of its founder, Ignatius of Loyola, took up Euclidean geometry as a point of pride within a generation after his death.

In their to-the-death struggle to throw back the influence of the Protestant Reformation, the Jesuits, brought into being as the Reformation was blossoming throughout Europe, realized that geometrical proofs provided a perfect model for their rigid theology and social structure. The Reformers declared that all persons had a right to know and understand Scripture, and offshoots such as some Anabaptists, and free-land proponents such as the Diggers, began to question the "divine right of the King" and the "natural order" of aristocracy. Dogma was being replaced by opinion. Long-held traditions were in danger of being overthrown. Chaos was imminent. The execution of the English king Charles I emphasized the danger.

If one accepts the validity of the methods of Euclid, there is no room for opinion. A geometrical constructive proof, proceeding by pure deduction, leads step by step to a conclusion that cannot be denied. But it had become evident to the disciples of Pythagoras, nearly a full twenty centuries earlier, that some propositions one could state could not be proved. They had begun by proclaiming that all problems were subject to "rational" proof; by "rational" they meant using only ratios of whole numbers. An early demonstration that the diagonal of a square could not be exactly expressed as a ratio of its side, that the two were "incommensurable", led to the breakdown of the Pythagorean system and eventually to the disbanding of the Pythagoreans.

By Aristotle's time, about 200 years later, inductive methods based on "indivisible" quantities had shown some promise, and had been used to demonstrate certain propositions that geometric methods could not solve. But Aristotle, at first intrigued, later decried such methods. Deductive geometry he could understand; the new methods seemed to allow a certain leeway for error. In his way he was as rigid as any Jesuit of the Sixteenth Century.

I have often been astounded that the Medieval Roman Catholic Church based so much of its philosophy on Aristotle, whose only brush with Theism is some vague statements about an "unmoved mover." I was further amazed to read of the process that led to this, via Thomas Aquinas. The Jesuits believed that Aristotle had it right. Mathematical induction by "indivisibles" (also called "infinitesimals" after about 1730) was unreliable. The Church needed … NEEDED! … a rigidly reliable theology and rule of society that disallowed dissent as thoroughly as a Euclidean proof disallows "alternate opinion". Galileo was only the most prominent of a large number of Italian mathematicians to learn of inductive methods, and use them to great effect, so much so that these methods swept through Europe. But over about a century's time the Jesuits drove "indivisibles" out of Italy. Indivisibles and inductive methods flourished elsewhere, in all the countries of Europe.

Reasoning similar to that of the Jesuits led Thomas Hobbes to found his political philosophy on Euclidean geometry. He strongly felt that the chaos following the Reformation simply cried out for a more totalitarian form of government. His exceedingly famous book Leviathan proposes the most profoundly totalitarian political system ever devised. When he confronted three very significant problems that could not be solved by Euclidean methods, he realized that this left a great loophole in his philosophy.

Three problems: Squaring the Circle (making a square with the same area as a given circle), Trisecting an Angle, and Doubling a Cube (constructing a length that can be used to construct a cube with twice the volume of a given cube). None of these can be done using Euclidean geometric methods. This has been proven, using algebraic methods developed centuries after the time of Hobbes. He spent the rest of his life trying to square the circle, and eventually lost his reputation as a mathematician. In spirit he ran afoul of the limitation Gödel's Incompleteness Theorem would later formalize: any sufficiently rich mathematical system can be used to formulate problems that cannot be solved within the confines of that system. But Kurt Gödel was more than two centuries in Hobbes's future.

In the opening chapters of the book, it seemed to me that "indivisibles" and "infinitesimals" were described as being in opposition. It took careful reading to understand that they were synonyms separated by a century or two of usage. They form the foundation of The Calculus, as developed by both Newton and Leibniz. The modern world would not exist without the analytical methods of calculus. From a modest number of "demonstrations" using induction—based on lines being composed of an infinite number of "indivisible" points, planes being composed of indivisible lines, and volumes being composed of indivisible planes—calculus and modern analysis in general have become supercharged, and now include both inductive and deductive methods.

I spent much of my adult life as a working mathematician, and I find it fascinating that such a life-and-death struggle had to be won, and won decisively, for the modern, technological world to appear. I have just touched on a few of the trends and a handful of the players in the saga of Infinitesimals. I have to mention John Wallis, whose 25-year battle with Hobbes "saved" inductive mathematics in England. How much longer would the modern era have been delayed otherwise? He originated the symbol for infinity: ∞. Infinitesimals is quite an amazing story, very well told.

Sunday, June 18, 2017

Wu Li: Circular reasoning to the max

kw: book reviews, nonfiction, physics, cosmology, buddhism, copenhagen interpretation, quantum mechanics

From time to time I have heard about The Dancing Wu Li Masters: An Overview of the New Physics, by Gary Zukav, since it was published in 1979. I had never read it until now. As a student of all the sciences, particularly the "hard" sciences (those amenable to experimental verification), since before 1960, I have at least a reading familiarity with physics, which is a hard science, and cosmology, which is not. Now having read the book, I find it contains no surprises, at least, none of a scientific nature. Of course, a lot has happened in physics and cosmology in the past nearly forty years.

The author, an admitted outsider to the field of physics, conceived of the book while on a retreat at Esalen along with a real mixed bag of folks including numerous scientists and science hangers-on (some would consider me more of a hanger-on, though I am a working scientist, even in "retirement" from a career in the sciences). Al Huang, who was teaching T'ai Chi at Esalen when Zukav was there, introduced him to the concepts of Wu Li. That is concepts, plural.

I have a great many Chinese friends. The Chinese languages, primarily Mandarin, the basis of the principal written language, abound in homophones, words that sound the same, at least to a Westerner. Most basic Chinese words consist of one syllable, and very few require more than two syllables. Spoken Chinese sounds to us like a long string of only a few syllables repeated various ways, with a "sing-song" quality that seems to mean nothing. What Westerners miss is that the "sing-song" variations in tone are meaningful and are part of the proper pronunciation of Chinese words. Thus, the syllable "MA", depending on the tone, and its context in a sentence, has at least these meanings:

  • Mother.
  • When doubled, an affectionate term for Mother, just as in English, at least when pronounced with two flat tones.
  • Horse, using a different tone.
  • The verb "scold", when the context demands a verb rather than a noun, and using still another (falling) tone.
  • The pronounced question mark that ends (nearly) all Chinese questions, spoken with a light, neutral tone.

The familiar greeting "Ni Hao Ma" is a lot like the New Jersey, "How are ya?" The Chinese sentence, "Ma-ma ma ma ma", with the proper string of tones, means, "Is mother scolding the horse?" (Chinese has no articles, so "the" is implied).

Depending on tone and context, "WU", pronounced "woo", has about 80 meanings, and "LI", pronounced "lee", has a great many, primarily focused on pattern. Different written Chinese characters (ideographs) are used for the various meanings of wu and li. In combination, the word wu li is the primary Chinese term for "physics". But when other combinations of ideographs with the same pronunciation (except for tones) are used, there are other meanings. In the context of this book, Al Huang gathered five. The literal meaning of the ideographs used for wu li meaning "physics" is "patterns of organic energy". The other four are "my way", "nonsense", "I clutch my ideas", and "enlightenment".

The book is structured around these five concepts, with each section containing two or three chapters. As I might have expected from a book inspired at Esalen, each chapter is numbered 1.

The "new physics" on which the book is centered is quantum mechanics and its relationship to Einstein's theories of relativity (special and general). The core message is the ambiguity of quantum phenomena—when any single "particle" is studied—coupled with the exactitude of the predictions the mathematical theories of quantum mechanics make regarding the statistics of interactions when many particles are subjected to the same set of conditions. The "scripture" of quantum mechanics is the Copenhagen Interpretation, that of Niels Bohr and his followers (I almost wrote "disciples").

Thus, for example, when light is shined through a pinhole, which spreads the beam by diffraction, and this beam is passed through a pair of narrow slits, an interference pattern emerges. This works best when monochromatic light is used, such as from a laser, but "near-mono" filtered light works well enough for visual purposes. The intensity in each part of the interference pattern can be exactly calculated by the Schrödinger wave equation, although the calculations are formidable; various simplifications of the wave equation yield very precise results with less arithmetical grinding.
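
The far-field ("Fraunhofer") approximation is the usual such shortcut, and it is compact enough to sketch here. The wavelength, slit width, and slit spacing below are made-up illustrative numbers, not anything from the book:

    # Ideal far-field two-slit intensity: a cos^2 interference term from the
    # slit spacing, modulated by a sinc^2 envelope from the slit width.
    import numpy as np

    wavelength = 650e-9      # red light, in meters (illustrative)
    slit_width = 40e-6       # meters
    slit_spacing = 250e-6    # meters, center to center

    def two_slit_intensity(theta):
        """Relative intensity at angle theta (radians) off the central axis."""
        beta = np.pi * slit_width * np.sin(theta) / wavelength
        alpha = np.pi * slit_spacing * np.sin(theta) / wavelength
        envelope = np.sinc(beta / np.pi) ** 2    # np.sinc(x) = sin(pi*x)/(pi*x)
        return np.cos(alpha) ** 2 * envelope

    # Bright fringes recur roughly every wavelength/slit_spacing radians.
    for theta in np.linspace(0.0, 5 * wavelength / slit_spacing, 11):
        print(f"theta = {theta:9.6f} rad   I/I0 = {two_slit_intensity(theta):.4f}")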

I mentioned diffraction. This matter is first mentioned on pages 64-65 of the book. In the upper half of an illustration, a series of waves in a harbor are shown exiting a rather broad opening, and those that get through are shown going straight onward, with a sharp edge to their pattern. In the lower half, the opening of the harbor is smaller, and the waves exiting are shown as semicircular wave fronts spreading beyond the opening. There are two major errors here. Firstly, the upper pattern should show a little spreading at the edges of the "beam" of waves exiting the harbor (you can verify this using a wave tank, as I was shown decades ago in a Freshman physics class). In other words, diffraction occurs when waves pass through any opening of any width, not just very narrow ones. Secondly, for the lower wave pattern, the wavelength of the exiting waves is drawn as much shorter than the waves in the harbor.

In actuality, diffraction produces a nonzero wave amplitude at every angle. The waves seem to "go straight" through a larger opening only because the off-axis amplitude falls off very rapidly with angle in such a case. When a wave front passes through an opening of a size similar to the wavelength, or smaller, significant amounts are found at nearly every angle, making a much more divergent beam. Zukav seems to have been ignorant of this.

Interestingly, if a double-slit setup using extra-sensitive photographic film is set up, you can get a surprising result. The best photo film can record the capture of each photon, as long as the light is blue enough, meaning the photons are energetic enough. One silver halide grain is exposed by the capture of a single photon. If the light is dimmed enough that only a few photons per second pass through the apparatus, and you let it run for less than a minute before extracting the film and developing it, the developed film will have one or two hundred tiny exposed grains that are seemingly scattered at random over the film. If instead, you leave the film in place for an entire day, there will of course be many more exposed grains, tens of thousands of them. They will show a very clear interference pattern, identical in form to the one you could see when the light was shining brightly and tens of trillions of photons per second were passing through the apparatus.
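
That film experiment is easy to mimic numerically: draw each "photon" landing position at random from the two-slit probability distribution and histogram the hits. A couple hundred detections look like scattered noise; tens of thousands trace out the fringes. The geometry is again made up for illustration, and the sketch re-uses the same intensity formula shown above:

    # One-photon-at-a-time two-slit experiment: each detection is a random
    # draw from the interference probability distribution.
    import numpy as np

    rng = np.random.default_rng(42)
    wavelength, slit_width, slit_spacing = 650e-9, 40e-6, 250e-6

    def intensity(theta):
        beta = np.pi * slit_width * np.sin(theta) / wavelength
        alpha = np.pi * slit_spacing * np.sin(theta) / wavelength
        return np.cos(alpha) ** 2 * np.sinc(beta / np.pi) ** 2

    # Discretize the detection angle and normalize to a probability distribution.
    theta = np.linspace(-0.01, 0.01, 400)
    p = intensity(theta)
    p /= p.sum()

    def detector_counts(n_photons, n_bins=20):
        hits = rng.choice(theta, size=n_photons, p=p)   # one draw per photon
        counts, _ = np.histogram(hits, bins=n_bins, range=(theta[0], theta[-1]))
        return counts

    print("150 photons:   ", detector_counts(150))       # looks like noise
    print("50000 photons: ", detector_counts(50_000))    # fringes emerge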

Interference is a wave phenomenon. Photons are particles; each carries a specific amount of energy and has a specific momentum (these are all the same for monochromatic light). It took me and all my fellow students a long time to become comfortable with the fact that light has both wave and particle characteristics. Eventually we thought of a photon as a "wavicle", a small wave bundle, that could somehow "sense" that both slits were open and "interfere with itself", when passing through a two-slit apparatus. It seems that light behaves as a wave when wave "behavior" is demanded of it (the two slits), and as a particle when particle "behavior" is required (exposing a silver grain in the film).

Where does Gary Zukav take this, and several other experimental results of quantum mechanics, special relativity, and general relativity? Straight to the door of a Buddhist sanctuary. The language he uses is usually as ambiguous as the language physicists typically use to describe concepts like the "collapse" of a wave function when an "observation" is made. He compares some conclusions and statements of physicists to similar statements of Buddhist doctrine, though I could seldom recognize the resemblance. The core of the Copenhagen Interpretation, at least as it is explained in this book, is that the Observer is central. But, to date, nobody has adequately defined "Observer". That doesn't stop Zukav from equating the one-is-all, all-is-one outlook toward which he believes the new physics is trending with Buddhist teachings of the pre-Christian era. I have a question or two about observers, or Observers.

Must an Observer have a self-aware mind? Can the photographic film described above be an observer, or has no observation been made until the film has been developed and a human (or other self-aware entity) has looked at it to see the pattern? If I understand the Gary Zukav presentation of the Copenhagen Interpretation, there is no "collapse" of the wave function into an actual "event" without an observer. It is as though, outside your peripheral vision, nothing exists until you pay attention to it. Taken to an extreme, it means there was no Universe until humans evolved to be the Observers to bring it into existence. This is the reason for the title of this post. If this is actually what Niels Bohr believed, I have to say to him and his disciples, as Governor Festus long ago said to the Apostle Paul, "Much learning has driven you insane!" Paul was not insane, but I think Zukav might be. More on this anon…

At the time The Dancing Wu Li Masters was being written, some "newer" new physics concepts were arising, such as the Quark/Gluon resolution of the Particle Zoo, and the theory of the Multiverse. To take up the former: It appears that the quark is truly fundamental. All the hadrons seem to be made up of various combinations of quarks and anti-quarks. However, it takes such enormous energies to generate interactions that give evidence of the existence of quarks—and they apparently cannot be brought into independent existence—that we may need to await a particle accelerator wrapped around the equator of the Earth to achieve energies sufficient to determine whether quarks do or do not have any substructure. Apparently, electrons have no substructure, so maybe they and quarks are as fundamental as it gets. But our experiments have reached "only" into the range of 10 to 100 TeV. What might be achieved with an energy a thousand times as great, or a million? Fears have been expressed already that the current experiments at CERN could trigger destruction of the Universe. Maybe the Multiverse is real, and we inhabit a surviving Universe that didn't get destroyed.

The notion of the Multiverse is simple. Rather than the wave function for a particle "collapsing" into some actual event, an entirely random outcome within the statistical framework described by the wave function, perhaps every possible outcome actually occurs, and a new Universe is spawned to contain each of those outcomes. This is simple enough if the "outcome" is that a particular photon passes through either the left slit or the right slit of a two-slit apparatus. Two universes result. In one of them, the photon passes to the left, and in the other, it passes to the right. But there is detail in the interference pattern, and when I have done the experiment with a laser pointer and a home-made pair of slits cut in aluminum foil, I could see more than twenty interference fringes. Now what? Did each photon create twenty or more universes to accompany each outcome? When the light is bright enough to see, trillions of photons per second are "in use"; the beam of my laser pointer emits 200 trillion photons of deep red light per second. Did I inadvertently create a few quadrillion new universes, just by shining my laser pointer through a pair of slits? Were new universes being created at the same rate even when I wasn't looking?

So what are the chances that the search for the Higgs boson at CERN caused the creation of truly enormous numbers of universes, nearly all of which were immediately destroyed, and that we inhabit one of those that survived? I think you can see where such thinking can lead.

And some folks say that I am crazy to believe in God, a God who knows a level of physics (if it is called that) that can resolve this stuff, without the insanity of Multiverse speculations. I think it is fair to say that "modern physics" has reached a point of adding more and more epicycles to a group of theories that seem to produce very precise results, but that they are really analogous to pre-Copernican cosmology. Actually, Copernicus used epicycles also, because he thought orbits were based on circles. It took Kepler and others to work that part out.

Another item or two that have arisen in physics since 1979:

  • On page 119 we read, "No one, not one person has ever seen an atom." If you are talking about direct visual sight without the use of a microscope, you could say the same thing about bacteria or viruses. But we have microscopes of several kinds that can show us what they look like in rather amazing detail. Since about 1981, highly refined transmission electron microscopes have been able to show atoms directly, and since the inventions of the scanning tunneling microscope (1981) and the atomic force microscope (1986), we now have three methods for seeing where the atoms lie in a surface. Whatever point the author wished to make based on the above statement is now moot.
  • Beginning on page 292 we find an illustration using polarized light. Simply put, when light is passed through a polarizer (such as the special plastic in some sunglasses), the light that emerges is now all vibrating in the same plane (for convenience, we use the electric vector as the "direction" of polarization, though the magnetic vector could be used equally well, and is at 90° to the electric vector. Zukav does not mention this). When you place a second polarizer with its polarizing axis at 90° to the first, it blocks all the light. If you rotate it to various angles, some of the light gets through, in accordance with a cosine-squared formula (Malus's law). Now, if you set the two polarizers so their polarization axes are at precisely 90° so that no light is getting through, then put a third polarizer between them, with its axis oriented at 45° to the other two, quite a lot of light gets through! This goes on for several pages and is presented as quite a mystery. Strangely, elsewhere in the book we find the tools to solve this mystery (I didn't look up page numbers):
    • In a discussion of Feynman Diagrams and the S-Matrix (Scattering Matrix) we read that physicists consider every interaction to entail the destruction of all the impinging particles and the creation of new ones that exit the interaction locus at the appropriate angles with appropriate velocities. Thus, when a photon reflects off a mirror or any shiny surface, it is actually absorbed and a new photon is released at the appropriate angle. So they say. Refraction works similarly. Thus, the polarizer absorbs the incoming photons and releases a somewhat smaller number of photons, all with the appropriate polarization.
    • As I recall, a polarizer made of stretched plastic film passes 38% of the original light. A Nicol prism can actually split light into two beams with nearly no loss, so that 50% exits with horizontal polarization at one angle, and 50% with vertical polarization at a different angle. This would make no sense according to the "picket fence" analogy, because very, very little of the original light could get through any polarizer: only that which is already polarized the "right" way. Thus, a Nicol prism, in particular, "tests" each photon, and either twists its polarization to match the nearest direction (and shifting its exit angle according to the one or the other), or annihilates the photon and emits one of appropriate polarization and exit angle.
    • Polarizing plastic is less efficient, passing only light of one polarization, but it obviously changes the polarization of most of the photons it passes to match its own orientation. Thus, what is happening with the 45° polarizer is this: it absorbs some photons entirely, and twists the polarization of the rest of them by 45°. Then when they reach the last polarizer, they are subject to a further absorption or twisting, so that the "twisted ones" get through, with perhaps 5% of the original beam intensity (a quick numerical check of that figure follows this list). That is a lot more than the fraction of a percent that "sneaks through" the original set of crossed polarizers because plastic film polarizers are not perfect.
    • So polarizing devices do not just passively allow certain photons to pass and block all others, but they change the polarization of the photons that they allow to pass.
  • I cannot pass by the chance to mention circular polarization. A thin piece of calcite or quartz (or, indeed, any colorless crystalline material that does not have cubic molecular symmetry) rotates the polarization of the incoming light. What is more, if it is just the right thickness, it will produce circularly polarized light. This is sometimes thought of as two streams of photons that are related to one another. Think of a vertically polarized photon coupled with a horizontally polarized photon, and their "waves" are out of phase by a quarter of a wavelength. Then, in effect, their polarization will rotate as they go.
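
Here is the quick numerical check of that roughly-5% figure, promised above. It uses only Malus's law (transmitted intensity falls as the cosine squared of the angle between the light's polarization and the polarizer's axis), plus the 38% single-pass figure recalled above for real plastic film; nothing in it comes from Zukav's book:

    # Three-polarizer check using Malus's law: unpolarized light, then
    # polarizers at 0, 45, and 90 degrees. Ideal polarizers pass 50% of
    # unpolarized light and cos^2(angle difference) at each later stage;
    # real stretched-plastic film passes about 38% of unpolarized light,
    # modeled here as an extra efficiency factor applied at every pass.
    import math

    def stack_transmission(angles_deg, efficiency=1.0):
        """Fraction of unpolarized light passed by polarizers at the given angles."""
        fraction = 0.5 * efficiency      # first polarizer, unpolarized input
        for prev, cur in zip(angles_deg, angles_deg[1:]):
            fraction *= efficiency * math.cos(math.radians(cur - prev)) ** 2
        return fraction

    eff = 0.38 / 0.5   # about 0.76: real film passes ~38%, not the ideal 50%

    print("crossed pair, ideal:      ", stack_transmission([0, 90]))           # 0.0
    print("45-degree insert, ideal:  ", stack_transmission([0, 45, 90]))       # 0.125
    print("45-degree insert, real:   ", stack_transmission([0, 45, 90], eff))  # ~0.055

The textbook projection picture thus lands on essentially the same "perhaps 5%" figure as the twisting description given above.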

As interpreted by Gary Zukav, physics was becoming one with Buddhism. I wonder what he would make of today's situation, with the great popularity among physicists of cosmological string theories (at the moment, they can't decide which of the potential 10^500 possible string theories to favor!), the supposed detection of increasing cosmological expansion that may lead to a "big rip" in which all things will be literally shredded to their composite quarks, and the theory of cosmological inflation (developed in the early 1980's) that supposes that the initial expansion of the big bang took off at several trillion trillion trillion times the speed of light for just a tiny fraction of a second, during which the Universe grew to a size somewhere between that of a grapefruit and a galaxy (nobody can pin that down too precisely).

In my view, coupling physics theorizing with Buddhism is tantamount to solipsism. Let us accept as a first premise that what exists, does indeed exist, and go from there. Then the extreme versions of "New Physics" simply vanish, like an unobserved photon.