Sunday, February 28, 2021

Are we making ourselves into techno-monsters?

kw: book reviews, nonfiction, psychology, parenting, education, technology

It took me an incredible amount of time to read i-Minds: How and Why Constant Connectivity is Rewiring our Brains and What to Do About It, Second Edition (AKA i-Minds 2.0), by Mari K. Swingle, PhD. Other than a brief post 14 days ago, which came 9 days after the prior "real" book review, I have utterly neglected this blog. To those few who follow it, I apologize.

This book is so big (498 pages of 9-pt. type) and covers so much ground that it is hard to take it all in. The author is a therapist whose work begins when a "problem child" (or young adult) is brought to her, and she performs an EEG study. She can discern quite a number of odd things going on in the brain from the EEG, and she explains a few of them. One of the most important is "dysregulation". A person gets out of control because the brain cannot self-control or self-regulate. An i-addicted person can only self-regulate when looking at their screen of choice. Children of all ages may throw tantrums for a number of reasons, but some of the most frightening tantrums occur when a child who spends too long with the laptop, phone, or tablet is kept from using it for a time. In adults, tantrums are replaced by other kinds of misbehavior, some of which can land one in jail.

Side note: Recently the electrical grid failed in most of Texas for three days, parts of it for longer. I wonder how many homes were host to an extreme form of "cabin fever" when all the batteries ran down?

Although the book is very big, I could have read it faster if it were better written. The author presents a great deal of good and useful information and advice, very badly! Soon after I began reading, I was ready to drop it entirely. It just seemed she was so impressed with herself she had to write every fleeting thought. The writing style swings between clinical and colloquial. Just as I'd get going at a good clip, I'd encounter a word such as neuroatypicality or psychoneurophysiology, or a sentence with missing prepositions, or a half sentence ending in a period, only to find it taken up again after a subordinate clause with its own period. I guess the author took to heart the advice to avoid run-on sentences, somewhat too literally! By the way, look at the way I used a comma in the prior sentence. Try covering the comma with the corner of a piece of paper and reading it again. There is a difference in connotation. In this book commas are sometimes overused and sometimes neglected. My advice to the author: before preparing version 3.0, hire two people in sequence: first a good copy editor, then someone who can produce Reader's Digest Condensed Books, to condense the edited copy. Most of the content of this book could have been said in half the space or less.

Herewith, to save space, I will do little more than reproduce my notes and comment on them:

  • ch 6, "Boxed In – Anxiety in the Masses", on collecting hobbies. She is derogatory. She doesn't distinguish actual field collectors from "silver pickers". I am a collector of several categories of things, so perhaps I am reacting to my own ox being gored. The more sedentary things I collect are stamps and coins (world variety, not gold/silver coins). But in each case, I make it social by belonging to stamp and coin clubs, and even going to auctions (to be resumed after the pandemic), both as a buyer and seller, but usually a kibitzer. I also collect photographs, which gets me out and around, though I do spend a lot of time with "digital darkroom". My first love is rock collecting. "Silver picking" is buying mineral or fossil specimens from others. I've never bought a rock, though I traded for a few. I prefer to get out there and hit rocks. Any of these hobbies can be conducted without ever turning on a computer or looking at a phone. One cousin's main hobby is long-distance, high-altitude hang gliding (sometimes oxygen is needed!). Try doing that on your phone!
  • p108, ch 8, "The Narrowing of Minds", on children who grow up gaming more than playing: "Today, many kids don't lose the creativity, they never find it!" Further, a "grouping quiz" example: three yellow squares of different sizes, then the choice, "Which is the most similar?": a pink square or a banana? Is it the banana? Yes, according to the test the author saw, because it is the right color. If a child chooses the pink square, the ugly beep and WRONG appear, even though the shape is right. Who is to say that color supersedes shape, particularly when no other information is given, either with this "question" or in the context of the rest of the "test"?
  • p140, ch 10, "Of Systems and Process", on writing (scribing) versus keyboarding, "…studies clearly show that we remember more, and significantly more, by writing than by keyboarding." I used to visit a typesetting shop. The proprietor was able to typeset in Spanish while conversing with me in English. Clearly, he could not possibly retain anything of the Spanish document. In my own case, though I can type very fast, I find I retain more by taking sketchy handwritten notes than by taking a laptop to a lecture and transcribing chunks of it verbatim.
  • p202, ch 13, "Learning, Play, and Parenting", compares nature with "screen stuff", summarized in a keen table. I imaged the entire page, which is inserted here without further comment.

  • p263 +/-, ch 17, "The Good, the Bad, and the Ugly – Esports and the Business of Gaming", on the relabeling of i-games as "eSports": a cynical attempt to co-opt the positive image of sports. The author decries "…the deviant brilliance of early game design ensuring that children legitimately would be penalized for leaving a game mid-play (or not returning…". This triggers extreme peer pressure. When a child (of any age less than 200 years) cries, "But I HAVE TO!!", he or she fears becoming a pariah more than death. It is time for some extreme re-training.
  • p296-7, ch 19, "Breaking the Trance", "Ten things parents learned in science class the gaming industry wants them to forget (aka Lost Lessons from Research 101)". I scanned this pair of pages also, but I'd better not include them here. Put an e-mail address in a comment and I'll send you the image. The "Ten things" boil down to ten of the most popular fallacies of logic, exploited by the game-writing-and-selling industry to keep regulators at bay.
  • p310, ch 20, "i-Tech and Healthcare – To Care or i-Care", "Why do we want to replace ourselves with interactive technologies? Why do we want i-tech taking care of our children and … our elders?" This fosters dissociation and vanquishes healthy attachment. Isaac Asimov foresaw this, blurrily, in his very early story "Robbie", about a robot babysitter to which the child became attached, more so than to the parents. But Asimov's point was to introduce the concepts that became his "3 laws of robotics", so he sidestepped whether the child in question was in any way harmed; I suspect he thought not, as he was himself famously neurotic and might have preferred a robot caretaker to a human nurse.
  • p 356, erratum, "Zukerberg" for "Zuckerberg". Spelled correctly in other places. [grammatical errors and other solecisms would have filled pages]
  • p394, ch 26, "Community, Communication, Digital Mediation, and Friendship", to txt or not to txt? Are emojis replacing facial expressions such that we no longer can read actual faces? This is a factor tending toward autism. I have long asserted that I get along with machines better than people. But I recognize it and have taken steps for 60+ years to learn to relate to people in spite of my own autistic tendencies. I fear that, had I been born in 2020 rather than the 1940's, I'd have become a clear case of Aspergerism, or "high-functioning autism", or worse. We can all exhibit autistic behaviors under stress, such as the autistic rocking of someone who has suddenly lost their beloved spouse, child, or pet. Such should be rare.
  • p459-60, ch 31, "i-Addiction and i-Life", levels of i-addiction:

  1. Generalized Internet Addiction: "Excess internet use is not an addiction in itself, but rather, the internet is the space where an individual can engage in addictive behavior." The Net facilitates access to one's fetish, but does not create the addiction.
  2. Fantasy Internet Addiction: Addictive behaviors that would otherwise not occur, such as engaging in role-playing games (less frenetic games such as Dungeons & Dragons, which can be played around a table, preceded their online counterparts); and chat rooms and similar sites with pseudonymous interaction, including cybersex.
  3. Technological Internet Addiction: The technology itself is the addicting element. The Poster Child is compulsive follow-the-white-rabbit searching, which for some people can consume hour after hour, even day after day. [unbounded exploration] Here process itself is the bait on the hook.

  • p463, same chapter, "Scientific Corner" [a feature found scattered throughout the book], an introduction to IAT, the Internet Addiction Test [search for "Internet Addiction Test Young" to find one you can take online. Many versions are behind a paywall]. My score is 18 (out of 72, see below). Paradoxically, many of the most addicted persons who are single young men have a lower score than you'd expect on the IAT, because all the questions about their relationships are answered with "no effect" or "not relevant", because they have no relationships!

  • p491, ch 33, "Our Future Our Selves", The coolness factor is overcoming prudence in applying technologies that are dehumanizing us. [Many parents who work for Google, FB, etc., send their own children to private schools where screens of any kind may not be used prior to 8th grade, and the school strongly encourages keeping the home environment nearly screen-free as well. What do they know that the rest of us don't?]

Human intelligence, like all natural intelligence, is embodied. The body is a part of nature. Send your kids outdoors! Free-range children grow up the best adjusted! To grow up as an appendage of digital culture is to become more like the machine than like a human.

You have what you need from what is written above. For the research and references, do get the book, but be prepared to use it as a resource. Only read it through if you're as much a glutton for punishment as I was, reading the whole thing! I hated the presentation, but the content kept me going.

Sunday, February 14, 2021

Screening the Screens

kw: education, technology

I am halfway through a book I recommend to anyone who is the least bit concerned about how technology affects learning: i-Minds 2.0 by Mari K. Swingle, PhD. Two points stick in my memory:

1) In Silicon Valley, the most popular private schools have a policy for grades 1-7 that PROHIBITS screens inside the school buildings (phones, tablets, laptops, everything), and strongly suggests that parents limit "total screen time" to one hour daily for children younger than 10. The parents are those folks who labor to make social media addicting...to YOUR kids, and to you also.

2) Overuse of i-tech increases the tendency to autism. No matter where a child may begin on the "autism spectrum", he or she will move progressively toward more and more autistic behaviors the longer the overuse continues.

I'll write more when I finish the book and can produce a more fitting review.

Wednesday, February 03, 2021

Getting your veggies in liquid form

kw: book reviews, nonfiction, botany, mixology

A favorite country tune, Rocky Top, has the lines

Corn don't grow so well on Rocky Top,
  Ground's too rocky by far.
That's why all the folk on Rocky Top
  Get their corn in a jar

Corn isn't all that gets into corn whiskey. As we read in The Drunken Botanist: The Plants That Create the World's Great Drinks, by Amy Stewart, if a plant can be ingested (and sometimes if it can't), it has been used to produce an alcoholic beverage.

My drinking days are long behind me. I recall preferring port wine to all other wines (I particularly didn't like "dry" wines), and smooth Scotch whiskey to the rest of the "hard stuff." That ended before I was 21 years old. Port is fortified (higher proof), but also sweeter and more "grape-y", and of course, one could call Scotch "barley in a jar", although there is much more behind these drinks than grapes or barley. That "much more" is what the book is about.

In orderly fashion, Ms Stewart starts with the plants that produce the alcohol, from agave to wheat, including apples, grapes, sorghum and a few others, and then introduces some that are a bit stranger, such as bananas, jackfruit and parsnips. If it'll ferment, someone's tried to drink it.

There follows a series of chapters on every kind of plant product that has been used in a beverage. The only one left out seems to be bark (oh, yeah, cinnamon is made from bark). Stems. Flowers. Spices. Roots. Fruits.

Many recipes are found throughout, along with gardening tips for growing certain otherwise hard-to-obtain plants. There are also tips in a few places about brewing your own, frequently by doing little more than harvesting, grinding or mashing some plant part, and leaving it alone for days or weeks. The yeast varieties that grow on the plant are frequently the ones that ferment it best. This points up that the first domesticated organism was almost certainly yeast!

I don't want to get further into this. I have some fond memories of "non professional" mixology, such as learning by accident how easy it is to produce cider ("hard cider" is a redundancy). But for me the drawbacks of an imbibing life outweighed the pleasures. If you enjoy "adult beverages" and mixology, this book is a delightful introduction to the botany behind the beverages.

The other cDc

kw: book reviews, nonfiction, information technology, hacking, clubs, politics

There were hackers before there were computers. Many were golfers, which is where the term "hacker" originated. It meant "enthusiast", and in golf, a player with more zeal than accuracy would just hack away at the ball. Soon "hacker" meant any non-pro enthusiast in many endeavors, frequently a hobbyist.

Once people could build their own hobby computers in the late 1970's, the builders also came to be known as hackers, and within a few years, a hacker was someone who programmed for the fun of it (I was one such). Among programmers both amateur and professional (we weren't called coders until the early 2000's), a bit of nicely-written code or a routine that did something really well or even with elegance was called a "good hack" or a "clever hack".

Of course, pushing the ethical envelope comes along with any enthusiastic pursuit, and all kinds of shady behavior showed up. To many folks, locks exist to be picked. In his autobiography Surely You're Joking, Mr. Feynman!, the Nobel Prize-winning physicist wrote of a practice he had while he worked on the Manhattan Project: figuring out the combinations of the locks on all the filing cabinets in his colleagues' offices.

When people who used either computer skills or social-engineering skills to break into computer systems (either determining/stealing a password or finding a way around the password authentication) began to be called "hackers" in news reports, those of us who wore the term proudly protested that they should be called "crackers", by analogy to "safecrackers" (safecracking was what Dr. Feynman was doing). It was to no avail. For close to 40 years, "hacker" has come to mean "criminal interloper".

Curiously, many of the famous "hacking" exploits didn't involve computer skills, but rather fooling someone into revealing privileged information a criminal could use to penetrate a phone system or computer network. But programmers were also busy learning various ways to extract passwords. An early "window into Windows" took advantage of a lazy error by Microsoft programmers in the way passwords were processed. That may have been corrected, but laziness by computer users still provides a wide-open door. Let me explain.

Modern password systems don't store the password itself; a hashing method produces a "hash", turning the password into a string of 8, 12, 16, 24, or 32 bytes (when you read of "128-bit encryption", for example, that's 16 bytes). Of course, 32 is best. One would think that would make the passwords hard to recover, but brute force attacks work this way:

Someone with access to a server (legal or otherwise) copies the pwd file or its equivalent. It contains hashed passwords for all the accounts. The cracking procedure is to create all possible passwords of a specific length, hash them, and see if any of the hashes match those in the pwd file. Specially-built hardware can generate and test billions of combinations every second. The fastest I've read about can "crack" more than 300 billion per second, using a big stack of GPUs (graphics processing units).
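For the curious, here is a toy sketch in Python of the procedure just described. It is my own illustration, not anything from the book: real crackers run GPU code against salted, deliberately slow hashes, and MD5 appears here only because it is built into Python's standard library.

    import hashlib
    from itertools import product

    # Pretend this came from a stolen pwd file; it is the MD5 of "hello".
    stolen_hashes = {"5d41402abc4b2a76b9719d911017c592"}

    alphabet = "abcdefghijklmnopqrstuvwxyz"

    # Generate every candidate up to five lower-case letters, hash each one,
    # and compare against the stolen hashes.
    for length in range(1, 6):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() in stolen_hashes:
                print("cracked:", candidate)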

Do you still use mainly 8-character passwords? I hope you at least use both upper- and lower-case letters. Here are some numbers you need to know; the short script after the list reproduces them.

  • 8 lower-case letters, 26^8 = 210 billion possibilities. All passwords of this sort in a pwd file can be found in less than a second.
  • 8 letters, both cases, 52^8 = 53 trillion possibilities. The time to crack them all is about 3 minutes.
  • 8 letters, both cases, plus digits, 62^8 = 218 trillion. Crack time is now 12 minutes.
  • 8 letters, all typeable characters (excluding é, ø, etc), 95^8 = 6.6 quadrillion. Crack time at that rate is about six hours.
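A few lines of Python reproduce these figures, assuming the 300-billion-per-second rate quoted above:

    # Keyspace sizes for 8-character passwords, and the time to try them all
    # at the quoted rate of 300 billion hashes per second.
    RATE = 300e9

    for label, chars in [("lower-case only", 26), ("both cases", 52),
                         ("plus digits", 62), ("all typeable", 95)]:
        keyspace = chars ** 8
        seconds = keyspace / RATE
        print(f"{label:16s} {keyspace:.3g} possibilities, "
              f"{seconds:,.0f} s ({seconds / 3600:.2f} hours)")
    # lower-case: under a second; both cases: ~3 minutes;
    # plus digits: ~12 minutes; all typeable: ~6 hours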

That epic cracking session has already been performed. I suspect someone somewhere has a room full of disks with all the resulting password-hash pairs stored in about 160 petabytes. This is why you need passwords longer than 8 characters. I use passwords ranging from 12 to 15 characters. I call them "million year passwords".

Some of the specialty cracking work was done by members of hacking groups with various quirky names. One group that is probably not quite as unethical as others is one of the oldest, named Cult of the Dead Cow, or cDc. In the book Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World, Joseph Menn outlines the formation and history of cDc and many of its prominent members.

Ignoring for the moment the first chapter and last two chapters, the book does dig into fascinating history, that of cDc and a number of other groups, and into the conferences and other events that made them famous, at least in IT circles. It seems that cDc members mostly straddled the boundary, some working with government and industry in a "white hat hacker" rôle (breaking in to show how it is done and to advise on how to prevent future intrusions), and some feeding hardware and software to what I prefer to call the cracking community, the black hats.

I was puzzled by the "save the world" bit in the subtitle, until I read the last two chapters and saw their connection to the first chapter. An early member of cDc was Robert "Beto" O'Rourke (AKA Psychedelic Warlord), who had left the group after around a decade and gone into politics. You may remember him from the Presidential primaries. He hasn't been named to a post yet, but during the Biden campaign last year, Beto was slated to be Biden's front man for disarming the American people. Whether that makes you love him or hate him depends on your own political leanings.

It became clear that the book is more properly viewed as campaign material for the future political aspirations of Beto O'Rourke. It opens and closes with fund-raising being carried out on his behalf by a prominent member of cDc, and ends with a long musing on his prospects in future (and now, present) administrations and Presidential ambitions.

The history is interesting. The politics, not so much. For that reason, I can't recommend the book.

Tuesday, February 02, 2021

How real is dark energy?

kw: cosmological musings, dark energy, spectroscopy, universe evolution

I approach cosmology from a recreational perspective. I was introduced to extremely basic astronomy and cosmology during early primary education at a private school that followed the Classical model. But for a third-grader, a basic introduction to the Big Bang—early enough that Fred Hoyle's contention on behalf of a Steady State universe was still well regarded by many—was about as far as that instruction got. It piqued my interest, and I have enjoyed amateur astronomy, and also followed cosmology, sporadically and from a distance, ever since.

The apparent discovery of cosmological acceleration and "dark energy" just over twenty years ago got my attention. Over the past two decades I have pondered numerous aspects of cosmology, and while I still have my wits about me, it is time to record a few ideas. Allow me to state my bias at the outset: I think the analyses so far performed are incomplete, and that eventually the Cosmological Constant will be returned to zero, from the present hypothesis that it is either -1 or -0.3, depending on the units chosen by various authors.

I collected my thoughts on the matter and wrote this list of discussion topics. I don't plan to discuss each topic in detail, and not in this order. But the first item on the list is a good place to begin.

Photon "Experience"

Albert Einstein began by imagining that he could ride alongside a beam of light. From the speculations that followed, he applied appropriate mathematical treatments to derive the theory of Special Relativity.

With a much more modest goal, I began by considering what it would be like to "be" a photon. What does a photon experience? From our point of view, a photon is emitted, typically by an accelerating charged particle (I include quantum transitions as "acceleration"); it then travels some distance before being absorbed, either by being converted to heat energy or by causing a quantum transition in an electron orbital. I will take advantage of my experience as a spectroscopist, primarily of near-to-medium infrared (NIR and MIR), but also plenty of visible and near-ultraviolet (V and NUV) spectra, and even some time spent working with vacuum UV or far UV (FUV).

What is the photon's point of view? If we imagine a photon as having sufficient awareness to "experience" anything, what does it experience? Let us first consider emission by a quantum transition, such as an electron of a hydrogen atom in the first excited state, dropping to the ground state. The energy of the photon is 10.2 eV, and it has a nominal wavelength in FUV of 121.6 nm. This is the Lyman-alpha (Lyα) transition. As a spectroscopist I primarily worked with cesium. Considering its ground state of five filled shells plus one s-electron as a lower-energy "virtual hydrogen atom", the first-level-to-ground transition energy is 1.39 eV, with a NIR wavelength of 895 nm. The higher-energy transitions of cesium's outer electron to ground have wavelengths in the visible spectrum, with the strongest being a bright green 540 nm.
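These energy-wavelength pairs are easy to check with the conversion E[eV] ≈ 1239.84/λ[nm]; a quick sketch (mine, not the author's):

    # Photon energy from wavelength: E[eV] = hc / lambda
    HC = 1239.84  # hc in eV·nm

    for name, nm in [("hydrogen Lyman-alpha", 121.6),
                     ("cesium first level to ground", 895.0),
                     ("cesium visible line", 540.0)]:
        print(f"{name}: {nm} nm -> {HC / nm:.2f} eV")
    # prints 10.20 eV, 1.39 eV, and 2.30 eV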

I'll analyze the "experience" of a 540 nm photon. Although we tend to think of photon emission as instantaneous, it is more reasonable to consider that it occurs in a tiny slice of time. How tiny? The travel time at c across one wavelength at 540 nm is 1.8×10⁻¹⁵ s, or 1.8 femtoseconds (fs). If the photon is to "experience" its creation, it must think very fast, because it has only a fs or two of "assembly time" before it begins its journey.
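The arithmetic behind that figure is a one-liner:

    # "Assembly time": the time for light to cross one 540 nm wavelength.
    C = 2.998e8          # speed of light, m/s
    wavelength = 540e-9  # 540 nm in meters

    period = wavelength / C
    print(f"{period:.2e} s, or {period * 1e15:.2f} fs")  # 1.80e-15 s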

Skipping over the journey for the moment, it is reasonable to assume that "disassembly" occurs on a similar time scale of 1-2 fs. Now, as to the journey itself, what might the photon experience? Special relativity dictates that time dilation at velocity c is infinite; from the photon's point of view, the duration of the journey is precisely zero, and it experiences nothing!

Thus we can say that the "life experience" of a photon consists of at most 1-2 fs of assembly (birth?) followed immediately by at most 1-2 fs of disassembly (dissolution). Whether it is consumed in pushing an electron into a higher orbital, or exciting vibrational states in a molecule, which then turn into phonons and heat things up a tad, the photon itself experiences nothing whatever on its journey, whether the distance is a micron or a gigaparsec.

This in itself is quite tangential to cosmology. However, as a concept, wherever it might prove useful, I'll refer back to it.

Cosmological Red Shift

Note: For calculations of the age of the universe at different red shifts I used calculators provided by NED at Caltech. NED is the NASA/IPAC Extragalactic Database.

The first date of interest to a visual observer is the formation of the first stars, which ionized the hydrogen and helium filling the young universe so that light could travel with little absorption. This date is widely considered to be about 400 Myr (0.4 Gyr, or 2.9% of current age) after the Big Bang, and if any light from this era is observable, its red shift ought to be about z=11.3.

The second date of interest is at the completion of ionization by the first stars, now being formed into galaxies, at about 1 Gyr (7.3%), at a red shift of z=5.7.

The studies of supernovae used to winkle out the Cosmological Constant, and thus Dark Energy, have typically been undertaken in the range 0.1<z<1.0 (as compared to supernovae closer than z=0.1). This corresponds to ages between 5.87 Gyr (43%) and 12.4 Gyr (90%).
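Anyone with Python can reproduce these ages. Here is a sketch using astropy's built-in Planck 2018 cosmology; that library choice is my own assumption, and its parameters may differ slightly from whatever NED's calculator was set to, so expect small discrepancies:

    from astropy.cosmology import Planck18 as cosmo

    for z in [11.3, 5.7, 1.0, 0.1]:
        age = cosmo.age(z)                # age of the universe at redshift z
        frac = float(age / cosmo.age(0))  # fraction of the present age
        print(f"z = {z:5.2f}: age = {age:.2f}, {100 * frac:.1f}% of present")
    # roughly 0.4 Gyr (3%), 1.0 Gyr (7%), 5.9 Gyr (43%), 12.4 Gyr (90%)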

Mechanism of a Type Ia Supernova

This is relevant at this point, because so much hinges on the peak brightness of these phenomena.

The two principal mechanisms that produce supernovae are core collapse and thermal runaway. Core collapse supernovae may have spectra that contain hydrogen lines, meaning the star exploded before running out of hydrogen; these are Type II. There are several subtypes, but they don't pertain to this discussion, and there is a wide variation in their peak luminosity. If a core collapse occurs after the hydrogen is exhausted, no hydrogen spectral lines will be seen; such supernovae are Type I. These also have a few subtypes, which don't concern us.

A supernova that contains no hydrogen lines in its spectrum, and also has a strong line of singly ionized silicon (615 nm), is interpreted as a thermal runaway supernova of Type Ia. What is thermal runaway?

The basic mechanism starts with a white dwarf. This is what remains after a main sequence star with an initial mass less than 8 times that of the Sun (M < 8 M☉) has used up all its fuel. The time this takes varies depending on the mass of the star. The initial mass is called its "zero-age mass", setting zero at the point hydrogen fusion begins.

This chart shows the time, in billions of years, that a star spends on the Main Sequence, fusing hydrogen exclusively, until it begins to fuse helium in its core, at which time it "leaves the Main Sequence" and grows to be a Red Giant. The data for this chart came from one of many reports about computer modeling of stellar evolution.

Depending on the star's mass, it takes between a few million years and half a billion years for a red giant to complete helium fusion, at which point it will erupt in a gentler way than a supernova, "whooshing" away up to half its mass or even more (which often becomes a planetary nebula), and then it shrinks into a white dwarf. A new-born white dwarf is actually rather blue, with a temperature of around 100,000 K.

The blue line tracks stars of similar composition to the Sun, but a little less initial helium (parameter "y"). The green line tracks stars with very low amounts of "metals", by which an astronomer means all elements heavier than helium. The symbol for metallicity is "z" (don't confuse this with z, for redshift), and for the Sun it is about 1%, called "hi" in this chart. The green and violet lines are for z=0.00001, or 0.001%, which would represent the second generation of stars in the early universe.

The first generation had a metallicity of zero, and behaved very differently from stars with even a few parts per million of "metals"; for one thing, they couldn't really get fusion going until their mass exceeded 50-100 solar masses, and they then burned very brightly and used up all their fuel in just a few million years. At that point they exploded as supernovae (I don't know what type). Those explosions synthesized elements of the entire periodic table in a very short time, so the first generation of stars seeded the universe with elements that let all succeeding generations of stars "work" the way stars now operate. Over time, supernovae of types I and II added more and more metals, so that five billion years ago, when our Sun was formed, that part of the galaxy had a metallicity of about 1%.

I am interested in the formation of the earliest white dwarfs. These would have been produced from middleweight stars, with mass between 4 and 8 solar masses (M☉). Their main sequence "dwell time" would be about 100 million years for those near 4 M☉, and just 10-20 million years for the heaviest stars. Just as the main sequence dwell time is much shorter for more massive stars, so is the period of helium burning. While the red giant that the Sun becomes will spend perhaps half a billion years burning helium, a star 6-8 times as massive will burn through all its helium in a million years or so, and become a white dwarf soon thereafter ("soon" meaning probably less than a million years). When we're looking back to a time before the universe's age was about 6 billion years, a few million one way or another is negligible.

What is the composition of such a white dwarf (WD hereafter)? Their basic composition is carbon and oxygen, formed by helium fusion. In the heaviest "middleweights", a little fusion of the C and O can occur, yielding magnesium (C+C→Mg) and silicon (C+O→Si). In the earliest WD's we would not expect other elements in more than trace amounts, but in later ones, minor element abundance will be similar to that of the original star.

Now we are ready to talk mechanisms. When the Sun becomes a WD, it will have a mass of about 60% of its present mass. In the late stages of being a red giant it will expel the rest of the mass in a big "whoosh". Heavier stars expel a larger proportion, and no WD gets formed with more than about 1.4 M☉. This is because the only thing keeping a WD from collapsing into a neutron star or black hole is electron degeneracy, a quantum-mechanical effect. The 1.4 M☉ limit, first calculated by Chandrasekhar, and named for him, is the limit of the strength of degeneracy. This is the key to the fixed value of the luminosity of a Type Ia supernova.

Once a WD forms, if it is isolated, it remains forever, slowly cooling to a black dwarf (which takes a few trillion years). But many stars are members of a doublet. The companion star of the WD can do several things to add mass to it, and once its mass exceeds the Chandrasekhar Limit, it will collapse: beginning at the core, rapid deflagration occurs and soon consumes the entire star. The detonation outshines a typical galaxy and can be seen across the visible universe. That is a Type Ia supernova.

There are two principal ways a companion star might add mass to a WD.

  1. The companion will itself eventually become a red giant. If the orbit with the WD is small enough, material from the giant will be swept onto the WD. This can go on until the WD detonates, incidentally giving the companion a "kick" that sends it careening away at hundreds or even thousands of kilometers per second.
  2. The companion may not push enough mass to the WD to make it detonate. Instead, when it begins its late stage "whoosh", some of the cloud thus released can stay behind, and friction will result, such that the WD and the fading red giant, on the way to becoming a WD itself, spiral toward one another. In time they will merge, combining their masses into one larger WD. It is very likely that this combined WD will have a mass greater than 1.4 M☉, and it may even approach 2.8 M☉ if both stars' initial mass exceeded 6 M☉. The new WD doesn't even settle down, but detonates right away. 

Interestingly, all the literature I have read indicates that the second mechanism is apparently much more common than the first. Perhaps only 5% of Type Ia supernovae occur by the first mechanism. That presents a problem, because we now have a 2:1 range of "mass detonated", which ought to have a big influence on the peak luminosity of the supernova.

If that were all there is to it, I could say with confidence that the spread in luminosity is too great for these supernovae to be a useful "standard candle." However, the studies that led to the discovery of apparent cosmological acceleration, and later researchers seeking to confirm it, have another ace up their sleeve. They use spectroscopic criteria to distinguish different subtypes of Type Ia. If they aren't fooling themselves with circular reasoning (and I do not claim they are), the selected supernovae do seem to exhibit extra dimming with distance, consistent with the acceleration hypothesis.

For example, this figure from a short publication by the Dark Energy Survey shows the effect. The authors used spectroscopy to identify more than 250 supernovae with redshift between 0.1 and 1.0 (1.4 Gyr ago to 7.9 Gyr ago), shown by the red symbols. They used the same criteria to gather a sample of similar size of more recent supernovae, shown by the yellow symbols.

They normalized the data to the hypothetical model with acceleration, so their hypothesis follows the horizontal line in the lower diagram. The lower, dashed blue line shows where they would expect these data to fall if there were no acceleration and the universe were flat.

Faced with a chart like this, I have to accept the hypothesis, don't I? Not necessarily. The article has a comprehensive, but exceedingly compressed, discussion of the computational methods used to correct for intergalactic extinction (dimming of the light by gas and dust 'way out there'), for example. They may have corrected for the effect I am soon to discuss, but I could not determine if this were so. I also have a point to discuss relating to the composition of WD's.

I am quite impressed by the tightness of the error bars, a result of the selection criteria, considering that the pre-selection luminosity data must have had a scatter exceeding 2:1, for reasons noted a few paragraphs above.

White Dwarf Composition

The initial counter-idea I had, nearly 20 years ago, was, "Does the metallicity of the star that formed the WD have any effect on the actual value of the Chandrasekhar Limit for that particular WD?" Secondarily, does the composition modify the rate of deflagration, and thus the peak luminosity?

I don't have the mathematical or computational tools to delve into this directly. Perhaps someone will do so, or maybe someone has and I haven't seen the literature. However, I'll ask the questions in another way, and leave it as something for future resolution: "For a white dwarf with a mass just below the nominal Chandrasekhar Limit, and a metallicity very near zero, perhaps no more than a few ppm, will its diameter differ from that of a WD of the same mass but metallicity near 1%?" Similarly, "Will the exact value of the Chandrasekhar Limit differ between the two WD's?"

Either effect introduces systematic error. However, metallicity ought to affect the spectroscopy, so perhaps it has been accounted for, wittingly or not.

Density of Earlier Universe

In the second section above, I wrote that the lookback time represented by a redshift z=1 is 7.9 Gyr, when the universe's age was 5.87 Gyr, about 43% of its current age, here considered to be 13.77 Gyr. A simplistic understanding of the distances represented by redshift, based on neither acceleration nor deceleration, is that the light reaching us from the early universe has traveled one billion light years per gigayear, so the radius of the observable universe would have been 43% of the present value at z=1. The cube of (5.87/13.77) is 0.077. Inverting this, the density of the intergalactic medium at z=1 would then be 12.9 times as great as it is now.

Things aren't that simple. The cosmological calculators I've been using for these calculations use the Friedmann equations to factor in the effects of the cosmological constant and general relativity, such that the "distance" light has traveled since the big bang isn't 13.77 billion light years (GLy), but about 46.1 GLy. The "extra" 33 GLy is from cosmological expansion of space, which carried the light along with it. According to the Friedmann equations, the lookback distance to z=1 is 11.05 GLy, which means the distance light came to reach that point was 35.05 GLy, or 76% of the full distance. The cube of (35.05/46.1) is 0.440; inverting this yields 2.275. If these are the right figures, the universe was only a little more than twice as dense then, compared to now.
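Both density estimates are simple arithmetic; a sketch using the figures above:

    # Density of the intergalactic medium at z=1 relative to today,
    # computed both ways described above.
    AGE_NOW = 13.77  # Gyr, current age of the universe
    AGE_Z1 = 5.87    # Gyr, age at z=1

    # Simplistic view: size scales with light-travel time.
    print(f"simplistic: {(AGE_NOW / AGE_Z1) ** 3:.1f}x denser")  # ~12.9x

    # Friedmann-based view, using the calculators' comoving distances.
    R_NOW = 46.1   # GLy, radius of the observable universe today
    D_Z1 = 11.05   # GLy, lookback distance to z=1
    print(f"Friedmann: {(R_NOW / (R_NOW - D_Z1)) ** 3:.2f}x denser")  # ~2.28x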

So, which is it? The Friedmann equations are used in the calculation of 13.77 Gyr as the "age" of the big bang, and they have the cosmological constant built into them, and thus are based on the existence of dark energy. Here the reasoning does seem circular. Again, I'll put that notion on hold until I learn more.

The fact remains that the density of the universe about 8 billion years ago was between about 2.3 and 13 times the present value.

That is not all. While the Milky Way is thought to have formed within 100 million years after the big bang, as were many other large galaxies, other galaxies are of more recent vintage, some as recent as half a billion years. What proportion of the primordial gas that now makes up the stars—and the supernova-processed material of the interstellar medium and intergalactic medium—had been gathered into galaxies in the first 6 billion years after the big bang? Half of it? 80%? . . . 20%? I have not seen any well-supported estimates. Galaxy formation is written about as a continuous process. If the most recent "new" galaxy has an age of 500 million years, there are others, not yet discerned, spanning a wide range of ages.

The universe of between 4 and 8 billion years ago contained intergalactic material that has since been swept into galaxies and used to form newer stars. Our Sun, of age about 5 Gyr, was probably formed mainly from material that had been in the Milky Way almost from the beginning, but was more recently gained material also included? It is likely.

Interstellar Extinction and Intergalactic Extinction

In astronomical terms, "extinction" is nothing like biological extinction. It refers to the dimming of light by absorption as it passes through gas and dust between the stars and, for anything outside the Milky Way, between the galaxies. When measuring the brightness of stars with telescopes on Earth, we also have to correct for atmospheric extinction. Space telescopes, of course, are above the atmosphere, which removes this complication. But the effect of the atmosphere is a familiar starting point toward understanding how gas and dust in space affect starlight.

The clean air of a cloudless day scatters more than ¼ of sunlight before it reaches the surface near the Equator, and even more at other latitudes. The scattered light is the blue sky we see, because the scattering is more efficient for bluer light. Thus, at the equator, direct sunlight, which began at the top of the atmosphere with an intensity represented by the Solar Constant of about 1,370 W/m², has an intensity of about 1,000 W/m² when it hits a beach in western Ecuador. The other 370 W/m² has been scattered such that half goes up and half goes down, which means the sky brightness has an intensity of 185 W/m². At the latitude where I live, near 40° north, photovoltaic solar panel systems are designed for peak insolation of about 700 W/m².

If there is any dust in the air, it absorbs more of the light, sometimes nearly all of it. Scattering by dust is also more efficient for bluer light, but the spectral function is not as steep as it is for clear-gas scattering (Rayleigh scattering). We have all seen a sunset on a windy day with very dramatic colors because of dust in the air; we can look right at the deep red Sun without harm.

There is a lot of dust in the interstellar medium within the Milky Way. That is why galactic surveys are undertaken far from the trace of the Milky Way on the sky. Most of the intergalactic medium is thought to be gas, with much less dust.

If the air in the sky above that beach in Ecuador didn't decrease in pressure with altitude, the depth of the atmosphere would be about 7.8 km (4.8 miles). That is sufficient to scatter away 27% of the sunlight by Rayleigh scattering. However, the primary gas in space is molecular hydrogen, not nitrogen. The Rayleigh scattering cross section of nitrogen is reported as about five times that of hydrogen. We may thus determine that, to absorb 27% of incoming sunlight, normalized across the visible spectrum, a thickness of 39 km of hydrogen, at a pressure of 760 Torr, is required. One-third of this depth of gas will absorb 10%, so a base thickness of 13 km is a good starting point for what follows. Even then, the calculations will be pretty rough because I don't want to enter into the complexities of integrating across the spectrum.

The gas "pressure" in interstellar space is quite variable but over long distances it averages out to between 10-10 and 10-11 Torr. That is less than a trillionth of the gas density of the atmosphere. Fifty years ago, working as a high-vacuum technician and spectroscopist, I regularly achieved pressures in this range in a stainless steel chamber (with various sorts of viewports) that had a volume of a couple of cubic feet.

The ratio 760 Torr/10⁻¹¹ Torr is 7.6×10¹³, and this times 13 km is 9.88×10¹⁴ km. That is about 104 light years. This is convenient: interstellar extinction by hydrogen is about 10% per hundred light years, within the Milky Way at least. In areas with pressure closer to 10⁻¹⁰ Torr, it is 10% per ten light years.
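The scaling is worth showing explicitly; a sketch using the rough figures above:

    # Scale the 13-km sea-level-equivalent hydrogen column (~10% loss)
    # to interstellar and intergalactic pressures.
    BASE_KM = 13.0       # km of H2 at 760 Torr giving ~10% extinction
    KM_PER_LY = 9.46e12  # kilometers per light year

    for torr in [1e-10, 1e-11, 1e-17]:
        column_km = BASE_KM * 760 / torr
        print(f"{torr:.0e} Torr: 10% loss per {column_km / KM_PER_LY:.3g} ly")
    # 10 ly, 104 ly, and 104 million ly respectively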

In intergalactic space the gas density is about a million times less, with a "pressure" of about 10⁻¹⁷ Torr. When we look out away from the plane of the Milky Way, the thickness of nearby gas may come to several hundred light years, and absorb perhaps half the light. Beyond that, for each hundred million light years, intergalactic extinction would be about 10%.

Here I have to step back to say, "Is this reasonable? At a redshift distance of z=1, or about 8 GLy (uncorrected by Friedmann equations), is the remaining light really 0.9⁸⁰, or 0.00022?" That is nine magnitudes! I suspect the material I read that compared Rayleigh scattering in hydrogen with other gases had a value too high for hydrogen. If it is even half the value I've used, the base depth of 13 km would instead be 26 km, and intergalactic extinction would be 10% per 200 million light years, leading to extinction of 0.9⁴⁰, or 0.015, which is 4.5 magnitudes, a much more likely value.
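Converting those survival fractions to magnitudes (again, my own arithmetic sketch):

    import math

    # Light surviving ~8 GLy (8,000 million ly) at each extinction rate.
    for mly_per_step, label in [(100, "10% per 100 million ly"),
                                (200, "10% per 200 million ly")]:
        steps = 8000 / mly_per_step
        transmitted = 0.9 ** steps
        mags = -2.5 * math.log10(transmitted)
        print(f"{label}: fraction {transmitted:.2e} = {mags:.1f} magnitudes")
    # about 2.2e-04 (9.2 mag) and 1.5e-02 (4.6 mag)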

This emphasizes that the observed luminosity of distant objects must be corrected with an accurate model of intergalactic absorption.

This is the final concern I have with the measured luminosities of distant supernovae. If the volume of the universe at z=1 was less than half what it is now, or perhaps much less than that, the density of the intergalactic medium was at least twice as great, and perhaps much more, depending on how much was already sequestered into galaxies.

The reports from the Dark Energy Survey state a scatter in corrected luminosity of their supernovae of about 20%, which is small enough that the effect can be seen. Yet, if the extinction calculations are not taking proper account of the change in density of the intervening gas (and dust) over time, a rather small error in the extinction coefficient, drawn out over billions of light years, makes a large difference.

Conclusion

It takes a long chain of reasoning to draw the conclusion that the Hubble expansion of the universe is accelerating. Is it possible that the cosmologists have thought of absolutely everything that could systematically bias the measurements on which they rely? Everything? Possibly, but it is unlikely.

Here are the factors that may be confusing the matter:

  • The peak luminosity of a Type Ia supernova may depend on the metallicity of the original stars, which would have been very low in the early universe.
  • Intergalactic extinction is based on the total gas column between the source and the observer. The gas density has changed through time, firstly because of thinning in an expanding universe.
  • The gas density has also changed through time as gas was gathered into galaxies, and in particular how rapidly that gas was taken up and thus removed from most sight lines.
  • The Friedmann equations used to determine lookback distance depend on the cosmological model, including values of Ω₀, Ω_M, and Ω_vacuum. Unless used with care, calculating distances to use for extinction calculations becomes circular reasoning.

With these in mind, I think it is very, very premature to conclude that cosmological acceleration is real.