Tuesday, February 02, 2021

How real is dark energy?

kw: cosmological musings, dark energy, spectroscopy, universe evolution

I approach cosmology from a recreational perspective. I was introduced to extremely basic astronomy and cosmology during early primary education at a private school that followed the Classical model. But for a third-grader, a basic introduction to the Big Bang—early enough that Fred Hoyle's contention on behalf of a Steady State universe was still well regarded by many—was about as far as that instruction got. It piqued my interest, and I have enjoyed amateur astronomy, and also followed cosmology, sporadically and from a distance, ever since.

The apparent discovery of cosmological acceleration and "dark energy" just over twenty years ago got my attention. Over the past two decades I have pondered numerous aspects of cosmology, and while I still have my wits about me, it is time to record a few ideas. Allow me to state my bias at the outset: I think the analyses so far performed are incomplete, and that eventually the Cosmological Constant will be returned to zero, from the present hypothesis that it is either -1 or -0.3, depending on the parameterization chosen by various authors.

I collected my thoughts on the matter and wrote a list of discussion topics. I don't plan to discuss every topic in detail, nor in this order, but the first item on the list is a good place to begin.

Photon "Experience"

Albert Einstein began by imagining that he could ride along with a beam of light. From the speculations that followed, he applied the appropriate mathematical treatments to derive the theory of Special Relativity.

With a much more modest goal, I began by considering what it would be like to "be" a photon. What does a photon experience? From our point of view, a photon is emitted, typically by an accelerating charged particle (I include quantum transitions as "acceleration"); it then travels some distance before being absorbed, either by being converted to heat energy or by causing a quantum transition in an electron orbital. I will take advantage of my experience as a spectroscopist, primarily of near-to-medium infrared (NIR and MIR) spectra, but also plenty of visible and near-ultraviolet (V and NUV) spectra, and even some time spent working with vacuum UV or far UV (FUV).

What is the photon's point of view? If we imagine a photon as having sufficient awareness to "experience" anything, what does it experience? Let us first consider emission by a quantum transition, such as an electron of a hydrogen atom in the first excited state, dropping to the ground state. The energy of the photon is 10.2 eV, and it has a nominal wavelength in FUV of 121.6 nm. This is the Lyman-alpha (Lyα) transition. As a spectroscopist I primarily worked with cesium. Considering its ground state of five filled shells plus one s-electron as a lower-energy "virtual hydrogen atom", the first-level-to-ground transition energy is 1.39 eV, with a NIR wavelength of 895 nm. The higher-energy transitions of cesium's outer electron to ground have wavelengths in the visible spectrum, with the strongest being a bright green 540 nm.

I'll analyze the "experience" of a 540 nm photon. Although we tend to think of photon emission as instantaneous, it is more reasonable to consider that it occurs in a tiny slice of time. How tiny? The travel time at c across one wavelength at 540 nm is 1.8×10⁻¹⁵ s, or 1.8 femtoseconds (fs). If the photon is to "experience" its creation, it must think very fast, because it has only a fs or two of "assembly time" before it begins its journey.
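
To put numbers on such statements, here is a minimal sketch of the two conversions used in this section: photon energy from wavelength (E = hc/λ) and the light-travel time across one wavelength (λ/c). The wavelengths are the ones quoted above; the rest is standard constants.

    # Photon energy and "assembly time" (one wavelength at c) for the
    # transitions discussed above: E = h*c/lambda; t = lambda/c.
    H = 6.626e-34   # Planck constant, J*s
    C = 2.998e8     # speed of light, m/s
    EV = 1.602e-19  # joules per electron-volt

    for name, nm in (("H Lyman-alpha", 121.6), ("Cs first-to-ground", 895.0),
                     ("green", 540.0)):
        lam = nm * 1e-9                  # wavelength in meters
        energy_ev = H * C / lam / EV     # photon energy in eV
        period_fs = lam / C * 1e15       # one-wavelength travel time in fs
        print(f"{name} ({nm} nm): {energy_ev:.2f} eV, {period_fs:.2f} fs")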

Skipping over the journey for the moment, it is reasonable to assume that "disassembly" occurs on a similar time scale of 1-2 fs. Now, as to the journey itself, what might the photon experience? Special relativity dictates that time dilation at velocity c is infinite; from the photon's point of view, the duration of the journey is precisely zero, and it experiences nothing!

Thus we can say that the "life experience" of a photon consists of at most 1-2 fs of assembly (birth?) followed immediately by at most 1-2 fs of disassembly (dissolution). Whether it is consumed in pushing an electron into a higher orbital, or exciting vibrational states in a molecule, which then turn into phonons and heat things up a tad, the photon itself experiences nothing whatever on its journey, whether the distance is a micron or a gigaparsec.

This in itself is quite tangential to cosmology, but I'll refer back to the concept wherever it might prove useful.

Cosmological Red Shift

Note: For calculations of the age of the universe at different red shifts I used calculators provided by NED at Caltech. NED is the NASA/IPAC Extragalactic Database.

The first date of interest to a visual observer is the formation of the first stars, which ionized the hydrogen and helium filling the young universe so that light could travel with little absorption. This date is widely considered to be about 400 Myr (0.4 Gyr, or 2.9% of the current age) after the Big Bang, and if any light from this era is observable, its red shift ought to be about z=11.3.

The second date of interest is at the completion of ionization by the first stars, by then being formed into galaxies, at about 1 Gyr (7.3%), at a red shift of z=5.7.

The studies of supernovae used to winkle out the Cosmological Constant, and thus Dark Energy, have typically been undertaken in the range 0.1 < z < 1.0 (as compared to supernovae closer than z=0.1). This corresponds to ages between 5.87 Gyr (43%) and 12.4 Gyr (90%).
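
As a cross-check on these figures, here is a minimal sketch of the integral a calculator like NED's evaluates for the age at a given redshift, assuming a flat Lambda-CDM model. The H0 and Omega values are round illustrative numbers, not NED's exact parameters, so the results land near, but not exactly on, the figures quoted above.

    # Age of the universe at redshift z, in Gyr, for flat Lambda-CDM:
    # integrate dt = dz' / ((1+z') * H(z')) from z out to high redshift.
    # H0 and the Omegas below are assumed round values, not fitted ones.
    import math

    H0 = 70.0 / 978.0            # 70 km/s/Mpc expressed in 1/Gyr
    OMEGA_M, OMEGA_L = 0.3, 0.7  # matter and vacuum density parameters

    def age_at_z(z, z_max=1000.0, steps=100_000):
        """Midpoint-rule integration of the Friedmann age integral."""
        dz = (z_max - z) / steps
        total = 0.0
        for i in range(steps):
            zp = z + (i + 0.5) * dz
            hz = H0 * math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
            total += dz / ((1 + zp) * hz)
        return total

    for z in (11.3, 5.7, 1.0, 0.1, 0.0):
        print(f"z = {z}: age ~ {age_at_z(z):.2f} Gyr")

With these round parameters the ages come out near 0.4, 1.0, 5.8, 12.2, and 13.5 Gyr, in reasonable agreement with the NED figures used in this section.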

Mechanism of a Type Ia Supernova

This is relevant here, because so much hinges on the peak brightness of these events.

The two principal mechanisms that produce supernovae are core collapse and thermal runaway. Core collapse supernovae may have spectra that contain hydrogen lines, meaning the star exploded before running out of hydrogen; these are Type II. There are several subtypes, but they don't pertain to this discussion, and there is wide variation in their peak luminosity. If a core collapse occurs after the hydrogen is exhausted, no hydrogen spectral lines will be seen; such supernovae are Type I. The core-collapse members of Type I (subtypes Ib and Ic) don't concern us either.

A supernova that contains no hydrogen lines in its spectrum, and also has a strong line of singly ionized silicon (615 nm), is interpreted as a thermal runaway supernova of Type Ia. What is thermal runaway?

The basic mechanism starts with a white dwarf. This is what remains after a main sequence star with an initial mass less than 8 times that of the Sun (M* < 8 M☉) has used up all its fuel. The time this takes varies with the mass of the star. The initial mass is called its "zero-age mass", setting zero at the point hydrogen fusion begins.

This chart shows the time, in billions of years, that a star spends on the Main Sequence, fusing hydrogen exclusively, until it begins to fuse helium in its core, at which time it "leaves the Main Sequence" and grows to be a Red Giant. The data for this chart came from one of many reports about computer modeling of stellar evolution.

Depending on the star's mass, it takes between a few million years and half a billion years for a red giant to complete helium fusion, at which point it will erupt in a gentler way than a supernova, "whooshing" away up to half its mass or even more (which often becomes a planetary nebula), and then it shrinks into a white dwarf. A new-born white dwarf is actually rather blue, with a temperature of around 100,000 K.

The blue line tracks stars of similar composition to the Sun, but with a little less initial helium (parameter "y"). The green line tracks stars with very low amounts of "metals", by which an astronomer means all elements heavier than helium. The symbol for metallicity is "z" (don't confuse this with z for redshift); for the Sun it is about 1%, called "hi" in this chart. The green and violet lines are for z=0.00001, or 0.001%, which would represent the second generation of stars in the early universe.

The first generation had a metallicity of zero, and behaved very differently from stars with even a few parts per million of "metals"; for one thing, they couldn't really get fusion going until their mass exceeded 50-100 solar masses, and they then burned very brightly and used up all their fuel in just a few million years. At that point each exploded as a supernova (I don't know what type). Those explosions synthesized elements across the entire periodic table in a very short time, so the first generation of stars seeded the universe with elements that let all succeeding generations of stars "work" the way stars now operate. Over time, supernovae of types I and II added more and more metals, so that five billion years ago, when our Sun was formed, that part of the galaxy had a metallicity of about 1%.

I am interested in the formation of the earliest white dwarfs. These would have been produced from middleweight stars, with masses between 4 and 8 solar masses (M☉). Their main sequence "dwell time" would be about 100 million years for those near 4 M☉, and just 10-20 million years for the heaviest. Just as the main sequence dwell time is much shorter for more massive stars, so is the period of helium burning. While the red giant that the Sun becomes will spend perhaps half a billion years burning helium, a star 6-8 times as massive will burn through all its helium in a million years or so, and become a white dwarf soon thereafter ("soon" meaning probably less than a million years). When we're looking back to a time before the universe's age was about 6 billion years, a few million years one way or the other is negligible.
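
For a feel of how steep that mass dependence is, here is a minimal sketch using the classic textbook scaling t ≈ 10 Gyr × (M/M☉)^-2.5, which follows from the luminosity-mass relation L ∝ M^3.5. The exponent and normalization are assumptions for illustration; the power law reproduces the steepness of the trend, though the detailed models quoted above give shorter dwell times for heavy, metal-poor stars.

    # Rough power-law sketch of main-sequence lifetime vs. mass:
    # t ~ 10 Gyr * (M/Msun)^-2.5 (a textbook scaling, assumed here; the
    # stellar-evolution models described in the text give shorter times
    # for the heavy, metal-poor middleweights).
    def ms_lifetime_gyr(mass_solar):
        """Approximate main-sequence dwell time, in Gyr."""
        return 10.0 * mass_solar ** -2.5

    for m in (1, 2, 4, 6, 8):
        print(f"{m} Msun: ~{ms_lifetime_gyr(m) * 1000:.0f} Myr on the Main Sequence")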

What is the composition of such a white dwarf (WD hereafter)? The basic composition is carbon and oxygen, formed by helium fusion. In the heaviest "middleweights", a little fusion of the C and O can occur, yielding magnesium (C+C→Mg) and silicon (C+O→Si). In the earliest WD's we would not expect other elements beyond trace amounts, but in later ones the minor-element abundances will be similar to those of the original star.

Now we are ready to talk mechanisms. When the Sun becomes a WD, it will have a mass of about 60% of its present mass. In the late stages of being a red giant it will expel the rest of the mass in a big "whoosh". Heavier stars expel a larger proportion, and no WD gets formed with more than about 1.4 M☉. This is because the only thing keeping a WD from collapsing into a neutron star or black hole is electron degeneracy, a quantum-mechanical effect. The 1.4 M☉ limit, first calculated by Chandrasekhar, and named for him, is the limit of the strength of degeneracy. This is the key to the fixed value of the luminosity of a Type Ia supernova.

Once a WD forms, if it is isolated, it remains one forever, slowly cooling toward a black dwarf (which takes a few trillion years). But many stars are members of a binary pair. The companion star of the WD can do several things to add mass to it, and once its mass exceeds the Chandrasekhar Limit, it begins to collapse at the core: runaway carbon fusion ignites there, and a rapid deflagration soon consumes the entire star. The detonation outshines a typical galaxy and can be seen across the visible universe. That is a Type Ia supernova.

There are two principal ways a companion star might add mass to a WD.

  1. The companion will itself eventually become a red giant. If its orbit with the WD is small enough, material from the giant will be swept onto the WD. This can go on until the WD detonates, incidentally giving the companion a "kick" that sends it careening away at a thousand kilometers per second or more.
  2. The companion may not push enough mass to the WD to make it detonate. Instead, when it begins its late stage "whoosh", some of the cloud thus released can stay behind, and friction will result, such that the WD and the fading red giant, on the way to becoming a WD itself, spiral toward one another. In time they will merge, combining their masses into one larger WD. It is very likely that this combined WD will have a mass greater than 1.4 M☉, and it may even approach 2.8 M☉ if both stars' initial mass exceeded 6 M☉. The new WD doesn't even settle down, but detonates right away. 

Interestingly, all the literature I have read indicates that the second mechanism is much more common than the first; perhaps only 5% of Type Ia supernovae occur by the first mechanism. That presents a problem, because we now have a 2:1 range of "mass detonated", which ought to have a big influence on the peak luminosity of the supernova.

If that were all there is to it, I could say with confidence that the spread in luminosity is too great for these supernovae to be a useful "standard candle." However, the studies that led to the discovery of apparent cosmological acceleration, and later researchers seeking to confirm it, have another ace up their sleeve. They use spectroscopic criteria to distinguish different subtypes of Type Ia. If they aren't fooling themselves with circular reasoning (and I do not claim they are), the selected supernovae do seem to exhibit extra dimming with distance, consistent with the acceleration hypothesis.

For example, this figure from a short publication by the Dark Energy Survey shows the effect. The authors used spectroscopy to identify more than 250 supernovae with redshift between 0.1 and 1.0 (1.4 Gyr ago to 7.9 Gyr ago), shown by the red symbols. They used the same criteria to gather a sample of similar size of more recent supernovae, shown by the yellow symbols.

They normalized the data to the hypothetical model with acceleration, so their hypothesis follows the horizontal line in the lower diagram. The lower, dashed blue line shows where they would expect these data to fall if there were no acceleration and the universe were flat.

Faced with a chart like this, I have to accept the hypothesis, don't I? Not necessarily. The article has a comprehensive, but exceedingly compressed, discussion of the computational methods used to correct for intergalactic extinction (dimming of the light by gas and dust 'way out there'), for example. They may have corrected for the effect I am soon to discuss, but I could not determine whether this is so. I also have a point to discuss relating to the composition of WD's.

I am quite impressed by the tightness of the error bars, a result of the selection criteria, considering that the pre-selection luminosity data must have had a scatter exceeding 2:1, for reasons noted a few paragraphs above.

White Dwarf Composition

The initial counter-idea I had, nearly 20 years ago, was, "Does the metallicity of the star that formed the WD have any effect on the actual value of the Chandrasekhar Limit for that particular WD?" Secondarily, does the composition modify the rate of deflagration, and thus the peak luminosity?

I don't have the mathematical or computational tools to delve into this directly. Perhaps someone will do so, or maybe someone has and I haven't seen the literature. However, I'll ask the questions in another way, and leave it as something for future resolution: "For a white dwarf with a mass just below the nominal Chandrasekhar Limit, and a metallicity very near zero, perhaps no more than a few ppm, will its diameter differ from that of a WD of the same mass but metallicity near 1%?" Similarly, "Will the exact value of the Chandrasekhar Limit differ between the two WD's?"
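
I can at least sketch the leading-order effect. In the standard ideal-degenerate-gas treatment, the Chandrasekhar mass scales as the inverse square of μe, the mean molecular weight per electron, and heavy elements carry slightly fewer electrons per nucleon than carbon or oxygen. Here is a minimal sketch; the compositions are illustrative assumptions, not model results.

    # Chandrasekhar mass vs. composition, ideal degenerate-gas treatment:
    # M_Ch ~ 5.83 / mu_e^2 solar masses, where mu_e is the mean molecular
    # weight per electron. Compositions below are illustrative only.
    def mu_e(mass_fractions):
        """Mean molecular weight per electron, from mass fractions and
        each isotope's electrons-per-nucleon ratio (Z/A)."""
        z_over_a = {"C12": 6/12, "O16": 8/16, "Mg24": 12/24,
                    "Si28": 14/28, "Fe56": 26/56}
        return 1.0 / sum(x * z_over_a[iso] for iso, x in mass_fractions.items())

    def m_ch(mu):
        """Chandrasekhar mass in solar masses."""
        return 5.83 / mu ** 2

    pristine = {"C12": 0.50, "O16": 0.50}                # near-zero metallicity
    enriched = {"C12": 0.49, "O16": 0.49, "Fe56": 0.02}  # ~1% heavy elements

    for label, comp in (("pristine", pristine), ("enriched", enriched)):
        mu = mu_e(comp)
        print(f"{label}: mu_e = {mu:.4f}, M_Ch = {m_ch(mu):.4f} Msun")

At this crude level the limit shifts by only about 0.3%, and this says nothing about the deflagration rate, so the second question stands.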

Either effect introduces systematic error. However, metallicity ought to affect the spectroscopy, so perhaps it has been accounted for, wittingly or not.

Density of Earlier Universe

In the second section above, I wrote that the lookback time represented by a redshift z=1 is 7.9 Gyr, when the universe's age was 5.87 Gyr, about 43% of its current age, here taken to be 13.77 Gyr. A simplistic understanding of the distances represented by redshift, assuming neither acceleration nor deceleration, is that the light reaching us from the early universe has traveled one billion light years per gigayear, so the radius of the observable universe would have been 43% of the present value at z=1. The cube of (5.87/13.77) is 0.077. Inverting this, the density of the intergalactic medium at z=1 would then have been 12.9 times as great as it is now.

Things aren't that simple. The cosmological calculators I've been using for these figures use the Friedmann equations to factor in the effects of the cosmological constant and general relativity, such that the "distance" light has traveled since the big bang isn't 13.77 billion light years (GLy), but about 46.1 GLy. The "extra" 33 GLy comes from cosmological expansion of space, which carried the light along with it. According to the Friedmann equations, the lookback distance to z=1 is 11.05 GLy, which means the distance light came to reach that point was 35.05 GLy, or 76% of the full distance. The cube of (35.05/46.1) is 0.440; inverting this yields 2.275. If these are the right figures, the universe was only a little more than twice as dense then, compared to now.
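
Here is the arithmetic of both estimates side by side, along with the textbook scaling ρ ∝ (1+z)³, which gives a factor of 8 at z=1, falling between the two. The ages and distances are the NED figures quoted above.

    # Density of the intergalactic medium at z=1 relative to today,
    # estimated three ways; inputs are the figures quoted in the text.
    AGE_NOW, AGE_Z1 = 13.77, 5.87     # Gyr
    R_NOW, LOOKBACK_Z1 = 46.1, 11.05  # GLy (comoving figures from the text)

    simplistic = (AGE_NOW / AGE_Z1) ** 3              # ~12.9x
    friedmann = (R_NOW / (R_NOW - LOOKBACK_Z1)) ** 3  # ~2.28x
    textbook = (1 + 1.0) ** 3                         # (1+z)^3 at z=1: 8x

    print(f"simplistic: {simplistic:.1f}x, Friedmann distances: "
          f"{friedmann:.2f}x, (1+z)^3 scaling: {textbook:.0f}x")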

So, which is it? The Friedmann equations are used in the calculation of 13.77 Gyr as the "age" of the big bang, and they have the cosmological constant built into them, and are thus based on the existence of dark energy. Here the reasoning does seem circular. Again, I'll put that notion on hold until I learn more.

The fact remains that the density of the universe about 8 billion years ago was between about 2.3 and 13 times the present value.

That is not all. While the Milky Way is thought to have formed within a few hundred million years after the big bang, as were many other large galaxies, other galaxies are of more recent vintage, some as recent as half a billion years. What proportion of the primordial gas that now makes up the stars, and the supernova-processed material of the interstellar and intergalactic media, had been gathered into galaxies in the first 6 billion years after the big bang? Half of it? 80%? . . . 20%? I have not seen any well-supported estimates. Galaxy formation is written about as a continuous process. If the most recent "new" galaxy has an age of 500 million years, there are others, not yet discerned, spanning a wide range of ages.

The universe of between 4 and 8 billion years ago contained intergalactic material that has since been swept into galaxies and used to form newer stars. Our Sun, about 5 Gyr old, was probably formed mainly from material that had been in the Milky Way almost from the beginning, but was more recently gained material also included? That is likely.

Interstellar Extinction and Intergalactic Extinction

In astronomical terms, "extinction" is nothing like biological extinction. It refers to the dimming of light by absorption as it passes through gas and dust between the stars and, for anything outside the Milky Way, between the galaxies. When measuring the brightness of stars with telescopes on Earth, we also have to correct for atmospheric extinction. Space telescopes, of course, are above the atmosphere, which removes this complication. But the effect of the atmosphere is a familiar starting point toward understanding how gas and dust in space affect starlight.

The clean air of a cloudless day scatters more than ¼ of sunlight before it reaches the surface near the Equator, and even more at other latitudes. The scattered light is the blue sky we see, because the scattering is more efficient for bluer light. Thus, at the equator, direct sunlight, which began at the top of the atmosphere with an intensity represented by the Solar Constant of about 1,370 W/m², has an intensity of about 1,000 W/m² when it hits a beach in western Ecuador. The other 370 W/m² has been scattered such that half goes up and half comes down, which means the sky brightness has an intensity of about 185 W/m². At the latitude where I live, near 40° north, photovoltaic solar panel systems are designed for a peak insolation of about 700 W/m².

If there is any dust in the air, it absorbs more of the light, sometimes nearly all of it. Scattering by dust is also more efficient for bluer light, but the spectral function is not as steep as it is for clear-gas scattering (Rayleigh scattering). We have all seen a sunset on a windy day with very dramatic colors because of dust in the air; we can look right at the deep red Sun without harm.

There is a lot of dust in the interstellar medium within the Milky Way. That is why galactic surveys are undertaken far from the trace of the Milky Way on the sky. Most of the intergalactic medium is thought to be gas, with much less dust.

If the air in the sky above that beach in Ecuador didn't decrease in pressure with altitude, the depth of the atmosphere would be about 7.8 km (4.8 miles). That is sufficient to scatter away 27% of the sunlight by Rayleigh scattering. However, the primary gas in space is molecular hydrogen, not nitrogen. The Rayleigh scattering cross section of nitrogen is reported as about five times that of hydrogen. We may thus determine that, to absorb 27% of incoming sunlight, normalized across the visible spectrum, a thickness of 39 km of hydrogen, at a pressure of 760 Torr, is required. One-third of this depth of gas will absorb 10%, so a base thickness of 13 km is a good starting point for what follows. Even then, the calculations will be pretty rough because I don't want to enter into the complexities of integrating across the spectrum.
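
A minimal sketch of that estimate, taking the quoted 5:1 cross-section ratio at face value and working in optical depths (τ = −ln of the transmitted fraction):

    # Depth of sea-level-pressure hydrogen needed for 27% and 10% loss,
    # scaled from the 7.8 km equivalent atmosphere by the quoted 5:1
    # nitrogen-to-hydrogen Rayleigh cross-section ratio.
    import math

    DEPTH_AIR_KM = 7.8               # equivalent depth of the atmosphere
    TAU_27 = -math.log(1 - 0.27)     # optical depth for 27% loss (~0.315)
    TAU_10 = -math.log(1 - 0.10)     # optical depth for 10% loss (~0.105)
    N2_OVER_H2 = 5.0                 # quoted cross-section ratio

    depth_h2_27 = DEPTH_AIR_KM * N2_OVER_H2      # ~39 km of H2 for 27% loss
    depth_h2_10 = depth_h2_27 * TAU_10 / TAU_27  # ~13 km for 10% loss

    print(f"{depth_h2_27:.0f} km for 27% loss; {depth_h2_10:.1f} km for 10% loss")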

The gas "pressure" in interstellar space is quite variable, but over long distances it averages out to between 10⁻¹⁰ and 10⁻¹¹ Torr. That is less than a trillionth of the gas density of the atmosphere. Fifty years ago, working as a high-vacuum technician and spectroscopist, I regularly achieved pressures in this range in a stainless steel chamber (with various sorts of viewports) that had a volume of a couple of cubic feet.

The ratio 760 Torr to 10⁻¹¹ Torr is 7.6×10¹³, and this times 13 km is 9.88×10¹⁴ km. That is about 104 light years. This is convenient: interstellar extinction by hydrogen is about 10% per hundred light years, within the Milky Way at least. In regions with pressure closer to 10⁻¹⁰ Torr, it is 10% per ten light years.
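
The stretch from laboratory pressure to interstellar conditions is a single ratio:

    # Stretch the 13 km sea-level-pressure hydrogen column to the
    # interstellar "pressure" of ~1e-11 Torr quoted above.
    P_LAB = 760.0          # Torr at sea level
    P_ISM = 1e-11          # Torr, the quieter interstellar value
    KM_PER_LY = 9.46e12    # kilometers per light year

    column_km = 13.0 * (P_LAB / P_ISM)   # ~9.9e14 km of gas
    print(f"~{column_km / KM_PER_LY:.0f} light years per 10% extinction")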

In intergalactic space the gas density is about a million times less, with a "pressure" of about 10⁻¹⁷ Torr. When we look out away from the plane of the Milky Way, the thickness of nearby gas may come to several hundred light years, and absorb perhaps half the light. Beyond that, intergalactic extinction would be about 10% per hundred million light years.

Here I have to step back and ask, "Is this reasonable? At a redshift distance of z=1, or about 8 GLy (uncorrected by the Friedmann equations), is the remaining light really 0.9⁸⁰, or 0.00022?" That is nine magnitudes! I suspect the material I read that compared Rayleigh scattering in hydrogen with other gases had a value too high for hydrogen. If it is even half the value I've used, the base depth of 13 km would instead be 26 km, and intergalactic extinction would be 10% per 200 million light years, leading to extinction of 0.9⁴⁰, or 0.015, which is 4.5 magnitudes, a much more likely value.
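
The magnitude check is simple compounding of 10% losses, shown here for both guesses at the cross section:

    # Fraction of light surviving repeated 10% losses over a long path,
    # for the two extinction rates discussed above, over ~8 GLy (z=1).
    import math

    def surviving_fraction(mly_per_10pct, distance_gly):
        """Fraction left after one 10% loss per 'mly_per_10pct' Mly."""
        units = distance_gly * 1000.0 / mly_per_10pct
        return 0.9 ** units

    for mly in (100.0, 200.0):
        f = surviving_fraction(mly, 8.0)
        mags = -2.5 * math.log10(f)
        print(f"10% per {mly:.0f} Mly: fraction {f:.5f} ({mags:.1f} magnitudes)")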

This emphasizes that the observed luminosities of distant objects must be corrected with an accurate model of intergalactic absorption.

This is the final concern I have with the measured luminosities of distant supernovae. If the volume of the universe at z=1 was less than half what it is now, or perhaps much less than that, the density of the intergalactic medium was at least twice as great, and perhaps much more, depending on how much was already sequestered into galaxies.

The reports from the Dark Energy Survey state a scatter in the corrected luminosity of their supernovae of about 20%, which is small enough that the effect can be seen. Yet if the extinction calculations do not take proper account of the change in density of the intervening gas (and dust) over time, a rather small error in the extinction coefficient, drawn out over billions of light years, makes a large difference.

Conclusion

It takes a long chain of reasoning to draw the conclusion that the Hubble expansion of the universe is accelerating. Is it possible that the cosmologists have thought of absolutely everything that could systematically bias the measurements on which they rely? Everything? Possibly, but it is unlikely.

Here are the factors that may be confusing the matter:

  • The peak luminosity of a Type Ia supernova may depend on the metallicity of the original stars, which would have been very low in the early universe.
  • Intergalactic extinction is based on the total gas column between the source and the observer. The gas density has changed through time, firstly because of thinning in an expanding universe.
  • The gas density has also changed through time as gas was gathered into galaxies, and in particular how rapidly that gas was taken up and thus removed from most sight lines.
  • The Friedmann equations used to determine lookback distance depend on the cosmological model, including the values of Ω₀, Ω_M, and Ω_vacuum. Unless used with care, calculating distances to use for extinction calculations becomes circular reasoning.

With these in mind, I think it is very, very premature to conclude that cosmological acceleration is real.
