kw: book reviews, mysteries, fiction, animal fiction
In the past I read a couple of the Sneaky Pie Brown mysteries by Rita Mae Brown, in which the cat is an essential element in solving the crimes. Homeward Hound is the latest in her Sister Jane series, set in the Virginia horse country, among the fox hunting set.
Ms. Brown's rollicking prose carries a story along, no matter the venue. I found it fascinating reading, not so much for the gradual solution to two murders, but as a window into a subculture about which I know next to nothing. As it happens, the author is a Master of Fox Hounds (MFH, a designation I didn't know was a "thing"). I suspect a number of her friends find themselves limned in the story, only thinly disguised.
Reading about the fox-and-hounds set, in surprising detail, added to my understanding that, whatever formal religion folks may belong to, most have a real "church" that some might call a hobby. So these riders' dedication to the hunt is their actual religion, as are the desert-scanning and mineral/agate-gathering of many rockhounds and the finely honed model-building of model railroad enthusiasts.
In a Sneaky Pie story, the cat does some of the talking, but only to other cats and dogs, who also talk. In a Sister Jane story, cats, dogs, foxes, and owls get into the act. It seems they can all understand one another's "speech", and also what the humans are saying, but no human knows what the animals are "saying". However, the animals don't do much to solve the crime. Sister Jane does most of that. The animal chatter seems to serve more as a series of asides that help the reader gather background information before Sister Jane becomes aware of it. For example, the primary murder in this volume occurs during a snowed-out fox hunt, when a rider is pulled off his horse and killed, but only the horse notices…and "talks" about it with another horse later on.
The story gets pretty complex, with many characters (about twice as many as most writers would employ, just of the human kind, let alone a couple dozen animals). There are a couple of love stories going on. The tying-up-the-knots scene at the end ties up a few knots nobody knew were in the works. To me, it detracted a bit. But all in all, quite an enjoyable read.
Monday, May 27, 2019
Wednesday, May 22, 2019
Refraining from a roll in the gutter
kw: book reviews, short fiction, short stories
"You can't rassle with a skunk and come out smelling like a rose." —anonymous Texan.
The blurb on the dust jacket says of the characters in Thom Jones's stories, "…grifters and drifters, rogues and ne'er-do-wells, would-be do-gooders…", but I decided to give the book a try: Night Train, a posthumous collection of both published and unpublished stories by Thom Jones.
Jones became famous in his forties when he published "The Pugilist at Rest", which opens the volume. It is indeed a pretty good story, with an interesting mix of erudition and the argot of the perennial bottom-feeder. The second story, "Black Lights", works the same washed-up-boxer theme, and the collection goes downhill from there. I didn't finish it. I popped around the volume and found little worth reading for more than a minute or so. I usually checked the last paragraph to see if a story or its character went anywhere. Nope.
I don't mean to say that Jones wrote badly. He spent (squandered?) his skill on getting us inside the heads of people whose heads I don't like being in.
There are 19 stories from three prior volumes, and 7 new ones. Only one of the new ones caught my interest for more than a minute or two: "Diary of My Health". It is evidently autobiographical, with flights into lunacy, from the viewpoint of a 62-year-old who doesn't expect to live much longer. It describes Jones's medical issues, but how inflated or deflated I cannot tell. He lived to age 72, dying, as he expected, mainly from diabetes. He also suffered from temporal lobe epilepsy, which features strongly in many of the stories, and figures in his bursts of creativity.
There are a lot of well-known writers (and many others) who almost idolize his work. Most of them are people whose writing I don't read, so I'll leave them all in the bubble I found them in.
"You can't rassle with a skunk and come out smelling like a rose." —anonymous Texan.
The blurb on the slipcover says of the characters in Thom Jones's stories, "…grifters and drifters, rogues and ne'er-do-wells, would-be do-gooders…", but I decided to give the book a try: Night Train, a posthumous collection of both published and unpublished stories by Thom Jones.
Jones became famous in his forties when he published "The Pugilist at Rest", which opens the volume. It is indeed a pretty good story. There is an interesting mix of erudition and the argot of the perennial bottom-feeder. The second story, "Black Lights", on the same theme of a washed-up boxer, goes downhill from there. I didn't finish it. I popped around the volume, and found little worth reading for more than a minute or so. I usually checked the last paragraph to see if a story or the character went anywhere. Nope.
I don't mean to say that Jones wrote badly. He spent (squandered?) his skill on getting us inside the heads of people whose head I don't like being in.
There are 19 stories from three prior volumes, and 7 new ones. Only one of the new ones caught my interest for more than a minute or two: "Diary of My Health". It is evidently autobiographical, with flights into lunacy, from the viewpoint of a 62-year-old who doesn't expect to live much longer. It describes Jones's medical issues, but how inflated or deflated I cannot tell. He lived to age 72, dying, as he expected, mainly from diabetes. He also suffered from temporal lobe epilepsy, which features strongly in many of the stories, and figures in his bursts of creativity.
There are a lot of well known writers (and many others) who almost idolize his work. Most of the writers are people whose writing I don't read, so I'll leave them all in the bubble I found them in.
Tuesday, May 21, 2019
VR and AR - Becoming the newest IE's
kw: book reviews, nonfiction, virtual reality, augmented reality, surveys, history
I must have been twelve when my family and I went to a Cinerama theater to see Windjammer. It was quite a special event, partly because it cost a lot more than going to an ordinary movie. The theater's program discussed what made Cinerama extra-special: the ultra-wide screen (more than 140°) and the fully surrounding sound system. Though the movie had some kind of plot and plenty of action (mostly sailors-in-training climbing rigging, at least as I remember), the sheer spectacle was the main attraction. It was my first Immersive Experience (IE).
The hype surrounding Virtual Reality (VR) and Augmented Reality (AR)—and several other bits of jargon that boil down to these two terms—promises us better IE's. So it is about time for a historical and technological survey of the VR/AR field, ably supplied in a new book by David Ewalt: Defying Reality: The Inside Story of the Virtual Reality Revolution.
The author digs deep into history for the forebears of motion pictures and TV, and the various efforts, with varying levels of success, to immerse the viewer in the experience. Considering how prone we are to see animals and scenery in clouds, pictures in half-melted butter on a piece of toast, and maps and other images on Holstein cattle, it doesn't take much to prepare us to be immersed in whatever is going on in front of us. If you look now at a video made for "standard" TV (called NTSC), it seems surprisingly blurry. Although the format had 525 scan lines (about 480 of them visible), horizontal resolution seldom exceeded 200 dots per line. A TV screen larger than 25" (diagonal; 20"x15") looked blocky. Yet we could easily become immersed in the action on the screen and "zone out" everything else, particularly in a darkened room.
Fast-forward through the two HD formats, which look good on bigger screens, up to 50" or so, and 4k, which comes close to the resolution that most people can see when the screen's horizontal dimension fills about 70° of their visual field (that means being about 50" from the center of an 80" screen). But our actual visual field is more than 200° wide by 150° high, and the zone of overlap for binocular vision is 120° wide. Thus, for a fully immersive visual experience, it takes a heck of an optical system!
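As a quick sanity check on that viewing-distance figure, here is a minimal back-of-envelope sketch in Python (assuming a 16:9 screen; the little helper function is mine, not anything from the book):

```python
import math

def viewing_distance(diag_in, angle_deg, aspect=(16, 9)):
    """Distance at which a screen of the given diagonal fills angle_deg of horizontal view."""
    w, h = aspect
    width = diag_in * w / math.hypot(w, h)               # horizontal size of the screen
    return (width / 2) / math.tan(math.radians(angle_deg / 2))

print(round(viewing_distance(80, 70), 1))   # ≈ 49.8 inches — the "about 50 inches" above
```

Run it with other sizes and angles to see how quickly a flat screen gives up: filling even 110° would mean sitting only about two feet from that same 80" screen.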
Every VR or AR system so far produced falls short of this, sometimes greatly. As described in Defying Reality, the Oculus Rift, first prototyped by Palmer Luckey nearly ten years ago (on sale for about three years now), comes the closest. It is almost exactly matched by the HTC Vive. Both claim a 110° horizontal field of view, which is close to our stereo-vision field of 120°.
The author, as a journalist, had good access to Luckey and other developers throughout this recent history. He is a good subject for testing the equipment, because he is extra-susceptible to motion sickness. He bought a purported VR system 20+ years ago, got sick, and remained a skeptic about VR's possibilities until the Rift came along. That and a few equally recent headsets are the first ones he's been able to use without nausea. As we say in the computing field, "The iron didn't match the application" until about 2008. It takes computing power equivalent to a 1990s supercomputer such as the Cray Y-MP to do all the "physics" needed to keep up with our visual senses. A decent gaming computer (in the $1,500 price range, at the moment) now has that level of compute power.
More recent developments include the Magic Leap, the first capable AR system, which lets you see the world around you (through tinted glass, not by re-displaying video from external cameras) with virtual elements superimposed. It uses some amazing optical tricks to make that work. So far, though, the field of view of the Magic Leap system is only about 40°x30°, so it has some specialty uses but is less capable of producing a full IE.
How soon will VR or AR be found in the average middle-class home? A lot depends on application development. It's a little like the year 1981, when the IBM PC was introduced. There wasn't much you could do with it at first, but within a couple of years, WordStar and Lotus 1-2-3 became the killer apps that pushed sales of the machines, even at prices exceeding $2,000 (or $3,000 if you got a 10-Mbyte hard disk). The $2,400 I spent on my first PC clone would come to $6,500 today. No killer app for VR or AR has arisen. When one appears, the field will attain some momentum.
David Ewalt is young enough to keep his finger on the pulse of the field for another decade or so, and write the next book, about how incredibly changed life is with such machinery in most homes, and perhaps in public spaces, trending toward a much better successor to the Google Glass (a great idea ahead of its time and far ahead of the tech needed to make it worthwhile). I'll keep a search agent out there for upcoming books by this author.
Tuesday, May 14, 2019
Quantum Mechanics brought up to date
kw: book reviews, nonfiction, quantum mechanics, overviews
I've just finished reading Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different, by Philip Ball. Prior to reading it, every book I've read that "explained" quantum mechanics used the knowledge and language of the 1940's. Philip Ball brings things up to date, and it is about time someone did so! The field of quantum (we can leave off the various modifying nouns for the nonce) has been the hottest of hot topics in physics for more than a century, so incredible numbers of experiments have been performed, and astronomical numbers of hypotheses have been proposed and defended and attacked and re-defended in tons and tons of published articles. So folks have learned a thing or two.
To cut to the chase: Werner Heisenberg got things rolling with his Uncertainty Principle, which says you can't specify both the position and the momentum of any object with accuracy more precise than a tiny "delta" related to the reduced Planck constant (ℏ); it yielded a Nobel Prize for him. Albert Einstein showed that light is quantized in his publication on the photoelectric effect, for which he received his Nobel Prize. In general terms, Heisenberg and Einstein agreed that a moving particle had a genuine position and momentum, but that no tool for determining these quantities could operate without changing them.
We can summarize the way this is explained in the Feynman Lectures on Physics thus:
A beam of electrons directed through two holes will produce an interference pattern on a screen. The electrons are behaving like waves. You want to determine which electron goes through which hole. Electrons can be 'probed' by shining short-wavelength light on them and recording where the scattered light came from. When you turn on the light beam, the interference pattern disappears, and is replaced by a smooth diffraction curve, as though there were one hole, not two. Weakening the beam doesn't help. So you try using a longer wavelength, then longer and longer wavelengths. Finally, you find a wavelength that will scatter off the electrons but allows the interference pattern to form. However, the wavelength of the light you are now using is greater than the spacing between the holes, and you cannot tell which hole any electron goes through.
Niels Bohr thought differently. I have not determined whether he focused on the ubiquitous presence of diffraction patterns in focused beams (of light, electrons, or whatever; I'll come back to this). Whatever the case, he concluded that the observation of an electron's position or its direction of motion didn't just reveal that quantity, but created it. His published material shows that he dwelt on the matter of measurement; that no quantum theory could be admitted that said anything more than what measurements could say: "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that it is the task of physics to find out how nature is. Physics concerns what we can say about nature." (p. 73) He went beyond this, stating that nothing really happens until it is observed.
This led Erwin Schrödinger to make fun of Bohr with his Cat story: Put a cat in a box, along with a device that has a 50% chance of killing the cat via poison gas in the next hour. Just before opening the box, can you say whether the cat is alive or dead? He asked if Bohr would say, "Both," until the box is opened. Most likely, Bohr would say that the wave function, which Schrödinger had proposed for calculating probabilities of quantum events, would "collapse" only then, and you would observe either a dead cat or a living cat (which you would whisk out of the box, closing it quickly in case the mechanism were about to go off and kill you both).
Bohr was a bully. He didn't just promote what became the first of many versions of his Copenhagen Interpretation, he evangelized it, in a totally obtrusive way. He used no violence (of the fisticuffs variety, anyway). He'd just talk you to death. It was like water torture, a torture to which many of that generation of physicists eventually submitted. Still, fewer than half the physicists of any generation, then or since, really believe it. Murray Gell-Mann, for one, thought he simply brainwashed people.
The thing is, there is a bit of support for this view, though it calls into question the definition of "observer". And I confess, when I first heard of the Cat story, I asked, "Isn't the cat an observer?" Anyway, anyone who has done technical photography, and especially astronomical photography, knows about diffraction. The smaller the hole (the camera diaphragm, for instance) through which a beam of light passes, the more that beam spreads out. It isn't much, in most cases, because visible light has such short wavelengths, between 400 and 700 billionths of a meter (nanometers or nm). However, the pixels in a modern digital camera are really small and closely spaced. In my Nikon D3200 camera, for example, there are about 24 million cells in a sensor that measures 12mm x 18mm, spaced 0.003mm apart. That is 3,000 nm. For technical photography I'd prefer that the light spread out no more than the distance from cell to cell, so that "every pixel counts". For a lens of focal length 100mm, we want the half-angle α to be less than arcsin(0.0015/100) = 0.00086° (half the pixel pitch over the focal length). For such calculations it is common to use a wavelength of 500 nm. We find that an aperture of 33.3mm, or larger, is needed to get full utility from the camera's sensor. That implies an f/stop of f/3. Make the aperture any smaller, and the picture gets successively fuzzier. If you have an SLR and can control the aperture, try taking a picture outdoors at f/22 or f/32 (many will allow that). It will be pretty fuzzy.
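For anyone who wants to check that arithmetic, here is a minimal Python sketch. It uses the same simple single-aperture criterion the paragraph implies, sin α ≈ λ/D (no 1.22 Airy factor), so treat it as a rough illustration rather than a rigorous optics calculation:

```python
import math

pixel_pitch_mm = 0.003      # cell spacing from the text (3,000 nm)
focal_mm = 100.0            # example lens focal length
wavelength_mm = 500e-6      # 500 nm, expressed in mm

# Allowed half-angle: half the pixel pitch over the focal length
alpha = math.asin((pixel_pitch_mm / 2) / focal_mm)
print(f"half-angle: {math.degrees(alpha):.5f} deg")   # ~0.00086 degrees

# Simple diffraction criterion: sin(alpha) ≈ wavelength / aperture
aperture_mm = wavelength_mm / math.sin(alpha)
print(f"aperture: {aperture_mm:.1f} mm, about f/{focal_mm / aperture_mm:.1f}")   # ~33.3 mm, f/3
```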
This is why astronomers like telescopes with a wide aperture. Not just because they are efficient "light buckets", but because the bigger the hole the light goes through, the less it spreads out, and the sharper an image you can obtain. Of course, a telescope with a focal ratio of f/3 or less is hard to build and expensive. But for a large instrument with a long focal length, those little pixels are very small "on the sky", allowing you to see finer detail in distant galaxies.
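To put a number on "very small on the sky", here is a small sketch using a hypothetical 2,000mm focal length (my example, not the book's) and the same 3 µm pixels:

```python
import math

ARCSEC_PER_RADIAN = 206265          # arcseconds in one radian
pixel_pitch_mm = 0.003              # 3 µm cells, as above
focal_mm = 2000.0                   # hypothetical long-focus telescope

plate_scale = ARCSEC_PER_RADIAN * pixel_pitch_mm / focal_mm
print(f"{plate_scale:.2f} arcsec per pixel")   # ≈ 0.31", fine enough to resolve detail in distant galaxies
```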
Now, if there is no sensor at the focus of the telescope, does the diffraction still occur? Here I attempt a détente between Bohr and Heisenberg. Heisenberg would say that the aperture, no matter its size, is "disturbing" the light beam from some distant object, spreading it out. The effect is never zero, no matter how big the aperture. This implies that the whole universe affects what happens to every bit of light, every photon, as it makes its way from source to wherever it is "going". But, whether we are watching what happens or not, it must still be happening. Bohr would have to admit that the "observer" is effectively the aperture, and by extension, that the universe itself is an observer, or is constituted of observers. Effectively, everything is an "observer" of everything!
On page 84, the author writes, "…the idea that the quantum measurement problem is a matter of 'disturbing' what is measured is exactly what the Copenhagen Interpretation denies." (Author's emphasis) I think this example shows that such a position is untenable. For my part, I think the Copenhagen Interpretation, as stated by Bohr and most of his followers, is simply silly. Photons go where they go and do what they do. So does anything else in motion. It is amazing that the environment, all of it, has an effect on where they go. But diffraction experiments show that it is so: Everything disturbs everything. However, for most of the universe out there, the disturbance doesn't seem to amount to much.
One quibble I have with the book is that it lacks a table of contents. The 19 chapters just have a 2-page heading with a snappy title. In the chapter titled "The everyday world is what quantum becomes at human scales" (the 11th), the environment is brought in, and the matter of "decoherence". Prior chapters have discussed all the things we find "weird", such as "entanglement" (for example, two particles that carry equal but opposite values of some characteristic such as polarization, even to the ends of the universe if they don't bump into anything). They get us ready for this chapter, the key chapter of the book.
Entanglement, just mentioned above, is one kind of Coherence between quantum entities. A laser beam and a Bose-Einstein condensate are, or express, coherent states among numerous entities. Coherence is thought to be fragile. It is actually quite robust, and even infectious. Particles that interact with their environment spread their quantum states around. The problem is, any instrument we might use to measure such quantum states is part of the environment, and so partakes of that state, becoming unable to detect it. That is what is meant by Decoherence. It expresses our inability to keep a pair of quanta, for example, in a given entangled state because they "want" to spread it around. The longer we want them to stay in coherence, the more it will cost. However, it is this phenomenon of decoherence that leads directly to the human-scale, everyday behavior of objects. The author concludes that the entire universe "observes" everything that goes on unless we take great pains to isolate something from the environment, so we can measure it. It is the error of Bohr and others in not recognizing the influence of the multitude of non-conscious "observers" known as the universe, that led to the silliness of the Copenhagen Interpretation.
Or, perhaps I ought to be more charitable to Niels Bohr. Maybe he was right, that things only happen when they are observed by a conscious entity. But diffraction shows that every photon, of whatever wavelength, that passes through the universe; every electron, neutrino, etc., etc., etc., is an observer, and produces the universe that we see, in which most quantum phenomena require costly apparatus to observe and maintain (except diffraction!). If the universe required conscious observers before anything could happen, that would imply that God made the universe only after there were observers to keep it functioning! And that's funny. It may even be true! The Bible, in the book of Job (38:4-7), mentions multitudes of angels that observed the "foundation of the Earth". The Copenhagen Interpretation agrees with Job.
A late chapter discusses quantum computing and how it is being over-hyped (so what else is new?). Near the end of the discussion, I read that all the ways of making a quantum computer discovered so far are special-purpose. One can search, one can encode or decode, and so forth. It appears that, at the moment at least, no general-purpose quantum computer can be produced that is analogous in breadth of function to our general-purpose digital computers. So don't sell your stock in Intel or Apple just yet!
I was much refreshed by this book. The author's point of view is still somewhat "Copenhagenish", but that's OK with me. If decoherence is what he says it is, then it really is true that what we see at our scale of a meter or two is just the consequence of quanta doing what they do, in all their multitudes, and spreading their characteristics about quite promiscuously, so that the universe just keeps keeping on, as it has since the Big Bang, at the very least.
Tuesday, May 07, 2019
The robots are coming...aren't they?
kw: book reviews, nonfiction, artificial intelligence, essay collections, essays
I almost titled this post "Artificial Intelligence Through a Human Lens". But that describes only a few of the essays in Possible Minds: 25 Ways of Looking at AI, edited by John Brockman.
When I was an active programmer (these days I'm called a "coder"), my colleagues and I often spoke of "putting the intelligence into the code". By "intelligence" we really meant the logic and formulas needed to "get the science done". During that 40-plus year career, and for the half-decade since, I've heard promises about the "electronic brains" we worked with. They were going to replace all of us; they would program themselves and outstrip mere humans; they would solve the world's ills; they would develop cures for all diseases and make us immortal. The spectrum of hopes, dreams and fears shared among professional programmers exactly matched the religious speculations of earlier generations! To many of my former colleagues, the computer was their god.
The term "artificial intelligence" may have first hit print soon after ENIAC became public, but it really took off after 1960, nearly sixty years ago. As far as I can tell, there has been about one article expressing fears for every two articles expressing hopes, over that entire time span. But in the popular press, and by that I mean primarily science fiction, the ratio has been about 4 or 5 dystopian visions for every utopian one…even with the popularity of Isaac Asimov's I, Robot books.
An aside: Asimov was famously neurotic. In his Robot fiction, the people are neurotic while the robots are deified. However, a great many of the short pieces explore all the ways a robot can go wrong, either because of a misapplication of one of the famous "Three Laws", or because of a dilemma that arises when two of the laws are in conflict. The scary old film Forbidden Planet has its own robot, Robby, who is temporarily driven to catatonia by being ordered to shoot a person.
While the pros have been dreaming of the wonderful things "thinking machines" could do, the SciFi community has put equal thought into what they might do, that we don't want them to do. Colossus is probably the best example (the book is much better, and infinitely more chilling, than the film).
While I thoroughly enjoyed reading Possible Minds, I finally realized that the essays tell us a lot about the writers, and very little about machine intelligences.
Some of the essays limn the history of the field, and the various "waves" (or fads!) that have swept through about once a decade. The current emphasis is "deep learning", which is really "neural nets" from about 30 years ago, made much deeper—many more layers of connectivity—and driven by hardware that runs several thousand times faster than the supercomputers of the 1980's. But a few authors do point out the difference between the binary response of electronic "neurons" and the nonlinear response of biological neurons. And at least a couple point out that natural intelligence is embodied, while hardly anyone is giving much attention to producing truly embodied AI (that is, robots of the Asimovian ilk).
Another aside: Before I learned the source of the name "Cray" for the Cray Computer Corporation, I surmised that it represented a contention that the machine had brain power equal to a crayfish. Only later did I learn of Seymour Cray, who had named his company for himself. But even today, I suspect that the hottest computers in existence still can't do things that the average crayfish does with ease.
What is a "deep learning" neural net? The biggest ones run on giant clusters of up to 100,000 Graphics Processing Units (GPU's), and simulate the on-off responses of billions of virtual "neurons". Each GPU models several thousand "neurons" and their connections. They are not universally connected, that would be a combinatorial explosion on steroids! But so far as I can discern, each virtual neuron has dozens to hundreds of connections. The setup needs many petabytes of active memory (not just disk). Such a machine, about the size of a Best Buy store, is touted as having the complexity of a human brain. Maybe (see below). The price tag runs in the billions, and the wattage in the millions.
More modest, and less costly, arrangements, perhaps 1/1000th the size, are pretty good at learning to recognize faces or animals; others perform optical character recognition (OCR). The Weather Channel app on my phone claims to be connected to Watson, the IBM system that won a Jeopardy match a few years back. The key fact about these systems is that nobody knows what they are really doing "in there", and nobody even knows how to design an oversight system to find out. If someone figures that out, the oversight will probably slow the systems to uselessness anyway. At least the 40-year-old fad of "expert systems" was able to explain its own reasoning, but proved incapable of much that might be useful.
To expand on the difference between virtual neurons in a deep learning system, and the real neurons found in all animals (except maybe sponges): A biological neuron is a two-part cell. One end includes the axon, a fiber that transmits a signal from the cell to several (or many, or very many) axon terminals, each connected to a different neuron. The other end has many dendrites, which reach toward cells sending signals to the neuron. An axon terminal connects to a dendrite, or directly to the cell body, of a neuron, through a synapse. That is one neuron. It receives signals from tens to hundreds to thousands of other neurons, and sends signals to a similar number of other neurons. Neurons in the cerebellum (which controls bodily functions) can have 100,000 synapses each! The "white matter" in the mid-brain consists of a dense spiderweb of axons that link almost everything to everything else in the brain. The spinal cord consists of long axons between the brain and the rest of the body.
A biological neuron responds (sends a pulse through the axon, or not) to time-coded pulses from one or more of the synapses that connect to it. The response is nonlinear, and can differ for different synapses. The activity of each neuron is very, very complex. It is very likely that, should we ever determine exactly how a particular type of neuron "works", its activity will be found to require one or more GPU's to simulate with any fidelity. Thus, a true deep learning system would need to have, not thousands or even millions of GPU's, but at least 100 billion, simulating the traffic among somewhere between 100 trillion and 1,000 trillion synapses.
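To make the scale concrete, here is a back-of-envelope sketch of that estimate. The one-GPU-per-neuron figure is the working assumption from the paragraph above; the per-neuron synapse counts are illustrative round numbers, not figures from the book:

```python
neurons = 86e9                     # total neurons in an average human brain (see breakdown below)
gpus_per_neuron = 1                # working assumption: at least one GPU to model each neuron

synapses_low = neurons * 1_000     # illustrative: ~1,000 synapses per neuron on average
synapses_high = neurons * 10_000   # illustrative: ~10,000 synapses per neuron on average

print(f"GPUs needed:      ~{neurons * gpus_per_neuron:.1e}")            # ~8.6e10, i.e. roughly 100 billion
print(f"Synapses modeled: {synapses_low:.1e} to {synapses_high:.1e}")   # ~1e14 to ~1e15, i.e. 100 to 1,000 trillion
```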
Folks, that is what it takes to produce a "superintelligent" AI. As an absolute minimum.
But let's look at that 100 billion figure. It has long been said that a human brain has 100 billion neurons. People think of this as the population of the "gray matter" that makes up the cerebral cortex, the folded outer shell of the brain, the stuff we think with. Here is the actual breakdown:
- Total average human brain: 86 billion neurons (and a similar number of "glia", that help the neurons in poorly-understood ways)
- Cerebral Cortex (~80% of brain's mass): 16 billion neurons
- Cerebellum (18% of brain's mass): 69 billion neurons
- Brain Stem (~2% of brain's mass): 1 billion neurons
Also, on average, a cerebellar neuron has 10-100 times as many synaptic connections as a cerebral neuron, even though it is much smaller. It takes most of the brain's "horsepower" to run the body! Your much-vaunted IQ really runs on about 19% of the neurons. About 1/3 of that is used for processing vision. (Another aside: This is why I claim that the human visual system is by far the most acute and capable; the human visual cortex exceeds the entire cerebral cortex of any other primate).
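The 19% figure falls straight out of the breakdown above; a two-line check:

```python
cortex, total = 16e9, 86e9
print(f"cortex share of neurons: {cortex / total:.1%}")       # ~18.6%, the "about 19%" that does the thinking
print(f"rough share doing vision: {cortex / 3 / total:.1%}")  # ~6%, if a third of the cortex handles vision
```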
I wish that the essayists who wrote in Possible Minds had said more about different possibilities for producing intelligent machinery. Simulating natural machinery is a pretty tall order. The hottest deep learning system is about one-millionth of what is needed to truly replicate human brain function, and by "about" I mean within a factor of 10 or 100. Considering that Moore's Law ended a decade ago, it may never be worthwhile. Even if Moore's Law restarted, a factor of one million in processing power would require about 40 years (a million is roughly 2^20, at one doubling every two years). Getting the power consumption down to a few hundred watts, rather than several million watts (today) and possibly tens of billions of watts in the middle-future, may not be achievable. I hold little hope for "quantum computers". Currently they don't even qualify as a fad, just a laboratory curiosity with coherence times in the range of milliseconds. Quantum coherence needs lots of shielding. Tons. We'll see.
So, robot overlords? At the moment, I put them right next to faster-than-light starships, on the fantasy shelf.