Can we (collectively) know everything, or will some things remain forever beyond our ken? The answer depends on how much there is to know. If knowledge is, quite literally, infinite, then given the universe as we understand it, there is no possibility that everything can be known. But there is another way to look at the question, one taken by Professor Marcus du Sautoy in The Great Unknown: Seven Journeys to the Frontiers of Science: are some things forever unknowable by their very nature? His "Seven Journeys" are studies of the cutting edge of seven scientific and socio-scientific disciplines; they are explorations of what, in each, can ultimately be known.
The seven disciplines are simply stated: Chaos (in the mathematical sense), Matter, Quantum Physics, The Universe, Time, Consciousness, and Infinity (again, in the mathematical sense). Six of these belong to the "hard" sciences; Consciousness is considered a "soft" problem by many, but in reality it may be the hardest of all!
I knew beforehand of the three great theoretical limits to the hard sciences that were gradually elucidated in the past century or so: Heisenberg Uncertainty, Schrödinger Undecidability, and Gödel Incompleteness. Each can be considered from two angles:
- Heisenberg Uncertainty (H.U.) is the principle that a particle's momentum and position together can be known only to a certain level of precision, and no further. It primarily shows up in the realm of particle physics. Thus, if you know with very great accuracy where a particle is or has been (for example, by letting it pass through a very small hole), you cannot be very certain of its momentum, in a vector sense. On a human scale, the diffraction of light expresses this. If you pass a beam of light through a very small hole, it fans out into a beam whose width is wider the smaller the hole is. This has practical consequences for astronomers: the large "hole" represented by the 94-inch (2.4 meter) aperture of the Hubble Space Telescope prevents the "Airy disk" of the image of a distant star from being smaller than about 0.04 arc seconds in visible light, and about 0.1 arc seconds in near-infrared light. The mirror of the James Webb Space Telescope will be 2.7 times larger, and its images will therefore be 2.7 times sharper. But no telescope can be big enough to produce images of "infinite" sharpness, for the aperture would need to be infinite. (A short numerical sketch of this diffraction limit appears just after this list.) All that aside, the two interpretations of H.U. are
- The presence of the aperture "disturbs" the path of the particle (in the case of astronomy, each photon), which can somehow "feel" it and thus gets a random sideways "kick".
- The Copenhagen Interpretation: the particle is described by a wave function (governed by an equation devised by Schrödinger) that has some value everywhere in space, but the particle's actual location is not determined until it is "observed". The definition of "observer" has never been satisfactorily stated.
- Schrödinger Undecidability, proposed originally as a joke about a cat that might be both dead and alive at the same moment, is the principle that the outcome of any single quantum process cannot be known until its effect has been observed. The "cat" story places a cat in a box with a flask of poison gas that has a 50% chance of being broken open in the next hour, triggered by some quantum event such as the radioactive decay of a radium nucleus. Near the end of the hour, you are asked, "Is the cat dead or alive?" You cannot decide. Again that pesky "observer" shows up. But nowhere have I read that the cat is also an observer! Nonetheless, the principle illustrates that, while we can know with good accuracy the average number of quantum events of a certain kind that will occur, we have no way to know whether "that nucleus over there" will be the next to go (a toy simulation of this point also appears after this list). Two interpretations are offered, parallel to those above: first, that the event sort of "decides itself"; and second, also part of the Copenhagen Interpretation, that only when an outcome has been observed can you know anything about the system and what it has done.
- Gödel Incompleteness is described in two theorems that together prove, mathematically, that in any given algorithmic system rich enough to do arithmetic, statements can be formulated, and can even be seen to be true, whose truth nevertheless cannot be proven within that algorithmic system. Most examples you'll find in the literature are self-referential things such as a card that reads on one side, "The statement on the other side of this card is true" and on the other, "The statement on the other side of this card is false." Such bogeys are models of ways of thinking about the Incompleteness theorems without really getting to their kernel. A great many of them were discussed in gory detail by Doug Hofstadter in his book Gödel, Escher, Bach: An Eternal Golden Braid, without getting to the crux of the matter: Is our own consciousness an algorithmic system? It seems we can always (given time) develop a larger system in which previously uncrackable conundrums become solvable. But then, of course, we find there are "new and improved" conundrums that the tools of the new system cannot handle. An example given in The Great Unknown is the physics of Newton being superseded and subsumed into the two theories of Relativity developed by Einstein. Again, there are two ways this principle is thought of. Firstly, that given time and ingenuity we will always be able to develop another level of "meta-system" and solve the old problems. But secondly, we get into the realm of the "hard-soft" problem of consciousness: Is consciousness algorithmic? For if it is, we will one day run out of meta-systems and can go no further.
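To put numbers on the telescope examples above, here is a minimal sketch of the Rayleigh diffraction limit, θ ≈ 1.22 λ/D. The formula is standard; the specific wavelengths (400 nm for visible light, 1.0 micron for near-infrared) are my own round choices for illustration.

```python
import math

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion: smallest resolvable angle for a circular aperture, in arc seconds."""
    theta_radians = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_radians) * 3600.0

# Hubble's 2.4 m mirror at a visible wavelength of 400 nm -> ~0.04 arc seconds
print(diffraction_limit_arcsec(400e-9, 2.4))
# Hubble at a near-infrared wavelength of 1.0 micron -> ~0.10 arc seconds
print(diffraction_limit_arcsec(1.0e-6, 2.4))
# Webb's 6.5 m mirror (about 2.7 times larger) at 1.0 micron -> ~0.04 arc seconds, ~2.7 times sharper
print(diffraction_limit_arcsec(1.0e-6, 6.5))
```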
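And here is the toy simulation promised in the cat item (my own construction, not from the book): with a million simulated nuclei, the half-life emerges with great accuracy, yet the decay time of any individual nucleus is a pure roll of the dice.

```python
import random

HALF_LIFE = 1.0        # arbitrary time units
N_NUCLEI = 1_000_000
LN2 = 0.6931472

random.seed(1)
# Each nucleus gets an exponentially distributed decay time with the same half-life.
decay_times = [random.expovariate(LN2 / HALF_LIFE) for _ in range(N_NUCLEI)]

# The ensemble is predictable: very nearly half survive past one half-life.
survivors = sum(t > HALF_LIFE for t in decay_times)
print(survivors / N_NUCLEI)   # ~0.50

# But "that nucleus over there" is anyone's guess until it actually goes.
print(decay_times[0], decay_times[1], decay_times[2])   # wildly different individual fates
```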
The only way we know to study consciousness is to study our own and that of a small number of animals that seem to be self-aware. Some would posit that we can create conscious artificial intelligence (AI), but this is questionable because all known methods in the sphere of AI studies are algorithmic, even if the algorithm is "hidden" inside a neural network. Since we do not yet know if natural intelligence (NI) is algorithmic, we cannot compare AI to NI in any meaningful sense!
One consequence of a possibly infinite universe is that everything we see around us might be duplicated an endless number of times, right down to the atomic and subatomic level. Thus there could be infinite numbers of the Polymath at Large typing this sentence, right now, in an infinite number of places, though very widely separated, to be sure (say, by a few trillions or quadrillions of light years, or perhaps much, much more). But, if I understand the proposition correctly, that is only possible if space is quantized. Quantization of space is based on the discovery of the Planck length and the Planck time about a century ago. They are the smallest meaningful units of length and time known. The Planck length is about 1.62×10⁻³⁵ m, or about 10⁻²⁰ the size of a proton. If space is quantized, it is most likely quantized on this scale. The Planck time is the time it takes a photon to travel a Planck length, or about 5.4×10⁻⁴⁴ sec.
If space is quantized with the space quantum being a Planck length, that means that positions can be represented by very large integers, and that those positions will be not just very precise, but exact. How large an integer? If we consider only the visible universe, which has a proper radius of about 75 billion light years, or 7.1×10²⁶ m, you'd need a decimal integer of 35+26+1 = 62 digits, or a binary word (for the computer) containing about 205 bits or 25.6 → 26 bytes.
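A quick check of that arithmetic, using the same radius and Planck-length figures quoted above:

```python
import math

PLANCK_LENGTH_M = 1.62e-35      # the space quantum assumed above
VISIBLE_RADIUS_M = 7.1e26       # ~75 billion light years, the radius figure used above

# How many space quanta fit along that radius?
planck_units = VISIBLE_RADIUS_M / PLANCK_LENGTH_M
print(f"{planck_units:.2e}")    # ~4.4e61

# Digits and bits needed to label any position along the radius exactly.
digits = math.ceil(math.log10(planck_units))
bits = math.ceil(math.log2(planck_units))
print(digits, bits, math.ceil(bits / 8))   # 62 digits, 205 bits, 26 bytes
```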
The trouble comes when you want to learn positions to this kind of precision/exactitude. To learn a dimension to an accuracy of one micron you need to use light (or another sort of particle such as an electron) with a wavelength of a micron, or smaller, to see it. To see the position of a silicon atom in a crystal, you need x-ray wavelengths smaller than 0.2 nm (200 pm), which comes to about 6,200 eV per photon. X-rays of that energy are a little on the mild side. But to "see" a proton, you are getting into the sub-femtometer range, which requires gamma-ray photons of around a billion eV (a GeV) each. Twenty orders of magnitude smaller yet, to be able to distinguish a Planck length, would require such energetic gamma rays (some tens of octillions of eV each) that two of them colliding would probably trigger a new Big Bang.
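All of those photon energies follow from the same relation, E = hc/λ. A minimal sketch of the scaling, using the length scales discussed above:

```python
# Photon energy from wavelength: E = hc / lambda, with hc ~ 1239.84 eV*nm.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

print(photon_energy_ev(1000.0))     # ~1.2 eV    : 1-micron light, good to micron accuracy
print(photon_energy_ev(0.2))        # ~6,200 eV  : the x-ray that resolves a silicon atom
print(photon_energy_ev(1e-6))       # ~1.2e9 eV  : ~1 femtometer, the scale of a proton
print(photon_energy_ev(1.62e-26))   # ~7.7e28 eV : a Planck-length photon
```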
By the way, photon energies of billions to trillions of eV would be needed to pin down the locations of the quarks inside nucleons, which is what would actually be needed to get a "Star Trek Transporter" to work, at both the scanning and receiving ends. Each such photon carries roughly the kinetic energy of a flying mosquito (trifling on a human scale, but enormous for a single photon), and you would need several per quark of your sample to transport. Maybe that's why the transporter hasn't been invented yet, and probably never could be…even if Dilithium and Rubindium get discovered one day.
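Just how prohibitive the total would be is easy to estimate. This is my own back-of-the-envelope sketch; the 70 kg body mass, three photons per quark, and roughly a trillion eV per photon are assumptions chosen to match the discussion above.

```python
# Rough energy budget for "scanning" every quark in a person, one photon at a time.
BODY_MASS_KG = 70.0             # assumed mass of the person being transported
NUCLEON_MASS_KG = 1.67e-27      # mass of a proton or neutron
QUARKS_PER_NUCLEON = 3
PHOTONS_PER_QUARK = 3           # "several per quark", per the text above
PHOTON_ENERGY_J = 1.6e-7        # ~1 trillion eV expressed in joules

nucleons = BODY_MASS_KG / NUCLEON_MASS_KG
photons = nucleons * QUARKS_PER_NUCLEON * PHOTONS_PER_QUARK
total_joules = photons * PHOTON_ENERGY_J
print(f"{total_joules:.1e} J")  # ~6e22 J: many decades' worth of the world's energy consumption
```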
Also, just by the bye, in a quantized universe there would be no truly irrational lengths. I am not sure how lengths "off axis" could be calculated (the diagonal of a square one quantum on a side "wants" to be √2 quanta long), but they would somehow have to be jiggered to the nearest quantum of space. There goes Cantor's Aleph-1 infinity!
OK, I got so wrapped up in all of this that I hardly reviewed the book. It's a great read, so get it and read it and go do your own rant about the limits of knowledge!