The noted blogger Fjordman is filing this report via Gates of Vienna.
For a complete Fjordman blogography, see The Fjordman Files. There is also a multi-index listing here.
This essay was originally published in five parts at various sites: Part 1, Part 2, Part 3, Part 4, and Part 5
The introduction of the telescope in Western Europe in the 1600s revolutionized astronomy, but it did not found the discipline. Astronomy had existed in some form for thousands of years prior to this, which makes it impossible to assign a specific date to its beginning. This is not the case with astrophysics. People in ancient and medieval times could speculate on the material makeup of stars and celestial bodies, but they had no way of verifying their ideas.
Anaxagoras of Clazomenae in the fifth century BC was the first Pre-Socratic philosopher to live in Athens. He championed many controversial theories, including his claim that the stars are fiery stones. He allegedly got this idea when a meteorite fell near Aegospotami. He assumed that it came from the Sun, and since it consisted largely of iron he concluded that the Sun was made of red-hot iron. Not a bad guess for his time, yet he had no way of proving his claims. Neither did Asian or Mesoamerican observers. Some sources indicate that Anaxagoras was charged with impiety, as most ancient Greeks still associated the heavenly bodies with the divine, but political considerations may have played a part in this process as well.
As late as 1835, Auguste Comte (1798-1857), the French philosopher often regarded as the founder of sociology, stated that humans would never be able to understand the composition of stars. He was soon proved wrong by two new techniques — spectroscopy and photography.
The English chemist William Hyde Wollaston (1766-1828) in 1800 formed a partnership with his countryman Smithson Tennant (1761-1815), whom he had befriended at Cambridge. Tennant discovered the elements iridium and osmium, extracted from platinum ores, in 1803. The platinum group metals — platinum, ruthenium, rhodium, palladium, osmium and iridium — have similar chemical properties. Osmium (Os, atomic number 76) is the densest naturally occurring element, with a density of about 22.6 kg/dm³, twice that of lead at 11.3 kg/dm³.
Platinum (Pt, atomic number 78) and its dense sister metals are very rare in the Earth’s crust. Platinum had been introduced to Europe from South American mines in the 1740s by men such as the Spanish explorer Antonio de Ulloa (1716-1795). Wollaston was the first person to produce pure, malleable platinum and became wealthy from supplying Britain with the precious metal. The Wollaston Medal, granted by the Geological Society of London, is named after him.
The German chemist Martin Klaproth (1743-1817) was born in Wernigerode in Prussian Saxony and worked as an apothecary for years before continuing his career as a professor of chemistry at the newly established University of Berlin. He discovered uranium as well as zirconium (Zr, a.n. 40) in 1789. Uranium (symbol U, atomic number 92) was named for the planet Uranus, which had been discovered just prior to this. Wollaston detected the elements palladium in 1803 and rhodium in 1804. He named palladium (Pd, a.n. 46) after the asteroid Pallas, which had been discovered a year earlier by the German astronomer Olbers and was initially believed to be a planet, until the full extent of the asteroid belt had been grasped.
The birth of spectroscopy, the systematic study of the interaction of light with matter, followed shortly after the creation of scientific chemistry in Europe. William Hyde Wollaston in 1802 noted some dark features in the solar spectrum, but he did not follow up on this observation. In 1814, the German physicist Joseph von Fraunhofer (1787-1826) independently discovered these dark features (absorption lines) in the optical spectrum of the Sun, which are now known as Fraunhofer lines. He studied them carefully and noted that they exist in the spectra of Venus and the stars, too, which meant that they had to be a property of the light itself.
In the 1780s a Swiss artisan, Pierre-Louis Guinand (1748-1824), began experimenting with the manufacture of flint glass, and in 1805 managed to produce a nearly flawless material. He passed on this secret to Fraunhofer, who worked in the secularized Benedictine monastery of Benediktbeuern. Fraunhofer improved upon Guinand’s techniques and began a more systematic study of the mysterious spectral lines. To the stronger ones he assigned the letters A to Z, a system still in use today. Yet it was left to two other German scholars to prove the full significance of these unique lines, which correspond to specific chemical elements.
Robert Bunsen (1811-1899) is often associated with the Bunsen burner, a device found in many chemistry laboratories around the world, but in truth he made a few alterations to an existing design rather than inventing it. He was born in Göttingen, where his father was a professor of languages. He obtained his doctorate in chemistry at the University of Göttingen and spent years traveling through Western Europe. He eventually settled in the scenic university town of Heidelberg in south-west Germany, where he taught from 1852 until his retirement. In the late 1850s, Bunsen began a new and very fruitful collaboration there with the physicist Kirchhoff.
Gustav Kirchhoff (1824-1887), the son of a lawyer, was born and educated in Königsberg, Prussia, on the Baltic Sea, now the Russian city of Kaliningrad. He graduated from Albertus University there in 1847 and relocated to the rapidly growing city of Berlin. After 1850 he became acquainted with Bunsen, who urged him to follow him to Heidelberg. Kirchhoff in 1859 coined the term blackbody to describe a hypothetical perfect radiator that absorbs all incident light and, when maintained at a constant temperature, re-emits radiation in a spectrum that depends only on that temperature. His findings proved instrumental to Max Planck’s quantum theory of electromagnetic radiation from 1900. He is above all remembered for his collaboration with Bunsen around 1860.
They demonstrated in 1859 that all pure substances display a characteristic spectrum. Together, Bunsen and Kirchhoff assembled the flame, prism, lenses and viewing tubes necessary to produce the world’s first spectrometer. They identified the alkali metals cesium (chemical symbol Cs, atomic number 55) and rubidium (Rb, a.n. 37) in 1860-61, showing in each case that these new elements produced line spectra that were unique to them, a chemical “fingerprint.” The dark lines in the solar spectrum arise from the selective absorption of light by the gases of various elements above the Sun’s surface, each line corresponding to the transition of an electron between specific energy levels in an atom. In the first qualitative chemical analysis of a celestial body, Kirchhoff in the 1860s identified 16 different elements from the Sun’s spectrum and compared these to laboratory spectra of known elements here on Earth.
The great physicist George Gabriel Stokes (1819-1903) attended school in Dublin, Ireland, but later moved to England and Cambridge University. He worked out a reasonably correct explanation of the Fraunhofer lines in the solar spectrum, but he did not publish it or develop it further. According to the Molecular Expressions website, “Throughout his career, George Stokes emphasized the importance of experimentation and problem solving, rather than focusing solely on pure mathematics. His practical approach served him well and he made important advances in several fields, most notably hydrodynamics and optics. Stokes coined the term fluorescence, discovered that fluorescence can be induced in certain substances by stimulation with ultraviolet light, and formulated Stokes Law in 1852. Sometimes referred to as Stokes shift, the law holds that the wavelength of fluorescent light is always greater than the wavelength of the exciting light. An advocate of the wave theory of light, Stokes was one of the prominent nineteenth century scientists that believed in the concept of an ether permeating space, which he supposed was necessary for light waves to travel.”
Fluorescence microscopy has become an important tool in cellular biology. The Polish physicist Alexander Jablonski (1898-1980) at the University of Warsaw was a pioneer in fluorescence spectroscopy. Stokes was a formative influence on subsequent generations of Cambridge men and was one of the great names of nineteenth-century mathematical physics, a field that also included Michael Faraday, James Joule, Siméon Poisson, Augustin Cauchy and Joseph Fourier. The English mathematician George Green (1793-1841), known for Green’s Theorem, inspired Lord Kelvin and devised an early theory of electricity and magnetism that formed some of the basis for the work of scientists like James Clerk Maxwell.
Astrophysics as a scientific discipline was born in mid-nineteenth century Europe, and only there; it could not have happened earlier as the crucial combination of chemical and optical knowledge, telescopes and photography did not exist before. In case we forget what a huge step this was, let us recall that as late as the sixteenth century AD in Mesoamerica, the region with the most sophisticated American astronomical traditions, thousands of people had their hearts ripped out every year to please the gods and ensure that the Sun would keep on shining.
Merely three centuries later, European scholars could empirically study the composition of the Sun and verify that it was essentially made of the same stuff as the Earth, only much hotter. Within less than a century after that, European and Western scholars would proceed to explain how the Sun and the stars generate their energy and why they shine. By any yardstick, this represents one of the greatest triumphs of the human mind in history.
Photography was born in France in the 1820s with Joseph-Nicéphore Niépce, who teamed up with the painter Louis Daguerre. As Eva Weber writes in her book Pioneers of Photography, “In March 1839 Daguerre personally demonstrated his process to inventor and painter Samuel Morse (1791-1872) who enthusiastically returned to New York to open a studio with John Draper (1811-1882), a British-born professor and doctor. Draper took the first photograph of the moon in March 1840 (a feat to be repeated by Boston’s John Adams Whipple in 1852), as well as the earliest surviving portrait, of his sister Dorothy Catherine Draper.”
The American physician Henry Draper (1837-1882), son of John Draper, was a pioneer in astrophotography. In 1857 he visited Lord Rosse, or William Parsons (1800-1867), famous for his construction in Ireland in the 1840s of the most powerful reflecting telescope of the Victorian period, frequently called the Leviathan. It remained the world’s largest telescope until the early twentieth century, but was often shut down due to the wet Irish weather. Most major ground-based observatories since then have been built in the clear air of remote mountaintops, from the peaks of Hawaii to the dry mountains of Chile in South America.
Henry Draper eventually became a passionate amateur astronomer. After reading about the work on star spectra carried out by William Huggins and Joseph Lockyer he built his own spectrograph. He died at a young age, but his widow established the Henry Draper Memorial. This funded the Henry Draper Catalog, a massive photographic stellar spectrum survey.
The first successful daguerreotype photograph of the Sun was made in 1845 by the French physicists Louis Fizeau and Léon Foucault, who are mainly remembered for their accurate measurements of the speed of light. Warren de la Rue (1815-1889), a British-born astronomer, astrophotographer and chemist educated in Paris, designed a special telescope dubbed the photoheliograph. On an expedition to Spain during the total solar eclipse of 1860, his images demonstrated clearly that the corona is a phenomenon associated with the Sun.
The technique of spectral analysis caught on after the work of Robert Bunsen and Gustav Kirchhoff. One of those who quickly took it up was the great English chemist William Crookes (1832-1919), who discovered the metal thallium (Tl, atomic number 81) in 1862. The Englishman William Huggins (1824-1910) built a private observatory in South London and tried to apply this method to other stars. Through spectroscopic methods he showed that they are composed of the same elements as the Sun and the Earth. He collaborated with his friend William Allen Miller (1817-1870), a professor of chemistry at King’s College, London.
According to his Bruce Medal biography, “Huggins was one of the wealthy British ‘amateurs’ who contributed so much to 19th century science. At age 30 he sold the family business and built a private observatory at Tulse Hill, five miles outside London. After G.R. Kirchhoff and R. Bunsen’s 1859 discovery that spectral emission and absorption lines could reveal the composition of the source, Huggins took chemicals and batteries into the observatory to compare laboratory spectra with those of stars. First visually and then photographically he explored the spectra of stars, nebulae, and comets. He was the first to show that some nebulae, including the great nebula in Orion, have pure emission spectra and thus must be truly gaseous, while others, such as that in Andromeda, yield spectra characteristic of stars. He was also the first to attempt to measure the radial velocity of a star. After 1875 his observations were made jointly with his talented wife, the former Margaret Lindsay Murray.”
The Austrian mathematical physicist Johann Christian Doppler (1803-1853) was born in Salzburg, the son of a stonemason, and studied in Vienna. In 1842 he proposed that the observed frequency of light and sound waves depends upon how fast the source and observer are moving relative to each other, a phenomenon called the Doppler Effect. For instance, most of us have heard how the sound of a car or a train changes in frequency as it moves toward us and then away from us. A more correct explanation of the principle involved was published by the French physicist Armand-Hippolyte-Louis Fizeau in 1848. The Doppler Effect has proved to be an invaluable tool for astronomical research. Most notably, the motions of galaxies detected in this manner led to the conclusion that the universe is expanding.
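To make the principle concrete, here is a minimal sketch in Python of how a radial velocity is derived from a Doppler-shifted spectral line; the wavelength values are illustrative assumptions, not historical measurements.

```python
# Minimal sketch: estimating a radial velocity from a Doppler-shifted spectral line.
# The wavelengths below are illustrative assumptions, not historical measurements.

C = 299_792.458  # speed of light in km/s

def radial_velocity(rest_wavelength_nm, observed_wavelength_nm):
    """Non-relativistic Doppler formula: v = c * (lambda_obs - lambda_rest) / lambda_rest.
    A positive result means the source is receding (redshift), negative means approaching."""
    return C * (observed_wavelength_nm - rest_wavelength_nm) / rest_wavelength_nm

# The hydrogen-alpha line rests at 656.28 nm; suppose a star shows it at 656.35 nm.
v = radial_velocity(656.28, 656.35)
print(f"Radial velocity: {v:.0f} km/s (receding)")  # about 32 km/s
```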
In 1864, probably as a result of discussions with his countryman William Huggins, the astronomer Joseph Norman Lockyer (1836-1920), originally from the town of Rugby in the West Midlands of England, obtained a spectroscope. In 1868 he was able to confirm that bright emission lines from prominences of the Sun could be seen at times other than during total solar eclipses. The same technique had been demonstrated by the French astronomer Pierre Janssen (1824-1907). Janssen was born in Paris, where he studied mathematics and physics, and took part in a long series of solar eclipse expeditions around the world. Lockyer and Janssen are credited with independently discovering helium (chemical symbol He, atomic number 2) in 1868 through studies of the solar spectrum. Helium, from the Greek Helios for the Sun, remains the only element so far discovered in space before being identified on the Earth. Lockyer also founded the leading British scientific journal Nature in 1869.
While the center of astronomy was still in Western Europe, Europeans overseas were starting to leave their mark, above all in North America. The US physicist Henry Rowland (1848-1901) did notable work in spectroscopy, and the American astronomer Vesto Slipher (1875-1969) was the first person to measure the enormous radial velocities of spiral nebulae. As the excellent reference book The Oxford Guide to the History of Physics and Astronomy states:
“In 1868, however, Huggins found what appeared to be a slight shift for a hydrogen line in the spectrum of the bright star Sirius, and by 1872 he had more conclusive evidence of the motion of Sirius and several other stars. Early in the twentieth century Vesto M. Slipher at the Lowell Observatory in Arizona measured Doppler shifts in spectra of faint spiral nebulae, whose receding motions revealed the expansion of the universe. Instrumental limitations prevented Huggins from extending his spectroscopic investigations to other galaxies. Astronomical entrepreneurship in America’s gilded age saw the construction of new and larger instruments and a shift of the center of astronomical spectroscopic research from England to the United States. Also, a scientific education became necessary for astronomers, as astrophysics predominated and the concerns of professional researchers and amateurs like Huggins diverged. George Ellery Hale, a leader in founding the Astrophysical Journal in 1895, the American Astronomical and Astrophysical Society in 1899, the Mount Wilson Observatory in 1904, and the International Astronomical Union in 1919, was a prototype of the high-pressure, heavy-hardware, big-spending, team-organized scientific entrepreneur.”
George Ellery Hale (1868-1938), a university-educated solar astronomer born in Chicago, represented the dawn of a new age, not only because he was American and the United States would emerge as a leading center of astronomical research (although scientifically and technologically speaking it was a direct extension of the European tradition), but at least as much because he personified the increasing professionalization of science and astronomy.
The telescope Galileo used in the early 1600s, although revolutionary at the time, was a simple refractor. The sheer weight of the glass lens makes a refracting telescope larger than one meter in diameter impractical. The introduction of the reflecting mirror telescope by Newton in 1669 paved the way for virtually all modern ones. Hale built the largest telescope in the world no fewer than four times: once at Yerkes Observatory, then the 60- and 100-inch reflectors at Mt. Wilson and the 200-inch reflector at Mt. Palomar. As an undergraduate student at the Massachusetts Institute of Technology, Hale co-invented the spectroheliograph, an instrument to photograph outbursts of gas at the edge of the Sun, and discovered that sunspots were regions of lower temperatures but strong magnetic fields. He hired Harlow Shapley and Edwin Hubble and encouraged research in astrophysics and galactic astronomy.
There is still room for non-professional astronomers; good amateurs can occasionally spot new comets before the professionals do. Yet it is a safe bet that never again will we have a situation like that of the late eighteenth century, when William Herschel, a musician by profession, was one of the leading astronomers of his age. From a world of a few enlightened and often wealthy gentlemen in the eighteenth century would emerge a world of trained scientists in the twentieth; the nineteenth century was a transitional period. As the example of William Huggins demonstrates, amateur astronomers were to enjoy a final golden age.
The English entrepreneur William Lassell (1799-1880) had made good money from brewing beer and used some of it to indulge his interest in astronomy, employing excellent instruments of his own making at his observatory near the city of Liverpool. Liverpool was the fastest-growing port in Europe, and the world’s first steam-hauled passenger railway ran from Liverpool to Manchester in 1830. The Industrial Revolution, in which Britain played the leading role, was a golden age for the beer-brewing industry. The combination of beer and science is not unique; the seventeenth-century Polish astronomer Hevelius came from a brewing family, and the English brewer James Joule made serious studies of heat and the conservation of energy.
In 1846 William Lassell discovered Triton, the largest moon of Neptune, shortly after the planet itself had been mathematically predicted by the French mathematician Urbain Le Verrier and spotted by the German astronomer Johann Gottfried Galle. Lassell later discovered two moons of Uranus, Ariel and Umbriel, and spotted a satellite of Saturn, Hyperion, independently of the American father-and-son team of William Bond (1789-1859) and George Bond (1825-1865). William Bond was a clockmaker in Boston who became a passionate amateur astronomer. In 1848, with his son George, he discovered Hyperion. They were among the first in the USA to use Daguerre’s photographic process for astrophotography.
The US astronomer Asaph Hall (1829-1907) discovered the two tiny moons of Mars, Deimos and Phobos, in 1877 and calculated their orbits. While only a few kilometers in diameter, the moons could be seen by viewers using smaller telescopes, which means their discovery owed as much to Hall’s observational skills as to his equipment. Asaph Hall was the son of a clockmaker and worked for a while with George Bond at the Harvard College Observatory.
Photos taken by the European Space Agency’s Mars Express spacecraft of Phobos, the larger of the two tiny, potato-shaped Martian moons, have shown potential landing sites for Russia’s unmanned Phobos-Grunt mission, which is designed to bring samples of the Martian moon back to the Earth after 2012. The Russian Space Agency intends to send a Chinese Mars orbiter, Yinghuo-1, along with the mission. It will be China’s first interplanetary probe. China in 2003 became only the third nation to achieve human spaceflight, after the Soviet Union/Russia and the United States, and has plans for manned missions to the Moon.
The largely self-taught American astronomer Edward Barnard (1857-1923), originally a poverty-stricken photographer, made his own telescope and after some notable observations joined the initial staff of the Lick Observatory in 1887. He introduced wide-field photographic methods to study the structure of the Milky Way. The faint Barnard’s Star, which he discovered in 1916, has the largest proper motion of any known star. At a distance of about six light-years it is the closest neighboring star to the Sun after the members of the Alpha Centauri system, around 4.4 light-years away. In 1892 he observed Amalthea, the first Jovian moon to be discovered since the four largest ones described by Galileo Galilei in 1610. The astronomer Charles Perrine (1867-1951), born in the USA but based in Argentina for many years, discovered two additional moons of Jupiter: Himalia in 1904 and Elara in 1905.
The Swiss natural philosopher Pierre Prévost (1751-1839), a clergyman’s son from Geneva who served for a time as a professor in Berlin, showed in 1791 that all bodies radiate heat, regardless of their temperature.
Early estimates of stellar surface temperatures gave results that were far too high. More accurate values were obtained by using the radiation laws of the Slovenian physicist Joseph Stefan from 1879 and the German physicist Wilhelm Wien from 1896. Stefan calculated the temperature of the Sun’s surface to be about 5400 °C, the most realistic value obtained up to that time. The Stefan-Boltzmann Law, named after Stefan and his Austrian student Ludwig Boltzmann, states that the amount of radiation given off by a blackbody is proportional to the fourth power of its temperature as measured in kelvins.
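As a simple illustration of how the law connects luminosity, size and temperature, the sketch below recovers a solar surface temperature close to Stefan’s figure from the Sun’s luminosity and radius (a rough example using modern values, not a historical calculation).

```python
import math

# Stefan-Boltzmann Law: the power radiated per unit surface area of a blackbody
# equals sigma * T**4, where T is the absolute temperature in kelvins.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.846e26   # solar luminosity, W (modern value)
R_SUN = 6.955e8    # solar radius, m

surface_area = 4 * math.pi * R_SUN**2
temperature = (L_SUN / (surface_area * SIGMA)) ** 0.25
print(f"Effective solar surface temperature: about {temperature:.0f} K")  # roughly 5800 K
```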
The surface temperature is not necessarily dependent upon the size of the star (the core temperature is a different matter). You can easily find red supergiants with many times the mass of the Sun but a surface temperature of less than 4000 K, compared to the Sun’s roughly 5800 K. The surface temperature of a bright red star is approximately 3500 K, whereas blue stars can have surface temperatures of tens of thousands of kelvins. Dark red stars have surface temperatures of about 2500 K. Blue stars are extremely hot and bright and live short lives by astronomical standards. The bright star Rigel in the constellation of Orion is a blue supergiant of an estimated 20 solar masses, shining with tens of thousands of times the Sun’s luminosity.
If you heat an iron rod with an intense flame it will first appear “red hot.” Heated a little more it will seem orange and feel hotter and then yellow after that. After more heating, the rod will appear white-hot and brighter still. If it doesn’t melt, further heating will make the rod appear blue and even brighter and progressively hotter. The same basic principle applies to stars, too.
Stars, molten rock and iron bars are approximations of an important class of objects that physicists call blackbodies. An ideal blackbody absorbs all of the electromagnetic radiation that strikes it. Incoming radiation heats up the body, which then reemits the energy it has absorbed, but with different intensities at each wavelength than it received. This pattern of radiation emitted by blackbodies is independent of their chemical compositions. Authors Neil F. Comins and William J. Kaufmann III explain in Discovering the Universe, Eighth Edition:
“Ideal blackbodies have smooth blackbody curves, whereas objects that approximate blackbodies, such as the Sun, have more jagged curves whose variations from the ideal blackbody are caused by other physics. The total amount of radiation emitted by a blackbody at each wavelength depends only on the object’s temperature and how much surface area it has. The bigger it is, the brighter it is at all wavelengths. However, the relative amounts of different wavelengths (for example, the intensity of light at 750 nm compared to the intensity at 425 nm) depend on just the body’s temperature. So, by examining the relative intensities of an object’s blackbody curve, we are able to determine its temperature, regardless of how big or how far away it is. This is analogous to how a thermometer tells your temperature no matter how big you are.”
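Wien’s radiation law, mentioned above, yields a simple rule of thumb known as Wien’s displacement law: the wavelength at which a blackbody radiates most strongly is inversely proportional to its temperature. A small illustrative sketch, using round star temperatures like those quoted above:

```python
# Wien's displacement law: lambda_peak = b / T, where b is about 2.898e-3 m*K.
# Round illustrative temperatures for a dark red star, the Sun, and a hot blue star.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

for name, temp_k in [("dark red star", 2500), ("the Sun", 5800), ("hot blue star", 25000)]:
    peak_nm = WIEN_B / temp_k * 1e9  # convert meters to nanometers
    print(f"{name}: peak emission near {peak_nm:.0f} nm")
# Output: roughly 1160 nm (infrared), 500 nm (visible) and 116 nm (ultraviolet),
# which is why cool stars look red and very hot stars look blue-white.
```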
Photography made it possible to preserve images of the spectra of stars. The Catholic priest and astrophysicist Pietro Angelo Secchi (1818-1878), born in the city of Reggio Emilia in northern Italy, is considered the discoverer of the principle of stellar classification. He visited England and the USA and became professor of astronomy in Rome in 1849. After the introduction of spectrum analysis by Kirchhoff and Bunsen, Secchi was among the first to investigate the spectra of Uranus and Neptune. On an expedition to Spain to observe the total solar eclipse of 1860 he “definitively established by photographic records that the corona and the prominences rising from the chromosphere (i.e. the red protuberances around the edge of the eclipsed disc of the sun) were real features of the sun itself,” not optical illusions or illuminated mountains on the Moon. In the 1860s he began collecting the spectra of stars and classified them according to spectral characteristics, although his particular system didn’t last.
The Harvard system based on the star’s surface temperature was developed from the 1880s onward. Several of its creators were women. The US astronomer Edward Pickering (1846-1919) at the Harvard College Observatory hired female assistants, among them the Scottish-born Williamina Fleming (1857-1911) and especially Annie Jump Cannon (1863-1941) and Antonia Maury (1866-1952) from the USA, to classify the prism spectra of hundreds of thousands of stars. Cannon developed a classification system based on temperature where stars, from hot to cool, were of ten spectral types — O, B, A, F, G, K, M, R, N, S — that astronomers accepted for world-wide use in 1922. Maury developed a different system.
Edward Pickering and the German astronomer Hermann Karl Vogel (1841-1907) independently discovered spectroscopic binaries — double-stars that are too close to be detected through direct observation, but which through the analysis of their light have been found to be two stars revolving around one another. Vogel was born in Leipzig in what was then the Kingdom of Saxony, and died in Potsdam in the unified German Empire. He studied astronomy at the Universities of Leipzig and Jena, joined the staff of the Potsdam Astrophysical Observatory and served as its director from 1882 to 1907. Vogel made detailed tables of the solar spectrum, attempted spectral classification of stars and also made photographic measurement of Doppler shifts to determine the radial velocities of stars.
Another system was worked out in the 1940s by the American astronomers William Wilson Morgan (1906-1994) and Philip Keenan (1908-2000), aided by Edith Kellman. They introduced stellar luminosity classes. For the first time, astronomers could determine the luminosity of stars directly by analyzing their spectra, their “stellar fingerprints.” This is known as the MK (after Morgan and Keenan) or Yerkes spectral classification system after Yerkes Observatory, the astronomical research center of the University of Chicago. Morgan’s observational work helped to demonstrate the existence of spiral arms in the Milky Way.
Maury’s classifications were not preferred by Pickering, but the Danish astronomer Ejnar Hertzsprung (1873-1967) realized their value. As stated in his Bruce Medal profile, “Hertzsprung studied chemical engineering in Copenhagen, worked as a chemist in St. Petersburg, and studied photochemistry in Leipzig before returning to Denmark in 1901 to become an independent astronomer. In 1909 he was invited to Göttingen to work with Karl Schwarzschild, whom he accompanied to the Potsdam Astrophysical Observatory later that year. From 1919-44 he worked at the Leiden Observatory in the Netherlands, the last nine years as director. He then retired to Denmark but continued measuring plates into his nineties. He is best known for his discovery that the variations in the widths of stellar lines discovered by Antonia Maury reveal that some stars (giants) are of much lower density than others (main sequence or ‘dwarfs’) and for publishing the first color-magnitude diagrams.”
The American astronomer Henry Norris Russell (1877-1957) spent six decades at Princeton University as a student, professor and observatory director. From 1921 on he made annual visits to the Mt. Wilson Observatory. “He measured parallaxes in Cambridge, England, with A.R. Hinks and found a correlation between spectral types and absolute magnitudes of stars — the Hertzsprung-Russell diagram. He popularized the distinction between giant stars and ‘dwarfs’ while developing an early theory of stellar evolution. With his student, Harlow Shapley, he analyzed light from eclipsing binary stars to determine stellar masses. Later he and his assistant, Charlotte E. Moore Sitterly, determined masses of thousands of binary stars using statistical methods. With Walter S. Adams Russell applied Meghnad Saha’s theory of ionization to stellar atmospheres and determined elemental abundances, confirming Cecilia Payne-Gaposchkin’s discovery that the stars are composed mostly of hydrogen. Russell applied the Bohr theory of the atom to atomic spectra and with Harvard physicist F.A. Saunders made an important contribution to atomic physics, Russell-Saunders coupling (also known as LS coupling).”
Hertzsprung had discovered the relationship between the brightness of a star and its color, but published his findings in a photographic journal, where they went largely unnoticed. Russell made essentially the same discovery, but published it in 1913 in a journal read by astronomers and presented the findings in a graph, which made them easier to understand. The Hertzsprung-Russell diagram helped give astronomers their first insight into the lifecycle of stars. It can be regarded as the Periodic Table of stars. The Indian astrophysicist Meghnad Saha (1893-1956) provided a theoretical basis for relating the spectral classes to stellar surface temperatures.
Changes in the structure of stars are reflected in changes in temperatures, sizes and luminosities. The smallest ones, red dwarfs, may contain less than 10% of the mass of the Sun and emit only 0.01% as much energy. They are by far the most numerous type of star and have lifespans of tens of billions of years. By contrast, the rare hypergiants may exceed 100 solar masses and emit hundreds of thousands of times more energy than the Sun, but they also have lifetimes of just a few million years. Stars that are actively fusing hydrogen into helium in their cores, which means most of them, are called main sequence stars. These are in hydrostatic equilibrium, which means that the outward radiation pressure from the fusion process is balanced by the inward gravitational force. When the hydrogen fuel runs out, the core contracts and heats up. The star then brightens and expands, becoming a red giant.
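In symbols, hydrostatic equilibrium is usually written as a balance, at every radius r inside the star, between the outward pressure gradient and the inward pull of gravity (a standard textbook form, added here for illustration):

```latex
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}
```

Here P is the pressure, ρ(r) the local density and m(r) the mass enclosed within radius r. When fusion falters, the pressure gradient can no longer balance gravity and the core contracts, as described above.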
The Eddington Limit, named after the English astrophysicist Arthur Eddington, is the point at which the luminosity emitted by a star is so extreme that it starts blowing off its outer layers. It was thought to be reached in stars of around 120-140 solar masses. In the early stages of the universe, extremely massive stars containing hundreds of solar masses may have been able to form because they contained practically no heavy elements, just hydrogen and helium. Wolf-Rayet stars are very hot, luminous and massive objects that shed a significant fraction of their mass each year through powerful stellar winds. They are named after the French astronomers Charles Wolf (1827-1918) and Georges Rayet (1839-1906), who discovered their existence in 1867.
It is highly likely that there is an upper limit to how large stars can become, but we do not yet know precisely how big this limit is. It was once believed to be around 150 solar masses for stars existing today, but astronomers have come across one at more than 300 solar masses. The star R136a1 is the most massive one ever observed and also has the highest luminosity of any star found to date — almost 10 million times greater than the Sun. It was discovered in 2010 inside two young clusters of stars by a European research team led by Paul Crowther, professor of astrophysics at the University of Sheffield in England. The theoretical models we currently operate with cannot fully explain the evolution of such extremely massive objects.
A common, medium-sized star like the Sun will remain on the main sequence for roughly 10 billion years. The Sun is currently in the middle of its lifespan, as it formed 4.57 billion years ago and in about 5 billion years will turn into a red giant. The Sun today emits an estimated 30% more energy than it did when it was born. The so-called faint young Sun paradox, proposed by Carl Sagan and his colleague George Mullen in the USA in 1972, refers to the fact that the Earth apparently had liquid oceans, not frozen ones, for much of the first half of its existence, despite the fact that the Sun probably was only 70 percent as bright in its youth as it is now. Scientists have not yet reached an agreement on why this was the case.
The magnetic field of the Sun can be probed because in the presence of a magnetic field the energy levels of atoms and ions are split into more than one level, which causes spectral transition lines to be split as well. This is called the Zeeman Effect after the Dutch physicist Pieter Zeeman (1865-1943). Spectroscopic studies of the Sun by the American astronomer Walter Adams (1876-1956) with Hale and others at the Mt. Wilson Solar Observatory led to the insight that sunspots are regions of lower temperatures and stronger magnetic fields than their surroundings. The spectroheliograph for studying the Sun was developed independently by George Ellery Hale and by the talented French astrophysicist Henri-Alexandre Deslandres (1853-1948), working at the Paris Observatory around 1890. Sunspots appear dark to us because they are cooler than other solar regions, but in reality they are red or orange in color.
Richard Carrington and Edward Sabine in Britain in the 1800s had suggested a possible link between the occurrences of solar flares and observations of auroras and geomagnetic storms on the Earth. It takes a day or two for the charged particles of the solar wind to travel from the Sun to the Earth. This is obviously very fast, yet significantly slower than the roughly 8 minutes and 20 seconds that it takes for light to travel the same distance. This indicated that something other than light travels from the Sun to us. Following work by Kristian Birkeland from Norway, the English geophysicist Sydney Chapman and the German astronomer Ludwig Biermann, Eugene Parker in the USA created a coherent model for the solar wind in 1958.
Progress in mapping the Sun’s magnetic field was made in the mid-twentieth century by an American father-and-son team. The prominent solar astronomer Harold D. Babcock (1882-1968) studied spectroscopy and the magnetic fields of stars. Horace W. Babcock (1912-2003) was his son. The two Babcocks were the first to measure the distribution of magnetic fields over the solar surface. These fields change polarity every 11-year cycle, indicating that solar activity varies with a period of around 22 years. They developed important models of sunspots and their magnetism. In the early 1950s, Horace Babcock was the first person to propose adaptive optics, a methodology that provides real-time corrections with deformable mirrors to remove the blurring of ground-based astronomical images caused by turbulence in the Earth’s atmosphere. Adaptive optics works best at longer wavelengths, such as in the infrared.
To learn more about the Sun’s interior, astronomers record its vibrations — a study called helioseismology. In principle this is related to how geophysicists use seismic waves to study the interior of the Earth. While there are no true sunquakes, the Sun does vibrate at a variety of frequencies, which can be detected. Its magnetic field is believed to be created as a result of its rotation and the resulting motion of the ionized particles found throughout its body.
As we have seen, it was possible for European astrophysicists in the late 1800s to detect the presence of elements such as hydrogen in the Sun, but they did not yet know how large a percentage of its mass consisted of hydrogen. In the 1920s, many scientists still assumed that it was rich in heavy elements. This changed with the work of the English-born astronomer Cecilia Payne (1900-1979), later known as Payne-Gaposchkin after she married a Russian astronomer. Her interest in astronomy was triggered after she heard Arthur Eddington lecture on relativity. She joined the Harvard College Observatory in the USA. By using spectroscopy, Payne worked out that hydrogen and helium are the most abundant elements in stars. Otto Struve (1897-1963), a Russian astronomer of ethnic German origins, called her thesis Stellar Atmospheres from 1925 “the most brilliant Ph.D. thesis ever written in astronomy.”
The Irish astronomer William McCrea (1904-1999) and the German astrophysicist Albrecht Unsöld (1905-1995) independently established that the prominence of hydrogen in stellar spectra indicates that the presence of hydrogen in stars is greater than that of all other elements put together. Unsöld studied under the German theoretical physicist Arnold Sommerfeld at the University of Munich and began working on stellar atmospheres in 1927.
The English mathematical physicist James Jeans (1877-1946) worked on thermodynamics, heat and aspects of radiation, publishing major works on these topics and their applications to astronomy. The English astrophysicist and mathematician Arthur Milne (1896-1950) did research in the 1920s on stellar atmospheres, much of it with his English colleague Ralph H. Fowler (1889-1944). This led to the determination of the temperatures and pressures associated with spectral classes, explaining the origin of stellar winds. The astronomer Marcel Minnaert (1893-1970) was forced to flee from his native Belgium to the Netherlands for taking part in Flemish activism. From 1937 to 1963 he was director of the Utrecht Observatory, where he and his students did quantitative analysis of the solar spectrum.
Newton had speculated on the energy source of the Sun. He assumed that it loses mass by emitting light particles and suggested that incoming comets could provide it with more mass to compensate for this. The French physicist Claude Pouillet (1791-1868) in 1837 calculated a decent estimate of the energy emitted by the Sun. Supplying that much energy through infalling matter, however, would require a mass almost the equivalent of the Earth’s Moon to hit the Sun every year, which was clearly not the case.
In 1854, Hermann von Helmholtz suggested that the Sun was contracting and converting potential energy into radiated energy. This Kelvin-Helmholtz mechanism of gravitational contraction, named after Lord Kelvin and Helmholtz, is relevant for planets like Jupiter, which emits approximately twice as much energy as it receives from the Sun. Its energy comes from radioactive elements in its core and from an overall contraction amounting to a few centimeters per century. For the Sun, the required rate of contraction was about 91 meters per year, which would have implied an impossible reduction of the Sun’s diameter by 50% over 5 million years. Nineteenth-century physicists were partially right, though; the initial release of gravitational energy ignites nuclear fusion in stars by heating up their cores.
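How short a contraction-powered lifetime would be can be seen from the Kelvin-Helmholtz timescale, a standard order-of-magnitude estimate of how long gravity alone could sustain the Sun’s output. A rough sketch using modern values, with constants of order unity ignored:

```python
# Kelvin-Helmholtz timescale: t ~ G * M**2 / (R * L), ignoring factors of order unity.
# It estimates how long gravitational contraction alone could power the Sun.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.955e8    # solar radius, m
L_SUN = 3.846e26   # solar luminosity, W

t_seconds = G * M_SUN**2 / (R_SUN * L_SUN)
t_years = t_seconds / 3.156e7  # seconds in a year
print(f"Kelvin-Helmholtz timescale: about {t_years/1e6:.0f} million years")
# Roughly 30 million years, far too short to reconcile with the age of the Earth,
# which is why contraction alone could not be the Sun's main energy source.
```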
In everyday language we say that stars “burn,” but this should not be taken literally. Burning a fuel — solid, liquid or gas — is called combustion, a chemical process in which a substance reacts rapidly with oxygen and gives off heat; the source of the oxygen is called the oxidizer. Rocket engines, internal combustion engines and jet engines all depend on the burning of fuel to produce power. The combustion of liquid hydrogen and liquid oxygen is a commonly used reaction in rocket engines. The result is water vapor, H2O. The reason why water cannot burn is that water is already “burnt,” chemically speaking.
In a common fireplace or campfire, the carbon in the wood combines with oxygen gas in the air to produce heat and carbon dioxide, CO2. Chemical combustion was considered a possible source of solar energy by European scientists in the 1800s, but was eventually rejected, as it would have burnt away the entire Sun in a few thousand years. Solar energy is produced in a radically different way, not by combining various elements through normal chemical processes, but by producing new chemical elements through nuclear processes.
The discovery of radioactivity in 1896 suddenly provided a new source of heat. In 1905 Albert Einstein generalized the law of conservation of energy with his famous mass-energy equivalence formula E = mc², where E stands for energy, m for mass and c for the speed of light in a vacuum. Since the speed of light is very great, the formula implies that very little mass is required to generate huge amounts of energy. The English physical chemist Francis William Aston, with his mass spectrograph, in 1920 made precise measurements of the masses of different atoms. He found that four individual hydrogen nuclei were more massive than a single helium nucleus consisting of four nuclear particles. Arthur Eddington argued that these measurements indicated that by converting hydrogen nuclei to helium and releasing about 0.7% of the hydrogen’s mass as energy in the process, the Sun could shine for billions of years.
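Eddington’s argument can be turned into a back-of-the-envelope calculation. The sketch below assumes, as a common textbook simplification not taken from the text, that roughly a tenth of the Sun’s mass is hydrogen available for fusion in the core:

```python
# Back-of-the-envelope version of Eddington's argument: fusing hydrogen into helium
# releases about 0.7% of the fused mass as energy, via E = m * c**2.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
L_SUN = 3.846e26     # solar luminosity, W
EFFICIENCY = 0.007   # fraction of fused mass released as energy
CORE_FRACTION = 0.1  # assumed fraction of the Sun's mass fused in the core (textbook simplification)

total_energy = CORE_FRACTION * EFFICIENCY * M_SUN * C**2  # joules
lifetime_years = total_energy / L_SUN / 3.156e7           # divide by watts, then by seconds per year
print(f"Rough fusion-powered lifetime: about {lifetime_years/1e9:.0f} billion years")
# Comes out near 10 billion years, enough to shine for billions of years, as Eddington argued.
```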
The great English astrophysicist Arthur Stanley Eddington (1882-1944) was born to Quaker parents and earned a scholarship to Owens College, Manchester, in 1898. He turned to physics and went to Trinity College at the University of Cambridge. He spent seven years (1906 to 1913) as chief assistant at the Royal Observatory at Greenwich. He took inspiration from the Hertzsprung-Russell diagram, made important investigations of stellar dynamics and became an influential supporter of the view that the spiral nebulae are external galaxies. Eddington’s greatest contributions concerned astrophysics. He dealt with the importance of radiation pressure, the mass-luminosity relation and investigated the internal structure and evolution of stars. He wrote several books, some of them for the general reader. His The Internal Constitution of the Stars from 1926 was extremely influential to a generation of astrophysicists. He was one of the first to provide observational support for Einstein’s general theory of relativity from 1916 and explain it to a mass audience. Eddington was also among the first to suggest that processes at the subatomic level involving hydrogen and helium could explain why stars generate energy, but it was left for other scientists to work out the details.
The Sun has a mass of about 1.989 × 10³⁰ kg, roughly 333,000 times the mass of the Earth, and a mean density of 1408 kg/m³, or 1.408 kg/dm³, a little more than that of water. Its equatorial radius (the distance from its center to its surface) is 695,500 kilometers, approximately 109 times the Earth’s radius. The energy per unit time put out by the Sun, its luminosity, is more than 3.8 × 10²⁶ joules per second (or watts). The amount of mass that the Sun converts into energy equals more than 4 million metric tons, or 4 billion kg, per second.
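The last figure follows directly from E = mc²: dividing the Sun’s luminosity by c² gives the mass converted to energy each second (a quick check using the numbers above):

```python
# Mass converted to energy per second: m = E / c**2 = L / c**2.
L_SUN = 3.8e26  # solar luminosity in watts (joules per second), as quoted above
C = 2.998e8     # speed of light, m/s

mass_per_second = L_SUN / C**2
print(f"Mass converted: about {mass_per_second/1e9:.1f} billion kg per second")
# About 4.2 billion kg, i.e. over 4 million metric tons, each second.
```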
The theoretical physicist George Gamow (1904-1968) was born in the seaport city of Odessa in the Russian Empire (now in Ukraine) on the northern shore of the Black Sea. His father came from a military family and taught Russian literature in high school; his mother’s father was the Orthodox Archbishop of Odessa. At the University of Leningrad he studied briefly under the Russian mathematician Alexander Friedmann, who was interested in the mathematics of relativity. After completing his Ph.D. in 1928, Gamow worked on quantum mechanics at Göttingen, Copenhagen and Cambridge. Unable to endure the brutal oppression under the Communist dictator Joseph Stalin, he fled the Soviet Union and moved to the United States in 1934. Gamow introduced nuclear theory into cosmology.
According to classical physics, two particles with the same electrical charge will repel each other. In 1928, Gamow derived a quantum-mechanical formula that gave a non-zero probability of two charged particles overcoming their mutual electrostatic repulsion and coming very close together. It is now known as the “Gamow factor.” The Dutch-Austrian nuclear physicist Fritz Houtermans (1903-1966) together with his British colleague Robert Atkinson (1898-1982) in 1929 predicted that the nuclei of light atoms such as hydrogen could fuse through quantum tunneling, and that the resultant atoms would have slightly less mass than the original constituents. This loss in mass would be released as vast amounts of energy.
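In its modern textbook form (reproduced here for illustration), the tunneling probability for two nuclei with charges Z₁ and Z₂ approaching at relative velocity v is suppressed by the exponential now called the Gamow factor:

```latex
P \;\propto\; \exp\!\left(-\frac{2\pi Z_1 Z_2 e^{2}}{4\pi\varepsilon_0 \hbar v}\right)
```

The suppression grows steeply as the nuclear charges increase or the velocity drops, which is why fusion of heavier nuclei demands ever higher temperatures.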
The final major piece of the puzzle was the structure of the atom itself. When the neutron had been detected by the Englishman James Chadwick in 1932, physicists finally had sufficient information about the atomic nucleus to calculate the details of how hydrogen can fuse to become helium. This nuclear fusion process was worked out independently by two German-born physicists in the late 1930s: Hans Bethe in the USA and Carl von Weizsäcker in Berlin.
Carl Friedrich Freiherr von Weizsäcker (1912-2007) was born in Kiel, Germany, to a prominent family; his father was a German diplomat, and his younger brother Richard would later become President of Germany. He studied physics and astronomy in Berlin, Göttingen and Leipzig (1929-1933) and was supervised by prominent nuclear physicists such as Werner Heisenberg and Niels Bohr. After the Second World War he was appointed head of a department at the Max Planck Institute for Physics in Göttingen, and from 1957 to 1969 he was Professor for Philosophy at the University of Hamburg in northern Germany.
Hans Bethe (1906-2005) studied at the Universities of Frankfurt and Munich, where he earned his Ph.D. under the great German theoretical physicist Arnold Sommerfeld in 1928. He was forced to leave Germany after Adolf Hitler and the Nazi Party came to power there in 1933, since his mother was Jewish, although he had been raised as a Christian by his father. Bethe was at Cornell University in the USA from 1935 to 2005 and became an American citizen in 1941. Weizsäcker was a member of the team that performed nuclear research in Germany during WW2, while Bethe became the head of the theoretical division at Los Alamos during the development of nuclear weapons in the United States. In stellar physics, both men described the proton-proton chain, which is the dominant energy source in stars the size of our Sun or smaller, and the carbon-nitrogen-oxygen (CNO) cycle. Author John North writes:
“It was not until 1938, when attending a Washington conference organized by Gamow, that he was first persuaded to turn his attention to the astrophysical problem of stellar energy creation. Helped by Chandrasekhar and Strömgren, his progress was astonishingly rapid. Moving up through the periodic table, he considered how atomic nuclei would interact with protons. Like Weizsäcker, he decided that there was a break in the chain needed to explain the abundances of the elements through a theory of element-building. Both were stymied by the fact that nuclei with mass numbers 5 and 8 were not known to exist, so that the building of elements beyond helium could not take place….Like Weizsäcker, Bethe favored the proton-proton reaction chain and the CNO reaction cycle as the most promising candidates for energy production in main sequence stars, the former being dominant in less massive, cooler, stars, the latter in more massive, hotter, stars. His highly polished work was greeted with instant acclaim by almost all of the leading authorities in the field.”
Bengt Strömgren (1908-1987) was an astrophysicist from Denmark, the son of a Swedish astronomer. He studied in Copenhagen and stayed in touch with the latest developments in physics via Niels Bohr’s Institute there. Strömgren did important research in stellar structure in the 1930s and calculated the relative abundances of the elements in the Sun and other stars.
The nineteenth-century German astronomer Friedrich Bessel was the first to notice minor deviations in the motions of the bright stars Sirius and Procyon, which he correctly assumed must be caused by the gravitational attraction of unseen companions. The existence of these bodies was later confirmed. Bessel was also the first person to clearly measure stellar parallax, in 1838, an achievement matched independently by the Baltic German astronomer Friedrich Georg Wilhelm von Struve and the Scottish astronomer Thomas Henderson.
In 1862 the American telescope maker Alvan Graham Clark (1832-1897) discovered the very faint companion Sirius B. Because the companion was about twice as far as Sirius from their common center of mass, it had to weigh about half as much (like a child twice as far from the center of a see-saw balancing an adult). The American astronomer Walter Sydney Adams (1876-1956) in 1915 identified Sirius B as a white dwarf star, a very dense object about the size of the Earth but with roughly the same mass as the Sun. Our Sun will eventually end up as a white dwarf billions of years from now, after first having gone through a red giant phase where it will expand greatly in volume and vaporize Mercury, Venus and possibly the Earth.
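The see-saw argument is simply the lever rule for a two-body orbit: the masses are inversely proportional to the distances from the common center of mass. A tiny sketch, assuming the modern figure of roughly two solar masses for Sirius A (a value not given in the text above):

```python
# Lever rule for a binary star: m_A * r_A = m_B * r_B, hence m_B = m_A * (r_A / r_B).
# Sirius B lies about twice as far from the common center of mass as Sirius A,
# so it must have about half the mass.
m_sirius_a = 2.0      # solar masses, assumed modern value for Sirius A
distance_ratio = 0.5  # r_A / r_B, since Sirius B is roughly twice as far out
m_sirius_b = m_sirius_a * distance_ratio
print(f"Estimated mass of Sirius B: about {m_sirius_b:.0f} solar mass")
# Roughly one solar mass packed into an Earth-sized body, hence its extreme density.
```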
The astrophysicist Subrahmanyan Chandrasekhar (1910-1995) was born in Lahore into a Tamil Hindu family and got a degree at the University of Madras in what was then British-ruled India. After receiving a scholarship he studied at the University of Cambridge in England and came to the University of Chicago in the United States in 1937, where he remained for the rest of his life. NASA’s Chandra X-ray Observatory from 1999 was named after him. He is remembered above all for his contributions to the subject of stellar evolution.
He was the nephew of the physicist Chandrasekhara Venkata Raman (1888-1970) from Madras, whose discovery of the Raman Effect in 1928, the change in wavelength of light when it is deflected by molecules, “greatly impacted future research regarding molecular structure and radiation.” Raman was knighted by the British in 1929 and won the Nobel Prize in Physics in 1930, the first Asian to win a science Nobel. Rabindranath Tagore (1861-1941), the great Bengali writer and artist from India, had earlier been awarded the Nobel Prize for Literature in 1913. Chandrasekhar shared the Nobel Prize in Physics in 1983.
In 1930, Chandrasekhar applied the new quantum ideas to the physics of stellar structure. He realized that when a star like the Sun exhausts its nuclear fuel it will collapse due to its own gravity until stopped by the Pauli Exclusion Principle, which prevents electrons from getting too close to one another. Stars more massive than the Chandrasekhar Limit of 1.44 solar masses do not stabilize at the white dwarf stage but become neutron stars. The upper limit for a neutron star before it collapses further is called the Oppenheimer-Volkoff Limit after J. Robert Oppenheimer (1904-1967) from the USA and the Russian-born Canadian physicist George Volkoff (1914-2000). The O-V Limit is less certain, but is estimated at approximately three solar masses. Stars of greater mass than this are believed to end up as black holes.
The process of combining light elements into heavier ones — nuclear fusion — happens in the central region of stars. In their extremely hot cores, instead of individual atoms there is a mix of nuclei and free electrons, or what we call plasma. The term “plasma” was first applied to ionized gas by Irving Langmuir (1881-1957), a physical chemist from the USA, in 1923. It is the fourth state of matter, and by far the most common in the universe, in addition to the three we are familiar with from everyday life on Earth: solid, liquid and gas. Extreme temperatures and pressures are needed to overcome the mutual electrostatic repulsion of positively charged atomic nuclei (ions), often called the Coulomb barrier after the French natural philosopher Charles de Coulomb, who formulated the laws of electrostatic attraction and repulsion.
While their work represented a huge conceptual breakthrough, the initial theories of Weizsäcker and Bethe did not explain the creation of elements heavier than helium. Edwin Ernest Salpeter (1924-2008) was an astrophysicist who emigrated from Austria to Australia, studied at the University of Sydney and finally ended up at Cornell University in the USA, where he worked in the fields of quantum electrodynamics and nuclear physics with Hans Bethe. In 1951 he explained how, through the “triple-alpha” reaction, carbon nuclei could be produced from helium nuclei in the nuclear reactions within certain large and hot stars.
The fusion of hydrogen to helium by the proton-proton chain or CNO cycle requires temperatures on the order of 10 million degrees Celsius or Kelvin. Only at those temperatures will there be enough hydrogen ions in the plasma with high enough velocities to tunnel through the Coulomb barrier at sufficient rates. There are no stable isotopes of any element with atomic mass numbers 5 or 8; beryllium-8 (4 protons and 4 neutrons) is highly unstable and short-lived. Only at extremely high temperatures of around 100 million K can the sequence called the triple-alpha process take place. It is so called because its net effect is to combine three alpha particles, that is, ordinary helium-4 nuclei of two protons and two neutrons, to form a carbon-12 nucleus (6 protons and 6 neutrons). In main sequence stars the central temperatures are too low for this process to take place, but not in stars in the red giant phase.
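Written out as nuclear reactions, the triple-alpha process runs in two steps, with the short-lived beryllium-8 as the intermediate (standard notation, added here for illustration):

```latex
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} \rightleftharpoons {}^{8}\mathrm{Be},
\qquad
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \rightarrow {}^{12}\mathrm{C} + \gamma
```

Since beryllium-8 falls apart almost at once, a third helium nucleus must strike it before it decays, which is why the process requires the extreme densities and temperatures of red giant cores.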
Further advances were made by the English astrophysicist Fred Hoyle (1915-2001). He was born in Yorkshire in northern England and educated in mathematics and theoretical physics at the University of Cambridge by some of the leading scientists of his day, among them Arthur Eddington and Paul Dirac. During World War II he contributed to the development of radar. With the German American astronomer Martin Schwarzschild (1912-1997), son of the astrophysicist Karl Schwarzschild and a pioneer in the use of electronic computers and high-altitude balloons to carry scientific instruments, he developed a theory of the evolution of red giant stars. Hoyle stayed at Cambridge from 1945 to 1973. In addition to his career in physics he was known for his popular science works and wrote novels, plays and short stories. He attributed life on Earth to an infall of organic matter from space. He was controversial throughout his life for supporting many highly unorthodox ideas, yet he made indisputable contributions to our understanding of stellar nucleosynthesis and together with a few others convincingly demonstrated how heavy elements can be created during supernova explosions.
The English astrophysicist Margaret Burbidge (born 1919) was educated at the University of London. She worked in the USA for a long time, but also served as director of the Royal Greenwich Observatory in her native Britain. She studied the spectra of galaxies, determining their masses and chemical composition, and married fellow Englishman Geoff Burbidge (1925-2010). He was educated at the University of Bristol and at University College, London, where he earned a Ph.D. in theoretical physics. The American astrophysicist William Alfred Fowler (1911-1995) earned his B.S. in engineering physics at Ohio State University and his Ph.D. in nuclear physics at the California Institute of Technology. He and his colleagues at Caltech measured the rates of nuclear reactions of astrophysical interest. After 1964, Fowler worked on problems involving supernovae and the formation of lighter chemical elements.
Building on the work of Hans Bethe, Hoyle in 1957 co-authored with Fowler and the husband-and-wife team of Geoffrey and Margaret Burbidge the paper Synthesis of the Elements in Stars. They demonstrated how the cosmic abundances of all heavier elements from carbon to uranium could be explained as the result of nuclear reactions in stars. Yet of these four individuals, only William Fowler shared in the Nobel Prize in Physics in 1983 (together with Subrahmanyan Chandrasekhar) for work on the evolution of stars. By then Fred Hoyle was known for, among many other controversial ideas, attributing influenza epidemics to viruses carried in meteor streams.
The Canadian scientist Alastair G. W. Cameron (1925-2005) further aided our understanding of these stellar processes. Astrophysicists spent the 1960s and 70s establishing detailed descriptions of the internal workings of stars. Chushiro Hayashi (1920-2010), educated at the University of Tokyo, together with his students made valuable contributions to stellar models. He found that pre-main-sequence stars follow what are now called “Hayashi tracks” downward on the Hertzsprung-Russell diagram until they reach the main sequence. He was a leader in building astrophysics as a discipline in Japan. The Armenian scientist Victor Ambartsumian (1908-1996) was a pioneer in astrophysics in the Soviet Union, studied stellar evolution and hosted international conferences on the search for extraterrestrial civilizations.
The important subatomic particles called neutrinos entered physics as a way to understand beta decay, the process by which a radioactive atomic nucleus emits an electron. Experiments showed that the total energy of the resulting nucleus plus the electron was less than that of the initial nucleus. The Austrian physicist Wolfgang Pauli, trusting the principle of energy conservation, proposed in 1930 that an unknown particle carried off the missing energy. If it existed it had to be electrically neutral, possess nearly zero mass and move at close to the speed of light. Enrico Fermi named it the neutrino, meaning “little neutral one” in Italian.
Because of their extremely weak interactions with matter, neutrinos are difficult to detect; billions of them are thought to be going through your body every second. Their existence was confirmed in 1956 through experiments with tanks containing hundreds of liters of water by the scientists Frederick Reines (1918-1998) and Clyde Cowan (1919-1974) in the USA. This great achievement was decades later rewarded with a well-deserved Nobel Prize in Physics.
Scientists realized that nuclear reactions in stars should produce vast amounts of neutrinos, which might provide us with valuable information about places and physical processes that are otherwise hard to observe. Whereas light is easily absorbed as it moves through space, neutrinos rarely interact with anything and unlike many other particles have no charge, so they travel in a straight line from their source without being deflected by magnetic fields. In 1967, the physicist Raymond Davis, Jr. (1914-2006) installed a large tank of cleaning fluid in a deep gold mine in South Dakota in the United States. It was a prototype of sensitive detectors that were normally placed in abandoned mineshafts or other places deep below the Earth’s surface, in sharp contrast to optical telescopes placed on dry mountaintops.
Neutrino observatories have since then been built in many remote places, from the bottom of the Mediterranean Sea to IceCube at the Amundsen-Scott South Pole Station in Antarctica. The Russian researcher Moisey Markov (1908-1994) in the Soviet Union around 1960 suggested using natural bodies of water as neutrino detectors. By the 1980s, Russians realized that they had a massive tank of pure water in their own backyard: Lake Baikal. It contains 20 percent of the world’s unfrozen freshwater and has been isolated from other lakes and oceans for a very long time, leading to the evolution of a unique local flora and fauna. Russians regard it as their Galápagos. The neutrino telescope there operates underwater all year round.
Several tons of lead from an ancient Roman shipwreck have been transferred from a museum on the island of Sardinia to the Italian national particle physics laboratory at Gran Sasso. Once destined to become water pipes, coins or ammunition for the slingshots of Roman soldiers, the lead in the ingots, which has lost almost all traces of its radioactivity, will instead form part of experiments to nail down the mass of neutrinos. The Kamiokande detector in the Japanese Alps has been important for similar studies. Raymond Davis and the great Japanese physicist Masatoshi Koshiba (born 1926) shared the 2002 Nobel Prize in Physics for work on neutrinos.
In the 1990s, Japanese and American scientists obtained experimental evidence indicating that neutrinos have non-zero mass, yet it is extremely small even compared to that of the electron. Neutrinos are light elementary particles, but because there are so many of them their tiny masses can add up to influence the overall distribution of galaxies. A neutrino’s mass is currently believed to be no more than 0.28 electron volts, less than a billionth of the mass of a hydrogen atom, but the value is not yet established with certainty and may turn out to be slightly higher than this.
From the 1960s to about 2002, scientists struggled to explain why the number of observed neutrinos from the Sun appeared to be less than had been predicted. The mystery of the “missing solar neutrinos” was solved when it was understood that neutrinos can change type, and that certain types are more challenging to detect than others. After these adjustments had been made, the number of observed solar neutrinos closely matched theoretical predictions, which indicated that our understanding of nuclear processes within stars like the Sun is reasonably accurate. As the American neutrino physicist John N. Bahcall (1934-2005) put it:
“[A] 1% error in the [Sun’s central] temperature corresponds to about a 30% error in the predicted number of neutrinos; a 3% error in the temperature results in a factor of two error in the neutrinos. The physical reason for this great sensitivity is that the energy of the charged particles that must collide to produce the high-energy neutrinos is small compared to their mutual electrical repulsion. Only a small fraction of the nuclear collisions in the Sun succeed in overcoming this repulsion and causing fusion; this fraction is very sensitive to the temperature. Despite this great sensitivity to temperature, the theoretical model of the Sun is sufficiently accurate to predict correctly the number of neutrinos.”
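Bahcall’s figures imply that the predicted flux of high-energy solar neutrinos scales as a very steep power of the central temperature. A quick consistency check of his two numbers (my own arithmetic, not Bahcall’s): if the flux goes as the temperature raised to some power n, then a 1% error producing a 30% error implies n of roughly 26, and a 3% error would then change the prediction by about a factor of two, just as the quote states.

```python
import math

# Quick consistency check of Bahcall's numbers, assuming a simple power law
# N ~ T**n for the predicted flux of high-energy solar neutrinos.
n = math.log(1.30) / math.log(1.01)   # exponent implied by "1% -> 30%"
factor_for_3_percent = 1.03 ** n      # what a 3% temperature error would then give
print(f"Implied exponent n ~ {n:.0f}")
print(f"A 3% temperature error changes the prediction by a factor of {factor_for_3_percent:.1f}")
```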
Neutrinos have emerged as an important tool for astrophysicists. 1987 was a landmark year in neutrino astronomy, with the first naked-eye supernova seen since 1604. That event, called SN1987A, took place in our galactic neighbor the Large Magellanic Cloud. The two most sensitive neutrino observatories in the world, one in Japan and another in the USA, detected a 12-second burst of neutrinos roughly three hours before the supernova became optically visible, which, again, seemed to match theoretical predictions for such events pretty well.
In 1911 the American astronomer Edward Pickering differentiated between low-energy novae, often seen in the Milky Way, and novae seen in other nebulae (galaxies) like Andromeda. By 1919, the Swedish astronomer Knut Lundmark (1889-1958) had realized that low-energy novae occur commonly whereas the brighter novae, which are vastly more luminous, occur rarely. The challenge was to explain the difference between them. In 1981, Gustav A. Tammann from Switzerland estimated that three supernovae occur every century in the Milky Way, yet most of them go undetected owing to obscuring interstellar material.
A nova (pl. novae) is a nuclear explosion caused by the accretion of hydrogen from a nearby companion onto the surface of a white dwarf star, which briefly reignites its nuclear fusion process until the hydrogen is gone. From the Earth we will see what appears to be a new star, hence the name nova (“new” in Latin), but in reality it is an old star undergoing an eruption. It is possible for a star to become a nova repeatedly as this process does not destroy it, unlike a supernova event which obliterates a massive star in a cataclysmic explosion. A supernova explosion can release extraordinary amounts of energy and for a limited period outshine an entire galaxy.
If a white dwarf gains enough additional mass to exceed the Chandrasekhar Limit of about 1.44 solar masses, electron degeneracy pressure can no longer sustain it. The star will then collapse and explode in a so-called Type Ia supernova. Since this limit is held to be constant, these supernovas have been used as standard candles to measure cosmic distances. Observations of such supernovas were used in 1998 to demonstrate that the expansion of our universe appears to be accelerating. However, some observations indicate that such events can also be triggered by two white dwarfs colliding, which might make them slightly less reliable as uniform standard candles, since the mass limit could be less constant than was once believed.
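Because the peak brightness of a Type Ia supernova is roughly the same everywhere, its apparent brightness translates directly into a distance. The sketch below shows the standard distance-modulus bookkeeping; the assumed peak absolute magnitude of about -19.3 and the observed magnitude of 20 are illustrative textbook-style numbers that I supply, not values taken from this essay.

```python
# How a "standard candle" yields a distance: if the absolute magnitude M is
# known, the distance modulus m - M gives d.  The peak magnitude used here
# is a typical textbook figure for a Type Ia supernova, supplied by me.
M_TYPE_IA = -19.3   # assumed peak absolute magnitude

def distance_parsecs(apparent_magnitude: float, absolute_magnitude: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_magnitude - absolute_magnitude + 5) / 5)

m_observed = 20.0   # a hypothetical observed peak brightness
d_pc = distance_parsecs(m_observed, M_TYPE_IA)
print(f"Apparent magnitude {m_observed} -> distance ~ {d_pc / 1e6:.0f} million parsecs")
```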
The neutron was discovered in 1932. Shortly after this, the German-born Walter Baade (1893-1960) and the Swiss astronomer Fritz Zwicky (1898-1974), both eventually based in the USA, proposed the existence of neutron stars. Zwicky had a number of brilliant teachers at the ETH in Zürich, including Hermann Weyl, Auguste Piccard and Peter Debye, but left Switzerland for the USA and the California Institute of Technology in 1925 to work with Robert Millikan.
Zwicky was not as systematic a thinker as Baade, but he could have excellent intuitive ideas. He was a bold and visionary scientist, but also eccentric and not always easy to work with. He stated that “Astronomers are spherical bastards. No matter how you look at them they are just bastards.” His colleagues did not appreciate his often aggressive attitude, but he was friendly toward students and administrative staff. In the words of the English-born physicist Freeman Dyson, “Zwicky’s radical ideas and pugnacious personality brought him into frequent conflict with his colleagues at Caltech. They considered him crazy and he considered them stupid.”
Educated at Göttingen, Walter Baade worked at the Hamburg Observatory in Germany from 1919 to 1931 and at the Mount Wilson Observatory outside of Los Angeles, California, from 1931 to 1958. During the World War II blackouts, Baade used the large Hooker telescope to resolve stars in the central region of the Andromeda Galaxy for the first time. This led to the realization that there were two kinds of Cepheid variable stars and from there to a doubling of the assumed scale of the universe. The German American astronomer Rudolph Minkowski (1895-1976) joined with him in studying supernovae. He was a nephew of the German Jewish mathematician Hermann Minkowski, who did important work on four-dimensional spacetime.
The optician Bernhard Schmidt (1879-1935) was born on an island off the coast of Tallinn, Estonia, in the Baltic Sea, then a part of the Russian Empire. He spoke Swedish and German and spent most of his adult life in Germany. During a journey to Hamburg in 1929 he discussed with Walter Baade the possibility of making a special camera for wide-angle sky photography. He then developed the Schmidt camera and telescope in 1930, which permitted wide-angle views with little distortion and opened up new possibilities for astronomical research. Yrjö Väisälä (1891-1971), a meteorologist, astronomer and instrument maker from Finland, had been working on a related design before Schmidt but left the invention unpublished at the time.
Zwicky and Baade introduced the term “supernova” and suggested that these events are completely different from ordinary novae. They proposed that after the turbulent collapse of a massive star, the residue of which would be an extremely compact neutron star, there would still be a large amount of energy left over. According to the book Cosmic Horizons:
“ Baade knew of several historical accounts of ‘new stars’ that had appeared as bright naked eye objects for several months before fading from view. The Danish astronomer Tycho Brahe, for example, had made careful observations of one in 1572. Zwicky and Baade thought that such events must be supernova explosions in our own Galaxy. At a scientific conference in 1933, they advanced three bold new ideas: (1) massive stars end their lives in stupendous explosions which blow them apart, (2) such explosions produce cosmic rays, and (3) they leave behind a collapsed star made of densely-packed neutrons. Zwicky reasoned that the violent collapse and explosion of a massive star would leave a dense ball of neutrons, formed by the crushing together of protons and electrons. Such an object, which he called a ‘neutron star,’ would be only several kilometers across but as dense as an atomic nucleus. This bizarre idea was met with great skepticism. Neutrons had only been discovered the year before. The notion that an entire star could be made of such an exotic form of matter was startling, to say the least.”
Astronomers readily accepted supernovas but remained doubtful about neutron stars for many years, believing that such strange objects were unlikely to exist in real life. To transform protons and electrons into neutrons, the density would have to approach the incredible density of an atomic nucleus, about 10^17 kg/m3. A neutron star of twice the mass of our Sun would have a diameter of only 20 kilometers and would therefore fit inside any major city on Earth. Despite the name, these objects are probably not composed solely of neutrons. As authors Neil F. Comins and William J. Kaufmann III state in their book Discovering the Universe:
“Its interior has a radius of about 10 km, with a core of superconducting protons and superfluid neutrons. A superconductor is a material in which electricity and heat flow without the system losing energy, whereas a superfluid has the strange property that it flows without any friction. Both superconductors and superfluids have been created in the laboratory. Surrounding a neutron star’s core is a layer of superfluid neutrons. The surface of the neutron star is a solid, brittle crust of dense nuclei and electrons about ?-km thick. The gravitational force of the neutron star is so great at its surface that climbing a bump there just 1-mm high would take more energy than it takes to climb Mount Everest. Neutron stars may also have atmospheres, as indicated by absorption lines in the spectrum of at least one of them.”
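The size and density figures mentioned just before this quotation can be checked with simple arithmetic. The following back-of-the-envelope sketch packs about two solar masses into a sphere of roughly 10 km radius (rounded values of my own choosing) and lands within an order of magnitude of the nuclear density quoted above.

```python
import math

# Rough density check for a neutron star of about two solar masses packed into
# a sphere of roughly 10 km radius (rounded values supplied for illustration).
SOLAR_MASS = 1.99e30   # kg
mass = 2 * SOLAR_MASS  # kg
radius = 10e3          # m, about 10 km

volume = (4.0 / 3.0) * math.pi * radius ** 3
density = mass / volume   # comes out near 1e18 kg/m^3
print(f"Mean density ~ {density:.1e} kg/m^3 (nuclear density is ~2e17 kg/m^3)")
```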
Neutron stars were first observed in the 1960s with the rapid development of non-optical astronomy. In 1967 the astrophysicist Jocelyn Bell (born 1943) and the radio astronomer Antony Hewish (born 1924) at Cambridge University in England discovered the first pulsar. They were looking for variations in the radio brightness of quasars and discovered a rapidly pulsating radio source. The radiation had to come from a source not larger than a planet. The Austrian-born, USA-based astrophysicist Thomas Gold (1920-2004) soon identified these objects as rotating neutron stars, pulsars, with extremely powerful magnetic fields that sweep around many times per second as the stars rotate, making them appear as cosmic lighthouses.
Antony Hewish won the Nobel Prize in Physics in 1974, the first one awarded for astronomical research, although his graduate student Bell made the initial discovery. He shared the Prize with the prominent English radio astronomer Martin Ryle (1918-1984), who helped develop radar countermeasures for British defense during World War II and after the war became the first professor of radio astronomy in Britain. Ryle became a leading opponent of the Steady State cosmological model proposed by the English astrophysicist Fred Hoyle.
The process of converting lower-mass chemical elements into higher-mass ones is called nucleosynthesis. One or more stars can be formed from a large cloud of gas and dust. As it slowly contracts due to gravity, the condensation releases energy which in turn heats up the central region of the cloud. The protostar continues to contract until the core temperature reaches about 10 million K, which constitutes the minimum temperature required for normal hydrogen-to-helium fusion to begin. A main sequence star is then born. When a star exhausts its hydrogen supply the pressure in its core falls and it begins to shrink, releasing energy and heating up further. The next step is core helium-to-carbon fusion, the triple-alpha process, which requires a central temperature of about 100 million K. Helium fusion also produces nuclei of oxygen-16 (8 protons and 8 neutrons) and neon-20 (10 protons and 10 neutrons).
At core temperatures of 600 million K, carbon-12 can fuse to form sodium-23 (11 protons, 12 neutrons) and magnesium-24 (12 protons, 12 neutrons), but not all stars can reach such temperatures. Stars with higher masses fuse more elements than stars with lower masses. High-mass stars have more than 8-9 solar masses; intermediate ones 0.5 to 8 solar masses; and low-mass stars contain merely 0.08 to 0.5 solar mass. After exhausting its central supply of hydrogen and helium, the core of a high-mass star undergoes a sequence of other thermonuclear reactions at an ever faster pace, reaching higher and higher temperatures.
When helium fusion ends in the core of a star with more than 8 solar masses, gravitational compression collapses the carbon-oxygen core and drives up the temperature to above 600 million K. Helium fusion continues in a shell outside of the core, and this shell is itself surrounded by a hydrogen-fusing shell. At 1 billion K oxygen nuclei can fuse, producing silicon-28 (14 protons, 14 neutrons), phosphorus-31 (15 protons, 16 neutrons) and sulfur-32 (16 protons, 16 neutrons). Each stage goes faster and faster. At 2.7 billion K, silicon fusion begins. Every stage of fusion adds a new shell of matter outside the core, creating something resembling the layers of a massive onion. The outer layers are pushed further and further out.
Energy production in big stars can continue until the various fusion processes have reached nuclei of iron-56 (26 protons, 30 neutrons), which has one of the lowest existing masses per nucleon (nuclear particle, proton or neutron). The mass of an atomic nucleus is less than the sum of the individual masses of the protons and neutrons which constitute it. The difference is a measure of the nuclear binding energy which holds the nucleus together. Iron has the most tightly bound nuclei next to nickel-62, an isotope of nickel with 28 protons and 34 neutrons, and consequently has no excess binding energy available to release through fusion processes.
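A worked example of this mass difference, using standard atomic masses that I supply rather than figures from the text: adding up 26 hydrogen atoms and 30 free neutrons and subtracting the mass of an iron-56 atom leaves about half an atomic mass unit, equivalent to roughly 8.8 MeV of binding energy per nucleon.

```python
# Illustrative calculation with standard atomic masses (supplied by me, not by
# the text): the binding energy of iron-56, where fusion stops paying off.
U_TO_MEV = 931.494   # MeV per atomic mass unit
M_H1 = 1.007825      # atomic mass of hydrogen-1 (proton plus electron), u
M_N = 1.008665       # mass of a free neutron, u
M_FE56 = 55.934936   # atomic mass of iron-56, u

mass_defect = 26 * M_H1 + 30 * M_N - M_FE56   # the electron masses cancel out
binding_energy = mass_defect * U_TO_MEV       # roughly 492 MeV in total
print(f"Binding energy of Fe-56: {binding_energy:.0f} MeV "
      f"(about {binding_energy / 56:.2f} MeV per nucleon)")
```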
No star, regardless of how hot it is, can generate energy by fusing elements heavier than iron; iron nuclei represent a very stable form of matter. Fusion of elements lighter than this or splitting of heavier ones leads to a slight loss in mass and a net release of nuclear binding energy. The latter principle, nuclear fission, is employed in nuclear fission weapons (“atom bombs”) by splitting large, massive atomic nuclei such as those of uranium or plutonium, while nuclear fusion of lighter nuclei takes place in hydrogen bombs and in the stars.
When a star much more massive than our Sun has exhausted its fuel supplies, it collapses and releases enormous amounts of gravitational energy converted into heat. It then becomes a (Type II) supernova. When the outer layers are thrown into interstellar space, the material can be incorporated into clouds of gas and dust (nebulae) that may form new stars and planets. The remaining core of the exploded star will become a neutron star or a black hole, depending upon how massive it is. It is believed that the heavy elements we find on Earth, for instance gold (Au, atomic number 79), are the result of ancient supernova explosions and were once a part of the Solar Nebula that formed our Solar System almost 4.6 billion years ago.
“ Without any nuclear fusion reactions to create the temperatures and pressures needed to support the star, gravity takes over and the star collapses in a matter of seconds. Fowler and colleagues calculated that the energy generated within the collapsing star is so great that it provides the conditions needed to create all the elements heavier than iron. As the outer layers of such a star collapse and fall inwards they are met by a blast wave rebounding from the collapsing core. The meeting of these two intense pulses of energy creates a shock wave that is so extreme that iron nuclei absorb progressive numbers of neutrons, building all the heavier elements from iron to uranium. The blast wave continues to spread outwards, and in its final and perhaps finest flourish it creates a supernova explosion that blows the star apart.”
The Ukrainian-born astrophysicist Iosif Shklovsky (1916-1985), who became a professor at Moscow University and a senior Soviet Union authority on radio astronomy and astrophysics, proposed that cosmic rays from supernovae might have caused mass extinctions on Earth. The hypothesis is difficult to verify even if true, but such explosions are among the most violent events in the universe, and a nearby (in astronomical terms) supernova could theoretically cause such a disaster. Shklovsky made theoretical and radio studies of supernovae.
Since a star that dies passes along its heavier elements, this means that each successive generation contains a higher percentage of heavy elements than the former one. The Sun is a member of a generation of stars known as Population I. An older generation is called Population II. A hypothetical Population III of extremely massive, short-lived stars is thought to have existed in the early universe, but as of 2010 no such objects have been observed in distant galaxies. This constitutes an area of active astronomical research. If such stars are not found then we have to adjust our theoretical models. Astrophysicists currently believe that the young universe consisted entirely of hydrogen and helium with trace amounts of lithium and beryllium, all created through Big Bang or primordial nucleosynthesis. All other chemical elements have been created later through stellar nucleosynthesis and supernova explosions.
Although it took only about a decade for nuclear fission to go from weapons to peaceful use in civilian power plants, this transition has been much slower for nuclear fusion. The American physicist Lyman Spitzer Jr., a graduate of Yale and Princeton, in 1951 founded the Princeton Plasma Physics Laboratory, a pioneering program in thermonuclear research to harness nuclear fusion as a clean source of energy. In Britain, the English Nobel laureate George Paget Thomson and his team began researching fusion. In the Soviet Union, similar efforts were led by the Russian physicists Andrei Sakharov and Igor Tamm. In 1968, a team there led by the Russian physicist Lev Artsimovich (1909-1973) achieved temperatures of ten million degrees in a tokamak magnetic confinement device, which after this became the preferred device for experiments with controlled nuclear fusion.
Progress has been made at sites in the USA, Europe and Japan, but no fusion reactor has so far managed to generate more energy than has been put into it. ITER (International Thermonuclear Experimental Reactor), an expensive international tokamak fusion research project with European, North American, Russian, Indian, Chinese, Japanese and Korean participation, is scheduled to be completed in France around 2018. There is substantial disagreement over how close we are to achieving commercially viable energy production based on nuclear fusion. Pessimists say we are still a century away, while optimists point out that promising advances have been made in recent years using high-energy laser systems.
The American astronomer Gerry Neugebauer (born 1932), son of the great Austrian historian of science Otto Neugebauer, did valuable pioneering work in infrared astronomy. He spent his entire career at the California Institute of Technology. Together with the US experimental physicist Robert B. Leighton (1919-1997), also at Caltech, he completed the first infrared survey of the sky. Leighton is also known for discovering five-minute oscillations in local surface velocities of the Sun, which opened up research into solar seismology. The American physicist Frank James Low (1933-2009) became a leader in the emerging field of infrared astronomy after inventing the gallium-doped germanium bolometer in 1961, which allowed the extension of observations to longer wavelengths than previously possible. He and his colleagues showed that Jupiter and Saturn emit more energy than they receive from the Sun.
Jupiter’s diameter is 142,984 kilometers, more than 11 times that of the Earth. It would take over one thousand Earths to fill up its volume. Jupiter alone contains almost two and a half times as much mass as the rest of the planets in our Solar System combined, but it would nonetheless have needed 75-80 times more mass to become a star. The lowest mass that an object can have and still be hot enough to sustain the fusion of regular hydrogen into helium in its core is about 8% or 0.08 of the Sun’s mass. Jupiter contains merely 0.001 solar masses.
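A quick check of these ratios, using approximate masses that I supply here: Jupiter comes out at roughly one thousandth of a solar mass, and it would need on the order of 80 times its present mass to reach the 0.08 solar-mass threshold, in line with the 75-80 figure quoted above.

```python
# Rough check of the figures above, using approximate masses supplied by me.
SOLAR_MASS = 1.989e30    # kg
JUPITER_MASS = 1.898e27  # kg

ratio = JUPITER_MASS / SOLAR_MASS   # roughly 0.001 solar masses
needed = 0.08 / ratio               # factor needed to reach about 0.08 solar masses
print(f"Jupiter ~ {ratio:.4f} solar masses; it would need roughly "
      f"{needed:.0f} times its mass to ignite ordinary hydrogen fusion")
```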
Bodies with 13-75 times Jupiter’s mass fuse deuterium, a rare isotope of hydrogen, into helium, and those with between 60 and 75 times Jupiter’s mass also fuse lithium-7 nuclei (three protons and four neutrons) into helium. Yet this will only occur briefly in astronomical terms due to the limited supply of these materials. Such objects are called brown dwarfs or “failed stars” and are intermediate between planets and stars. Brown dwarfs are not literally brown. They were first hypothesized in 1963 by astronomer Shiv Kumar. The American astronomer Jill Tarter (born 1944) proposed the name in 1975. She later became the director of the Center for SETI Research, which looks for evidence of intelligent life beyond the Earth.
The astronomer Frank Drake, born 1930 in Chicago in the United States, in 1961 devised the Drake Equation, an attempt to calculate the potential number of extraterrestrial civilizations in our galaxy. He has participated in an on-going search for signals of intelligent origin. While this line of work was initially associated with searching for radio waves from other civilizations, more recently those engaged in these matters have started looking for other types of signals, above all optical SETI. Very brief, but powerful pulses of laser light from other planetary systems can potentially carry immense amounts of concentrated information across vast distances of many light-years. Obviously, if extraterrestrial civilizations do exist, it is quite conceivable that they may be scientifically more sophisticated than we are today and may possess some forms of communication technology that are totally unknown to humans.
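The Drake Equation itself is simply a product of factors. The sketch below writes it out in Python; the numbers plugged in are arbitrary placeholders of my own, chosen only to show how uncertain guesses multiply into a final count, not estimates from Drake or from this essay.

```python
# The Drake Equation as commonly written; the factor values below are purely
# illustrative placeholders, not estimates from the text or from Drake himself.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L (civilizations we might detect)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1.0,   # average rate of star formation in the galaxy (per year)
          f_p=0.5,      # fraction of stars with planets
          n_e=2.0,      # potentially habitable planets per such system
          f_l=0.1,      # fraction on which life actually appears
          f_i=0.01,     # fraction of those that develop intelligence
          f_c=0.01,     # fraction that release detectable signals
          L=10_000)     # years such a civilization keeps transmitting
print(f"Illustrative result: N ~ {N:.2f}")
```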
The search for intelligent extraterrestrial life is not uncontroversial, especially when it comes to so-called “Active SETI” signals, where we beam signals into space in addition to passively recording signals we receive. The famous English mathematical physicist Stephen Hawking believes that intelligent aliens are likely to exist, but fears that a visit by them might have unfortunate consequences for us. “ We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet,” he argues. It is possible to imagine an alien civilization of nomads, looking to conquer and colonize whatever planets they can reach, instead of peaceful interstellar “philosopher kings.” Others find it implausible that intelligent aliens would travel across vast astronomical distances merely to colonize us.
The possibility of life beyond the Earth has been discussed for centuries. The English bishop and naturalist John Wilkins (1614-1672), who proposed a decimal system of weights and measures that foreshadowed the metric system, in The Discovery of a World in the Moone (1638) suggested that the Moon is a habitable world. Wilkins worked in the turbulent age of Oliver Cromwell and the English Civil War and was associated with men who went on to found the Royal Society. He was not the first person to entertain such views, which had been suggested by some ancient Greek authors. No lesser figure than Johannes Kepler had written a story, The Dream (Somnium), in which a human observer is transported to the Moon.
The author Bernard de Fontenelle (1657-1757), born in Rouen, Normandy, in northern France, in 1686 published Conversations on the Plurality of Worlds (Entretiens sur la pluralité des mondes), which supported the heliocentric model of Copernicus and spoke of the possibility of life on other planets. The colorful German (Hanoverian) storyteller Baron von Münchhausen (1720-1797), who had fought against the Turks for the Russian army, in his incredible and unlikely tales allegedly claimed to have personally visited the Moon.
In From the Earth to the Moon (1865) by the great French science fiction author Jules Verne, three men travel to the Moon in a projectile launched from a giant cannon. William Henry Pickering (1858-1938) from the USA, brother of Edward Pickering and otherwise a fine astronomer, in the 1920s believed he could observe swarms of insects on the Moon’s surface.
In the 1870s the Italian scholar Giovanni Schiaparelli had observed geological features on Mars which he called canali, “channels.” This was mistranslated into English as artificial “canals,” which fueled speculations about the possibility of intelligent life on that planet.
The English author H. G. Wells in 1898 published the influential science fiction novel The War of the Worlds. In it, the Earth is invaded by technologically superior Martians who eventually succumb not to our guns, but to our bacteria and microscopic germs, which we had evolved immunity against but they had not. In 1938, when commercial radio was in its first generation, a radio drama adaptation of Wells’ novel caused panic in the USA as thousands of listeners believed that it depicted a real, ongoing invasion. The man behind the broadcast, the American director Orson Welles (1915-1985), also wrote, directed, produced and acted in Citizen Kane from 1941, hailed as one of the best films from Hollywood’s Golden Age.
One of the earliest science fiction films, inspired by the writings of Jules Verne and H. G. Wells, was the black and white silent movie A Trip to the Moon from 1902 by the French filmmaker Georges Méliès (1861-1938). The Austrian-born motion-picture director Fritz Lang (1890-1976) created the costly silent film Metropolis in Germany in 1927. In the commercially successful Hollywood production E.T. the Extra-Terrestrial from 1982, directed by the influential American Jewish film director and producer Steven Spielberg (born 1946), a boy befriends a stranded, but friendly extraterrestrial and helps him to return home.
The American planetary scientist and science writer Carl Sagan (1934-1996), born in New York City, won enormous popularity as well as some criticism as a popularizer of astronomy and was a contributor to NASA’s Mariner, Viking, Voyager and Galileo expeditions to the planets. “ He helped solve the mysteries of the high temperatures of Venus (answer: massive greenhouse effect), the seasonal changes on Mars (answer: windblown dust), and the reddish haze of Titan (answer: complex organic molecules).” The Ukrainian astrophysicist Iosif Shklovsky in the Soviet Union was one of the first major scientists to propose serious examination of the possibility of extraterrestrial life. His book Intelligent Life in the Universe was translated and expanded by Carl Sagan, whose father was a Russian Jewish immigrant.
After infrared radiation was discovered in 1800 and ultraviolet radiation the year after, European scientists gradually mapped the electromagnetic spectrum — radio waves, microwaves, terahertz radiation, infrared radiation, visible light, UV, X-rays and gamma rays — until the French physicist Paul Villard found gamma radiation in the year 1900. Astronomers began observing at all of these wavelengths in the twentieth century, primarily in the second half of it.
Typically, only 2% of the light striking photographic film triggers a chemical reaction in the photosensitive material. Film and plates have therefore been abandoned in favor of the highly efficient electronic light detectors called charge-coupled devices (CCDs). CCDs respond to 70% or more of the light falling on them, and their resolution is currently better than that of film. A CCD is divided into an array of small, light-sensitive squares called picture elements or pixels. A megapixel is one million pixels. This is the same basic technology that is used in digital cameras for the mass consumer market. The Canadian physicist Willard Boyle (born 1924) and the physicist George E. Smith (born 1930) from the USA invented the first charge-coupled device in 1969. They shared the 2009 Nobel Prize in Physics in recognition of the tremendous importance their invention has had in the sciences, from astronomy to medicine.
Like radio and gamma ray astronomy, X-ray astronomy took off after World War II. Herbert Friedman (1916-2000), who spent most of his career at the US Naval Research Laboratory, found with rocket-borne instruments that the Sun weakly emits X-rays. The Nobel Prize-winning astrophysicist Riccardo Giacconi was born in Genoa in northern Italy in 1931 and earned his Ph.D. in cosmic ray physics at the University of Milan before moving to the USA. There he became one of the major pioneers in the discovery of cosmic X-ray sources, among them a number of suspected black holes. Giacconi and his American colleagues built instruments for X-ray observations that were launched into space. The first widely accepted black holes, such as the object called Cygnus X-1, were detected in the 1960s and 70s.
A black hole is an object with such a concentrated mass that no nearby object can escape its gravitational pull since the escape velocity, the speed required for matter to escape from its gravitational field, exceeds that of light. In the seventeenth century it had been established by Ole Rømer that light has a great, but finite speed, and Isaac Newton had introduced the concept of universal gravity. The idea that an object could have such a great mass that even light could not escape its gravitational pull was proposed independently in the late 1700s by the English natural philosopher John Michell (1724-1793) and the French mathematical astronomer Pierre-Simon Laplace, yet their ideas had little impact on later developments.
John Michell had studied at Cambridge University in England, where he taught Hebrew, Greek, mathematics and geology. He devised the famous experiment, successfully undertaken by Henry Cavendish in 1797-98 after Michell’s death, which measured the mass of the Earth. In addition to this, Michell is considered one of the founders of seismology. His interest in this subject was triggered by the powerful earthquake that destroyed the city of Lisbon, Portugal, in 1755. He showed that the focus of that earthquake was underneath the Atlantic Ocean.
Modern theories of black holes emerged after Einstein’s general theory of relativity from late 1915. The astrophysicist Karl Schwarzschild (1873-1916) was born in Frankfurt am Main in Germany to a Jewish business family and studied at the Universities of Strasbourg and Munich. From 1901 until 1909 he was professor at Göttingen and director of the Observatory there, and in 1909 he became director of the Astrophysical Observatory in Potsdam. On the outbreak of World War I in August 1914 he volunteered for German military service in Belgium and then Russia, where he contracted an illness that caused his death at the age of 42.
While on the Russian front, he completed the first two exact solutions of the Einstein field equations, which had been presented in November 1915. For a nonrotating black hole, the “Schwarzschild radius” (Rg) of an object of mass M is given by a formula where G is the universal gravitational constant and c is the speed of light: Rg = 2GM/c². The Schwarzschild radius defines the spherical outer boundary of a black hole, its event horizon. Ironically, Schwarzschild himself apparently did not believe in the physical reality of such objects.
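Plugging the Sun’s mass into Schwarzschild’s formula is a standard exercise and gives a radius of about three kilometers, that is, the size to which the Sun would have to be compressed to become a black hole. The constants in the sketch below are SI values that I supply.

```python
# Applying Schwarzschild's formula Rg = 2GM/c^2 to the Sun, with SI constants
# supplied by me; the answer comes out near 3 kilometers.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius of the event horizon of a nonrotating black hole of this mass."""
    return 2 * G * mass_kg / C ** 2

print(f"Schwarzschild radius of the Sun: {schwarzschild_radius(SOLAR_MASS):.0f} m")
```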
According to scholar Ted Bunn, “Almost immediately after Einstein developed general relativity, Karl Schwarzschild discovered a mathematical solution to the equations of the theory that described such an object. It was only much later, with the work of such people as Oppenheimer, Volkoff, and Snyder in the 1930s, that people thought seriously about the possibility that such objects might actually exist in the Universe. (Yes, this is the same Oppenheimer who ran the Manhattan Project.) These researchers showed that when a sufficiently massive star runs out of fuel, it is unable to support itself against its own gravitational pull, and it should collapse into a black hole. In general relativity, gravity is a manifestation of the curvature of spacetime. Massive objects distort space and time, so that the usual rules of geometry don’t apply anymore. Near a black hole, this distortion of space is extremely severe and causes black holes to have some very strange properties.”
If the mass that creates a black hole is not rotating, then the black hole does not rotate either. Nonrotating ones are called Schwarzschild black holes. When the matter that creates a black hole possesses angular momentum, that matter collapses to a ring-shaped singularity located inside the black hole between its center and the event horizon. Such rotating black holes are called Kerr black holes, after the mathematician Roy Kerr (born 1934) from New Zealand, who calculated their structure in 1963. A black hole is empty except for the singularity.
The American physicist John Archibald Wheeler (1911-2008) is credited with having popularized the terms black hole and wormhole, a tunnel between two black holes which could hypothetically provide a shortcut between their end points. While a popular concept in science fiction literature, it has so far not been proven that wormholes actually exist. Wheeler received many prestigious awards, among them the Wolf Prize in Physics. He was also an influential teacher of many other fine American physicists, among them Richard Feynman as well as Charles W. Misner (born 1932) and Kip Thorne (born 1940), who are both considered among the world’s leading experts on the astrophysical implications of general relativity.
The physicist Yakov B. Zel’dovich (1914-1987), born in Minsk into a Jewish family, played a major role in the development of thermonuclear weapons in the Soviet Union and was a pioneer in attempts to relate particle physics to cosmology. Together with Rashid Sunyaev (born 1943) he proposed the Sunyaev-Zel’dovich effect, an important method for determining absolute distances in space. Sunyaev has developed a model of disk accretion onto black holes and of X-radiation from matter spiralling into such a hole. Working in Moscow, Sunyaev led the team which built the X-ray observatory attached to the pioneering Soviet (and later Russian) MIR space station, which was constructed during the late 1980s and early 1990s.
The English theoretical physicist Stephen Hawking was born in Oxford in 1942 and studied at University College, Oxford. Hawking then went on to Cambridge University to do research in cosmology. In the 1970s he predicted that black holes, contrary to previous assumptions, can emit radiation and thus mass. This has become known as Hawking or Bekenstein-Hawking radiation after Jacob Bekenstein (born 1947), an Israeli Jewish physicist at the Hebrew University of Jerusalem. His work combined relativity, thermodynamics and quantum mechanics. Unlike Einstein’s theories, which have been empirically verified repeatedly, it is possible that Hawking’s ideas about black hole evaporation may never be directly observed.
Stephen Hawking, in collaboration with the English mathematical physicist Roger Penrose (born 1931), developed a new mathematical technique for analyzing the relation of points in spacetime. Hawking became a scientific celebrity, and his serious physical handicap contributed to the general public’s fascination with his person. His popular science book A Brief History of Time from 1988 has sold millions of copies in dozens of languages. In the twenty-first century, following the development of sophisticated electronic computers, complex computer simulations can be used to study the conditions in and around black holes. They are now believed to be dynamic, evolving, energy-storing and energy-releasing objects.
The Dutch-born American astronomer Maarten Schmidt (born 1929) earned his bachelor’s degree at the University of Groningen in the Netherlands and his Ph.D. under the great astronomer Jan Hendrik Oort at the University of Leiden in 1956. Schmidt was one of many prominent Dutch-born astronomers of the twentieth century, a respectable number for such a small nation. Some of them, like Oort, Willem de Sitter, Jacobus Kapteyn and Hendrik C. van de Hulst, remained in the Netherlands, whereas others like Sidney van den Bergh and Gerard Kuiper moved to North America. Sidney van den Bergh (born 1929) has worked on everything from star clusters to cosmology, but the research for which he is best known is in the classification of galaxies, the study of supernovae and the extragalactic distance scale.
Maarten Schmidt joined the California Institute of Technology. In 1963 he studied the spectrum of an object known as 3C 273 and found that it had a very high redshift, indicating that it was extremely far away from us. He investigated the distribution of such quasars (quasi-stellar objects) and discovered that they were more abundant when the universe was younger. The American radio astronomer Jesse Greenstein (1909-2002) collaborated with him in this work. Other quasars were soon found, but it was difficult to explain how they could generate enough energy to shine so brightly at distances of billions of light-years. Quasars were among the most distant, and by extension oldest, objects ever observed in the universe.
The English astrophysicist Donald Lynden-Bell (born 1935) was educated at the University of Cambridge in Britain, where he eventually became professor of astrophysics. He made significant contributions to the theories of star motions, spiral structure in galaxies, chemical evolution of galaxies and the distributions and motions of galaxies and quasars. In 1969 he proposed that black holes are at the centers of many galaxies and provide the energy sources for quasars, powered by the collapse of great amounts of material into massive black holes.
A black hole can attract gas from its neighbors, which then swirls into it in the form of an extremely hot accretion disk. Matter that spirals into it emits copious quantities of X-rays and gamma-rays, which can be detected by us although the black hole itself cannot be directly observed since light cannot escape it. There are several classes of black holes; some are just a few solar masses, formed from the collapse of a large star. There may be an intermediate class of such bodies, too, but most if not all galaxies, including our own Milky Way, are believed to harbor supermassive black holes of millions or even billions of solar masses in their centers. These objects are believed to constitute a key force in shaping the lifecycle of galaxies.
Most astronomers today believe that quasars are created by supermassive black holes that are growing, perhaps forming when two large galaxies collide. Previously, black holes were generally seen as the endpoints of evolution, but a detailed survey has found that giant black holes were already common 13 billion years ago. The universe’s first, probably extremely massive, stars collapsed after a few million years. In a remarkably short period of time and in a process that is not yet fully understood, smaller black holes apparently merged into supermassive centerpieces of star-breeding galaxies and evolved into galactic sculptors.
General relativity allows for the existence of gravitational waves, small distortions of spacetime geometry which propagate through space. However, just like black holes, these were initially regarded by most scientists as purely mathematical constructs rather than actually existing physical phenomena. Significant gravitational waves are thought to be generated through the collision and merger of dense objects such as stellar black holes or neutron stars.
In 1974 the American astrophysicists Russell Hulse (born 1950) and Joseph Taylor (born 1941) discovered a pair of pulsars (neutron stars) in close orbit around each other. They shared the 1993 Nobel Prize in Physics for their studies of this pair, whose behavior deviates from that predicted by Newton’s theory of gravity. Einstein’s general theory of relativity predicts that they should lose energy by emitting gravitational waves, in roughly the same manner as a system of moving electrical charges emits electromagnetic waves.
These two very dense bodies are rotating faster and faster about each other in an increasingly tight orbit. The change is tiny, but noticeable, and is in agreement with what it should be according to the general theory of relativity. This is seen as an indirect proof of the existence of gravitational waves. We have to wait until later in the twenty-first century for a direct demonstration of their existence, or to revise our theories if it turns out that they do not exist.
The Polish astronomer Bohdan Paczynski (1940-2007) was born in Vilnius, Lithuania, educated at Warsaw University in Poland and in 1982 moved to Princeton University in the USA. He was a leading expert on the lives of stars. Because gravity bends light rays or, rather, appears to do so because it bends the fabric of space itself, an astronomical object passing in front of another can under certain conditions focus its light in a manner akin to a telescope lens. Paczynski showed that this effect could be applied to survey the stars in our galaxy. This is called gravitational microlensing. The possibility of gravitational lensing had been predicted by Einstein himself, but Paczynski worked out its technical underpinnings. He also championed the idea that gamma ray bursts originate billions of light-years away.
Gravitational lensing has today emerged as a highly useful tool in astronomical research. The phenomenon at the root of gravitational lensing is the deflection of light by gravitational fields predicted by Albert Einstein’s general relativity. The deflection has well-known observable effects, such as multiple images and the magnification of images.
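In the weak-field limit the deflection of a light ray passing a mass M at impact parameter b is given by the standard general-relativistic result alpha = 4GM/(c^2 b). The sketch below (constants and the solar radius supplied by me) reproduces the famous deflection of about 1.75 arcseconds for starlight grazing the Sun, the effect measured during the 1919 eclipse expeditions.

```python
import math

# The standard weak-field deflection angle from general relativity,
# alpha = 4*G*M / (c^2 * b); constants and the Sun's radius supplied by me.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
SOLAR_MASS = 1.989e30  # kg
SOLAR_RADIUS = 6.96e8  # m, impact parameter for a ray grazing the Sun

alpha_rad = 4 * G * SOLAR_MASS / (C ** 2 * SOLAR_RADIUS)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"Deflection of starlight grazing the Sun: {alpha_arcsec:.2f} arcseconds")
```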
Gamma ray bursts are short-lived but extremely powerful bursts that can briefly shine hundreds of times brighter than a regular supernova. They were discovered in the late 1960s, at the height of the Cold War, by military satellites designed to detect gamma radiation pulses from nuclear weapons tests. In 2008, NASA’s Swift satellite detected such an explosion 7.5 billion light-years away that was so powerful that its afterglow was briefly visible to the naked eye, making it the most distant object ever seen by human eyes without optical aid.
Gamma ray bursts are among the most powerful explosions in the universe. The light from the most distant such event yet recorded reached our world from more than 13 billion light-years away in 2009. That explosion, which lasted just a little more than a second, released roughly 100 times more energy than our Sun will release during its entire lifetime. Most likely, it originated from a dying star with far greater mass than the Sun in the younger universe.
Even after the introduction of the telescope it took centuries for Western astronomers to work out the true scale of the universe. The English astronomer and architect Thomas Wright (1711-1786) suggested around 1750 that the Milky Way was a disk-like system of stars and that there were other star systems similar to it, only very far away from us. Soon after, Immanuel Kant in 1755 hypothesized that the Solar System is part of a huge, lens-shaped collection of stars and that similar such “island universes” exist elsewhere, too. Kant’s thoughts about the universe, however, were philosophical and had little observational content.
Johann Heinrich Lambert (1728-1777), a Swiss-born mathematician, astronomer and philosopher, “provided the first rigorous proof that π (the ratio of a circle’s circumference to its diameter) is irrational, meaning that it cannot be expressed as the quotient of two integers.” Lambert, the son of a tailor, was largely self-educated and early in his life began astronomical investigations with instruments he built for himself. He made a number of innovations in the study of heat and light and corresponded with Kant, with whom he shares the honor of being among the first to believe that certain nebulae are disk-shaped galaxies like the Milky Way.
William Herschel’s On the Construction of the Heavens from 1785 was the first quantitative analysis of the Milky Way’s shape based on careful telescopic observations. William Parsons in Ireland with the largest telescope of the nineteenth century, the Leviathan of Parsonstown, was after 1845 able to see the spiral structure of some nebulae, what we call spiral galaxies. Already in 1612 the German astronomer Simon Marius had published the first systematic description of the Andromeda Nebula (Galaxy) from the telescopic era, but he could not resolve it into individual stars. The decisive breakthrough came in the early twentieth century.
The Mount Wilson Observatory in California was founded by George Ellery Hale. He offered the young astronomer Harlow Shapley (1885-1972) a research post. Shapley had earned a Ph.D. at Princeton University in 1913 and in 1921 became director of the Harvard College Observatory. Before 1920 he had made his greatest single scientific discovery: That our galaxy was much bigger than earlier estimates by William Herschel made it out to be, and that the Sun was not close to its center. He didn’t get everything right, though. In the Great Debate with fellow American astronomer Heber Curtis (1872-1942) he argued that the mysterious spiral nebulae were merely gas clouds that were a part of the Milky Way and that everything consisted of one large galaxy: our own. Curtis, on the other hand, claimed that the universe consisted of many galaxies comparable to our own. Shapley was reasonably correct regarding the size of our galaxy, but Curtis was right that our universe is composed of multiple galaxies.
Jacobus Kapteyn (1851-1922) started the highly productive twentieth century Dutch school of astronomers. He studied physics at the University of Utrecht in the Netherlands, spent three years at the Leiden Observatory and thereafter founded and led the study of astronomy at the dynamic University of Groningen from 1878 to 1921. Kapteyn observed that many of the stars in the night sky could be roughly divided into two streams, moving in nearly opposite directions. This insight led to the finding of the galactic rotation of the Milky Way.
The Swedish astronomer Bertil Lindblad (1895-1965) was a graduate of the University of Uppsala and directed the Stockholm Observatory in Sweden from 1927-65. He studied the structure of star clusters, but his most important work was regarding galactic rotation. His efforts led directly to Jan Oort’s theory of differential galactic rotation. He confirmed Shapley’s approximate distance to the center of our galaxy and estimated its total mass. Oort at the University of Leiden in 1927 confirmed Lindblad’s theory that the Milky Way rotates, and their model of galactic rotation was verified by the Canadian astronomer John Plaskett (1865-1941), originally a mechanic employed by the University of Toronto physics department. Following the lead of Kapteyn, Lindblad and Oort, the Dutch astronomer Hendrik C. van de Hulst (1918-2000) and others in the 1950s mapped the clouds of the Milky Way and delineated its spiral structure. “Van de Hulst made extensive studies of interstellar grains and their interaction with electromagnetic radiation. He wrote important books on light scattering and radio astronomy. He investigated the solar corona and the earth’s atmosphere.”
The job of cataloging individual stars and recording their position and brightness from photographic plates at the Harvard College Observatory was done by a group of women, “human computers” working with Edward Pickering, among them the American astronomer Henrietta Swan Leavitt (1868-1921). The concept of “standard candles,” stars whose brightness can be reliably calculated and used as benchmarks to measure vast astronomical distances, was introduced by Leavitt for Cepheid variable stars. She became head of the photographic photometry department, and during her career she discovered more than 2,400 variable stars. This work aided Edwin Hubble in making his groundbreaking discoveries.
Scientists are flawed like other people. Newton could be a difficult man to deal with, yet he was undoubtedly one of the greatest geniuses in history. Henry Cavendish was a brilliant experimental scientist as well as painfully shy, and Nikola Tesla was notoriously eccentric. Judging from the many stories about him, Edwin Hubble had an ego the size of a small country, but that doesn’t change the fact that he was a great astronomer whose work permanently altered our view of the universe. He was a sociable man who partied with movie stars like Charlie Chaplin and Greta Garbo and with famous writers such as Aldous Huxley.
His contemporary Milton L. Humason (1891-1972), despite a very limited formal education, was a meticulous observer. In 1919, Hubble joined the Mount Wilson Observatory. The 2.5 meter Hooker telescope there was completed before 1920, at which point it was the largest telescope in the world. Using this, Hubble identified Cepheid variable stars in Andromeda. This allowed him to show that the distance to Andromeda was greater than Shapley’s proposed extent of the Milky Way. Hubble demonstrated that there are countless galaxies of different shapes and sizes out there, and that the universe is far larger than anybody had imagined. He then formulated Hubble’s Law and introduced the concept of an expanding universe. “ His investigation of these and similar objects, which he called extragalactic nebulae and which astronomers today call galaxies, led to his now-standard classification system of elliptical, spiral, and irregular galaxies, and to proof that they are distributed uniformly out to great distances. (He had earlier classified galactic nebulae.) Hubble measured distances to galaxies and with Milton L. Humason extended Vesto M. Slipher’s measurements of their redshifts, and in 1929 Hubble published the velocity-distance relation which, taken as evidence of an expanding Universe, is the basis of modern cosmology.”
The Austrian physicist Christian Doppler described what is known as the Doppler Effect for sound waves in the 1840s and predicted that it would be valid for other kinds of waves, too. An observed redshift in astronomy is believed to occur due to the Doppler Effect whenever a light source is moving away from the observer, displacing the spectrum of that object toward the red wavelengths. Hubble discovered that the degree of redshift observed in the light coming from other galaxies increased in proportion to the distance of those galaxies from us.
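Hubble’s velocity-distance relation is usually written v = H0 × d, and for nearby galaxies the redshift is approximately v/c. The short sketch below uses a Hubble constant of about 70 km/s per megaparsec, a rounded present-day value that I supply rather than a figure taken from the text.

```python
# A minimal sketch of Hubble's Law, v = H0 * d, with the low-redshift Doppler
# approximation z ~ v/c.  The value of H0 used here is an assumption of mine.
H0 = 70.0              # km/s per megaparsec (approximate modern value)
C_KM_S = 299_792.458   # speed of light in km/s

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity in km/s for a galaxy at the given distance."""
    return H0 * distance_mpc

for d in (10, 100, 1000):   # distances in megaparsecs
    v = recession_velocity(d)
    print(f"d = {d:5d} Mpc -> v ~ {v:8.0f} km/s, z ~ {v / C_KM_S:.3f}")
```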
The work of Walter Baade in the 1940s and the American astronomer Allan Sandage (born 1926) in the 1950s resulted in revisions of the value of Hubble’s Constant and by extension the age of the universe. Sandage earned his doctorate under Baade and went on to determine the first reasonably accurate value for the age of the universe. “He has calibrated all of the ‘standard candles’ to determine distances of remote galaxies and has several times presented (often with Gustav Tammann) revised estimates of the value of the Hubble constant.”
Hubble’s observational work led the great majority of scientists to believe in the expansion of the universe. This had a huge impact on cosmology at the time, among others on the Dutch mathematician and astronomer Willem de Sitter (1872-1934). De Sitter had studied mathematics at the University of Groningen. A chance meeting with the Scottish astronomer David Gill led to an invitation to work at the Observatory at the Cape of Good Hope. After four years there, de Sitter returned to the Netherlands and became a mathematical astronomer, earning his doctorate under Jacobus Kapteyn. He spent most of his career at the University of Leiden, where he expanded its fine astronomy program. He performed statistical studies of the distribution and motions of stars but is best known for his contributions to cosmology.
According to writers J. J. O’Connor and E. F. Robertson, “Einstein had introduced the cosmological constant in 1917 to solve the problem of the universe which had troubled Newton before him, namely why does the universe not collapse under gravitational attraction. This rather arbitrary constant of integration which Einstein introduced admitting it was not justified by our actual knowledge of gravitation was later said by him to be the greatest blunder of my life. However de Sitter wrote in 1919 that the term ‘… detracts from the symmetry and elegance of Einstein’s original theory, one of whose chief attractions was that it explained so much without introducing any new hypothesis or empirical constant.’” In 1932, Einstein and de Sitter published a joint paper in which they proposed the Einstein-de Sitter model of the universe. This is a particularly simple solution of the field equations of general relativity for an expanding universe. They also prophetically argued in this paper that there might be large amounts of matter which does not emit light and has not been detected.
The cosmologist Georges Lemaître (1894-1966) from Belgium was a Catholic priest as well as a trained scientist. The combination is not unique. The Italian astronomer Angelo Secchi was a priest and the creator of the first modern system of stellar classification; the Bohemian scholar Gregor Mendel, too, was a priest and the founder of modern genetics. World War I interrupted Lemaître’s studies. Serving as an artillery officer he witnessed one of the first poison gas attacks in history. After the war he studied physics and was ordained as an abbé.
In 1925 he accepted a professorship at the Catholic University of Louvain near Brussels. He reviewed the general theory of relativity and his calculations showed that the universe had to be either shrinking or expanding. Lemaître argued that the entire universe was initially a single particle — the “primeval atom” — which disintegrated in a massive explosion, giving rise to space and time. He published a model of an expanding universe in 1927 which had little impact then, but in 1930, following Hubble’s work, Lemaître’s former teacher at Cambridge University, Arthur Eddington, shared his paper with de Sitter. Albert Einstein confirmed that Lemaître’s work “fits well into the general theory of relativity.”
Unknown to Lemaître, another person had independently come up with overlapping ideas. This was the Russian mathematician Alexander Friedmann (1888-1925), who in 1922 had published a set of possible mathematical solutions that gave a non-static universe. As early as 1905 he had written a mathematical paper and submitted it to the German mathematician David Hilbert for publication. In 1914 he went to Leipzig to study with the Norwegian physicist Vilhelm Bjerknes, the leading theoretical meteorologist of the time. He then got caught up in the turbulent times of the Russian Revolution in 1917 and the birth of the Soviet Union.
Friedmann’s work was hampered by a very abstract approach and aroused little interest at the time of publication. Lemaître attacked the issue from a much more physical point of view. Friedmann died from typhoid fever in 1925, but he lived to see the city of Saint Petersburg renamed Leningrad after the revolutionary leader and Communist dictator Vladimir Lenin (1870-1924). The astrophysicist George Gamow studied briefly under Alexander Friedmann, but he fled the country in 1933 due to the increasingly brutal repression of the Communist regime, which directly or indirectly killed millions of its own citizens during this time period.
Although Lemaître’s “primeval atom” was the first version of this theory of the origin of the universe, a more comprehensive model was published in 1948 by Gamow and the cosmologist Ralph Alpher (1921-2007) in the USA. The term “Big Bang” was coined somewhat mockingly by Fred Hoyle, who did not believe in it. Gamow decided as a joke to include his friend Hans Bethe as co-author of the paper, thus making it known as the Alpher, Bethe, Gamow or alpha-beta-gamma paper, after the first three letters of the Greek alphabet. It can be seen as the beginning of Big Bang cosmology as a coherent scientific model.
Yet this joke had the practical effect of downplaying Alpher’s contributions. He was then a young doctoral student, and when his name appeared next to those of two of the most famous astrophysicists in the world it was easy to assume that he was a junior partner. As a matter of fact, he made very substantial contributions to the Big Bang model whereas Hans Bethe, brilliant though he was as a scientist, in this case had contributed very little. Ralph Alpher in many ways ended up being the “forgotten father” of the Big Bang theory.
Alpher published two papers in 1948. In the second, written with the physicist Robert Herman (1914-1997), he predicted the existence of a cosmic background radiation as an “echo” of the Big Bang. Sadly, astronomers did not bother to search for this proposed echo at the time; radio astronomy was then still in its infancy. Alpher and Herman went on to calculate the present temperature corresponding to this energy. The remnant glow from the Big Bang must still exist in the universe today, although greatly reduced in intensity by the expansion of space.
The cosmic microwave background radiation, which is considered one of the strongest proofs in favor of the Big Bang theory, was accidentally discovered by Robert Wilson (born 1936) and Arno Penzias (born 1933) in the USA in 1964. Yet they did not initially grasp the full significance of what they had found, and Alpher and Herman were overlooked entirely when Wilson and Penzias received the Nobel Prize in Physics in 1978. In the early 1960s the Canadian James Peebles (born 1935) together with Robert H. Dicke (1916-1997) and David Todd Wilkinson (1935-2002) from the USA had also predicted the existence of the cosmic background radiation and were preparing to search for it just before it was found by Penzias and Wilson.
An alternative Steady State model was developed in 1948 by the Englishman Fred Hoyle together with Thomas Gold (1920-2004), an Austrian American astrophysicist born in Vienna to a wealthy Jewish industrialist, and Hermann Bondi (1919-2005), an Anglo-Austrian who was also brought up in Vienna, arrived at Cambridge in 1937 and worked with Hoyle on radar during the Second World War. The Steady State model declined in popularity after the discovery of the cosmic microwave background radiation, the clearest evidence discovered so far indicating that something like the Big Bang really happened back in a very distant past.
Although it may sound counterintuitive at first, quantum physicists operate with the concept of vacuum energy, energy that is intrinsic to space itself and can create “virtual” subatomic particles. According to the quality magazine Scientific American, “Far from being empty, modern physics assumes that a vacuum is full of fluctuating electromagnetic waves that can never be completely eliminated, like an ocean with waves that are always present and can never be stopped. These waves come in all possible wavelengths, and their presence implies that empty space contains a certain amount of energy—an energy that we can’t tap, but that is always there. Now, if mirrors are placed facing each other in a vacuum, some of the waves will fit between them, bouncing back and forth, while others will not. As the two mirrors move closer to each other, the longer waves will no longer fit—the result being that the total amount of energy in the vacuum between the plates will be a bit less than the amount elsewhere in the vacuum. Thus, the mirrors will attract each other, just as two objects held together by a stretched spring will move together as the energy stored in the spring decreases. This effect, that two mirrors in a vacuum will be attracted to each other, is the Casimir Effect.” It was first predicted in 1948 by the Dutch physicist Hendrik Casimir (1909-2000).
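As a rough numerical illustration of the effect described in the quotation, the sketch below evaluates the standard textbook expression for the Casimir pressure between two ideal parallel plates in vacuum, P = π²ħc / (240 a⁴); the one-micrometer plate separation is simply an assumed example value.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure(separation_m):
    """Attractive pressure (Pa) between two ideal parallel plates in vacuum."""
    return math.pi**2 * HBAR * C / (240.0 * separation_m**4)

# At an assumed separation of one micrometer the pressure is tiny but measurable.
print(f"{casimir_pressure(1e-6):.2e} Pa")   # roughly 1.3e-3 Pa
```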
According to our best current models, in its first 30,000 years the universe was radiation-dominated, during which time photons prevented matter from forming clumps. In the early stages there was an ongoing process of particles and antiparticles annihilating each other and spontaneously coming into being from radiation. Luckily for us there was a tiny surplus of ordinary matter; otherwise matter as we know it could not have existed. After this period the universe became matter-dominated, when clumps of matter could form. Most astrophysicists today believe that during the first 379,000 or so years, matter and energy formed an opaque plasma called the primordial fireball. The cosmic microwave background radiation (CMB) is believed to be the greatly redshifted remnant of the universe as it existed about 379,000 years after the Big Bang. It therefore contains the oldest photons in the observable universe.
By that point, the expansion of spacetime had caused the temperature of the universe to fall below about 3000 K, enabling protons and electrons to combine into hydrogen atoms, at which point the universe became transparent. Following billions of years of expansion the CMB radiation is now very cold, less than three degrees above absolute zero. This nearly perfect blackbody radiation shines primarily in the microwave portion of the electromagnetic spectrum and is consequently invisible to the human eye, but it is isotropic and fills the universe in every direction we can observe. Only with very sensitive instruments can cosmologists detect minute fluctuations in the cosmic microwave background temperature, yet these tiny fluctuations were of critical importance for the formation of stars and galaxies.
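A back-of-the-envelope check of these temperatures, assuming the standard result that the radiation temperature falls in proportion to 1/(1 + z) and taking a recombination redshift of roughly 1100:

```python
T_RECOMBINATION_K = 3000.0   # approximate temperature when hydrogen atoms formed
Z_RECOMBINATION = 1100.0     # approximate redshift of the CMB

t_today = T_RECOMBINATION_K / (1.0 + Z_RECOMBINATION)
print(f"CMB temperature today ~ {t_today:.2f} K")   # close to the measured 2.7 K
```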
In 1989, NASA launched its Cosmic Background Explorer (COBE) satellite into space under the leadership of American astrophysicist John C. Mather (born 1946). Detectors on board the COBE satellite were designed by a team led by American astrophysicist George Smoot (born 1945) and were sensitive enough to measure minute fluctuations, corresponding to the presence of tiny seeds of matter clumping together under the influence of gravity. A follow-up mission was the Wilkinson Microwave Anisotropy Probe satellite — WMAP — from the USA. Mather and Smoot shared the 2006 Nobel Prize in Physics for providing a view of the CMB in unprecedented detail. The European Planck space observatory was launched in 2009 and will study the cosmic microwave background radiation in even greater detail over the entire sky.
Alan Guth (born 1947) is a leading American theoretical physicist and cosmologist, born to a middle-class Jewish couple in New Jersey. He graduated from the Massachusetts Institute of Technology (MIT) in 1968 and held postdoctoral positions at Princeton University, Columbia University, Cornell University and the Stanford Linear Accelerator Center. He was initially interested in elementary particle physics but later shifted to cosmology, bridging the gap between the very big and the very small. In the 1980s he proposed that the expansion of the universe was propelled by a repulsive anti-gravitational force generated by an exotic form of matter. “Although Guth’s initial proposal was flawed (as he pointed out in his original paper), the flaw was soon overcome by the invention of ‘new inflation,’ by Andrei Linde in the Soviet Union and independently by Andreas Albrecht and Paul Steinhardt in the US.”
Andrei Linde (born 1948) is a prominent Russian theoretical physicist, originally educated at Moscow State University and the Lebedev Physical Institute in what was then the Soviet Union. He eventually moved to the West at the end of the Cold War, first as a staff member of CERN in Western Europe, then as a professor of physics at Stanford University in the USA. The idea of an inflationary multiverse (reality consisting of many universes with different physical properties) was proposed in 1982. According to the concept of cosmic inflation championed by Guth and Linde, during fractions of a second the young universe underwent exponential expansion, doubling in size at least 90 times. As the magazine Discover states:
“Much of today’s interest in multiple universes stems from concepts developed in the early 1980s by the pioneering cosmologists Alan Guth at MIT and Andrei Linde, then at the Lebedev Physical Institute in Moscow. Guth proposed that our universe went through an incredibly rapid growth spurt, known as inflation, in the first 10^-30 second or so after the Big Bang. Such extreme expansion, driven by a powerful repulsive energy that quickly dissipated as the universe cooled, would solve many mysteries. Most notably, inflation could explain why the cosmos as we see it today is amazingly uniform in all directions. If space was stretched mightily during those first instants of existence, any extreme lumpiness or hot and cold spots would have immediately been smoothed out. This theory was modified by Linde, who had hit on a similar idea independently. Inflation made so much sense that it quickly became a part of the mainstream model of cosmology.”
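The “doubling in size at least 90 times” mentioned above corresponds to a staggering overall stretch factor, as this one-line calculation shows; the figure of 90 doublings is the one quoted in the text, and the exact number varies between inflation models.

```python
doublings = 90   # the minimum number of doublings cited above
print(f"Expansion factor after {doublings} doublings: {2**doublings:.2e}")
# -> about 1.24e+27
```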
According to the models we operate with there are four basic forces at work in the universe: the strong and weak nuclear forces, electromagnetism and gravity. Of these, gravity is by far the weakest and is therefore more or less ignored when dealing with particles at the subatomic level, yet it is the most important force when dealing with the universe at large.
The strong nuclear force is the most powerful force in nature, but it was the last to be understood. It binds quarks together to make subatomic particles such as protons and neutrons and holds together the atomic nucleus. The electromagnetic force holds atoms and molecules together. As those who have played with magnets will know, like charges (+ +, or − −) repel one another whereas opposites attract. Protons have a positive electrical charge and must therefore feel a very strong repulsive electromagnetic force from neighboring protons. So why does the nucleus hold together? Because the strong nuclear force is so powerful that it overwhelms this repulsion. Yet just like the weak nuclear force it only works over very short distances, which is why very large atomic nuclei, heavier than uranium, tend to be unstable and radioactive. As Neil F. Comins and William J. Kaufmann III state in their book Discovering the Universe:
“Their influences extend only over atomic nuclei, distances less than about 10^-15 m. The strong nuclear force holds protons and neutrons together. Without this force, nuclei would disintegrate because of the electromagnetic repulsion of their positively charged protons. Thus, the strong nuclear force overpowers the electromagnetic force inside nuclei. The weak nuclear force is at work in certain kinds of radioactive decay, such as the transformation of a neutron into a proton. Protons and neutrons are composed of more basic particles, called quarks. A proton is composed of two ‘up’ quarks and one ‘down’ quark, whereas a neutron is made of two ‘down’ quarks and one ‘up’ quark. The weak nuclear force is at play whenever a quark changes from one variety to another… at extremely high temperatures the electromagnetic force, which works over all distances under ‘normal’ circumstances, and the weak force, which only works over very short distances under the same ‘normal’ circumstances, become identical. They are no longer separate forces, but become a single force called the electroweak force. The experiments verifying this were done at the CERN particle accelerator in Europe in the 1980s.”
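To get a sense of how strong the repulsion described above really is, the sketch below evaluates Coulomb’s law for two protons about one femtometre apart, a typical nuclear separation chosen here purely for illustration.

```python
K_COULOMB = 8.9875517873681764e9   # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19         # proton charge, C

def coulomb_force(separation_m):
    """Repulsive force (N) between two protons at the given separation."""
    return K_COULOMB * E_CHARGE**2 / separation_m**2

# At one femtometre the repulsion is roughly 230 N, an enormous force on the
# scale of a nucleus; the strong force must overpower it to hold nuclei together.
print(f"{coulomb_force(1e-15):.0f} N")
```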
Physicists have found that at sufficiently high temperatures, these various forces begin to behave in the same way. According to current theoretical models, from the Big Bang until about 10^-43 seconds afterward, a moment known as the Planck time, all four forces are believed to have been united as one. Before the Planck time, the universe was so hot and dense that the known laws of physics do not describe the behavior of spacetime, matter and energy. At this point, matter as we think of it did not yet exist, but the temperature of the tiny and rapidly expanding universe may have been on the order of 10^32 K. After this, gravity separated from the three other forces, which then separated from each other a little later.
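The figures of 10^-43 seconds and 10^32 K are not arbitrary; they follow from combining a handful of fundamental constants. A minimal sketch, using standard values of those constants:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light, m/s
K_B = 1.380649e-23       # Boltzmann constant, J/K

planck_time = math.sqrt(HBAR * G / C**5)               # ~5.4e-44 s, order 10^-43 s
planck_length = math.sqrt(HBAR * G / C**3)             # ~1.6e-35 m
planck_temperature = math.sqrt(HBAR * C**5 / G) / K_B  # ~1.4e32 K

print(f"Planck time        = {planck_time:.2e} s")
print(f"Planck length      = {planck_length:.2e} m")
print(f"Planck temperature = {planck_temperature:.2e} K")
```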
The physicist Abdus Salam (1926-1996) from present-day Pakistan together with Sheldon Lee Glashow (born 1932) and Steven Weinberg (born 1933) from the United States shared the 1979 Nobel Prize in Physics for their work in this field. Building on these theoretical models from the 1970s, the Dutch accelerator physicist Simon van der Meer (born 1925) and the Italian particle physicist Carlo Rubbia (born 1934) were awarded the 1984 Nobel Prize in Physics for their work at CERN and their contributions to the discovery of the W and Z particles, short-lived particles with masses around 100 times that of the proton. Rubbia was born in Gorizia, a small town in northeastern Italy next to Slovenia, and was educated at the University of Pisa. Simon van der Meer was born and grew up in The Hague in the Netherlands and worked a few years for the Philips Company before joining CERN.
The twentieth century brought two great new theories of physics: general relativity and quantum mechanics. Albert Einstein introduced the general theory of relativity and was also, along with the great German physicist Max Planck, a co-founder of quantum physics, although he famously had serious reservations about this field in later years. It is often said that Einstein “disproved” or “overturned” Newton’s theories of gravitation, but this is misleading. Newton’s theories work reasonably well for objects that are not extremely massive or move with velocities that approach the speed of light. It would be more accurate to say that Newton’s work on gravity should be considered a special case of general relativity.
The general theory of relativity is one of the best-tested theories of modern physics. It is not “wrong,” although it could be incomplete. There may well be phenomena awaiting discovery that it fails to explain, for example related to the concepts of dark matter and dark energy in cosmology. Yet any modified theory of gravity must contain all that is good about general relativity within itself, just like Einstein’s theory carried Newton’s theory within itself.
The problem is that while the theory of relativity normally works quite well for describing large objects in the universe it is next to useless when dealing with what happens on the subatomic level. This is where quantum mechanics takes over. It, too, has so far been quite successful at predicting empirically observed behavior. Physicists therefore use two sets of rules, one for the very large and one for the very small. The challenge is to bridge these two.
The difficulty in reconciling quantum mechanics (describing the weak, strong and electromagnetic forces) and general relativity (describing gravitation) is that the first three forces are quantized whereas general relativity, as far as we know today, is not. The weak, strong and electromagnetic forces are transmitted by particles: photons are the quanta of electromagnetism, gluons are the exchange particles that bind quarks together through the strong nuclear force, and the W and Z bosons are the particles involved in the weak nuclear force.
A hypothetical particle, the graviton, has been suggested as the force carrier for gravity analogous to the photon, but it has not yet been detected, and no theory of quantum gravity has succeeded. Gravity as described in the general theory of relativity is based on a continuous rather than quantized force; the distortion of spacetime by matter and energy creates the gravitational force, which is to say that gravity is a property of space itself.
To combine gravity and the other three fundamental forces into one comprehensive “Theory of Everything,” some scientists try to imagine that the universe consists of more than the traditional four dimensions we are familiar with when we think of spacetime (three of space plus time). Theories that attempt to mathematically describe this new formulation of the universe are called superstring theories. There are several versions which assume that spacetime has 10 dimensions, or 11 according to M-theory. In addition to the traditional four there are six (or, in M-theory, seven) extra dimensions rolled up into such tiny volumes that we cannot detect them directly.
The notion that the universe could be described with more than four dimensions in order to unify the fundamental forces of gravity and electromagnetism was suggested in the 1920s. The German mathematical physicist Theodor Kaluza (1885-1954), who came from a Catholic Christian family and studied at the University of Königsberg, tried to combine the equations for general relativity with Maxwell’s equations for electromagnetism using five dimensions. Einstein encouraged him and himself spent the last three decades of his career on a fruitless attempt to create a unified theory of gravity and electromagnetism. The Swedish Jewish physicist Oskar Klein (1894-1977), son of the chief rabbi of Stockholm, came up with the idea that extra dimensions may be physically real, but curled up and extremely small. Kaluza-Klein theory fell out of favor with the rise of quantum mechanics, but was later revived and extended within string theory.
Superstring theories assert that what we perceive as particles are actually tiny vibrating strings, with different particles vibrating at different rates. The interactions between these strings create all of the properties of matter and energy that we can observe. Calabi-Yau manifolds are six-dimensional spaces that, according to string theory, lurk in the tiniest regions of spacetime, down at the Planck length where the quantized nature of gravity should become evident, at 1.6 x 10^-35 m, which is almost unimaginably tiny even compared to an atomic nucleus. The Planck time, 10^-43 seconds, is the time it would take a photon traveling at the speed of light to cross a distance equal to the Planck length.
The German mathematician Erich Kähler (1906-2000) defined a family of manifolds with certain interesting properties. The Italian American Eugenio Calabi (born 1923) in the following generation identified a subclass of Kähler manifolds and conjectured that their curvature should have a special kind of simplicity. Shing-Tung Yau (born 1949), a mathematician from China based in the USA, proved the Calabi conjecture in 1977. These types of spaces are called Calabi-Yau manifolds. For believers in string theory they form a critical element of the explanation of what appear to us as a variety of natural forces and subatomic particles. They are six-dimensional, but the extra dimensions are “folded up” out of sight from our vantage point in the macroscopic world. At least, that is how the theory goes.
The Italian physicist Gabriele Veneziano (born 1942), working at CERN in Western Europe in the 1960s, made early contributions to the field, but interest in string theory took off in a major way in the 1980s and 1990s. The Englishman Michael Green (born 1946), currently professor of theoretical physics at Cambridge University, together with his American colleague John Schwarz (born 1941) in 1984 extended string theory, which treats elementary particles as vibrations of minute strings, into “superstring” theory. It incorporated a novel relationship called supersymmetry that placed particles and force carriers on an equal footing.
Many leading scholars have since joined this debate. They include Leonard Susskind (born 1940), a Jewish professor of theoretical physics at Stanford University in the United States, Juan Maldacena (born 1968) from Buenos Aires, Argentina who is now a professor at the Institute for Advanced Study at Princeton in the USA, the Iranian American Cumrun Vafa (born 1960) at Harvard University in the USA as well as David Olive (born 1937) and Peter Goddard (born 1945), both mathematical physicists from Britain. The American physicist Joseph Polchinski (born 1954) in the 1990s introduced a novel concept called D-branes.
By the mid-1990s there were as many as five competing string theories, which nevertheless had many things in common. The great American mathematical physicist Edward Witten (born 1951), a professor at the Institute for Advanced Study at Princeton, during a conference in 1995 provided a completely new perspective which was named “M-theory.” According to Witten himself, M stands for magic, mystery or matrix. Before M-theory, strings seemed to operate in a world with 10 dimensions, but M-theory would demand yet another spatial dimension, bringing the total to 11. The extra dimension Witten added allows a string to stretch into something like a membrane or “brane” that could grow to an enormous size.
Edward Witten, widely hailed as one of the greatest scientists of his generation, comes from a Jewish family. His father was a physicist specializing in gravitation and general relativity. Edward Witten was educated at Brandeis, Princeton and Harvard in his native USA and became a professor at the Institute for Advanced Study at Princeton. His early research focused on electromagnetism, but he developed an interest in what is now known as superstring theory and made very valuable contributions to Morse theory, supersymmetry, knot theory and the differential topology of manifolds. Although primarily a physicist he was nevertheless awarded the prestigious Fields Medal in 1990 for his superb mathematical skills.
Neil Turok (born 1958), a white South African, together with Paul Steinhardt (born 1952), director of the Princeton Center for Theoretical Science in the USA, devised a controversial cosmological model in 2002. They proposed the “cyclic model,” in which the universe was born multiple times in cycles of fiery death and rebirth. Their idea is based on a mathematical model in which our universe is a three-dimensional membrane or “brane” embedded in four-dimensional space. The Big Bang was caused when our brane crashed against a neighboring one; our universe is just one of many universes in a vast “multiverse.” Enormous “branes” representing different parts of the universe(s) collide every few hundred billion years.
The American physicist Brian Greene (born 1963), a professor at Columbia University in the USA, has done much to popularize these new string theories. According to Greene, “Just as the strings on a cello can vibrate at different frequencies, making all the individual musical notes, in the same way, the tiny strings of string theory vibrate and dance in different patterns, creating all the fundamental particles of nature. If this view is right, then put them all together and we get the grand and beautiful symphony that is our universe. What’s really exciting about this is that it offers an amazing possibility. If we could only master the rhythms of strings, then we’d stand a good chance of explaining all the matter and all the forces of nature, from the tiniest subatomic particles to the galaxies of outer space.”
Superstring theories are consistent with what we know, but critics, of whom there are still quite a few, claim that they are too mathematically abstract to predict anything that can be experimentally tested and verified, as should be possible with a proper scientific theory. Their supporters point out that the theories suggest there should be a class of particles called supersymmetric particles, in which every particle should have a partner particle.
CERN, the European Organization for Nuclear Research, has opened its Large Hadron Collider (LHC), the world’s largest and highest-energy particle accelerator, near Geneva on the border between Switzerland and France. There are those who hope that the LHC will be able to detect signs of supersymmetric particles. If so, such a finding would not by itself prove superstring theory, but it would constitute a piece of circumstantial evidence in its favor.
Critics who complain that string theory is unnecessarily complex with very little experimental evidence in its favor have a point. It does seem rather drastic to go from four to eleven dimensions, thereby nearly tripling the number of dimensions in the universe. Yet just because a theory is complex and seemingly counter-intuitive does not necessarily mean that it is wrong, as quantum mechanics and the theory of relativity showed us in the twentieth century.
One humorous illustration of how hard it is to imagine extra dimensions was provided by the English writer Edwin A. Abbott (1838-1926). His satirical novel Flatland: A Romance of Many Dimensions from 1884 is narrated by a being who calls himself “Square” and lives in Flatland, a world populated by two-dimensional creatures with a system of social ranks, where creatures with more sides rank higher and circles highest of all. Women are merely line segments and are subject to various social disabilities. In a dream, Square visits the one-dimensional Lineland, and is later visited by a three-dimensional Sphere from Spaceland. The Sphere tries to convince Square of the existence of a third dimension and mentions Pointland, a world of zero dimensions, populated by a single creature who is completely full of himself.
Perhaps we are all a bit like Square, who finds it very hard to imagine extra dimensions. And most of us have encountered individuals who live in Pointland, occupied only by themselves.
Despite all this progress, countless questions remain unanswered. As Alan Guth notes, even if the present form of the Big Bang theory with inflation should turn out to be correct, it says next to nothing about exactly what “banged,” what caused it to bang or what happened before this event. “I actually find it rather unattractive to think about a universe without a beginning. It seems to me that a universe without a beginning is also a universe without an explanation.”
Another major question is whether the expansion that our universe appears to be experiencing at the moment will continue indefinitely, or whether there is enough mass to slow it down and eventually reverse it, causing the universe to collapse onto itself in a “Big Crunch.” As early as 1933 the Swiss astronomer Fritz Zwicky stumbled upon observations indicating that there is more out there than visible matter and that this “dark matter” affects the behavior of galaxies.
In the 1970s the American astrophysicist Jerry Ostriker (born 1937) along with James Peebles discovered that the visible mass of a galaxy is not sufficient to keep it together. The astronomer Vera Rubin (born 1928) studied under Richard Feynman, Hans Bethe, George Gamow and other prominent scholars in the United States. She became a leading authority on the rotation of galaxies. She teamed up with astronomer Kent Ford (born 1931) and began making Doppler observations of the orbital speeds of spiral galaxies. Her calculations based on this empirical evidence showed that galaxies must contain ten times as much mass as can be accounted for by visible stars. She realized that she had discovered evidence for Zwicky’s proposed “dark matter,” and her work brought the subject to the forefront of astrophysical research. Rubin is an observant Jew and sees no conflict between science and religion.
In the 1990s, two competing groups began observing a certain type of supernovas as a way to study the expansion of the universe. In 1998 a team led by Saul Perlmutter (born 1959) at Lawrence Berkeley National Laboratory in California completed a search for type Ia supernovas, paralleled by a competing team led by Brian P. Schmidt (born 1967) and Adam Riess (born 1969). To everyone’s surprise, their observations indicated that the expansion was not slowing down due to gravitational attraction, as many had suspected, but was speeding up. Further results have confirmed that the expansion of the universe appears to be accelerating.
Astronomers now estimate that out of the total mass-energy budget of our universe, a meager 4% consists of the ordinary matter that makes up everything we can see, such as stars and planets, whereas 21% is dark matter. A full 75% consists of “dark energy,” an even more puzzling entity than dark matter. The US cosmologist Michael S. Turner coined the term to describe the mysterious force which seems to work like anti-gravity. In Turner’s view, dark energy, the causative agent of the accelerated expansion, is a diffuse, low-energy phenomenon. It probably cannot be produced at particle accelerators; it isn’t found in galaxies or even in clusters of galaxies. “Dark energy is just possibly the most important problem in all of physics. The only laboratory up to the task of studying dark energy is the Universe itself.”
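To make these percentages concrete, the sketch below converts them into approximate densities using the standard expression for the critical density of the universe; the Hubble constant of 70 km/s per megaparsec is an assumed illustrative value, and the 4/21/75 split is the one quoted in the text.

```python
import math

G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22     # one megaparsec in metres
H0 = 70.0e3 / MPC_IN_M   # assumed Hubble constant, converted to 1/s

# Critical density: the average density separating eternal expansion from
# eventual collapse in the simplest cosmological models.
rho_critical = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"critical density ~ {rho_critical:.2e} kg/m^3")   # about 9e-27 kg/m^3

for name, fraction in [("ordinary matter", 0.04),
                       ("dark matter", 0.21),
                       ("dark energy", 0.75)]:
    print(f"{name:>15}: {fraction * rho_critical:.2e} kg/m^3")
```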
Petr Horava is a Czech string theorist who is currently a professor of physics at the University of California, Berkeley, and a co-author with Edward Witten of articles on string theory and M-theory. He has proposed a modified theory of gravity, with applications in quantum gravity and cosmology. “I’m going back to Newton’s idea that time and space are not equivalent,” Horava says. At low energies, general relativity emerges from this underlying framework and the familiar fabric of spacetime stitches itself back together. He likens this emergence to the way some exotic substances change phase; at low temperatures, for example, liquid helium’s properties change dramatically and it becomes a “superfluid.” Cosmologist Mu-In Park of Chonbuk National University in Korea believes that this modified gravity could be behind the accelerated expansion of the universe.
A few scientists have controversially proposed resurrecting the discredited light-bearing ether of nineteenth-century physics. Niayesh Afshordi, an Iranian-born, US-based physicist, suggests a model in which space is filled with an invisible fluid — ether — as predicted by some proposed quantum theories of gravity such as Horava’s. Black holes may give off feeble radiation, as suggested by many quantum theories of gravity. Afshordi calculates that this radiation could heat the ether and, like bringing a pot of water to a boil, generate a negative pressure of “anti-gravity” throughout the cosmos. This would speed up cosmic expansion, but only after black holes had spent billions of years heating the ether sufficiently. Another, less exotic alternative theory called Modified Newtonian Dynamics has been introduced by the Israeli astrophysicist Mordehai Milgrom. This proposal has received the backing of some notable scientists, but so far only a minority of them.
Perhaps “dark matter” will turn out to be a new class of particles that behave very differently from the kind of matter we are most familiar with. Perhaps “dark energy” will in hindsight turn out to be a fancy name for something that does not exist, a twenty-first century equivalent of phlogiston. Or perhaps we will discover new insights that will fundamentally alter our understanding of gravity, the age of the universe and the fabric of spacetime. Whatever the truth turns out to be, the terms “dark matter” and “dark energy” remind us that scientists cannot yet explain all properties of the visible universe according to known physical laws.
In the late 1800s, many European scholars sincerely believed that they understood almost all of the basic laws of physics. They had reason for this optimism, as the previous century had indeed produced enormous progress, culminating in the new science of thermodynamics and the electromagnetic theories of Maxwell. Max Planck was once told by one of his teachers not to study physics, since all of the major discoveries in that field had allegedly been made. Luckily for us, he didn’t heed this advice but went on to initiate the quantum revolution. We have far greater knowledge today than people had back then, but maybe also greater humility: we know how little we truly understand of the universe, and that is probably a good thing.