100 Best Books for an Education

A Revision and Update of Will Durant's 100 Best Books for an Education

Note 3

The Eight Principles of Science

Isaac Newton

  1. Mathematics is the language in which knowledge of the universe is encoded.

      1.1  Mathematics, an ancillary tool of logic, serves as the foundation stone upon which all science rests.      

Science seeks to explain the natural world, and its explanations are tested against evidence drawn from that world. Scientists formulate this evidence into mathematical statements from which predictions can be made, and these predictions must be “falsifiable.” If a claim cannot be formulated mathematically and cannot be used to make a prediction, then it isn’t science! Science assumes that we can learn about the natural world by gathering evidence through our senses and extensions of our senses. Mathematical data are the fundamental building blocks of science; without them, there can only be speculation. “The history of science teaches us,” writes Paul Lutus, “that overall, equations tend to reflect nature in direct proportion to their beauty and simplicity.”

    Mathematics advances through formal logical proof. “[In a mathematical proof] a conjecture is made (equivalent to a hypothesis in normal science),” continues Lutus, “evidence is gathered and tested, and a solution is produced that must survive the most detailed scrutiny. Unlike normal science, a formal mathematical proof answers a conjecture in a conclusive way, without the possibility of later refutation, although in some cases a more elegant, shorter proof may supersede an earlier approach on aesthetic grounds.” Logical reasoning wholly drives this process.

      1.2 The scientific method is the primary tool used to advance science, and yields results that can be quantified.

Scholars now generally believe that the Arab scientist Alhazen (Ibn al-Haytham) perfected the “scientific method.” The illustration summarizes it and shows how it operates. We must always keep in mind that every step of the scientific method involves mathematics. Observations are numerical; experimental testing yields numerical data; hypotheses and the predictions drawn from them are stated mathematically; and theories are generally mathematical statements about nature. Other scientists may not only repeat the experiments but may carry out additional experiments to challenge the findings.
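
    A minimal sketch of this quantitative loop in Python, with invented data and an invented tolerance: a mathematical hypothesis is fitted to numerical observations, a prediction is made, and the prediction is tested against a new measurement.

        # Observe, hypothesize a mathematical law, predict, test.
        # All data here are invented for illustration.
        observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y)

        # Hypothesis: y = k * x for some constant k (a falsifiable claim).
        k = sum(y / x for x, y in observations) / len(observations)

        # Prediction for a new, untested condition:
        x_new = 5.0
        y_predicted = k * x_new

        # Experimental test: the hypothesis survives only if the new
        # measurement agrees with the prediction within tolerance.
        y_measured = 9.9   # hypothetical new measurement
        tolerance = 0.5
        if abs(y_measured - y_predicted) <= tolerance:
            print(f"Prediction {y_predicted:.2f} confirmed; hypothesis survives.")
        else:
            print(f"Prediction {y_predicted:.2f} falsified by {y_measured}.")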

    The end result in science is the formulation of a theory. To begin with, a scientific law is a prediction of what will happen in a natural phenomenon. For instance, consider Kepler’s Laws of Planetary Motion: they describe planetary motion but do not explain its underlying cause. If every scientist only formulated scientific laws, then nature would be very well described, yet still remain baffling and mystifying. A theory, however, is an explanation of how and why a natural phenomenon occurs—it is not a “junior” law awaiting further validation! Newton and later Einstein explained how and why planets move as they do using their respective theories of gravity. Fundamentally, theories are what science is all about.

 

  2. The Universe began 13.8 billion years ago in a "Big Bang."

      2.1 The expansion of the universe is inferred from observations of the red shift/distance relation of visible galaxies.

In the 1920s, Edwin Hubble, using the newly constructed telescope at Mount Wilson Observatory, discovered variable stars in numerous nebulae. Astronomers define nebulae as distinct bright patches or dark silhouettes in the night sky; their nature was a topic of heated debate in the astronomical community: were they interstellar clouds in our own Milky Way galaxy, or entire galaxies lying beyond its limits? The question was difficult to resolve because it is notoriously hard to measure the distance to most astronomical bodies, there being no point of reference from which to make comparisons. Hubble’s finding was groundbreaking because these variable stars had a characteristic pattern matching a class of stars called Cepheid variables. Earlier, Henrietta Leavitt, working at the Harvard College Observatory, had shown that there is a strong correlation between the period (regular cycle) of a Cepheid variable star and its luminosity. Once the luminosity of a star is known, its distance from the Earth can be determined by measuring how bright it appears to us: the dimmer it appears, the more distant it is. Thus, by determining the period of these stars (and hence their luminosity) and their apparent brightness, Hubble proved that these nebulae were not clouds within the Milky Way, but were instead other galaxies far outside the edge of our own.
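
    A short Python sketch of this logic, assuming one published calibration of the period-luminosity relation (coefficients vary by author and should be treated as illustrative), together with the standard distance-modulus formula m - M = 5 log10(d) - 5, with d in parsecs:

        import math

        def cepheid_absolute_magnitude(period_days):
            # One published calibration of the period-luminosity relation;
            # treat the coefficients as illustrative.
            return -2.43 * (math.log10(period_days) - 1.0) - 4.05

        def distance_parsecs(m_apparent, M_absolute):
            # Invert the distance modulus m - M = 5*log10(d) - 5.
            return 10 ** ((m_apparent - M_absolute + 5) / 5)

        period = 30.0    # days, hypothetical Cepheid
        m_app = 19.0     # apparent magnitude, hypothetical
        M_abs = cepheid_absolute_magnitude(period)
        d = distance_parsecs(m_app, M_abs)
        print(f"M = {M_abs:.2f}, distance = {d/1e6:.2f} Mpc")  # ~0.70 Mpc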

    Hubble based his second revolutionary discovery on a comparison between his Cepheid-based galaxy distance determinations and measurements of the relative velocities of these galaxies. He found that the more distant a galaxy, the faster it is moving away from us, and he expressed this relationship in an equation: v = H0·d, where v represents the velocity at which a particular galaxy is receding from the Earth and d represents its distance. Scientists now refer to this constant of proportionality (H0) as the Hubble constant. Astronomers measure the velocity in km/sec, while the most common unit for measuring the distance to nearby galaxies is the Megaparsec (Mpc), equivalent to 3.26 million light years or 3.08 × 10^19 km! Astronomers therefore express H0 in (km/sec)/Mpc. Hubble’s findings marked the birth of modern cosmology. Cepheid variables continue to be among the best tools for determining the distances to other galaxies and remain critical for establishing the expansion rate (H0) and age of our universe.
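
    In Python, with an illustrative modern value of H0 of about 70 (km/sec)/Mpc:

        # Hubble's relation v = H0 * d, with H0 in (km/s)/Mpc as defined above.
        H0 = 70.0                      # (km/s)/Mpc, illustrative value
        d_mpc = 100.0                  # distance to a hypothetical galaxy, Mpc
        v = H0 * d_mpc                 # recession velocity in km/s
        print(f"v = {v:.0f} km/s")     # 7000 km/s

        # 1/H0 gives a rough age for a uniformly expanding universe
        # (the "Hubble time"). Convert Mpc to km, then seconds to years.
        KM_PER_MPC = 3.08e19           # from the figure quoted above
        seconds = KM_PER_MPC / H0
        years = seconds / 3.156e7      # seconds per year
        print(f"Hubble time ~ {years/1e9:.1f} billion years")  # ~13.9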

      2.2 The expansion of the universe, which is accelerating rather than slowing as it would if ordinary matter and radiation predominated, implies the existence of great amounts of dark energy.

Observations of Type Ia supernovae at high redshifts by two groups (the Supernova Cosmology Project and the High-Redshift Supernova Team) in 1998 provided the first direct evidence that the universe’s expansion is accelerating rather than slowing down.

    How do astronomers explain this? The most plausible theory to emerge resurrects Einstein’s “cosmological constant”: a vacuum energy inherent in space that acts as a repulsive force, which scientists call “dark energy.” In contrast to ordinary matter and radiation, its density does not decrease as the universe enlarges; it remains nearly constant. We must recall that H0 multiplied by the distance gives the apparent velocity of a galaxy. Since the distance to any given galaxy is increasing, a nearly constant H0 implies that the apparent velocity will also be increasing; i.e., galaxies are rushing away from us ever faster because of the dark energy that scientists now believe fills about 73% of the universe.

      2.3 Dark matter makes up a very large percentage of the matter in the universe; conversely, ordinary matter forms only a tiny percentage of the matter in the universe. Hydrogen and helium constitute around 98% of ordinary matter.

Scientists use the term “dark matter” for non-luminous matter whose presence they can detect only through the gravitational effects it exerts on luminous matter, both within galaxies and between them in galactic clusters. The observations (or rather the lack of direct observations) imply that it is not of baryonic origin (i.e. not protons/neutrons), and thus the nature of dark matter is still unknown. According to galaxy formation and evolution models as well as cosmological theories, dark matter forms about 23% of the total mass of the Universe, while “ordinary” matter makes up only about 4%.

    About one to three minutes after the creation of the universe in the “big bang,” protons (ordinary hydrogen nuclei) and neutrons started to interact to create deuterium, a variant of hydrogen. Deuterium, aka heavy hydrogen (one proton and one neutron), rapidly captured another neutron to make tritium, another variant of hydrogen. Quickly thereafter tritium captured another proton, creating a helium nucleus. Scientists believe that there was one helium nucleus for every ten free protons by the three-minute mark. After further cooling of the universe, these surplus protons acquired electrons to become ordinary hydrogen atoms. Therefore, the universe that we see today contains one helium atom for every twelve to thirteen atoms of hydrogen.
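
    A quick arithmetic check of these ratios in Python, taking the mass of a helium nucleus as ~4 and of a hydrogen nucleus as ~1:

        def helium_mass_fraction(h_per_he):
            # Mass of 1 He-4 nucleus (~4) over total mass of 1 He + N hydrogens.
            return 4.0 / (4.0 + h_per_he)

        # One helium nucleus per ten free protons (the three-minute figure):
        print(f"{helium_mass_fraction(10):.0%}")   # ~29% helium by mass

        # One helium atom per twelve hydrogen atoms (the present-day figure):
        print(f"{helium_mass_fraction(12):.0%}")   # ~25% by mass, the
                                                   # familiar textbook value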

 

      2.4 A roughly uniform distribution of matter on the largest scales is observed both in plots of the density of visible galaxies and in the isotropy of the cosmic background radiation.

Galaxies (and the matter within them) are like stars in that they tend not to exist in solitary isolation, but in small groups or great clusters. Clusters may consist of a few galaxies (for instance, Stephan’s Quintet) or of gigantic systems such as the Virgo Cluster, which comprises around 3,000 members totaling about 10^15 solar masses. The smallest clusters have their galaxies dispersed irregularly through relatively small regions of space, while the biggest clusters have their many galaxies symmetrically distributed around their centers. Such uniformity of form suggests that motions within these clusters are in balance against the self-gravity of the cluster members. There are also sheets (or walls) of galaxies surrounding vast voids that are nearly devoid of galaxies, and astronomers have discovered clusters and chains of galaxies where such walls intersect. The actual distribution of visible galaxies in the universe is thus better compared to the interior structure of a sponge, yet on the largest scales it is rather uniform through space.

    The Wilkinson Microwave Anisotropy Probe (WMAP) team released the first detailed full-sky map of the oldest light in the universe on February 11, 2003. The illustration shows the measurements, where red indicates “warmer” and blue indicates “cooler” locations. The map displays minuscule temperature differences within a very evenly dispersed microwave radiation that bathes the universe, which presently averages a frigid 2.73 degrees above absolute zero. WMAP determined that these tiny temperature variations fluctuate by only millionths of a degree. Investigations of this microwave radiation show that it was produced a mere 380,000 years after the Big Bang.
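
    For a blackbody at 2.73 degrees above absolute zero, Wien’s displacement law (lambda_max = b/T, a standard result not discussed above) locates the peak of the radiation. A one-line check in Python confirms that it peaks in the microwave band:

        b = 2.898e-3                # Wien's displacement constant, m*K
        T = 2.73                    # CMB temperature, kelvins
        lam_max = b / T
        print(f"Peak wavelength ~ {lam_max*1000:.2f} mm")  # ~1.06 mm,
        # squarely in the microwave region, as the name promises.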

  3. The first seconds after the ‘Big Bang’ created all the energy/matter in the universe.

      3.1 The laws of thermodynamics determine how energy behaves.

There are three such laws. The first law states that energy can neither be created nor destroyed; it implies that the total quantity of energy in a closed system (for instance, the universe) stays constant. Energy can neither enter nor leave a closed system, though it is able to change form. For example, the chemical energy in gasoline is released when the fuel combines with oxygen and a spark ignites the mixture within an automobile’s engine; the chemical energy stored in the gasoline is transformed into mechanical, heat, and sound energy.

    The second law affirms that the quantity of available energy (its ability to do work) in a closed system is constantly decreasing. Energy becomes unavailable for use because of entropy, the degree of disorder or randomness within a system, which in any closed system is continuously rising. Basically, any closed system moves toward increasing disorganization.

    The third law asserts the impossibility of reaching absolute zero, since any system in the universe at absolute zero would have to be completely “detached” from the rest of the universe.

     

      3.2 The properties of “black body” radiation, atomic spectra, and light scattering lead to the conclusion that energy is transmitted in “packets” termed quanta. 

Quantum physics is the branch of science that deals with discrete, indivisible units of energy, as described by the theory of quantum mechanics. Its principal concepts are: 1) Energy is not transmitted continuously but exists in small, separate units termed quanta. 2) Elementary particles behave like both particles and waves. 3) A large element of chance intrinsically determines the motion of these particles. 4) It is impossible to ascertain both the position and the momentum of a particle simultaneously; the more accurately one is measured, the less exactly the other can be measured (aka the Uncertainty Principle). 5) The atomic world is nothing like the world we live in.
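
    Point 4 can be made numeric. A sketch in Python, assuming an electron confined to a region roughly the size of an atom (the 10^-10 m figure is illustrative):

        # Uncertainty Principle: delta_x * delta_p >= hbar / 2.
        hbar = 1.055e-34          # reduced Planck constant, J*s
        m_electron = 9.109e-31    # electron mass, kg
        delta_x = 1e-10           # ~1 angstrom, the size of an atom, meters

        delta_p = hbar / (2 * delta_x)     # minimum momentum uncertainty
        delta_v = delta_p / m_electron     # corresponding velocity spread
        print(f"delta_v >= {delta_v:.2e} m/s")   # ~5.8e5 m/s: confining an
        # electron that tightly forces an enormous spread in its velocity.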

    Though at first this may appear the weirdest of theories, it nonetheless holds much evidence as to the elementary nature of the universe. This theory is more vital than even relativity in the grand structure of the cosmos. Moreover, it depicts the character of the universe as being very dissimilar from the macro world we inhabit. Niels Bohr once said, “Anyone who is not shocked by quantum theory has not understood it.”

      

      3.3 The universe began as a “singularity” and produced matter and antimatter, which annihilated each other leaving a “remnant” of matter; this matter (protons, neutrons, and electrons) interacted through four fundamental forces and combined to form atoms.

For every type of matter particle discovered there is a mirror antimatter particle, aka antiparticle. These look and behave just like their corresponding matter particles, the exception being their opposite electric charge. For example, an electron is electrically negative, but its antimatter counterpart, the positron, is electrically positive. Gravity acts on matter and antimatter identically because gravity is not a charged property and a matter particle possesses the identical mass as its mirror antiparticle. When a particle of matter meets its antiparticle, they annihilate each other into pure energy. This was how the engines of Star Trek ships were powered! But why does matter predominate in the universe and not antimatter? In the hot soup of the early universe, matter was ever so slightly favored over antimatter through a mechanism that slowed its decay relative to that of antimatter, leaving us with a universe composed almost entirely of matter.
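
    The annihilation of an electron and a positron makes a tidy worked example of E = mc². A sketch in Python, using rounded textbook constants:

        # Electron-positron annihilation: both particles vanish and two
        # gamma-ray photons carry off the energy.
        m_electron = 9.109e-31     # kg (the positron mass is identical)
        c = 2.998e8                # speed of light, m/s

        energy_joules = 2 * m_electron * c ** 2   # both masses convert
        energy_MeV = energy_joules / 1.602e-13    # joules per MeV
        print(f"E = {energy_MeV:.3f} MeV")        # ~1.022 MeV, i.e. two
        # photons of 511 keV each, the signature of annihilation.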

    And how does this matter interact? Essentially in four fundamental ways: through gravity, the weak nuclear force, electromagnetism, and the strong nuclear force. Gravity’s reach is infinite and it is always attractive, but it is very weak compared with the others; Einstein’s General Theory of Relativity currently best explains this force. The weak nuclear force is responsible for the radioactive decay of neutrons and protons in the atomic nucleus via the ejection of electrons (or positrons) and neutrinos. Electromagnetism governs all interactions involving electric charges, magnetic phenomena, and electromagnetic radiation (i.e. light waves), and holds electrons about the atomic nucleus. The strong nuclear force is responsible for binding quarks together to form protons and neutrons, and for holding protons and neutrons in the nucleus of atoms.

    Many scientists combined their efforts to explain the nature of atoms. In 1811 the Italian scientist Amedeo Avogadro proposed the hypothesis now known as Avogadro’s law: equal volumes of all gases at the same temperature and pressure contain the same number of chemical particles. From gas densities he was able to calculate the amount of matter in atoms and molecules, and from this he introduced a basic unit of quantity in chemistry called the mole, which contains 6.022137 × 10^23 atoms or molecules of a single type of substance. This quantity is referred to as Avogadro’s number, and represents the number of carbon-12 atoms in exactly 12 grams of carbon. In 1905 Einstein explained Brownian motion, the haphazard movement of minuscule particles suspended in a liquid or a gas, providing the first experimental proof that atoms exist. Additionally, the ideal gas law, a statement of the connections between pressure, volume (or density), and temperature, can be derived from the kinetic theory of gases. In general, the behavior of matter is well described by this kinetic theory: temperature is explained as the motion of the atoms (the speedier, the hotter), and pressure as the transfer of momentum from those moving atoms to the walls of the container (faster atoms = higher temperature = more momentum/hits = higher pressure).
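
    A sketch in Python tying these quantities together through the ideal gas law PV = nRT, with standard constants rounded:

        R = 8.314          # gas constant, J/(mol*K)
        n = 1.0            # amount of gas, moles
        T = 273.15         # temperature, kelvins (0 degrees Celsius)
        P = 101325.0       # pressure, pascals (1 atmosphere)

        V = n * R * T / P                   # ideal gas law solved for volume
        print(f"V = {V*1000:.1f} liters")   # ~22.4 L, the classic molar volume

        # The same mole contains Avogadro's number of molecules:
        N_A = 6.022137e23
        print(f"{n * N_A:.3e} molecules")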

    The Zeeman Effect is the term used for the splitting of an atomic spectral line into several parts in the presence of a magnetic field. In most atoms, several electronic configurations have equal energy, so that transitions between these configurations correspond to a single spectral line. A magnetic field, however, interacts in different ways with electrons in different quantum states and slightly alters their energies. Where there were numerous configurations with equivalent energies, there are now slightly different energies that produce several very similar spectral lines. A magnetic field also interacts with the electron’s “magnetic moment,” which contributes to the Zeeman Effect as well and shows that charged electrons “spin” as they orbit the nucleus of atoms.

      3.4 Numerous 19th century experiments in electricity and magnetism led James Clerk Maxwell to combine electricity and magnetism into one force termed electromagnetism.

"Maxwell’s four equations," writes Michael Fowler, "describe the electric and magnetic fields arising from varying distributions of electric charges and currents, and how those fields change in time. The equations were the mathematical distillation of decades of experimental observations of the electric and magnetic effects of charges and currents. Maxwell’s own contribution is just the last term of the last equation, but realizing the necessity of that term had dramatic consequences. It made evident for the first time that varying electric and magnetic fields could feed off each other--these fields could propagate indefinitely through space, far from the varying charges and currents where they originated.  Previously the fields had been envisioned as tethered to the charges and currents giving rise to them. Maxwell’s new term (he called it the displacement current) freed them to move through space in a self-sustaining fashion, and even predicted their velocity--it was the velocity of light!"      

      3.5 Newton’s laws of motion describe how matter behaves at velocities substantially less than that of light. 

Newton’s three laws of motion are the law of inertia, the law of force, and the law of reciprocal actions.

  • The first law of motion or the law of inertia states that a body remains at rest unless acted upon by a force, or that a body travelling at a uniform velocity (speed and direction) will continue to move at that velocity unless acted on by a force. This is derived from Galileo. 
  • The second law of motion introduces the formula Force = mass · acceleration.
  • The third law is equivalent to the law of conservation of momentum; it states that each action has an equal and opposite reaction (see the sketch below).
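
    A sketch in Python of the second and third laws working together (the masses, force, and time step are invented): two bodies push on each other with equal and opposite forces, and their total momentum never changes.

        m1, m2 = 2.0, 5.0          # masses, kg
        v1, v2 = 0.0, 0.0          # velocities, m/s, both initially at rest
        F = 10.0                   # newtons: body 2 pushes body 1, and
                                   # body 1 pushes back equally (third law)
        dt = 0.01                  # time step, seconds

        for _ in range(100):       # one second of mutual pushing
            v1 += ( F / m1) * dt   # second law on body 1: a = F/m
            v2 += (-F / m2) * dt   # third law: body 2 feels -F

        print(f"v1 = {v1:.2f} m/s, v2 = {v2:.2f} m/s")   # 5.00 and -2.00
        print(f"total momentum = {m1*v1 + m2*v2:.10f}")  # stays at zero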

      3.6 Newton’s law of gravity can be derived from his second law of motion and the inverse square law as derived from Kepler’s laws of planetary motion and Galileo's law of falling bodies.  

The illustration provides a good model of Newtonian gravity. Gravity, being a force (F = m·a by the second law), depends on the product of the two masses, on the distance between them, and on a constant of proportionality, the universal gravitational constant (6.673 × 10^-11 m^3 kg^-1 s^-2): F = G·m1·m2/r^2, where r represents the distance between the centers of the two masses. Any point source that spreads its influence equally in all directions without a limit to its range will obey the inverse square law. This arises from the principles of geometry: the strength of the effect at any given radius r is the source strength divided by the area of a sphere of that radius. Since it stems from geometrical factors, the inverse square law pertains to varied phenomena. "Along with his laws of motion," writes Jim Loy, "Newton's law of gravity led directly to mathematical explanations of Galileo's falling object experiments and Kepler's Laws concerning the motions of the planets."
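
    A sketch in Python applying the formula to the Earth and the Moon (rounded textbook values), with a check of the inverse square behavior:

        G = 6.673e-11          # universal gravitational constant, m^3 kg^-1 s^-2
        m_earth = 5.97e24      # kg
        m_moon = 7.35e22       # kg
        r = 3.84e8             # mean Earth-Moon distance, meters

        F = G * m_earth * m_moon / r ** 2
        print(f"F = {F:.2e} N")    # ~2e20 newtons

        # The inverse square law at work: doubling r quarters the force.
        F_double = G * m_earth * m_moon / (2 * r) ** 2
        print(f"{F / F_double:.1f}")   # 4.0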

  

      3.7 Failure of 19th century ether drift experiments, the acknowledgment that the Lorentz group is the natural symmetry group of Maxwell’s electrodynamics, and Einstein’s recognition that the speed of light is invariant to all observers regardless of their frame of reference led him to formulate the Theory of Relativity. To include gravity within relativity, Einstein conceived of it as equivalent to an accelerated frame of reference and devised three tests that could verify the theory: the anomalous motion of Mercury, the bending of starlight by our Sun, and the red shift of light caused by the gravity of stars.

 

In 1873, Scottish physicist James Clerk Maxwell showed that light travels as a wave, but through what medium did it propagate? Scientists believed that light traveled through an invisible substance termed “ether.” Purportedly, ether was distributed throughout space but did not interfere with the motion of the planets and stars. In 1887, American physicists A.A. Michelson and E.W. Morley tried to measure the Earth’s motion through this ether. Although they made extremely precise measurements, their experiment revealed no trace of the Earth’s motion relative to it. This surprising result meant that the ether didn’t exist, and it demonstrated that Newtonian mechanics could not explain the behavior of light. After physicists had struggled with the problem for years,* Albert Einstein in 1905 provided a solution with his (special) theory of relativity. He chose as his operating principles 1) that the speed of light is invariant for all observers no matter how fast they are moving; and 2) that all observers moving at constant speeds in a straight line should discover the same laws of physics in both mechanics and electromagnetism.

    Mathematically, Einstein demonstrated that if these two tenets were correct, then Newton’s models of space and time were false. In the world of Newtonian mechanics, space and time were “absolute”: the theory assumes that all observers everywhere in the universe will obtain identical measurements of space and time. According to special relativity, the speed of light in a vacuum is invariable, but observers moving relative to each other will not obtain the same measurements of space and time. Einstein and his former professor Hermann Minkowski therefore regarded space and time as aspects of a single entity called “space-time.” Einstein further concluded that no signal in the universe travels faster than light and that material objects can approach but never reach its speed.

    Special relativity draws many startling conclusions that are contrary to our day-to-day experience. For example, it implies that as an object travels faster, it contracts along its direction of motion. Time likewise runs slower for the object than it does for something that is not moving as fast. At low speeds these effects are negligible, so the predictions of relativity match Newton’s; at speeds approaching that of light, the differences in measurements become much greater. Numerous experiments have confirmed special relativity.
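
    The size of both effects is governed by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2). A sketch in Python, with sample speeds chosen for illustration:

        import math

        c = 2.998e8                          # speed of light, m/s

        def gamma(v):
            # Lorentz factor: time dilation and length contraction.
            return 1 / math.sqrt(1 - (v / c) ** 2)

        for fraction in (0.001, 0.5, 0.9, 0.99):
            g = gamma(fraction * c)
            print(f"v = {fraction:>5.3f}c  gamma = {g:6.3f}  "
                  f"1 s aboard = {g:.3f} s outside, "
                  f"1 m shrinks to {1/g:.3f} m")
        # At everyday speeds gamma is ~1, recovering Newton; near c it
        # grows without bound, which is why c cannot be reached.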

    Newtonian gravity acted instantly everywhere in the universe, in direct contradiction to special relativity’s conclusion that no signal travels faster than light. In 1915, Einstein announced his general theory of relativity, which supposes that the effects of gravity on objects are identical to the effects of non-gravitational forces (accelerating frames of reference) acting on objects. Physicists no longer view gravity as a property of objects interacting together, but of objects interacting with space-time. The theory predicted that nearby massive objects would affect the path of a light beam; scientists confirmed this prediction in 1919. The theory also predicted the existence of gravitational waves that travel at the speed of light, which physicists confirmed in 2016.

    Scientists have so far confirmed both of Einstein’s relativity theories in every case they have tested. The theories, together with other 20th century developments in physics, have been especially valuable in developing the modern theory of the atom and serve as the basis for cosmological models of the universe.

* Dutch physicist H. A. Lorentz in 1904 mathematically postulated that an object “shrinks” in the direction of motion.

 

  4. Machines can be used to perform work, or to build other machines/structures.

      4.1 All machines perform work by altering how work is done, not how much work is done. They must comply with the equation W = F · D. Whatever is given up in force must by definition be made up by distance, and vice versa.

The equation W = F · D defines work: the force applied multiplied by the distance over which the force acts against resistance. Simple machines don’t change this equation; rather, they alter how the work is performed. To make the job easier, one must apply a lesser force over a greater distance.* To demonstrate this, let’s use the example of an inclined plane, the simplest of the simple machines (see illustration). Say we want to lift a weight off the ground. Moving the weight directly upwards requires the full effort or force. To make the task easier, however, one can push the weight (or load) up an incline that, for this example, is twice the length of the height.

The task becomes easier because it requires only half the effort but we carry it over twice the distance. Therefore, a large force over a short distance must equal the same amount of work as a small force over a large distance according to our equation—the work remains the same regardless of how we carry it out. The work put in is always equal to the work gotten out if we ignore the effects of frictional losses.

    The ability of a machine to decrease the force necessary to do the work is called its mechanical advantage (MA). In this situation, the effort force decreases by a factor of two. Therefore, this inclined plane here has a mechanical advantage of two. Mathematically, mechanical advantage is equal to the force exerted by the machine (load or resistance force, Fl) divided by the force put in (effort force, Fe) i.e. MA = Fl/Fe.
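
    The example above in numbers, as a Python sketch (the 100-newton load and one-meter rise are invented figures):

        load_force = 100.0     # newtons to lift the weight straight up
        height = 1.0           # meters to raise it
        ramp_length = 2.0      # meters: twice the height, as in the example

        work_direct = load_force * height                 # 100 J straight up
        effort_force = load_force * height / ramp_length  # 50 N along the ramp
        work_ramp = effort_force * ramp_length            # still 100 J

        ma = load_force / effort_force                    # MA = Fl / Fe
        print(work_direct, work_ramp, ma)                 # 100.0 100.0 2.0
        # Half the force over twice the distance: W = F * D is preserved.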


* Ed. Note: We will discuss the topic of simple machines, and machinery in general, in Appendix 18.

 

      4.2 The principle of the inclined plane serves as the basis for all simple machines.

 

The inclined plane embodies the principle of MA. In fact, all machines are really forms of the inclined plane. Engineers usually categorize simple machines into six types: the inclined plane, wedge, screw, lever, wheel and axle, and pulley. The wedge and screw are simply adaptations of the inclined plane. But what about the other three simple machines; how are they “inclined planes”? Basically, the lever is an inclined plane that uses a fulcrum to achieve an MA. In the illustration, the lever has an MA of four. It isn’t hard to see that the wheel and axle is a form of lever with a greater “reach”: it can exert an effort much farther than the lever per se, in fact through a full 360 degrees. The pulley is a form of lever using ropes, cables, or cords. The effort exerted is half that of the load but must travel twice the vertical distance to lift it; the load climbs only half as far as the effort moves. This is another example of MA, aka leverage. Finally, complex machines are nothing more than combinations of simple machines.



  5. Atoms, which are generally indivisible, react with each other to form molecules.

      

      5.1 Ninety-four naturally occurring atomic elements form the basis for creating substances.

Each element consists of a fixed (unchanging) number of positively charged protons residing in a central nucleus—the atomic number for that element—surrounded by an equal number of negatively charged electrons orbiting in a “cloud” about it. All substances in the Universe form through chemical reactions among combinations of the atoms of these elements. Scientists have synthesized ten other elements; however, they are extremely short-lived and exist in nature only under extremely rare conditions.

    In the Universe, the chemical elements vary significantly in their relative abundance: hydrogen and helium are the most common elements. All the elements heavier than hydrogen and helium form through nucleosynthesis in the cores of dying, exploding stars. On Earth, the chemical elements also differ greatly in their relative abundance: eight elements make up 99% of Earth’s mass, twenty elements are relatively common, and the other seventy-two are rare to very rare. Twenty-one biogenic elements play a major role in the planet’s ecosystem. Certain elements are radioactive and progressively decay to form daughter elements.

    Chemists have created the Periodic Table, a chart displaying the elements in terms of similarities and differences in their chemical behavior. The elements are positioned in rows in increasing order of their atomic number, proceeding from left to right. Chemists call the rows periods and customarily identify them with Arabic numerals (e.g. Period 1, 2, 3 . . .). Moving from left to right through the periods one finds a progression from metals to non-metals. The columns are termed groups, and chemists usually denote them with Roman numerals (e.g. Group I, II, III . . .). Eight groups exist and all have names (e.g. Group VIII is named the Noble or Inert Gases). Within every group, the outermost electron shell holds the same number of electrons for all members; this number is the group number and plays a major role in the chemistry of the group. The block of elements between Groups II and III chemists call the transition metals, and all have very similar chemical properties. Chemists call elements 58 to 71 the Lanthanide or Rare Earth elements. Elements 90 to 94 (to 103 if the artificially created elements are counted) are termed the Actinide elements. Chemists call the elements in Group VII halogens. These non-metals have seven electrons in their outer shell. Each forms univalent anions, e.g. Cl-, and all are oxidizing agents, declining in oxidizing power (and chemical reactivity) with increasing period number; fluorine thus has the highest reactivity and iodine the lowest. The Noble gases have extremely stable electronic arrangements because their outermost electron shell is full. They gain or lose electrons only in extreme conditions and so do not normally enter into chemical reactions; they exist as monatomic gases. Generally, the lower the period and the higher the group number (excepting the noble gases), the greater the “electronegativity” of the element, i.e. its ability to oxidize another atom and snatch away an electron. Electronegativity plays a major role in chemical reactions and explains why most metals are good conductors of electricity: owing to their position in the periodic table, they easily give up an electron.

      5.2 Isotopes, different forms of atoms, sometimes spontaneously decay producing radioactivity.

Atoms of the same element often vary slightly in mass owing to missing or extra neutrons, and these differing forms are termed isotopes. An atom is still the same element regardless of how many neutrons it may have in its nucleus. For example, carbon atoms come in several varieties, as do most atoms. The “normal” ones are carbon-12 (C-12); those atoms have six protons and six neutrons. A few odd ones may have seven or even eight neutrons, such as carbon-14 (C-14). Chemists consider C-14 an isotope of the element carbon.

    However, the C-14 atom is not perpetually stable. At some point one of its neutrons converts into a proton and the atom becomes nitrogen-14. Scientists call this transformation “radioactive decay,” and it happens as regularly as clockwork: for carbon-14, half of any sample decays in approximately 5,730 years (its half-life). Some isotopes take far longer, and others decay over periods of only minutes to microseconds. Radioactive decay affects both protons and neutrons (transmuting them into each other) and emits three products: alpha radiation, beta radiation, and gamma radiation. Alpha radiation is the ejection from the atom of a helium nucleus, which has limited penetrating ability. Beta radiation emits either an electron and antineutrino (when a neutron decays) or a positron and neutrino (when a proton decays). Gamma rays are the most energetic form of electromagnetic radiation and have the most penetrating ability.
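
    Half-life decay follows the formula N(t) = N0 · (1/2)^(t / half-life), which is what makes radiocarbon dating possible. A sketch in Python (the 25% figure in the dating example is hypothetical):

        import math

        HALF_LIFE_C14 = 5730.0     # years

        def fraction_remaining(years, half_life=HALF_LIFE_C14):
            # After each half-life, half of the remaining atoms decay.
            return 0.5 ** (years / half_life)

        for t in (0, 5730, 11460, 22920):
            print(f"after {t:>6} years: "
                  f"{fraction_remaining(t):.3f} of the C-14 left")

        # Inverting the formula dates a sample from its C-14 fraction:
        measured = 0.25            # hypothetical: a quarter remains
        age = HALF_LIFE_C14 * math.log(measured) / math.log(0.5)
        print(f"age ~ {age:.0f} years")   # two half-lives, ~11,460 years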

 

  6. Whirling clouds of dust and gas condense to form stars in interstellar space. 

      6.1 Hydrogen and helium are the chief components of stars and are the ‘fuel’ and end product of stellar nuclear reactions, as determined from spectra of stellar light.

Enormous clouds of gas and dust occur throughout space. They are composed of hydrogen and helium and serve as stellar nurseries. Gravity begins contracting these clouds and they become hotter. When the temperature at the core reaches several thousand degrees, the hydrogen atoms ionize (their electrons are ripped away) and become single protons. The contraction of the cloud and the increase in temperature continue until the temperature reaches about 10,000,000 degrees Celsius. At this point, nuclear fusion begins in a process termed the proton-proton reaction: four protons join, two of them converting into neutrons via radioactive decay, and a He-4 nucleus is created. In this process a small amount of matter is lost and converted to energy as dictated by Einstein’s equation (E=mc²). The star then stops collapsing because the outward force of heat balances gravity. The remaining ninety-two or so chemical elements are also produced in stars and constitute only a few percent of a star’s overall mass; astronomers refer to these elements as “metals,” even though they include elements such as carbon and oxygen that are not metals in the ordinary sense. Elements heavier than iron-56 are produced when supernovae explode.
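
    The mass-to-energy bookkeeping of the proton-proton reaction can be checked directly from rounded particle masses. A sketch in Python:

        m_proton = 1.6726e-27      # kg
        m_he4 = 6.6447e-27         # kg, the He-4 nucleus
        c = 2.998e8                # speed of light, m/s

        mass_in = 4 * m_proton                 # four protons go in
        mass_lost = mass_in - m_he4            # the "missing" mass
        energy_J = mass_lost * c ** 2          # E = m*c**2

        print(f"mass converted: {mass_lost/mass_in:.2%}")         # ~0.7%
        print(f"energy per fusion: {energy_J/1.602e-13:.1f} MeV")  # ~26 MeV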

    When astronomers pass the light of a star through a spectrograph, they obtain its spectrum. This appears as a regular rainbow of colors—except that there are dark lines in it. Each element absorbs light of a particular frequency—a specific color. If that element resides in the cool atmosphere of its star, then those atoms will capture the light at that color and create the line. Every element has a precise “signature”—a specific set of lines that astronomers can readily identify. 

 

     6.2 The retrograde motion of the planets in conformity with Newtonian mechanics leads to the conclusion that the planetary system is heliocentric. 

Planets, in contrast to the Sun and the Moon, occasionally show a temporary reversal of their direction of motion (from night to night) relative to the background stars before restarting their regular “forward” courses. Astronomers call this phenomenon “retrograde motion.” A geocentric model of the universe is based on the supposition that the Earth is at the center of the universe and that the Sun, Moon, and planets all orbit it. The most successful of these was the Ptolemaic model. To explain retrograde motion in the geocentric model it was essential to assume that planets moved on small circles called epicycles, which in turn orbited Earth on greater circles called deferents as depicted in the illustration. 

    The heliocentric view of the solar system holds that the Earth, like all the planets, orbits the Sun. This paradigm explains retrograde motion and the observed size and brightness variations of the planets far more naturally than does the geocentric model. The recognition by Renaissance scientists that the solar system is centered on the Sun, not the Earth, is celebrated as the Copernican revolution, after Nicholas Copernicus, who established the groundwork of the modern heliocentric paradigm. Johannes Kepler refined the Copernican model with his three laws of planetary motion: (1) planetary orbits are ellipses, with the Sun at one focus; (2) a planet moves faster as its orbit takes it closer to the Sun; (3) the semi-major axis of the orbit relates in a simple way to the planet’s orbital period. The majority of planets travel on orbits of rather small eccentricity, so their orbits differ only marginally from perfect circles. Galileo Galilei, one of the fathers of experimental science, crucially helped establish and strengthen the Copernican picture of the solar system through his telescopic studies of the Moon, Sun, Venus, and Jupiter.
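
    Kepler’s third law takes an especially simple form when the period T is measured in years and the semi-major axis a in astronomical units: T² = a³. A sketch in Python using rounded orbital radii:

        # Semi-major axes in astronomical units (rounded textbook values):
        planets = {"Mercury": 0.387, "Earth": 1.0,
                   "Mars": 1.524, "Jupiter": 5.203}

        for name, a_au in planets.items():
            T_years = a_au ** 1.5          # T = a^(3/2), from T^2 = a^3
            print(f"{name:8s} a = {a_au:5.3f} AU  T = {T_years:6.2f} years")
        # Mars comes out near 1.88 years and Jupiter near 11.9 years,
        # matching their observed periods.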

    Isaac Newton succeeded in explaining Kepler’s laws in terms of a few general physical principles, now known as Newtonian mechanics. Newton’s laws imply that a planet does not orbit the precise center of the Sun; instead, both the planet and the Sun orbit the common center of mass of the two bodies. For any body to break free from the gravity of a second body, its velocity must surpass the escape velocity of the second body; the motion then becomes unbound and the orbital course ceases to be an ellipse, though Newton’s laws still describe it.

      

      6.3 We can infer the existence of planets outside our solar system from the wobble in the motion of the stars they orbit, or from the diminution of light caused by a planet passing in front of its star.

 

From Kepler’s and Newton’s laws we can see that planets orbit in ellipses with the center of mass at one focus, and that the star also orbits the planet-star center of mass, but much closer in and at a slower orbital speed. This motion shows itself in two ways: astrometric wobble and Doppler wobble. In astrometric wobble, the parent star wobbles back and forth in the sky relative to the distant background stars. The size of the wobble is very small, however, and thereby hard to detect; moreover, this reflex motion is best seen when we are looking down on the plane of the orbit. The Doppler wobble involves measuring the Doppler shift of the starlight as the star wobbles: its spectral absorption lines shift towards the blue when the wobble moves the star towards the Earth, and towards the red when it moves the star away. In this way it is possible to infer the existence of planet-sized bodies orbiting distant stars. Additionally, in a technique known as photometry, astronomers watch for an otherwise unexplained dimming of a star’s light as one of its planets passes between that star and the Earth.
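
    The Doppler wobble in numbers: the fractional wavelength shift is delta_lambda / lambda = v / c. A sketch in Python (the 55 m/s stellar velocity is illustrative, roughly the reflex motion induced by a close-in giant planet):

        c = 2.998e8                # speed of light, m/s
        lambda_0 = 656.28e-9       # meters: the hydrogen-alpha line, say
        v_star = 55.0              # m/s radial velocity; for comparison,
                                   # Jupiter induces ~12.5 m/s on the Sun

        shift = lambda_0 * v_star / c
        print(f"shift = {shift*1e12:.3f} picometers")   # ~0.120 pm
        # A tiny shift, which is why precise spectrographs averaging
        # over many absorption lines are needed to detect it.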

 

 7. Planets form from space dust and gas aggregating around a star, and contain their own internal energy.

 

      7.1 Uniformitarianism is the principle that gradual processes, still observable today, brought about the Earth’s geological features. The Grand Canyon is an example of present processes accumulating to yield enormous effects.

 

Uniformitarianism is one of the most important unifying concepts in geology. It developed in the late eighteenth century and held that catastrophic processes were not responsible for the landforms on the Earth’s surface, an idea that had rested on a biblical interpretation of the history of the Earth. Uniformitarianism was diametrically opposed to the ideas of its time, for it suggested that the Earth developed over long periods through a variety of slow geologic and geomorphic processes.

    The concepts supporting uniformitarianism began with the work of Scottish geologist James Hutton. In 1785, at the gatherings of the Royal Society of Edinburgh, Hutton expounded the concept that the Earth had a long history and that this history could be interpreted in terms of processes currently observed. For example, he suggested that deep soil profiles formed through the weathering of bedrock over thousands of years. Accordingly, geologists need not explain the geologic history of the Earth using supernatural theories.

    Hutton’s ideas did not gain the major support of the scientific community until the work of Sir Charles Lyell. In his three-volume work Principles of Geology (1830-1833), Lyell proffered an assortment of geologic evidence from England, France, Italy, and Spain to substantiate Hutton’s ideas and to reject the theory of catastrophism.     

    Thus, uniformitarianism suggests that geologists should use the continuing uniformity of existing processes as the context for comprehending the Earth’s geomorphic and geologic history. And today the majority of theories dealing with landscape evolution utilize the theory of uniformitarianism to explain how the assorted landforms of the Earth originated.

    Uniformitarianism also influenced the development of ideas in other fields. The work of Charles Darwin and Alfred Wallace on the origin of species carried the idea of uniformitarianism into biology: the principle that the diversity seen in the Earth’s species can be explained by the gradual selection of inherited traits over long periods is the basis of the theory of evolution via natural selection.

 

 

      7.2 Plate tectonics explains why the continents can be fitted together puzzle-style; the matching of fossils and rock types where the puzzle pieces join is the evidence.

Roughly halfway between the continents are raised areas known as ridges, which resemble underwater mountain ranges. At other locations we find extremely deep trenches, some reaching many thousands of feet in depth. Scientists believe that the ridges represent areas where new crust is being formed as hot magma escapes from the Earth’s mantle and spreads outward. As the seafloor spreads away from the area where magma is being released, the continents are carried along, riding on top of the crust. As new crust is created, older crust sinks back into the mantle and is melted once again; the deep ocean trenches are believed to be the locations where crust descends. The full cycle of creation and destruction takes approximately 100 million years, so most oceanic crust has a lifetime of around 100 million years.

   Because continents do not sink back into the Earth’s mantle, they survive much longer; many parts of the continents we see today are almost as old as the Earth itself. The new crust created at particular locations forms what resemble giant plates: at one side of a plate new crust is being created, while at the other side older crust is being destroyed. Geologists refer to this process as plate tectonics. As we study plate tectonics, a picture emerges of very old continents riding on top of much younger and ever-moving plates. These plates move extremely slowly, at a rate of only about 10 cm per year, and their movements are responsible for earthquakes, volcanic activity, and mountain building.
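
    A quick consistency check of these figures in Python: how far does a plate move, at the quoted rate, over the ~100-million-year lifetime of ocean crust?

        rate_cm_per_year = 10.0        # the plate speed quoted above
        years = 100e6                  # the crust lifetime quoted above

        distance_km = rate_cm_per_year * years / 100 / 1000  # cm -> m -> km
        print(f"{distance_km:,.0f} km")   # 10,000 km: roughly the width
        # of an ocean basin, consistent with crust being created at a
        # ridge and destroyed at a trench over that span.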

   

  8. Evolution acts on matter to generate DNA, which is the code from which life emerges, and generates species over time.   

      8.1 The DNA molecule can be formed from chemicals formed in primeval seas. Primitive cells incorporated it and it became the basis of inheritance. Discoveries and observations of the nucleus in cell division revealed its fundamental role. X-ray diffraction images yielded the DNA molecular structure.

Chris Gordon-Smith writes:

   

. . . Aleksandr Oparin (in 1924), and John Haldane (in 1929, before Oparin’s first book was translated into English), independently suggested that if the primitive atmosphere was reducing (as opposed to oxygen-rich), and if there was an appropriate supply of energy, such as lightning or ultraviolet light, then a wide range of organic compounds might be synthesized.

    Oparin suggested that the organic compounds could have undergone a series of reactions leading to more and more complex molecules. He proposed that the molecules formed colloid aggregates in an aqueous environment. They were able to absorb and assimilate organic compounds from the environment in a way reminiscent of metabolism and would have taken part in evolutionary processes, eventually leading to the first life forms.

    Haldane’s ideas about the origin of life were very similar to Oparin’s. He proposed that the primordial sea became a “hot dilute soup,” that groups of monomers and polymers acquired lipid membranes, and that further developments eventually led to the first living cells.

    Haldane coined the term “prebiotic soup,” and this became a powerful symbol of the Oparin-Haldane view of the origin of life. . . .  

    In 1952, Harold Urey tried to calculate the chemical constituents of the atmosphere of the early Earth. He concluded that the main constituents were methane (CH4), ammonia (NH3), hydrogen (H2), and water (H2O). He suggested that his student, Stanley Miller, should do an experiment attempting to synthesize organic compounds in such an atmosphere.

    Miller carried out an experiment in 1953 in which he passed a continuous spark discharge at 60,000 volts through a flask containing the gases identified by Urey. Miller found that after a week, most of the ammonia and much of the methane had been consumed. The main gaseous products were carbon monoxide (CO) and nitrogen (N2). In addition, there was an accumulation of dark material in the water. It was clear that the material included a large range of organic polymers.

    Analysis of the aqueous solution showed that the following had also been synthesised: 25 amino acids [the building blocks of proteins], several fatty acids, hydroxy acids, and amide products.  

    The Miller-Urey experiment was immediately recognised as an important breakthrough in the study of the origin of life. It was received as confirmation that several of the key molecules of life could have been synthesised on the primitive Earth in the kind of conditions envisaged by Oparin and Haldane. These molecules would then have been able to take part in ‘prebiotic’ chemical processes, leading to the origin of life.

 

    The Miller-Urey experiment inspired many others. In 1961, Joan Oró found that hydrogen cyanide (HCN) and ammonia in a water solution could make the nucleotide base adenine; the experiment created a considerable quantity of it, formed from five molecules of HCN. In addition, many amino acids are created from HCN and ammonia under these conditions. Later experiments showed that the other RNA and DNA nucleobases could also be obtained through simulated prebiotic chemistry with a reducing atmosphere. According to theory, primitive cells incorporated this RNA, and later DNA, and became the first life forms on Earth, including some of the first photosynthetic organisms, which produced the oxygen and ozone present in the atmosphere.

    In the 1830s, Theodor Schwann and Matthias Schleiden (a zoologist and a botanist, respectively) established the basis of the cell theory, which states that all living organisms consist of one or more nucleated cells that are the fundamental unit of living organisms, and that all cells originate from pre-existing cells. The nuclei of cells contain very long threads of nucleic acids (discovered by Friedrich Miescher in 1869) and proteins that biologists call chromosomes. In the early twentieth century, the premise that chromosomes contain hereditary information was gaining acceptance in the biological community. Later work revealed that material within the nucleus indeed contains hereditary information that regulates the growth, maturation, and behavior of the cell.

    Once it was accepted and confirmed that DNA was the source of hereditary information, a race began to discover its structure in hopes of better understanding this fascinating molecule. Many people contributed to the discovery of the structure of DNA, but it was James Watson and Francis Crick who ultimately developed the first three-dimensional model of a DNA strand in 1953. Rosalind Franklin contributed greatly to the discovery with the information she gathered using X-ray diffraction. By 1953, scientists understood that DNA was composed of sugars (deoxyribose), phosphates, and four distinct nitrogen bases: adenine, cytosine, guanine, and thymine. Scientists had also discovered, by comparing the DNA of diverse animals, that while the ratios of the nitrogen bases vary between species, the number of adenine molecules always equals the number of thymine molecules, and the number of guanine molecules equals the number of cytosine molecules. Employing these important bits of information, Watson and Crick labored jointly to build the first three-dimensional model. They concluded that DNA is a double helix with adenine hydrogen-bonding to thymine and cytosine hydrogen-bonding to guanine. To reproduce, it “simply” unwinds, each strand serving as a template for a new partner.
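
    The pairing rules are simple enough to state as code. A sketch in Python (the template strand is arbitrary) showing why the base ratios come out equal and why an unwound strand determines its partner:

        # Watson-Crick pairing: A with T, C with G.
        PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

        def complementary_strand(strand):
            # Given one strand, the other is fully determined.
            return "".join(PAIR[base] for base in strand)

        template = "ATGCCGTA"                   # arbitrary illustrative strand
        print(complementary_strand(template))   # TACGGCAT

        # Chargaff's equal ratios hold automatically for the double helix:
        double = template + complementary_strand(template)
        print(double.count("A") == double.count("T"),
              double.count("C") == double.count("G"))   # True True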

      8.2 Evolution by natural selection is the origin of species. Charles Darwin made an analogy to artificial selection in describing natural selection. Transitional forms were used as examples. Microevolution can be demonstrated today. 

Natural selection is the process in nature by which the organisms best suited to their environment are the most likely to leave offspring, a process termed survival of the fittest. The British naturalists Charles R. Darwin and Alfred R. Wallace first explained the theory of natural selection in detail in the 1850s. They believed all plants and animals had evolved—that is, developed by changing over many generations—from a few common ancestors by means of natural selection. Plants and animals produce many offspring, but some of the young die before they can become parents. According to the theory, natural selection determines which members of a species die prematurely and which ones survive and reproduce.

   The theory of natural selection rests on the great variation among individuals. Each individual has a unique combination of traits, and most of these traits are inherited. Secondly, only a limited supply of food, water, and other necessities of life exists for all organisms; therefore, they must constantly compete for them. They also struggle against such dangers as predation and unfavorable weather. In any environment, some members of a species have combinations of traits that help them in the struggle for life, while other members have traits less suitable for that environment. The organisms with the favorable traits are the most likely to survive, reproduce, and pass on those traits to their young. Organisms less able to compete are likely to die prematurely or to produce few or inferior offspring. As a result, the favorable traits replace the unfavorable ones in the species.

   However, changes in those inherited traits might be valuable, too, if the environment changed. A muscular fin, for example, would be of great value in enabling a lunged fish to crawl out of a drying pond. After generations, the selectively valuable structure might no longer look like the original; nonetheless, the underlying “raw material” could be recognized. The forelimbs of whales, mice, bats, and humans have similar components. Regardless of their function, they are homologous structures—i.e. they have a common origin. By contrast, structures with the same function but different evolutionary origins, such as the wings of insects and those of birds, are analogous structures; their similarity is the result of convergent, or parallel, evolution.

   Geographic barriers are the best stimulants of evolution. Formation of a mountain range, for example, can divide a species into isolated units and thus block gene exchange. Also, a few members of a species might wander across a mountain chain and establish an isolated population. Eventually, the mountains might erode enough for descendants of the isolates to regain contact with descendants of the parent group.

   Many biologists rejected the idea at first. They incorrectly thought that natural selection and evolution would eventually stop because a species would have used up all its possible variations. Since then, scientists have learned that cells of every living thing have mutable genes, which determine the organism's hereditary traits. Variations occur in part because genes for new traits are constantly being introduced into a species and shuffled among individuals by reproduction.

   Microevolution is defined as a change in gene frequency in a population. Because of the short timescale of this sort of evolutionary change, we can often directly observe it happening. Pesticide resistance, herbicide resistance, and antibiotic resistance are all examples of microevolution by natural selection. 
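
    Microevolution in this sense can be simulated in a few lines. A Python sketch in which a rare allele conferring higher survival (all numbers invented) rises in frequency generation by generation:

        import random

        random.seed(1)
        # Allele "r" (say, pesticide resistance) starts rare at 5%:
        population = ["r"] * 50 + ["s"] * 950
        survival = {"r": 0.9, "s": 0.6}        # differing fitness

        for generation in range(1, 6):
            # Selection: each individual survives with its allele's rate.
            survivors = [a for a in population if random.random() < survival[a]]
            # Survivors reproduce back up to the original population size.
            population = [random.choice(survivors) for _ in range(1000)]
            freq_r = population.count("r") / len(population)
            print(f"generation {generation}: r frequency = {freq_r:.2f}")
        # The gene frequency changes measurably within a few generations,
        # which is exactly the observable change described above.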

      8.3 Mendel’s experiments with pea plants led him to formulate the laws of inheritance.

Gregor Mendel (1822–84) was an Austrian monk and botanist who formulated the basic laws of inheritance. His experiments with the reproduction of garden peas led to the development of the science of genetics. Yet Mendel’s discoveries remained virtually unknown for thirty-five years after he published his results.

    In 1856, in the present-day Czech Republic, Mendel began experimenting with garden peas in his monastery garden. He concluded that traits such as blossom color are determined by paired elements of heredity, which we now know as genes. Mendel expounded his work to the local Natural Science Society in 1865 in a paper entitled Experiments with Plant Hybrids. In 1900, independent research by three other scientists, who had rediscovered his paper, confirmed Mendel’s results.

    Mendel experimentally bred thousands of pea plants and studied the characteristics of each successive generation. In pea plants, a male gamete, or sperm cell, unites with a female gamete, or egg cell, to form a seed. Mendel concluded that plants hand down traits through hereditary elements in the gametes. He inferred that every plant inherits a pair of genes for each characteristic, one gene from each parent. From his experimental results he formulated the first law of inheritance, the law of dominance: if a plant inherits two different genes for a characteristic, one will be dominant and the other recessive, and the characteristic of the dominant gene will appear in the plant. For instance, he found the gene for round seeds to be dominant and the gene for wrinkled seeds to be recessive; therefore, any plant that inherits both genes will have round seeds. Mendel also discovered his second law of inheritance: pairs of genes segregate (separate) in a random fashion when a plant’s gametes form, so a parent plant passes down one gene of each pair to its offspring. This became known as Mendel’s Law of Segregation. In addition, Mendel concluded that a plant inherits each of its characteristics independently of the others, which scientists now call Mendel’s Law of Independent Assortment. Scientists have found some exceptions to his conclusions, primarily to the Law of Independent Assortment where genes lie on the same portion of a chromosome, but biologists have in general proven his theories correct.
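
    Mendel’s dominance and segregation laws can be turned directly into a computation. A Python sketch of the classic Rr × Rr cross (R for the dominant round-seed gene, r for the recessive wrinkled-seed gene):

        from itertools import product

        # Each parent carries one dominant (R) and one recessive (r) gene.
        parent1, parent2 = "Rr", "Rr"

        counts = {"round": 0, "wrinkled": 0}
        # Segregation: each parent passes on one gene of its pair, so the
        # four combinations below are equally likely.
        for g1, g2 in product(parent1, parent2):
            offspring = g1 + g2
            # Dominance: one R suffices for round seeds.
            trait = "round" if "R" in offspring else "wrinkled"
            counts[trait] += 1

        print(counts)   # {'round': 3, 'wrinkled': 1}: the classic 3:1 ratio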

     

      8.4 Correlation of observed pathogens with illness led to the germ theory of disease.    

      8.5 Infectious processes lie at the heart of 90+% of all chronic disease.