Physics for Quantitative Poets

My evergreen attempt to sense the nature of the Universe, even through some equations.

Rob James

August 27, 2025

I am a sucker for anything in the category of “Physics for Poets.” There is even a book by that very name (by Robert H. March) and I have a number of others on my shelf. One of them, Stephen Hawking’s A Brief History of Time (1988), is hailed as the book most widely purchased yet most rarely read.

I forever hope there is a royal road to comprehending the cosmos without having to work problem sets or evaluate Hamiltonians. Still, I want more of an understanding than simply parroting lines like “the ruler shrinks” or “matter tells spacetime how to curve.” I am willing to step into the mathematics just far enough to grasp the meaning of the most fundamental precepts in each part of the discipline. Although I do own full textbooks like Halliday & Resnick, I am not about to try calculating the accumulated forces on a block sitting on an inclined plane, tethered to a freely rotating and magnetized block and tackle, being heated by a blowtorch, moving near the speed of light. So I classify myself as a quantitative poet.

I got hooked on the Leonard Susskind, Sean Carroll, and R. Shankar books and videos during the tail end of the COVID sheltering in place. The latest entrant in the P for P Sweepstakes is Tomasz Bigaj’s Physics for Philosophers (privately published, 2025). It is an excellent treatment of the basic results of scientific investigations. Bigaj presents them in the service of the ontological, epistemological and methodological (big words!) questions in which this Warsaw philosophy professor is most interested: What is really there? How do we know? What makes a good theory, a good proof, a good experiment? (Conspicuous by their absence are ethics and spirituality!) I will return to those weighty questions some other time, but first let me marinate in the physics physics.

 

1. FROM ANCIENTS TO GALILEO

Across many global cultures, first there was naked-eye astronomy: the “fixed” stars rotating around a point near Polaris, with rising and setting constellations as visualized by any given society; the Sun annually traveling through what we call the Zodiac, its noontime position over the year tracing a figure-eight in the sky (the analemma); the seasons marked by two solstices and two equinoxes; and five wandering planets, the Moon and tides, and occasional comets and meteors.

Let’s face it, though, it was the Greeks who started the science science. Eratosthenes conceived of the Earth as a sphere, long suspected by anyone seeing ships disappear over the horizon hull-first or noting the circular shadows during lunar eclipses, and he measured its circumference with amazing (lucky?) accuracy. Aristarchus under-calculated the relative distances to the Sun and to the Moon—a ratio of twenty to one, as opposed to the actual 390:1—but even the 20:1 suggested an astoundingly vast space.

What to do about those planets, occasionally moving backwards (retrograde) and two hugging close to the Sun? Eudoxus stacked them in concentric spheres around the Earth. Ptolemy codified this geocentric system with main circles called deferents, epicycles riding on the deferents (to explain retrograde motion), the Earth shifted off center to the eccentric (to explain variations in brightness), and uniform angular speed measured only from an equant point (to explain variations in speed). Each feature was added ad hoc to address a specific problem. Proposals of heliocentrism around a rotating and revolving Earth were made in antiquity, but they were discounted because (i) nobody felt or observed any rotational movement and (ii) with a revolving Earth, the fixed stars (presumed to be nearby) should exhibit parallax. The complex Ptolemaic system held the field for a long time, though it grew out of whack with better measurements. King Alfonso X of Castile in 1252 observed that had he been “present at the Creation,” he would have cooked up a better scheme.

Nicholas Copernicus advocated heliocentrism, but with circles equipped with epicycles! (An annual revolution of an epicycle turns a circle into an ellipse, basically.) His model was less ad hoc and it suggested verifiable predictions, like there being four not three phases of the inner planets. Tycho Brahe blended the models by suggesting the planets orbit the Sun but the Sun orbits the Earth—which surprisingly can be made to work, sort of. Tycho also observed a supernova, so the fixed stars were not totally fixed.

Johannes Kepler’s careful analysis of Tycho’s observations produced his famous three “laws,” really three observed data patterns: (1) planets travel in ellipses with the Sun at one focus; (2) each planet’s orbit sweeps out equal areas in equal times; and (3) for all the planets there is a constant ratio of the square of the revolution period to the cube of the semi-major axis (half the long diameter of the ellipse). Thinking in such patterns without explanations was de rigueur for the pre-modern mind; just calculate, don’t ask why things are set up that way. (That mindset still holds in some quantum-mechanical circles.)
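Kepler’s third pattern is easy to check yourself. A minimal sketch in Python, using rounded orbital values (periods in Earth years, semi-major axes in astronomical units; the numbers are illustrative, not precise):

```python
# Kepler's third law: T^2 / a^3 is (nearly) the same constant for every planet.
# Periods T in Earth years, semi-major axes a in astronomical units (rounded).
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

ratios = {name: T**2 / a**3 for name, (T, a) in planets.items()}
for name, ratio in ratios.items():
    print(f"{name:8s} T^2/a^3 = {ratio:.4f}")   # each comes out close to 1
```

In these units the constant is 1 for anything orbiting the Sun, which is one reason astronomers are fond of years and AU.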

Galileo Galilei made the next giant leaps. First, he provided ammunition for heliocentrism with advances in astronomy thanks to his newfangled telescope. He confirmed (1) four not three phases of the inner planet Venus; (2) the Moon is rocky and mountainous, not a perfect globe; and (3) Jupiter itself is orbited by four “moons” (hereafter a common noun, not only a proper one).

Second, Galileo advanced a mechanics supplanting that of the revered Aristotle. He said motion does not depend on constant application of a force. He said objects of different weights would fall at equal speeds (using an ingenious thought-experiment with connected light and heavy balls)—that is, they would but for air resistance, which varies with the object’s composition and surface area (contrary to Aristotle’s view that the air itself moves an arrow along). Galileo expanded the existing medieval concept of “impetus” (refined by Islamic thinkers like Avicenna) toward our concept of inertia. The inertial frame of an entire system is why we don’t sense the movement of the Earth—after all, a rock dropped from the mast of a moving ship drops at the foot of the mast, not way behind. He didn’t know of the Coriolis effect, but that would also have afforded proof of inertial effects on a rotating globe.

Third, Galileo introduced a concept of “Galilean relativity”: mechanical phenomena look essentially the same for any observers in uniform, rectilinear motion (in math terms, a constant velocity vector, hence constant speed and direction). Those phenomena hold under these “Galilean transformations.” He also worked out the times-squared law of free fall with uniform acceleration: distance fallen grows as the square of elapsed time. All in all, pretty impressive at a time when any of the above would have gotten him into the trouble he got into with the Catholic Church authorities. (The early Protestants, relying on Biblical inerrancy (Joshua 10:12-14), were no more flexible.)

 

2. CLASSICAL NEWTONIAN MECHANICS

It is time to discuss nomenclature. Mechanics is a comprehensive description of the motion of objects. It is divided into kinematics (describing motion as such) and dynamics (explaining motion in terms of forces). We now say classical mechanics consists of (i) classical Newtonian mechanics (everything developed before special and general relativity, with absolute time and absolute space independent of one another, forces acting simultaneously, and objects and events at definite points and times), plus (ii) special and general relativity. In other words, classical mechanics is everything where the values are specific and definite. It is opposed to quantum mechanics, where probability of values enters the picture. To add to the complexity, quantum field theory is a subdivision of quantum mechanics that examines the specific properties of fields.

Briefly: for atoms and smaller, use QM; for velocities approaching c, use SR; for large masses, use GR. For objects bigger than 10⁻⁹ m, use CNM when v << c and SR/GR as v → c; for objects smaller than that, use QM when v << c and QFT as v → c.

Over the long term, the largest transformations were Newton, Einstein, and the group project of quantum mechanics (even including early Einstein). Classical physics (a universe made of particles pushed around by fields in absolute time and absolute space) was stable in 1900 but for three pesky anomalies: a constant c despite no detectable aether; the theoretical ultraviolet catastrophe, both impossible and unobserved; and the weird photoelectric effect, with no electron emission below a threshold frequency no matter how intense the light. Surprises! Special and general relativity dealt with c; quantum mechanics with the other two. Quantum mechanics matured by 1925; quantum electrodynamics (QED) by about 1950, leading to quantum chromodynamics (QCD) and quantum field theory; and by 1975, the Standard Model or Core Theory. We are still working through all that.

Isaac Newton is of course the big story in classical Newtonian mechanics, whether or not he really stood on the shoulders of giants. First off, there are his three laws (that is, “laws of physics”—these are not “laws” as used in social contexts) of translational motion: (1) an object continues at rest or on its path (i.e., constant velocity vector) absent force; (2) force is proportional to mass and acceleration, or F = ma; and  (3) every action is associated with an equal and opposite reaction, or F₁₂ = -F₂₁. (Some add a fourth, namely that vectors sum using the parallelogram method.)

The laws beautifully characterize projectile motion, and can be used to evaluate the instantaneous change rates (differential calculus) and the accumulated change (integral calculus). Newton’s version used little dots for time derivatives, while Leibniz’s version produced the more general notation of d’s and long S’s. In one of R. Shankar’s books, he casually mentioned that differential calculus is an algorithm—you evaluate the function at x and the tiniest amount different from x, and crank out the answer, while integral calculus is a guess—you work backwards from plausible differentiation (plus that pesky constant C) using the Fundamental Theorem of Calculus. The scales fell from my eyes; I had always regarded differentiation as more intuitive and now I had justification!
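Shankar’s contrast can be made concrete. A small Python sketch (the function names are mine): differentiation is a mechanical recipe, while integration is a guess at an antiderivative that the Fundamental Theorem lets you verify by differentiating:

```python
def derivative(f, x, h=1e-6):
    # The "algorithm": evaluate f at x and at the tiniest amount away from x.
    return (f(x + h) - f(x)) / h

f = lambda x: x**3           # differentiate mechanically; exact answer is 3x^2
slope = derivative(f, 2.0)   # close to 12

# Integration as a "guess": propose an antiderivative F (plus that pesky
# constant C), verify F' = f, then use F(b) - F(a).
F = lambda x: x**4 / 4
check = derivative(F, 2.0)   # close to f(2) = 8, so the guess is right
area = F(3.0) - F(1.0)       # integral of x^3 from 1 to 3 = 20
```

The asymmetry is visible in the code: `derivative` works on any function handed to it, while the antiderivative `F` had to be conjured and then checked.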

[Philosophical interlude, to which I will return separately: Bigaj says determinism can be epistemological (Laplace) or ontological. Propositions can be quartered into analytic a priori (“Every triangle has three angles”), synthetic a posteriori (“Grass is green”), analytic a posteriori (no such thing), and last but not least, the controversial synthetic a priori (i.e., things that are true about the world, but independent of our experience). Kant says the last category exists, like mathematics in nature, the concept that every change has a cause, and concepts of space and time; others say there are no such things.

What are time and space, anyway? Newton advocated for absolutism (time and space would be here absent things and motion), Leibniz for relativism (time and space exist only in the presence of things and motion). Boy howdy did these two men dicker; they and their respective English and Continental fan clubs caused a schism in European scientific circles for centuries.]

Newton’s astounding law of universal gravitation, explaining everything from apples falling to Earth to planets in “endlessly falling” orbits: F = (Gm₁m₂)/r². The constant G is tiny, 6.67 x 10⁻¹¹ N m² kg⁻², as later determined by Cavendish with a torsion balance. (Note that an object’s free fall at Earth sea level under the second law of motion is tracked by F = mg where g is 9.8 m s⁻² directed downward—so the acceleration rate does not depend on the mass, and thus Galileo was right.) The theory works well to pinpoint the behavior (past and future) of two bodies, but admits no general closed-form solution (as with other theories!) for three bodies. It even explains semidiurnal (twice daily) tides—the Earth itself moves a little, not just the greater movement of the water in the oceans on either side.
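The two laws combine neatly in code. A sketch with rounded values for Earth’s mass and radius (illustrative numbers): the falling object’s own mass cancels out, which is Galileo’s result popping back out of Newton’s.

```python
G = 6.67e-11        # N m^2 kg^-2, Cavendish's tiny constant
M_earth = 5.97e24   # kg (rounded)
R_earth = 6.371e6   # m (rounded)

# F = ma and F = G m1 m2 / r^2 together give a = G M / r^2:
# the falling object's own mass has canceled, so all objects fall alike.
g = G * M_earth / R_earth**2
print(f"g = {g:.2f} m/s^2")   # close to the familiar 9.8
```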

F = ma by itself is really just a definitional statement correlating three quantities; as Frank Wilczek says, it is merely a promise that studying acceleration will be a productive exercise. It can also be described, “force is the rate at which momentum (mv) is changing with time.” Combined with the gravitational law, though, it yields new knowledge:  F = ma = m₂ (Gm₁)/r². Note that mass plays two roles: one as the measure of inertia (now defined as how much something hates to accelerate), and the other as the measure of gravitational attraction. The philosophers say there is no inherent reason why they should be the same. Neener neener.

How does gravity work at a distance? Newton declined to speculate (hypotheses non fingo, Latin for “I don’t bullshit”). That was a big problem for Cartesians, who distrusted voids and felt motion must be conveyed by intervening substance (like vortices, which Cartesians actually believed in and which James Clerk Maxwell used as an intellectual tool). Even after developments with fields, relativity and quantum mechanics, action from afar (Einstein’s spukhafte Fernwirkung, “spooky action at a distance”) remains an intellectual challenge.

Most of the foregoing is amazingly covered in Newton’s masterwork, the Principia. There he laboriously reached his results via geometry, not his proprietary and undisclosed version of calculus. Newton did a lot of other things: optics (separating visible light into the spectrum and conjecturing light “corpuscles,” or particles), management of the Mint, anti-trinitarianism, alchemy, and priority battles with Leibniz and others. Westfall’s Never at Rest is a comprehensive biography.

Classical mechanics after Newton extended to the study of springs (Hooke’s linear equation based on displacement and rebound, F = -kx), simple harmonic motion (simple = energy conserved; harmonic = sinusoidal), rigid bodies, non-compressible fluids, rotational motion, and conservation of certain properties.

The quantities and laws of rotational motion are isomorphic to those of translational motion. Thus, force F becomes “moment of force,” force applied at a distance and with a clockwise or counter-clockwise twist, more commonly known as torque (τ = F d). Mass m becomes moment of inertia I, accumulated in slices of a substance, I = Σ m r², thus ingeniously combining information about the slices’ mass, shape, and distribution. Velocity v, acceleration a, and momentum p = mv become angular velocity ω, angular acceleration α, and angular momentum L = I ω, respectively. Newton’s second law in the context of rotational motion thus becomes (torque τ) = (moment of inertia I) × (angular acceleration α).
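The isomorphism can be exercised directly. A minimal sketch (the numbers are hypothetical): build I from mass slices, then apply the rotational second law.

```python
# Rotational analog of F = ma: torque = I * alpha.
# Moment of inertia accumulated in slices: I = sum of m * r^2.
slices = [(2.0, 0.5), (2.0, 0.5)]        # (mass kg, radius m): two point masses
I = sum(m * r**2 for m, r in slices)     # 1.0 kg m^2

force, lever_arm = 4.0, 0.5              # 4 N applied 0.5 m from the axis
torque = force * lever_arm               # 2.0 N m
alpha = torque / I                       # angular acceleration, 2.0 rad/s^2
```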

Centripetal force (center-seeking) is a real inward force inside a frame, like a planet circling a star. Centrifugal “force” is a fictitious force—we keep going straight forward while a car suddenly goes left, and we “feel” pushed to the right.

Waves were studied. The wave equation is a second-order partial differential equation, ∂²φ(x,t)/∂x² = (1/v²) ∂²φ/∂t², with the coefficient 1/v² yielding the speed of the wave, v. Thomas Young in 1801 used the famous double-slit apparatus to demonstrate the wave nature of light, which coexisted with the Newtonian corpuscular theory for a long while. Decibels are measured β = 10 log₁₀ (I/I₀), where I₀ is the human hearing threshold; 15 dB is a whisper, while 120 dB, a trillion times the threshold intensity, is a rock concert!
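The decibel formula is a one-liner; here is the rock-concert arithmetic, with I₀ as the conventional hearing threshold:

```python
import math

I0 = 1e-12                        # W/m^2, conventional threshold of hearing

def decibels(intensity):
    # beta = 10 log10(I / I0)
    return 10 * math.log10(intensity / I0)

concert = decibels(1.0)           # 1 W/m^2 is a trillion (10^12) times I0
print(concert)                    # 120 dB
```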

The next advances were the elementary conservation principles: conservation of momentum; conservation of first the sum of kinetic and potential energy, eventually of all energy; and conservation of angular momentum. As to the latter, a constant angular momentum L = I ω = 2mr² ω (modeling, say, a skater’s two arms as point masses) means that an increase in r (when the skater puts her arms or legs out) is associated with a decrease in angular velocity ω, and vice versa. This apparently is consistent with Kepler’s second law about sweeps and time. Conservation means something doesn’t change over some set of happenings. Later, Emmy Noether showed that every smooth, continuous symmetry of a system corresponds to a conserved quantity: energy (time), momentum (space), angular momentum (rotation), and some more in quantum mechanics.
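The skater arithmetic, sketched (masses and radii hypothetical): hold L = I ω fixed while shrinking r, and ω has to rise.

```python
# Conservation of angular momentum: L = I * omega stays constant, so when a
# spinning skater pulls mass inward (smaller r, hence smaller I = 2 m r^2
# for two arm-masses), omega must rise to compensate.
m = 2.5                    # kg per arm
r_out, r_in = 0.9, 0.2     # m: arms extended vs. tucked

omega_out = 2.0                        # rad/s with arms out
L = 2 * m * r_out**2 * omega_out       # the conserved angular momentum
omega_in = L / (2 * m * r_in**2)       # a much faster spin with arms in
```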

Note that I am using words like “pinpoints” or “is associated/consistent with,” rather than “explains” or “causes.” It is said that there are no “causes” in modern physics, and maybe no “explanations” either. Bertrand Russell said the “causation” concept and language survive as the monarchy survives—as an anachronism thought to be a benign absurdity. To say “A causes B” is to say “in some other completely parallel case, or world, or dimension, we do or would observe that no-A implies no-B.” More for the later philosophy post.

A famous quote from Pierre-Simon Laplace (1749-1827):

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past could be present before its eyes.

Mathematical formulations of classical Newtonian mechanics came in three versions. First came the force-and-vector version envisioned by Laplace, which requires knowing something like six dimensions (three of position, three of velocity) for every particle at every moment in time. This is wildly impractical for mortals.

Then came the version of Lagrange and “least action.” The Lagrangian L = T - V (where T is kinetic and V is potential energy) was perfected by Euler. Every possible particle trajectory is assigned a number, its action. The one trajectory that obeys Newton’s laws has the smallest action. The principle of least action took centuries to work out. At every point there is x, v, kinetic energy T and potential energy V (capital V), right? The Lagrangian L(x,v) = T - V = (1/2)mv² – V(x) yields an action S = ∫ L dt. To minimize S you want kinetic energy T to be small, but not so small that the trajectory stops (not zero!), and you want potential energy V to be large, but not so large that it takes a huge T to get to the target. (There was and is unease over using retrospective (almost anthropomorphic!) concepts like least action—compare the similar Fermat principle, holding that light “takes the path” that minimizes the time to reach point x.)
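Least action can be watched happening numerically. A sketch for a ball tossed straight up: discretize time, compute S = ∫(T - V) dt for the true parabolic path and for a perturbed path with the same endpoints, and the true path’s action comes out smaller. (The particular sine-bump perturbation is an arbitrary choice of mine.)

```python
import math

g, m, v0 = 9.8, 1.0, 9.8          # toss at 9.8 m/s: returns to x = 0 at t = 2 s
N, t_total = 1000, 2.0
dt = t_total / N

def action(path):
    """Discretized S = integral of (T - V) dt along a trajectory path(t)."""
    S = 0.0
    for i in range(N):
        t = i * dt
        v = (path(t + dt) - path(t)) / dt    # velocity on this time slice
        T = 0.5 * m * v**2                   # kinetic energy
        V = m * g * path(t)                  # potential energy
        S += (T - V) * dt
    return S

true_path = lambda t: v0 * t - 0.5 * g * t**2
# Same endpoints (the sine bump vanishes at t = 0 and t = 2), different middle:
bent_path = lambda t: true_path(t) + math.sin(math.pi * t / 2)

print(action(true_path) < action(bent_path))   # True: nature picks the smaller S
```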

Then came the version of Hamilton, adding the concept of momentum to Lagrange’s system; the “Hamiltonian” itself represents the total energy of a system. These three systems are consistent, with Lagrange used in some applications and Hamilton used in others, especially in general relativity and quantum mechanics.

A practical aside: simple machines are mechanical devices that use leverage to change the direction or magnitude of a given force. The core six are levers (class A, fulcrum in the middle; class B, load in the middle, like a nutcracker; class C, effort in the middle, like tongs), wheel and axle, pulley, inclined plane, wedge and screw. The output force is increased at the cost of a proportional decrease in the distance moved by the load. More to come in a post about engineering.

 

3. FLUID MECHANICS AND THERMODYNAMICS

There was some dabbling in fluids in antiquity, like Archimedes’ principle that the upward buoyant force on an object is equal to the weight of the water volume it displaces; if the force exceeds the object’s weight, it floats. Bernoulli’s principle is that low pressure is associated with high speed (e.g., an atomizer blows air across a liquid, giving lift to droplets). (Warning: it’s not the only explanation for flight—another is upward thrust per Newton’s third law of motion.) Air pressure means water can only naturally be raised about 32 feet (mercury, a more barometer-convenient 760 millimeters). The Navier-Stokes equations apply F = ma to fluid flow, including turbulence. We are lucky that water is weird: it is densest at 4 C and expands as it cools further toward freezing; that keeps the lighter ice floating on top of a pond, allowing life to go on beneath.
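Archimedes in code, a minimal sketch: compare the weight of the displaced water to the object’s own weight (densities rounded).

```python
RHO_WATER = 1000.0   # kg/m^3, density of water
g = 9.8              # m/s^2

def floats(mass_kg, volume_m3):
    # Buoyant force = weight of the displaced water; the object floats if
    # that exceeds its own weight, i.e., if it is less dense than water.
    buoyant_force = RHO_WATER * volume_m3 * g
    weight = mass_kg * g
    return buoyant_force > weight

ice_floats = floats(917.0, 1.0)       # 1 m^3 of ice, ~917 kg -> True
granite_floats = floats(2700.0, 1.0)  # 1 m^3 of granite -> False
```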

Gases were studied in the seventeenth and eighteenth centuries with the perfection of sealed vessels. The caloric fluid theory of heat was eventually ditched in favor of motion of atoms and molecules (thanks to the American-born Benjamin Thompson, Count Rumford). The arrow of time begins to perplex people, because classical mechanics works in both directions (past to future and future to past) but heat does not.

Heat is an extensive concept, a process variable, based on size and capable of summation; a quantity of heat Q is in the same units (mass · length² · time⁻²) as work W, kinetic energy T, potential energy V, etc. Heat is disordered, work is ordered. Temperature is an intensive concept, average kinetic energy per molecule, based on relative intensity per unit and measured as an interval from some agreed reference point. A tub of lukewarm bathwater holds more heat than a cup of boiling soup, even though the soup’s temperature is higher. Specific heat is the amount of heat needed to raise a unit mass of a substance 1 C. The concepts are obviously related:

(change in heat) = (specific heat of a substance) x (mass) x (change in temperature).

The ideal gas law is pV = nRT, where n is the number of moles and R, the gas constant, is 8.31 J K⁻¹ mol⁻¹.
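The law in action, a quick sketch: one mole at standard pressure and 0 C should occupy the familiar ~22.4 liters.

```python
R = 8.314                         # J K^-1 mol^-1, the gas constant

n, p, T = 1.0, 101325.0, 273.15   # one mole, standard pressure (Pa), 0 C in K
V = n * R * T / p                 # pV = nRT solved for volume, in m^3
liters = V * 1000
print(f"{liters:.1f} L")          # the textbook molar volume, ~22.4 L
```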

Laws of thermodynamics: (1) Conservation of energy: the change in internal energy = Q – W, so in a closed system the change is zero. (2) The Law of Entropy, which can be variously expressed: (i) (Kelvin) there is no 100% efficient heat engine; (ii) (Clausius) there is no spontaneous flow of energy from cold to hot; that is, the change in “entropy” (disorder) S is always greater than or equal to the heat transferred divided by the temperature; (iii) (Maxwell-Boltzmann) entropy never decreases in a closed system (compare an open system with solar energy, which can produce DNA, organisms, and other constituents of order). (3) There is an absolute zero (-273 C) where motion ceases, and we can’t reach it (we are close, down to 10⁻⁹ K; at very low temperatures exotic order like superconductivity appears). “You can’t win, you can’t break even, and you can’t quit the game.” In some discussions there is also a zeroth law: two things each in thermal equilibrium with a reference thing are in thermal equilibrium with each other. (These extra laws seem rather self-evident.)

Statistical study of molecular motions came next. The Physics for Poets line is “systems tend toward more ways of being.” The Maxwell-Boltzmann distribution asks what fraction of particles have velocities in a small range (like a Gaussian or bell curve). Boltzmann’s mathematical version of the second law: S (entropy of a macrostate) = k log W, where k is 1.38 x 10⁻²³ J K⁻¹ and W is the number of microstate arrangements that produce the given macrostate. Microstates are functions of independent variables (degrees of freedom); this has some relation to J.W. Gibbs and his phase rule. There are a few macrostates, with bulk properties like fluid temperature and pressure, each of which can be made of many microstates of particle position, velocity, and momentum.
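Boltzmann’s S = k log W on a toy system: count the microstates (arrangements) behind each macrostate of 100 coin flips. The even split has the most arrangements, hence the highest entropy; that is all “systems tend toward more ways of being” means.

```python
import math

k = 1.38e-23   # J/K, Boltzmann's constant

def boltzmann_entropy(heads, flips=100):
    W = math.comb(flips, heads)   # number of microstates for this macrostate
    return k * math.log(W)        # S = k log W

S_even = boltzmann_entropy(50)    # 50 heads: the most arrangements
S_skewed = boltzmann_entropy(10)  # 10 heads: far fewer arrangements
S_all = boltzmann_entropy(0)      # all tails: one arrangement, W = 1, so S = 0
```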

That arrow of time continues to perplex us. Plus, why did the Universe begin with minimum entropy? Neener neener.

 

4. ELECTRICITY, MAGNETISM & LIGHT

All classical mechanics now reduce to electromagnetism! Bodies do not “collide,” it’s their respective outer electron shells repelling each other. Formerly physics worked with isolated discrete particles and forces, now it works with fields.

Gravity has one type of “charge,” namely mass; electrostatics has two, positive and negative, with like charges repelling and unlike charges attracting one another. Coulomb’s law F = k q₁q₂/r² is similar to Newton’s universal gravitation law, but here k is about 9 x 10⁹ N m² C⁻² (equivalently N m² A⁻² s⁻²), much larger than gravity’s tiny G.

F = q₂ (k q₁/r²), where the parenthetical term is the intensity (value) of field E from q₁ felt at point 2, distance r away. Note that if you take charge q₂ away, the field E at that point 2 is the same!! Thus a single electrostatic charge can exist, and it has a field. Current, I, is Q/T; a coulomb is the unit of charge, 6.24 x 10¹⁸ charges; an ampere is the unit of current, A = C/sec.
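How much larger is k than G in practice? A sketch comparing the electric and gravitational pulls between the proton and electron in a hydrogen atom (rounded constants; the Bohr radius actually cancels out of the ratio):

```python
k = 8.99e9                       # N m^2 C^-2, Coulomb's constant
G = 6.67e-11                     # N m^2 kg^-2
e = 1.602e-19                    # C, the elementary charge
m_p, m_e = 1.67e-27, 9.11e-31    # kg: proton and electron masses
r = 5.3e-11                      # m, the Bohr radius

F_electric = k * e**2 / r**2     # Coulomb's law
F_gravity = G * m_p * m_e / r**2 # Newton's law, same 1/r^2 shape
ratio = F_electric / F_gravity   # about 2 x 10^39: gravity is negligible here
```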

A Physics for Poets interlude: “Fields respond to our desire for some kind of communication between objects. It takes two charges to feel a force, but only one to produce a field. A field is a force waiting to happen. Field is a space exerting a force on a charged particle (electrical field, E; magnetic field, B).” Coulomb’s law has to be tweaked in relativity, in order to avoid having a withdrawal of a charge at point 2 violating the speed-of-light limit.

Electric potential, voltage, is a scalar, a measure of the strength of the field at one point in comparison with the strength at some other point. The unit is the volt, J/C. Think of it sort of like the top of a hill compared with the base (or vice versa). Gauss’s law of electrostatics: the net electric field crossings of any surface equal the net enclosed charge. Capacitance is Q/V; the unit is the farad, C/V. Resistance R = V/I; the unit is the ohm, V/A. Power P is V I or I² R. Ohm’s law is, equivalently, V = IR, I = V/R, and R = V/I.

Magnets, on the other hand, never come as single poles—they are always bipolar, always a North and a South. Gauss’s law for magnetism: the net magnetic field crossings of any surface are zero. A magnetic field only exists when there is an electric current J flowing. The unit of magnetic field strength is the tesla (one newton of force per coulomb of charge moving at 1 m s⁻¹). The Biot-Savart law pinpoints the value and direction of the magnetic field B perpendicular to the wire carrying current J. But for a magnetic field to exert force on an electric charge, the charge must be moving relative to the field.

Faraday’s law explains dynamo generators: a change in the magnetic field B induces an electric field E, with a minus sign (Lenz’s law). So turn a magnet, get electricity. Use the right-hand rule. Inductance: opposition to change in current. Capacitance: opposition to change in voltage. (So an LC circuit is like an oscillator.) Impedance is the total opposition to the flow of current in an AC circuit. It includes resistance, the opposition to current flow that is present even in a DC circuit, and reactance: both capacitive reactance (as frequency goes up, capacitive reactance goes down) and inductive reactance (as frequency goes up, so does inductive reactance).
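The frequency behavior is worth seeing in numbers. A sketch of a series RLC circuit (the component values are hypothetical): capacitive reactance falls with frequency, inductive reactance rises, and total impedance bottoms out at resonance, where the two cancel and only R remains.

```python
import math

R, L, C = 100.0, 0.1, 1e-6    # ohms, henries, farads (illustrative values)

def impedance(freq_hz):
    w = 2 * math.pi * freq_hz
    X_L = w * L                # inductive reactance: rises with frequency
    X_C = 1 / (w * C)          # capacitive reactance: falls with frequency
    return math.sqrt(R**2 + (X_L - X_C)**2)

f_res = 1 / (2 * math.pi * math.sqrt(L * C))   # resonance, ~503 Hz
print(impedance(f_res))                        # ~100.0: reactances cancel, only R
```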

Ampere’s original law: a current J running through a surface is associated with a magnetic field B. Maxwell noticed that a surface could cut off current (e.g., if sliced inside a capacitor) but field B would still exist, so he added the Maxwell “displacement current” to form the Ampere-Maxwell law. A magnetic field B is associated not only with a current J but also with a change in an electric field E.

Maxwell’s equations (in the form developed by Oliver Heaviside in the 1880s):

Gauss’s Law for Electricity: ∇ · E = ρ / ε₀ 

Gauss’s Law for Magnetism: ∇ · B = 0 

Faraday’s Law of Induction: ∇ × E = −∂B/∂t 

Ampere-Maxwell Law: ∇ × B = μ₀J + μ₀ε₀ ∂E/∂t 

Maxwell allowed electricity and magnetism to speak the same language. Maxwell’s equations collectively predicted the propagation of electromagnetic waves, later shown in the laboratory by Hertz and commercially by Marconi.

Electromagnetic waves propagate at the speed of light, and later it was confirmed that light is in fact an electromagnetic wave. But that wave travels at c in all reference frames, even when its source is itself moving at velocity v? It’s still only c, not c + v? That violates Galilean relativity. Maybe there is an aether? No, Michelson-Morley and other experiments showed no evidence of that.

Anti-climactically, here are some developments in optics. The angle of incidence equals the angle of reflection. Snell’s law for different media: n₁ sin θ₁ = n₂ sin θ₂. A parabolic mirror sends rays to a focus. A concave mirror turns things upside down; our brains do the rest. Rayleigh scattering of blue light explains our sky.
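Snell’s law in a sketch: light entering water from air bends toward the normal; going the other way at a steep enough angle, it cannot escape at all (total internal reflection).

```python
import math

def refract(theta1_deg, n1, n2):
    # Snell's law: n1 sin(theta1) = n2 sin(theta2), solved for theta2.
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1:
        return None                      # total internal reflection
    return math.degrees(math.asin(s))

AIR, WATER = 1.00, 1.33
into_water = refract(45.0, AIR, WATER)   # ~32 degrees: bent toward the normal
back_out = refract(60.0, WATER, AIR)     # None: beyond the ~49 degree critical angle
```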

Small numbers of particles in the double-slit experiment produce pixel-like dots rather than Young’s wave interference pattern (though the accumulated dots eventually trace that very pattern). Particles start to emerge: the hydrogen ion, then photons and electrons. Einstein predicted light quanta from thermodynamic arguments in 1905; their particle nature was confirmed about twenty years later. de Broglie: the dual nature of electrons, then protons and neutrons too. Born: a probability function to locate the electron. No such function for photons.

 

5. SPECIAL RELATIVITY

Einstein resolved the c problem by preserving Maxwell’s equations and ditching classical theory. All phenomena, not just mechanics but also electromagnetism, are unaffected by uniform relative rectilinear motion. The laws of physics for all observers are the same, including a constant c. One can go from the classical Newtonian momentum p = mv to the relativistic p = mv/√(1-v²/c²). Expanding the corresponding energy and restoring the factors of c (instead of the c = 1 convenience units), you somehow get E = mc² (the rest energy of matter) + mv²/2 (the kinetic energy of matter) + dinky higher-order terms.
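The “dinky terms” claim can be checked numerically, a sketch: at a mere 0.1% of light speed, the exact relativistic energy exceeds (rest energy + Newtonian kinetic energy) by well under a millionth of the kinetic term.

```python
import math

c = 3.0e8                   # m/s
m, v = 1.0, 3.0e5           # 1 kg moving at 0.001 c

E_exact = m * c**2 / math.sqrt(1 - v**2 / c**2)   # full relativistic energy
E_approx = m * c**2 + 0.5 * m * v**2              # rest energy + mv^2/2
dinky = E_exact - E_approx                        # the leftover higher-order terms
```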

Time dilation and length contraction are “real,” but what changes are not the molecules of an object or clock or the physical processes, but rather time and space, actually spacetime, itself. Every clock runs faster and every rod is longest in its own rest frame, away from masses. (Conversely, a fast particle in the Fermilab accelerator lasts longer than that same particle at rest on the outside.) Time passes slower close to Earth. Traveling at great speeds relative to the ending position slows time down; thus the famous “twin paradox” is not a paradox at all.
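The twin arithmetic, a sketch: at 80% of light speed the Lorentz factor is 5/3, so ten Earth years pass in six traveler years.

```python
import math

def lorentz_gamma(v_over_c):
    # Time dilation factor: a moving clock runs slow by 1/gamma.
    return 1 / math.sqrt(1 - v_over_c**2)

earth_years = 10.0
traveler_years = earth_years / lorentz_gamma(0.8)   # 6.0 years at 0.8 c
```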

(I almost finally see why the straight path is the longest elapsed time. If spacetime is one thing, and one person accelerates and passes through more space, then something has to give and it’s time—that person experiences less time. And there is a complementary shortening of the time of the events of measuring distances, hence the “shrinking ruler.” Keep testing this intuition.)

Spacetime “lengths” are dimensionless in c = 1 units; spacetime “velocities” are called “rapidities,” and while rapidity can grow without bound, ordinary velocity never exceeds c. Minkowski’s double cone of the time-like future and past sits in space-like surroundings. Special relativity definitively refuted Newton’s absolutist view of time and space as separate entities, but that doesn’t mean Leibniz’s version of relativism was correct—there are fields out there. Simultaneity must be abandoned as a universal concept.

[Math interlude: a Taylor series expresses a function as a sum of powers with coefficients; applied to the exponential, sine, and cosine, it links the three. Euler’s equation eᶦˣ = cos x + i sin x reduces, when the exponent is iπ, to the famous All-Star Team equation: e^(iπ) + 1 = 0.]
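Euler’s equation can be spot-checked with Python’s complex math, a quick sketch:

```python
import cmath, math

x = 0.7                                     # any angle works
lhs = cmath.exp(1j * x)                     # e^(ix)
rhs = complex(math.cos(x), math.sin(x))     # cos x + i sin x

all_star = cmath.exp(1j * math.pi) + 1      # e^(i pi) + 1: zero, up to rounding
```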

 

6. GENERAL RELATIVITY

Special relativity deals with inertial frames (those in uniform, rectilinear motion, constant vector  v so zero vector a). General relativity applies to accelerating frames (changes in a). Einstein’s famous equivalence principle equates gravity with constant acceleration. He used Riemannian non-Euclidean geometry, begrudgingly, to hypothesize the curvature of spacetime associated with the presence of mass.

“Matter tells spacetime how to curve, spacetime tells matter how to move.” Einstein’s field equation for poets basically says

Geometry of Spacetime = Energy of Matter.

The matter side of the equation is the energy-momentum tensor Tµν, whose components form a 4x4 matrix (16 components). The geometry side of the equation, the tensor Gµν, must be built to have the matching number of components, so it is assembled from a combination of the Ricci curvature tensor, the metric tensor, and the curvature scalar. These tensors live on a Riemannian manifold and carry information about the curvature at every point.

New insight for me is that mass is not just stuff, or just extended stuff. It has characteristics similar to a fluid: an energy density ρ = f(xμ) and a pressure (stresses and strains) p = f(xν). In physics, a “fluid” is any form of matter extended through space.

There is a metric tensor for spacetime with these two variables. gμν is a 4x4 matrix with observer time in the upper left (the rate at which proper time τ flows with respect to coordinate time t); the other top-row and left-column cells are “twists,” and the bottom 3x3 block holds the spatial dimensions, with negative signs to fit the t²-x²-y²-z² Minkowski interval.
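That metric can be sketched in its flat, special-relativity limit, where the “twists” vanish and the diagonal carries the t²-x²-y²-z² signs (the helper name interval is mine):

```python
# Flat (Minkowski) metric with signature (+, -, -, -): diag(1, -1, -1, -1).
# In general relativity the entries would depend on the curvature; this is
# the special-relativity limit with all "twist" terms zero.
eta = [[1,  0,  0,  0],
       [0, -1,  0,  0],
       [0,  0, -1,  0],
       [0,  0,  0, -1]]

def interval(event):
    """Spacetime interval s^2: sum over mu, nu of g[mu][nu] * x^mu * x^nu."""
    return sum(eta[m][n] * event[m] * event[n]
               for m in range(4) for n in range(4))

# A light ray (t = 1, x = 1 in units where c = 1) has a null interval:
print(interval([1, 1, 0, 0]))   # 0
print(interval([2, 1, 0, 0]))   # 3: time-like (positive in this signature)
```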

Somehow you can go from g to the Riemann tensor, which in its raw state has four indices (a 4x4x4x4 array). It can be contracted down to the Ricci tensor Rμν (two indices) and further to R (the curvature scalar).
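The component bookkeeping behind that reduction can be checked with the standard counting formulas (the variable names are mine):

```python
n = 4  # spacetime dimensions

riemann_raw = n ** 4                           # 256 slots in the raw rank-4 array
riemann_independent = n**2 * (n**2 - 1) // 12  # 20 survive the Riemann symmetries
symmetric_4x4 = n * (n + 1) // 2               # 10 independent entries in any
                                               # symmetric 4x4 tensor: g, Ricci, T

print(riemann_raw, riemann_independent, symmetric_4x4)  # 256 20 10
```

The matching count of 10 independent components on each side is why the geometry must be contracted down before it can be set equal to Tμν.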

The result on the matter side is the energy-momentum tensor, also called the stress-energy tensor. All the mass, energy, momentum, pressure, density, and stress of matter are bound up in that stress-energy tensor Tμν.

For a perfect fluid the stress-energy tensor takes a simple form: energy-mass density in the upper left, pressure along the rest of the diagonal, and the remaining entries (stresses, momentum flows, heat, and other variables) capable of being zeroed out in the fluid’s rest frame.
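In code, the rest-frame perfect-fluid tensor is just a diagonal matrix; a sketch (the ρ and p values are made up for illustration, not real units):

```python
def perfect_fluid_T(rho, p):
    """Rest-frame stress-energy tensor of a perfect fluid: energy density rho
    in the upper left, pressure p along the spatial diagonal, and every
    off-diagonal stress/heat/momentum term zero."""
    T = [[0.0] * 4 for _ in range(4)]
    T[0][0] = rho
    for i in (1, 2, 3):
        T[i][i] = p
    return T

T = perfect_fluid_T(rho=1.0, p=0.1)  # illustrative numbers
for row in T:
    print(row)
```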

In general relativity, gravity is caused not only by the quantity of stuff, but also by these other characteristics of matter: pressure, stress, and momentum all gravitate.

So we finally get to Einstein’s theory of general relativity in November 1915, the relativistic counterpart of Newton’s a = G(m₁/r²) eᵣ, relating the gravity-acceleration equivalent of spacetime curvature to the energy-momentum content:

Rμν - 1/2 R gμν = 8(pi) G Tµν (in units where c = 1).

The left side (involving the Ricci curvature tensor) is the Einstein tensor. Amazingly, in the weak-field, slow-motion limit the whole equation reduces to classical Newtonian gravity.

In 1917 Einstein added to the left side the cosmological constant Λ, applied to the metric itself, to keep the universe static. Later he dropped it, reportedly calling it his biggest blunder. The 1998 evidence of an acceleratingly expanding universe suggests the constant may be needed after all, in a direction and for a reason he didn’t anticipate.

So gravity has effect at a distance associated with the curvature of spacetime that explains, or is explained by, the presence of matter and energy. Neener neener.

General relativity was first used to explain the gap in Mercury’s perihelion precession (0.148 degree/century per Newtonian perturbations, 0.16 observed; Einstein’s theory filled the 0.012-degree gap). Then came gravitational lensing near massive objects, verified (sort of) during the 1919 solar eclipse and Eddington expedition (fudged data!), and definitively verified later. The theory has also been used to predict the evolution of the universe, the existence of black holes, dark matter, gravitational redshift, and gravitational waves (detection announced in 2016), among other things Einstein knew nothing about.
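Those degree figures convert to the more commonly quoted arcseconds per century (1 degree = 3,600 arcseconds); a quick arithmetic check:

```python
# Mercury's perihelion precession, converting degrees/century to arcsec/century.
newtonian = 0.148 * 3600   # ~533 arcsec/century from planetary perturbations
observed  = 0.160 * 3600   # ~576 arcsec/century observed (0.16 deg, rounded)
gap       = 0.012 * 3600   # ~43 arcsec/century: the part Einstein explained

print(round(newtonian), round(observed), round(gap))  # 533 576 43
```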

Just as Newton’s laws can be re-derived via Laplace, Lagrange, and Hamilton, general relativity admits a Lagrangian least-action approach (the Einstein–Hilbert action) and a Hamiltonian formulation as well.

(An aside: Green’s theorem is the two-dimensional case of Stokes’s theorem. It relates a double integral over a region to the line integral around its boundary, as when calculating the circulation or flux of a fluid. It measures the total rotation of fluid around the boundary by integrating the rotations of little bits of the interior (“plaquettes”) inside the region.)

 

7. QUANTUM MECHANICS

An even more complicated and perplexing subject. Anyone who isn’t bothered by quantum mechanics is not paying attention.

There are concepts (states, superposition of states, entanglement of states, probability, Schrödinger’s law of state evolution), interpretations (Copenhagen, Many-worlds/Everett, Bohmian hidden variables, Spontaneous localization), and challenges (EPR spooky action at distance, Bell’s theorem saying hidden variables conflict with quantum mechanics, measurement problem (Schrödinger’s cat), indiscernibility of particles).

Black-body radiation in classical thermodynamics would radiate infinite energy as frequency increases (the “ultraviolet catastrophe”); that’s not seen and would be impossible anyway. Planck solved it in 1900 by saying energy is only released in quanta proportional to frequency, E = hν, so very high frequencies would require impossibly large packets of energy. Einstein inferred the photon from the photoelectric effect. Bohr showed that when an electron drops an orbit it emits a quantum of energy, a photon. And of course Heisenberg and Schrödinger devised different but consistent systems for characterizing quantum mechanics. An electron’s probability distribution localizes when it interacts (observation or measurement), collapsing the wave function as between the particle and that observer. Particles are “minuscule moving wavelets”; there is no void, only fields that can fluctuate and bring a particle into reality. “The world consists not of things but events, kisses not stones.” [Renormalization (adding large corrections to an equation) is a crutch. [?]]
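Planck’s E = hν is easy to evaluate; a sketch comparing a green-light photon with an X-ray photon (round illustrative frequencies) shows why high frequencies demand big quanta:

```python
h = 6.626e-34  # Planck constant, J*s

def photon_energy(nu):
    """Planck 1900: energy comes in quanta of size E = h * nu."""
    return h * nu

green = photon_energy(5.6e14)   # green light, roughly 540 THz
xray  = photon_energy(1e18)     # a hard-X-ray frequency

print(green)          # ~3.7e-19 J per quantum
print(xray / green)   # the quantum is ~1800x larger at the higher frequency
```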

Double-slit madness. Interference has been demonstrated with molecules of 800 atoms or more; beyond some scale objects follow Newton, while quantum entities create waves (per Young) until they are observed and then localize? Feynman: the electron and proton are like arm wrestlers: you can’t see it, but they are exerting tremendous force on each other.

Einstein’s objection to quantum mechanics was manifest in the EPR paper, which assumed locality or hidden variables. EPR argued it’s impossible for two entangled particles to communicate faster than c; that would be “spooky action at a distance.” But Bell’s theorem cuts against him. I don’t quite have this intuitively down, but the theorem says something like: “QM is incompatible with local hidden-variable theories.” “QM is nonlocal; interactions can indeed occur between events too far apart for light to connect them.” “Hidden parameters lead to an inequality that QM violates under conditions of locality.” “Any theory determinative of all measurable results must be non-local.” In short, Bell’s theorem and the experiments validating it constitute a vote for spookiness.

The quantum mechanics “measurement problem” is exemplified by the “Schrödinger’s cat” thought experiment. A “wave function” is not a tangible wave of something, like sound or even like electromagnetism; it’s nothing but vectors in an infinite-dimensional Hilbert space.

Schrödinger equation:

Ĥφ = iħ ∂φ/∂t

where h is the Planck constant, 6.6 × 10⁻³⁴ m² kg s⁻¹; ħ is h divided by 2π, 1.05 × 10⁻³⁴ m² kg s⁻¹; and Ĥ is the Hamiltonian differential operator for the total energy of a system. Solving that equation produces the Schrödinger wave function. U is a “linear operator” that is itself a function of Ĥ (relating to differences in potential energy among states) and time: the initial state plus total energy gives you the later state, determined by the evolution operator U.
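For a single energy eigenstate, the evolution operator U reduces to a pure phase, φ(t) = e^(-iEt/ħ) φ(0); a minimal sketch (the 1 eV energy and femtosecond time are illustrative):

```python
import cmath
import math

h = 6.626e-34           # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant, ~1.05e-34 J*s

def evolve(phi0, E, t):
    """U(t) for an energy eigenstate: multiply by the phase e^(-iEt/hbar).
    Higher E means faster phase rotation; the amplitude never changes."""
    return cmath.exp(-1j * E * t / hbar) * phi0

phi0 = 1.0 + 0.0j
phi_t = evolve(phi0, E=1.6e-19, t=1e-15)   # ~1 eV state, 1 femtosecond later

print(abs(phi_t))   # 1.0: probability is conserved, only the phase spins
```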

Schrödinger described the same phenomena as Heisenberg’s matrices, but with a wave function of the entire system. That function is composed of a real part and an imaginary part, allowing it to spin a full circle in the complex plane, with the real and imaginary parts as axes.

From this function we can predict future behavior, for a given wave equation. The ordinary H is the Hamiltonian H(x,p) for position and momentum that we use in classical mechanics, where it is a real scalar. But Ĥ is the Hamiltonian operator, a procedure that maps φ to a new function Ĥφ; it is itself a function of functions. More energetic states rotate their phase more rapidly. You need to derive a particular operator for a particular system.

The wave function is not a wave in a medium like sound, nor a wave in spacetime like electromagnetism, nor of spacetime itself like gravity; it is simply a mathematical probability vector field. [work on this] Hmmm. Neener neener.

The interpretations are each problematic. (1) If you reject collapse, then the Schrödinger equation is not universal, and there is no evidence for that; similarly, the GRW theory (some particle eventually spontaneously localizes at a single point, whereupon all the others do) has no supporting evidence. (2) If you reject the idea that well-defined individual states can combine into a less-defined state, you arrive at hidden parameters, Bohm or de Broglie–Bohm, which is in tension with Bell’s theorem and its empirical evidence. (3) If you reject the idea that there is only one resulting state, that leads you to Everett or many-worlds interpretations. Wild stuff; philosopher David Lewis posits that those worlds are real, in order to explain modal concepts like possibility and necessity that occur in our world.

Decoherence. Interaction with the environment (specifically when a quantum particle encounters a macroscopic object or system) turns a superposition into a mixed state, causing the interference effects associated with coherent wave functions to disappear (hence “decoherence”). The particle loses its superposition character and winds up in a single state. That is unhelpful in quantum computing, hence the efforts to avoid decoherence. Heisenberg uncertainty: ΔxΔp ≥ ħ/2.
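The Heisenberg floor can be checked for a Gaussian wavepacket, the minimum-uncertainty state; a sketch in natural units (ħ = 1), with the position spread computed numerically and the momentum spread taken from the standard analytic result ħ/(2σ):

```python
import math

hbar = 1.0  # natural units for the sketch

def position_spread(sigma, xmax=10.0, n=20000):
    """Numerically compute dx = sqrt(<x^2>) for the Gaussian probability
    density |psi|^2 proportional to exp(-x^2 / (2 sigma^2)), by midpoint
    Riemann sums over [-xmax, xmax]."""
    step = 2 * xmax / n
    norm = x2 = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * step
        w = math.exp(-x * x / (2 * sigma * sigma))
        norm += w * step
        x2 += x * x * w * step
    return math.sqrt(x2 / norm)

sigma = 0.7
dx = position_spread(sigma)   # comes out equal to sigma, as expected
dp = hbar / (2 * sigma)       # analytic momentum spread for a Gaussian
print(dx, dx * dp)            # product is hbar/2 = 0.5, the Heisenberg floor
```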

Indistinguishability. In classical mechanics, each particle has a position and a momentum; in principle you could “label” them all. But in quantum mechanics, you can’t tell one entity from another. Fermions (electrons, protons, neutrons, quarks) occupy one and only one state apiece (per the Pauli exclusion principle, though two electrons of opposite spin can co-habit an orbital). Bosons (the force carriers: photons, W and Z bosons, gluons, gravitons, the Higgs), on the other hand, can pile into the same state; that piling-on is why we feel macroscopic gravitational and electromagnetic forces. We should reject the assumption that quantum particles are individuated objects, or that they stay that way.

Overall, quantum theory is fundamental. Physics for Poets quotes: “Particles are probability wave functions of complex variables, whose amplitude is associated with possible configurations. Particles have mass but the Universe has energy. Particles are quantized vibrations in a particle field. Space is an emergent property of quantum entanglement.”

 

8. OTHER TOPICS

Cosmological questions. Is the universe bounded or infinite? Is space expanding, and at growing velocity or acceleration? What happened at the Big Bang? Why was there a brief inflationary episode just large enough and long enough at the right time to explain the homogeneity of the universe (I’ve always been suspicious of Guth’s idea)? Only about 5% of the universe is normal matter. What is the 27% dark matter needed to keep galaxies together: matter at the periphery (MACHOs), or weakly interacting massive particles (WIMPs) throughout? The universe is expanding too fast; what is the 68% dark energy? What else can we know about black holes, white holes, and the future of the universe? Planck time is 10⁻⁴⁴ s, Planck length is 10⁻³³ cm; beyond those limits we simply cannot know about things.

Karl Schwarzschild, serving in the German army in 1915, did the math to solve the general relativity equations for a spherically symmetric mass like our Sun. His solution for the spacetime around a planet or star had funny terms. The first goes to zero at r = 2GM/c², implying time stands still there for a distant observer, while the second goes to infinity at that same radius, the Schwarzschild radius, trapping all radiation. For the Sun (700,000 km radius) that radius is only 3 km; for the Earth (6,400 km radius) it is under a centimeter! It was eventually theorized as the event horizon of a black hole. Many weirdnesses. The Universe as a whole has behaved like a “white hole” since the Big Bang.
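The Schwarzschild radius r = 2GM/c² is easy to evaluate with standard values for the constants and masses:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2GM / c^2: the radius at which the metric terms blow up."""
    return 2 * G * mass_kg / c**2

r_sun = schwarzschild_radius(1.989e30)    # Sun: ~2.95e3 m, i.e. ~3 km
r_earth = schwarzschild_radius(5.972e24)  # Earth: ~8.9e-3 m, under a centimeter

print(r_sun / 1000, r_earth * 1000)   # ~2.95 km and ~8.9 mm
```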

Relativizing quantum mechanics yields Quantum Field Theory: classical fields are quantized into quanta. Which is fundamental, the field or the particle? QED, for the electromagnetic field, is the most precisely confirmed piece so far; QCD, for the strong field, is far harder to calculate with.

The Standard Model, or Core Theory, features two crews. First are the elementary matter particles, the fermions, including leptons (the electron, mu, and tau and their neutrinos), quarks (up, down, strange, charm, bottom, top), and the antiparticles of all of them. (Hadrons are composite matter particles, consisting of baryons (3 quarks: protons and neutrons) and mesons (a quark and an antiquark), which can be arranged into symmetric groups like octets.) Second are the interaction or force-carrier particles, the bosons (photons, gluons, W/Z, gravitons, the Higgs). Symmetries of various types lead to grand unified symmetries such as SU(5), to supersymmetry (relating force and matter particles), and maybe to string theory or brane theory.

Four forces: strong 1 (within the nucleus), electromagnetic 1/100 (infinite range, but it cancels out over lots of charges), weak 1/10,000 (within the neutron, I guess), gravity 10⁻³⁹ (“I may be weak, but I have infinite range, I build up with mass, and I never cancel!”). “Gravity is not so much a force in spacetime as a consequence of mass in spacetime.” Neener neener.

Electromagnetism and the weak nuclear force have been unified, sort of, having emerged after the Big Bang from a single “electroweak” force. A Grand Unified Theory (GUT) would bring together the electromagnetic, weak, and strong forces. A Theory of Everything would also include gravity, hence “Quantum Gravity.” That might answer the question whether time and space are merely emergent properties of that consolidation.

Rovelli: space may be made of loops, tiny chainmail-like “spacelets.” It is in our consciousness that an extension in time becomes condensed into a perception of duration. Augustine (within the mind we measure time); Kant (space and time are a priori creations of pure reason, but space orders our perception of the external world while time orders our internal sense); Husserl (what we call memory is the present sensation of past events; only the present collection of neurons is phenomenological). “The passage of time is internal, is born in the world itself in the relationship between quantum events that comprise the world and are themselves the source of time.” Neener neener neener neener.