It’s in every physics textbook: objects in a gravitational field always fall at the same rate, regardless of their mass or what they’re made of. But with what degree of confidence do we know this?
The final results of a space experiment testing the so-called equivalence principle have just been published, pushing the accuracy a hundred times beyond previous measurements and reaching one part in a thousand trillion, which physicists compactly write as 10⁻¹⁵.
Published in the journal Physical Review Letters and led by Pierre Touboul, of Paris-Saclay University, the work presents the fruits of the Microscope mission, a small satellite (just over 300 kg) developed by CNES (the French space agency) in cooperation with ESA (its European counterpart).
After its launch in 2016, the mission spent two and a half years harvesting the results of an experiment that was technically challenging, albeit simple to describe: cylinders made of titanium or platinum were placed inside the spacecraft to experience free fall under Earth’s gravity while in orbit.
Whenever small disturbances in the satellite threatened to push the cylinders out of place, they were held in position by electrostatic forces (generated by electric charges at rest). By measuring any differences in this adjustment process between the cylinders, the scientists were essentially measuring whether the objects were “falling” at different rates. Throughout the entire experiment, they were not.
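A differential-acceleration test of this kind is usually quantified with the Eötvös parameter, the fractional difference between the free-fall accelerations of two test bodies. A minimal sketch, using purely illustrative values rather than mission data:

```python
# Illustrative sketch (not mission data): the standard figure of merit for
# an equivalence-principle test is the Eotvos parameter,
#   eta = 2 * (a1 - a2) / (a1 + a2),
# the fractional difference between two measured free-fall accelerations.

def eotvos_parameter(a1: float, a2: float) -> float:
    """Fractional difference between the accelerations of two test bodies."""
    return 2.0 * (a1 - a2) / (a1 + a2)

# Hypothetical accelerations for the two cylinders, in m/s^2; any real
# difference would have to exceed the mission's one-part-in-10^15 sensitivity.
a_titanium = 7.9
a_platinum = 7.9  # identical to within the instrument's precision

eta = eotvos_parameter(a_titanium, a_platinum)
print(f"eta = {eta:.1e}")  # zero in this toy example: no violation
```

A nonzero eta at the 10⁻¹⁵ level or above is what the experiment was looking for, and never found.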
It’s an ultra-sophisticated version of an experiment performed in the 17th century by Galileo Galilei, who let spheres of different masses roll down inclined planes and measured their time of descent. (It is even said that he made a more dramatic demonstration by dropping two objects from the top of the Leaning Tower of Pisa, but most historians believe this was just a thought experiment.)
Since then, countless tests have been carried out to demonstrate the same empirical fact with increasing confidence. One of the most dramatic (albeit far from precise) was performed by Apollo 15 astronaut David Scott on the surface of the Moon in 1971: he dropped a feather and a hammer and watched both hit the ground simultaneously (on Earth, the atmosphere would have slowed the feather’s descent).
The best test performed before Microscope, meanwhile, had reached a precision of 10⁻¹³. Designed to do a hundred times better, the French satellite produced partial results in 2017, taking that figure to 10⁻¹⁴. Now, with the analyses concluded, it has reached the coveted 10⁻¹⁵.
Why so much testing?
The reader may wonder where the obsession with testing a phenomenon like this to its most extreme limits comes from. The answer lies in the theory of general relativity, our best description of gravity to date. The equivalence principle, although purely empirical, sits at the foundation of the theory.
Starting from the Galilean equivalence principle, later refined by Isaac Newton, Einstein conceived a generalized version: not only does any object, regardless of its nature and mass, fall at the same rate in a gravitational field, but being in free fall in a gravitational field and being at rest far from any gravitational field are essentially the same thing, with the same laws of physics applying in both cases.
“There are two definitions of mass: one sees it as resistance to being set in motion (the so-called inertia), and the second interprets it as a ‘source’ of the gravitational field. In that case, the deformation it causes in space-time is what attracts other massive bodies,” explains Cássio Leandro Barbosa, an astrophysicist at the Centro Universitário FEI. “The first is Newtonian, and the second, Einsteinian. The equivalence principle is the marriage of the two.”
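The two definitions of mass can be made concrete with a toy calculation. In Newtonian terms, a body’s fall acceleration is a = (m_grav / m_inert) · g: it depends only on the ratio of gravitational to inertial mass, never on the mass itself. The equivalence principle asserts this ratio is the same for every body. A minimal sketch:

```python
# Toy illustration of the two masses described above. With distinct
# gravitational and inertial mass, Newton's second law gives
#   a = (m_grav / m_inert) * g,
# so the fall acceleration depends only on the RATIO of the two masses.

G_FIELD = 9.81  # local gravitational field strength at Earth's surface, m/s^2

def fall_acceleration(m_grav: float, m_inert: float, g: float = G_FIELD) -> float:
    """Acceleration of a body whose gravitational and inertial mass may differ."""
    return (m_grav / m_inert) * g

# A 1 kg sphere and a 1000 kg sphere, each with equal gravitational and
# inertial mass, fall with exactly the same acceleration:
light = fall_acceleration(1.0, 1.0)
heavy = fall_acceleration(1000.0, 1000.0)
assert light == heavy == G_FIELD
```

Only a body whose two masses differed, i.e. a violation of the equivalence principle, would fall at a different rate.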
The problem: although sensible and consistent with every experiment carried out so far, the equivalence principle is just that, a principle, an assumption. It is clearly an excellent approximation of reality. But is it exact?
Physicists have reason to believe that maybe not. This is because there is still a marriage to be made: that of general relativity and quantum mechanics. The first is a classical theory, in the sense that it describes space, time, matter, and energy as continuous — something that can always be divided, indefinitely.
The second is quantum, that is, it presupposes that nature has a minimum granularity in all of its fundamental quantities: at some point, matter, and perhaps even space, can no longer be divided.
They are, therefore, opposing views of nature. How can both be perfectly true? For the vast majority of physical problems, the question never arises: quantum mechanics is a good description of the very small, and relativity of the very large. Each stays in its own lane.
The drama comes when the two have to operate together in extreme circumstances, such as inside black holes or at the Big Bang, the moment that started the Universe as we know it. To understand these phenomena more deeply, the two theories must be married.
By looking for cracks in a fundamental principle of general relativity, scientists are actually looking for a clue to how it might be rewritten to fit with quantum mechanics. The French Microscope mission tried, and only confirmed once more the stunning success of the equivalence principle. But there is already a project for Microscope 2, which should raise the precision to 10⁻¹⁷ and, who knows, finally find the long-sought violation.
For now, the conclusions drawn by Galileo with his spheres and inclined planes, as well as by Einstein and his view of gravity as a curvature of space-time, remain perfectly (and not just approximately) valid.