Design arguments and natural science have a complicated relationship. Design arguments infer the existence of God by appealing to facts about our experience that implicate an intelligent creator of the world. Though arguments of this sort go back at least as far as Socrates, this approach to natural theology only blossomed with the invention of natural science. Early science provided ample phenomena on which to base design arguments, and design provided a framework for guessing at the order in nature. In Protestant countries, the two were inseparable for centuries. But as the sciences outgrew that framework, the relationship weakened. It appeared to end outright with the advent of Darwinian evolution. At the very least, the branch of design arguments growing out of the extraordinary complexity and adaptation of living things withered away.
But fortunes have a way of reversing, and late twentieth-century physics has allowed a lofty new design argument to flourish. The so-called Fine Tuning Argument – or FTA for short – claims to find design in the very laws of nature themselves. What the FTA really accomplishes, however, is to lay bare a highly dubious assumption that has quietly risen to prominence in physics if not philosophy: every fact that might have been otherwise has a rational explanation. To see this takes a bit of digging into the logic of design and the nature of physical law.
Let’s start by getting clear on the FTA itself. On the face of it, it’s an argument about surprise, specifically surprise that the laws of physics thought to govern our universe are just right for life. To understand what “just right for life” means, we have to know a little something about the way physics is actually formulated. Modern laws are characterised by a number of dimensionless parameters. The parameters are dimensionless because they are comparative ratios of similar quantities, such as mass to mass or energy to energy. The particular parameters of interest are numbers built into our theories of matter that reflect such things as the relative strengths of forces and relative masses of particles. Typical examples cited are the ratio of the masses of the electron and proton and the fine-structure constant, which reflects how strongly the electromagnetic field interacts with matter. According to the FTA, the universe is just right for life in the sense that the possibility of life is exquisitely sensitive to even a tiny variation of these parameter values. Twiddle any one of them by just a little, and the resulting laws would govern a world hostile to living things. For instance, were the value of the fine-structure constant a little bit lower, it would be impossible for habitable planets with thick atmospheres to form. Were it a bit higher, sunlight would blast apart the large molecules of life. Given this sensitivity to the values of the dimensionless parameters, says the FTA, it is very surprising to find ourselves in a universe that supports life. This cannot be a coincidence. Since the delicate balance of parameter values needed for life is unlikely to be struck upon by chance, we can rule out unintelligent causes of the laws of nature. This leaves us with but one option, namely an intelligent creator of the laws of nature. Surely such an entity deserves to be called God. In this way, the FTA leads us from a surprising state of affairs to the existence of God.
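For concreteness, here are those two examples written out, with their standard measured values (the numbers themselves are textbook figures, quoted only for illustration):

\[
\alpha \;=\; \frac{e^2}{4\pi\varepsilon_0 \hbar c} \;\approx\; \frac{1}{137}, \qquad \frac{m_p}{m_e} \;\approx\; 1836.
\]

Neither quantity carries units. Redefine the metre or the second and the numbers stay exactly as they are, which is what makes them natural knobs for the fine tuner to imagine twiddling.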
Responses to the fine-tuning argument are varied. Some deny that the constants are really all that fine-tuned. That is, they argue that there’s actually a wide range of possible parameter values that would support life. Others attempt to argue that we’ve left out a potential explanation. Specifically, many physicists assert that one or another multiverse hypothesis makes the fine-tuned laws likely. As the name suggests, a multiverse is a collection of multiple universes that in most models are completely causally disconnected from one another – what happens in one can have no effect on what happens in the others. The laws of physics are treated like properties or features of universes. Just as gum balls come in different colours, so universes come with different physical laws. But the multiverse is not merely a pile of universes. Each multiverse hypothesis includes some way in which universes produce or cause one another. In particular, there is supposed to be a way in which the laws of each new universe are determined as they are added to the multiverse. The mechanisms by which universes are produced and acquire laws are supposed to explain why it’s not so improbable to get the observed outcome of laws that support life. For this reason, critics of the FTA claim that the apparent surprise of fine-tuning should lead us to embrace the multiverse, not God.
The one thing that none of the parties to this debate seem to question is the coherence of claims about the probability or surprisingness of fine tuning. This, I suggest, is a mistake, one that has been obscured by that troublesome assumption I hinted at earlier. But to see what’s gone wrong and to unmask this assumption, we first need to clarify this idea of surprisingness. To that end, let me tell you a tale of two rock piles.
In the little town of North Salem in the state of New York, a hundred metres or so from the banks of the Titicus River, there is a rock. It’s a remarkable rock. For starters, it’s a 60-ton chunk of pink granite. Pink granite is not native to North Salem. Also, the rock is not on the ground. It rests atop the points of five much smaller, irregularly shaped stones. I assume that’s why the locals call it “Balanced Rock.” As remarkable as this arrangement is, we can be pretty sure it was put there by unintelligent causes, namely the machinations of glaciers. There are good reasons to think this rock was carried from the highlands to the north of town by a thick sheet of ice. It was balanced atop the smaller stones as the ice melted and dropped the rubble it had accumulated.
The city of Newport in Rhode Island, on the other hand, sports a genuine mystery. In a small park above the old harbour stands a tower. Made of mortar and stone, it’s a cylinder some nine metres high that rests upon eight columns. Each column connects to its neighbour through a squat arch. No wooden parts of the structure remain, if ever there were any. The presence of joists is suggested by rectangular recesses on the interior walls, but the arrangement is baffling. No one knows who built the tower. No one knows why. But despite all the mystery, no one doubts that someone designed and created it.
Why do we rule out unintelligent causes in one case but not the other? Assuming it’s rational to arrive at different conclusions in these two cases, what justifies the distinction? It all comes down to surprise. We determine whether a particular outcome would be rare or common for a given sort of process by considering the conditions that would have to be met in order for the process to produce that outcome. If we believe the conditions are exceedingly rare or impossible – if the outcome is too surprising – then we have reason to reject that process.
Let’s start with the old tower in Newport. Despite the fact that Narragansett Bay was carved out by the ebb and flow of ice, no one believes that the tower was dropped in place by a melting glacier. What would it take for a glacier to produce this result? Somehow, the route taken by the glacier would have to be such that successive layers of stone and finely ground lime are deposited on or in the ice, and the weather would have to be arranged so that the climate warms in fits and starts timed to allow the lime to mix with water and then harden into mortar … I won’t try to spell out the rest. The details of the composition and shape of the landscape the glacier would have to encounter are mind-numbingly complex to the point of being barely conceivable. Those conditions aren’t obviously ruled out by known natural laws. But they are so specific and so geologically unusual that we can with confidence assume they are never realised. At the very least, given our observations of existing glaciers, they must be exceedingly rare. We just don’t see arches and pillars, let alone whole towers, dropping out of glaciers. On the assumption that glaciers are responsible, the tower is supremely surprising. And so we dismiss glaciers as a plausible explanation for the origin of the old stone tower.
In the case of Balanced Rock, on the other hand, the conditions necessary for a process of glacial erosion and retreat to transport the boulder from the north and balance it on a handful of lesser stones are, if not common, then at least not so wildly improbable. In fact, there are balanced rocks all over the glacially disturbed landscapes of the north. On the assumption of glaciation, we are not sufficiently surprised. So we cannot rule out the glacier story.
The procedure I just illustrated for eliminating certain unintelligent causes based on surprisingness is central to a very old family of design arguments. The arguments I have in mind are eliminative. The idea is to present an exhaustive list of the possible causes of some phenomenon, and then eliminate all but intelligent agency. The most sophisticated example of this strategy was offered by Richard Bentley and Isaac Newton at the end of the seventeenth century. They pointed to the fact that the planets travel about the sun in closed elliptical orbits, and asked how unintelligent causes could have produced such a system from an initial chaos of disordered matter. The only possible mechanism on their list was the action of gravity. That is, the planets had to form from chance collisions of particles in the void, and then arrange themselves into the orderly clockwork of our solar system. But Newton proved that the only initial conditions that would produce closed orbits are the closed orbits themselves. In other words, the planets had to pretty much form where they are with the very speeds that keep them in their current orbits. Since that represents a vanishingly small fraction of the possible initial conditions, Bentley and Newton argued that we could safely reject this possibility. With unintelligent causes ruled out, that left only God.
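To give a flavour of the calculation (what follows is the textbook Newtonian result, offered as an illustration rather than a reconstruction of the actual Bentley and Newton correspondence): a planet that forms at a distance \(r\) from the Sun settles into a nearly circular orbit only if its transverse speed comes out close to

\[
v_{\mathrm{circ}} \;=\; \sqrt{\frac{G M_{\odot}}{r}},
\]

while any speed above \(\sqrt{2}\,v_{\mathrm{circ}}\) sends it out of the solar system altogether. Hitting that window by chance collisions, once for every planet, was the coincidence Bentley and Newton judged too great to swallow.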
You might be worried at this point that eliminative arguments are only truth-conducive insofar as you actually have an exhaustive list of causes. Since we seldom know whether that’s the case, we are constantly left open to error. That’s true. Indeed, this is what happened to Bentley and Newton when it was realised that planets generally condense from rotating clouds of dust and gas. In other words, we discovered an unintelligent process that creates planets with precisely the positions and speeds required for stable orbits. In fact, modern planet-hunting methods have shown us just how profoundly common the necessary initial conditions are. On the assumption of the nebular process of planet formation, the outcome of planets in closed orbits ceases to be surprising. Ironically, it was Newton and his ilk who ruined the eliminative argument of the ancient Stoics. The Stoics had considered only chance and intelligence on their list of potential causes. Natural science added a lot more. If the history of design arguments tells us anything, it’s that eliminative arguments are a losing strategy.
But problems with eliminative arguments are not what I want to focus on. The relevant lesson to be drawn from these examples is that surprise can be used to eliminate a candidate cause. In a nutshell, here’s the logic of the approach: Surprisingness requires two things. First, there must be a clear set of alternative outcomes or states of affairs, and second, we must know something about how frequently the different possibilities occur on the assumption of some particular cause. If other outcomes were possible but the one realised was extremely unlikely, then we have reason to reject that cause. That conclusion is defeasible; it’s possible that additional information would change our minds. But the argument is strong.
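Put slightly more formally (this is my gloss, not a piece of the historical arguments): where \(E\) is the outcome we observe and \(C\) is the candidate cause, we ask how probable \(E\) is on the supposition of \(C\), reckoned over a definite space of alternative outcomes, and we provisionally reject \(C\) when

\[
P(E \mid C) \;\approx\; 0
\]

while some rival cause remains available that does not render \(E\) so improbable. Both ingredients, the space of alternatives and the probabilities over it, will matter in what follows.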
So what about the laws of nature in our universe? Is there a definite answer to the question of how surprising it is to find them with the right parameter values for life? Surprisingness requires in the first place a definite set of alternatives. We are used to thinking of alternative outcomes with respect to the universe in which we live. That is, we consider how things might have turned out at a particular time and place given the laws of nature and the way things were at some other time and place. This is reasoning we’re comfortable with. Science is all about imagining and testing claims about what would result somewhere and somewhen if we changed the initial conditions. But the question at hand is not about alternate events but rather alternate laws governing events. What dictates the possibilities?
Direct observation cannot help us here. We’ve only got the one universe and its laws. Certainly, others are logically possible, but it’s not at all clear what that encompasses. Fine tuners usually consider laws that differ from our own only in the values of those dimensionless parameters. But why not different functions? Or entirely different sets of laws? Consider a concrete example. The time it takes your smartphone to hit the floor when fumbled is proportional to the square root of the height of your pocket above the floor. The constant of proportionality in such a law is the sort of parameter that fine tuners consider fine tuned (though obviously, my example isn’t really a fundamental law). When they imagine other possible universes, they think about the logically possible values of the constant. But the whole formula expresses a function, a mapping from inputs (pocket height) to outputs (time of flight). If you change the constant of proportionality, then you change the overall function. But the functions you can produce just by changing the constant of proportionality make up only a small portion of the possible functions. If logical possibility is to be our guide, we have to consider all functions from heights to times. That means considering laws with different forms. But why stop there? Surely other properties or variables besides those that feature in our current laws are possible. It’s logically possible that the time for my phone to fall depends on how much aether it contains or how close I’m standing to a dragon. Universes with laws involving more or fewer forces, more or fewer types of fundamental particle, or even no particles at all are all logically possible.
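For the record, the falling-phone law, written out, is just the schoolbook rule for free fall (neglecting air resistance):

\[
t \;=\; \sqrt{\frac{2h}{g}} \;\approx\; 0.45\,\sqrt{h}\ \text{seconds, with } h \text{ in metres,}
\]

so the knob the fine tuner turns is the single number \(\sqrt{2/g}\). Every wilder alternative just canvassed (different functional forms, aether, dragons) lies beyond the reach of that one knob.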
Logical possibility is a very weak constraint, and the space of possibilities it invites us to consider is too vast and wild to say anything very definite. Most defenders of the FTA try to avoid this problem by arbitrary fiat. A line is drawn to carve out a well-characterised space of possibilities, namely those universes with laws like ours but for which the dimensionless parameters take on different values. But without justifying this choice – and it’s hard to see how one could justify this choice – the FTA begs the question of surprise.
However, let’s grant for the sake of argument that the choice of a clear space of alternate possibilities for the laws of physics can be justified. What can we say of the relative frequency of those possibilities? For an outcome to be surprising given a particular cause, it must be the case that, not only are other outcomes possible, but they are far more probable. Again, direct observation is impossible. We can’t survey other universes. Instead, we might adopt a principle of indifference and reason that each of the possibilities we identified is equally probable since we have no specific reason to favour one or another. But this strategy runs into some technical problems. To put it bluntly, there’s no way to assign an even probability over all of the values from zero to infinity that a single parameter can take on. Obviously, we need another way to go about this.
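It is worth spelling out that technical problem, since it is not a mere quibble. A uniform probability density over an infinite range cannot be normalised: for any constant \(c > 0\),

\[
\int_0^{\infty} c \, d\alpha \;=\; \infty,
\]

so there is no assignment on which every possible value of a parameter ranging from zero to infinity is equally probable, and the principle of indifference has nothing to latch onto.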
Since surprisingness is only relative to a particular hypothesis about origins anyway, perhaps we can use a given multiverse hypothesis to assess the relative frequency of possible outcomes. If life-supporting laws turn out to be surprising for all the plausible multiverse models, then we can still rule out unintelligent causes of the laws. Recall that each multiverse model includes a mechanism for generating new universes with new laws. In other words, each multiverse model involves meta-laws – laws about the laws of nature. We want to use those meta-laws to figure out how surprised we should be about finding life-sustaining laws of physics. Of course, if you’re like me, you might think this is a bridge too far. The laws of nature are the explainers; they are not sensible targets of explanation. But again, for the sake of argument, let’s suppose it makes sense to talk about meta-laws. After all, there are plenty of physicists out there building multiverse models who seem to think so. Then we face still other problems. First, there are infinitely many meta-laws we can posit that make our own laws highly probable. Here’s one: the only metaphysically possible laws are the ones we have. That obviously explains the facts at hand. Here’s another: the parameter values of the universes in the multiverse are a random sample, normally distributed around the values our universe has. These are boring theories compared to the baroque stories of universes bubbling out of one another or bursting forth from black holes that you can find filling the pages of scientific journals and popular science magazines. But they make the laws we have unsurprising.
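To see that the second toy meta-law really does the job (a back-of-the-envelope check, nothing more): if each universe’s value of some parameter \(\alpha\) is drawn from a normal distribution centred on our observed value \(\alpha_0\) with spread \(\sigma\), then the probability of landing within a life-permitting window of half-width \(\epsilon\) around \(\alpha_0\) is

\[
P\big(|\alpha - \alpha_0| < \epsilon\big) \;=\; \operatorname{erf}\!\left(\frac{\epsilon}{\sigma\sqrt{2}}\right),
\]

which tends to one as \(\sigma\) shrinks. Pick \(\sigma\) small enough and our laws come out practically inevitable.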
Second, we can’t rule out such silly theories. In fact, there cannot be any empirical reason to favour one multiverse hypothesis over another; all such theories are utterly untestable, even in principle. That’s because multiverse models, though based on physical ideas, bear only a thin connection to empirical facts. The only data against which we can test such a model is the set of laws this universe actually has. Insofar as the models differ about what laws may occur in other universes, we have no ability to assess whether the multiverse is better described by one model or the other. Since the other universes with different laws cannot affect ours, there is nothing we can learn through observation or experiment in our universe about the laws in those universes. All we have is a single example of laws of nature, and a single data point doesn’t get us far in narrowing down the space of viable meta-laws. If we can’t narrow the possibilities, then we can’t find reason to be surprised.
Let’s take stock. First, we have no idea what the alternate possibilities are with respect to the laws of nature. Second, even if we did, there is no way to know how frequent or likely they are relative to one another. The upshot is that we can’t say anything at all about how surprising or not the parameter values are on any hypothesis of unintelligent causes. We therefore have no rational motivation to look for explanations. So why do physicists and philosophers think we do? I suspect the answer is a metaphysical principle that has seeped into physical practice: All contingent facts require an explanation. This is a particularly strong version of the Principle of Sufficient Reason. It is the claim that every fact that might have been otherwise has a reason or explanation for being so. This is why the laws of physics that we have are thought to demand an explanation.
So there we have it. The dark and suspect premise lurking beneath both the FTA and all the speculative accounts of roiling multiverses meant to rebut it. While there are surely those philosophers who subscribe to such a maxim, there are plenty who do not. Either way, fine tuning is a red herring. It is not the surprisingness of the values taken by parameters in the laws of physics that demands explanation. They are neither surprising nor unsurprising. Instead, those who wish to explain the laws of nature with God or with a multiverse must base their appeal on a bald demand for a rational explanation of everything. Any self-respecting empiricist should find that suspect.