Chance and Time: Cutting the Gordian Knot. Hagar, Amit. 2004.

CHANCE AND TIME
Cutting the Gordian Knot

by

Amit Hagar
B.A., The Hebrew University of Jerusalem, 1996
M.A., The Hebrew University of Jerusalem, 2000

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Department of Philosophy)

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
June 11, 2004
© Amit Hagar, 2004

Abstract

One of the recurrent problems in the foundations of physics is to explain why we rarely observe certain phenomena that are allowed by our theories and laws. In thermodynamics, for example, the spontaneous approach towards equilibrium is ubiquitous, yet the time-reversal-invariant laws that presumably govern thermal behaviour at the microscopic level equally allow spontaneous approach away from equilibrium to occur. Why are the former processes frequently observed while the latter are almost never reported? Another example comes from quantum mechanics, where the formalism, if considered complete and universally applicable, predicts the existence of macroscopic superpositions—monstrous Schrödinger cats—and these are never observed: while electrons and atoms enjoy the cloudiness of waves, macroscopic objects are always localized to definite positions.

A well-known explanatory framework due to Ludwig Boltzmann traces the rarity of "abnormal" thermodynamic phenomena to the scarcity of the initial conditions that lead to them. After all, physical laws are no more than algorithms, and these are expected to generate different results according to different initial conditions; hence Boltzmann's insight that violations of thermodynamic laws are possible but highly improbable.
Yet Boltzmann introduces probabilities into this explanatory scheme, and since the latter is couched in terms of classical mechanics, these probabilities must be interpreted as a result of ignorance of the exact state the system is in. Quantum mechanics has taught us otherwise. Here the attempts to explain why we never observe macroscopic superpositions have led to different interpretations of the formalism and to different solutions to the quantum measurement problem. These solutions introduce additional interpretations of the meaning of probability over and above ignorance of the definite state of the physical system: quantum probabilities may result from pure chance.

Notwithstanding the success of the Boltzmannian framework in explaining the thermodynamic arrow in time, it leaves us with a foundational puzzle: how can ignorance play a role in the scientific explanation of objective reality? It turns out that two opposing solutions to the quantum measurement problem in which probabilities arise from the stochastic character of the underlying dynamics may scratch this explanatory itch. By offering a dynamical justification for the probabilities employed in classical statistical mechanics, these two interpretations complete the Boltzmannian explanatory scheme and allow us to exorcize ignorance from scientific explanations of unobserved phenomena.

In this thesis I argue that the puzzle of the thermodynamic arrow in time is closely related to the problem of interpreting quantum mechanics, i.e., to the measurement problem. We may solve one by fiat and thus solve the other, but it seems unwise to try solving them independently. I substantiate this claim by presenting two possible interpretations of non-relativistic quantum mechanics.
Differing as they do on the meaning of the probabilities they introduce into the otherwise deterministic dynamics, these interpretations offer alternative explanatory schemes to the standard Boltzmannian statistical mechanical explanation of the thermodynamic approach to equilibrium. I then show how, notwithstanding their current empirical equivalence, the two approaches diverge at the continental divide between scientific realism and anti-realism.

Contents

Abstract
Acknowledgements

1 Introduction
   1.1 Chance and Time
   1.2 Beliefs
   1.3 Goals
   1.4 Strategy

I Cutting the Gordian Knot

2 Time-Asymmetries
   2.1 Asymmetries of Time
      2.1.1 Becoming
      2.1.2 Handedness, Symmetries, and Orientability
      2.1.3 Precedence
   2.2 Asymmetries in Time
      2.2.1 Time Reversal Invariance
      2.2.2 Irreversibility

3 (In)determinism
   3.1 Definitions
   3.2 Indeterminism and Indeterminacy
   3.3 Predictability
   3.4 Chance, Randomness and Ignorance
   3.5 Chance and the Laws of Nature

4 Cutting the Gordian Knot
   4.1 Indeterminism and the Asymmetries in Time
      4.1.1 Nomological Irreversibility
      4.1.2 Accidental Irreversibility
      4.1.3 Practical Irreversibility
   4.2 A Brief Summary
   4.3 Indeterminism and the Asymmetry of Time
      4.3.1 The Entropic Heresy
      4.3.2 Excuse Me, When is The Future?
      4.3.3 Chance and Becoming
   4.4 A Heavy Burden on Time's Wagon

II A Tale of Two Theories

5 The Origins of Probability
   5.1 Chance and the Atomic Hypothesis
   5.2 The Quantum Measurement Problem
   5.3 Schrödinger's Equation is Not Always Right
   5.4 The Sirius Problem is a Serious Problem
      5.4.1 The Spin Echo Experiments
      5.4.2 Radical Interventionism in the Classical Regime
      5.4.3 A New Kind of Ignorance
      5.4.4 Quantum Decoherence
   5.5 Are Observed Systems Always Open?

6 A Tale of Two Theories
   6.1 Reductionism - The 'New Wave'
      6.1.1 On Reduction
      6.1.2 The 'New Wave'
   6.2 Inter-theoretic Reduction—TD and SM
      6.2.1 An Exercise in SM
      6.2.2 Chance and Ignorance in SM
   6.3 Reductionism—a Holy Grail or a Misguided Quest?

III Constructing the Principles

7 For a Fistful of Entropy
   7.1 Holy Entropy, It's Boiling!
   7.2 Entropy in SM - Boltzmann vs. Gibbs
      7.2.1 Gibbs
      7.2.2 Carnap Against 'Subjectivity'
      7.2.3 The (Real) Case Against Coarse Graining
      7.2.4 Boltzmann
      7.2.5 A Short Bookkeeping
   7.3 Entropy in the Quantum World
      7.3.1 Von Neumann's Entropy
      7.3.2 Wigner's Function
      7.3.3 Another (Shorter) Bookkeeping
   7.4 Taking Sides?

8 Constructing the Principles
   8.1 Principles All the Way Down
   8.2 Thermodynamics as a Principle Theory
   8.3 From Principles to Laws
      8.3.1 The First Law—Energy Conservation
      8.3.2 The Zero and the Third Laws
      8.3.3 The Second Law and the Myth of Irreversibility
      8.3.4 The Minus First Law—The Approach to Equilibrium
   8.4 From QM to TD—Constructing the Principles
      8.4.1 Orthodox QM
      8.4.2 GRW
      8.4.3 No-collapse plus Decoherence
      8.4.4 The Past Hypothesis
   8.5 QM as a Principle Theory

9 Coda

IV Appendices

A Lost in Space and Time
B Phase Space Jargon Made Simple
   B.1 Basics
   B.2 Probabilistic Assumptions
   B.3 Non Equilibrium
C The ABC of Decoherence

Bibliography

List of Figures

4.1 Billiard Balls
8.1 Carathéodory's Principle

Acknowledgements

This dissertation started as an attempt to clarify some hasty inferences in the literature from indeterminism to non-time-reversal-invariance, and gradually took shape as a full-blown research project.
Many thanks to my advisor Steven Savitt for his insistence on the clarification of the aforementioned inference and for his constructive remarks during this project. I also thank two other members of my Ph.D. committee, Paul Bartha and Alan Richardson, for their helpful comments and suggestions, and William Demopoulos from the University of Western Ontario for encouragement and stimulating correspondence. On the other side of the ocean, three figures have influenced my thoughts and helped me in clarifying some of the ideas propounded here. They are Orly Shenker, who is guilty of exposing me to the open system approach in the first place, Meir Hemmo, who was patient enough to explain to me some of the physics involved, and my former MA advisor, Itamar Pitowsky, who lured me into the realm of the philosophy of physics. During the writing of the thesis I enjoyed financial support from the University of British Columbia Graduate Fellowship and from the St. John's College Reginald and Annie Van Fellowship. Finally, this work could not have been completed without the ongoing support and encouragement of my wife, Maria Adele.

Chapter 1
Introduction

1.1 Chance and Time

Never be afraid of dangers that have gone by! It's those ahead that matter.
E. Schrödinger, Irreversibility, 1950.

Time seems essentially directed. If there is one property other than dimensionality that distinguishes the temporal from the spatial it is the property of apparent directedness. But is the direction of time an objective feature of the world or a subjective feature of our experience of it?
Many seek the basis for the asymmetries in time and the direction of time itself in physical processes and the theories that explain and describe them. The idea is that if physics is to justify the hypothesis that its laws are valid for everything that happens in nature, it should be able to explain and describe the fundamental asymmetries which define what may be called a direction in time or even a direction of time. Surprisingly, the very laws of nature are, as far as we know, in pronounced contrast to these fundamental asymmetries, and so the problem of the direction of time—accounting for these asymmetries using time-symmetric laws—is often considered one of physics' 'skeletons in the cupboard'.1

Another perennial topic that unifies physics and philosophy is the notion of chance. While there is more to chance than a physical notion, and the concept has wider meaning and applications, especially in everyday life, there are also 'laws of chance' formulated in the calculus of probability, and these are extensively applicable in physics. When investigating the meaning and the origin of probability in a physical theory one must address several different issues, but philosophers, interested in the ontology of the theory as they are, tend to focus on two questions, namely, what the probabilities are probabilities for, and how the probabilities are to be understood.

1See for example Leggett (1987, ch. 5). Although experiments in high-energy physics suggest that neutral kaon decay violates time-reversal-invariance, these violations are indirect and very slight, and their ultimate cause is still a matter of debate. For these reasons I shall ignore them in what follows.
These questions lead many to look for the origin, or the nature, of probability itself, and here the notion of chance has an alternative. Just as probabilities can arise from objective physical chance, they could equally well be a result of lack of knowledge, or ignorance. The distinction between the notions of chance and ignorance is crucial to the long-standing debate on the character of the fundamental physical laws: whether these are deterministic or indeterministic. Here the literature reveals a Tower of Babel character of discussions, and disagreements prevail, from ideas about the embodiment of determinism as an a priori truth, to conflicting views, in light of the development of Quantum Mechanics (QM), on the meaning of probability and its origins and source.2 These debates are as old as the problem of induction, and some even claim that given certain phenomena one can never tell whether deterministic or indeterministic laws are the ultimate reason behind them, that is, whether or not the fundamental laws of physics involve objective physical chance.

1.2 Beliefs

The linking of indeterminism and time goes back at least to Aristotle, who in his famous struggle against fatalism gave up the determinacy of future contingencies.3 But even today there are many who believe that the issue of indeterminism is closely related to the puzzle of the direction of time.4 Thus, for example, Bergson asked:

What is the role of time? ... Time prevents everything from being given at once ... Is it not the vehicle of creativity and choice? Is not the existence of time the proof of indeterminism in nature?5

Vague as these ideas are, they are propounded not only by unscientific intellectuals but also by philosophers of science such as Hans Reichenbach. In the hands of Reichenbach, indeterminism becomes a key ingredient in the project of supplying a physical basis for the direction of time:

2Throughout the thesis, if not explicitly mentioned otherwise, QM refers to non-relativistic QM.
3Aristotle's 'sea battle tomorrow' (De Interpretatione, ch. 9) has generated a vast literature, but see Sobel (1998, ch. 1) for an evaluation of arguments for fatalism.
4See, e.g., Arntzenius (1995), Prigogine and Stengers (1996), Elizur (1999; 2001).
5Bergson (1930/1959), cited in Prigogine and Stengers (1996, 14).

The distinction between the indeterminism of the future and the determinism of the past is expressed in the last analysis in the laws of physics... [T]he concept of becoming acquires a meaning in physics: The present, which separates the future and the past, is the moment when that which was undetermined becomes determined, and "becoming" means the same as "becoming determined".6

What Reichenbach has in mind is the idea that QM can vindicate the alleged metaphysical difference between past and future. This idea, that QM, or at least certain interpretations of the theory, is somehow connected to the direction of time, is echoed also in J. R. Lucas' 'A Century of Time':

It is difficult to make sense of quantum mechanics, and the interpretation I adopt is rejected by many. But it is natural to try to interpret it realistically, and to construe the collapse of the wave packet as a real event in which the many possible eigenvectors, with their associated eigenvalues, give way to a single eigenvector with one definite eigenvalue. If this is so there is a definite moment of truth when possibilities become definitely true or definitely false ... [T]here is a worldwide tide of actualization - collapse into eigenness ...
[QM] not only insists on the arrow being kept in time, but distinguishes a present as a boundary between an alterable future and an unalterable past.7

QM is infamous for its measurement problem, i.e., the problem that any attempt to reconcile the simple fact that measurements have results with the quantum formalism requires an arbitrary 'shifty split', as John Bell calls it, between the classical and the quantum world. Insoluble as the measurement problem is, it leaves open the question of the interpretation of QM. Indeed, even if most physicists and philosophers agree that QM involves probabilistic predictions, they disagree on the origin of these probabilities and on the completeness of QM itself. Einstein, for one, always believed that QM would be overthrown by a more complete theory, and many after him view QM probabilities as representing ignorance, or lack of knowledge. Yet in orthodox QM, and in recent attempts to replace it with a theory less instrumental, the probabilities arise not from ignorance but from the chancy character of nature. Recently, David Albert (1994; 2001) has suggested a deep potential connection between the measurement problem and the time-asymmetries we encounter in everyday life. In so doing Albert has made the first explicit attempt to transform the vague beliefs about the relation between time and chance into a specific physical framework.

6Quoted from the appendix to his posthumously published work The Direction of Time (1956, 269).
7Lucas in Butterfield (1999, 10). Lucas goes further to claim that the collapse of the wave function is a vindication of absolute time.

1.3 Goals

The first goal of the dissertation, to be fulfilled in the first part of the thesis, is to explore the conditions under which a relation between the notion of chance and the problem of the direction of time might be established.
Contemporary understanding of this problem (or cluster of problems) is far better than it was thirty years ago when John Earman spelled it out in the influential paper 'An attempt to add a little direction to the problem of the direction of time',8 but although it has been claimed that we can explain some of the problems today,9 or at least show how what was thought to be a problem is either an artefact or a genuine problem yet not for everyone,10 the persistence of those who argue for a relation between time and chance requires once and for all a thorough analysis that would either vindicate beliefs such as those cited here or convince us that they are wrong.

A more ambitious task, which is undertaken in the following two parts, is to demonstrate how the distinction between the notions of chance and ignorance, the origin of the probabilities one encounters in statistical physics, might be relevant to thermodynamic irreversibility, which is one of the most discussed time-asymmetries in the literature. According to the received wisdom one can solve the puzzle of thermodynamic asymmetry in time just by sweeping it under the rug of cosmology and combinatorics. But to this extent the standard explanation for irreversible phenomena is still incomplete, since it endows our ignorance with an explanatory power and no justification is given to the probabilistic assumptions it involves. The well-documented historical background for the attempts to supply a dynamical explanation for irreversible phenomena, i.e., the dawn of the atomic hypothesis and the early attempts of Ludwig Boltzmann to derive the second law of thermodynamics (TD) solely from dynamical considerations,11 indicates that the project of accounting for TD with statistical mechanics (SM) might be a good place to start looking for a relation between chance and time.

8Earman (1974).
9See for example Savitt (1995, 19) and Callender (1997).
10As Callender (ibid., S233) notes, whether there is a philosophical puzzle depends upon one's general stance in the philosophy of science.
11For a complete historical account see Brush (1976b; 1976c, ch. 14; 1983, ch. 2).

1.4 Strategy

The asymmetry between past and future and the existence of irreversible phenomena are part of our everyday experience and common sense, and yet modern physics employs time-symmetric laws and is commonly viewed as hostile to asymmetries of time such as temporal becoming or the handedness of time. Thus, the first step is to make the definitions of these asymmetries precise and hence susceptible to philosophical and scientific scrutiny: in setting the stage for the discussion, chapter two distinguishes between different notions of time-asymmetries that form the cluster of what, due to Arthur Eddington,12 is now known as 'time's arrows'.

The second step, carried out in chapter three, is to distinguish between different notions of indeterminism that might furnish the discussion on chance and time with further analytical tools. After familiarizing ourselves with the two clusters of concepts, we shall turn to fulfil the first goal of this study. In laying down the possible connections between time-asymmetries and the notion of chance, we show in chapter four that while a relation between the two clusters of concepts exists, it is very subtle and quite different from what is expressed in the aforementioned beliefs.
The conclusion, however, is not entirely negative, as along the way we distill the philosophical stance and the conditions, or presuppositions, under which such a subtle relation between time-asymmetry and the notion of chance can be established. It turns out that chance is neither sufficient nor necessary for solving the problem of the direction of time unless one is guilty of certain philosophical prejudices. In the case of asymmetries of time these include spacetime relationalism; in the case of asymmetries in time—a bias towards fundamentalism and inter-theoretic reductionism. Moreover, even when one is committed to these philosophical positions, it only happens to be the case that the fundamental theory for which time-reversal-invariance fails is also indeterministic and admits genuine chance.

Setting the asymmetries of time aside, I focus in the second part of the thesis on the case of the reduction of TD to SM and QM and on the explanation of the most discussed asymmetry in time, namely the approach to thermodynamic equilibrium. There are both historical and practical reasons behind the choice of this particular asymmetry in time. First, probabilities entered physics for the first time with the kinetic theory of gases and with the development of SM. Second, contrary to the case of the asymmetries of time, where there still exists no concrete quantum theory of spacetime, the idea of reproducing TD from QM has already been translated into a precise physical framework. The next two chapters are devoted to the exposition of these two issues. Chapter five reviews the different approaches to the foundations of QM in this framework. The issue of inter-theoretic reduction itself, both in general and in the particular case of TD and SM, is discussed in chapter six, where a certain view of inter-theoretic reduction, 'the new wave', is defined (but not defended).

12Eddington (1928, 69).
This view serves as a tool for exploring the roles probabilities play in explaining the thermodynamic arrow in time, whereas the approaches presented in chapter five are described as different levels of inter-theoretic reduction, varying from replacing to retaining the current theory of SM.

The standard, Boltzmannian, explanation of the approach to thermodynamic equilibrium relies on a conjunction of three ideas: (1) a distinction between two levels of description (macro and micro); (2) an assumption of equiprobability of the micro-states in a given equilibrium macro-state that makes violations of thermodynamics possible but atypical; (3) an initial low-entropy state of the universe (the past hypothesis). In this account there is no mechanism, or causal explanation, for the approach to equilibrium, and the latter is described as a transition from an improbable state to a probable one. The mechanical models introduced are highly unrealistic and the probabilistic assumptions they involve are unjustified.

Alternatives to the standard account aim to reduce thermodynamics to mechanics by (1) introducing realistic models and by (2) justifying the probabilistic assumptions of statistical mechanics. The nature and the role of the past hypothesis in these alternatives vary, as does the interpretation of the probabilities they introduce into the dynamics in order to recover thermodynamic behaviour. A detailed comparison between these alternatives is the task of the final part of the thesis. In chapter seven we discuss how each alternative treats thermodynamic concepts such as entropy. In chapter eight we do the same for thermodynamic laws. The twofold comparison exemplifies the different levels of inter-theoretic reduction that the different approaches achieve, as well as the origin of the probabilities they employ, and serves to expose the epistemological price they both carry.
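The Boltzmannian picture sketched above, in which equilibration is a transition from an improbable macrostate to an overwhelmingly probable one, can be made vivid with a toy simulation. The Ehrenfest urn model below is my own illustration, not a model discussed in the thesis: N balls start in one urn (a rare macrostate), and at each step a randomly chosen ball switches urns.

```python
import random

def ehrenfest(n_balls=1000, n_steps=20000, seed=0):
    """Ehrenfest urn model: n_balls distributed over two urns A and B.

    Starting from the improbable macrostate 'all balls in A', each step
    picks one ball uniformly at random and moves it to the other urn.
    Returns the number of balls in A after each step.
    """
    rng = random.Random(seed)
    in_a = [True] * n_balls   # improbable initial macrostate: all in A
    count_a = n_balls
    history = []
    for _ in range(n_steps):
        i = rng.randrange(n_balls)        # pick a ball at random...
        count_a += -1 if in_a[i] else 1   # ...and move it to the other urn
        in_a[i] = not in_a[i]
        history.append(count_a)
    return history

hist = ehrenfest()
print(hist[0], hist[-1])  # starts at 999; ends near the 50/50 split
```

The run drifts (up to fluctuations) towards the 50/50 macrostate and then hovers there; excursions back towards the initial state are not forbidden by the dynamics, merely atypical, which is exactly the Boltzmannian point about "possible but improbable" anti-thermodynamic behaviour.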
In order to keep the discussion fluent throughout the text, technical details are confined to three appendices.

Part I
Cutting the Gordian Knot

Chapter 2
Time-Asymmetries

All the king's horses and all the king's men
Couldn't put Humpty Dumpty in his place again
Lewis Carroll, Through the Looking Glass

Time is an intriguing domain of inquiry. From everyday life, where there is never enough of it, to modern physics, where recent quantum gravity gurus claim that there isn't any at all,1 inquiries about time often carry with them an emotional flavour. Philosophical reflections on time go back to Heraclitus, Parmenides and Zeno, and in some sense their ideas are still echoed in modern discussions. Among the many interesting subject matters in this domain of inquiry is the puzzle of time-asymmetries: we speak of time's 'arrow' or the 'flow' of time, intending to suggest that time has a direction; indicating a profound difference between 'earlier than' and 'later than'; discriminating between 'past' and 'future'.

The subject matter of time-asymmetry constitutes a cluster of inter-related problems. Within this cluster we must first make a distinction between asymmetries in time and the asymmetry of time itself, and even within these two categories there are further distinctions that should be made.2 A glance at the literature reveals disagreements already on matters of basic definitions, which stem from philosophical prejudices with respect to the nature of spacetime.

1See, e.g., Barbour (2000).
2See, e.g., Horwich (1987).

2.1 Asymmetries of Time

2.1.1 Becoming

Everyday phenomena, from ice melting in a whisky glass, to life itself, appear directed. Physical processes have a certain direction they follow, which implies an asymmetry of these processes in time. There is, however, another kind of asymmetry which is so commonsensical that it is often taken
for granted—an asymmetry between past and future—which indicates that time itself is asymmetric. Some claim this asymmetry has an epistemic side, i.e., we know more of the past than we know of the future.3 But the stronger claim is that the difference between past and future is not only epistemological, or conventional, but ontological: past and future, or so the story goes, are ontologically different in that past events have already happened while future events are still unreal.

It was Newton who wrote in the Principia that absolute time "of itself, and from its own nature, flows equably without relation to anything external", and the notion of the 'moving now' is sometimes understood as the arrow, or the direction, of time itself. Eddington called this notion of passage 'becoming'.4 This view raises several objections, the strongest being the second temporal order with reference to which such movement can be described.5 One, however, can avoid this problem by making becoming a primitive notion. The most articulate advocate of this view is C. D. Broad:

To "become present" is, in fact, just to "become", in an absolute sense; i.e., to "come to pass" in the Biblical phraseology, or, most simply, to "happen". ... I do not suppose that so simple and fundamental a notion as that of absolute becoming can be analysed, and I am quite certain that it cannot be analysed in terms of a non-temporal copula and some kind of temporal predicate.6

Temporal becoming is sometimes regarded as a philosophical quibble, mainly among physicists, who generally believe that physics is incapable of, and altogether uninterested in, capturing an ontological distinction between 'past' and 'future', especially if the latter has to do with a kind of concreteness, or 'coming into being', of events or facts.7 Indeed, time is believed to be homogeneous in the sense that an experiment that is carried out today will give the same result as one carried out tomorrow, and time homogeneity is captured by time-translation-invariance (TTI).8 Agreed, the latter should not be regarded as an a priori property of natural laws, but the circumstances in which it is violated are so farfetched that they are unlike anything found in the history of physics.9 Yet that physics employs symmetries and that these are related to conservation principles is not contentious.10 On the contrary, the questions we shall be occupied with here are whether the existence of symmetries in physics in general can be used as an argument against metaphysical notions such as temporal becoming, and whether introducing chance into physics may serve to reconcile the latter with the former.

The indifference of physics to temporal becoming is one of the best examples of the problems that one encounters when transferring metaphysical notions into physical language. Apart from symmetry considerations, time in physics is usually treated as a parameter, not a variable or an observable with specific spectra or characteristics.

3The common reference is Reichenbach (1956, ch. 2), where it is claimed that 'we can have records of the past but not of the future'. This idea is further developed, criticized and refined by Horwich (1987, ch. 5). Savitt (1990) offers a criticism of Horwich's account of the epistemological asymmetry within the framework of an isotropic time.
4Eddington (1928, 89).
5Savitt (1995, 7-10).
6Broad (1933, 280-281). Replacing the spatial analogy of the 'moving now', which yields a regress of temporal orders, Broad's 'absolute becoming' is an alternative for 'passage' within the framework of the 'dynamic' view of time. See also Broad (1923, 66-68) and McCall (1976, 348).
7For attempts to rehabilitate the 'now' in physics see, e.g., Dobbs (1969), McCall (1976), Arthur (1982), Stein (1991) and Shimony (1993, Vol. II, ch. 18).
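The notion of time-translation invariance just introduced (footnote 8 gives the prose definition) can be set out compactly; the LaTeX spelling-out below is mine, but the symbols (the set of lawful worlds W_L, the shift Δ, the shifted world W_Δ) are the footnote's own:

```latex
% Time-translation invariance (TTI) of the laws L.
% W_L     : the set of spacetimes (worlds) allowed by the laws L.
% W_Delta : the world W with all its physical contents shifted in time.
\forall \Delta \in \mathbb{R}:\quad
  W \in W_L \;\Longrightarrow\; W_\Delta \in W_L,
\qquad \text{where } W_\Delta(t) = W(t + \Delta) \ \text{for all } t.
```

Read this way, TTI says the set of physically possible worlds is closed under shifting everything forwards or backwards in time, which is the closure property the footnote describes; by Noether's theorem (footnote 9) this invariance is tied to the conservation of energy.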
In fact, in trying to treat time as an observable in Q M one is only led into trouble. First, it turns out that even if we can construct a time-operator the measurement of which gives the time-of-arrival, i.e., the time that a particle first arrives to a particular location (one could also consider this as the time of first occurrence of any event, say, that an operator yields a specific eigenvalue), then although the particle's arrival would be recorded, any attempt to measure this time by a clock would result in the particle being reflected without us being able to record the time. 1 1 Second, and in a similar way, when one tries to find a sDefine W L as a set of spacetimes with physical laws L . The laws L are time-translation-invariant just in case for any real A , if W € W L , then also WA G W L where WA(t) = W(t + A ) for all times t, which basically means that physical possibilities are closed under the operation of time translation which shifts all of the physical contents of spacetime forwards or backwards on time by a given moment, or, with the proviso that symmetries of laws are as inclusive as symmetries of spacetime (see below), that a world created two days later than ours would have been identical to ours. °Due to Noether's Theorem (see below), a violation of TTI , apart from indicating a zero point for time, would result in failure of conservation of energy. 1 0See Feynman (1965, ch. 3,4). 1 1 Note that this uncertainty is quite different than Heisenberg's uncertainty with respect to time and energy since it appears already in a situation of a single measurement. See Unruh et. al. (1999) who show that the minimum inaccuracy in measuring the time-of-arrival is given by AtA > 1/Ek (2.1) where A is the time-of-arrival operator, k the width of the localization Gaussian and E the energy of the clock in that position. 
The lower bound on precision is due to the fact that the energy of the clock increases as its accuracy increases, and if this energy is too 'time-operator' in a closed quantum system time itself 'disappears'. All this leads many physicists to regard a 'time observable' as having a somewhat limited physical meaning.12 Another puzzling feature of temporal becoming is that while it might make sense in Newtonian spacetime,13 it is much harder to think of in the relativistic spacetime arena, where the relativity of simultaneity in Minkowski spacetime leads one to reformulate one's intuition regarding time.14 That simultaneity can only be defined relative to a frame of reference implies that 'the world in an instant' is a relational concept, but how can we relativize the notion of 'coming into existence'? This difficulty leads many to view temporal becoming as mind-dependent,15 or even as an illusion, altogether non-existent. And yet, as in the case of time-translation-invariance, one should wonder whether the truism that a mathematical description which is static by definition cannot represent the 'dynamical' aspect of time has any force as an argument against this aspect.16 Indeed, and contrary to authorities such as Putnam (1967) and Penrose (1989), who use Minkowski spacetime to vindicate what Savitt (2001) calls 'chronological fatalism', that is, an attribution of definiteness or concreteness to the entire set of events in spacetime, Stein (1968; 1991) gives a precise definition of becoming in Minkowski spacetime which is relative to a spacetime point—the relation is that between a point x and each point in or on its past light cone. In replacing the classical notions of time with the relativistic ones Stein demonstrates how temporal becoming turns out to be an intrinsic structural feature of spacetime geometry.
Attempts to generalize Stein's definition are currently in progress.17 high, then the particle cannot use its own energy to 'turn off' the clock. 12The most recent popular statement about the disappearance of time in quantum gravity is due to Barbour (2000). See also Rovelli (1994). 13Due to absolute simultaneity one can 'slice' spacetime into layers of 'nows' and distinguish between absolute future and past. 14Savitt (2001) gives a concise description of the difficulties one encounters when trying to define becoming in Minkowski spacetime. 15Grunbaum's 'The meaning of Time' in Freeman and Sellars (1971, 195-228) and Grunbaum (1973, ch. 10). 16See Park (1972, 112): No formula of mathematical physics involving time implies that time passes, and indeed I have not been able to think of a way which the idea could be expressed, even if one wanted to, without introducing hypotheses incapable of verification. 17See Myrvold (2002a; 2002b).

2.1.2 Handedness, Symmetries, and Orientability

Temporal becoming—an ontological difference between the tenses—is a token of the asymmetry of time. As such it should be distinguished from yet another instance of this type of asymmetry, namely the anisotropy of time or time-handedness. The latter is commonly viewed as an intrinsic structural difference in the two time directions, illustrated in Horwich (1987, 39):

Imagine an endless, straight tube. If it gradually becomes narrower in one direction then we would regard the tube as anisotropic. But if it is a perfect cylinder and everywhere a constant colour, temperature, etc.—the same in both directions—the thing would seem isotropic, and a direction along it could be specified only by reference to particular parts of the tube.
Although isotropy is a continuous symmetry, it is usually confounded in the literature with time-reversal-invariance (TRI).18 Since the latter is a discrete reflection symmetry, rather than anisotropy the more appropriate notion for its violation is handedness. Before moving on to make this asymmetry precise in the relativistic spacetime arena, some general remarks on symmetries are in order. Intuitively, a system is said to possess symmetry if one can make a change or a transformation in the system such that after the change the system appears exactly the same as before. Note that what qualifies as a system is open for discussion. An ancient Greek temple might contain a symmetry, as well as the anatomical features of living organisms. In physics symmetry has become understood as the fundamental ingredient in the formulation of the laws and of spacetime. Space and time form the stage upon which the dynamics of complicated physical systems is played out. Yet, in a profound way, the symmetries of space and time control the dynamics. This interrelationship between symmetry principles and dynamics is the content of 'Noether's Theorem', which states that for every continuous symmetry there must exist a conservation law and vice versa.19 The relation between the symmetries of laws and the symmetries of spacetime is crucial to the following discussion. The question is which set 18We shall discuss the definition of TRI in the following section. In short, the time reversal transformation is a mapping f : t → −t. If a law is formulated as differential equations and the mapping f takes every solution of the equations to another solution of the equations, we can say that the law is TRI.
19For the specific relations between spacetime symmetries and conserved quantities see Elliot and Dawber (1979, ch. 15). of symmetries is more inclusive. If symmetries of laws were more inclusive than spacetime symmetries, then spacetime would contain more structure than is needed to support the laws. Ours would then be just one of the possible spacetimes under a given set of laws. On the other hand, symmetries of dynamical laws should be at least as wide as symmetries of spacetime; for if the laws allow one history but not another, then those histories cannot be connected by a spacetime symmetry—otherwise there would be no way to express the difference between the allowed and the prohibited histories in terms of the behaviour of physical magnitudes on the spacetime canvas. That symmetries of dynamical laws should be at least as wide as symmetries of spacetime is reflected in the proviso that the laws of motion of a particular theory should contain quantities that are invariant under the symmetry group of the corresponding underlying spacetime. For example, for a theory to be Galilean invariant means that its laws will take the same form in any inertial frame, that is, none of the inertial frames is to be preferred or to be considered as the rest frame. Different frames yield different velocities, but none of these velocities should be considered as absolute. To achieve Galilean invariance a theory must have Galilean invariant quantities—such as acceleration or mass—and so, in the case of Newtonian mechanics a necessary and sufficient condition for it to be Galilean invariant is that its fundamental force, i.e., GmM/r^2, be Galilean invariant, and indeed it is.
Returning to our discussion of time-asymmetries, the idea that symmetries of dynamical laws should be at least as wide as symmetries of spacetime is manifest in the long-standing debate on what constitutes an intrinsic feature of spacetime structure, which, in our case here, transforms into the debate over the question which of the symmetries is more inclusive and hence should be given precedence in deciding the properties of spacetime. On the one hand, one can view the asymmetries in time as preceding the asymmetry of time, in the strong sense that the former fix, or entail, the latter. Reichenbach (1956), Gold (1962; 1967) and Price (1996) serve as examples of such a view. The precedence of the asymmetries in time over the asymmetries of time, along with assumptions regarding the nature of the former (namely their being governed by time-symmetric laws), leads these authors to the profound conclusion that although asymmetries in time prevail in our local neighbourhood, time itself has no unique direction. On the other hand, one can regard the asymmetry of time itself to precede the different asymmetries in time to the extent that if it exists, such an asymmetry is an intrinsic feature of spacetime which does not need to be and cannot be reduced to non-temporal features. This view is explored by Earman (1974) who, in a seminal paper, transforms the intractable problem of the direction of time to a set of tractable problems by insisting on defining it in the relativistic spacetime arena.
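The closing claim about the Newtonian gravitational force can be checked directly (a toy sketch of my own; the masses, times and boost velocity are arbitrary numbers): under a Galilean boost x → x + ut the separation of two bodies, and hence F = GmM/r^2, is unchanged.

```python
# Sketch (my own example, not the thesis's): Galilean invariance of the
# Newtonian gravitational force in one dimension. Boosting both positions
# by the same u t leaves the separation r, and hence G m M / r^2, intact.
G, m, M = 6.674e-11, 2.0, 5.0   # arbitrary masses, SI-like units

def force(x1, x2):
    r = abs(x1 - x2)
    return G * m * M / r**2

def boost(x, t, u):
    return x + u * t            # Galilean transformation to a moving frame

t, u = 7.3, 42.0                # arbitrary instant and boost velocity
x1, x2 = 1.0, 4.0
f_rest = force(x1, x2)
f_boosted = force(boost(x1, t, u), boost(x2, t, u))
assert abs(f_rest - f_boosted) < 1e-12 * f_rest
print("the gravitational force law is Galilean invariant")
```

Velocities themselves, by contrast, would fail such a check, which is exactly the point of the passage: only boost-invariant quantities such as separations, accelerations and masses may enter the fundamental law.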
The basic definition that serves as bedrock in this framework is the concept of orientability of the spacetime manifold.20 A relativistic spacetime <M, g, ∇>, where M is a differentiable manifold, g is a Lorentzian metric on M and ∇ is the unique symmetric linear connection compatible with g, is said to be temporally orientable if there exists a method to perform a continuous parallel transport of any timelike vector on a closed curve which keeps it timelike while pointing consistently on what we label as its future lobe. This definition helps us to further derive the notion of a time-order and of a global time function,21 and will be taken here to represent the general notion of asymmetry of time: time has a direction if spacetime is orientable.22 Earman warns us that the asymmetry of time cannot be entailed by the other asymmetries in time and also need not entail them. The correct way to view the relations between these asymmetries according to Earman is to note that the asymmetry of time imposes constraints on what is physically acceptable in terms of asymmetries in time.23 These two physical notions of time-asymmetry, the global asymmetry of spacetime and the local asymmetries in spacetime, are basically two different ways to define the time-order relation of 'later than'. We can follow Earman and define it according to global spacetime features, or we can follow almost everyone else and define it according to the local physical processes occurring in the spacetime manifold. As expected, problems arise when the two views—the local and the global—disagree.24 As we have seen, the asymmetry of time has several tokens, one of which is the handedness of time. Indeed, over and above the time-order relation there exists also a metaphysical notion of time-asymmetry which involves a structural distinction between the two time directions.
Earman regards this asymmetry as yet another global feature of spacetime, but he urges us to distinguish between two nuances that can capture it. The first, which is often used by philosophers, he calls metaphorical asymmetry of time, much 20See also Appendix A. 21Earman (ibid., 17) and Appendix A. 22This definition becomes too weak in strange spacetimes such as Godel's universe. 23In the same way one interprets the relation between gravity and geometry—the stress-energy tensor and the metric—in general relativity. 24The local character of spacetime is compatible with many globally different structures, e.g., space can be locally Euclidean and have a topological character of a torus. on a par with 'how time flies'. This type of asymmetry is pre-theoretic and purely phenomenological. The second—defined as the non-existence of a diffeomorphism on a temporally oriented manifold that preserves its geometry, i.e. its metric and its connection, but reverses its temporal orientation—he calls literal asymmetry of time. Earman argues that time can be asymmetric in the former sense but not asymmetric in the latter, and warns us that the metaphorical notion cannot do any explanatory work, because if our current ultimate physical theory explains a temporal asymmetry in time but does it without any reference to the literal asymmetry, then we can always say that the theory is incomplete.

2.1.3 Precedence

So far we have described the asymmetry of time as a type of asymmetry between 'past' and 'future' with several tokens. Here we have set aside the epistemological asymmetry and focused on two other tokens: an ontological asymmetry between the tenses, captured by temporal becoming, and a structural asymmetry between the directions, captured by what is called in the literature time-anisotropy.
The former asymmetry is confronted with time-homogeneity and time-translation-invariance; the latter—with time-handedness and time-reversal-invariance.25 If we agree, as Earman urges us to, that questions about the direction of time should be formulated in the arena of a relativistic spacetime, then the two tokens presuppose a definition of the time-order relation in terms of temporal orientability of the manifold. Orientability captures the idea that time itself has a direction, in the sense that there exists a time-orientation on spacetime that fixes the distinction between 'past' and 'future' in terms of light cones erected consistently throughout the spacetime manifold. As Earman mentions, the question whether or not the actual world is orientable is almost never discussed in the literature. And yet orientability must be presupposed in order to formulate physical laws, symmetric or asymmetric, and because there is no empirical test to distinguish a spacetime from its covering spacetime, orientability can always be ruled in by a covering spacetime which is orientable.26 25Here I presuppose that the symmetries of dynamical laws are at least as inclusive as spacetime symmetries. 26A theorem to this end appears, e.g., in Cotsakis (2002). M is said to be a covering space of N if there exists an imbedding p : N → M which preserves the metric, where p is a surjective (onto) continuous map with every x ∈ N having an open neighborhood U such that every connected component of p⁻¹(U) is mapped homeomorphically onto U by p. In other words, p⁻¹(U) is a disjoint union of open sets in M each of which p maps homeomorphically onto U. Given orientability, time-isotropy is then defined as the existence of a transformation that preserves the geometry of spacetime but 'flips' the temporal orientation of the spacetime manifold.
But note that throughout the discussion nothing has been said about the nature of 'past' and 'future' qua tenses. Indeed, what criteria should be used in order to distinguish between the tenses over and above the conventionality of naming one of the directions of an orientable manifold as 'future' and the other as 'past'? We have formulated the debate on the direction of time as a debate over precedence. Earman, as we have seen, gives precedence to spacetime structure over the physical processes that inhabit it:

Assuming that spacetime is temporally orientable, continuous timelike transport takes precedence over any method (based on entropy or the like) of fixing time-direction.27

Earman's insistence on the precedence of the spacetime structure is aimed at correcting misunderstandings in the literature with respect to time-asymmetry that furnish claims such as the observer-dependent character of time-handedness.28 If one accepts the consequences of the relativistic arena then such claims are indeed unwarranted. Spacetime is not amorphous inasmuch as if an event A is earlier than an event B for a given observer and A and B are causally connectible, then A is earlier than B for every observer, and this is true irrespective of any time-asymmetry. Although the laws may be TRI, causally connectible processes can have no observer that can describe their reversal and regard the original process and its reversal as the same.29 But Earman, who establishes a direction of time, ignores the direction of time, i.e., he leaves open the question of deciding which direction is the 'right' one. This point is stressed in Grunbaum (1973, 788-801). Grunbaum sees the key to the anisotropy of time in de facto irreversible processes and argues that temporal orientability alone, although necessary, is not sufficient, because it singles out the tenses in name only.
According to Grunbaum temporal orientability simply sets the stage for further physical processes in which the asymmetry of time is manifest and which point at a non-arbitrary ontological difference between past and future. As Earman notes, this 'explanation' is purely metaphorical, and it stems from what he calls a 'reductionistic view of spacetime',30 but unless one rejects any relation 27Earman (ibid., 22). 28Grunbaum (1971; 1973) and Black (1962, ch. 10). 29This argument, along with the implications of spacetime symmetries on the asymmetry of time, is discussed in chapter four, section 4.2.1. 30A reductionistic, or a relationalist, view of spacetime holds that spacetime is not an between law-governed physical processes and the arena they are occurring in, one cannot rule out a nomological procedure that detects the direction of time, i.e., that decides which of the light cone lobes is the 'future'. Indeed, Healey (1981, 102), while not committing himself to accidental macroscopic irreversible processes, elaborates on Grunbaum and views orientability as necessary but not sufficient for detecting time-handedness:

One cannot read off the correct temporal orientation from the temporally asymmetric but undirected description of the content of some portion of the world over some period of time, any more than one can tell from a map of an unknown city with no directional indications, which direction on the map is North.

Although I am inclined to follow Earman's principle of precedence, I tend to agree with Healey that in order to detect the direction of time in one's spacetime one would have to take into account physical processes occurring in it. In other words, one would have to take a relationalist stance with respect to spacetime.
Agreed, the circularity here is obvious, as the one who believes that the past is different from the future is the one who will not be satisfied with Earman's principle, but note that Healey's position is to accept the latter and only to supplement it with a physical criterion for picking out the correct direction. In this sense it is weaker than a full-blown relationalist position which rejects spacetime as something that exists over and above relations between processes, and hence might be worthy of the name 'moderate relationalist'.31 Thus it behooves us to examine the second type of time-asymmetry—the asymmetry in time. It is captured by a class of physical phenomena in which an arrow, or a specific direction in time, is manifest insofar as observing these phenomena can indicate (at least locally) a specific time-order. For example, if you were given different snapshots of a physical process in which asymmetry in time is manifest you could arrange these snapshots in a unique sequential order. You could also detect when such a sequence is ordered in the 'wrong' way, as the famous example of a film running backwards indicates. A list of the asymmetries, or arrows, in time is presented in Savitt (1995). arena in which events occur but is nothing over and above a network of relations between events. 31I do not know if Healey would agree to such a characterization of his view, yet it appears to me that the distinction between relationalists and moderate relationalists is precisely the distinction between ontology and epistemology. Moderate relationalists accept that spacetime has an independent structure but in order to detect that structure they use physical processes in spacetime. Savitt notes that the existence of these arrows, if they do exist, is puzzling, because almost all our fundamental theories of physics seem to be time symmetric, or time reversal invariant.
This observation leads us to evaluate the different notions of time-reversal-invariance.

2.2 Asymmetries in Time

The phenomena in which asymmetry in time is manifest are also called irreversible. As the term 'irreversibility' indicates a possibility, it should be distinguished according to (1) classes of entities it qualifies and (2) levels of application, i.e., nomological, accidental, and practical. First let us point out the different types of entities that can bear the adjective 'irreversible'. There exist only three of them. The first and most commonly used (or abused) are physical processes themselves. The second are laws that describe or govern such processes, and the third are theories which employ laws. Problems arise when one groups together other types of entities under this loaded adjective. It is often said that systems are irreversible,32 or even that time itself is irreversible,33 but this is nonsense. Irreversible just means non-invariant under time reversal, and thus as an adjective it can only apply to the first three nouns mentioned above.

2.2.1 Time Reversal Invariance

The following definitions due to Savitt (1994) shall serve us in the discussion:

(TRI1) For a law of a particular theory to be TRI means that the law will be invariant under the time reversal transformation. In other words, let us define the time reversal transformation as a mapping f : t → −t. If the laws are formulated as differential equations and the mapping f takes every solution of the equations to another solution of the equations, we can say that the law is TRI.34

(TRI2) For a theory to be invariant under time reversal, or TRI, means that if the theory accepts as possible a dynamical process in a physical system S that develops from an initial state Si to a final state Sf in an interval t, then it will allow it to develop from the reverse, or mirror state, Sf^R, of Sf to the reverse, or mirror state, of Si in an interval 32Hutchinson (1993). 33Mehlberg (1980, 152).
34Callender (1996) calls it 'Formal TRI'. t. How the state S of a physical system is specified and how it relates to its mirror state S^R will depend upon the particular physical theory in question.35

(TRI3) Finally, a process can be reversible if the sequence of states that constitute it can be undone. If an initial state Si develops to a final state Sf according to some theory T, then Sf^R, the mirror state of Sf, must develop to Si^R, the mirror state of Si. This definition is stronger than the former, as it constrains a TRI2 theory to allow only the exact reversal of a dynamical process.36

2.2.2 Irreversibility

As we are dealing with a kind of modality, we can redefine the notions of irreversibility in terms of nomological, accidental and practical ability. Nomological irreversibility should be interpreted as a twofold claim. First, that the fundamental laws of nature are irreversible, thus any reversal of the direction of the motion governed by these laws in time is nomologically impossible. Second, that our fundamental theory is non-TRI2. Remarkably, almost all our current theories are TRI2. Thus if we take these theories to be fundamental, that is, if we take these theories to resist reduction to a more fundamental theory, then the asymmetry in time that irreversibility phenomena manifest cannot be a nomological irreversibility; it must be either accidental or practical irreversibility. Accidental irreversibility means that even though the laws allow the occurrence of dynamical processes in reverse, as a matter of fact these processes do not happen, or at least, we rarely observe them. The last and weakest notion of irreversibility is practical irreversibility. In short, it amounts to the observation that it is practically impossible to recover the exact initial state of a dynamical process in a physical system.
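The content of TRI2 for Newtonian dynamics can be made concrete with a toy simulation (my own illustration; the oscillator and step count are arbitrary choices): evolve a state forward, form the mirror state by reversing velocities, evolve it for the same interval, and the mirror of the initial state is recovered.

```python
# Hedged sketch (my own toy example): TRI_2 in Newtonian mechanics, where
# the mirror state S^R is obtained by reversing velocities. Evolving S_i to
# S_f, then evolving S_f^R for the same interval, lands on S_i^R.
def step(x, v, dt, k=1.0, m=1.0):
    """One leapfrog step for m x'' = -k x (a time-reversible integrator)."""
    v_half = v + 0.5 * dt * (-k * x) / m
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * (-k * x_new) / m
    return x_new, v_new

def evolve(x, v, n, dt=0.01):
    for _ in range(n):
        x, v = step(x, v, dt)
    return x, v

x0, v0 = 1.0, 0.0
xf, vf = evolve(x0, v0, 500)      # S_i develops to S_f
xr, vr = evolve(xf, -vf, 500)     # the mirror state S_f^R, evolved as long
xb, vb = xr, -vr                  # mirroring once more should give S_i back
assert abs(xb - x0) < 1e-9 and abs(vb - v0) < 1e-9
print("initial state recovered: the dynamics is TRI")
```

The leapfrog integrator is used here precisely because it shares the time-reversal symmetry of the underlying law; a dissipative term added to the force would break the recovery and model nomological irreversibility instead.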
If by accidental irreversibility we mean that reversibility is not a nomological impossibility, then we find ourselves in a rather awkward position. 35Earman (1967). Callender (ibid.) calls it 'Motif TRI'. But see also Albert (2001, ch. 1) where it is claimed that this definition is mistaken. Albert claims that in order to achieve time-reversal we should first reverse the sequence of states and then reverse its dynamical conditions, in exact opposition to what is suggested here (and in many textbooks). These operations commute in Newtonian mechanics, says Albert, where the dynamical conditions (the operation that leads to S^R) are only velocity reversals. Yet in any other fundamental theory they do not commute, hence these theories are indeed non-TRI2 in Albert's sense. Albert's own rejoinder (ibid., 15) is to attribute TRI2 to our theories with respect to particles' positions. See Arntzenius (2004) for a criticism of Albert's view. 36Callender (ibid.) calls it 'Actual history TRI'. After all, we do not see cigarettes built up from their ashes into the smoker's mouth, or gases that suddenly shrink into a confined volume in their container. In order to 'save the phenomena' we need to say that although reversibility is indeed possible, it is highly improbable. Yet now we need to supply a relation between possibility and probability; that is, we need to impose a probability measure on all possible states in such a way as to make reversibility possible but highly improbable. The domains of thermodynamics (TD) and statistical mechanics (SM) are perfect playgrounds for such considerations. In TD one defines equilibrium states as states in which macroscopic parameters, such as temperature or pressure, remain stationary in time.
The second law of TD dictates that when a closed system suffers an irreversible quasi-static process,37 a thermodynamic state function, entropy, can be associated with each equilibrium state in such a way that as the system evolves entropy can never decrease. In an attempt to give a mechanical description of the second law, that is, to reconcile thermodynamic irreversible processes with underlying TRI2 laws of motion, Ludwig Boltzmann soon arrived at the conclusion that the second law cannot be derived from the equations of motion alone: if one defines entropy as a function of the dynamic variables of a closed individual system (whose phase space is bounded),38 then the quasi-periodic character of the dynamics prevents entropy from increasing for all times, and the TRI2 character of the dynamics prevents entropy from increasing for all initial states. Thus, Boltzmann had to invoke probabilistic considerations. He divided the phase space into 'cells' and defined a statistical mechanical 'degeneracy' function of the dynamic variables of an individual system that quantifies over the possible microscopic configurations of that system in a given macroscopic state.39 With this function the notion of physical possibility became precise. But how does one translate possibilities (possible micro-configurations) into probabilities? In order to connect the two one may put to work a familiar 37Quasi-static processes are slow, smooth changes from one equilibrium state to another. 38On the basic definition of phase space see Appendix B. 39There is an ongoing debate on the role of the underlying dynamics in the foundations of SM. Indeed, Boltzmann 'derived' the second law twice. His first 'proof'—the H Theorem (1872), attacked later by Loschmidt's reversal objection—relied entirely on dynamical considerations and was modified by Boltzmann (1877) to contain probabilistic assumptions. This modification, as well as Boltzmann's later discussions on the matter, e.g., the debate in Nature in 1894-5 and Boltzmann's response to further attacks such as Zermelo's (and Poincare's) recurrence objection, are interpreted by some, e.g. Ehrenfest (1912) and Klein (1973) (but see also Von Plato (1991) and Gallavotti (1995; 1999) for criticism), as a shift from dynamical considerations to statistical and combinatorial ones. metaphysical principle: the principle of indifference, which allows one to treat all possible states as equiprobable.40 With the postulate of equiprobability one can partition the possibility space, or the phase space representing the system's evolution, into equal volumes, impose a measure on it, and thus link the notion of phase space volume with the notion of probability. Given that the probability distribution is uniform, the phase point of a given system is just as likely to be in one region in phase space as in any other region of the same volume which corresponds to the macroscopic constraints, and hence a system is more likely to be found in regions of phase space whose volume is bigger than others.41 Since almost all of one's observations indicate that some states, e.g., equilibrium states in which irreversible processes terminate, are overwhelmingly abundant, one can then give them an equivalent representation in one's possibility space. In other words, one can say that they inhabit an overwhelming volume of phase space.
These considerations led Boltzmann to insist on the distinction between microscopic and macroscopic descriptions and to equate TD entropy with a 'degeneracy' function—an extensive function of an individual system in a given macro-state M that measures (up to an additive constant) the number of accessible micro-states x that represent the given macro-state M on a logarithmic scale, which, with the help of the natural Lebesgue measure imposed on phase space, is equivalent to the volume of phase space occupied by the micro-states that are determined by the macro-state:

S_B = k log |Γ_M(x)|    (2.2)

where k is Boltzmann's constant and |Γ_M(x)| is the volume of phase space associated with M. In sum, thermodynamic irreversibility turns out to be an accidental irreversibility and the second law is only typical: Boltzmann's entropy demonstrates that the number of the microscopic configurations that manifest an equilibrium state is overwhelmingly larger than the number of those microscopic configurations that represent non-equilibrium. 40This justification, which is propounded in, e.g., Bricmont (2001), who regards SM probabilities as Bayesian probabilities, goes back to Jaynes' (1983) 'maximum entropy' approach to the foundations of SM. 41As stated above, Boltzmann tried first to justify equiprobability and uniform distribution with ergodicity and the H theorem respectively. Today many believe that the ergodic project has failed: first, ergodicity, i.e., the equivalence of time averages and phase averages, is very hard to prove even for systems with few degrees of freedom, and second, the time scales for systems with many degrees of freedom are basically infinite. As a result, some textbooks of SM treat equiprobability as a priori necessary for the success of SM, e.g., Tolman (1938, 63-70). Yet, relating possibilities with probabilities is not the end of the story.
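Boltzmann's counting can be mimicked in a toy model (my own illustration, not the thesis's): take N two-state 'molecules', let the macro-state be the number n of them in the left half of a box, so that |Γ_M| is the binomial coefficient C(N, n) and S_B = k log |Γ_M|; near-equilibrium macro-states then swallow almost all of the possibility space.

```python
# Toy model (my illustration): Boltzmann counting for N two-state
# "molecules" (left/right half of a box). The macro-state is the number n
# in the left half; its "volume" |Γ_M| is C(N, n), and S_B = k log |Γ_M|.
from math import comb, log

N = 100
k_B = 1.0                          # Boltzmann's constant, arbitrary units

def S_B(n):
    """Boltzmann entropy of the macro-state 'n molecules on the left'."""
    return k_B * log(comb(N, n))

total = 2 ** N                     # all equiprobable micro-configurations
frac_equilibrium = sum(comb(N, n) for n in range(40, 61)) / total

assert S_B(50) > S_B(25) > S_B(0) == 0.0   # equilibrium maximizes S_B
print(frac_equilibrium)            # > 0.9: near-equilibrium dominates
```

Already at N = 100 the macro-states within ten molecules of the even split account for more than nine tenths of all micro-configurations; for thermodynamic N the dominance of equilibrium becomes the 'overwhelming' volume the text describes, while the extreme macro-state n = 0 has entropy zero and measure 2⁻¹⁰⁰.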
In fact, the aforementioned process of establishing this relation, while 'explaining' entropy increase and the approach to equilibrium as a typical feature of TD systems, a transition from improbable to probable states, raises a difficult problem. If the states that stand at the end of an irreversible process, i.e., equilibrium states, are overwhelmingly probable, then how come we are not in one such state? To describe equilibrium as overwhelmingly probable would only seem appropriate if the world were almost always in an approximate equilibrium state, and very rarely fluctuated away from equilibrium. Boltzmann speculated first that this was the case in our world, and that we indeed inhabit a huge cosmic fluctuation,42 but this state of affairs seems to go against our experience and memories.43 In order to see why, recall that we started with the premise that our theory is reversible, or TRI2, which means that nothing prevents entropy from increasing towards the past. Thus put, Boltzmann's problem in accounting with SM for the thermodynamic arrow in time is not that nomologically possible phenomena do not occur, but that SM, the theory that explains the TD phenomena that do occur, makes falsified predictions: a uniform probability distribution entails, along with a TRI2 theory, that, contrary to our experience and memories, entropy increases towards the past. In other words, even if reversibility is possible but highly improbable, then, given a TRI2 theory, it is as likely that entropy has decreased in our past as that it will increase in our future.
One, however, can always postulate, as Boltzmann himself did, that our actual world simply started with a low entropy state, making accidental irreversibility dependent on the initial conditions of our world, or on what is commonly known as the past hypothesis:44

That in nature the transition from a probable initial state to an improbable state does not take place as often as the converse, can be explained by assuming a very improbable initial state of the entire universe surrounding us.45

I would not like to get into a discussion here on whether the past hypothesis really explains accidental irreversibility, or even whether the past hypothesis itself demands an explanation.46 When it comes to experiments in one's lab, the past hypothesis is of a practical importance: one prepares a system in a low entropy state and watches how it approaches equilibrium. We know that for certain initial conditions and for certain macro-state functions, relaxation times can even be infinite.47 The issue is much more mysterious when one's system is the universe as a whole. The problem is not that such special initial conditions are improbable,48 or that they require 'special preparation',49 but rather that if our theories encompass the universe as a whole, i.e., if the universe as a whole is considered a thermodynamical system, and if our theories are TRI2, then a low entropy state could also exist at the other 'end' of the universe as well as at the 'initial' end.50 Obviously, to impose such a boundary condition on our universe is a serious move, especially if one tends to attribute precedence for physical asymmetry in time over the asymmetry of time itself. Does this mean that somewhere in the future time itself will flip directions? How can such a dramatic assertion be a consequence of innocent probabilistic assumptions? We shall address this puzzle in chapter four. In the meantime, and after exposing the subtleties of the asymmetries in time (and of time) and the different levels of irreversibility, we turn to analyze the distinctions between determinism and indeterminism; ignorance and chance.

42 Boltzmann (1895, 446-448).
43 Feynman (1965, ch. 5).
44 The term was coined by Albert (2001, ch. 4).
45 Boltzmann (ibid., 447).
46 For a view that accepts the former and denies the latter see Callender (1997; 1998, 149-150). For a view that insists on the latter see Price (1996; 2002).
47 The recognition that initial conditions and time-scales are related is exemplified in the vast literature on the Fermi-Pasta-Ulam (FPU) problem, e.g., Ford (1992). FPU studied a chain of perturbed harmonic oscillators and were surprised to discover that the system did not relax to equilibrium, in the sense that the initial energy did not become equipartitioned among the oscillators. The paradox turned discovery when it was realized that the initial conditions that were used in the experiment, that is, the initial energy and the Hamiltonian chosen by FPU, were precisely those pertaining to the regime of quasi-integrable systems. As Ford (ibid., 278) puts it, "[n]onintegrable systems are thick as fleas and integrable are scarce as 'hen's teeth'." Later on it was proved that a periodic Hamiltonian system, even when perturbed, demonstrates for enormously long time-scales a kind of stability, i.e., 'islands' of periodicity.
These 'islands' form a non-vanishing (positive) measure subset of phase space volume and hence the theorem, due to Kolmogorov, Arnold and Moser (KAM), is sometimes invoked as an objection against the relevance of ergodicity to the foundations of SM. See Goldstein (1958, 530), Sklar (1993, 169-174).
48 Penrose (1989, 444) estimates the probability for such an initial condition as 10^(-10^123), but what is the natural measure we can impose on possible universes?
49 We can prepare low-entropy states in the lab but who prepared the low-entropy state of the universe?
50 Price (2002) dramatically refers to this problem as 'Boltzmann's time-bomb'.

Chapter 3

(In)determinism

Too keen an eye for a pattern will find it anywhere
T. L. Fine, Theories of Probability

Discussions on determinism transcend the realm of physics and bear upon many fields, e.g., religion, ethics and free will, but even within physics, which is the main concern of this dissertation, the term 'determinism' is often used (or abused) in a way that, similarly to the problem of the direction of time, suggests it should be viewed as a cluster of concepts rather than as a singleton. In this chapter I shall try to distinguish between the different notions that form this cluster in order to furnish the discussion on the possible relation between time and chance with analytical tools. As we shall see, indeterminism does not automatically imply objective physical chance, in the sense that failure of determinism need not be due to an irreducible stochastic element in nature. The implication in the other direction is more appealing, and it is precisely here that this meaning of probability might make contact with the issue of time-asymmetry. But before we can explore this possible connection let us first clarify the basic terms. As always, when in doubt, it is a good practice to return to the source.
In this case the authority is John Earman's (1986) seminal study A Primer on Determinism.

3.1 Definitions

Defining determinism is not a trivial matter. As Earman notes, nothing is more fruitless than a controversy over an ambiguous question, and in the case of determinism a lot of dead wood has piled up because philosophers paid no attention to physics. Earman starts with an intuitive definition and his method is to make this definition as precise as possible while it co-varies in accord with the developments in physics. A deterministic world-view is best captured, according to Earman, in William James' words:

What does determinism profess? It professes that those parts of the universe already laid down absolutely appoint and decree what the other parts shall be. The future has no ambiguous possibilities hidden in its womb: the part we call the present is compatible with only one totality.1

Next, and in accord with another study on (in)determinism due to Born (1964, 9), Earman distinguishes determinism from causation and from the epistemological notion of predictability, which is epitomized in Laplace's frequently quoted passage:

An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motion of the largest bodies as well as of the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes.2

Earman's intuitive definition is then spelled out as follows:

The world W ∈ 𝒲 is Laplacian deterministic just in case for any W' ∈ 𝒲, if W and W' agree at any time, then they agree for all times.3

where (1) 𝒲 stands for the collection of all physically possible worlds (that is, worlds that satisfy the natural laws obtaining in the actual world); (2) 'the
world-at-a-given-time' is assumed to be an invariantly meaningful notion; and (3) agreement means agreement on all physically relevant properties. Recall that in contrast to Earman we are not interested here in verifying whether determinism holds for a specific physical theory, or for a specific possible model of the world. Our aim is only to equip ourselves with enough analytical tools in order to examine whether lack of determinism has anything to do with questions about the direction of time. Thus, we shall concentrate here on distinctions that might prove useful to this purpose, the first being the distinction between the two arenas in which one can pose questions about determinism in physics: the structural and the dynamical. The former puts constraints on the possibility of determinism in terms of spacetime structure; the latter—in terms of existence and uniqueness of solutions to differential equations. This duality merely represents the fact that physics employs two different classes of laws: dynamical, which describe or govern the motion of matter in space and time, and instantaneous, which limit what can happen in a single instant of time.4

1 James (1884) cited in Earman (1986, 4).
2 Laplace (1820) cited in Earman (1986, 7) and Nagel (1961, 281).
3 Earman (1986, 13).

Keeping this limited goal in mind, and in order to demonstrate the intricate relations between these two classes of constraints, let us start with what is commonly believed to be a paradigm of a deterministic theory—classical Newtonian mechanics (CM).

3.2 Indeterminism and Indeterminacy

CM is usually taken as an example of a deterministic theory. In this framework we take the state of a closed physical system which consists of point masses at a certain time t and we describe it by a list of the positions and momenta of the masses.
Given that list we can claim that (a) the theory that describes this system is complete if and only if from the description we can derive all the physical features of the system,5 and that (b) the theory that describes this system is deterministic if and only if (a) holds and the state description at time t uniquely fixes the state description at any other time. The laws of motion of Newtonian mechanics employ continuous differential equations. These equations admit a unique solution for a given set of initial data and this ensures us that if we take Newtonian mechanics to be complete, i.e., if (a) holds, then we must also take it to be deterministic. Yet, as Earman notes, all this is too quick. First, Newtonian laws are singular when it comes to dimensionless point masses and hence (b) does not always hold. Second, these laws place no limitations on the velocity at which causal signals can propagate,6 and hence (a) cannot be established unless we impose boundary conditions at infinity by fiat.7

4 The discovery of the basic laws of the former started the Newtonian revolution; the first instance of the latter was pointed out by Gauss. See Kuchař in Butterfield (1999, 170).
5 Completeness is a domain-relative property that should not be confused with fundamentality. A theory completely describes a physical system S during a time interval T if every attribute of S is precisely determined for every temporal instant t ∈ T. A fundamental theory is one whose entities and properties are not constituted or realized by other entities or properties, and whose laws do not hold in virtue of other laws. Thus Newtonian mechanics might be complete but not fundamental.
6 This fact is intimately related to the structure of Newtonian spacetime which allows action-at-a-distance.
7 Even if the system under consideration is extended to include the entire universe, it is not automatically 'closed' in the operative sense from outside influences, and the physical possibility of 'invaders' threatens to trivialize the domains of dependence of any time slice of classical spacetime (see also appendix A). With the domains of dependence trivialized, determinism 'not only doesn't get to first base, it never even has the chance to come out

When we move on to special relativistic physics the spacetime structure (or rather, the spacetime transformations group) changes and Newtonian absolute simultaneity disappears. We then have non-trivial domains of dependence, Cauchy surfaces, and enough spacetime structure to secure a launching pad for Laplacian determinism, at least in one spatial dimension, and, with further constraints imposed on the dynamical laws, also in three dimensions.8 Again, danger looms when one turns to consider tachyons, or when one moves to general relativistic spacetimes where the existence of Cauchy surfaces is not always guaranteed, and the initial value problem is not always well posed, but these problems should not concern us here. The important point the discussion drives home is the following: determinism is an ontological matter of completeness, existence and uniqueness, and as such, it is a property of the physical world (described as a spacetime model) and of theories which describe either the dynamics of spacetime or the dynamics on spacetime. As Earman notes, we cannot just read off the lesson for determinism from various branches of physics, for the implications we read will depend on our judgements of the adequacy of these theories, which in turn will depend on our views about determinism. But determinism is a powerful probe for ontological and methodological problems, and my hope is that it would indeed prove useful in the context of time-asymmetries.
CM turns out to be deterministic only when one confines it to large scale bodies and when 'invaders' from infinity are banned by boundary conditions imposed by fiat. What about quantum mechanics (QM)? The answer is complicated, mainly because until now the discussion assumed a well defined ontological world structure, and QM is still open to interpretations about its foundations. Nevertheless, we can still dig up some useful distinctions, such as the distinction between indeterminism and indeterminacy. In some sense QM is even more deterministic than CM.9 Indeed, the Schrodinger equation is a linear equation that evolves unitarily in a closed system, and hence takes a pure quantum state into another. In the case of a free particle with mass m that moves in one spatial dimension q, for example, the Schrodinger equation reads:

iℏ ∂ψ(q,t)/∂t = −(ℏ²/2m) ∂²ψ(q,t)/∂q²    (3.1)

and if, following Born, the wave function ψ(q,t) is interpreted as a probability amplitude, then what was imposed by fiat in CM is given now for free: we have immediately the boundary conditions at infinity sufficient to prove uniqueness, as for probability to normalize we need that

∫ |ψ(q,t)|² dq = 1    (3.2)

for all t, and if we restrict attention to complex valued square integrable functions, the initial data ψ(q,0), −∞ < q < +∞, fix a unique solution of (3.1) for all times. Another argument in favour of determinism in QM is that contrary to CM, QM might not demonstrate the kind of exponential instability, or sensitivity to initial conditions, commonly known as 'deterministic chaos' (more on this below).10 So why is QM commonly considered indeterministic? The answer to this question lies in the quantum state-description.

of the on deck circle.' (Earman 1986, 35).
8 Earman (ibid., 61-63) and Appendix A.
9 Nagel (1961, 305-316).
Supplying as it does—under the assumption that the theory is complete—all there is to know about the world, the quantum state-description gives us only statistical information, i.e., a probability distribution over possible values. When we try, moreover, to measure simultaneously both position and momentum of a particle, or any other pair of non-commuting observables, then, contrary to CM, the theory itself dictates that we can get as close as we want to the precision of one only on account of the precision of the other. In other words, when the position of a particular particle is completely fixed, the theory still gives only a probability distribution for its momentum. When we actually measure these properties, our measurements in the long run confirm the probabilities predicted by the theory. This fact, however, does not allow us to raise our predictability of a single simultaneous measurement, in the sense that the definite values it gives us are not contained in the quantum state-description.11 The lower bound on the precision of measurements is captured in QM by Heisenberg's uncertainty principle.12 We should note that such a lower

10 Attempts to define quantum chaos are reviewed in Casati and Chirikov (1995, 3-54). In short, the problem is that in the quantum regime the energy levels are discrete, so the wave function is an almost periodic function of time and hence the periodicity cannot be ignored. Next, there are no trajectories, and finally, the uncertainty principle smoothes over regions in phase space, so the whole notion of complexity on infinitely fine scales has no meaning. See also Belot (1998) and chapter six, section 6.3.
11 Operationally this means that systems prepared identically and with the maximum accuracy permitted by the theoretical scheme, i.e., pure states, when subjected to identical measurements, can give different results.
12 Heisenberg's uncertainty principle maintains that a measurement of observables which are Fourier transforms of each other, such as position and momentum of a particle, is constrained by the relation ΔpΔq ~ ℏ, where Δp and Δq are the variations, or the accuracies,

bound exists already in CM,13 but the crucial difference is that measurement errors in CM not only can be predicted and calculated, but also can be compensated for. In other words, measurement errors in CM are already encompassed in the theory. In QM these errors are ineliminable: the uncertainty principle is a mathematical theorem of the theory. This is why, if we take QM to be complete in the sense that the state description gives us all there is to know about the world, then in order not to contradict this completeness we have to say that the indeterminacy is not a feature of the theory but a feature of the world, i.e., that the uncertainty is not epistemological, but ontological.14 This crucial point helps us to distinguish between indeterminism and indeterminacy. Modulo the interpretation of the quantum state-description, QM should be considered deterministic because, given its completeness, i.e., notwithstanding the fact that its state-description gives us only probabilities and not definite values, its state-description evolves deterministically. Thus, given a quantum state of a physical system at a particular point in time, Schrodinger's equation uniquely fixes the quantum state of this system at another time. The problem, however, lies in the interpretation of the probabilities that the quantum state codifies.
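The deterministic, probability-conserving character of this evolution can be checked numerically. The following sketch is my own construction (with illustrative grid and wavepacket parameters, and ℏ = m = 1, none of which appear in the text): it integrates the free-particle equation (3.1) with the Crank-Nicolson (Cayley) scheme, which is exactly unitary for a Hermitian Hamiltonian, so the total probability of (3.2) should be conserved at every step.

```python
import cmath

N, dq, dt = 400, 0.1, 0.005  # illustrative grid and time step

def solve_tridiag(sub, diag, sup, rhs):
    # Thomas algorithm for a constant-coefficient tridiagonal system.
    n = len(rhs)
    cp, dp = [0j] * n, [0j] * n
    cp[0], dp[0] = sup / diag, rhs[0] / diag
    for i in range(1, n):
        den = diag - sub * cp[i - 1]
        cp[i] = sup / den
        dp[i] = (rhs[i] - sub * dp[i - 1]) / den
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def norm(psi):
    # Discrete version of the normalization integral (3.2).
    return sum(abs(c) ** 2 for c in psi) * dq

# Normalized Gaussian wavepacket with unit momentum.
q = [dq * (j - N // 2) for j in range(N)]
psi = [cmath.exp(-qj ** 2 / 4.0 + 1j * qj) for qj in q]
s = norm(psi) ** 0.5
psi = [c / s for c in psi]

# Cayley form: (1 + i dt H / 2) psi_new = (1 - i dt H / 2) psi_old,
# with H the discretized kinetic-energy operator.
t = 0.5j * dt / dq ** 2
diag_a, off_a = 1 + t, -0.5 * t
diag_b, off_b = 1 - t, 0.5 * t

norm_before = norm(psi)
for _ in range(100):
    rhs = [diag_b * psi[j]
           + off_b * (psi[j - 1] if j > 0 else 0)
           + off_b * (psi[j + 1] if j < N - 1 else 0)
           for j in range(N)]
    psi = solve_tridiag(off_a, diag_a, off_a, rhs)
norm_after = norm(psi)
print(norm_before, norm_after)
```

The wavepacket spreads, as a free quantum particle must, yet the norm stays fixed to machine precision: the state, and with it the entire probability distribution, is carried forward uniquely and deterministically.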
Given several 'no-hidden-variables' theorems, we know that we cannot generate the probabilities codified by the quantum state for ranges of values of the observables of a quantum system from a measure function on a probability space of elements representing all possible assignments of values to these observables, if these values are required to satisfy certain locality or structure preserving constraints.15 And so the reason why QM is considered indeterministic is that certain structural features of Hilbert space tell us something about the indeterminacy of the quantum world. If we accept the locality constraints as

with which the position and momentum are measured respectively, and ℏ is Planck's constant divided by 2π. The uncertainty principle is nothing more than the inevitable result of treating particles as waves.
13 In classical electrodynamics we measure the electrical charge of a field by 'bringing a test charge from infinity', but when we do that, we change both the distribution of charges in the field and its dipole moment.
14 The reasoning goes like this: the uncertainty principle is a theorem of QM. It imposes boundaries on our knowledge which are not calculable from QM itself. But if we do not want to contradict QM's completeness, then we have to say that our knowledge cannot be completed by any other possible theory, i.e., we make the principle ontological and say that our knowledge is limited because there is nothing more to be known.
15 The famous theorem of Kochen and Specker proves that QM prevents one from simultaneously distributing values on a disjunction of a well-chosen finite set of parameters in unmeasured states. See Redhead (1987, ch. 5) and Pitowsky (1989, 109-117).
reasonable, the logico-algebraic structure of QM prevents us from assigning definite values to all observables, hence we cannot interpret the quantum probabilities as measures of ignorance of the actual unknown values of these observables, as we do in CM.16 The difference between QM and SM, which marks the departure from determinacy and determinism, lies in the origin of the probabilities and in the interpretation of the quantum state. While both SM and QM involve probabilities, in the former we introduce probabilities into CM and interpret them as resulting from ignorance, either of (1) the exact instantaneous micro-state of a system or of (2) the exact result of the deterministic evolution of this micro-state. Thus, these probabilities are eliminable in principle. On the other hand, many view the probabilities of QM as emerging as a result of pure chance. Due to several 'no-hidden-variables' theorems it is widely held that the observables of a quantum system do not possess a definite value if not measured, only a collection of propensities, or potentialities to display a value when measured, and hence many regard the indeterminacy in (1) as not eliminable in principle.17 What about (2)? Orthodox QM is vague about the transitions from probabilities to definite outcomes (hence the notorious measurement problem). These transitions from potentialities to actualities still await an encompassing theory. Such a theory would then make (2) ineliminable as well and would render QM completely stochastic by turning its equations of motion indeterministic.

3.3 Predictability

After clarifying the objective or ontological sense of determinism let us discuss now its verificationist, or epistemological, interpretation. In short, epistemological indeterminism is captured by the concept of predictability, i.e., to what extent can we infer with probability 1 from a particular state of a physical system at a certain time its states at other times.
16 Note that Heisenberg's uncertainty principle, which refers only to a reciprocal relationship between the statistical distributions of certain (non-commuting) observables for a given quantum state, and says nothing about hypothetical value assignments to observables, leads to similar considerations, but it does so only under its ontological interpretation; hence quantum indeterminacy, rather than being its consequence, serves as one of its reasons.
17 Agreed, there is another tradition, stemming from Bohm and de Broglie's 'pilot-wave' theory (Bohm 1952), that regards quantum probabilities on a par with SM probabilities, e.g., John S. Bell's 'On the impossible pilot-wave' in Bell (1987, 159-168) and Goldstein, Dürr and Zanghì (1992). Einstein himself, although unsympathetic to Bohm's theory, shared its intuition that QM probabilities are a result of the theory's being incomplete.

When the unpredictability is a nomological inability and the theory is complete, the notions of epistemological indeterminism and objective or ontological indeterminism coincide. In this case the unpredictability is a logical consequence of the theory and the character of the world. Yet, Earman rejects the view that predictability and determinism are co-extensive; a view he attributes to Popper (1982). Breakdown of prediction is not a failure of determinism, because the latter is a question of existence independent of human knowledge. Following Earman, we can say that unpredictability is only a physical inability. A process can be considered physically unpredictable if there exists no physical procedure whatsoever to predict its outcome, inasmuch as there is no constructive proof of the existence of a solution to the unbounded set of equations that describe it.
Laplace's famous demon demonstrates this notion, as a superhuman creature that can calculate such an unbounded set.18 An even weaker form of unpredictability is practical unpredictability, which amounts to our own inability to calculate even a bounded set of equations which is large enough. The latter two notions of epistemological indeterminism are best captured by what is called 'deterministic chaos'. If measurement errors prevent us from knowing the initial state of a system with exact precision, then even if the dynamics are deterministic, we will not be able to predict with certainty the state of the system at different times. Yet as we have seen above in the case of CM, if the degree of precision of the initial data is linearly related to the degree of precision of the prediction, this epistemological indeterminism is very weak, as we can adjust our precision as much as we like. A stronger notion of epistemological indeterminism arises when the precision relation is non-linear, i.e., if the precision of the prediction exponentially decreases with time even when the measurement error is constant. In that case we say that the system under consideration is dynamically unstable, in the sense that after a short time prediction becomes impossible and the dynamics of the system appears indeterministic. In this case the practical unpredictability becomes physical unpredictability. There is yet another approach to the definition of deterministic chaos which involves computational complexity, i.e., the mathematical definition of randomness. Although it is widely accepted that computability is not logically related to determinism in its objective mode,19 there is still an ongoing debate on the relevance of this mathematical notion to deterministic chaos.20

18 Yet, even Laplace's demon could not predict an outcome of a quantum measurement if ontological indeterminism prevails.
19 Earman (ibid., 126).
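The exponential loss of predictive precision just described is easy to exhibit. The following sketch uses the logistic map, a standard textbook example of deterministic chaos (my choice of illustration; it is not an example discussed in this chapter): two trajectories generated by the same deterministic law, starting closer together than any realistic measurement could distinguish, become macroscopically different within a few dozen iterations.

```python
# Deterministic chaos in one line of dynamics: the logistic map
# x -> 4x(1 - x) at full chaos. Parameters are illustrative.
def step(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # two initial conditions differing by 10^-10
seps = []
for _ in range(100):
    x, y = step(x), step(y)
    seps.append(abs(x - y))

print("separation after 5 steps: ", seps[4])
print("largest separation in 100 steps:", max(seps))
```

Although each step is perfectly deterministic, the initial error of 10^-10 is amplified roughly exponentially, so prediction beyond a short horizon is hopeless even with near-perfect initial data: practical unpredictability shades into physical unpredictability.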
Indeed, the subtle difference between computational complexity and dynamic instability might be important when it comes to distinguishing between practical and physical notions of epistemological indeterminism, or even between epistemological and ontological determinism.21 But we shall set aside this debate. More important to us is that deterministic chaos, as an example of a deterministic dynamical system that appears indeterministic, establishes the difference between objective indeterminism that is cashed out in terms of chance, and apparent indeterminism in which ignorance is manifest. The former is a feature of a particular world; the latter is a feature of our knowledge of this world.22

3.4 Chance, Randomness and Ignorance

I am convinced that the vast majority of my readers, and in fact the vast majority of scientists and even non-scientists, are convinced that they know what 'random' is. A toss of a coin is random; so is a mutation, and so is the emission of an alpha particle. . . . Simple, isn't it?

These seemingly innocent words of Mark Kac (1959) hide what Kac knew very well: that randomness could be called many things, but not simple. The very words "chance" and "randomness" are notoriously ambiguous as well as vague. The former is mostly predicated of events or of sequences of events;

20 See for example Batterman (1993; 1996) and Ford (1983).
21 For example, if we imagine Laplace's demon as a universal Turing machine, there will always be some equations that do not admit a constructive solution even when the initial data are recursive. In such cases Laplace's demon would have to consult an Oracle with computing powers beyond those of a Turing machine, i.e., a machine with computing powers exceeding any algorithm. See Pitowsky (1996).
Moreover, if computations are viewed as physical processes that consume space and time, one can think of strange spacetime structures that will turn an otherwise undecidable proof decidable, e.g., Hogarth (1994) and Ng (2000).
22 The distinction between epistemic and objective or ontological unpredictability constitutes one of the 'continental divides' in the interpretations of QM. While it is undeniable that according to the existing formalism of QM future events cannot be precisely predicted, there is still disagreement with respect to the source of this unpredictability. Theories such as orthodox QM or GRW (Ghirardi et al. 1986) accept this unpredictability as a consequence of the world we live in. As stated above, alternative formulations, such as de Broglie-Bohm (1952), do not change the conclusion that the future cannot be predicted, although they modify the reasons for it. In these theories the unpredictability is a result of insufficient knowledge of initial conditions, and not of any irreducible indeterminacy or fundamentally stochastic dynamics.

the latter—of numbers; sequences; patterns; or even processes. Another difference is that while randomness is mostly predicated of a single number or of a single sequence of numbers, with chance matters are more complicated. Indeed, Nagel (1961, 324-335) analyzes different notions of 'chance' and categorizes them according to the relational-absolute criterion. Thus, the chancy character of an event in the relational sense is always decided with respect to another event or set of events, e.g., (1) the unexpectedness of its occurrence given the occurrence of another event or a sequence of events; (2) the statistical independence of one event from another; or (3) the unpredictability of an event given certain initial conditions.
On the other hand, the notion of absolute chance requires the notion of 'causation' or 'determination', in the sense that (4) a chancy event occurs spontaneously, without any 'cause'. In other words, absolute chance can be defined as absence of determining conditions for the occurrence of an event. Two remarks are in order. First, note that if an event is chancy in the sense of chance4 then it is also chancy in the sense of chance1-3, but not vice versa. Second, randomness and the notions of chance1-3 can make contact when one thinks of a sequence of events under a specific metric. For example, chaotic orbits in phase space can be seen as a random sequence. Yet, as we shall see, equating randomness with chance4 is more problematic.

Say we toss a coin twenty times and write down 1 for 'head' and 0 for 'tail'. After the sequence of tosses we'll end up with a list of zeros and ones. If we repeat this 'experiment' several times we can sometimes get a sequence with no obvious pattern, e.g., 01101100110111100010 (call this sequence A), and sometimes a sequence composed only of '1' or only of '0' (call this sequence B). Which of the twenty-bit-long sequences, A or B, is random? Clearly the answer cannot rely on the process that generates the sequences, or the origin of the sequences,23 since the same mechanism can generate both with an equal probability of 2^-20. Due to Kolmogorov and Chaitin, mathematicians now solve this problem with the notion of complexity: the complexity of a sequence is the number of bits that must be fed to a computer in order to obtain the original sequence, and a random sequence is one such that its complexity is approximately equal to the length of the sequence in bits.
Thus, a sequence is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the sequence itself, that is, if the information embodied in the sequence cannot be reduced, or 'compressed', to a more compact form. In our case, sequence B is clearly not random because it can be 'compressed' to the algorithm "Print 1 (or 0) twenty times". Sequence A, on the other hand, cannot be compressed into a sequence shorter than its own length.

From these considerations it is already clear that (1) it is impossible to establish beyond question that an event is an absolutely chance occurrence, as this demands showing that there is nothing whatever upon which its occurrence depends, and that (2) by combining the mathematical definition of randomness with Gödel's theorems one can show that the question whether a sequence is random has no answer unless the latter is indeed not random or its complexity is less than that of the formal system that is used to prove its randomness.[24] To find out that a sequence is non-random one only needs to find a program that can generate it and that is substantially smaller than the sequence itself. But to find out that a sequence is random one needs to prove that no such program exists, and if such a sequence exists and its complexity is greater than the complexity of the formal system in which one tries to prove its existence, the proof will never end.

[23] Note that this indicates that one should be careful with the term 'random process' and with the idea that randomness and chance4 are equivalent. A process can yield a random sequence and be perfectly non-random, in the sense that it is governed by deterministic laws. Coin tosses, Roulette wheels and dice throws are such processes.
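The compressibility criterion can be illustrated with a small sketch. This is hypothetical illustration only: Kolmogorov complexity is uncomputable, so an off-the-shelf compressor stands in for the 'smallest algorithm' and supplies only an upper bound; and since a general-purpose compressor's overhead swamps twenty-bit strings, longer analogues of sequences A and B are used.

```python
import random
import zlib

# Illustrative sketch: the length of a zlib encoding is a crude *upper bound*
# on the complexity of a string (Kolmogorov complexity itself is uncomputable).

def compressed_size(bits: str) -> int:
    """Upper-bound proxy (in bytes) for the complexity of a 0/1 string."""
    return len(zlib.compress(bits.encode("ascii"), 9))

random.seed(0)
n = 4000
seq_b = "1" * n                                         # "Print 1 n times"
seq_a = "".join(random.choice("01") for _ in range(n))  # no obvious pattern

print(compressed_size(seq_b))  # tiny: the whole pattern compresses away
print(compressed_size(seq_a))  # stays close to the sequence's own length in bits
assert compressed_size(seq_b) < compressed_size(seq_a)
```

On this proxy the patterned sequence collapses to a handful of bytes while the patternless one does not, mirroring the Kolmogorov-Chaitin criterion. Note that a short compressed encoding only ever proves non-randomness; no failure to compress proves randomness, exactly as argued above.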
As a result, (a) one might think that it is always possible to reject a proclaimed chancy character of an event just by saying that the description of determining conditions for its occurrence is incomplete, and (b) one might even cite the mathematical notions of randomness and incompleteness stated above in proof. As for (a), I agree, yet I do not think it turns the debate into a metaphysical one (see below). As for (b), that the question whether a sequence is random is undecidable does not prevent us from regarding a single event or a sequence of events as being chancy. As Geoffrey Hellman puts it:

Whether a physical process is fundamentally random in the sense that its outcome is not causally determined should not depend on the predilection of one peculiar species of hairless ape for certain kinds of order.[25]

As in the case of unpredictability, undecidability and indeterminacy are not co-extensive. The latter is a matter of existence; in this case—existence of determining conditions. Using our definitions of indeterminism and indeterminacy we can say that if an event is chancy then it is not uniquely determined by the current state of the world, in the sense that the conditions for its determinacy are absent.

[24] Chaitin (1975).
[25] Hellman (1978). It is interesting to compare the ontological and the epistemic views of determinism with the debate between Platonism and intuitionism in the foundations of mathematics, where the disagreement is on what constitutes a meaningful mathematical statement—independent existence or construction proofs.
The conclusion is that if one presupposes that a sequence of events is chancy4, that is, that each event in the sequence is 'spontaneous' with no determining conditions for its occurrence, then the fundamental laws that govern this sequence and the theory that describes it with these laws must be indeterministic, in the sense that the latter can give probabilistic predictions or give no predictions at all. But if, on the other hand, one is given a theory which yields only probabilistic predictions and a chain of events with no obvious pattern in it, one cannot deduce that the fundamental laws of the world are indeterministic and that the chain of events is chancy. First, echoing Hume's 'problem of induction', one can claim that the 'hidden' determinacy might be revealed tomorrow. Second, the theory might be deemed incomplete, as, for example, Bohmians argue against QM, emphasizing that Bohm's theory is empirically indistinguishable from orthodox QM. Thus, absolute chance, or chance4, implies indeterminacy and indeterminism, but indeterminism does not imply absolute chance, since unless the theory is complete indeterminism does not imply indeterminacy.

The alternative to chance is ignorance, or lack of knowledge. This alternative is clearly acceptable in the case of chance1-3: I didn't expect to see you because I did not know you were in town; two events can be rendered statistically independent just by conditionalizing on an unknown common cause; and dynamic instability or sensitivity to initial conditions yields loss of predictability in purely deterministic systems. But the interesting problem arises when ignorance is viewed as an alternative to chance4.

As commonly believed, the answers to the questions whether quantum processes are absolutely chancy and whether the fundamental laws are indeterministic are still inconclusive, since QM is compatible with both alternatives.
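The point that a patternless record underdetermines the character of the underlying law can be made concrete with a toy example (hypothetical code; the logistic map stands in for a 'hidden' deterministic dynamics):

```python
# The logistic map x -> r*x*(1-x) at r = 4 is fully deterministic, yet the
# bit record it generates shows no obvious pattern, and nearby initial
# conditions eventually yield divergent records (sensitivity to initial data).
# So a patternless sequence alone cannot establish indeterminism.

def logistic_bits(x0: float, n: int, r: float = 4.0) -> list:
    """Iterate the map from x0 and record 1 whenever x > 0.5, else 0."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

print(logistic_bits(0.123, 20))       # reads like a run of coin tosses
a = logistic_bits(0.123, 100)
b = logistic_bits(0.123 + 1e-9, 100)  # tiny change in the initial condition
assert a != b                         # the two records eventually diverge
```

Handed only such a record, one cannot tell whether it came from a chancy4 process or from a deterministic law plus ignorance of the exact initial condition, which is just the underdetermination claimed above.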
My own belief, however, is that even if today there exists no empirical way to answer these questions, they cannot be deemed purely metaphysical. Agreed, technological limitations currently prevent us from giving an answer, since (1) as soon as we try to measure a quantum system it decoheres (see chapter five (section 5.4.3) and Appendix C), and (2) 'hidden-variable' theories are currently ruled out only because of relativistic considerations, but this does not mean that the laws themselves 'haven't yet decided' whether they are deterministic or indeterministic. The character of physical laws should lie in the observed nature and not in the observer, and the fact that we lack experimental capacities to solve the measurement problem does not render the latter metaphysical. Whether the fundamental laws are indeterministic or deterministic is not up to us, just as the validity of these laws does not depend on the skills and capacities of the experimenter. Agnosticism with respect to the interpretation of QM might be a plausible position to maintain in one's lab, but it collapses as soon as one agrees that there is more to the world than just physics departments.

Before going on, let us take stock. We distinguished between objective or ontological indeterminism and epistemological indeterminism. The former qualifies theories and events and is a claim about non-uniqueness of solutions to equations of motion. It is not to be confused with indeterminacy, which qualifies values of measurable quantities or states of the world and is a claim about their non-existence or indefiniteness. Indeterminacy implies indeterminism, but the converse holds only if the theory in question is complete. Ontological indeterminism differs also from epistemological indeterminism. The former regards the inability to uniquely predict a state of a system as a result of the chancy character of our world.
The latter puts the blame on ignorance or lack of knowledge, and is captured by different levels of unpredictability. The first and strongest, nomological unpredictability, is equivalent to ontological indeterminism, assuming completeness of our theory. The second level—physical unpredictability—can be cashed out in terms of the non-existence of a constructive proof for solving the equations of motion of a physical system. It is equivalent to the dynamical instability of that system in terms of its sensitivity to initial conditions.[26] The last and weakest, practical unpredictability, regards our own human boundaries in measuring and predicting with exact precision the state of a physical system.

3.5 Chance and the Laws of Nature

In the final part of this chapter we shall consider the relation between chance and laws of nature.[27] As a case study we shall concentrate on a certain alternative to orthodox QM, the spontaneous localization theory of GRW, investigated by Ghirardi, Rimini and Weber (1986), Bell (1987), and Pearle (1997) among others.[28]

Non-relativistic GRW describes the physical state of an isolated system (or the whole universe) at time t with its wave function. One usually thinks of this wave function as a kind of field, or "stuff", which occupies a multi-dimensional configuration space.[29] Assuming GRW is complete and true, and given physicalism, everything else—particles, their positions in 3-dimensional space, the distribution of people, cats and so forth—supervenes on the wave function. This means that when the wave function is sharply peaked in a volume of configuration space associated with particle n, then n is located in that region. The single dynamical law of GRW is indeterministic.

[26] Pitowsky (1996).
[27] This section is indebted in part to Loewer (2000).
[28] For further detailed and technical discussion see chapter five (section 5.3).
It says that the wave function of an isolated system evolves in conformity with a probabilistic law that specifies (depending on the wave function at t) the chances of various wave functions at subsequent times. More precisely, the wave function evolves in accordance with Schrödinger's deterministic equation, except that at any moment there is a chance of that function collapsing into a narrower—in some of the dimensions of the configuration space—wave function.[30] When a system starts with wave function Ψ, the laws specify various possible futures for the system and chances for those futures. Since the latter specify how a system evolves, and their existence and value is a matter of objective fact, they are worthy of the title 'dynamical objective' chances.

The best place to begin a philosophical discussion of objective chance is with David Lewis' (1980) 'A Subjectivist's Guide to Objective Chance'. According to Lewis, objective fundamental chances belong to propositions, and in the first instance to propositions that specify that a specific type of event will (or won't) occur at a specific time (or during a particular time interval) in a specific location (or region). For example, the chance that a GRW wave function collapse centered on point x will occur at time t. As Lewis observes, the chance of an event A occurring at time t may itself change during the times prior to t, so chances are time-indexed. Lewis thinks of chances as being given by what he calls 'history to chance conditionals'. These are statements of the form "if h is the actual history up to and including t, the chance at t of S at t' (t' > t) is x". The totality of history to chance conditionals may be given by a 'theory of chance'. In GRW the chance theory is the fundamental dynamical law. The dynamical chances at t are determined by the state at t, that is, by the density of 'stuff' in the configuration space, but the history prior to t is irrelevant.
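A minimal sketch of such a dynamical chance law may help (the states and numbers are hypothetical; only the structure matters): given a chance for each possible next state conditional on the present one, with the history before t irrelevant, the chance of any whole possible future is fixed.

```python
# Hypothetical 'theory of chance': for each present state, a chance for each
# possible next state. Because history prior to t is irrelevant, the chance
# of a whole possible future is the product of the stepwise chances.

transition = {
    "S0": {"S1": 0.7, "S2": 0.3},
    "S1": {"S1": 0.6, "S2": 0.4},
    "S2": {"S1": 0.2, "S2": 0.8},
}

def chance_of_future(start: str, future: list) -> float:
    """Chance of the sequence of states `future`, given present state `start`."""
    p, current = 1.0, start
    for nxt in future:
        p *= transition[current][nxt]
        current = nxt
    return p

print(chance_of_future("S0", ["S1", "S1", "S2"]))  # 0.7 * 0.6 * 0.4
```

Each key of `transition` is a node of the branching tree described below; each weighted branch is a possible next state, and a path through the tree is one possible future.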
It is assumed that at t the chance of every proposition about times entirely prior to t is either 0 or 1. This assignment of chances gives rise to a tree structure branching toward the future. At each node the branches are the possible futures from that node, each weighted by its chance from the time of that node. If at each node there is a chance (or chance density) for every possible subsequent state, the chances of each possible future (sequence of states) are determined.

[29] In fact this was Schrödinger's original interpretation.
[30] Thus GRW solves the measurement problem, since when a macroscopic system, e.g., a measuring device, becomes entangled with a quantum system, e.g., an electron, the wave function of the composite system is very likely to quickly collapse to a wave function which is highly peaked in one of the regions corresponding to the device's pointer positions.

The chances that occur in physical theories like GRW are scientifically significant because of their connections with laws, causation, and explanation. Here we shall briefly spell out some of Lewis' insights about these connections:

1. Laws: In a chancy world—one in which events possess fundamental dynamical chances (different from 0 and 1)—events cannot also be governed by deterministic laws. That is, there cannot be a deterministic law that says that St will evolve into St*, while the chance at t of St* is less than 1. There are some considerations that suggest that fundamental chances must be governed by laws. If the state (of an isolated system) at t is identical to its state at t', then although the evolution of the two states may differ, it seems that the chances (if there are chances) of the possible evolutions of the two states must be identical. Underlying this is the idea that chances are not bare fundamental properties, as, for example, electromagnetic field value might be, but must be grounded in some physical state.
So, for example, in GRW (and other indeterministic quantum theories) chances are (nomologically) determined by wave functions, which represent the density of 'stuff' in multi-dimensional configuration space.

2. Causation: In a chancy world chances are part of the causal order; they are caused and are involved in causing. If the laws attribute chances, then causation must operate through those chances. So, for example, if placing a piece of radium near the Geiger counter causes it to register, it does so by way of the chance of its registering.

3. Explanation: In a chancy world an explanation of why one event occurred rather than an alternative may cite the fact that the chance of the first was much greater than that of the alternative. And in explaining an event we often cite a factor that explains its chance, e.g., explaining why the Geiger counter registered by citing the fact that some radium was brought nearby.

Among the many interesting epistemological and metaphysical questions that Lewis' account raises,[31] the simple but nevertheless puzzling question I would like to draw attention to, which seems to underlie current attempts to relate chance and time, is this: given the account of chance and ignorance in the above sections, to what extent can ignorance supplant chance and the roles it plays in the context of the laws of nature spelled out here? Prima facie it is hard to see how chances can play their roles in laws, explanation, and causation in the ways outlined above unless they are objective; as objective as spatial, temporal, and causal relations.[32] The issue becomes more pressing when one accepts, as Lewis himself does, that chance and ignorance must exclude each other if the laws under consideration are fundamental laws:

To the question of how chance can be reconciled with determinism [...] my answer is it can't be done [...] There is no chance without chance.
If our world is deterministic there is no chance in it save chances of zero and one.[33]

While QM is regarded by many as indicating the breakdown of the deterministic world-view, the explanatory superiority of chance over ignorance is still a matter of dispute: there are at least two cases in physics—statistical mechanics and Bohmian mechanics—where chances arise from lack of knowledge, or ignorance. In these theories events are assigned specific chances notwithstanding the dynamics being purely deterministic.

In statistical mechanics ignorance is introduced twice. First, in order to explain thermodynamic phenomena one usually uses a coarse-grained description, i.e., one ignores the exact initial microstate of the system at hand. Next, in order to recover thermodynamic entities and laws one uses the principle of indifference and assigns equiprobabilities to all microstates in the equilibrium state. But what could one's ignorance of the exact initial microstate of the system have to do with its behaviour? And why should one adopt the assumption of equiprobabilities in the first place?

In Bohmian mechanics the reasoning is quite similar. One usually assumes an initial probability distribution which is then 'propagated' with the otherwise deterministic dynamical law. Some initial probability distributions lead to super-luminal signalling; others don't. It would be awkward to assume that the status of a law of nature is somehow due to our degree of belief—to our ignorance!

The uneasiness with respect to the explanatory role of ignorance is clear. One way to overcome it is, of course, to introduce objective chance into our theories.

[31] For a recent discussion see Hoefer (1997).
[32] In saying that chances are objective I mean that their existence doesn't depend on our beliefs (except of course where chances of statements about belief are involved).
[33] Postscript to 'A Subjectivist's Guide to Objective Chance'.
And yet this does not make GRW and other indeterministic theories the only show in town. In suggesting to subsume the initial probability distribution into the laws of nature themselves, Loewer (2001) offers an original way to account for objective chance within the framework of current statistical mechanics and Bohmian mechanics:

If by adding such a proposition to a theory one makes a great gain in informativeness with little cost in simplicity, then that probability distribution has earned its status as a law and the chances it specifies are as objective as dynamical L-chances. Arguably this is just the case with respect to the micro-canonical distribution in statistical mechanics and the Bohmian probability distribution within Bohmian mechanics. By adding the micro-canonical distribution to Newtonian laws the resulting system (and the proposition that the entropy in the distant past was much lower than currently) entails all of statistical mechanics. By adding the quantum equilibrium distribution to the Bohmian dynamical laws the resulting system entails standard quantum mechanics. In both cases enormous information gain is achieved with very little cost in simplicity.

This is not the place to discuss Loewer's suggestion. Suffice it to say that it seems to run against the common view of laws of nature as independent of initial conditions, and that there are good methodological and philosophical reasons to resist any modification of this view over and above mere simplicity. What is important here is that the uneasiness with respect to the explanatory role of ignorance in current statistical theories that we are about to encounter in the next chapters is well motivated. We shall spell out this explanatory itch in detail in chapter six, but keep it in mind, since it is about to appear again soon when we turn to evaluate the relations between the two classes of concepts—time-asymmetries and indeterminism.
Chapter 4

Cutting the Gordian Knot

Time-asymmetries and indeterminism are both clusters of concepts that admit a subtle typology, and the last two chapters were aimed to create a common vocabulary with the hope it would prove useful in any attempt to verify a putative connection between the two domains. As the citations in chapter one demonstrate, many feel that issues of chance, indeterminism and time-asymmetry are somehow connected. An explicit connection, however, has not yet been established, and the aim of this section is to lay down, by elimination, possible candidates for such a program.

But first a methodological disclaimer is in order. I intend to eliminate most of the alleged connections with the help of counterexamples. Some might regard this way of 'intuition pumping' as shallow. In defence let me say two things: (1) one can view the recurrent misunderstanding in the literature as due precisely to the lack of this kind of analysis,[1] and (2) contrary to the usual philosophical usage of this method, my goal here is strictly positive, insofar as I intend to distill from the intractable bundle of connections two specific cases where philosophically interesting relations between time-asymmetries and indeterminism might be established. To this extent I shall reject most of the putative relations with the help of a guiding principle and a simple mechanical model. The former is purely linguistic. As we have seen, the adjectives 'indeterministic' and 'irreversible' must be applied with care when treated independently, and a fortiori when combined together. The latter is a simple set-up of a frictionless billiard table with, say, six billiard balls.

[1] Prigogine (1996) and indeed the entire Brussels school are guilty of such confusions. See Bricmont (1996) for a devastating criticism. For another recent instance of confusing statements see Elizur (1999; 2001).

Watson, you idiot. Somebody stole our tent.
—Sherlock Holmes, The National Post

4.1 Indeterminism and the Asymmetries in Time

4.1.1 Nomological Irreversibility

The first step is to demonstrate that indeterminism and irreversibility are logically independent when applied to physical theories. This is easily done, as nothing prevents us from cooking up theories that violate such alleged dependence.[2] CM (without singularities, i.e., with boundary conditions and when applied to large-scale bodies) is deterministic and TRI2; GRW—that is, QM with a genuine physical collapse mechanism—is indeterministic and non-TRI2. But we can also have deterministic theories which are non-TRI2, such as the equations of diffusion or heat conductivity, and indeterministic theories which are TRI2, such as the modal interpretation of QM, whose reduced dynamics, i.e., the dynamics of the system alone (after tracing over the degrees of freedom of the rest of the world), are stochastic while the total evolution is TRI2.[3]

In this context it is interesting to investigate the case of QM with no collapse. Recall our discussion in chapter three: modulo the measurement problem, with respect to the dynamical evolution of the total wave function, no-collapse theories are deterministic even more than CM. Are these theories also invariant under time-reversal? Schrödinger's equation being first order in time, the fundamental dynamical law in these theories is non-TRI1, yet notwithstanding this fact Davies (1974, 154-157) follows Wigner (1959, 325-348) and demonstrates how, with respect to the square of the norm of the wave function, there exists a time-reversal operator that translates any pure state ψ to its mirror image—its complex conjugate ψ*. By taking this mirror image and combining it with Schrödinger's equation one achieves TRI2 in no-collapse QM.

[2] Note that the logical possibility of cooking up such theories does not imply that these theories describe nature correctly.
[3] Bacciagaluppi and Dickson (1999).
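The Wigner-Davies recipe can be checked on a toy system (a hypothetical two-level Hamiltonian; the sketch assumes H is real in the chosen basis, as it is for spinless systems, so that conjugating the propagator reverses its time argument):

```python
import math

# Toy check of TRI2 via Wigner's mirror image (complex conjugation), for the
# hypothetical Hamiltonian H = [[0, w], [w, 0]] (real symmetric), whose
# Schrödinger propagator is U(t) = exp(-iHt) = cos(wt)*I - i*sin(wt)*sigma_x.

w = 1.3  # coupling strength, arbitrary

def U(t):
    c, s = math.cos(w * t), math.sin(w * t)
    return [[c, -1j * s], [-1j * s, c]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def mirror(v):
    """Wigner's time-reversal operation: psi -> psi*."""
    return [z.conjugate() for z in v]

psi0 = [1 / math.sqrt(2), 1j / math.sqrt(2)]
t = 0.7

# Evolve forward, take the mirror image, then evolve forward again with the
# very same law: the system retraces its motion back to the mirror image of
# the initial state, i.e., the reversed process is itself a solution.
out = apply(U(t), mirror(apply(U(t), psi0)))
assert all(abs(a - b) < 1e-12 for a, b in zip(out, mirror(psi0)))
```

The assertion holds because conjugating U(t) for a real Hamiltonian yields U(-t), so the conjugated forward-evolved state, evolved forward once more, returns to ψ0*; probabilities, which depend only on |ψ|², are untouched by the conjugation.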
Callender (2000a) raises objections to this view and insists on a distinction between TRI2 and WRI (the latter being reversal under Wigner's or Davies' operation). One should recall, however, that TRI2 is defined with respect to a certain mirror image.[4] If one takes Davies' and Wigner's definition of mirror image to be the correct one, then no-collapse QM is TRI2, or, in other words, there is no difference between TRI2 and WRI. Although Callender (ibid., 13-14) rejects this definition, he gives no compelling reasons to abandon it.

The problem with the problem of the direction of time is that it is easy to fall prey to confusions. Callender's (ibid.) argument against TRI2 in no-collapse theories is a good example. Since symmetry is commonly defined as a commutation relation between the dynamical operator (the Hamiltonian) and the time-reversal operator (which generates the instantaneous 'mirror image'),[5] the answer to the question whether a theory is symmetric under time reversal hinges upon one's choice of the time-reversal operator. Apart from the fact that in QM there is no observable difference between violations of WRI and violations of TRI2,[6] unless one can justify one's rejection of the common choice of Wigner's reversal operator, which yields the mirror image—the complex conjugate ψ*—with reasons beyond the simple claim that such an operation does indeed yield time reversal invariance, Callender's argument does not go through.

[4] That this definition is not arbitrary is argued convincingly in Arntzenius (2004).
As a result, if no interpretation of pure states is presupposed, and if one agrees that (1) these states completely represent physical reality, and (2) the correct mirror images of these states are their complex conjugates, then one must conclude that, insofar as the theories are concerned, no-collapse theories are TRI2.[7]

This 'intuition pumping' is sufficient to demonstrate the logical independence of nomological irreversibility and indeterminism. In terms of our billiard table model, we can say that one is free to write down any theory one fancies, be it TRI2 or non-TRI2, deterministic or indeterministic, to describe the motion of the billiard balls. The constraints on such theories would then arise not from a priori considerations but from the fact that some of them would not correspond to one's experience.

[5] See, e.g., Wigner (ibid.).
[6] This is so because the empirical predictions of the theory are given only in terms of probabilities and not in terms of the wave function itself (i.e., the probability that a state ψ will have some value is invariably equal to the probability that its mirror image, ψ*, will have the same value).
[7] For a similar view see Zeh (1999, ch. 4).
[8] The converse, of course, does not hold, since one can think of many non-TRI3 processes in non-statistical theories, e.g., Savitt's (1995, 18) counterexample to Penrose's (1989, 354-359) thought experiment.

Note, however, that if one regards processes and not theories, then the situation is quite different. Recall that a TRI3 process is one that is allowed by a theory to be exactly undone. Now, it is clear that for any statistical theory that gives only probabilistic predictions or retrodictions, although the theory itself might be TRI2, the processes it describes are non-TRI3.[8] The intuitive reason for this relation is that even if the theory allows a reversed sequence of a process to occur, its laws will in general permit
other evolutions as well.[9] And yet there is a more involved reason for the non-TRI3 character of processes described by statistical theories. While employing the same formal algorithm in both time directions, a statistical theory, in order to be worthy of its name, distinguishes between the actual results of these algorithms; that is, it distinguishes between the 'forward' and the 'backward' joint, or conditional, probabilities. To see why, consider the following example.[10] Suppose that a statistical theory T is TRI2, that is, it uses the same algorithm both in the calculation of St2 from St1 where t1 < t2 and in the calculation of St1 from St2 where t2 < t1. Now if one knows that the state at t1 is actually Sa, one cannot infer the exact state, say Sb, at t2. One can only infer that the probability P(Sb(t2), Sa(t1)) = p1. But if one knows that the actual state at t2 is Sb, then in the absence of additional information one cannot infer either that the state at t1 was Sa or that the probability P(Sa(t1), Sb(t2)) = p2. Still, one may have additional information that would allow such an inference, e.g., the prior probability (independent of the knowledge of St2) of each possible state Sc at t1. In this case one may use Bayes' theorem together with the knowledge of Sb(t2) and the theory T to infer a revised probability for Sa(t1) as follows:

P(Sa(t1), Sb(t2)) = [P(Sb(t2), Sa(t1)) × P(Sa(t1))] / Σc [P(Sb(t2), Sc(t1)) × P(Sc(t1))]   (4.1)

Since in such cases the statistical theory T issues conditional probabilities only in one (in this case, 'forward') direction, it describes processes asymmetrically.

[9] See Savitt (1994, 911-912).
[10] The discussion here follows Healey (1981, 104-107).
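This asymmetry can be made vivid in a toy calculation (hypothetical states and numbers): the theory's forward algorithm fixes P(Sb(t2), Sa(t1)), but the backward conditional also requires a prior over the states at t1, which the theory itself does not supply.

```python
# Bayes' theorem as in the example above: the backward conditional
# probability P(Sa(t1), Sb(t2)) is computed from the forward transition
# probabilities plus a *prior* over states at t1. Different priors give
# different answers for one and the same forward algorithm.

def backward_probability(forward, prior, a, b):
    """P(a at t1 | b at t2), via Bayes, from forward transitions and a prior."""
    total = sum(forward[c][b] * prior[c] for c in prior)
    return forward[a][b] * prior[a] / total

forward = {                       # P(state at t2 | state at t1), hypothetical
    "Sa": {"Sb": 0.9, "Sc": 0.1},
    "Sc": {"Sb": 0.5, "Sc": 0.5},
}
prior_1 = {"Sa": 0.5, "Sc": 0.5}
prior_2 = {"Sa": 0.1, "Sc": 0.9}

print(backward_probability(forward, prior_1, "Sa", "Sb"))  # ~0.643
print(backward_probability(forward, prior_2, "Sa", "Sb"))  # ~0.167
```

With a uniform prior, Sb at t2 makes Sa at t1 quite likely; with a prior slanted toward Sc it does not; the forward algorithm alone underdetermines the answer.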
Moreover, any attempt to add a further set of statistical consequences of the form

P(Sa(t1), Sb(t2)) = p′   (t1 < t2)   (4.2)

must also satisfy certain constraints in order to preserve consistency with the forward conditional probabilities, and hence the addition of the reverse conditional probabilities for all forward conditional probabilities in a way that meets all the constraints is tantamount to adding law-like boundary conditions and changing the character of the theory.

We can now see why processes described by statistical theories are non-TRI3. In order to satisfy this time-reversal-invariance the statistical theory must admit that P(Sb(t2), Sa(t1)) = P(Sa(t1), Sb(t2)), where Sa and Sb are the actual states of the system at t1 and t2, respectively, and t1 < t2. But as we have seen above, no statistical theory can specify the latter probability.

Callender (ibid.) follows Healey (ibid., 107) and concludes that the foregoing means that any statistical theory is non-time-reversal-invariant simpliciter, but since what is at stake here is the probability of the actual process occurring backward, rather than the mere possibility of the occurrence of such a process and the formal algorithm the theory offers for calculating it, I think the correct conclusion is limited to the non-TRI3 character of the processes it describes. To put matters another way, that a statistical theory demonstrates non-invariance with respect to transition probabilities in calculating results of actual processes implies nothing about the TRI2 character of its algorithm.[11]

The foregoing reasoning holds also with respect to indeterminacy. The basic idea is that, as in the case of indeterminism, indeterminacy and time-reversal-invariance of theories are logically independent.
Say we have on one hand a 'hidden variable' theory that assigns determinate non-contextual values (either with probability 0 or with probability 1) to its observables, and on the other an indeterministic theory that assigns these observables only probabilities between 0 and 1. Would this automatically imply that these theories are TRI2 and non-TRI2, respectively? As shown here, the answer is in the negative. Recall that for a theory to be TRI2 all that is needed is that it employs the same algorithm in predictions and retrodictions. Although the results of these algorithms might be relevant to the reversibility of the process, they are quite irrelevant for establishing a logical relation between indeterminacy and irreversibility of theories.

So far we have seen that indeterminism and nomological irreversibility are logically independent. From the point of view of physics, however, it is interesting to note that the two basic frameworks for the interpretation of what is considered the current fundamental dynamical theory, QM, are, as a matter of fact, distinguished along the double criterion of determinism and time-symmetry. Setting aside the 'no-interpretation' interpretation that goes back to the Copenhagen school, which is purely instrumental and hence says nothing about the world outside the lab,[12] theories which take the measurement problem seriously divide into two camps: no-collapse and collapse theories.

The first camp can be divided further by the way each theory in it answers the question which observables possess definite values,[13] but once this issue is settled, the dynamical evolution of the total state vector employed by no-collapse theories is linear and unitary; completely deterministic and TRI2.[14]

[11] For a similar view see Savitt (1995, 18).
[12] Fuchs and Peres (2000).
[13] That is, what are the 'beables' of the theory. See Bub (1997).
What distinguishes between no-collapse theories, e.g., 'hidden variable' theories such as de Broglie and Bohm's (1952), modal theories such as Van Fraassen's (1991, ch. 9), and many-worlds theories such as Everett's (1957), is neither the dynamical evolution of the state vector nor their interpretation of single-time probabilities of an instantaneous quantum state as an effective, reduced, state (although the additional dynamical laws that yield this reduced state do differ between no-collapse theories). Rather, it is the way each theory interprets the transition and joint probabilities. And yet, the interpretation of these probabilities (which is straightforward only in 'hidden variable' theories and rather obscure in many-worlds theories) is irrelevant to the issue of the unitary and TRI2 character of the dynamical evolution of the total state vector.

Members of the second camp are collapse, or 'spontaneous localization', theories such as the theories of Ghirardi, Rimini, and Weber (GRW, 1986) and CSL (Pearle 1986; 1997).[15] It has long been recognized that collapse can't be modelled by a linear, unitary evolution. If one insists that the state remains normalized, then one is stuck with non-linear evolution. However, it has come to be realized that, somewhat surprisingly, one can represent collapses via linear operators if one is prepared to let the norm of the state be reduced, or, in other words, to allow the evolution not to preserve the norm but only the average value of the squared norm. The idea is to model collapse evolution by a 'selective' operation—an operation that makes the trace of the density operator smaller by a factor that gives the probability of the transition. If one models things this way, then deterministic evolutions, being those with probability one, are precisely those that preserve the trace.
So one can equivalently represent the collapse as a non-linear transition or as a linear one: in the former case the formalism describes an individual system; in the latter, an ensemble of systems.[16] And yet in both representations, and contrary to the unitary and reversible evolution of the total state vector in no-collapse theories, in GRW and CSL the collapse evolution is irreversible and is represented by a non-unitary one-parameter operator semi-group defined on the appropriate Hilbert space. The transition probabilities in these representations are indeed purely stochastic,[17] but this stochasticity is related only to the non-TRI character of the process (in the TRI3 sense), and can be extrapolated to the non-TRI2 character of the theory only if one regards GRW or CSL as fundamental.

[16] Since the norm of the wave function has, on the usual interpretation, no physical significance (the state of a system is represented by a ray in Hilbert space, or, equivalently, by an equivalence class of density operators that are scalar multiples of each other), then no matter what one's interpretation is, one can use either representation. Discussions of the difference between these two approaches can be found in Ghirardi et al. (1993).
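The trace bookkeeping behind this 'selective' representation can be made concrete with a minimal numerical sketch. This is an illustration of the general idea only, not the GRW or CSL dynamics: the state, the projector, and the rotation below are my own arbitrary choices.

```python
import numpy as np

# A 'selective' linear operation on a density operator, modelling a
# collapse onto |0>. The map rho -> P rho P is linear but not
# trace-preserving; the reduced trace equals the transition probability.

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# An equal superposition (|0> + |1>)/sqrt(2), written as a density operator.
psi = (ket0 + ket1) / np.sqrt(2)
rho = psi @ psi.T.conj()

# Selective operation: projection onto |0>.
P0 = ket0 @ ket0.T.conj()
rho_collapsed = P0 @ rho @ P0

# The trace has shrunk by the Born probability |<0|psi>|^2 = 1/2.
print(round(np.trace(rho_collapsed).real, 6))   # prints 0.5

# By contrast, a unitary (deterministic, probability-one) evolution
# preserves the trace exactly.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(round(np.trace(U @ rho @ U.T.conj()).real, 6))  # prints 1.0
```

The design point is exactly the one in the text: in this bookkeeping, 'deterministic' and 'trace-preserving' coincide, while the stochastic transitions show up as trace-reducing linear maps.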
Now if one rejects the idea that GRW (or its kin CSL) is fundamental, one can go both ways. On one hand, one can focus on the non-linear representation and claim that (similar to the case of deterministic chaos) the apparent indeterministic character of the dynamics is a result of sensitivity to initial conditions and the ignorance thereof. Such new physics is explored in Wolfram (2002) and in 't Hooft (1999). On the other hand, one can focus on the 'selectively' linear representation and claim that the collapse process is a result of the system being open to interaction with the environment, and can also be modelled as an effective process by no-collapse theories powered by decoherence (which, again, are TRI2 and regard the evolution of the total state vector as deterministic). Since in the latter the state vector never collapses and represents an ensemble of systems, the basic question of the transition from pure states to mixtures remains to be solved.[18]

The upshot is that a logical relation between indeterminism and time-asymmetry can be established with respect to processes, and hence can be extrapolated to theories only in the case of a fundamental theory. Furthermore, this distinction between theories and processes serves to elucidate the distinction between chance and ignorance. Suppose that one has in hand two theories: (1) Td, a deterministic and TRI2 theory, and (2) Ts, an indeterministic and non-TRI2 theory that gives only probabilistic predictions. Given a non-TRI3 process, currently explained by Ts, one can make two equally plausible assertions: either Ts is fundamental and the irreversibility manifest in the non-TRI3 process is nomological, or Ts is non-fundamental, the fundamental theory is Td, and the irreversibility manifest in the non-TRI3 process is only accidental. How one interprets the probabilities employed in Ts would then co-vary with one's attitude toward the theory. In the former case they would be considered as manifesting pure chance; in the latter, ignorance or lack of knowledge.[19]

Callender (ibid., 12) arrives at a similar conclusion, but he claims further that only Bohmian mechanics is genuinely TRI2. As we have seen here, there is no reason to deny this feature to other no-collapse theories. It is only the dynamics of the wave function that matters, hence all no-collapse theories should be regarded as TRI2.[20]

To summarize, the basic claim of this section is that there is no a priori reason for the 'continental divide' in the interpretations of QM. The idea that indeterminacy is linked to time-asymmetry stems from the view that collapse theories such as GRW or CSL, when regarded as fundamental, are non-TRI2 and involve pure chance, and hence can accommodate the metaphysical view of the actualization of potentialities.[21] Yet before covering the discussion with metaphysical sauce one should make the subtle distinctions made here and admit that, with respect to the character of theories, there is no logical connection between a collapse theory being non-TRI (in the TRI2 sense) and its being also indeterministic.

[17] Gisin (1989) has shown that non-linearity without stochasticity would result in superluminal signalling; Ghirardi and Grassi (1991) have shown that stochasticity without non-linearity can at most induce ensemble but not individual reductions, i.e., they do not guarantee that the normalized state vector of each individual system represents after the collapse a system with definite properties.
[18] It is worth stressing here that decoherence by itself solves the measurement problem only 'for all practical purposes'. That this 'solution' is unacceptable is one of John Bell's (1990) legacies. See Hagar (2003).
[19] As claimed here, one can distinguish further between Bohmian mechanics and other no-collapse theories: the ignorance in the latter case (which is a result of 'tracing out' the environment) is different from the ignorance in the former (which is usually understood as ignorance of the exact state of the system). See further discussion in chapter five (section 5.4).
[20] See also Hemmo (2002). What is true is that there is still no satisfactory account of transition probabilities in these theories, and that the 'accidental' character of QM irreversibility is regarded as stemming not from special thermodynamical initial conditions but from dynamical ones, i.e., those which ensure decoherence. Further discussion is left to chapters five and seven.
[21] See also Shimony (1989, 389).

4.1.2 Accidental Irreversibility

So far we have (1) ruled out any connection between determinism and time-reversal-invariance as a feature of dynamical theories and, given the current state of affairs in physics, (2) discovered an accidental and indirect connection, mediated through the notion of the fundamentality of a theory, between the former and time-reversal-invariance as a feature of physical processes. What about epistemological indeterminism and lesser forms of asymmetries in time? Does predictability have any bearing on accidental or practical irreversibility?

Contrary to some persistent (and misleading) lines of thought due to Prigogine and the Brussels school,[22] epistemological indeterminism, or physical unpredictability, is neither sufficient nor necessary for accidental irreversibility. The billiard table model demonstrates this independence and we shall consider it here in some detail.

[Figure 4.1: Billiard Balls]

Recall that accidental irreversibility implies that although physical processes are governed by TRI2 laws, they are, as a matter of fact, irreversible.
In our model this means that (1) the motion of the billiard balls is governed by TRI2 laws, and (2) given two snapshots, or instantaneous macro-configurations of the balls on the table (s, where the balls are confined to a small part of the table, and f, where the balls are spread all over the table), then although the processes s → f and f → s are both allowed by the theory, the latter is never observed.

In chapter two (section 2.2.2) we mentioned two basic ingredients that are necessary and sufficient for explaining accidental irreversibility: the macro-micro distinction and initial conditions. The former, apart from confining the domain of irreversibility to macroscopic systems with many degrees of freedom,[23] reminds us that we have to distinguish between macroscopic and microscopic variables. In the case of the billiard table, an instantaneous macro-configuration of the balls on the table, such as s or f, is a possible micro-variable x on the table's phase space.[24] The vast number of degrees of freedom ensures that for certain macroscopic functions F(x), e.g., the mass density, the time evolution is autonomous; that is, we can determine F_t given F_0 alone, without inquiring after x.[25]

This autonomy is justified by the fact that, after taking appropriate limits, one can prove that the overwhelming majority of the micro-configurations that instantiate F_0 are those which induce the macroscopic evolution F_0 → F_t for a certain time. This terribly difficult task was accomplished by Lanford (1981), who, following Grad (1958), gave the first exact derivation of the Boltzmann equation and supplied a consistency proof for macroscopic irreversibility with microscopic reversibility: the macroscopic laws that govern macroscopic phenomena are another level of description of the dynamical variables that constitute the 'building blocks' of the phenomena observed. Although the micro-level is governed by TRI2 laws, in appropriate limits[26] and on certain time scales[27] the macro-level can be described by non-TRI2 laws.

As all theories involve laws and initial conditions,[28] accidental irreversibility within TRI2 theories leads us to look for its reasons in the initial conditions themselves, i.e., in the micro-configurations x that give rise to s and f. Consider the macro-state s, representing a macroscopic function, say, the mass density, associated with the micro-states that instantiate it. If we prepare the billiard table in the constrained macro-state s (that is, we assign definite positive energy to this state while confining the balls to a small area), then when we remove the constraint on s the overwhelming majority of the micro-states that instantiate the macro-state s will evolve deterministically so as to induce the observed evolution s → f.[29] Let us call these micro-states 'good TD states'. Agreed, there may be some exceptional micro-states, 'bad TD micro-states', for which s will not evolve towards f, but these form what is usually called a 'measure zero' set in the phase space of the system when it is constrained in state s.[30] Thus the 'bad TD states' are so rare that we do not expect to see even one of them appearing; not even "once in a million years".[31]

This reasoning also serves to rebut the famous Loschmidt reversibility paradox. Recall that the laws that govern the process s → f are TRI2. What if one reverses the velocities of the billiard balls in f? Wouldn't that bring the system back to s? Agreed, if one could pick this 'bad TD micro-state' x in f then one could, like Maxwell's famous demon, violate the second law of TD; yet remember that when f is an equilibrium state the 'bad TD micro-states' that would lead to s are "scarce as hen's teeth" (Ford 1992). Under the natural Lebesgue measure (generated with the assumption of equiprobability of micro-states) these bad micro-states form only a measure-zero subset of the micro-states that instantiate f, and, as a matter of fact, they never occur.[32]

As stressed in chapter two (section 2.2.2), the problem is now transferred to another level.

[22] Driebe (1994); Prigogine and Stengers (1996).
[23] The billiard table model is thus only a heuristic tool. Indeed, in order for irreversibility to be manifest the number of particles involved should be of the order of 10^23, but even 6 balls on a frictionless table are sufficient for the purpose of clarifying conceptual misunderstandings.
[24] See Appendix B.
[25] This non-trivial feature should be distinguished from the trivial one that for every given initial configuration x_0, giving rise to a trajectory x(t), any function on phase space follows the induced evolution F_0 → F_t, where F_0 = F(x_0) and F_t = F(x(t)).
[26] That is, on time-scales shorter than Poincaré's recurrence time and in the Boltzmann-Grad limit, where the ratio between the mean free path of the molecules and the macroscopic dimensions is held fixed while the density decreases. In Lanford's derivation, as in any other attempt to derive the Boltzmann equation, probabilistic assumptions appear not only with respect to the initial conditions but also with respect to the dynamics itself, e.g., the molecular chaos hypothesis (the statistical independence between the velocities of two molecules that are about to collide).
[27] Of the order of the mean free time, i.e., the time between collisions.
[28] As Richard Feynman (1965, 116) reminds us.
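The typicality claim can be illustrated with a crude Monte Carlo sketch. This is a deliberately simplified stand-in for the billiard table, not a derivation: the 'balls' are non-interacting particles on a frictionless unit interval with reflecting walls, and every parameter (ball number, velocity distribution, the macro-state criterion) is an illustrative assumption of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Macro-state s: all balls confined to the left tenth of the 'table'.
# We sample many micro-states instantiating s, evolve each freely, and
# count how many have spread out (macro-state f) once the constraint
# is removed.

def spread_out(n_balls=50, t=100.0):
    x = rng.uniform(0.0, 0.1, n_balls)    # positions in the left tenth
    v = rng.normal(0.0, 1.0, n_balls)     # random velocities
    # Free evolution with reflecting walls: fold x + v*t back into
    # [0, 1] (the standard unfolding trick, period 2).
    y = np.mod(x + v * t, 2.0)
    y = np.where(y > 1.0, 2.0 - y, y)
    # Call the state 'spread out' if no tenth of the table holds more
    # than half of the balls.
    counts, _ = np.histogram(y, bins=10, range=(0.0, 1.0))
    return counts.max() <= n_balls // 2

trials = 200
good = sum(spread_out() for _ in range(trials))
print(f"{good}/{trials} sampled micro-states of s evolved to f")
```

With these arbitrary parameters the run should report that virtually every sampled micro-state spreads out: 'good TD states' dominate the sample, and no 'bad' one is ever drawn, which is the typicality claim in miniature.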
Indeed, we started in state s which, with respect to the uniform distribution on the entire table, is quite unlikely.[33] How did we get there in the first place? Agreed, we ourselves put the balls in this state, but we can go back step by step: humans depend on the food they eat, which depends on the sun, etc., until we reach the beginning of the universe and the last resort of the past hypothesis: the universe started out of equilibrium, in an 'improbable' state.[34]

Note, and this is important, that we have given an account of accidental irreversibility without even talking about the dynamics that govern the motion of the billiard balls. Indeed, in order to account for accidental irreversibility within a TRI2 theory we have formulated another theory, about the initial conditions. We have done what Boltzmann did, "quantifying impossibility and turning it to probability",[35] by imposing a measure on these conditions. In doing so we have declared that a phenomenon has been explained if the set of exceptional initial conditions is of measure close to zero.[36] But as soon as we agree to this measure, the dynamics that take the system from one state to another become irrelevant.

It is sometimes claimed that the dynamics does play a role. Recall that within deterministic dynamics one can lose predictability very fast if the system under consideration is sensitive to initial conditions. Such an instability, called 'mixing', leads nearby initial micro-states on phase space to a rapid divergence of their trajectories. There is a whole hierarchy of dynamical systems that demonstrate this feature,[37] and many have offered it as a remedy to the alleged lack of clarity in the foundations of SM.[38] Yet although mixing in particular, and dynamical considerations in general, are considered highly relevant to foundational problems in SM, such as the justification of the equiprobability of the micro-states and the relation between non-equilibrium and equilibrium SM,[39] I believe they are quite irrelevant to the issue of accidental irreversibility. Unpredictability, as an epistemic measure of ignorance, does not do anything; the concept of dog does not bark.[40] Put another way, nothing about what anybody may or may not know can play any role in bringing about a phenomenon.

[29] Here is where our billiard table analogy might seem misleading, since in a real billiard game one assigns the positive energy to the state s when the balls are motionless and watches them roll. What is meant here is the following procedure: (1) one constrains the balls to a small triangle with a regular billiard prop; (2) one kicks one of the balls, endowing the system with kinetic energy; (3) when the energy is equipartitioned, that is, when s is an equilibrium state, one removes the constraint, or the prop. The analogy is not misleading since it drives home the important lesson that what is considered a 'system', and a fortiori what is considered an 'equilibrium state', is entirely up to us.
[30] All this, one should remember, is done with respect to the microcanonical measure imposed on the phase space of the system when s is an equilibrium state, or with 'compliments' of the principle of indifference that leads to the assumption of equiprobability.
[31] A good analogy for this situation is the line of the real numbers [0,1]. The rational numbers form a 'measure zero' set on this line while the irrationals are 'measure one'.
[32] This, however, does not prevent other measures from assigning positive measure to this subset, hence the problem of justifying the microcanonical measure in the first place. See also chapter seven (section 7.3.2).
[33] This state in itself, when regarded with respect to its own constraints, is nothing but a macro-state which under a uniform distribution of micro-states is instantiated by a 'good TD micro-state'.
The fact that epistemic indeterminism can be modelled by deterministic chaos, i.e., by non-linear dynamics which demonstrate lack of predictability and sensitivity to initial conditions, is neither sufficient nor necessary for the explanation of accidental irreversibility. Indeed, how could it be, if the mixing on phase space involves ensembles of systems and not an individual system such as our billiard table? I believe that the confusion here is due in part to a famous analogy propounded by Gibbs,[41] who considers an ink drop in a glass of water, which is a kind of mixing in physical space, and is quite different from mixing in phase space. Our billiard balls model is mixing in the former sense but not in the latter. Moreover, even if one could connect the two notions of mixing by considering a large number of billiard tables and by saying that mixing systems (in the physical space sense) form a subset of measure 1 of the set of all the billiard tables, this would be tantamount to going back to the standard picture of Boltzmann, whose account of accidental irreversibility relies only on initial conditions and a measure imposed on them.

[34] Again, with the fear of getting repetitious: improbable with respect to what we would expect now. The micro-state that instantiated that improbable macro-state had to be 'good', otherwise TD would not hold in our world.
[35] Gibbs' words in Boltzmann's 1895 Lectures on Gas Theory.
[36] Goldstein (2001); Albert (2001, 58 and 68).
[37] E.g., ergodicity, mixing, C-systems. See Appendix B.
[38] The most explicit claim was made by Krylov (1979). The idea is that under a coarse-grained description of the trajectories in phase space, mixing systems demonstrate 'an approach to equilibrium'. See Appendix B.
[39] See, e.g., Gallavotti (1999).
[40] Spinoza, cited in Bricmont (1996, 132).
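The claim that deterministic dynamics can lose predictability very fast is easily exhibited with a stock example. The logistic map at r = 4 below is a textbook chaotic system of my choosing, not the billiard dynamics; it is offered only as an illustration of sensitivity to initial conditions.

```python
# Two trajectories of the logistic map x -> 4x(1-x), started 1e-10
# apart, become macroscopically different within a few dozen steps,
# even though the rule applied at each step is perfectly deterministic.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10
diffs = []
for _ in range(100):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))

print(f"initial separation: 1e-10; largest separation: {max(diffs):.2f}")
```

The separation grows roughly exponentially until it reaches order one, after which the two trajectories are effectively unrelated: epistemic indeterminism in a fully deterministic law.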
Taking stock: accidental irreversibility, i.e., an irreversible process governed by TRI2 theories, can be explained by a conjunction of three conditions: (1) a specific measure on the set of initial conditions; (2) a specific 'good' micro-state as an initial condition; (3) the past hypothesis, i.e., an out-of-equilibrium, 'abnormal' (with respect to the current state of the universe) macro-state of the universe.

And yet the unease with respect to this explanation of accidental irreversibility is persistent. Indeed, from a physicist's point of view, most of the foregoing, while satisfactorily explaining why we rarely observe violations of TD, is quite irrelevant to the physics of the thermodynamical processes we do observe: it yields no insight into the physical mechanisms responsible for driving the system to equilibrium, and it tells us nothing about relaxation times, transport coefficients, etc. In short, it does not even begin to explain accidental irreversibility, apart from saying that it is typical.

The 'empirical' unease leads to 'conceptual' unease. Those who incline to disagree with the claim that the dynamics are irrelevant are mostly those who feel uneasy with the lack of physical justification for the probabilistic assumptions introduced in the explanation of accidental irreversibility. The idea here is the following: even if one accepts, given a possibility proof such as Lanford's (1981) for the compatibility of the two levels of description, that macroscopic phenomena and microscopic phenomena need not be constrained by the same laws, one still has to show that the probabilistic assumptions introduced in the derivation of macroscopic laws from microscopic laws are a natural consequence of the physics involved, that is, that they do not depend on human ignorance, as they do in the Boltzmannian approach.

[41] Gibbs (1902, 144-147).
Put another way, if the price for the reproduction of thermodynamics from the underlying dynamics is that one must introduce probabilities into the explanatory narrative, then in paying this price one must solve another problem, which I here dub the twofold problem of probability in SM: how are these probabilities to be understood, and how are they to be calculated and justified?

The standard, Boltzmannian, explanation of thermodynamic phenomena answers the first part of the problem of probability (how is probability to be understood) with the notion of ignorance of the exact micro-state the system is in. But Boltzmann's explanation of irreversibility leaves the second part of the problem of probability (how is probability to be calculated and justified) unsolved: in order to derive the correct probability measure one assumes without justification (unless the principle of indifference is regarded as a viable justification...) that the micro-states that represent an equilibrium state are equiprobable, and this assumption allows one to rule out the transition from equilibrium to non-equilibrium as improbable. Put in these terms, SM becomes combinatorics in disguise. But now one can transfer the objection to the explanatory relevance of ignorance from the dynamical level to the level of the initial conditions themselves: how can lack of knowledge play any role in bringing about the phenomena? Surely our ignorance did not cause the balls on the billiard table to wander around on the table. Yet in the standard explanation ignorance plays an indispensable role and is manifest in (1) the assumption of equiprobabilities and in (2) the crudeness of the description, i.e., in the gap between macro-states and the micro-states that instantiate them.

In chapter three (section 3.5) we have seen that the unease with respect to the explanatory role of ignorance is motivated by a certain view of scientific explanation as causal, or dynamical, explanation.
In addition, since Boltzmann's struggle to explain thermodynamic irreversibility was fuelled by the revival of the atomic hypothesis in the second half of the nineteenth century, and as such was part of a broader framework in which he aimed to construct plausible mechanical counterparts to thermodynamic phenomena, attempts to improve the explanatory power of his theory with physical justifications are basically motivated by an aspiration to achieve unity in science; hence the appropriate label for them is fundamentalism, or reductionism.[42] It thus becomes evident that the attempts to relate chance and time in the philosophy of science literature express a philosophical bias towards fundamentalism and inter-theoretic reduction.

Fundamentalists would like to find a framework in which TD is reduced to SM in such a way that the probabilistic assumptions introduced by Boltzmann become a consequence of the underlying physics rather than of external, epistemological, considerations. In other words, fundamentalists would like to offer a better solution to the problem of probability in SM than the one offered by Boltzmann; a solution that could make the explanation of accidental irreversibility independent of human ignorance. As the current inventory of fundamental theories suggests, if SM—a TRI2 and deterministic theory—is to be replaced with a more fundamental statistical theory, the latter would be non-TRI2 and stochastic.

[42] Generally speaking, reductionism is a metaphysical thesis, a claim about explanations, and a prescriptive research program. See chapter six for discussion.
And so it turns out, and this should be no surprise after the last section, that the theory that can fulfil the ambitious fundamentalist project is non-TRI2 and stochastic, and it is precisely here where chance and time-asymmetry make contact, since, as we have seen, the probabilities of any other fundamental statistical theory that seeks to reproduce TD with a TRI2 theory must be interpreted as a result of ignorance. Agreed, and as we shall see below, the non-TRI2 and stochastic theory would have to reproduce the microcanonical measure and would still rely on the past hypothesis, but within this theory (1) the former would become an empirically justified measure, and not an a priori postulate, and (2) the latter would be needed for an entirely different purpose than in the standard explanation. Being non-TRI2, the fundamental theory that might account for thermodynamic processes does not make any claims about the past, and as a result the past hypothesis is needed not to correct a mistake but to 'fill in the blank space' in a way that ensures that the theory itself, along with the rest of science, can be plausibly maintained.[43]

The idea that such a non-TRI2 and stochastic theory may serve to explain thermodynamic irreversibility was first propounded by Albert (1994, 2001), who suggested that the spontaneous localization approach, also known as the GRW theory,[44] may offer a unified resolution to the quantum measurement problem and to the foundational problems of SM. We shall explore Albert's suggestion further in chapter five.

[43] As Callender (1997, S231) notes, eliminating the past hypothesis is the great achievement of inflation theories in cosmology, where a universe like ours turns out to be a result of almost arbitrary initial macro-conditions.
[44] Ghirardi, Rimini and Weber (1986).

4.1.3 Practical Irreversibility

Finally we arrive at the weakest notion of asymmetry in time.
Here we can assert that indeterminism does indeed imply practical irreversibility of a physical process. To see why, we need to recall that practical irreversibility amounts to our inability to undo or recover a physical process, i.e., to induce reversibility. Indeed, if we do not make the micro-macro distinction, then reversing the dynamics of a deterministic process completely recovers the initial micro-state. This kind of 'Loschmidt reversal' at the atomic level requires enormous precision and severe isolation conditions, yet it was accomplished in the fifties in a series of 'spin echo' experiments.[45] In our billiard balls model, if the dynamics is governed by TRI2 laws and we do not wait for the system to relax in state f, then reversing the velocities of the balls will result in their returning to their initial configuration s. However, even the slightest amount of noise in the dynamics will prevent the balls from recovering their initial state.

Here we have to distinguish again between ontological and epistemic indeterminism, because the initial state can be recovered by a complete reversal of the dynamics and the noise. Ontological indeterminism dismisses such a possibility in principle, because the noise is internal and inherent to the dynamics, and is genuinely random. 'In principle' means that irrespective of the billiard table's being a closed system or an open system,[46] in both cases induced reversibility would be impossible. The case of epistemic indeterminism is different. Here the noise is a result of ignorance and not of chance. Its source is our tracing out the effects of the environment on the system, and hence induced reversibility would be impossible only if one considers the billiard balls and the table as an open system.
In other words, in order to reverse a physical process we would have to take into account the interactions of the balls on the billiard table with the universe as a whole, up to, say, the last photon, and this is impossible 'in practice', especially if the dynamics admit the kind of sensitivity to initial conditions that is captured by epistemic indeterminism.

The idea of 'noise', or external influences, appears in many discussions of the foundations of SM, from the founding fathers, who played with it as a possible justification for the equipartition theorem and for the H theorem,[47] to modern 'interventionists',[48] who regard it as essential for reconciling TD with SM. It is best captured by E. Borel's (1914) words:

    The representation of gaseous matter composed of molecules with positions and velocities which are rigorously determined at a given instant is therefore a pure abstract fiction; as soon as one supposes the indeterminacy of the external forces, the effect of collisions will very rapidly disperse the trajectory bundles which are supposed to be infinitely narrow, and the problem of the subsequent movement of the molecules becomes, within a few seconds, very indeterminate, in the sense that an enormously large number of different possibilities are a priori equally probable.

Borel calculated that even the gravitational effects resulting from shifting a small piece of rock with a mass of one gram as distant as Sirius by a few centimetres would completely change the microscopic state of a gas in a vessel here on Earth by a factor of 10^-100 within a fraction of a second after the retarded field of force has arrived.

[45] On the 'spin echo' experiments see Hahn (1950; 1952; 1984) and Rhim et al. (1971), and also chapter five (section 5.4).
[46] That is, regardless of considerations of friction and elastic collisions on the table.
[47] Maxwell (1879) in Brush (1976, Vol 2, 366-367); Boltzmann (1895, 60).
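The fragility of a 'Loschmidt reversal' under even a tiny perturbation can be sketched in a toy model. The system below, Arnold's cat map on the unit torus, is a standard invertible chaotic map of my choosing; it stands in for, and is not a model of, the billiard dynamics.

```python
# Run a chaotic but exactly invertible map forward, then run the exact
# inverse for the same number of steps: the initial micro-state is
# recovered to high precision. A tiny 'noise' kick at the turnaround,
# however, is amplified by the dynamics and ruins the recovery.

def forward(x, y):          # cat map: (x, y) -> (2x + y, x + y) mod 1
    return (2 * x + y) % 1.0, (x + y) % 1.0

def backward(x, y):         # its exact inverse: (x - y, -x + 2y) mod 1
    return (x - y) % 1.0, (-x + 2 * y) % 1.0

def reversal_error(noise, steps=20):
    x0, y0 = 0.123456, 0.654321     # arbitrary initial micro-state
    x, y = x0, y0
    for _ in range(steps):
        x, y = forward(x, y)
    x = (x + noise) % 1.0           # kick at the turnaround point
    for _ in range(steps):
        x, y = backward(x, y)
    return abs(x - x0) + abs(y - y0)

print(f"no noise:   error = {reversal_error(0.0):.2e}")
print(f"1e-8 kick:  error = {reversal_error(1e-8):.2e}")
```

The contrast mirrors the point in the text: the reversal itself is nomologically allowed, but the slightest untracked disturbance, amplified step by step, makes recovery practically impossible.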
Some think that the Sirius problem is a serious problem.[49] Others dismiss it as mere astrology:

    And I cannot with a straight face tell a student that (part of) our explanation for irreversible phenomena on earth depends on the existence of Sirius.[50]

The common response to interventionism is the counterfactual claim that even if we were to isolate a cat in a closed room it would still approach equilibrium. Another repeated objection is that isolation is always possible in principle, and that the hypothesis that lack of isolation is behind the thermodynamic approach to equilibrium is an unfalsifiable hypothesis.[51]

Yet these responses miss the point. The interventionist fully accepts the idea that the reason for macroscopic irreversibility lies in initial conditions. But given these, the interesting task is to find which statistical mechanical model serves best for reproducing TD. Indeed, a quick look at the history of SM would reveal that, contrary to TD and the pessimistic interpretation of the heat death of the universe, the problem of the direction of time was never a central issue in SM.[52]

[48] E.g., Lebowitz and Bergmann (1955); Blatt (1959); and Ridderbos and Redhead (1998).
[49] See the school of environmental decoherence, e.g., Giulini et al. (1996).
[50] Bricmont (1996, 147).
[51] Indeed, isolation in physics is regarded as an idealization and lack of it is never used to refute a theory. Thus, for example, when a theory predicts certain regularities in a certain 'isolated' system and these are not found, in the sense that there is 'noise' in the system, the physicist usually 'enlarges' the system, making the 'noise' part of it. See Nagel (1961, 319) for an account of how Neptune was discovered as a result of such a process.
5 2 Boltzmann's discussion of it is confined to several remarks in the final part of Lectures on Gas Theory where he makes some sharp and clear remarks about Zermelo's recurrence objection and some less clear remarks about applying TD to the universe as a whole,5 3 to which he adds the following warning: Obviously no one would consider such speculations as impor-tant discoveries or even—as did the ancient philosophers—as the highest purpose of science.54 Moreover, Boltzmann's struggle for the promotion of his ideas was con-ducted in an entirely different context. The second half of the nineteenth century saw the revival of the atomic hypothesis, and Boltzmann was trying to construct a plausible mechanical model for TD. Indeed, the reversibil-ity paradox, originally conceived by Lord Kelvin in 1874,55 was brought to Boltzmann's attention by Loschmidt as an argument against his model, and not as an argument against the thermodynamic arrow in time. One should see the interventionist approach in this light, that is, as an extension of Boltzmann's ideas and as a struggle to construct mechanical models that reproduce TD. Thus, it is true that macroscopic irreversibility can be ex-plained by initial conditions, but as Boltzmann himself says, this is not the highest purpose of science, at least not of SM. For these reasons the interventionist approach cannot be ruled out on a priori grounds, especially when so far its models are the only ones that have yielded realistic relaxation time scales in approach to equilibrium.5 6 Although it might not offer the ultimate reason for the existence of irre-versible phenomena, interventionism or the open system approach can still offer a better solution to the problem of probability than the Boltzmannian 5 2 Brush (1976a; 1976b); Cohen (1996). 
That the atomic hypothesis, and not irreversibility per se, was central to the debate can be appreciated when reading the letters sent to the editor of Nature in 1894 and 1895 on the status of Boltzmann's H theorem.
53 Boltzmann (1895, 443-448).
54 Boltzmann (ibid., 447).
55 Bader and Parker (2001, 49); Brush (1976c, ch. 14).
56 Most current solvable models, or computer simulations, concern non-interacting particles, or ensembles of systems. Only recently has an exact model of a single trajectory in phase space been suggested (Metzler et al. 2001). On the other hand, just by taking into account the walls of the container in which a gas approaches equilibrium (Blatt 1959), one can achieve realistic time scales for this process.

account does, and qualify as a possible approach to the foundations of SM, in the spirit of the following statement:

Statistical mechanics is not the mechanics of large, complicated systems; rather it is the mechanics of limited, not completely isolated systems.57

Interventionism, moreover, illustrates the difference between ontological and epistemological indeterminism. According to the former, a complete isolation of the system would not have any impact on the possibility of inducing reversibility in the TRI3 sense. The latter, on the other hand, predicts that a complete isolation of the system would allow induced reversibility.58

4.2 A Brief Summary

Before we move to the asymmetry of time let us summarize the results so far. After analyzing the possible relations between indeterminism and asymmetries in time we have arrived at the following conclusions:

1. Both indeterminism and nomological irreversibility, and indeterminacy and nomological irreversibility, are logically independent. There are no a priori constraints on writing down any combination of the two in a physical theory.
However, processes described by a statistical theory are inherently not TRI3, i.e., the theory is asymmetric with respect to the processes it describes in the sense that it discriminates between forward and backward conditional probabilities.

2. Indeterminism and accidental irreversibility are logically independent. Within TRI2 theories the latter can be explained by a conjunction of (1) a certain imposed measure, (2) a specific initial micro-condition, and (3) the past hypothesis.

3. Indeterminism and unpredictability both imply practical irreversibility. The former applies both to closed and open systems and turns induced irreversibility into a nomological impossibility; the latter applies only to open systems and turns induced irreversibility into a practical impossibility.

57 Blatt (1959).
58 Assuming that determinism holds in both time directions.

That a logical relation between indeterminism and irreversibility can be established only at the practical level might seem trivial and philosophically unimportant.59 Yet from a physical point of view there is much of interest here. For the sake of the argument, suppose that we were able to (1) isolate our systems and (2) control the microscopic variables. In that hypothetical case, whether the fundamental laws were deterministic or indeterministic would depend on our achieving complete reversibility. We could then have an experimental method for settling a long-standing metaphysical question, and this seems to be quite interesting.
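The sense in which forward and backward conditional probabilities come apart (point 1 above) can be made concrete with a standard observation about stationary Markov processes, offered here only as an illustration and not as part of the argument:

```latex
% Forward transition probabilities are fixed by the dynamics alone:
%   P(X_{t+1} = j \mid X_t = i) = p_{ij}.
% Backward conditional probabilities require, in addition, the
% stationary distribution \pi over the earlier states (Bayes' theorem):
\[
  P(X_t = i \mid X_{t+1} = j) \;=\; \frac{\pi_i \, p_{ij}}{\sum_k \pi_k \, p_{kj}} \, ,
\]
% which in general differs from p_{ji}; the two coincide for all i, j
% exactly when detailed balance holds: \pi_i p_{ij} = \pi_j p_{ji}.
```

Thus even a theory whose algorithm for retrodiction mirrors its algorithm for prediction treats the two conditional probabilities differently, since the backward ones are hostage to the distribution over earlier states.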
Thus put, any argument against the relevance of indeterminism to irreversibility is also an argument against the possibility of extracting metaphysical notions of time-asymmetry from physics: claiming that complete isolation of physical systems is only an idealization, and arguing on a priori grounds against using isolation in establishing a relation between indeterminism and irreversibility, would make moot the question whether physics can have any say on one's metaphysical pet theory.

Moreover, that indeterminism implies practical irreversibility can, in the context of inter-theoretic reduction, support the relevance of indeterminism to the project of reducing TD to SM, especially if one is interested in deriving the former from dynamical considerations: it might help to make plausible, or even to justify, the probabilistic assumptions involved in the derivation. Indeed, from the point of view of the unity of science, demonstrating compatibility between macroscopic irreversible laws and microscopic reversible dynamics with the aid of probabilistic assumptions is not enough. Within a reductionist project one has to demonstrate how these probabilistic assumptions are a natural result of the dynamics, and in this context the puzzle is twofold: first, can deterministic dynamics give rise to probabilities, and second, can non-deterministic dynamics serve as a justification for the probabilistic assumptions? In both cases the practical character of irreversibility is important, as it demonstrates how the philosophical distinction between ignorance and chance as the source of probabilities in a statistical theory can be mapped onto the physical distinction between open and closed systems.
Thus, although one can say that in general asymmetries in time and indeterminism are independent of each other, under some views of scientific explanation and inter-theoretic reduction, to be spelled out in chapter six, and given the current state of affairs in science, a contingent connection between chance and time might be established.

59 Savitt (1994, 912).

4.3 Indeterminism and the Asymmetry of Time

After learning that no logical relation exists between ontological indeterminism and nomological irreversibility with respect to theories, one may wonder whether there is anything more interesting to be said. As the opinions cited in the introduction demonstrate, many believe that there is. That no logical dependence can be established between indeterminism and the asymmetries in time when physical theories are involved does not immediately prevent one from connecting indeterminism with the two notions of the asymmetry of time discussed above, i.e., with temporal becoming and with time-handedness. The purpose of this section is first to examine the alleged relation between accidental or de facto macroscopic irreversibility and the asymmetries of time, and then to point to possible ways to make the intuitions regarding indeterminism and the asymmetries of time precise. We shall see that here too, as in the case of the asymmetries in time, matters of indeterminism are logically independent of the asymmetries of time, but that under certain assumptions one might establish a contingent relevance.

4.3.1 The Entropic Heresy

Accidental (de facto) macroscopic irreversibility is tantamount to discussing TRI2 theories, and we have already dealt with indeterminism in this context. Yet we have also seen that there are those who believe that TRI2 laws, when applied to the universe as a whole, imply that time has no global direction, or that time is not handed.
In this context some view TD, the locus classicus of accidental irreversibility, as a theory in which local time-direction is manifest. Given this view and the idea that TD should be reduced to SM, and following our conclusion in the last section that SM may be replaced by a stochastic and non-TRI2 theory, indeterminism becomes indirectly relevant. We are about to see, however, how much one needs to assume in order to keep intact this alleged Gordian knot between indeterminism and the asymmetries of time.

Recall that although symmetries of laws are at least as inclusive as symmetries of spacetime, there still exists a question—a philosophical open question—as to which one should give precedence when local and global definitions collide. The final paragraph of chapter two about time flipping its direction made it clear that the consequences of giving precedence to symmetries of laws are non-trivial. One is then inclined to give precedence to local asymmetries of physical processes in time over the global asymmetry of time itself. If the direction of time itself is manifest in physical processes, then not only do we have to fear "Boltzmann's time-bomb", but it also seems that any violation of the second law here and now would mean that time has flipped its direction. These considerations alone are sufficient for beginning to doubt the alleged precedence.60

The idea here is that although our theories might be reversible (in the TRI2 sense), they may be highly non-invariant under the interchange of earlier and later, and the existence of such asymmetries provides the basis for saying that there is an intrinsic structural difference between earlier and later, at least if there is no way to explain these asymmetries by invoking boundary conditions and more fundamental laws that themselves do not exhibit those asymmetries; otherwise the asymmetries could not be said to be intrinsic to earlier and later.
Failure to appreciate this idea leads to claims such as the following:

In mathematical parlance, temporal isotropy in scientific contexts is, therefore, tantamount to the covariance of laws of nature under time reversal. This amounts to asserting that within science time's arrow will have to be rejected if the laws of nature remain unaltered and valid in a universe whose past and future are interchanged with ours.61

But as we have already seen (chapter two, section 2.1.3), even within a TRI2 framework the orientability of the manifold and the light cone structure with its domains of dependence prevent certain processes—namely causally connectible processes—from being reversed, in the sense that there can be no observer who can describe their reversal. Consider, for example, a ball rolling along a line PQ and two descriptions, (1) S_P → S_Q and (2) S_Q* → S_P*, where the starred states are the time-reversed ones. (1) and (2) cannot be viewed as descriptions given by different observers of the same sequence of events, for if either S_Q or S_Q* obtains the ball is at Q, while if either S_P or S_P* obtains the ball is at P; therefore (1) implies that the event of the ball's being at P is before the event of the ball's being at Q, while (2) implies the opposite, but since these two events are causally connected the idea of two observers describing the same process is physically impossible.

The example drives home an important lesson. The question of the invariance of laws under time reversal should not be formulated in terms of descriptions as given by different observers of the same physical process, but rather in terms of descriptions of different processes as given by the same observer.

60 Earman (1967, 1974) calls such doubt 'the entropic heresy' and argues that macroscopic irreversibility is not a necessary condition for a structural difference between past and future.
61 Mehlberg (1980, 156).
Thus put, the argument, due to Earman, against (1) abusing TRI2 theories to establish time-isotropy and (2) abusing physical, or de facto, irreversibility to establish a time-direction-flip, is quite simple. Recall that 'time is not handed' means that given an orientable manifold there exists a diffeomorphism on the manifold that preserves the geometry but flips the time direction. The proponents of the atemporal view (Reichenbach (1956), Gold (1962; 1967), Price (1996), and Schulman (1997)) believe that given accidental irreversibility, i.e., TRI2 theories, there is no objective global distinction between past and future, but these people regard the two diffeomorphic models as applying to the same world. This, according to Earman, is a big mistake. Even if one were to give precedence to symmetries of laws over symmetries of spacetime, then, because the symmetry holds not between the same process under different descriptions but between different processes, each with its own unique description, one could not claim that time flips its direction in the same world. Would one want, for similar reasons, to identify worlds under invariance of charge or parity?

I believe that part of the confusion here with respect to the scope of the symmetry is due to the tendency to conflate the physical notion of theoretical models with the metaphysical notion of possible worlds. Earman's diffeomorphism yields another model and another possible world. Agreed, the two models are isomorphic, and, assuming the spacetime theory that holds in the two possible worlds is deterministic, the two models—the original and its diffeomorphic twin—are physically equivalent.62 But (1) isomorphism does not automatically entail identity, and (2) there is no compelling reason to move from the equivalence of the models to the identity of the worlds they inhabit.63
Personally I am sympathetic to Earman's argument, and I shall offer another argument in support, but first we need to convince ourselves that the problem of 'time flipping its direction' is genuine. Here we need to consider two points: first, the problem arises because we combine TRI2 theories with manifest accidental irreversibility and we assign the latter a role in fixing local time-direction. Even before rejecting this relationalist view, which gives precedence to dynamical laws over spacetime structure, it would seem that one way to free ourselves from the awkward situation we are facing is to invoke non-TRI2 theories and to turn the accidental irreversibility into a nomological one. As a result the local time-order manifest in irreversible phenomena would coincide with global time-order and the puzzle could be solved.

Yet another possible resolution that awaits the one who wishes to retain TRI2 theories is to claim that Boltzmann's time-bomb is 'blank': it might as well be the case that such time-direction-flipping would go undetected and pass unnoticed, since for the inhabitants of the spacetime manifold, with no fixed or absolute reference of time, physical processes would appear unchanged. Imagine a flea walking down my sleeve toward my wrist button. If I turn my sleeve upside down the flea would still walk toward my wrist button, although, with reference to the former "time" direction, it would now go upwards.

62 See the vast literature on the hole argument (Earman and Norton 1987) and the reasons for identifying two diffeomorphic models as physically equivalent.
63 Compare Weyl (1952, 27): The laws of nature do not determine uniquely the one world that actually exists, not even if one concedes that two worlds arising from each other by an automorphic transformation, i.e., by a transformation that preserves the universal laws of nature, are to be considered the same world.
This difference can be noticed only from the point of view of one who intends to smash the flea, or from what Price (1996) calls 'an Archimedean point of view': a point outside time.

And yet a puzzle remains. Indeed, from a moderate relationalist perspective of time, even if one could disarm the time-bomb by changing the laws or by declaring the bomb 'blank', one would still remain helpless with respect to knowing the direction of time. As Healey (1981, 102) notes, to infer a non-arbitrary structural difference from a physical difference one needs a further condition: time would be globally directed if one had in one's hands an objective procedure to orient temporally a local portion of world history and then consistently apply this orientation to other parts of the world. We shall return to this point shortly to see whether there is such a procedure in the offing and how it relates to indeterminism.

Next, the proponents of the precedence of the asymmetries in time over the asymmetry of time, while accepting the argument that it would be impossible to detect a flipped time-direction, still claim that the latter can have empirical traces. These traces, if detected, can imply not that time has already flipped, but that it is about to flip.64 The possibility of the existence of 'echoes' from the future is sometimes raised in the context of the violations of Bell's inequalities, and the argument against the detectability of the time-flip fails when one notes that, however outlandish, the framework of advanced action, or backward causation, is logically possible as long as causal loops are excluded. Here is another possibility for disarming the time-bomb.

64 Price (1996, 105-113); Schulman (1997, 309-314).
One can regard the universe as a whole as a system to which the laws of nature do not apply, and dismiss the 'view from no when' as a philosophical quibble, not to be implemented in physical theories, which apply to physical processes inside the universe. Treating the universe as a physical system and applying physical theories to it may lead to incoherence. For example, if in order to assign temperature or entropy to a thermodynamic system we need to attach it to a heat bath, then the universe as a whole cannot be regarded as such a system. In response one can claim that thermodynamics is not a fundamental theory,65 and that the adequate formalism for treating the universe as a whole is a configuration space, but then one remains with other problems, such as that there is no second-order time with reference to which the universe can evolve, and a physical theory within which time disappears is susceptible to the plausible objection of being epistemically incoherent: how can one make predictions in such a theory?66

This argument can drag us into a discussion of issues of scientific realism: the claim is that even if one accepts the tension between the perspectives of inside and outside observers, manifest in the distinction between local and global definitions, and agrees to the limitations it imposes on the scope of our theories, the distinction itself is a dangerous weapon that should not be used carelessly. The price for putting it to work in solving conundrums in physical formalism is that it opens a gap between reality and appearance,67 and for scientific realists there is a great deal of difference between theories that tell the truth and theories that merely do not get caught in a lie. In response I can say that although TD is considered a theory with universal content, it is still restricted to specific domains of applicability: Einstein investigated the limits of validity of TD with fluctuation phenomena and Brownian motion,68 and the above considerations indicate that although TD might even be applicable to black holes, its application to the universe as a whole is incoherent. Should one conclude from these limitations that heat and temperature are non-real entities? I doubt it. TD is a special science, and expressing doubts about its 'reality' is tantamount to expressing doubts about the special sciences in their entirety.

65 That is, thermodynamics is not a theory that deals with spacetime itself but only with processes inside spacetime.
66 This is one of the issues that constitute the cluster of problems in quantum gravity, also known as 'the problem of time'. See Isham (1991).
67 For example, that TD does not hold for the universe as a whole would make accidental irreversibility and any other TD phenomena 'local' or 'non-real' in the scope of the world 'in itself', and turn TD into an appearance theory. The concept of an equilibrium state would then become a fiction and would have to be replaced by the concept of a stationary state, maintained by temperature gradients.
68 Klein (1967).
Indeed, as long as one maintains a coherent framework in which theories and theoretical entities co-vary with their domains of applicability, there can be no tension between phenomenological laws such as the second law and other fundamental laws.

4.3.2 Excuse Me, When is The Future?

Where do we stand? As I see it, Earman is correct in (1) claiming that the TRI2 character of our current theories is not an immediate vindication of the non-handedness of time and in (2) deflating the special role of accidental (de facto) macroscopic irreversibility in fixing, or establishing, an asymmetry of time. But where does Earman leave the moderate relationalist: the one who accepts global orientability and seeks the direction of time not in de facto but in nomologically irreversible physical processes; the one who is interested in finding the direction of time, that is, who is curious to know 'when is the future' over and above a conventional choice between light cone lobes?

Orientability of the manifold, although necessary, is not sufficient for indicating a non-arbitrary direction of time. For the moderate relationalist the missing ingredient is a nomologically irreversible process: given manifold orientability one needs to supply a globally applicable procedure for distinguishing in a non-arbitrary way past from future.69 It is precisely here that one can establish a contingent relation between time and chance, since, as we have seen, such law-governed irreversible processes are germane to statistical theories. As it happens, one such theory is a realistic interpretation of QM that regards the collapse of the wave function as a real physical process, and in this theory the probabilities are considered as arising from objective chance. And note that when the debate is formulated in these terms it is clear why chance was (mistakenly) considered as inherently relevant to the issue of the asymmetry of time.70
If one accepts (1) the (problematic) view that TD applies to the universe as a whole; (2) the (unproblematic) view that the universe as a whole is a closed system; and (3) the relationalist idea that irreversible physical processes serve to fix (or, in the moderate case, to detect) time-order, then, given the non-TRI3 character of processes described by statistical theories, whether the fundamental theory is deterministic or indeterministic might seem important. Yet unless one assumes (1)-(3) it is quite unclear how time and chance can be related, and even when (1)-(3) hold one cannot deduce that chance entails the asymmetries of time. The correct conclusion from these assumptions should be the following. First, the distinction between chance and ignorance (as the source of the probabilities that appear in the statistical theory that might serve to detect the direction of time) is relevant to the question whether the theory is fundamental or non-fundamental. Second, given that the current choice in physics is either TRI2 and determinism or non-TRI2 and indeterminism, this distinction should also be relevant to the question whether the direction so detected can be considered local or global.71

69 Agreed, it might be that such a procedure would be quite difficult to come up with, since, as Healey (ibid., 107-111 and 119-121) notes, one would still have to rely on an inter-subjective agreement on local time-order and on the assumption that the local time-order holds also globally.
70 See Elizur (1999), and also Arntzenius (1995; 1997), who argues that the 'forward' transition probabilities of collapse interpretations entail the direction of time but ignores the presuppositions such a view requires.

4.3.3 Chance and Becoming

Can something interesting be said about temporal becoming? Recall that the symmetry in this case is time-translation-invariance (TTI).
Here too we can apply Earman's argument and claim that the existence of this symmetry is not sufficient to rule out temporal becoming. Indeed, the latter is a metaphysical thesis about coming into existence. Those who, for the reasons presented in chapter two (section 2.1.1), find it hard to make sense of it within a dynamical theory constrained by conservation of energy should recall Earman's argument: the translation yields a distinct model, not another description of the same model. Moreover, one can still define, as Stein (1991) does, a relation of 'having come into existence relative to a spacetime point' as a structural feature of spacetime. The notion of becoming thus peacefully co-exists with physical processes' being TTI and with spacetime's being Minkowskian.72

The intuition that indeterminism is somehow related to the metaphysical world-view of 'slices of existence that keep piling up', propounded by C. D. Broad, is echoed in the aforementioned interpretation of QM that views the wave function as a physical object and the collapse of the wave function as a physical process in which the actualization of potentialities is manifest. Before the collapse the world consists of potentialities, i.e., possible values for possible measurable observables; after the collapse these potentialities are actualized and there exists a unique value for a specific observable.

71 And, of course, a non-spacetime-relationalist can reject any relation whatsoever between time and chance just by rejecting assumption (3).
72 This claim is still contentious. See Callender (2000b) and Saunders (2000), but also Myrvold (2002) for a rejoinder.
In orthodox QM one is inclined to take an instrumental stance and regard the wave function as an epistemic construct; in 'hidden variables' theories observables' values, while perhaps unknown, always exist; and other no-collapse theories are vague about the meaning of the transition probabilities. The only candidate for carrying the metaphysical burden of Broad's temporal becoming is thus a realistic interpretation of QM, where indeterminacy is genuine and indeterminism is ontologically construed.

But as stressed earlier in this chapter (section 4.1.1), that the realistic collapse interpretation involves objective chance and is also non-TRI2 is anything but mysterious. Indeed, only under a relationalist view of spacetime is a non-TRI3 process required for either fixing or detecting the direction of time, and since this process can be accounted for also by a TRI2 and deterministic fundamental theory, chance is not a necessary ingredient here.

In order to convince ourselves that chance is not logically necessary we should recall that the metaphysical picture we are trying to construct is one of an increasing 'pile' of events. That the process that yields this increment need not be genuinely random, in the sense discussed in section 3.4, is perfectly clear. Any non-TRI3 physical process will do.73

Finally we can see why Reichenbach's insight cited in the introduction is slightly incorrect. QM might be compatible with temporal becoming, yet this is due to a particular interpretation of the theory that takes quantum indeterminacy seriously, and not to the stochastic character of its dynamics. In this sense Lucas' insight is more on the mark, but alternatives to it do exist.
As in the case of the handedness of time, the physical process that might serve to detect the desired ontological distinction between past and future happens to be stochastic, and unless one commits oneself to an embarrassingly long list of presuppositions there exists no deep or mysterious connection between time and chance. To put it less bluntly than Holmes,74

73 One has to remember the subtle difference between non-TRI as a feature of a theory and as a feature of a process. That the collapse process is chancy entails its being nomologically non-TRI3 only given a certain assumption regarding the status of the theory that describes it.
74 The complete 'tent joke', the punch line of which is the motto of this chapter, is the following: Sherlock Holmes and Dr. Watson go camping and pitch their tent under the stars. During the night, Holmes wakes Watson and says: Watson, look up at the stars and tell me what you deduce. Watson says:

one should not jump to far-reaching conclusions before noticing what made the latter possible in the first place.

4.4 A Heavy Burden on Time's Wagon

The Gordian knot between time and chance can be cut unless one insists on maintaining a heap of assumptions and a philosophical position in which inter-theoretic reduction or spacetime relationalism are important ingredients. Before moving to explore the consequences of these assumptions let us summarize the discussion so far.

In analyzing the relation between indeterminism and time-asymmetries we first distinguished between the different asymmetries. We have seen that with respect to the asymmetries in time, the connection is very subtle. First, it exists in the weak case of practical irreversibility, and in this regard the distinction between ontological and epistemic indeterminism hinges on the system's being open or closed.
Second, we have seen that any statistical theory is inherently asymmetric: although the theory may employ identical algorithms for predictions and retrodictions, it distinguishes between forward and backward conditional probabilities in the calculation of the occurrence of actual processes. If the theory is one of our current fundamental theories then the interpretation of the probabilities that appear in the statistical theory depends on whether the non-TRI3 processes are law-like. Furthermore, under some accounts of scientific explanation and inter-theoretic reduction, chance is an important ingredient in the explanation of irreversible phenomena such as the approach to thermodynamical equilibrium.

When one turns to the asymmetries of time the issue becomes more complicated. The continental divide here is one's attitude toward spacetime, or in other words, one's stance with respect to the question of precedence. Say that one agrees with Earman on the following two claims: (1) that current fundamental laws are TRI2 does not entail that spacetime has no global unique orientation; and (2) global spacetime structure precedes the symmetries of the laws that govern the phenomena that inhabit it in fixing the direction of time, or in other words, the symmetries of the dynamical laws are more inclusive than the symmetries of spacetime. In this case the distinction between chance and ignorance is quite irrelevant to the direction of time, unless one can show how it sneaks into our current spacetime theories.

I see millions of stars, and even if a few of those have planets, it's quite likely there are some planets like Earth, and if there are a few planets like Earth out there, there might also be life. Holmes replies: Watson, you idiot. Somebody stole our tent. Incidentally, when people of different nationalities were ranked on how funny they found the joke, Canada was at the bottom of the list.
The prospects of this move, however, are dim, since current attempts to construct a quantum theory of spacetime just complicate things by banishing time from the theory.

But if one maintains even the weakest form of spacetime relationalism and agrees to the claim that (3) spacetime structure is manifest in physical processes, one can then make contact between the asymmetries of time and the distinction between chance and ignorance as the origins of the probabilities that appear in our current most fundamental theory. Since both interpretations allow non-TRI3 processes, the implication of the distinction between chance and ignorance for the asymmetry of time hinges upon one's degree of relationalism:

1. A spacetime relationalist who rejects Earman's (1) and (2) can claim that physical processes serve to 'fix' the direction of time. In this case, the interpretation of the probabilities as chance or ignorance would co-vary with the direction of time being global or local, respectively.

2. Moderate relationalists who accept Earman's (1) and (2) but view temporal orientability as necessary yet insufficient for extracting the direction of time from physics will require a non-conventional method for detecting the latter. Thus, precisely analogous to an oriented map that does not fit the landscape it portrays, but with the proviso that one could never possess the map, one can claim that if one had at hand an objective physical process which is asymmetric in time and which can be applied universally, i.e., a law-like irreversible process, one could know 'where lies the future'. And it so happens that the only available interpretation of our current fundamental theory on which physical processes fail to be TRI2 is also an interpretation on which determinism fails.
And so for spacetime relationalists the distinction between chance and ignorance is quite important, since, as a matter of fact, contingent upon current interpretations of our current fundamental theory in physics, indeterminism and indeterminacy might serve to fix (or, in the moderate case, detect) a global direction of time and to identify a physical theory that is sympathetic to the metaphysical notion of temporal becoming, respectively.

But note, and here we arrive at the end of this chapter, that the relation between time and chance is purely accidental, and the Gordian knot that ties them together can be easily cut. First, asymmetries in time such as accidental irreversibility can still be explained without the notion of chance, and the latter is important only under certain views of scientific explanation; and second, the difference between past and future, be it structural or ontological, does not require chance in any deep or mysterious way. It so happens that the process that might serve to fix or detect the former and is compatible with the latter is chancy, yet there is no a priori reason for such a relation. On the contrary, plausible as they are, to keep time's wagon tied to the pole of indeterminism one has to load the wagon with an irritatingly heavy burden of premises, one of which is the current state of affairs in physics.

Part II
A Tale of Two Theories

Chapter 5
The Origins of Probability

The beginning of the last century was the beginning of the end of the "clockwork universe". The currency of the eighteenth century was receiving fatal blows from the rising science of QM: black-body radiation; wave-particle duality; the uncertainty principle; it looked like the Newtonian picture was falling apart. This, it seems, is the popular view. What is less appreciated is the fact that the seeds of the quantum revolution were sown earlier.
In fact, it was within the framework of the atomic hypothesis and the kinetic theory of gases—the debates on which consumed most of the nineteenth century—that Maxwell and Boltzmann first introduced statistical assumptions into the otherwise deterministic mechanics in order to account for thermodynamical phenomena. The founding fathers of SM, however, were reluctant to abandon the heritage of the eighteenth century, and as a result the crucial distinction between ontological and epistemological indeterminism was frequently blurred. Nevertheless, their attempt to explain thermodynamical irreversibility by the atomic-kinetic theory of matter had made physicists familiar with probabilistic modes of thought, thereby creating a favourable climate for the introduction of quantum indeterminism in the following years.1

In clarifying the historical background it becomes possible to connect the distinctions introduced earlier between the notions of chance and ignorance with the debate on the origins of the probabilities one encounters in modern statistical physics. It turns out that this distinction—often blurred in the early discussions on SM—is a key ingredient in the taxonomy of the interpretations of QM, serving as a continental divide between the different solutions to the infamous quantum measurement problem.

1 For a complete historical account on the subject see the writings of Stephen G. Brush, e.g., Brush (1976b; 1976c; 1983).

   At quite uncertain times and places,
   The atoms left their heavenly path,
   And by fortuitous embraces,
   Engendered all that being hath.
      J.C. Maxwell, 1874

It took another
century to close the circle: two solutions to the measurement problem are now harnessed in the context of the foundations of SM to bridge the explanatory gap in the inter-theoretic relation between TD and SM that the standard Boltzmannian approach left behind. The task of the chapter is thus to introduce these two solutions, and to emphasize the deep philosophical difference between them.

5.1 Chance and the Atomic Hypothesis

Physicists and philosophers alike commonly believe that the deterministic world-view came under attack only with the rise of QM. Historically, the following more refined statement is closer to the truth: it was the distinction between ignorance and chance, and the transition from the former to the latter in the context of interpreting probabilities, that was the main impact of QM.2

The eighteenth century was dominated by the concept of the "clockwork universe" with God as the master clockmaker. Ironically, although Newton himself opposed it in his own writings, the mechanistic world-view gained success mainly because of the decisive results of Newton's mathematical principles of natural philosophy.3 Newton's laws, being second order in time, are time-reversible (TRI1), and it was only in the beginning of the nineteenth century, when Fourier developed his mathematical theory of heat conduction, that irreversibility first appeared.4 Later on appeared Carnot's essay on the efficiency of steam engines—commonly regarded as the source of Clausius' second law of TD. In this law Clausius introduced a quantitative measure of the transformations between heat flow and heat conversions, and many claim that in so doing he captured the apparent irreversible character of certain natural phenomena. This quantity was later baptized as entropy.5

Over and above the puzzle of thermodynamic irreversibility, the beginning of the nineteenth century also saw the revival of the atomic hypothesis.
This hypothesis, originally conceived more than two thousand years before by Democritus, was beginning to receive a fair amount of confirmation, mainly from the newly developing science of the kinetic theory of gases.6 It was in this context that the connection between the theory of probability and physics was first made.

Among the problems associated with the attempts to apply the atomic hypothesis to the explanation of thermodynamical phenomena, i.e., with the reduction of TD to classical mechanics, two concern the notion of probability. The first is a conceptual problem: to give a precise definition to the notion of probability in a deterministic theory. The second concerns the choice of distribution. Even if we make up our minds about the meaning of probability, we still have to justify on the basis of classical mechanics the choice of the Maxwell-Boltzmann distribution for an ideal gas, or the Gibbs distribution for the general case. We shall call this twofold problem 'the problem of probability in SM'.

That the founding fathers of SM were reluctant to abandon the clockwork world is clear from Maxwell's introduction of his 'distribution law', which was almost completely derived from probability theory.

2 We shall not pay much attention here, or anywhere else in the thesis, to another important feature of quantum theory which is extensively discussed in the literature (see, for example, Pitowsky (1989; 1994)), i.e., the revolutionary character of quantum probability which, due to interference, violates classical (Boolean) probability theory.

3 Brush (1976c, 605) and references therein.

4 Contrary to Newtonian laws, the time variable in Fourier's theory appears in a first-derivative term, thus making the reversal of heat conduction forbidden by the theory.

5 See chapter seven.
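The point can be made concrete with a short numerical illustration (the code and its parameters are ours, added for exposition only): drawing each velocity component from an independent Gaussian, Maxwell's 'law of errors', already yields the Maxwell-Boltzmann distribution of speeds, with no mechanism of collisions anywhere in sight.

```python
import math
import random

def maxwell_speeds(n, sigma=1.0, seed=0):
    """Sample n molecular speeds the way Maxwell's 1860 argument has it:
    each velocity component is an independent Gaussian 'error'."""
    rng = random.Random(seed)
    speeds = []
    for _ in range(n):
        vx, vy, vz = (rng.gauss(0.0, sigma) for _ in range(3))
        speeds.append(math.sqrt(vx * vx + vy * vy + vz * vz))
    return speeds

def maxwell_pdf(v, sigma=1.0):
    """The resulting Maxwell-Boltzmann speed density:
    f(v) = sqrt(2/pi) * v**2 / sigma**3 * exp(-v**2 / (2*sigma**2))."""
    return (math.sqrt(2.0 / math.pi) * v * v / sigma ** 3
            * math.exp(-v * v / (2.0 * sigma * sigma)))

speeds = maxwell_speeds(100_000)
# sampled mean speed vs. the analytic value 2*sigma*sqrt(2/pi):
print(sum(speeds) / len(speeds), 2.0 * math.sqrt(2.0 / math.pi))
```

The sampled mean speed agrees with the analytic value to within sampling error, which is all the 'derivation' amounts to: pure probability theory, no dynamics.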
Maxwell himself was quite aware that his derivation belonged squarely in statistics, and he insisted on an analogy between the velocity distribution function and Gauss' law of error:7

   It appears from this proposition [the velocity distribution function] that the velocities are distributed among the particles according to the same law as the errors are distributed among the observations in the theory of the method of least squares.

The mechanism of the interactions between the particles of the gas is completely absent from Maxwell's first model, and it was only later, with the attempts to justify this law by relating it to molecular collisions, that the distinction between chance and ignorance came into play.

The two distinct problems of understanding and choosing the probabilities of SM were often mingled in the early discussions. The choice of the Maxwell-Boltzmann distribution, for example, seemed so natural that it became almost synonymous with probability in SM. Consequently, and given the aim to construct mechanical models for thermodynamic phenomena, the first attempts to derive the Maxwell-Boltzmann distribution were made in the context of the underlying dynamics of the models.

The insistence on dynamical justification for the Maxwell-Boltzmann distribution led to Boltzmann's H-theorem: one of the necessary assumptions for the derivation of the distribution from the underlying dynamics is the 'molecular chaos' assumption. It states that the velocities of two molecules that are about to collide are statistically independent at any given time. Many correctly remark that it is exactly here where irreversibility 'sneaks in'.8

6 The first quantitative microscopic model for a gas was proposed by Bernoulli in 1738 and was improved in major ways by Clausius more than a century later. See Emch and Liu (2002, 87-89).

7 Maxwell (1860), cited in Emch and Liu (2002, 92).
But the 'molecular chaos' hypothesis is notorious not only because of its so-called 'implications' for irreversibility. Both Maxwell and Boltzmann are unclear in their writings with respect to the source of this 'chaos' or 'disorder'.

Maxwell, whose theory relies on the deterministic character of the dynamics in the calculations of exact positions and velocities of the molecules, and whose demon is an example of the applicability of TD only to macroscopic phenomena, is asserting by 1875 that

   [t]he molecular motion is perfectly irregular, that is, that the direction and magnitude of the velocity of a molecule at a given time cannot be expressed as depending on the present position of the molecule and the time.9

In his article 'The Atom'—published in 1875 in the Enc. Britannica—Maxwell indicates, moreover, that this irregularity must be present in order for the system to behave irreversibly.10

Boltzmann is also ambiguous about the origins of SM probabilities, yet his ideas developed in the opposite direction to Maxwell's.11 He argues in his reply to criticism in Nature (1894) that a certain coarse-graining, e.g., the distinction between the macro- and the micro- and the exclusion of highly improbable well-ordered states, is crucial to his theory, and declares:12

   ... [M]y Minimum Theorem [the H-theorem—A.H.], as well as the so-called second law of thermodynamics, are only theorems of probability. The second law can never be proven mathematically from the equations of dynamics alone.

8 Brush (1976c, ch. 14), Price (1996, 44).

9 Maxwell (1875) 'On the Dynamical Evidence of the Constitution of Bodies' in Nature, 11, 357. Reprinted in The Scientific Papers of J.C. Maxwell, Vol. 2, 436, and in Brush (1976c).

10 Brush (1976c, 615).

11 For a review see Klein (1973).

12 Boltzmann (1895, 415).

But his first attempts to derive the second law with the H-theorem rely explicitly on the statistical independence of the colliding molecules, i.e.,
on the stochasticity of the dynamics. Boltzmann's early ideas on the H-theorem, moreover, encouraged some to seek the source of the stochasticity (and, erroneously, as Price (2003) notes, of irreversibility) in the interaction of the studied system with the environment.13

And so it was the founding fathers of SM, and not of QM, who played with the distinction between chance and ignorance as the origin of the probabilities they introduced into the underlying dynamics in order to account for thermodynamical phenomena. On the one hand they regarded TD as only statistically valid, that is, as applying only to the macroscopic realm, and hence traced the probabilities of SM to the 'crudeness' of thermodynamical observables and the ignorance of the experimenter of the exact micro-state at hand. On the other hand, they realized that without some kind of 'randomization' at the molecular level there could be no justification for the probabilistic assumptions themselves. By 1900 the necessity of the assumption of 'molecular chaos' to the explanation of irreversibility is promoted by Planck, when he announces that the most important question of contemporary scientific philosophy is that of the compatibility of TD and mechanics.14

The physics community was only to abandon the epistemological interpretation of indeterminism at the atomic level with the rise of QM and with Einstein's treatment of Brownian motion, but the seeds for this revolution were already planted and nourished by the founders of SM. Agreed, they were ambiguous with respect to whether this indeterminism should be treated as epistemological or ontological, and it was only in QM that the renunciation of the former was (almost) completely achieved, but the struggles to derive the second law left an unhealed scar on the deterministic world-view.
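The flavour of these probabilistic assumptions can be conveyed by Ehrenfest's well-known urn (or 'dog-flea') model, a toy we add here purely for illustration (the flea counts and step numbers are arbitrary): the uniformly random choice of which molecule moves plays the role of 'molecular chaos', and the resulting approach to equilibrium is probabilistic rather than dynamical.

```python
import random

def ehrenfest_run(n_fleas=100, steps=500, seed=1):
    """Ehrenfest's urn ('dog-flea') model. All fleas start on dog A;
    at each step one flea, chosen uniformly at random, jumps to the
    other dog. Returns the number of fleas on dog A after each step."""
    rng = random.Random(seed)
    on_a = n_fleas  # start as far from equilibrium as possible
    history = [on_a]
    for _ in range(steps):
        # the uniformly random choice of flea is the stochastic posit,
        # playing the role 'molecular chaos' plays in the H-theorem
        if rng.random() < on_a / n_fleas:
            on_a -= 1
        else:
            on_a += 1
        history.append(on_a)
    return history

h = ehrenfest_run()
print(h[0], h[-1])  # drifts from 100 toward the equilibrium value of 50
```

A return to the all-on-one-dog state is not forbidden, only fantastically improbable, which is Boltzmann's moral in miniature.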
5.2 The Quantum Measurement Problem

It is well known that the formalism of QM allows the existence of certain states, namely superpositions, that cannot be given a classical probability interpretation as measures of ignorance of the actual unknown values. Roughly speaking, if a certain macroscopic measuring apparatus finds itself entangled with a certain quantum system, as is the usual case when measurements are performed, then, assuming the formalism of QM is complete and universally applicable, this macroscopic measuring apparatus does not possess a determinate state at the end of the measurement. That everyday life is full of examples to the contrary is the essence of the quantum measurement problem. More precisely, it can be formulated as the mutual inconsistency of the following three statements:

1. The wave function of a system is complete; the wave function specifies all the physical properties of a system.

2. The wave function always evolves in accord with a linear dynamical equation (the Schrodinger equation).

3. Measurements of, e.g., the spin of an electron, always (or at least usually) have determinate outcomes, i.e., at the end of the measurement the measuring device is either in a state which indicates spin up (and not down) or spin down (and not up).

13 This was Culverwell's (1894) idea, echoed in Burbury (1895), but it is reminiscent of Maxwell's justification of his equipartition theorem some twenty years before. See Brush (1976b, ch. 14, sec. 6) for further references and details.

14 Brush (1976c, 621).

The founding fathers of QM (e.g., Dirac and von Neumann) insisted on (1) and (3), hence had to reject (2) and postulated a collapse of the wave function due to the measurement. It was claimed that after the measurement the superposition 'shrinks' to one of the eigenvalues of the eigenstate of the observable measured. As John S.
Bell (1990) notes, the orthodox collapse postulate introduced two quantum jumps: one of the classical apparatus to the eigenstate of its 'reading' and one of the system to the eigenvalue of that eigenstate. But the orthodox view is widely held to be unsatisfactory. In Bell's (ibid.) own words:

   It would seem that the theory is exclusively concerned about 'results of measurements' and has nothing to say about anything else. What exactly qualifies some physical systems to play the role of the 'measurer'? Was the wave function of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better-qualified system—with a Ph.D.?

Bell (1987, 201) rejected the arbitrary 'shifty split' between the micro and the macro, the quantum and the classical, and succinctly put matters in the following terms:

   Either the wave function, as given by the Schrodinger equation, is not everything, or it [the Schrodinger equation] is not right.

In other words, in order to solve the measurement problem either the kinematics of orthodox QM or its dynamics must be modified.

The first route—the idea that the wave function is not everything—was taken by Bohm (following de Broglie), and recently by modal interpretations of QM.15 Among these two approaches, the former (Bohmian mechanics) is the best alternative for those who still reject the idea that the statistical character of QM is a result of pure chance.
Similarly to Boltzmannian SM, in this framework the meaning of quantum probabilities arises from the incomplete description of the wave function, i.e., from ignorance.16

The second route—the idea that the Schrodinger equation is not always right—was taken some fifteen years ago by a group of three Italian physicists and was developed into a full-blown theory, namely, the GRW theory.17 GRW added a non-linear and stochastic term to the otherwise deterministic and linear equation of motion of QM and offered a unified dynamics for the micro and the macro regimes, based on pure chance.

While Bohmians and GRW followers deny statements (1) and (2) respectively, one can still deny statement (3). This route opens the way for other no-collapse interpretations of QM, e.g., Everett's many-worlds interpretation and its variants. In this framework it was recently suggested to interpret quantum probabilities within a decision-theoretic scheme as degrees of belief.18

Of course, there are some working physicists who dismiss the measurement problem altogether, claiming that 'it refers to a set of people'19 and not to a genuine conundrum in the foundations of QM. In this instrumental 'no-interpretation'20 view one regards the wave function not as a physical

15 For a lucid exposition of Bohmian mechanics see Albert (1992, ch. 7). A comprehensive treatment can be found in Holland (1993). On modal interpretations see, e.g., Van Fraassen (1991).

16 The case of modal interpretations is much more involved, since the extra dynamical laws introduced in these interpretations are stochastic, and yet one can apply these laws to an instantaneous state only after decoherence ensures the 'existence' of such a state, i.e., only after a special kind of ignorance, different than the one propounded by Bohmian mechanics, is acknowledged. See sections 5.4.3 and 5.4.4 below.

17 Ghirardi et al. (1986).
The first collapse model, i.e., a suggestion to regard the collapse rule as a dynamical process rather than an ill-defined postulate, appeared some twenty years before GRW's paper, in Bub and Bohm (1966). Other eminent physicists such as P. Pearle, J.S. Bell, N. Gisin and A. Shimony have also, along the years, contributed to the development of the dynamical state-vector reduction program.

18 See Deutsch (1995); Saunders (1998); and Wallace (2002), but also Barrett (1999) and Maudlin (2001) for a criticism.

19 The words of a prominent Caltech physicist, Hideo Mabuchi, in Fuchs (2001).

20 Fuchs and Peres (2000).

entity, but as a mathematical construct that represents nothing but the knowledge of the physicist. Under the 'no-interpretation' view there exists no non-instrumental way of interpreting quantum probabilities, since according to this view these probabilities are probabilities for finding the system in a certain state; hence John Bell's question about whose knowledge it is that QM captures can still be asked, and the arbitrariness with respect to the scope of quantum theory and the boundary between the quantum and the classical still remains.

We turn now to investigate the one solution to the measurement problem which introduces objective physical chance into the dynamics.

5.3 Schrodinger's equation is not always right

Schrodinger never did hide his irritation when he talked about the "miracle" of the collapse postulate:

   If we have to go on with these damned quantum jumps, then I am sorry that I ever got involved.21

Yet the whole point of Schrodinger's infamous cat was to indicate the absurdity of macroscopic superpositions and the necessity of the 'collapse' of the state vector. The GRW theory is an example of a realistic interpretation of the wave function which solves the problem of macroscopic superpositions by taking the collapse of the wave function seriously. In presenting it here I follow Bell (1987).
21 Quoted in Bell (1987, 201).

GRW's idea is that the wave function of an N-particle system

   ψ(r1, r2, ..., rN, t)   (5.1)

usually evolves in accord with the Schrodinger equation, but every once in a while (once in 10^16/N seconds) it gets randomly (but with a fixed probability rate) multiplied by a normalized Gaussian (and the product of these two separately normalized functions gets multiplied at the same time by an overall normalizing constant). The Gaussian is of the form

   G = K exp[-(rk - r)^2 / 2σ^2]   (5.2)

where rk is chosen at random from the arguments of ψ and the width σ of the Gaussian is of the order of 10^-5 cm. The probability of this Gaussian being centred on any particular point r is stipulated to be proportional to the absolute square of the inner product of ψ, evaluated at the instant just prior to this 'jump', with the Gaussian G:

   |⟨ψ(r1, ..., rN, t) | G(rk)⟩|^2   (5.3)

Then, until the next 'jump', or 'hit', everything proceeds as before in accordance with the Schrodinger equation.22

In such a theory there is nothing but the wave function, in which we must find an image of the physical world. But the wave function does not live in 3-space; it lives in a multidimensional configuration space, so it makes no sense to ask for the amplitude or the phase of the wave function at a point in spacetime. The 'jumps', however, are well localized in ordinary space.23 The wave function, according to this view, gives the density (in a multidimensional configuration space) of the 'stuff' the world is made of.24 And that is all.

The theory is remarkably short and clean but its implications are far-reaching. First one should note that calling it an 'interpretation' is off the mark. This is a new theory whose predictions differ from orthodox QM, aiming to replace orthodox QM.
Second, the theory takes very seriously the wave function and its collapse, and most important, it takes seriously the idea of entanglement, the one feature of QM that Schrodinger (1935, 555) called

   not ... one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought.25

And yet the theory offers a natural way to eliminate the irritating consequences of entanglement when macroscopic bodies are involved, and this is done just by making the rate of the 'new quantum jumps' proportional to the mass at the spacetime point they are localized to. Thus, the theory allows electrons to enjoy the cloudiness of waves while tables and chairs are restricted to specific regions in spacetime.

22 The probability for a 'hit' and the Gaussian width are taken to be new constants of nature, cooked up in such a way that for an isolated microscopic system the 'jumps' will be rare, completely unobservable in practice, and for a macroscopic system the violation of energy conservation would be very small.

23 And this is what led Bell (ibid.) to propose them as the 'local beables' of the theory, and what was also a source for his optimism regarding the prospects of rendering the non-relativistic theory Lorentz invariant.

24 Ghirardi (1997; 2000, 110-111).

25 For any two systems S1 and S2, among the possible states of the joint system S1 ⊗ S2 are entangled states, in which neither of the composite systems possesses its own state vector, and in which the probabilities regarding the results of a pair of measurements, one on each system, do not factorize into independent probabilities regarding the individual measurements.
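The localization mechanism of equations (5.1)-(5.3) can be sketched with a deliberately crude toy we add for illustration: a single 'particle' on a one-dimensional grid (the grid, the width σ, and the initial 'cat' state are our arbitrary choices, not part of the theory). A single 'hit' suppresses all but one branch of a macroscopic superposition.

```python
import math
import random

def grw_hit(psi, xs, sigma, rng):
    """Apply one GRW-style 'hit': pick a centre with probability
    proportional to the squared overlap of psi with the Gaussian G
    (cf. eq. 5.3), multiply psi by that Gaussian (cf. eq. 5.2), and
    renormalize."""
    weights = []
    for r in xs:
        overlap = sum(p * math.exp(-((x - r) ** 2) / (2 * sigma ** 2))
                      for p, x in zip(psi, xs))
        weights.append(overlap * overlap)
    centre = rng.choices(xs, weights=weights)[0]
    hit = [p * math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))
           for p, x in zip(psi, xs)]
    norm = math.sqrt(sum(p * p for p in hit))
    return [p / norm for p in hit]

# A 'Schrodinger cat' state: equal-weight superposition of two distant lumps.
xs = [i * 0.1 for i in range(-100, 101)]
sigma = 0.5
psi = [math.exp(-(x - 5.0) ** 2) + math.exp(-(x + 5.0) ** 2) for x in xs]
norm = math.sqrt(sum(p * p for p in psi))
psi = [p / norm for p in psi]

psi = grw_hit(psi, xs, sigma, random.Random(2))
left_weight = sum(p * p for p, x in zip(psi, xs) if x < 0)
print(left_weight)  # after one hit, almost all the weight sits on one side
```

In this toy the hit lands on one lump or the other with roughly equal probability, which is just the Born-like rule (5.3) at work; for a genuinely microscopic system the hits would simply be too rare to matter.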
Finally, the theory is non-TRI2,26 and it involves pure chance.27 As such, Albert (1994; 2001) believes that (1) it can offer a natural mechanical model for thermodynamics, and that (2) since it frees one from reference to specific initial conditions or to unjustified probabilistic assumptions, and nothing but dynamical considerations is invoked in the explanation, it is also a favourite candidate for the solution to the twofold problem of probability in SM.

The key idea is this.28 In the GRW theory the jumps mean that the system actually undergoes stochastic transitions from one state to another. Initial states play no role whatsoever in determining either the system's current state or the system's rate for collapses. In the context of the recovery of thermodynamics it is useful to think about the GRW jumps as if they induce stochastic perturbations of the Schrodinger trajectory of the system. This means that the GRW trajectory can be seen as a patchwork of segments of different Schrodinger trajectories, each of which corresponds to a different initial state of the system. The system jumps from one Schrodinger trajectory to another, such that the net result is an effective stochastic trajectory. In other words, the system performs a random walk in the space of all possible Schrodinger trajectories, where the probabilities are given by (5.3).

The connection to thermodynamics goes as follows. We want to determine whether the time evolution of a given system is thermodynamically normal or abnormal. Consider the spreading out of a gas in a box. When the partition is removed at t0 the composite wave function of the gas is

   |Ψ(0)⟩ = Σi λi(0) |Φi⟩   (5.4)

where the |Φi⟩ are some wave functions in position representation and the λi(0) are the corresponding quantum mechanical amplitudes.
We assume that the wave function of the gas evolves in time in accordance with the GRW dynamics, and that the gas is macroscopic enough, so that there is high probability for a GRW jump to occur during any dynamical time interval Δt. When a jump occurs the wave function of the gas collapses into a state that is localized around a certain position r = r1, r2, ..., rN, that is, around some spatial distribution of the gas molecules. For example, at t1 the wave function collapses into some state |ψ1⟩ which corresponds to a Gaussian centered around r(t1). The collapsed state then evolves at the intermediate times t2 (where t1 < t2 < t3) in accordance with the Schrodinger equation. The high mass density of typical macroscopic systems implies an overwhelmingly high probability for a collapse by t3 (where t3 − t1 > Δt) onto a Gaussian centered around a position r(t3), where r(t3) ≠ r(t1).

A sequence of such jumps results in a trajectory in the system's state space which can be described in terms of thermodynamical magnitudes. It is therefore possible, in this case, to determine whether or not the evolution of the system obeys the laws of thermodynamics (e.g., of entropy non-decrease). Suppose now that we write down the GRW equation for a given thermodynamical system, and solve it for all possible initial states. Consider any time interval (t1, t3) for which, according to the GRW prescription, a collapse of the quantum state of the system occurs with high probability.

26 The collapse is a nomological non-TRI3 process, i.e., 'forward' directed in the sense explained in chapter four (section 4.1.1).

27 The jumps are spontaneous, or random, in the sense explained in chapter three (section 3.4).

28 Further details can be found in Hemmo and Shenker (2001; 2003), on which the following is based.
For every possible initial state at t1 there are in general many (possibly infinitely many) possible final states at t3. For each such evolution, it is then possible to determine whether the evolution is thermodynamically normal or abnormal.

Albert (2001, 148-162, and especially 155-6) now observes the following. First, the GRW jumps can be understood as inducing stochastic perturbations of the quantum state of the gas. We thus have an internal perturbation mechanism. As explained above, the wave function, say of our gas in (5.4), follows a genuinely stochastic trajectory in the system's state space. Second, and moreover, any GRW collapse induces a set of probability distributions, that is, transition probabilities—given the wave function just prior to the collapse—over the possible wave functions of the system immediately after the collapse. So in order for a GRW collapse to put (with high probability), say, our gas's wave function on a segment of a Schrodinger trajectory which is thermodynamically normal, what is needed is that the thermodynamically normal states (throughout the set of micro-states to which the system can collapse) overwhelmingly outnumber the thermodynamically abnormal ones. Moreover, we need this condition to hold in every microscopic region of the state space. If this turns out to be correct it would mean that after a GRW jump the wave function of the system will be (with high probability) thermodynamically normal. This is regardless of the history of the system, and in particular of the state of the system immediately before the collapse. And so, each and every state has an overwhelmingly high probability to evolve to a thermodynamically normal state following a GRW jump.
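This history-independence can be caricatured by a two-state stochastic process, a toy of our own making (the transition probability below is invented for illustration; nothing in the text fixes its value): being 'normal' is stable and being 'abnormal' is unstable precisely because each jump re-draws the state with the same probabilities, whatever came before.

```python
import random

# A hypothetical, history-independent transition probability: it only
# has to be overwhelmingly close to 1 for the argument to go through.
P_NORMAL = 0.999

def evolve(state, n_jumps, rng):
    """After each jump the state is re-drawn from the same distribution,
    regardless of the current state or of any earlier history."""
    for _ in range(n_jumps):
        state = "normal" if rng.random() < P_NORMAL else "abnormal"
    return state

rng = random.Random(3)
# Even a system prepared 'abnormal' is almost surely 'normal' one jump later:
runs = [evolve("abnormal", 1, rng) for _ in range(10_000)]
print(runs.count("normal") / len(runs))
```

The fraction printed hovers near P_NORMAL no matter how the runs are prepared, which is the toy analogue of Albert's claim that initial states drop out of the explanation.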
This implies that the property of being thermodynamically normal is stable over time, whereas that of being thermodynamically abnormal is highly unstable. In effect, what is needed is that the GRW probabilities for the collapse transitions reproduce the probabilities of the normal and abnormal trajectories calculated from the standard statistical-mechanical measure for any given macro-state of the system. Albert puts forward the hypothesis that as a matter of fact the GRW dynamics satisfies this constraint. Call this Albert's dynamical hypothesis. In other words, he conjectures that the GRW dynamics will justify the use of the standard measure in classical (and quantum) statistical mechanics.

But note that in order for Albert's dynamical hypothesis to be true, there is no need to invoke postulates regarding initial states and probability distributions thereof. Rather, since the GRW dynamics is genuinely stochastic, whether or not this hypothesis is true depends on the set of transition probabilities generated by the GRW collapses. In this sense, Albert's approach aims at deriving the thermodynamical regularities from the underlying GRW dynamics only, without recourse to initial states or probability distributions thereof.

But GRW theory is neither the only solution to the measurement problem nor the only approach to the foundations of SM. One of its rivals in this twofold context is the open system approach.

5.4 The Sirius Problem is a Serious Problem

The motivation for the open system approach, or interventionism, was presented in previous chapters.
Here we shall describe the one experiment which is usually regarded as supporting interventionist claims; distinguish between radical and moderate interventionism; and discuss certain conceptual difficulties one faces when applying it in the foundations of SM. The technical details can be found in Appendix C.

5.4.1 The Spin Echo Experiments

A series of experiments dealing with the array of spinning, and therefore magnetic, nuclei in a crystal lattice provide an observational result that, even if not of great importance in the ongoing discovery process of physics, is important as a test case for the open system approach to the foundations of SM.29

The experiments begin by aligning the nuclei's axes of rotation in parallel by means of a strong, externally imposed magnetic field. An appropriate radio-frequency pulse then flips all of the axes ninety degrees so that they now point in a direction perpendicular to the imposed magnetic field but still parallel to one another. At this point the axes begin to rotate in the plane perpendicular to the magnetic field due to the phenomenon known as precession. The nuclei precess at different rates due to variations in the perfection of the crystal. After a short time, macroscopic observations indicate they are distributed 'at random' in the perpendicular plane. At a time Δt after the initial radio-frequency pulse, a second pulse flips the nuclei in the same perpendicular plane so that the nuclei that have moved furthest are now furthest 'behind' in the 'precession race'. The nuclei continue to precess, and at a time 2Δt after the first pulse the nuclei once more have their axes aligned in the original direction.
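The refocusing just described can be captured in a few lines of code (a kinematic sketch we add for illustration: precession rates are drawn at random, and the second pulse is modelled simply as a reversal of the accumulated phase).

```python
import cmath
import random

def spin_echo(n_spins=1000, dt=1.0, seed=4):
    """Toy spin echo: each nucleus precesses at its own rate omega
    (the analogue of the runners' different speeds). The second pulse
    at time dt is modelled as a reversal of the accumulated phase, so
    at time 2*dt every phase returns to zero and the signal revives."""
    rng = random.Random(seed)
    omegas = [rng.uniform(0.0, 20.0) for _ in range(n_spins)]  # inhomogeneous rates

    def signal(phases):
        # normalized macroscopic transverse magnetization
        return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

    dephased = [w * dt for w in omegas]            # just before the second pulse
    flipped = [-p for p in dephased]               # the pulse reverses each phase
    refocused = [p + w * dt for p, w in zip(flipped, omegas)]  # at time 2*dt
    return signal(dephased), signal(refocused)

before, after = spin_echo()
print(before, after)  # the dephased signal is small; the echo returns to 1
```

In this idealized version the echo is perfect; the observed diminishing of the echo, discussed next, is precisely what the toy leaves out.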
The reappearance of order among the axes can be detected macroscopically as a pulse using standard nuclear resonance techniques. After repeated experiments, the pulse that indicates reordering of the spin axes diminishes in intensity. This is due to the transfer of energy from the system of spins to other components of the system.

We can describe the above experiment in everyday terms as a group of runners on a track.30 In the initial state the runners are aligned on the start line as the spins are aligned by the magnetic field. The 'precession race' begins with the first radio-frequency pulse and continues until a time Δt at which the runners are 'randomly' distributed in formation according to their variant velocities. On the second radio-frequency pulse the runners turn back and start running in the opposite direction, thus the slower ones are now closer to the starting point. After a time 2Δt the runners are again aligned on the starting line. The recovery of the initial state yields the echo effect.

5.4.2 Radical Interventionism in the Classical Regime

The spin echo experiments led some physicists to look for models that might explain the phenomena of dissipation of the statistical correlations between the atoms and the observation that after some time the echo diminishes.31

29 The description of these experiments, performed by Hahn (1950; 1952), is taken from Sklar (1993, 219-220).
30 The analogy appears in Hahn's articles and also on the front cover of Physics Today (Nov. 1953).
31 Lebowitz (1955; 1956; 1957); Balescu (1967); Sudarshan (1972).

All these models assume a system that is open for interactions with a reservoir or an environment.
With further simplifying assumptions, the most important being the assumption of initial statistical independence between the states of the reservoir and the states of the system,32 one can average over the possible states of the reservoir to yield a transition probability density which gives the probability that the representative point of the system, known to be at a certain location in the system's phase space, will be propelled into an infinitesimal volume element within a certain time interval. Roughly put, by disturbing the flow of the system in phase space through the introduction of a non-linear term into the otherwise linear dynamics, one can then show how, given certain conditions, the system will demonstrate a monotonic approach to equilibrium.33

As far as the net result of these models is concerned, they are very similar to the mechanism of the GRW 'jumps', yet conceptually the two approaches are complete opposites. The idea is that in the GRW framework the perturbations in the system trajectory are a result of an inherent stochastic 'jump', and there is no need to distinguish between the system and its environment. In the classical interventionist models, however, the perturbations are a result of external interventions and the separation of the world into 'systems' and 'environments' is mandatory. Next, the 'noise' in the GRW model is a result of pure chance and thus cannot be eliminated whether or not the system is isolated. But the radical interventionist surely cannot hold that the laws that govern the system do not govern the environment, hence—working as he does in the classical regime—he must interpret the 'noise' as a result of ignorance.
However, in this case (1) one can include the environment in one's description and eliminate the 'noise', and (2) one leaves unsolved the problem of choosing a probability measure in SM, since in order to 'generate' the noise from the environment one must assume the very probability measure one is looking for from the outset. In other words, whatever 'randomness' one invokes in the environment in order to perturb one's system must have gotten there under the same probabilistic assumptions one is trying to justify in the first place.

32 Indeed, how can one make a distinction between the environment and the system without assuming they are distinct and separated, or that their states are statistically independent in the beginning of the interaction?
33 By invoking the analogy of runners in a race one can think of the effect of the environment as a malicious spectator of the race who throws, say, tomatoes at the runners and causes them to slightly deviate from their positions on the track in such a way that when they reverse their movement they can no longer arrive together at the starting line. A modern version of this radical interventionism can be found in Ridderbos and Redhead (1998) and in Ridderbos (1999, 58).

Furthermore, as stressed in chapter four (section 4.1.3), the spin echo experiments demonstrate the impact of 'noise' on the ability to recover a physical state. Thus they are relevant to the issue of practical irreversibility, and, unless one commits oneself to a certain view in philosophy of science that cherishes dynamical explanations, they are quite irrelevant to the question of explaining physical irreversibility.
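The 'malicious spectator' of the runners analogy can be added to the toy picture: random phase kicks delivered between the pulses spoil the exact cancellation, and the echo diminishes. The following is a hypothetical sketch under stated assumptions (Gaussian kicks of adjustable strength; all names and the noise model are illustrative, not drawn from the interventionist models cited above):

```python
import cmath
import math
import random

def echo_with_noise(noise=0.0, n_spins=1000, dt=1.0, seed=1):
    """Echo coherence at time 2*dt when each spin also receives small
    random phase kicks (the 'tomatoes') from the environment, so the
    reversed precession no longer cancels exactly."""
    rng = random.Random(seed)
    amp = 0.0 + 0.0j
    for _ in range(n_spins):
        w = rng.uniform(0.0, 2 * math.pi)        # intrinsic precession rate
        phase = w * dt + rng.gauss(0.0, noise)   # dephase plus environmental kick
        phase = -phase                           # second pulse reverses the phase
        phase += w * dt + rng.gauss(0.0, noise)  # precess again, with more kicks
        amp += cmath.exp(1j * phase)
    return abs(amp) / n_spins                    # 1 = perfect echo, 0 = no echo

perfect = echo_with_noise(noise=0.0)  # full echo
noisy = echo_with_noise(noise=1.0)    # markedly diminished echo
```

The intrinsic rates cancel regardless of noise strength; only the uncancelled kicks survive the reversal, which is why the echo intensity, and not the initial dephasing, measures the environmental disturbance.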
Nevertheless, in what follows we do commit ourselves to this particular view in philosophy of science insofar as we are looking for mechanical models for the thermodynamical phenomena, and within this framework there might be a way to replace radical interventionism with a more moderate version that can still be useful.

5.4.3 A New Kind of Ignorance

Given the deterministic framework of classical mechanics, radical interventionism that introduces 'real' noise into the system in terms of perturbations in the system's trajectory must be ruled out as conceptually unsatisfactory, but it leaves us with an important lesson. It drives home the observation that what physicists regard as 'system' is entirely conventional, and, with the proviso that the only true closed system in our world is the universe as a whole, the notion of 'environment' is only residual; it is whatever is not the system. Thus, there is no need to interpret the 'noise' in the system as real perturbations. One can just say that the system 'appears' perturbed because one decided to restrict the system's phase space by certain boundaries and eliminate the environment from the description.

Under this interpretation, which may be appropriately named 'moderate', the probabilities we introduce into the models are neither a result of environmental 'shocks' nor a result of ignorance of the system's exact state: as for the former idea—its redundancy was exposed above; as for the latter—within classical mechanics there is no reason why the internal dynamics of the system cannot be known.34
Rather than being underwritten by environmental perturbations or by 'coarse graining', the origin of SM probabilities lies, according to the moderate interventionist, in the elimination, or 'tracing-out', of the environment from the description of the system's behaviour.

And note that under this moderate interpretation, there is no attempt to contest the past hypothesis or the idea that the reason for physical irreversibility lies in the initial conditions of the universe. There is also no attempt to predict the future behaviour of the system, because one would then need to know the state of the environment, and in order to obtain the latter one would first have to retrodict the initial state of the universe. Thus, moderate interventionism is quite modest in its claims, but it serves to rehabilitate the models of its radical kin, which are the only ones until now that yielded realistic time scales for relaxation to equilibrium, and, as we are about to see, in the quantum regime it also ties the probabilities of SM with the dynamical description, thus resolving the problem of choosing a probability measure in SM.

34 This was already clear to Einstein, who preferred Boltzmann's first, dynamical, derivation of the micro-canonical measure to his second, combinatorial, one. See Cohen (2002, 21).

Agreed, one consequence of the emphasis on systems' being open and the use of 'tracing-out', i.e., the elimination of the environment from the dynamical description, is that SM ceases to be a fully experimental science.
Yet it remains empirical in the same sense cosmology is: we can observe nature, but we are unable to control and reproduce some very important parameters.35

Consequently I see no a priori reason to reject the models that moderate interventionism offers as an alternative to the standard, Boltzmannian, framework,36 or not to include them as yet another link in the historical chain of improvements to the mechanical models for thermodynamics—a chain initiated originally by Bernoulli in 1738. For physicists the arguments for or against moderate interventionism should be empirically motivated. A priori reasoning against improving one's models would have made Maxwell's innovative velocity-distribution law or any other model in the respectable chain above a target for criticism.

Let me just mention what can be achieved with moderate interventionism in the classical foundations of SM modulo the conceptual problems it leaves unresolved. Here the gain is practical rather than philosophical in character. Although there have been several alternative suggestions,37 in the classical context it is usually ergodicity which is invoked to justify the use of the micro-canonical measure in equilibrium SM,38 and it is here where moderate interventionism can make a (small) difference.

First, recall that notwithstanding a system's being ergodic there always exists a measure-zero set of initial conditions which yield abnormal thermodynamical behaviour. In opening one's system to environmental interactions one might explain why these initial conditions are so hard to contrive: they require a level of accuracy which is practically impossible when environmental effects are present.39 Thus, although external influence is unimportant for explaining physical irreversibility, it is crucial for preventing induced reversibility.

Second, many-particle ergodic systems relax to equilibrium in infinite time, and the few-particle models for which ergodicity is proved are highly contrived and unphysical. In other words, ergodic systems are non-generic among finite Hamiltonian systems and the ergodic theory suffers from a serious lack of realistic models.40 But just by taking into account the walls of the container—ironically, these are usually discarded in any respectable model of SM—one, e.g. Blatt (1959), achieves finite and more realistic time scales. The question that is left open, of course, is not what 'stirs' faster, but what 'stirs' simpliciter.

Finally, with the proviso that physical systems are 'open' one can extend the quest for constructing mechanical models for TD phenomena to the non-equilibrium domain. In this domain equilibrium states are replaced by stationary states maintained by the environment.

35 Shenker (2000), who defends this view, says: "Some might find this unpleasant. However, whether or not science is fully experimental is unfortunately not up to us".
36 Here I disagree with a team of respectable philosophers and physicists such as J. Bricmont, D.Z. Albert, C. Callender, L. Sklar and P. Horwich.
37 For example, Khinchin's (1949) approach, which is based on the central-limit theorem, or Pitowsky's (2001) interesting fluctuations approach.
38 The classical text is Farquhar (1964). For a recent review see Sklar (1993); Gallavotti (1999); Van Lith (2001); Emch and Liu (2002, 237-330). For the definition of ergodicity see App. B.
In the vast scientific literature on non-equilibrium SM one can find attempts to connect non-equilibrium and equilibrium SM that demonstrate how the latter becomes a special case of the former, and in this sense moderate interventionism supplies a relative consistency proof for the ergodic hypothesis.41 Since these contributions are pragmatic in character, they are of interest mostly to physicists.

39 One might object to this minute advantage by saying the ergodic hypothesis is subject to limitations imposed by the KAM theorem, which states that in any Hamiltonian conservative system there exist 'islands of stability' which form a non-vanishing measure subset of phase space, thus preventing the system from demonstrating full ergodicity even if perturbed. See Emch and Liu (2002, 310-317); Sklar (1993, 169-174) and the example of the FPU 'paradox' (chapter two, fn. 47). But note that the KAM theorem concerns stability of trajectories in phase space hence is relevant to ensembles of systems. The idea of applying interventionism to the difficulty of 'picking' the 'abnormal' initial conditions of an individual system is immune to this criticism.
40 Emch and Liu (2002, 294-310; 317-330).
41 Thirty years ago David Ruelle (1973; 1985) proposed a viewpoint which is far more general than that of Boltzmann's quest for the microscopic source of the macroscopic heat theorem. Ruelle's proposal is based on an extension of Boltzmann's ergodic hypothesis. In systems in which dissipation occurs one can regard the motion as developing asymptotically on an attractor smaller, in general, than the entire phase space. However, on this attractor the motion will not sensibly differ from a motion subject to conservative forces. This proposal, later developed by Gallavotti and Cohen (1995; 1996), is now known as the 'chaotic hypothesis'. See Appendix B.
Philosophers, however, are apparently interested more in the conceptual problems of understanding and choosing probabilities in SM than in the construction of mechanical models for TD. Going back to philosophy and to the conceptual foundations of SM, I believe, nevertheless, that even if we accept the claim that the open system approach is irrelevant to the twofold problem of probability in SM as far as classical mechanics is concerned, nothing stops us from mitigating radical interventionism with its moderate kin and from moving into the quantum regime.

5.4.4 Quantum Decoherence

For physicists who believe in the crucial contribution of the ergodic theory to the foundations of SM,42 especially in underpinning Boltzmann's insights and in providing an imaginative guide for conjectures, the open system approach has pragmatic importance. Philosophers, however, together with a loud minority of physicists, e.g., Bricmont (1996); Goldstein (2001), reject the relevance of ergodicity to the conceptual problems in the foundations of SM.43 Nevertheless, my claim is that modulo certain metaphysical caveats, the open system approach—when applied to the quantum regime—can yield a conceptual gain: similarly to the GRW theory it can play a role in solving the twofold problem of probability, i.e., the problem of understanding and choosing probabilities in SM. Appendix C describes this role while the following section elucidates the metaphysical caveat.

The open system approach to the foundations of SM inspired physicists working in QM and gave rise to the school of environmental decoherence.44 The description of open quantum systems is entirely different from the corresponding situation in classical theories. The formalism of quantum theory uses the concept of a density matrix for describing parts of a larger system.
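For readers unfamiliar with the formalism, the reduced density matrix and the 'tracing out' of a subsystem can be illustrated with a minimal sketch. This is hypothetical illustrative code, assuming the standard two-qubit basis ordering |00>, |01>, |10>, |11>; it is not the formalism of Appendix C:

```python
# A two-qubit pure state as a 4-component vector, its density matrix,
# and the reduced density matrix of the first qubit obtained by
# 'tracing out' (summing over) the second qubit's degrees of freedom.

def density(psi):
    # rho = |psi><psi|, as a nested list
    return [[a * b.conjugate() for b in psi] for a in psi]

def trace_out_second(rho):
    """Partial trace over the second qubit:
    rho_A[i][j] = sum_k rho[2*i + k][2*j + k]."""
    return [[rho[2 * i + 0][2 * j + 0] + rho[2 * i + 1][2 * j + 1]
             for j in range(2)] for i in range(2)]

# Entangled Bell state (|00> + |11>)/sqrt(2): the reduced state of the
# first qubit is the maximally mixed matrix diag(1/2, 1/2); the phase
# relations of the superposition are absent from the reduced description.
bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]
rho_a = trace_out_second(density(bell))
```

For a product state such as |00> the same operation returns the pure state diag(1, 0); only entanglement with the traced-out part turns the reduced state into a (improper) mixture, which is the point the following paragraphs exploit.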
The annoying superpositions of macroscopically different properties can be shown to disappear from these density matrices on an extremely short time-scale. For example, phase relations between different positions are continually destroyed, or more precisely are 'delocalized' into the environment. The relevance of this irreversible coupling to the environment is widely accepted, but the mechanisms of decoherence are different from (though related to) those responsible for the approach to thermal equilibrium. In fact, decoherence precedes dissipation in being effective on a much faster time-scale, but it requires special initial conditions; as special as those which are responsible for reproducing an approach to equilibrium in thermodynamics.45

42 Ruelle (1991); Gallavotti (1999); Emch and Liu (2002).
43 See the ongoing debate on the pages of Philosophy of Science for the last three decades.
44 See Appendix C and Giulini et al. (1996).

Since for all practical purposes decoherence yields an apparent collapse, prima facie it has almost all the advantages of the GRW theory when applied to the foundations of SM. Conceptually, however, the two approaches are quite distinct. Postponing the technical details to later chapters and to the Appendix as I do, I would like to confine myself here to three remarks that highlight the subtle differences between the 'quantum kin' of classical interventionism and the GRW theory within the project of supplying a dynamical explanation of thermodynamic phenomena.

The first remark regards a general difference between radical and moderate quantum interventionism. In the radical case it is the environment that is viewed as 'disturbing' the system, and the non-linear terms introduced into the dynamics represent 'hits' or 'jumps' that are inflicted on the system by its interaction with the environment. In the quantum moderate case the story is exactly the opposite.
Here the system 'hits' the environment, or the measuring apparatus, and as a result of decoherence classicality emerges, i.e., arbitrary superpositions are dismissed, yielding to a preferred set of 'pointer states' which the system 'imprints' on the environment. These states are entangled least and are stable inasmuch as they are insensitive to further interactions of the system with the environment. All other states are entangled with the environment; preserve their purity; and are observable only if one takes into account the composed system, i.e., the system and its environment. The stable, preferred, states are thus the best candidate for a classical description within a quantum world. They are natural, dynamical, records of the system which emerge due to the system's interaction with its environment, i.e., due to the system's 'openness'.46

The second remark is that contrary to what a few physicists assert,47 decoherence by itself cannot and does not solve the measurement problem in the way the GRW theory does. What is true is that it can supply a consistency proof for the appearance of a classical world with an underlying quantum dynamics, or in other words, it can explain why we never see macroscopic superpositions, not why they do not exist. But as Bub (2000, 90-91) puts it, the fact that the 'effective' quantum state—an improper mixture described by the reduced density operator—is diagonal with respect to properties associated with some pointer basis "not only fails to account for the occurrence of just one of these [properties] but is actually inconsistent with such occurrence", since taking into account the environment gives us back the pure state from which the mixture was derived, and this state is inconsistent with the occurrence of events associated with definite properties.48 In this sense decoherence alone is more a theory that is never caught in telling a lie than a theory that tells the truth.49

45 To ensure decoherence the Hilbert spaces of the system and its environment should form a tensor product, and the interaction Hamiltonian should commute with the system's observables in the position basis. See Appendix C.
46 Zurek and Paz (1994).
47 Zurek (1991); Anderson (2001); Stamp (private communication).
48 On the difference between proper and improper mixtures see d'Espagnat (1966). In short it boils down to the idea that the former can be given a classical (Boolean) ignorance interpretation of unknown existing values while the latter, due to interference terms, cannot.
49 Bub has recently taken back his objections and has become a fierce proponent of the epistemic view of the wave function, in which the 'for all practical purposes' solution of the measurement problem by decoherence is the standard reply to what is usually considered 'philosophical quibble' and 'pseudo science'.

Consequently, and this brings us to the third remark, decoherence must be supplemented with further no-collapse interpretations such as many-worlds or modal interpretations in which extra dynamical laws (over and above Schrodinger's equation) allow one to interpret the reduced state as a probability distribution over the components of the entire wave function. Now, a combination of decoherence with appropriate dynamics seems to yield formally the same net result as the GRW theory,50 but irrespective of one's attitude towards no-collapse interpretations, in the context of harnessing these interpretations to the foundations of SM as alternatives to the GRW approach several further comments should be made.

First, as discussed in chapter four (section 4.1.1, especially fn. 19), the dynamics of the total wave function in these no-collapse theories is deterministic and time-symmetric, whence the susceptibility of these theories to the difficulty that plagues all explanations of thermodynamical behaviour which rely on deterministic and time-reversible dynamics: one cannot ignore initial conditions anymore, even if in this particular case these initial conditions are dynamical conditions for decoherence, rather than thermodynamical conditions for low entropy. And yet given these initial conditions, Hemmo and Shenker (2003, sec. 7) show how the dynamical evolution ensures effective thermodynamical behaviour as much as GRW theory does.

Second, at least one type of these no-collapse interpretations, e.g., the many-worlds interpretation, is notoriously vague about the meaning of the probabilities it employs:51 while the reduced state description is interpreted in all no-collapse theories, even in Bohm's pilot wave theory, as a single-time probability distribution over the components of the wave function, what exists in Bohm's theory (and in modal theories when interpreted as stochastic hidden variables theories) and is still missing in the many-worlds interpretation is an answer to the question what the transition probabilities are probabilities for.52

50 See chapter seven and appendix C.
51 Bell (1987, 93-99; 117-138); Albert and Loewer (1988, 198-201); Kent (1990); Hemmo (1996).

In Bohm's theory, for example, the probabilities are for particles to have certain configurations; they are calculated by using deterministic dynamics and by conditionalizing on the effective wave function and the initial probability distribution for particles in the universe.
Assuming that the initial probability distribution is typical (relative to the measure given by the square of the wave function), this recovers Born's rule for particle locations;53 the joint and transition probabilities are inherited via the dynamics from the initial probability measure by way of conditionalizing on known outcomes of experiments; and results of experiments are analyzed in terms of particle configuration.

In the various versions of the GRW theory, on the other hand, the probabilities are for the wave function to evolve in a certain way, or to have a certain form; they are calculated using the collapse dynamics which approximates Born's rule; and they are introduced directly into the dynamics. In these collapse theories the wave function is interpreted as representing mass density in a configuration space, and the probabilities are irreducible dynamical transition chances.

But contrary to the precise ontology of these theories, in the many-worlds interpretation, although there exists an account of how to calculate the probabilities, and an effective reduced description on which such calculation can rely is ensured by decoherence, the ontology is more than disturbing. In fact, the question what the probabilities are probabilities for is still open.
One can agree with recent claims that the meaning of the transition, or joint, probabilities in no-collapse interpretations is irrelevant to the question of the time-asymmetry of the theory, since what matters is the evolution of the total wave function,54 but within the project of supplying a dynamical explanation for thermodynamic phenomena this feature of no-collapse theories such as the many-worlds interpretation becomes a serious obstacle when one is trying to account for the behaviour of an individual system.

52 As noted above, recent suggestions from the 'British fortress' of the many-worlds interpretation in Oxford urge us to interpret these probabilities within a decision theoretic scheme as degrees of belief of an observer whose world is about to 'branch'.
53 Goldstein et al. (1992).
54 Hemmo (2002).

The reason for this difficulty is that these no-collapse interpretations, when accompanied by decoherence, describe the quantum state with a reduced density matrix; a mixture, albeit an improper one. This density matrix is usually interpreted as a probability distribution over an ensemble of systems. Yet it is not at all clear (1) according to what criterion an individual observed system, say, a gas in a box, should be identified with one of the systems in the ensemble, (2) how we are to ensure that a certain physical process, say, the evolution of an individual observed system over time, will involve the same individual system, and (3) what meaning of probability is left once such a system is chosen if the evolution of the total wave function is completely deterministic.
In short, in the context of reproducing TD from QM, since the formalism of the open system approach describes not a single system but an ensemble of systems, it leaves unanswered the question why this particular system approaches equilibrium.

The only way out of this problem is to regard the reduced density matrix as describing a single, albeit highly impure and entangled, individual system.55 The consequences of such a move, however, are momentous. If, according to the theory, the individual system is still entangled and only appears classical to an observer to whom the complete description of the world is unavailable, then one must admit that the formalism of our current most fundamental theory, rather than describing the world 'as it is', gives us only a description of our experience.56

55 This way out is also the legacy of the consistent histories school, in which the single system is taken to be the ensemble as a whole. For an accessible non-technical presentation of this 'post-Everettian' view see Hartle (1992) and also Wallace (2001a; 2001b), but also a criticism in Maudlin (2001).
56 That this is an inevitable consequence is made clear by the following reasoning: in order to avoid the tension between 'reality' and 'appearance' one must introduce collapse as a real physical process, as, for example, Gisin and Percival (1992), who represent the interaction of the environment with an individual system with a diffusion equation, do. The difference between their suggestion and the GRW model, however, is that in the latter the collapses are spontaneous, i.e., they occur without any cause. By invoking the environment as a cause Gisin and Percival render themselves radical interventionists, hence they are susceptible to the same criticism mentioned above: it cannot be the case that the dynamics of the world is deterministic inside one's system and indeterministic outside it. But if the collapse is not real, then what is captured by the theory is nothing but an appearance.

Summarizing, inasmuch as one wants to transfer classical moderate interventionism into the quantum regime one first needs to combine decoherence with no-collapse interpretations in order to supply an 'effective' solution for the measurement problem and to ensure continuous suppression of superpositions. Given that solution, one can then regard the transition probabilities employed by the dynamics of the no-collapse theories as effectively stochastic. As a result, one retains all the explanatory advantage of the GRW framework over the standard Boltzmannian view. But contrary to the case of the GRW theory, where a single system actually approaches equilibrium, in quantum interventionism there is still a difficulty in 'generating' a description of the appropriate individual system. Once generated, this description entails, furthermore, an ineliminable cut between appearance and reality.

5.5 Are Observed Systems Always Open?

The last paragraph makes it clear that the individual-system/ensemble-of-systems dichotomy—rather than being a quibble about formalism—indicates a major interpretative difference between the GRW theory and the open system approach with respect to the distinction between appearance and reality. But the issue at stake is not reality versus appearance per se—that there exists a distinction between the two realms is something both approaches agree on. Rather, what is at stake is which of these realms should be the ultimate subject matter of science. In other words, the two approaches differ on what should be captured by the description of our most fundamental theory.
Theories such as GRW strive for an 'Olympian world-view'.57 They regard the ultimate subject matter of science to be nature as it is. The scientific enterprise is thus aimed at producing a neutral description of the natural world, stripped of observers, human or other, and free of external interventions. The division of the world into systems, observers, and environments is regarded as artificial, and the theory endows a natural mechanism with the task of eliminating this artificiality. In such a framework the difference between how the world appears to conscious observers like us and how the world is plays only a temporary methodological role, in the sense that it guides the theorist in his quest for the unifying world-view, but once the latter is achieved, this distinction simply evaporates. Since the formalism of the theory aims to capture the world 'as it is'—the wave function is in 1-1 correspondence with an individual physical system—the philosophical world-view that underlies it is naive realistic,58 and the viewpoint of 'the observer' is always a 'God's-eye' view; a view from without.

57 Shimony (1985, 418).
58 Pearle (1982, 460; 1986, 539).

By contrast, theorists within the open system approach such as classical interventionists or einselectionists ('ein' stands for environment-induced decoherence) content themselves with much less. In these theories the distinction between how the world is and how we, as scientists, experience it and describe it is built into the theory itself, not as a temporary methodological tool to be removed later, but as the ultimate goal of the theory. Consequently, the scientific description captures only how the world appears to be, i.e., it captures only our experience as a point of view from within.
As Zurek and Paz (1995, 622) put it:

In this setting the observer must be demoted from the position of an all-powerful external experimenter dealing, from without, with one more physical system (the Universe) to a subsystem of that Universe, with all the limitations arising from such confinement to within the physical entity he/she is supposed to monitor.

In environment-induced decoherence, for example, one writes down a reduced density matrix as an 'effective' state of the composite system (the system + apparatus + observer) as it appears to an observer confined to the degrees of freedom of that composite system. The 'effective' evolution of this 'effective' state as it appears to this observer is given by a master equation. This observer is deprived of the 'God's-eye' view; for him a description of the 'rest of the world', i.e., the composite system plus the environment, is unavailable. But since QM is commonly regarded as a fundamental theory, such a description cannot be viewed as incomplete, as, for example, a phenomenological description of a physical system in thermodynamics is. As a result, the tension between the practical and the nomological unavailability of a full-blown description of the world is resolved by acknowledging that what is captured by the fundamental theory is only our experience—an appearance of the world; not the world as it is.
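The 'tracing out' of the environment just described can be illustrated with a minimal numerical sketch (my illustration, not the dissertation's): for a two-level system maximally entangled with a two-level 'environment', the reduced density matrix obtained by a partial trace over the environment carries no off-diagonal terms, i.e., the coherence of the superposition is unavailable to an observer confined to the system's degrees of freedom.

```python
import numpy as np

# Entangled state of system (S) and environment (E):
# |psi> = (|0>_S|0>_E + |1>_S|1>_E) / sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)       # basis order: |00>, |01>, |10>, |11>

rho = np.outer(psi, psi.conj())        # pure-state density matrix of S+E

# Partial trace over the environment: rho_S[i, j] = sum_k rho[ik, jk]
rho_S = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(rho_S)
# The reduced state is the maximally mixed state diag(1/2, 1/2):
# the off-diagonal ('interference') terms vanish for the confined observer.
```

Note that the composite S+E state remains pure throughout; only the observer's 'effective' description, obtained by eliminating the environment's degrees of freedom, is mixed.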
Consequently, within the open system approach 'the environment' becomes a crucial part of the scientific description, and must be taken into account as something special in the sense that, notwithstanding his perception of physical systems from within, the observer must be accompanied by an environment, i.e., something else, external to the perceived system, in order for this perception to be 'realized'.59 Another way to put this is to say that within this framework the universe as a whole ceases to be a meaningful physical system, since there is no external environment which an internal observer can rely on in his description.60

An interesting philosophical question that immediately comes to mind is whether—modulo the last caveat—the wave function can be interpreted realistically in the open system approach as in the GRW theory. Recall that in this approach the wave function does not stand in 1-1 correspondence with an individual system. If one agrees to ignore the environment, that is, if one agrees to trace the environment out of one's description by eliminating its degrees of freedom, then realism with respect to the wave function might still be preserved, but at a price.

59 Recall that in moderate interventionism it is the system that imprints its state on the environment and that the system-environment interaction is what allows the 'emergence' of the classical world.
60 Some, e.g., Page and Wootters (1982); Crane (1993); Smolin (1995), suggest to divide
Although the description of the physical state cannot be interpreted in a naive realistic way (since the wave function does not stand in 1-1 correspondence with an individual system), one can retreat to an observer-dependent interpretation, i.e., one can regard physical properties described by the wave function as relational properties, exactly as velocity is treated in the special theory of relativity.61

I shall set aside the issue of wave function realism as well as other interesting philosophical questions that follow, e.g., whether the world 'as it is' is like the way we experience it, or whether QM urges us to abandon the eternal quest for the neutral, 'Olympian', world-view, or even whether QM is consistent with naive realism. Instead, interested as I am in the relations between chance and time, and given that within the open system approach probabilities are interpreted (at least partially) as a result of ignorance,62 the question I do intend to focus on in what follows is whether the open system approach can be regarded as a candidate in the project of reducing TD to SM. In the following chapters I shall try to convince you that the answer to this question is, again, affirmative but conditional.

Since naive realism, i.e., a 1-1 correspondence between the mathematical construct and the object it describes, is not a necessary condition for inter-theoretic reduction, the issue at stake is not realism, be it naive or other, but rather what counts as

60 (cont.) the universe into a set of subsystems of which one plays the role of an 'inside observer'. Yet these authors leave unresolved the tension between appearance and reality since even within their suggestion it is still impossible to describe the universe as a whole from without. See also Breuer (1995) who draws an analogy between this tension and Gödel's theorem.
satisfactory inter-theoretic reduction. Finally, the metaphysical differences notwithstanding, it is noteworthy that the distinction between the open system approach and the GRW theory is more than mere philosophical academics. While ignoring the system-environment distinction, the GRW theory gives definite predictions which deviate from the predictions of orthodox QM and which can be tested only if complete isolation of the system at hand were possible.63 Advocates of the open system approach claim that this isolation is impossible in principle since once we acknowledge that the observer himself is 'open', physical systems can never be isolated.64 But this argument simply begs the question since it relies on a demarcation which is foreign to the GRW theory.65 Note, moreover, that if one regards physical systems as inevitably open then it is the open system approach whose models, in so far as these models explicitly aim to reproduce the predictions of orthodox QM, are the ones which are not testable. Thus I find the a priori objections to GRW theory rather curious: there is nothing to be proud of when one cannot be proved wrong, since this also means that, in a scientifically relevant way, one cannot be proved right!

61 In one of the first papers on decoherence (Zurek (1982, 1876)) one finds that "Relativity of the properties of quantum systems is the key new concept arising in the context of environmental-induced-superselection rules". This was also the original suggestion of Everett (1957) in his 'relative-state' formalism. See also Rovelli (1996); Saunders (1998); Rovelli and Laudisa (2002).
62 Yet, as emphasized here (section 5.4.3), the ignorance is an ignorance of 'the rest of the world' and not of the exact state of the system. That is, it is a result of 'tracing out' the environment rather than of 'coarse graining' the system.
In the remaining part of the dissertation we shall dwell more on the differences between these two approaches, the 'Olympian view' and the open system approach. However, in order to make clear why some, e.g., Albert (1994; 2001), believe that solving the measurement problem would be tantamount to solving the long-standing debate on probabilities in the foundations of SM, and why some regard the GRW theory as the champion of the project of supplying 'purely mechanical' models for thermodynamics, i.e., models which are free from extra dynamically-unjustified statistical assumptions, we must turn now to the subtleties of inter-theoretic reduction in general and in particular to the case of the relations between TD and SM.

63 Bonci et al. (1995).
64 Zeh (1970; 1973).
65 Moreover, current interference experiments (such as those mentioned in, e.g., Leggett (1995); Haroche et al. (1996)) are not sufficient to refute theories such as GRW since the latter makes reference directly to massive systems. Only if massive macroscopic superpositions were detected would the GRW theory be falsified.

Chapter 6

A Tale of Two Theories

The truth could not be worth much if everybody was a little bit right.
H. Dieter Zeh

It is about time to braid the strands of thought discussed in the first half of the dissertation into a cogent claim. Let us remind ourselves what we have achieved up until now. So far we have made precise the notions of time-asymmetries and indeterminism. First, we distinguished between asymmetries of time and asymmetries in time and between the different notions of irreversibility. Second, we distinguished between epistemic and ontological indeterminism; between ignorance and chance.
Next, we have also examined where and when the claim that the two subject matters are related is well founded, and ended up with two philosophical world-views and with several presuppositions under which such a relation can be established. Finally, we have translated the debate on the origins of probability into precise physical frameworks by mapping the chance-ignorance dichotomy onto the distinction between closed and open dynamical systems and onto the different solutions to the quantum measurement problem. In the remaining part of the dissertation we shall investigate the roles these physical frameworks play within the philosophical enterprise of explaining one of the most discussed asymmetries in time in the literature—the approach to thermodynamic equilibrium.

As our analysis reveals, there are both historical and philosophical motivations to couch this explanation in the context of the inter-theoretic relation between TD and SM. Thus, it is the task of this chapter to describe a modern view of one such relation—inter-theoretic reduction. But first a disclaimer is in order. 'Reductionism' is a term of contention in academic circles. For some it connotes a right-headed approach to any genuinely scientific field that seeks systematics in the phenomena; for others—a wrong-headed approach that is narrow-minded and blind to the richness of the phenomena.1 My goal here is not to defend this approach, and in this sense the discussion that follows is purely descriptive; not prescriptive. The reason is simple: those who find reductionism wanting are likely to be uninterested in the relation between chance and time from the outset. Reductionism is a metaphysical thesis, a claim about explanations and a research program.

1 For example, in a collection of papers on the subject a distinguished physicist (Dyson (1995, 6)) writes:
The metaphysical thesis to which many anti-reductionists object is that the whole is nothing but the sum of its parts. What is also commonly contested is the reductionists' research program, which is a methodological prescription that follows from the claim about explanations. To first approximation the latter boils down to the idea that the whole can (and the methodological prescription that follows is that it should) be explained by giving an account of its parts. Consequently, nowhere has the thesis of reductionism created so much controversy as in the domain of inter-theoretic relations, where the quest for the 'unity of science' and the 'theory of everything' is championed or criticized by physicists and philosophers alike.

We shall start by way of introducing the thirty-odd-years-old modern view of inter-theoretic reduction. Since it emerged mainly as an answer to the criticism raised against the older, classic, account of inter-theoretic reduction, its introduction will also serve as a brief exposition of the latter. Having warmed ourselves we shall immerse in the subtleties of the relations between TD and SM with the intention to clarify once more two key issues: (1) why the standard, Boltzmannian, view has left many reductionists unsatisfied, and (2) what remedies the GRW theory and the open-system approach can offer to rectify Boltzmann's alleged explanatory lacuna.
6.1 Reductionism - The 'New Wave'

6.1.1 On Reduction

In philosophy it is customary to view inter-theoretic reduction in the following way: a reducing theory (Ta) aims to explain the phenomena explained by a reduced theory (Tb) by way of substituting the theoretical entities of Ta for the theoretical entities of Tb and deriving the laws of Tb within the framework of Ta.2 Note that under this account reproducing the predictions of the reduced theory by the reducing theory is not sufficient. To achieve a reduction between theories one needs to show how Tb is obtained from Ta by a limiting process.3 The idea here is that a reductionist account strives for a unity of science. Although different levels of description of reality are legitimate, Nature itself need not adhere to such pluralism, especially if the criteria for 'slicing' it are defined according to human capacities. In order to restore coherence and adequacy to our theories there must exist in principle a way to connect them. Agreed, it might be the case that such an 'Olympian world-view'4 is inaccessible for us humans, yet this reservation does not bear on the cogency of such a world-view and on the attempts to formalize it in science.

1 (cont.) It is a curious paradox that several of the greatest and most creative spirits in science, after achieving important discoveries by following their unfettered imaginations, were in their later years obsessed with reductionist philosophy and as a result became sterile. Hilbert was a prime example of this paradox. Einstein was another.
Attempts to defend the philosophical view of reduction have attracted much criticism.5 First, the classical account of reduction was couched in the syntactic view of theories, which regards theories as axiomatic systems expressed in natural or artificial language.6 Since under this view reduction is an inter-theoretic relation of deduction, the deductive derivation requires that the terms of the reduced theory share meanings with the terms of the reducing theory. One of the strong arguments against the classical view is that the identification between terms across theories is not always possible.7

2 The loci classici for the philosophical account of inter-theoretic reduction in general are Kemeny and Oppenheim (1956) and Nagel (1961, ch. 11).
3 Nagel broadened his notion of 'derivation' to include limiting processes under pressure from Feyerabend (1964), e.g., meaning invariance need not hold under this broadening.
4 Shimony (1985, 418).
5 Suppe (1977). A remarkably concise and lucid account of this debate appears in a fairly long footnote of a recent paper of Rosenberg (2001, fn. 3). See also Bickle (1998a, 3-6).
6 For a synopsis of this 'received view' see Suppe (1977, 6-62); Emch & Liu (2002, 2-11). In a nutshell, the syntactic view emphasizes the importance of a theory's linguistic structure at the expense of other elements.
7 An example of solving the identity problem in the case of theoretical concepts is Nagel's (1961) 'Bridge Laws'. According to Nagel (ibid.) identifying concepts across theories is possible in the case of a homogeneous reduction, where a theory Tb is reduced to a theory Ta if the descriptive vocabulary of the former is a proper subset of the descriptive vocabulary of the latter, and the terms of the former that appear in the latter share a common meaning. However, things get complicated when we move to inhomogeneous reduction, where the descriptive vocabulary of Tb is not a proper subset of the descriptive vocabulary of Ta.
7 (cont.) In this case we must invoke what Nagel calls 'bridge laws' that allow us to transform the inhomogeneous reduction into a deductive explanation by connecting the missing terms, conceptual and observational, between the theories. Nagel's own interpretation of the status of the 'bridge laws' is that they express factual, or empirical, connections between the states of affairs described by the relevant terms.

Second, early on in the discussions of reduction Schaffner (1967) observed that reduced theories are often less accurate and less complete in various ways than reducing theories, and therefore incompatible with them in terms of predictions and explanations. Consequently, the classical view of reduction faced two seemingly insoluble problems: (1) how can deductive derivations yield false conclusions? and (2) while the reduced theory needed to be corrected before its derivation from the reducing theory could be effected, this correction sometimes resulted in an entirely new theory whose derivation from the reducing theory showed nothing about the relation between the original pair.8

Third, two arguments against reduction from the domain of philosophy of mind—multiple realizability9 and anomalous monism10—have cast doubt on the relevance of reductionism as a methodology in science. The former claimed reductionism to be too weak and pointed to some generalizations which the reductionist methodology might fail to capture, suggesting that explanations might take functional rather than deductive form. The latter claimed reductionism to be too strong, questioning the law-like behaviour of some higher-level phenomena. Fourth, as the classical account of reduction relied on the D-N model of explanation, the eclipse of this model weakened the idea that inter-theoretic explanation should take the form of reduction.
The classical account, moreover, was closely tied to the syntactic view of theories, but once philosophers of science began to take seriously the semantic approach to theories,11 which treats theories as families of models, and models as implicit definitions about which the only empirical question is whether they are applicable to phenomena, the role of reduction as deduction and its relevance to inter-theoretic relations became questionable.

Before presenting the remedy for these faults let us mention another kind of inter-theoretic reduction which is more common in physics than in philosophy.12 In the eyes of the physicist, theories are mathematical constructs—formal systems embodied in equations—so the inter-theoretic relations are relations between equations. The less general theory is a particular case of the encompassing one, as some dimensionless parameter, call it δ,13 takes a particular limit value: Ta (reduced theory) → Tb (reducing theory) as δ → 0. Thus, a more general theory is reduced to a less general theory in a specific limit.

8 Feyerabend (1964).
9 For a concise exposition and references see Bickle (1998b).
10 Davidson (1970).
11 The first ideas of this view are propounded in Beth (1961) and Van Fraassen (1970). The claim of the semantic view is not that the syntactic view is wrong. Rather, the idea is that the latter is far too simple. According to the semantic view a theory can be characterized intrinsically by a set of uninterpreted assertions and extrinsically by a set of models that satisfy these assertions. The difference between the two views is exemplified by confronting, e.g., Euclidean geometry with group theory.
12 For a discussion of the two notions see Nickles (1975), who calls the reduction in which I am interested here Reduction2.
For example,15 special relativity reduces to Newtonian mechanics in the limit δ = v/c → 0; general relativity reduces to special relativity in the limit δ = Gm/(c²a) → 0; and quantum statistical mechanics reduces to classical statistical mechanics in the limit δ = ℏ/(a√(2mE)) → 0, in effect the ratio of the de Broglie wavelength to the system's linear dimension (the latter is known as the Ehrenfest theorem). Put this way reduction must involve the study of asymptotics, and as a result the mathematical theory of asymptotics can be regarded as a theory of theory-reduction.16

What is the nature of the limit δ → 0? Note that the limit is always an idealization, because in the actual situation δ > 0. Indeed, this feature captures the problem of relating our physical models to the world. Another feature that comes to mind is that, sometimes, limits of physical theories are highly singular, and it turns out that the singularity of the limit can be associated with emergent phenomena.17 This feature serves to bridge the philosophical and the physical views of reduction. Given that the dimensionless limit δ exists, reduction in the physical sense is always possible in principle, since the theoretical entities which have physical meaning are considered invariant, hence are assumed to be identical across theories that differ only in their domain of applicability. On the other hand, one of the strong arguments against reduction in the philosophical sense is that the identification between theoretical entities across theories is not always possible. The physical view of reduction helps to make this argument precise and also to demonstrate why it is not as serious as one might think. Indeed, singular

13 Scheibe (1993) calls this parameter the 'vehicle' of the reduction.
14 Note the difference between the philosophical and the physical sense of reduction: in the former the less fundamental theory reduces to the more fundamental theory; in the latter, vice versa. Thus the function cos δ 'reduces' (in the physical sense) to 1 as δ → 0.
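The reduction of special relativity to Newtonian mechanics in the limit v/c → 0 can be checked numerically. The following sketch (my illustration, not the dissertation's) compares the relativistic kinetic energy (γ − 1)mc² with its Newtonian counterpart ½mv² as δ = v/c shrinks; the ratio approaches 1 but, as noted above, never equals 1 for any δ > 0:

```python
import math

def kinetic_ratio(delta, m=1.0, c=1.0):
    """Ratio of relativistic to Newtonian kinetic energy for v = delta * c."""
    v = delta * c
    gamma = 1.0 / math.sqrt(1.0 - delta**2)
    relativistic = (gamma - 1.0) * m * c**2
    newtonian = 0.5 * m * v**2
    return relativistic / newtonian

for delta in (0.5, 0.1, 0.01, 0.001):
    print(f"v/c = {delta}: E_rel / E_newt = {kinetic_ratio(delta):.6f}")
# The ratio tends to 1 as delta -> 0, but remains strictly greater than 1
# for delta > 0: the 'reduction' holds only in the idealized limit.
```

This is the uncontroversial, non-singular case; singular limits of the kind Berry discusses would not admit so smooth a numerical approach.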
15 In the following v: speed of a body; c: light speed; G: Newton's gravitational constant; m: mass of a body; a: typical linear dimension of a body; and E: the energy of the system.
16 Batterman (2001).
17 One good example cited in Berry (1994) is fluid mechanics: when the viscosity of a fluid decreases and its velocity increases, the flow becomes unstable, or turbulent, hence its fractal character that defies a smooth solution to the Navier-Stokes equation for smooth flow down a pipe.

limits are rather a source for new physics than a source for philosophical anguish.18 Moreover, the physical view illustrates how the unity of science is preserved, as within this view laws non-arbitrarily co-vary with different domains of applicability in a precise way.

6.1.2 The 'New Wave'

Let us move on to a more modern view of inter-theoretic reduction. By far the most comprehensive account of this view is C. Hooker's.19 Together with the 'new wave' of inter-theoretic reduction it inspired,20 Hooker's view can be regarded as an answer to the first three objections to inter-theoretic reduction, and since it offers a coherent way to formulate inter-theoretic reduction within the semantic view of theories, it also undermines the fourth.

Hooker's reasoning is as follows. Since a reductionist methodology strives for an intelligible world-view by way of explanatory, theoretical, and ontological unification, in order to preserve the ontology one needs to construe the reduction relation as an identity relation. This, as many have observed, is more easily said than done. But Hooker answers the identity-across-theories objection by demonstrating how to relax the demand for identity between theoretical frameworks and to replace it with a relation of analogy.
By constructing an analogue for the reduced theory in the language of the reducing theory Hooker ensures identity 'up to an analogy' and allows one to view the reductionist project as a process in which suggested models in the reducing theory—analogues for the reduced theory—are refined or replaced according to the degree of similarity between the theories. Given Tb and Ta (the reduced and the reducing theories respectively), Hooker's (1981b, 49) definition of the reduction relation R reads:

Within Ta, the reducing theory, we construct an analogue T*b of Tb, the reduced theory, under certain conditions CR such that Ta and CR entail T*b, and argue that the analogy relation A between T*b and Tb warrants claiming a reduction relation between Tb and Ta. In symbols: (Tb R Ta) if ((Ta ∧ CR → T*b) ∧ (T*b A Tb)).

And no matter how complicated, idealized, and necessarily counterfactual CR is, the relation between Ta and T*b remains straightforward deduction. Ta directly explains T*b, and this is its power to indirectly explain Tb.

Hooker's definition solves the problem of identity across theories since the analogue is constructed in the language of the reducing theory, but it also blunts Schaffner's double argument against reduction. Recall that Schaffner worries that the reducing theory—replacing as it does the reduced theory—not only undermines the validity of the latter but also casts doubt on the coherency of the view of reduction as deduction. But according to Hooker, replacement is not the only possibility; the question whether an inter-theoretic reduction relation exists is not a yes-no question. Rather, there exists a continuum of possible inter-theoretic relations varying in strength from pure incommensurability to complete identity; from replacing a theory to retaining it.

18 See the flourishing school of quantum chaos that 'emerged' from the limit ℏ → 0.
19 Hooker (1979; 1981a; 1981b; 1981c).
20 Bickle (1998a).
Thus, just as in the physical sense of reduction theory replacement is achieved "in the limit", so here, in the philosophical sense, replacement is the extreme end of the spectrum of inter-theoretic reduction.21

Hooker's trilogy has inspired J. Bickle who, by marrying Hooker's insight to the structuralist tradition in the semantic view of theories,22 constructs what he calls the 'new wave' of inter-theoretic reduction. In so doing Bickle achieves another victory over the anti-reductionist camp. Recall that the eclipse of the syntactic view of theories motivated many to abandon the methodological prescription of inter-theoretic reduction since the latter was couched in terms of the former. Bickle's account of reductionism—inspired as it is by Hooker's insight—is spelled out entirely in terms of the semantic view of theories. In this framework it becomes possible to rebut the anti-reductionists and to construct the necessary continuum needed for Hooker's relation. For example, Bickle (ibid., 65-74) suggests how to impose a metric on the inter-theoretic spectrum in order to 'measure' the strength of the inter-theoretic relation by focusing on the similarity between Tb and T*b. This similarity depends on the set of constraints CR and the set of analogous models that instantiate T*b and are compatible with (Tb ∧ T*b).23

21 As C. Hooker mentioned to me in private communication, his seminal trilogy on reduction can be retrospectively viewed not only as a rehabilitation of the current status of the philosophical view of inter-theoretic reduction but also as an early attempt to bridge the philosophical and the physical notions of reduction.
22 The structuralist school in the semantic view of theories is widely appreciated in Europe but, unfortunately, almost unknown to the Anglo-American tradition. For a comprehensive account see Balzer et al. (1987).
23 Bickle's discussion, as well as other applications of the semantic view of theories to inter-theoretic reduction such as Scheibe (1993), are carried out under the simplifying

Bickle (ibid., 82-96) borrows from the structuralist school the notion of a 'Blur', which is a precise definition of intra-theoretical approximation developed within this framework in order to account for corrective reductions, and extends it into the inter-theoretical regime.24 Let us examine a relevant case.

In the reduction of thermodynamics to the kinetic theory of gases the two crucial counterfactual assumptions (CR) for the latter to mimic the ideal gas law are (1) the lack of attractive forces among the molecules of the gas which can affect their motion, and (2) that these molecules have zero volume. These assumptions are not quite correct: in real gases there are weak attractive forces between molecules (which increase as the molecules come closer together), and the latter are indeed very small, but still bigger than mathematical points. The first approximation plays a role when the molecules are about to hit the walls of the container, since it suggests a correction to the actual measured pressure resulting from the fact that in real gases each molecule hits the walls of the container with less force than in the ideal case. This approximation also depends on the number of molecules which are close to the colliding molecule, hence on the density of the gas. The second approximation plays a role when one remembers that by compressing a quantity of a real gas one can never decrease the volume of the gas to zero, as required by Boyle's law.
Rather, one needs to take into account the volume of the molecules themselves.25

Now, on the side of thermodynamics these features restrict us to a special class of models (where models are sets of states undergoing processes for which one can assign TD functions such as pressure, volume, and temperature) which will be the close approximations relative to the increases of pressure (owing to the existence of weak attractive forces in real gases) and the decreases in volume (owing to the small but finite volumes of real gas molecules) needed to account for the corrections implied to the ideal gas law by the kinetic theory. On the side of the kinetic theory it is the counterfactual assumptions that yield the relevant blurring. We need to include in the set of blurred models those models in which the attractive force between molecules and the molecules' volume both vanish.

23 (cont.) assumption that scientific theories can accommodate first-order logical notions.
24 A 'Blur' u is a degree of approximation which constitutes a set of ordered pairs of potential models of a theory such that if a pair <x, y> is a member of a blur u then x and y approximate each other at least to the degree given by u. For the real numbers, or any other set displaying a standard metric, where the absolute value of the difference of elements is meaningful, each blur is determined by a particular ε: u_ε = {<a, b> : |a − b| < ε}. However, derived as it is from the notion of a uniform structure which imposes a topology on a set, and is, in turn, independent of any metric, the notion of blur is also metric-independent. See Bickle (ibid., 84).
25 When formulated precisely the two approximations taken together amount to the neglect of van der Waals forces in ideal gases.
And although these models will not obey actual laws governing real aggregates of molecules, for the purpose of relating TD with the kinetic theory they will separate an analogue structure mimicking the ideal gas law of this simple version of TD. The two classes of blurred models constitute now a set of ordered pairs which is no more and no less than the required reduction relation.

And note that in this case, as in Hooker's definition, the relation between the 'adjusted' form of the reducing theory—the blurred models of the kinetic theory (Ta ∧ CR)—and the adjusted version of the reduced theory (Tb)—thermodynamics of gases—is a strict deduction, inasmuch as the constituents of the models of the reducing theory (the colliding molecules) reproduce in their behaviour the laws that govern the constituents of the models of the reduced theory (the system's 'states'). Yet this relation is spelled out not in terms of linguistic or axiomatic structure, but in terms of a mapping from (a set of models of) the reducing theory to (a set of models of) the reduced one.

To summarize, by defining the reduction relation within the framework of the semantic view of theories and by making this definition precise Bickle offers what seems to be the most plausible account of inter-theoretic reduction to date. In what follows we shall adopt this state-of-the-art view of reduction and use it in our inquiry into the differences between the opponents in the debate over the origins of thermodynamic irreversibility and its relation to chance. We shall not, however, walk through Bickle's treatment of the remaining objections to inter-theoretic reduction—multiple realizability and anomalous monism.
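The gas-law case above can be made concrete with a small numerical sketch (my illustration, not Bickle's; the molecular parameters are rough textbook figures for a CO2-like gas). Using the van der Waals equation (P + a·n²/V²)(V − n·b) = nRT, scaling the attraction parameter a and the covolume parameter b toward zero drives the predicted pressure toward the ideal-gas pressure, so that for any blur ε the two models eventually fall within the same ε-blur:

```python
R = 8.314  # gas constant, J/(mol K)

def ideal_pressure(n, V, T):
    """Ideal gas law: P = nRT / V."""
    return n * R * T / V

def vdw_pressure(n, V, T, a, b):
    """van der Waals: P = nRT / (V - n b) - a n^2 / V^2."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# One mole of a CO2-like gas; a, b are rough textbook values.
n, V, T = 1.0, 0.01, 300.0    # mol, m^3, K
a, b = 0.364, 4.27e-5         # Pa m^6/mol^2, m^3/mol

def within_blur(eps, scale):
    """Do the two models fall within an eps-blur when a, b are scaled down?"""
    gap = abs(vdw_pressure(n, V, T, a * scale, b * scale) - ideal_pressure(n, V, T))
    return gap < eps

for scale in (1.0, 0.1, 0.01):
    gap = abs(vdw_pressure(n, V, T, a * scale, b * scale) - ideal_pressure(n, V, T))
    print(f"scale = {scale}: pressure gap = {gap:.1f} Pa, "
          f"within 500 Pa blur: {within_blur(500.0, scale)}")
# The gap shrinks monotonically as the counterfactual limit
# (no attraction, zero molecular volume) is approached.
```

The choice of 500 Pa as a blur is arbitrary; the point is only that, as in Bickle's construction, the strength of the inter-theoretic relation can be 'measured' by how small a blur the two families of models jointly satisfy.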
Rather, and since the structuralist twist of Hooker's insight appears to be the current best available account of inter-theoretic reduction, we shall apply Bickle's idea to our attempt to compare the GRW theory and the open system approach as surrogates for the standard, Boltzmannian, account of thermodynamic phenomena. Preparatory to this let us now remind ourselves what monsters lie beneath the murky waters of the reduction of TD to SM.

26 The interested reader can find more on how the 'new wave' confronts these 'too weak/too strong' arguments in Bickle (1998a, 104-164).

6.2 Inter-theoretic reduction—TD and SM

6.2.1 An exercise in SM

Among the examples that inhabit textbooks in philosophy of science dealing with inter-theoretic reduction the case of thermodynamics and statistical mechanics is everyone's favourite. When one surveys the literature, however, one finds that the relation between TD and SM serves to exemplify not only the success of the reductionist world-view, but also its failure. Some describe the relations between TD and SM as a classic case of reduction between theories. For example, in Ernst Nagel's locus classicus on inter-theoretic reduction one finds that

The incorporation of thermodynamics within mechanics—more exactly, within statistical mechanics and the kinetic theory of matter—is a classic and generally familiar instance of such reduction.27

Nagel is not alone in this belief. For example, in the preface to the Russian edition of their Statistical Physics textbook, Landau and Lifshitz (1980) say:

Statistical physics and thermodynamics together form a unit. All the concepts and quantities of thermodynamics follow most naturally, simply and rigorously from the concepts of statistical physics.

The story, however, is not as simple as these authors would like us to believe.
When one embarks on the project of reducing TD to SM, as the founding fathers of SM did more than a century ago, one finds several obstacles that cast doubt on the success of such a project. First, the macroscopic phenomena described by the non-TRI1 laws of TD are asymmetric in time, while the laws that govern the entities in SM are presumably TRI1.[28] Agreed, this difference in character is not a problem per se, as both sets of laws describe different levels of phenomena, yet it becomes a problem if one includes in the reductionist project the ambition to derive the laws of the reduced theory from the laws of the reducing theory. Second, the theoretical concepts of TD, such as heat, work, temperature and entropy, are so unique to the macroscopic level that it seems almost impossible to connect them with a microscopic equivalent. And third, when finally the connection is achieved, one has to introduce probabilistic assumptions that in turn demand a justification.

[Note 27: Here Nagel (1961, ch. 11) is discussing inhomogeneous reduction, where the theoretical entities of the reduced theory do not form a proper subset of the theoretical entities of the reducing theory.]

[Note 28: The exact point where irreversibility is manifest in thermodynamic processes is still under dispute. See chapters seven and eight.]

These obstacles seem to be the motivation behind a more reserved and cautious attitude towards the reduction of TD to SM, as put by one of the authorities in the field, Larry Sklar:[29]

...SM successfully reduces TD by replacing the structural constraints on the world imposed by the latter theory ... with a structural constraint on probabilities of the initial system characterized at the microlevel. What remains disturbing here is the seeming absence of any place for such constraint from the mechanical world-view.
Sklar succeeds in conveying his cautiousness regarding the attempts to reduce TD to SM when he says that even when we can be assured of the validity of the assumptions necessary to obtain what we want, we may still be unable to obtain from these assumptions all that we would want; and again, in the rare case that we do seem to obtain completely what we want, Sklar has an argument waiting to show that after all we may not truly like what we first wanted.[30] No wonder that there are those who describe the entire project as an exercise in SM (Sado-Maso...).[31] Even more confusing is a remark of another authority in statistical physics, Jos Uffink, from which we can learn that a careful examination of the subject leads to the conclusion that the adequate relation between TD and SM is not what is usually stated in the textbooks:[32]

...[O]ne is tempted to conclude, somewhat perversely, that thermodynamics is not successfully reduced to orthodox statistical mechanics at all, but rather vice versa.

That the reduction of TD to SM is not a desert island in the sea of philosophy of science is an understatement. Indeed, only recently were some of the topics mentioned below explored in a Ph.D. dissertation by J. van Lith (2001), whose research intended 'to investigate whether and how SM is a satisfactory theory for thermal phenomena of systems that are in equilibrium'.[33]

[Note 29: Sklar is renowned for his three-decades-long struggle with the reduction of TD to SM. See, e.g., Sklar (1967; 1978; 1993; 1998; 2000). The quote here is from Sklar (1993, 368).]

[Note 30: For a critique of Sklar's view see Uffink (1996).]

[Note 31: Hellman (1998, 206).]

[Note 32: Uffink (ibid., 386) refers here to concept extension, e.g., the notion of negative temperature which penetrated TD from SM.]
Here I would like to propose that, given the account of inter-theoretic reduction just presented, the question whether TD is reduced to SM has a simple answer, and as such it might not be the most interesting question to ask. My idea is that the 'new wave' view of inter-theoretic reduction allows us to demonstrate that at least one version of SM, namely Boltzmannian SM, does establish a reduction relation between TD and SM. Once we free ourselves from the fruitless dispute about this issue we can devote more time to the more interesting open question the foundations of SM present us with—the problem of probability—which within the project of constructing a mechanical model for TD is further augmented with the demand to supply a dynamical justification for SM probabilities.[34] With the new problem well posed it thus becomes possible to confront the notions of chance and ignorance within the project of supplying a dynamical explanation of thermodynamics, and to demonstrate that even within this framework ignorance plays a role in the construction of mechanical models for thermodynamic phenomena displaying an asymmetry in time.

6.2.2 Chance and Ignorance in SM

Consider the current standard explanation for thermodynamic phenomena. In Boltzmann's approach the mechanical analogue for TD is an individual dynamical system, defined only in equilibrium and restricted to special cases such as dilute ideal gases with infinite degrees of freedom and almost-zero density. This, as we have seen here, is not a problem in itself—approximations and idealizations are necessary to construct the reduction relation. Rather, problems begin when an arbitrary distinction between macroscopic states and microscopic states is introduced in order to impose an a priori measure on the latter, with which one can make predictions with respect to the former.
Next, one faces the embarrassment of the unjustified neglect of those regions of phase space which lead to a violation of the theory's predictions.[35] Finally, deus ex machina constraints that are non-generic according to the theory itself are necessary in order to prevent the reducing theory from making false retrodictions. Moreover, given the original motivation to construct an atomic model for TD, it is only ironic that there are almost no attempts to explain how the atoms in such a model behave: the ergodic theory which is invoked in order to justify the probabilistic assumptions is criticized as a non-starter since it yields unrealistic models,[36] and the two other celebrated models of the standard approach—Ehrenfest's (1912) urn (or 'dog flea') model and Kac's (1959) ring model—are highly abstract; involve no interacting particles; and introduce non-physical 'unmoved movers'.[37] In light of the lack of realistic dynamical models, it is not surprising that all efforts are directed at demonstrating that the initial conditions on which the explanation relies are typical.[38] What is puzzling here is that the fact that the standard explanation is a 'just so' explanation is never considered a drawback, although one would assume that the aim of science is to reduce the number of 'just so' explanations and not to proliferate them.

[Note 33: van Lith (2001, 5).]

[Note 34: See chapter five (section 5.1).]

[Note 35: This problem, known as the measure zero problem, is discussed extensively in Sklar (1993). It boils down to the fact that the 'bad' regions which yield violations of the theory's predictions form a measure-zero subset of phase space, hence are highly improbable, but not impossible; moreover, if no unphysical ergodicity is assumed, the measure that assigns them measure zero is not unique.]
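Abstract as it is, the Ehrenfest urn ('dog flea') model mentioned above is simple enough to simulate in a few lines. The sketch below is my own illustration, not part of the original text: N fleas sit in two urns, and at each step a flea chosen uniformly at random jumps to the other urn. Starting far from equilibrium (all fleas in one urn), the occupation of urn A relaxes towards N/2 and then fluctuates around it, even though the microscopic jumping rule is symmetric.

```python
import random

def ehrenfest(n_fleas=1000, steps=10_000, seed=42):
    """Simulate the Ehrenfest urn ('dog flea') model.

    At each step one flea, chosen uniformly at random, jumps to the
    other urn. Returns the trajectory of urn A's occupation number.
    """
    rng = random.Random(seed)
    in_a = n_fleas          # start far from equilibrium: all fleas in urn A
    history = [in_a]
    for _ in range(steps):
        # the chosen flea sits in urn A with probability in_a / n_fleas
        if rng.random() < in_a / n_fleas:
            in_a -= 1       # it jumps A -> B
        else:
            in_a += 1       # it jumps B -> A
        history.append(in_a)
    return history

traj = ehrenfest()
print(traj[0], traj[-1])    # starts at 1000, ends near 500
```

The relaxation time is of order N·ln N steps and the equilibrium fluctuations are of order √N; the all-in-one-urn state does recur, but only on time scales of order 2^N steps—Boltzmann's 'improbable but not impossible' in miniature.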
This seems to be the motivation behind the claims, e.g., in Sklar (1993; 1998; 2000), that the Boltzmannian account does not fulfil the reductionist ambitions. Acknowledging the difficulties as I do, I beg to differ with Sklar's conclusion. Considering the obstacles just stated, I suggest that the root of the dispute lies in the twofold problem of probability in SM—that is, in the problem of understanding the origins of SM probabilities and of choosing a probability measure in order to calculate these on the basis of the dynamics of the model—and not in the question whether this or that account allows us to announce that a reduction has been achieved. In fact, modulo the probability problem, the Boltzmannian account is as 'reductive' as it can get.

[Note 36: Earman and Redei (1996).]

[Note 37: See also Schulman (1997) and Dorfman (1999).]

[Note 38: Whence the dismissal of the 'measure-zero' problem Sklar is so fond of. See Goldstein (2001).]

[Note 39: According to Emch and Liu (2002, 12-13), state-space models are subsets of states that a physical system occupies. The best example in the classical regime is trajectories on phase space. An equivalent example in the quantum domain is spaces of quantum states whose trajectories are traced out by the systems in accordance with the Schrodinger equation. Structural models are, on the other hand, representations of systems whose structural and causal properties provide explanation for observed phenomena.]

First, why does the Boltzmannian account leave the problem of probability in SM unsolved? The reason is that in order to establish the reduction relation Boltzmann introduces unjustified probabilistic assumptions into the state-space class and into the structural class of models of classical mechanics.[39] In the first case, he imposes a measure on the state-space of the
system by assuming equiprobability of all micro-states which represent an equilibrium macro-state.[40]

[Note 40: Ergodicity is usually invoked to account for the micro-canonical measure, yet ergodic systems relax to equilibrium only on infinite time-scales.]

In the second case he must assume continuous randomization of molecules' collisions.[41] To put things more bluntly, Boltzmann's answer to the question why thermodynamic systems approach equilibrium is that equilibrium is a vastly more probable state than non-equilibrium. However, if our project is to dynamically underpin thermodynamics, then this answer is flatly incomplete unless the notion of probability introduced, the choice of the probability measure, and the probabilistic assumptions are all justified on the basis of the underlying dynamics![42]

Second, why is the Boltzmannian account, modulo the problem of probability, 'as reductive as it can get'? The answer resides in the simple difference between validity and soundness. We know that in the models introduced by Boltzmann and others, with which one can construct the reduction relation between TD and SM, there are many idealizations and approximations. The ergodic theory yields unrealistic models, and Lanford's theorem (which is a proof of the peaceful co-existence of reversible micro-dynamics and irreversible macro-dynamics) holds only in a certain limit and for ridiculously short times.[43] Thus one cannot hide behind the ergodic theory as an answer to the problem of probability in SM, since there is little reason to believe the measure it yields applies to physical dynamical systems such as we usually observe, say, in the lab; and one cannot cite Lanford's theorem as a proof of the relaxation into the Maxwell-Boltzmann probability distribution, since the proof allows such relaxation only for an extremely dilute gas and relies on an assumption which is falsified by the character of the dynamics for time scales
longer than the time between collisions.

[Note 41: To derive the Maxwell-Boltzmann distribution it is not enough to assume that the particles of the gas are "weakly interacting" or not interacting at all, for it is possible that because of past interaction the particles are still sufficiently correlated at present. We have to assume in addition that there is some mechanism that continuously causes the particles to forget their past.]

[Note 42: It is important not to interpret these claims as an attempt to underestimate the success of current thermostatics. On the contrary, the puzzle is that this success is achieved with almost no foundational basis, and the idea here is that establishing a dynamical justification may serve to fill this foundational gap. See Hellman (1998, 209-213).]

[Note 43: The degree of validity of the Boltzmann equation and its relation to the kinetic theory of gases is verified by a limiting tool (the Boltzmann-Grad limit) where the ratio of the mean free path to macroscopic dimensions is held fixed while the gas becomes more and more rarefied. The validity of the equation increases as the density of the gas decreases; the limiting length is the diameter of the molecule and the limiting time is the mean duration of a collision. See Grad (1958).]

But the claim that these models are unrealistic is not an admissible criticism against the Boltzmannian narrative in the context of the 'new wave' view of inter-theoretic reduction, since the sufficient condition for a reduction relation is the existence of a non-empty set of ordered pairs of blurred models of the reducing and the reduced theories, and no matter how unrealistic the counterfactuals Cn are, in the two cases above a reduction relation can be established.
The crucial points are that (A) if physical systems were ergodic, then they would provide a good mechanical model for thermodynamic phenomena; and that (B) if the molecular chaos hypothesis held for molecules that constitute a rarefied gas in the Boltzmann-Grad limit and obey Hamiltonian equations of motion, then for almost all initial conditions the set of solutions to the equations of motion of these molecules would converge to the set of solutions to the Boltzmann equation—and this is what is needed for establishing a reduction relation. The sufficient condition for establishing the reduction relation is that (A) and (B) be valid, i.e., that the class of models (which Boltzmann called 'Orthodes') allow one to recover the thermodynamic relations for certain macroscopic variables, and that the rare case of Lanford's proof allow one to recover thermodynamic phenomena from the underlying dynamics. The sufficient condition for justifying the use of the probability measure in individual concrete experiments, on the other hand, is that (A) and (B) be sound. Since soundness is usually harder to establish than validity, the problem of probability is much harder to solve. But even without a solution to the problem of probability, and whether you like it or not, according to our best available account of inter-theoretic reduction so far, the Boltzmannian approach satisfies the elementary condition of the existence of a reduction relation: no matter how counterfactual the constraints in (A) and (B) are, the two cases demonstrate the existence of a non-empty set of pairs of models of the reducing and the reduced theory. There is, of course, another problem with the Boltzmannian story, which some, e.g., Price (1996; 2002) and Goldstein (2001), regard as the only problem in the foundations of SM: Boltzmann answers the question why systems approach equilibrium with a typicality claim.
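The typicality claim—that equilibrium is 'a vastly more probable state than non-equilibrium'—can be illustrated with elementary combinatorics, before any dynamical question arises. The toy model below is my own illustration, not one from the text: a 'gas' of N two-state particles, where the macrostate 'k particles on the left half of the box' is realized by C(N, k) microstates, all equiprobable under Boltzmann's posit. Almost all of the 2^N microstates then sit within a few percent of the uniform macrostate k = N/2.

```python
from math import comb

N = 1000                # number of two-state particles
total = 2 ** N          # total number of microstates (equiprobable by posit)

# fraction of microstates whose macrostate lies within 5% of N/2
window = range(int(0.45 * N), int(0.55 * N) + 1)
near_equilibrium = sum(comb(N, k) for k in window) / total

print(f"{near_equilibrium:.4f}")   # ≈ 0.9986 — almost all microstates
```

The counting alone already yields the 'vastly more probable' verdict; what it does not yield—this is the point pressed above—is any reason why the dynamics should respect the measure under which the counting is done.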
But if equilibrium states are the most probable states, then how come we, or the observed universe, are not occupying one? Boltzmann's first solution is to suggest that we inhabit a cosmic fluctuation from equilibrium, but then the retrodictions of the theory stand in flat contradiction to our memories. The rejoinder is that one must postulate non-generic initial conditions of low entropy (Albert's (2001) 'past hypothesis'). Now, this requirement of non-generic low-entropy initial conditions of the universe is disturbing inasmuch as the only motivation for introducing it comes from the need to correct wrong retrodictions of the theory. We can create low-entropy states in the lab, but who created the low-entropy state of the universe? Whether this is indeed an interesting problem is arguable, but I hope that the foregoing demonstrates how viewing it as the only problem in the foundations of SM is a misrepresentation of the subject matter, especially when Boltzmann himself (1895, 447) took his remarks on the application of SM to the universe less seriously than Price does. Be that as it may, with Boltzmann's approach behind us we turn to the following question: how can the introduction of chance into the dynamics solve the problem of probabilities in SM? A general exposition was given in the previous chapter and more details are found in the final part of the thesis. Here I shall allow myself the following remarks.

To begin with we should note that although the idea of solving the problem of probability in SM with QM is almost a decade old, there still exists no specific model that allows one to establish the existence of such a solution. What exists is a dynamical hypothesis which states that if one writes down the quantum mechanical kinematics and GRW dynamics for a certain thermodynamical system, say, a gas in a box, then for all initial conditions and almost immediately this system will concur in its behaviour with the thermodynamical predictions. In other words, the hypothesis is that the dynamical chances of the GRW approach reproduce the measure imposed by Boltzmann which the ergodic theory aims to justify. If this dynamical hypothesis were proved, one could release the foundations of SM from the embarrassment of endowing our ignorance with explanatory power, by justifying the choice of the probability measure. But this is not the only philosophical dividend one gains from injecting chance into the explanatory narrative. First, as described in chapter five (section 5.3), the introduction of chance would make the macro-micro distinction a natural consequence of the dynamics. Next, it would free the models from their dependence on specific initial conditions: thermodynamic evolution is highly probable irrespective of the initial state of the system.[44] Finally, by introducing chance one could now implement the deus ex machina constraint (the 'past hypothesis') not to correct a mistake but 'to fill in the blank space'—recall that the GRW theory is non-TRI2, hence says nothing about the past. And yet there exists a way in which the standard explanation can be remedied without invoking chance.

[Note 44: It is true that what allows this is the conjecture that the number of 'good' microstates overwhelms the 'bad' ones; but contrary to the standard explanation, where one assumes with no justification that the number of microstates that instantiate equilibrium outnumbers the ones that instantiate non-equilibrium, this is now an empirical hypothesis rather than a derivative of an a priori postulate.]
Agreed, this path involves introducing ignorance into the dynamics, yet for reasons expressed in chapter three (section 3.5) this ignorance had better not be the kind of ignorance usually invoked in the foundations of SM. More specifically, it had better not be ignorance of the exact state of a system à la Boltzmann. We have already argued that by answering the first part of the question of probabilities in SM (which concerns the origins of SM probabilities) with Boltzmannian ignorance, one leaves the second part of this question (which concerns the choice of the probability distribution with which one calculates the SM probabilities) unsolved, since the choice of the probability distribution and the probabilistic assumptions are unjustified. This is the truth behind the slogan "ignorance does not do anything".[45] In previous chapters, however, we have suggested another kind of ignorance, which results from 'tracing out' environmental noise, and we dubbed the approach in which this ignorance is manifest the open system approach. I suggest that it is precisely the difference between the notion of ignorance invoked by the open system approach and the one invoked by Boltzmann which allows the former to solve the problem of probability in SM. In the standard explanation SM probabilities are traced back to the thermodynamic limit and the law of large numbers; thus they are understood as a result of ignorance of the exact micro-state the system is actually in. But if our project is to give a dynamical justification to SM probabilities, then this kind of ignorance leaves the problem of probability in SM unsolved. If SM is to be regarded as more than mere combinatorics, then there must be a way to introduce ignorance into the mechanical model other than as ignorance of the exact state of the system, because in the classical domain in which the Boltzmannian story is told the exact state of the system can be verified in principle as accurately as one wishes.
Thus, if ignorance is to be introduced as the origin of SM probability, it should be introduced as the neglect of 'noise' stemming from the interaction of the system with its environment. But because one can always enlarge one's system to include the environment and hence eliminate the noise in one's description, it is more plausible to regard the noise as a natural consequence of 'tracing out' the environment from the description. After all, the only true system we have is the universe as a whole, and so what one regards as 'system' and as 'environment' is completely conventional. Once the environment is traced out, however, the open system approach yields, formally, the same net result as the GRW theory,[46] thus it solves the probability problem in SM to the extent that the GRW theory does. But net results and calculations are not enough. If physicists usually solve well-defined problems, then philosophers usually find problems in what were previously thought well-settled solutions.

[Note 45: See Albert (1994, 670, emphasis in original): 'Nothing, surely, about what anybody may or may not have known ... can have played any role in bringing it [approach to equilibrium - A.H.] about (that is: in causing it to happen ...)']
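'Tracing out' the environment has a precise formal meaning: one sums the total density matrix over the environmental degrees of freedom, and the resulting reduced state of the system is in general mixed even though the global state is pure. Here is a minimal sketch of the partial trace (my own toy example, not one from the text): one qubit of 'system' entangled with one qubit of 'environment', in plain Python.

```python
import math

# Pure entangled state (|00> + |11>)/sqrt(2) of system (x) environment,
# written in the basis order |00>, |01>, |10>, |11>
amp = 1 / math.sqrt(2)
psi = [amp, 0.0, 0.0, amp]

# Global (pure) density matrix rho = |psi><psi|
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

# Partial trace over the environment (the second qubit):
# rho_sys[i][j] = sum_k rho[2*i + k][2*j + k]
rho_sys = [[sum(rho[2 * i + k][2 * j + k] for k in range(2))
            for j in range(2)] for i in range(2)]

print(rho_sys)   # ≈ [[0.5, 0], [0, 0.5]] — the maximally mixed state
```

The off-diagonal elements of the reduced state vanish: relative to the system alone, the coherence of the superposition is gone. This is the formal sense in which neglecting the environment generates 'noise' and probabilities—and since one could always re-include the environment, the resulting mixture reflects the conventional system/environment split emphasized above.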
The conceptual difficulties in the open system approach were discussed in the previous chapter, but even if we disregard obstacles such as (1) the question about what is captured by the theory—reality or appearance—and (2) the difficulty in accounting for an individual system, in the wider context of reproducing thermodynamic phenomena from the underlying mechanics the open system approach still depends—as does the Boltzmannian account—on specific initial conditions, albeit dynamical rather than thermodynamical ones.[47] This fact, of course, comes as no surprise since, as discussed in chapter four (section 4.1.1), the dynamics of no-collapse theories is TRI2, whence the necessity of including initial conditions in any explanation of irreversibility, thermodynamic or other, within this framework.[48] Since by invoking non-orthodox QM as an alternative in the foundations of SM one regards the Boltzmannian approach as a starting point and the problem of probability it leaves behind as a motivation, this section suggests that the dynamical solution to the problem of probability is also the point where SM itself becomes obsolete: one can ignore the need for such a dynamical solution and leave the problem unsolved, but in so doing one admits, as Boltzmann does, that SM is here to remain:

[Note 46: See chapter seven and Appendix C.]

[Note 47: Recall that the basic requirements for decoherence to 'work' are that the system and the environment be separable, i.e., that they can be represented as a tensor product before the interaction, and that the interaction Hamiltonian commute with the system's observables in the position basis.]

[Note 48: It is indeed interesting to investigate whether the two types of initial conditions—the dynamical and the thermodynamical—are related.]

If indeed probability is needed to pick out the relevant classes of states, trajectories, etc., and/or to establish compatibility of
macroscopic irreversibility with microdynamics, then we would clearly have irreducibility of T[D] to M[echanics] + anything not containing probability concepts. And this would leave SM as a kind of middle-level theory, perhaps indispensable as a means of linking entropy and other thermodynamical concepts to micromechanics but perhaps also ultimately anthropocentric in character.[49]

[Note 48, continued: For example, it has been shown (Lindblad, 1996) that the lack of correlation between the system and the environment is a necessary condition for the desired irreversibility in the reduced dynamics of the subsystem. For further exploration of this subject see Hemmo and Shenker (2003).]

On the other hand, one can solve the problem of probability in SM by unifying SM probabilities with QM probabilities, and in so doing one replaces SM and reduces TD to QM, by turning it into a by-product of (quantum) mechanics. But while the GRW theory leaves unchanged the character of TD and SM, the open system approach modifies the standard view of SM (and consequently, of TD) in the spirit of Blatt (1959, 754):

Statistical mechanics is not the mechanics of large, complicated systems; rather it is the mechanics of limited, not completely isolated systems.

In concluding this section let me make the following remark. The analysis presented here seems to go against the motto of this chapter, which says that the truth could not be worth much if everyone was a little bit right. The scientifically inclined reader might even be baffled: scientific theories are about results of experiments, and not about our conception of the world—whether it is deterministic or not. Instead of going around in circles and discussing what the different approaches mean and how they all fit into this or that category (in our case: reductionism), give us an experiment with which we can test them; select one of them; and demonstrate its truth.
And if such an experiment does not exist, then so much the worse for your thesis. In defence I can say that it is indeed true that currently there are no experiments or tests with which one could identify the 'correct' theory. Yet this does not render the discussion useless. Under the same reasoning the transformation between Ptolemaic and Copernican cosmology would have been incomprehensible, for what was at issue there was a novel perception of how nature is, and this could not be immediately 'proved' by a new 'prediction'. As Holland (1993, 11) puts it, '[t]o find a test, ideas must be nurtured. In the mean time they should be assessed according to different criteria, such as their explanatory power.' Finally we arrive at the cogent claim I promised to make some twenty pages back.

[Note 49: Hellman (1998, 212).]

6.3 Reductionism—a Holy Grail or a Misguided Quest?

Many textbooks on statistical mechanics dedicate their opening chapter to a short exposition of the foundations of SM. This exposition is generally restricted to classical dynamical systems and is usually followed by a disclaimer in which the author mentions that while quantum statistical mechanics is to be treated towards the end of the textbook, as far as the foundations of the subject are concerned, there is no loss of generality in restricting the discussion to the classical regime. But as is well known, our world is not classical but quantum, and classical mechanics is false. It might be the case that the authors of the textbooks presuppose that there exists a domain of observed phenomena in which classical statistical mechanics applies but in which quantum mechanical effects make no appreciable contribution to the underlying physics; a domain which lies in the intersection of the classical and the quantum and in which statements like 'ordinary macroscopic systems have some unknown state approximately localized in position and momentum' are meaningful.
But what if orthodox non-relativistic QM forbids the existence of such a domain? That is, what if according to orthodox non-relativistic QM statements of this kind are not approximately true but universally false? Note that this is not just a foundational dilemma. Non-relativistic QM is, of course, not the most fundamental theory; it is only the tractable limiting case of quantum field theory (QFT), which itself is expected someday to be replaced by quantum gravity. Yet since it is practically impossible to work with computationally intractable theories, or with theories that we do not yet know, we usually look for domains of the more fundamental theories in which they approximate the more tractable and less fundamental theories. This is the whole point behind the reduction vehicle—the dimensionless parameter δ—introduced in the beginning of this chapter (section 6.1). However, if such domains do not exist, then physicists working in classical SM must be working with a theory which is known to be false,[50] and philosophers who do foundational work in physics must be stumbling on foundational problems in classical SM which are merely an artefact of classical mechanics. Anti-reductionists, on the other hand, are quite content with the non-existence of domains allowing inter-theoretic reduction.

[Note 50: Under no account should this claim be interpreted as an underestimation of the remarkable achievements in the mathematical physics of classical dynamical systems in the last century. The scepticism here is only directed at the relevance of the beautiful mathematical results to the foundations of SM. One may be reminded of Einstein's remark to Cartan (cited in Emch and Liu, 2002) regarding the cosmological constant: '...[M]y feeling for this theory is like that of a starved ape who, after a long search, has found an amazing coconut, but cannot open it; so he doesn't even know whether there is anything inside.']
From their point of view it is yet another vindication of the thesis that science and reality are just 'a patchwork of laws'.[51] Consider the following case study, which is a mirror image of the aforementioned claim. Classical non-integrable dynamical systems are the natural habitat of deterministic chaos.[52] But according to almost all the criteria for chaos in classical mechanics, in finite conservative quantum systems there exists no chaos.[53] The significant difficulty in reproducing one of the most characteristic classical phenomena in the quantum regime is interpreted by many anti-reductionists as demonstrating the explanatory indispensability of classical mechanics. Take the solar system as an example. For almost a century physicists have speculated that its motion is chaotic.[54] What does it mean? Well, we can construct chaotic classical models to describe its motion. The crux is that quantum models of the solar system cannot yield the mixing behaviour which is characteristic of their classical kin; nor can they reproduce their presumably adequate predictions.[55] Here we have a feature of the world which not only is better explained by a less fundamental and less accurate theory but is also absent from the models of the more fundamental one.

[Note 51: Cartwright (1995, 292); Galison (1997, 781-803).]

[Note 52: For finite conservative systems the two aspects of this phenomenon—exponential divergence of nearby trajectories and long-time mixing behaviour—are co-extensive. See Belot and Earman (1997) and Appendix B.]

[Note 53: See Casati and Chirikov (1995) and Belot and Earman (ibid.) for a survey of this much disputed issue.]

[Note 54: Wisdom (1987); Wisdom and Sussman (1992).]

[Note 55: Belot (1998, 460-461).]

On behalf of the reductionists let me make three comments on this example. The first is a general observation; the second regards QM; and the third—SM. First, there is a substantial difference between the metaphysical and the methodological claims advanced by reductionism, and, insofar as we humans are deprived of the 'Olympian world-view', the former does not entail the latter. The metaphysical claim is arguably undisputable. Our world is one, inasmuch as nature doesn't really care where the humanly delineated boundary between the quantum and the classical lies. But the phenomena in this unique world can be described at many different levels and, more important, can be explained by many different theories. These two observations do not contradict each other, hence nothing can be argued for or against reductionism from the mere fact that some theories present a better tool for understanding and explaining nature. Even if one 'derived' classical mechanics from non-relativistic orthodox QM and explained the chaotic character of the solar system within the latter, this explanation might still be far less telling than the one offered in the classical regime. As Putnam (1975, 295-296; 1994, 429-430) argues, explanations are not necessarily transitive, i.e., an explanation of an explanation is not necessarily an explanation, since sometimes it is irrelevant to the level of description in which one is working.[56] Unfortunately, the claim that reductionist metaphysics necessarily discredits or eliminates reduced theories appears in many anti-reductionists' writings.[57] This claim is groundless. Nothing is eliminated or discredited except, maybe, the reputation of the anti-reductionists who bark up the wrong tree. When vitalism was banished from the scientific community and the term 'life' was 'explained away' by molecular biology, it still remained, if only to delineate the domain to which biology applies. No one would dream of doing biology on a rock, but every biologist would agree that living organisms are constituted of nothing more and nothing less than the same stuff that constitutes rocks, and can, in principle, be described as such.
Similarly, if chaotic phenomena are better explained by exponential divergence of trajectories in phase space and by mixing behaviour, then chaos is there to remain, if only to circumscribe the domain to which classical mechanics applies.

Having said that, I turn to the claim that orthodox non-relativistic QM cannot accommodate an important feature of the world and of our experience. One need not go as far as the debate on quantum chaos to make this point. That our experience of the classical world as a whole is inconsistent with the formalism of orthodox non-relativistic QM is already manifest in the quantum measurement problem (chapter five, section 5.2). Yet this only emphasizes the urgent need to revise orthodox QM.58 In so doing, one automatically (although with some complicated mathematical machinery) accommodates quantum chaos. For example, Zurek (1998) shows that taking decoherence caused by stellar dust into account allows one to produce quantum models of celestial motion whose dynamics exactly mimics that of the classical models.59 Vitali and Grigolini (1998) demonstrate that the same effect can be achieved via GRW collapse.

56 Putnam gives a simple example: when one wants to explain why a triangular peg passes through a triangular hole in a box one can refer to the micro-constituents of the peg and the box, etc., but far more telling would be simply to rely on geometry, notwithstanding that the latter is a by-product of the former.
57 See, e.g., the vast literature against the neuro-computational view of the mind.
58 It is hard to see what the empirical content of QM is without such a solution, unless one renounces the applicability of QM to the measuring apparatus.
But now, if modifying orthodox QM—either with a unified dynamics à la GRW, or with a complicated machinery which yields an appearance of a classical world via decoherence—allows us to accommodate the classical world, or at least an important feature of it, then I see no reason to resist the introduction of quantum mechanical considerations into the classically treated foundations of SM. And I see no reason not to allow at least certain quantum mechanical phenomena to do foundational work in SM. And this is what I have tried to argue all along the last two chapters. Setting aside the various obstacles such a move faces—they will be described in detail in what follows—the moral of this part of the dissertation is that the puzzle of the thermodynamic arrow in time and the twofold debate on the choice and meaning of probabilities in SM are both closely related to the problem of interpreting QM, i.e., to the measurement problem. We may solve one by fiat and thus solve the others, but it seems unwise to try solving them independently.

59 Recall from chapter five (section 5.4) that decoherence must be supplemented with no-collapse interpretations in order to make sense of its claims.

Part III
Constructing the Principles

Chapter 7
For a Fistful of Entropy

[... N]o one knows what entropy really is, so in a debate you will always have the advantage.
J. von Neumann

Among the thermodynamical terms that demand a statistical mechanical conceptualization, the case of entropy is an illuminating one. Originally invented by Clausius as a state function of an individual system in equilibrium which measures its energy degradedness, entropy remains one of the most elusive theoretical concepts in modern physics. The problem with TD entropy, as defined by Clausius and presented in the first section of this chapter, is not that it lacks a statistical mechanical analogue.
Given that entropy is one of the most abused thermodynamical terms outside TD, it is natural that the situation in SM is quite the contrary: SM seems to offer multiple realizations of entropy, and the problem transforms into a case of a coherent choice between possibilities.

TD entropy, which is defined only in equilibrium, has two counterparts in SM: Boltzmann's and Gibbs' entropies. In classical SM the two definitions of entropy yield similar, yet not identical, results in certain circumstances,1 but rest on different conceptual foundations: the first applies to individual systems; the second to ensembles of systems. Notwithstanding instrumental arguments for preferring the latter, the aim of the second section of this chapter is to establish the conceptual superiority of the former in the classical realm. In the third section we shall present the quantum mechanical entropy, the von Neumann entropy, and discuss some technical details necessary for evaluating the plausibility of the decoherence and the GRW approaches in the foundations of SM. Remarkably, Gibbs' old neglected fine-grained entropy becomes an attractive candidate in this regime. Finally, in the fourth section we explore how the different approaches divide the above inventory of entropies between them.

1 E.g., for an ideal gas in equilibrium. See Jaynes (1965).

7.1 Holy Entropy, It's Boiling!

Historically, heat was conceived as a weightless, elastic and fluid substance, or caloric, but at the end of the 18th century, after being the subject of much respectable scientific work, it was pronounced non-existent, and the caloric theory of heat was put on the shelf.2 The concept of energy was introduced in 1847 by Helmholtz, who cited the work of Joule and used the term Kraft to denote the causal power of nature to act upon matter.
Helmholtz's contemporaries and successors believed that energy was present in bodies in two main forms only: (i) as energy of motion, or kinetic energy, equal to one-half the body's mass times its squared velocity, and (ii) as energy of position, or potential energy, equal to the work that must be done against ambient forces to carry the body from a position of (conventionally) zero potential energy to its present place. It was Joule's contribution that led to the recognition of the principle of conservation of energy. In this context heat was then regarded as kinetic energy.3

Already within the caloric theory of heat it was known, due to Sadi Carnot, that just as water yields mechanical work as it falls from a higher to a lower level, so does heat, then regarded as a substance, as it falls from a higher to a lower temperature. Carnot showed that the efficiency of a heat engine, i.e., the amount of work it can obtain from a unit of heat as this falls to a lower temperature, has an upper bound that depends only on the temperatures between which the engine operates. The two principles, the first and the second laws of TD, were the building blocks of the new science of heat, founded by Clausius and Kelvin in the midst of the 19th century. Kelvin and Clausius gave two different versions of the second principle, embodying the gist of Carnot's idea without the water metaphor:4

A transformation whose only final result is to transform into work heat extracted from a source which is at the same temperature throughout is impossible (Kelvin).

A transformation whose only final result is to transfer heat from a body at a given temperature to a body at a higher temperature is impossible (Clausius).

Building on Carnot's work, Kelvin developed the absolute scale of temperature and, more relevant to us, Clausius the concept of entropy.

2 For a detailed report see Fox (1971).
3 Maxwell (1883).
4 The citations are taken from Fermi (1936, 30).
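Carnot's upper bound can be stated compactly in modern notation (the notation is mine, not in the original text): for an engine operating between a hot reservoir at absolute temperature T_1 and a cold one at T_2 < T_1,

```latex
\eta \;=\; \frac{W}{Q_1} \;\leq\; \eta_{\mathrm{rev}} \;=\; 1 - \frac{T_2}{T_1},
```

with equality only for a reversible (Carnot) cycle. The bound indeed depends on the two temperatures alone, not on the working substance.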
Clausius defined entropy using the ratio between the amount of heat, Q, and the temperature, T, in a cyclic Carnot engine E. Multiplied by the temperature difference in such an engine this ratio yields the amount of work that the engine can produce. The limitation of transforming heat into work is captured by the second law of TD, and in the general continuous case of exchanging infinitesimal quantities of heat we get:

\[ \oint \frac{dQ}{T} \leq 0, \tag{7.1} \]

the integral being taken over a complete cycle of E. If the cycle is quasi-static we get:5

\[ \oint \frac{dQ}{T} = 0, \tag{7.2} \]

and because the integral is independent of the course of the process, then given a conventional reference state O, we can now define a property of the thermal state A by:

\[ S(A) = \int_O^A \frac{dQ}{T}. \tag{7.3} \]

Clausius called the quantity S(A) the entropy of A. Evidently we get:

\[ S(B) - S(A) = \int_O^B \frac{dQ}{T} - \int_O^A \frac{dQ}{T} = \int_A^B \frac{dQ}{T}. \tag{7.4} \]

The definition of entropy as a property of state in eqn. (7.3) presupposes that the integral on the right-hand side depends only on O and A. The integral must therefore be taken over a continuous succession of quasi-static heat exchanges. But once entropy was defined as a property of thermodynamic systems, one may well compare the entropy difference on the right-hand side in eqn. (7.4) with an integral taken over a continuum of arbitrary heat exchanges. Consider a cycle formed by an arbitrary process from A to B combined with a reversible one from B to A. We then get by eqn. (7.1):

\[ \int_A^B \frac{dQ}{T} + \int_B^A \frac{dQ}{T} \leq 0. \]

Hence, by eqn. (7.4),

\[ \int_A^B \frac{dQ}{T} - \bigl[ S(B) - S(A) \bigr] \leq 0, \]

and in the general case:

\[ S(B) - S(A) \geq \int_A^B \frac{dQ}{T}. \tag{7.5} \]

If the process under consideration occurs in an adiabatically isolated system, dQ = 0;6 therefore, the entropy of the final state B is always equal to or greater than that of the initial state A. Thus, in a closed thermodynamic system the entropy can never decrease.7 Three remarks are in order.

5 Quasi-static means a slow and smooth transformation from one equilibrium state to another.
First, note that Clausius—introducing as he did entropy as a primary concept—relied on the intuitive premise of the impossibility of a perpetuum mobile of the second kind, which states that heat cannot, of itself, pass from a colder to a hotter body. This has led many to regard irreversible phenomena as essential to the proof of the existence of entropy. Thus Sklar (1993, 21) writes:

The crucial fact needed to justify the introduction of such definite entropy value is the irreversibility of physical processes. It is the fact that heat engines cannot not only not [sic] generate mechanical work without the consumption of heat, but that they cannot be run in such a way as to produce work without degrading the quality of heat in the world, that is crucial to the proof of the existence of entropy.8

But clearly the existence of the definite state function S, rather than being a consequence of the irreversible character of spontaneous heat processes, is a result of nothing more than the mere fact that the integral

\[ \int_A^B \frac{dQ}{T} \]

depends only on the extreme states of the transformation and not on the path of the transformation itself.9 This point is important since many textbooks on SM misleadingly state that the essence of the second law of thermodynamics is that it 'drives' systems to their equilibrium states and that entropy increases monotonically during this approach to equilibrium, where no such statements can be found in Clausius' work. Sklar should have been more cautious to spell out exactly what he means when he says a sentence later that 'the fundamental fact of irreversibility is summarized in the second law'. The second law simply states that when a process is irreversible, that is, in Clausius' terms, non-quasi-static,10 then the entropy difference between its initial and final states is positive. It is true that a simple reasoning leads to the conclusion that when a system reaches its maximum entropy state it stays there forever unless external intervention drives it away from this state, but the fact that thermal systems spontaneously evolve towards equilibrium is not encompassed in the second law unless further conditions are satisfied.11 In sum, the connection between entropy and the second law, if there is one, is quite simple and unfortunately different from what Sklar wants us to believe: the essential content of the second law of thermodynamics is the existence of an entropy state function (and of absolute temperature) for every equilibrium state.12

Next, the classical understanding of Carnot cycles on which Clausius relies involves quasi-static processes, and these are by definition traceable by trajectories (or transformations) entirely contained in the space of possible states of the system. But since all that is needed to construct the entropy function is that the initial and final states belong to this space, the value of the entropy function depends only on the state and not on its history.

6 An adiabatically isolated system will have constant heat, apart from constant energy and constant mass.
7 Note that this result applies only to isolated systems. Thus, it is possible with the aid of an external system to decrease the entropy of a system. The entropy of both systems taken together, however, cannot decrease.
8 By now the attentive reader begins to appreciate the sloppy editorial mistakes Sklar's monumental book suffers from. This is surprising since a good editor could have made the book much shorter yet no less monumental...
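A standard textbook illustration of the strict inequality in (7.5), added here for concreteness: in the free adiabatic expansion of n moles of an ideal gas from volume V_1 to V_2 > V_1, no heat is exchanged along the actual process, yet

```latex
S(B) - S(A) \;=\; nR\,\ln\frac{V_2}{V_1} \;>\; 0 \;=\; \int_A^B \frac{dQ}{T},
```

the entropy difference being computed along a reversible isothermal path connecting the same two equilibrium states. This is precisely the point that the value of S depends only on the states, not on the history.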
From this it is clear that TD entropy can be defined only in equilibrium, and that rather than a primary concept, entropy should best be regarded as derivative.13

Finally, Clausius' definition of entropy requires an arbitrary choice of a standard state. It can easily be shown that the difference between the entropies of a state A obtained with two different standard states is a constant, and since we are dealing with entropy differences this indeterminacy should not trouble us. However, it was only the third law of TD which completed the entropy definition and enabled one to determine this constant.

So far we have described TD entropy as a state function of an individual system in equilibrium which never decreases in thermodynamical transformations. But as a function entropy has many more properties, of which three—additivity, extensivity, and concavity—are important to the discussion that follows.

1. Additivity. Assuming that the energy of a system is the sum of the energies of all its parts, and that the work performed by the system is equal to the sum of the amounts of work performed by all the parts, the entropy of a composed system is equal to the sum of the entropies of all its parts.14

9 Fermi (1936, 49-50). Note, however, that dQ itself does depend on the path and hence cannot be interpreted as the differential of a putative state function Q, which actually does not exist.
10 Uffink (2001, 318-319).
11 For more on the truth and myth behind the second law see Uffink (2001) and Uffink and Brown (2002). These issues are discussed below in chapter eight.
12 This result is also known as the Heat Theorem. See also Emch and Liu (2002, 78).
13 A way to formulate TD in these terms was pioneered by Carathéodory in 1909 and was later consolidated by Giles (1964) and Lieb and Yngvason (1999).
In the axiomatic framework of Lieb and Yngvason (1999) the requirement of sum functions is subsumed under the axioms, i.e., under the definitions of adiabatic processes and states in state space. It is then possible to write

\[ S((X, Y)) = S(X) + S(Y) \tag{7.6} \]

for every state of any composed system.15

2. Extensivity. An extensive variable scales with the size of a macroscopic system. Thus, entropy, energy and volume are extensive (but temperature is intensive) and we can write, for each t > 0 and for each state X and its scaled copy tX:16

\[ S(tX) = tS(X). \tag{7.7} \]

3. Concavity. Assuming that the state space of a thermodynamic system is a convex set, the entropy of an isolated system is a strictly concave function of its arguments.17 Why is concavity considered important? It means that for \lambda_1, \lambda_2 > 0 with \lambda_1 + \lambda_2 = 1:

\[ S(\lambda_1 X \oplus \lambda_2 Y) \geq \lambda_1 S(X) + \lambda_2 S(Y), \tag{7.8} \]

where equality holds when X = Y or when the \lambda's are 0 or 1. We shall see that when one moves to SM, entropy is usually regarded as a measure of the lack of information, hence if two ensembles of identical systems in different states X and Y are fitted together,18 one loses information that tells from which ensemble a specific sample stems, and therefore entropy increases.

14 Fermi (1936, 52).
15 Recall that X and Y are equilibrium states in the state space of the system.
16 A scaled state space is physically interpreted as a state space of a system whose properties are the same as in the original one, except that the amount of each chemical substance in the system has been scaled by the factor t and the range of other extensive variables, e.g., energy, volume, has been scaled accordingly.
17 We say that a function f is (strictly) concave if -f is (strictly) convex. What is convexity? Let R^n denote n-dimensional Euclidean space (i.e., the set of all real-valued vectors of length n). A subset D of R^n is said to be convex if, for all x, y in D and all \lambda in [0,1], we have \lambda x + (1-\lambda)y in D (i.e., for any two points in D, the line segment connecting the two points lies entirely in D). Now let f be a function defined on a convex subset D of R^n. We say that f is convex if, for all x, y in D and all \lambda in [0,1], we have f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y). For a function f defined on R, this inequality says that the chord connecting any two points on the graph of f lies above the graph. If equality holds if and only if either x and y are identical, or \lambda = 0 or \lambda = 1, then f is said to be strictly convex.
18 What in the mathematical language is described as a 'convex combination' \lambda_1 X + \lambda_2 Y, and is noted here with the sign \oplus.

Summarizing, entropy in TD is a well-defined state function of an individual physical system in equilibrium. Its elusive character, however, is revealed once we move to the realm of statistical mechanics and join the quest for the 'holy grail'—a mechanical model for thermal phenomena.

7.2 Entropy in SM - Boltzmann vs. Gibbs

The gap between continental philosophy and the Anglo-American tradition has drawn much attention in many different contexts. It is thus not surprising that also in classical SM two orthodoxies—continental and American—clash. Back in the continent, contemporary debates on the atomic hypothesis and the kinetic theory led Boltzmann to introduce his concept of entropy. On the other side of the ocean it was Gibbs who formulated SM in pragmatic terms, of which one was his coarse-grained entropy. Both approaches, and the entropies they introduce as counterparts to the thermodynamic concept presented briefly above, are in many cases in perfect agreement in equilibrium states. Yet, as a matter of fact, Gibbs' approach governs the literature. It is the purpose of this section to persuade the reader that from a foundational point of view, and within the context of constructing a mechanical model for thermodynamic phenomena on the basis of Hamiltonian dynamics, one should prefer Boltzmann's entropy to Gibbs'. My strategy in achieving this goal is an old one. Rather than defending Boltzmann's entropy I am going to attack Gibbs'. The tactic, however, is novel. I am going to elaborate on a long forgotten essay of Carnap (1977) written when he was a visiting fellow in Princeton. What is interesting in this essay is that although Carnap is right on the money when he criticizes Gibbs' approach and the hastiness in identifying entropy with information-theoretic uncertainty which accompanies it, he does so on the basis of what he considers an epistemological flavour which the Gibbsian approach introduces into an otherwise strictly physical context. This, unfortunately, blunts his criticism, since he can be accused of committing the fallacy of exchanging 'subjectivity' with 'contextuality'. By amending this flaw I expose the genuine and more serious problem in Gibbs' coarse-grained entropy and conclude that within the standard Hamiltonian framework Boltzmann presents a far more desirable alternative for the foundations of SM.

7.2.1 Gibbs

In Gibbs' approach one represents the microstate of a physical system with N particles, each with f degrees of freedom, by a point X in Γ, where Γ, the phase space of the system, is a 2Nf-dimensional space spanned by the Nf momentum and Nf configuration axes. As the system evolves this representative point will trace out a trajectory in Γ which obeys Hamilton's equations of motion.19
Next, one considers a fictitious ensemble of individual systems (represented by a 'cloud', or a 'fluid', of points in phase space), each in a microstate compatible with a given macrostate (say, such and such energy at such and such pressure contained in such and such volume). The macroscopic parameters thus pick out a distribution of points in Γ. We then ascribe a normalized density function to the ensemble, ρ(p, q, t),20 and, except for entropy and temperature, the mean value of any meaningful phase function with respect to ρ describes the system's thermodynamic properties.21 For entropy Gibbs chooses the expression

\[ S_{FG}(\rho(X)) = -K \int \rho(X) \log \rho(X) \, d\Gamma, \tag{7.9} \]

where the integral is over Γ and K is Boltzmann's constant. For isolated systems we use the microcanonical probability distribution in (7.9),22 and this will match, up to an additive constant, the value of TD entropy. Eqn. (7.9) also allows us to define a temperature. Remarkably, using these definitions Gibbs recovers the familiar thermodynamic relations for systems in equilibrium.

All this is so very fine, but if Gibbs' systems obey Hamilton's equations of motion then the 'cloud' representing them in phase space swarms like an incompressible fluid.23 Consequently his 'fine-grained' entropy as defined in (7.9) is invariant under the Hamiltonian flow:

\[ \frac{dS_{FG}(\rho)}{dt} = 0. \tag{7.10} \]

If there is a problem in S_FG it is not just the fact that it does not move.

19 See Gibbs (1902) and Appendix B.
20 Where p is momentum, q is position, and t is time.
21 This follows from the fact that for extensive functions of macroscopic systems with a large number of particles the mean value is identical with the maximum value.
22 That is, we regard each microstate as equiprobable.
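The invariance in (7.10) can be illustrated numerically with a toy model (a sketch of mine, not from the text): for a harmonic oscillator the Hamiltonian flow is a rotation of the phase plane, a Gaussian ensemble density stays Gaussian with the same covariance determinant, and so its fine-grained (differential) entropy does not budge.

```python
# A numerical sketch (not in the text): for a harmonic oscillator with unit
# mass and frequency the Hamiltonian flow rotates the phase plane. A Gaussian
# ensemble density keeps a constant covariance determinant under this flow,
# so its fine-grained (differential) entropy is constant, as in eqn. (7.10).
import math

def entropy_gaussian(cov):
    # Differential entropy of a 2D Gaussian: (1/2) log((2 pi e)^2 det C).
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 0.5 * math.log((2.0 * math.pi * math.e) ** 2 * det)

def evolve(cov, t):
    # The flow matrix M is a rotation (symplectic, det M = 1); C -> M C M^T.
    c, s = math.cos(t), math.sin(t)
    m = [[c, s], [-s, c]]
    mc = [[sum(m[i][k] * cov[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(mc[i][k] * m[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

cov0 = [[2.0, 0.3], [0.3, 0.5]]            # an initial, non-equilibrium spread
s0 = entropy_gaussian(cov0)
s1 = entropy_gaussian(evolve(cov0, 1.7))   # same entropy at a later time
```

Any symplectic (volume-preserving) linear flow would do here; the rotation is just the simplest case in which Liouville's theorem can be checked by hand.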
Recall that TD entropy is defined only in equilibrium, so in order to construct a mechanical counterpart one only needs to find a function whose value at a later equilibrium state is higher than at an earlier equilibrium state.24 But since the macroscopic parameters change between the two equilibrium states, Gibbs' approach has no problem in doing this just by defining a new ensemble with a new probability distribution for the new equilibrium state, and this will match the thermodynamic entropy as before. This solution, however, reminds one of a famous exchange in the Journal of Philosophy where, in commenting on a paper titled "Supervenience Is a Two-Way Street", G. Hellman wrote "Yes, But One of the Ways is the 'Wrong Way'!"25 Indeed, as Callender (ibid.), who follows Sklar (1993, 54), notes, it is not fair to use the macro-parameters, which are supposed to be derived from the micro-parameters, in order to construct the latter. In other words, the ensemble at the later equilibrium state should be the Hamiltonian-time-evolved ensemble of the earlier equilibrium state; otherwise the system is not governed by Hamilton's equations as one originally presupposes. Thus, if one wants to use Gibbs' fine-grained entropy as a mechanical counterpart to TD entropy, then one must abandon standard, Hamiltonian, dynamics, since it does not connect the two fine-grained equilibrium states.

That this is the true problem with Gibbs' fine-grained entropy escaped many commentators, and as a result the foundations of SM were soon piled with a lot of dead wood. Stemming from the famous Ehrenfests' paper (1912, 43-79), where Boltzmann's students complained of Gibbs' treatment of irreversibility by categorizing it bluntly as "incorrect",26 the last century was consumed with attempts to find a monotonically increasing function as a counterpart for TD entropy.

One way to achieve this goal is to follow Gibbs himself, who introduces the mathematical trick of 'coarse graining' and devises new notions of entropy and equilibrium. In this approach one divides Γ into many small finite cells of volume ω and then takes the average of ρ over these cells. The result, ρ̄, is attributed to all the points in the cell Ω_i. This allows us to write the coarse-grained probability distribution as the ensemble density in each cell:

\[ \bar{\rho}(X) = \frac{1}{\omega} \int_{\Omega_i} \rho(X') \, d\Gamma' \quad \text{for } X \in \Omega_i. \tag{7.11} \]

And although ρ is conserved by the Liouville flow, its density in each cell need not be so. Thus, equilibrium is defined as a state in which ρ̄ has fibrillated uniformly throughout the available Γ, and by substituting ρ̄ for ρ in (7.9) one then defines the coarse-grained entropy as:

\[ S_{CG}(\bar{\rho}) = -K \int \bar{\rho}(X) \log \bar{\rho}(X) \, d\Gamma. \tag{7.12} \]

But as Callender (ibid., 360) notes, this cannot be the whole story, since according to Gibbs the irreversible behaviour of S_CG is due solely to coarse graining, that is, to our incomplete knowledge of the microstates:

Thermodynamic behaviour does not depend for its existence on the precision with which we measure systems. Even if we knew the positions and the momenta of all the particles in the system, gases would still diffuse through their available volume.

Callender, however, is both right and wrong. He is right that in order to account for irreversibility coarse graining alone is not sufficient.

23 This fact is also known as 'Liouville's theorem'. See Appendix B.
24 Callender (1999, 358).
25 Hellman (1992).
26 Ehrenfest (ibid., 71).
As Ridderbos (2002, 69) tells us, to complete the Gibbsian story the dynamics of the system must also lead representative points far away from each other; that is, the dynamics should satisfy mixing conditions or any of the stronger conditions in the ergodic hierarchy.27 But Callender claims further that coarse graining cannot be necessary for the explanation of irreversibility since it hinges on a kind of epistemological subjectivity which seems irrelevant to the physical course of events, and although I agree with his further claim, I beg to differ on the reason behind it.

While it is true that coarse graining introduces a kind of 'subjectivity' into SM which presumably not only has no counterpart in TD but also depends on an arbitrary choice of resolution on the experimenter's side, Callender's criticism, and the line of thought it gives voice to, has only limited validity. There are more serious criticisms of the coarse graining method and the concept of entropy which accompanies it. Before fleshing these out let me mention here yet another philosopher who took the claim about subjectivity to its extreme.

27 See Appendix B and also Krylov (1979), who constructs the entire foundations of SM on the basis of dynamical instability of trajectories in phase space.

7.2.2 Carnap Against 'Subjectivity'

Carnap's two essays on entropy were written during his fellowship at the Institute for Advanced Study in Princeton between 1952 and 1954. Here is a passage that summarizes best his conclusion from conversations concerning entropy with mathematicians and physicists in Princeton:28

It seemed to me that the customary way which the statistical concept of entropy is defined or interpreted makes it perhaps against the intention of physicists a purely logical [read 'epistemological'—A.H.]
instead of a physical concept; if so, it can no longer be, as it was intended to be, a counterpart to the classical macro-concept of entropy introduced by Clausius, which is obviously a physical and not a logical [read 'epistemological'—A.H.] concept. The same objection holds in my opinion against the recent view that entropy may be regarded as identical with the negative amount of information.

The core of Carnap's complaint is that TD entropy has the same character as temperature, pressure, heat, etc., all of which serve "for the quantitative characterization of some objective property of a state of physical system".29 Gibbs' entropy, according to Carnap, cannot be regarded as a counterpart to TD entropy since by definition it depends upon the specificity of the description; hence it is an epistemological rather than a physical concept.

Furthermore, by referring to an unpublished paper of his, "The concept of degree of order", Carnap (ibid., 10) criticizes the view which regards entropy as a measure of disorder. If one distinguishes, as he does in his unpublished paper (and as we have done in chapter three, section 3.4), between epistemic and ontological randomness, it becomes clear that the existence of a genuine randomizing procedure entails disorder, or epistemic randomness, at any chosen level of description, but the converse does not hold. It then follows that if entropy is defined in terms of a genuine randomizing procedure or mechanism, it would provide more information about disorder at various levels of description than an entropy concept defined in terms of disorder at a certain chosen level of description.

28 Carnap (ibid., xii).
29 Carnap (ibid., 35).

In order to see why this line of criticism has only limited validity, and why, if one wants to reject Gibbs' entropy on a foundational basis, a more serious line of criticism should be taken, consider the following three points:

1.
Contrary to what Carnap (and Callender) claim, there are many thermodynamic entropies, corresponding to different degrees of experimental discrimination and different choices of parameters. Similarly, in SM the definition of entropy depends on what macroscopic description of the system is chosen, and in a given macroscopic description a variety of definitions are possible.30 If this is so, then (a) Carnap's reasoning can be accommodated with the different entropies in TD, to each of which one can construct a physical objective counterpart, and (b) that human choice dictates the use of this or that family of concepts does not discredit the objectivity of each member of the family.

2. Even if we accept Carnap's demand for a randomizing procedure in the definition of an objective concept of entropy, Gibbs' approach is still consistent with such a demand. Entropy, according to Gibbs, is a functional of the probability distribution on Γ. If this probability distribution were established by a random procedure then Carnap's requirement would be fulfilled. A careful reading of Gibbs (1902, ch. 14) shows that this is indeed the case—the canonical distribution is appropriate for a system which has reached equilibrium with a heat reservoir, while the contact with the latter serves as a physical randomizing process.31

3. Finally, if entropy is defined according to a level of description, and if the latter is justified by the experimental arrangement and the physics involved, then this only means that entropy is a contextual, or a relational, concept, and not a primitive one.
Given a specific observer with a given, fixed, measurement resolution, it is the dynamics of the system which uniquely determines whether or not a particular non-uniform probability distribution evolves to a coarse-grained distribution which is uniform with respect to the given measurement resolution.32

The upshot is that if one argues against coarse graining, as Carnap and Callender do, on the basis of its 'subjectivity' alone, then one allows the Gibbsian to escape the criticism through the above three loopholes. It is one thing to say that the probabilities of SM are purely epistemic, and another to ground these epistemic probabilities in the physics of thermal phenomena. Thus, the coarse graining method yields an entropy which is not subjective but contextual,33 and the more serious criticism of the coarse-grained entropy lies not in its 'subjectivity' but in the fact that this concept of entropy is foreign to the project of constructing a dynamical model for thermal phenomena.

30 Grad (1961); Penrose (1981).
31 Note the striking similarity here to the open system approach and the caveat that this procedure is not genuinely random.
32 See Ridderbos (ibid., 72).

7.2.3 The (Real) Case Against Coarse Graining

A short reminder: thermodynamic equilibrium is defined as a state in which the thermodynamic variables are stationary.
Let us call the part of the theory which involves the relations between these variables in equilibrium thermostatics, and the rest of the theory which describes the relations between different equilibrium states thermodynamics.34 Since TD entropy (which, despite the fact that it cannot be measured directly, is still a member of a distinguished small set of the thermodynamic variables) is defined only in equilibrium, it 'plays' its thermodynamical 'role' in the relation between the entropies in two equilibrium states.35

Prima facie Gibbs' approach satisfies the requirement for reproducing thermostatics, since Gibbs' ensembles give correct results in equilibrium states and allow one to recover the correct relations between the thermodynamic variables. Yet this achievement loses much of its appeal as soon as one recalls that the original project was to do so on the basis of the underlying dynamics.

33. The distinction between these two adjectives was, unfortunately, blurred because of careless readings of many authorities in physics. Thus, for example, Heisenberg (1970, 38) writes: "Gibbs was the first to introduce a physical concept which can only be applied to an object when our knowledge is incomplete." And Born (1964, 72) adds: "Irreversibility is therefore a consequence of the explicit introduction of ignorance into the fundamental laws." Some may regard these quotes and the acceptance of Gibbs' approach they advocate as yet another distressing effect that QM has had upon the foundations of physics.
34. Although also here the TD trajectories trace 'quasi-static' processes, that is, infinitely slow transitions from one equilibrium state to another.
35. If the process which connects two equilibrium states is not quasi-static, no thermodynamic trajectory can trace it. Nevertheless the entropy difference between the two states is still a well-defined quantity.
In TD it is meaningful to say that a certain individual system, say, a gas in a box, occupies an equilibrium state. But in Gibbs' approach an equilibrium ensemble contains by definition individual systems which are far away from equilibrium. Without further criteria for identifying the ensemble components it remains unclear how to relate, in dynamical terms, the average quantities the equilibrium ensemble yields with the thermodynamic variables which apply to an individual system. Since Gibbs' micro-canonical distribution generates empirically verified predictions, the question here is not whether but why such a relation holds.36

The gap between Gibbs' ensembles and the underlying dynamics becomes wider when we move from thermostatics to thermodynamics. We have already mentioned the problem with Gibbs' fine-grained entropy: there exists no way to connect two fine-grained entropies in different equilibrium states with Hamilton's equations. This is the price we pay when, in order to circumvent the famous reversibility and recurrence objections, we define Gibbs' entropy as a function of a probability distribution of a fictitious ensemble of systems, rather than of an individual system.37 Unfortunately, in paying this price we simply abandon the original goal, which was to account for thermal phenomena with the micro-dynamics.

Presumably, those who insist on maintaining Gibbs' fine-grained entropy can live with the lack of dynamical justification for its verified predictions in thermostatics. Yet even they must admit that Gibbs' coarse-grained entropy must be abandoned if it yields false predictions in thermodynamics. Recall that coarse-graining is invoked because (due to Liouville's theorem) the fine-grained distribution cannot evolve into a uniform distribution in phase space, which is the mark of 'true' equilibrium. As a result an alternative concept of equilibrium is introduced in which only an "appearance" of equilibrium is obtained.
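This appearance can be exhibited in a toy computation of my own (an illustration, not an argument from the text): the baker's map, a standard measure-preserving and mixing transformation on the unit square, stands in for the Hamiltonian dynamics, and a fixed 4 × 4 grid of cells stands in for the observer's measurement resolution. The coarse-grained entropy of an ensemble of points climbs to its uniform maximum even though, point by point, the dynamics is reversible and volume-preserving.

```python
import numpy as np

def baker_map(x, y):
    """One step of the baker's map, a measure-preserving, mixing map on the
    unit square (a toy stand-in for Hamiltonian dynamics)."""
    return (2 * x) % 1.0, (y + np.floor(2 * x)) / 2.0

def coarse_entropy(x, y, k=4):
    """Coarse-grained Gibbs entropy -sum p ln p over a k x k grid of cells."""
    cells = (np.floor(x * k).astype(int) * k + np.floor(y * k).astype(int))
    p = np.bincount(cells, minlength=k * k) / len(x)
    p = p[p > 0]                   # 0 ln 0 = 0 by convention
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
# An ensemble concentrated in one corner: far from (coarse) equilibrium.
x = rng.uniform(0.0, 0.25, 20000)
y = rng.uniform(0.0, 0.25, 20000)

s0 = coarse_entropy(x, y)          # 0: every point sits in a single cell
for _ in range(10):                # let the mixing dynamics act
    x, y = baker_map(x, y)
s1 = coarse_entropy(x, y)          # approaches ln 16, the uniform value
```

Here s0 is exactly 0, while after ten iterations s1 sits within a few percent of ln 16 ≈ 2.77: uniform at the given resolution, although the ensemble actually occupies ever-finer filaments whose Liouville volume never changes.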
Let us call this "apparent" equilibrium 'quasi-equilibrium'.38 This 'quasi-equilibrium' distribution indeed gives rise to the same values of the macroscopic variables that define the thermodynamic state of the system as does the fine-grained distribution, and for all practical purposes the two definitions are empirically indistinguishable. The fine-grained entropy, however, is still subject to the dynamical laws, which preserve the correlations between the system's micro-components and reflect the hidden 'order' in the system. These correlations are ignored in 'quasi-equilibrium', and the discrepancy between the 'true' and 'quasi' equilibria becomes evident when one is able to control the micro-components of the system, either directly or indirectly.

As already mentioned in chapter five (section 5.4), the famous 'spin echo' experiments exemplify exactly such a case of indirect control. Since in these experiments the first RF signal is induced after the spins have reached 'quasi-equilibrium', the echo produced by their alignment after the first signal comes as a complete surprise to the coarse-graining method, which must regard such innocent velocity reversal as a violation of the second law.39

36. Leeds (1989) goes further and argues that the question is not why but rather why do Gibbs' averages work for one observable quickly, and for another slowly. The remarkable fact is that unless one assumes (non-physical) strict ergodicity the microcanonical probability distribution is not even unique, that is, it is not the only one which is preserved under the dynamics. See Earman and Redei (1996).
37. See Callender (ibid., 352) for a lucid formalization of the problem.
38. The term was coined by Blatt (1959, 749).
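A minimal caricature of the spin-echo situation (a sketch under drastic simplifications of my own: free precession at random frequencies, with the RF pulse modelled as a bare phase reversal, not the experiments' actual Hamiltonian) shows why the echo embarrasses the coarse-grainer: the dephased state looks like equilibrium at any coarse resolution, yet the hidden phase correlations suffice to restore the initial magnetization.

```python
import numpy as np

rng = np.random.default_rng(2)
omega = rng.normal(0.0, 1.0, 5000)    # each spin's random precession frequency
t = 10.0

# Free precession for time t: the transverse magnetization decays to the
# sampling-noise level, and at any coarse resolution the spins look like
# they have reached equilibrium.
phases = omega * t
m_dephased = abs(np.mean(np.exp(1j * phases)))

# The RF pulse, modelled here as a bare phase reversal, followed by the same
# free precession for another t: the hidden correlations rebuild the echo.
phases = -phases + omega * t
m_echo = abs(np.mean(np.exp(1j * phases)))
```

m_dephased sits at the sampling-noise level (about 0.01 for 5000 spins), indistinguishable from equilibrium, while m_echo returns to exactly 1; a monotonically increasing coarse-grained entropy has to book the second half of the run as a violation of the second law.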
Another facet of the same problem arises when one recalls that the fine-grained distribution—subject as it is to Hamilton's laws—is also susceptible to the consequences of Poincaré's recurrence theorem.40 In thermodynamics such recurrence will result in decreasing TD entropy. It is true that, as Boltzmann remarked to Zermelo, we should have to live that long to observe such a state, but what is important here is that the coarse-grained entropy—increasing monotonically in time as it is—never decreases, hence cannot account for such a true violation of the second law, which according to Hamilton's equations is possible in principle.41

Summarizing, Gibbs' coarse-graining approach fails in two ways. First, as the spin echo experiments show, it predicts a violation of the second law when there is none. Second, with its insistence on a monotonically increasing entropy function it fails to predict such a violation when there is one. Both failures stem from the simple fact that Gibbs' approach, in its attempt to free itself from the dynamical restrictions imposed on an individual system, becomes irrelevant to the individual system, hence has little to do with the micro-dynamical origins of thermal phenomena. Consequently, if one's aim is to construct a counterpart for TD entropy in classical mechanics one must abandon Gibbs' coarse-grained entropy and look elsewhere.

7.2.4 Boltzmann

It has become somewhat fashionable to resurrect Ludwig Boltzmann's old neglected concept of entropy, especially the reconstructed post-H-theorem entropy recently championed by authorities in statistical physics.42 Boltzmann introduces a concept of entropy twice in his career: first when he derives the H-theorem, and second when he defends this theorem against the reversibility and recurrence objections of Loschmidt and Zermelo.43 Both concepts are statistical in character, yet they differ in the origins of the probabilistic assumptions introduced into the underlying dynamics:44 in the H-theorem it is an assumption about continuous randomization of molecular collisions (the 'molecular chaos' assumption) which is necessary for the derivation of a stationary Maxwell-Boltzmann velocity distribution; in the post-H-theorem case it is an assumption of equiprobability of microstates combined with pure combinatorics.

Although the former concept of entropy has been described by some, e.g., Price (1996), as 'a dead horse', it is the one that is more related to the dynamics of the system, hence much closer in spirit to Boltzmann's original goal—that of constructing a mechanical counterpart to TD entropy.45 Setting this involved issue aside, I want to concentrate here on Boltzmann's later concept, the one which appears as an epitaph on his tomb in Vienna's Central Cemetery.

Contrary to Gibbs' ensemble approach, Boltzmann's entropy S_B is defined for the actual microstate x of an individual system which corresponds to a macrostate M(x). The latter, in turn, is compatible with many different microstates. In order to count how many microstates are compatible with a given macrostate we partition the 6-dimensional phase space of a single molecule of the N-particle individual system, say, an ideal gas (this space is called µ-space), into compartments which are macroscopically indistinguishable—they share the same thermodynamic features—and specify the number of particles in each cell. Each such specification, or arrangement (Boltzmann called them 'complexions'), determines a macrostate.

39. Ridderbos and Redhead (1998).
40. See Appendix B.
41. Callender (1999, 366).
42. E.g., Lebowitz (1994); Gallavotti (1999); and Goldstein (2001).
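Boltzmann's counting can be made concrete with a toy calculation of my own (the particle numbers are made up for illustration): for occupation numbers (n_1, ..., n_m) over m cells of µ-space, the number of complexions is the multinomial coefficient N!/(n_1! ... n_m!), and S_B is, up to constants, its logarithm. Uniform occupations dwarf skewed ones.

```python
from math import lgamma

def log_complexions(ns):
    """ln of the multinomial coefficient N! / (n_1! ... n_m!), the number of
    complexions (microstate arrangements) realizing occupation numbers ns."""
    n = sum(ns)
    return lgamma(n + 1) - sum(lgamma(k + 1) for k in ns)

# Toy macrostates for N = 1000 particles over 4 cells of mu-space.
uniform = [250, 250, 250, 250]    # equilibrium-like occupation
skewed = [970, 10, 10, 10]        # most particles crammed into one cell

s_eq = log_complexions(uniform)   # Boltzmann entropy in units of K, up to C
s_neq = log_complexions(skewed)
```

For these numbers s_eq − s_neq is about 1.2 × 10^3, i.e. the equilibrium-like macrostate is realized by roughly e^1200 times as many complexions as the skewed one; the N = 10^23 analogue of this disparity is what licenses calling equilibrium 'typical'.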
The µ-space is important only for counting the number of arrangements compatible with M(x). Once we determine this we are able to associate with each M a volume in Γ. Different arrangements yield different macrostates, and these partition Γ into disjoint volumes. The volumes are 'generated' by the projection of the Lebesgue measure onto the energy surface.46 Boltzmann's entropy is then defined as:

S_B = K log |Γ_M(x)| + C ,     (7.13)

where K is Boltzmann's constant, C is an additive constant which depends on N, and |Γ_M(x)| is the volume of phase space associated with the macrostate M(x).

For a gas in a box, where N is of the order of 10^23, there are overwhelmingly more arrangements corresponding to an equilibrium state (where the macro-parameters are uniformly distributed in the physical space, and the particles are distributed in µ-space according to the Maxwell-Boltzmann velocity distribution) than, say, to a state in which most of the gas is confined to a certain corner of the box with non-uniform distributions. Thus, an equilibrium state occupies almost all the relevant energy surface in Γ. In chapter two (section 2.2.2) and in Appendix B we discuss how to translate statements about volume in phase space into probability statements. We can then say with Boltzmann that thermal equilibrium is the most probable, or the typical, state of a physical system.47

Gibbs' coarse-grained entropy displays one of the characteristics of TD entropy, namely strict non-decrease; Boltzmann's displays another, namely additivity.

43. See Klein (1973) for an account of the evolution of Boltzmann's ideas.
44. See chapter five (section 5.1).
45. A quick survey of the physics literature demonstrates, however, that working physicists are unimpressed by Price's remark. Boltzmann's H-theorem is a derivative of Boltzmann's equation, which is one of the most useful equations in statistical physics.
46. See Appendix B.
Both, however, are concave and extensive. The latter property allows Boltzmann's entropy to attain non-decrease except for exceedingly rare cases. The reason is simple. If one acknowledges the vast separation of scales between the macroscopic and the microscopic levels, then after dividing S_B by the spatial volume of the system one obtains an entropy per unit volume whose differences for different macrostates are of order unity. This means that if NEQ is a state in which a gas of N particles is confined to half a container by an external constraint and EQ is a state in which the same gas is spread all over the container, the ratio of the volumes |Γ_EQ| and |Γ_NEQ| is of the order 2^N, which in the case of a gas with 10^23 particles is 2^(10^23). Thus, there is a probability of 2^(-10^23) that a gas in NEQ would stay there once the external constraint is dropped.48 Many physicists can live with that.

7.2.5 A Short Bookkeeping

As in the case of Gibbs' coarse-grained entropy, this cannot be the whole story. Since Boltzmann's entropy measures the number of microstates that the system is not in but could be in without us noticing, it serves more as a (quantitative) description than as a causal explanation of thermal phenomena. Surely it cannot be the case that the number of microstates our gas in a box does not occupy 'drives' the gas towards equilibrium. Considering that one's project is to construct a mechanical model for TD, the dynamics must play a certain role—if only to justify the probabilistic assumptions that lead to the definition of entropy as a probability measure.

47. Lebowitz (1994); Goldstein (2001).
48. This, in fact, is the precise meaning of the term typical. See Goldstein (2001, 43).
Furthermore, Boltzmann's approach leaves us with another problem: since the dynamics of classical mechanics are TRI, unless we postulate an initial low-entropy state for the universe as a whole, nothing prevents entropy from increasing also towards the past, which seems to go against our memories and experience. And although we can design low-entropy states in the lab, why was the entropy of the universe low to begin with?

Notwithstanding these shortcomings, which were discussed extensively in previous chapters, Boltzmann's entropy is a better starting point for a reduction project than Gibbs', simply because (1) it is defined as a function of individual systems and these are what we usually observe,49 (2) it has almost all the properties of TD entropy, and (3) it behaves correctly. Among its three merits, (1) is the most important, since it ties Boltzmann's entropy to the underlying dynamics and this immediately leads to (3). The price, as is well known, is the 'almost' in (2): the second law becomes a statistical law, in the spirit of Maxwell's (1995, 583) famous remark:

The second law has the same degree of truth as the statement that if you throw a tumblerful of water into the sea you cannot get the same tumblerful of water out again.

Callender (ibid., 371), however, adds another feature: (4) Boltzmann's entropy can be extended to the quantum regime, since the volume of a macrostate in phase space has a natural quantum analogue, namely, the dimension of the projector onto the macrostate in Hilbert space. Yet this feature is not unique to Boltzmann's entropy, since Gibbs' fine-grained entropy has similar analogues. We move, then, to the quantum regime to meet these counterparts.
49. The term 'objective' is deliberately omitted here since, although the number of microstates corresponding to a macrostate is an objective matter, there still exists a kind of coarse-graining in Boltzmann's definition, apart from the fact that if Boltzmann were to pick the velocity-position space instead of the momenta-position space his entropy definition would not have worked (Gallavotti 1999). It is noteworthy that Callender (ibid.), who attacks Gibbs' entropy on the basis of its subjectivity, allows a fair amount of physically justified choice of description where his pet entropy is concerned.

7.3 Entropy in the Quantum World

7.3.1 Von Neumann's Entropy

In his famous treatise on QM von Neumann introduces his concept of quantum entropy:50

S_VN = -K Tr ρ ln ρ ,     (7.14)

and justifies his definition on a thermodynamical basis. Contrary to Gibbs, who was careful not to identify his concept of entropy with the thermodynamical concept and referred to it only as an 'analogue', von Neumann and many other authorities in QM take S_VN to be identical with TD entropy.51 Von Neumann's quantum mechanical formalism guarantees that during a measurement S_VN increases.52 This feature, however, does not in itself assure us that S_VN is indeed TD entropy or that entropy increases during a measurement. In order to establish that, von Neumann proposes a thought experiment which heuristically equates mixed states with chemical mixtures.53 This experiment leads to an arithmetical argument intended to prove that the decrease in thermodynamical entropy is compensated by an increase in S_VN. Add