Saturday, September 15, 2007

Transcript of discussion: Albert

July 20 9.30
Probability in the Everett picture
Speaker: David Z. Albert
Commentator: David Papineau
Floor speakers (in order of appearance):
Lehner
Ladyman
Wallace
Saunders
Greaves
Loewer
Vaidman



Comment by Papineau

1. That was great, you did put all these issues on the table and that’s very helpful and address all the – I just want to make one general point; this’ll take about ten minutes, I hope you’ll bear with me, I won’t waste everybody’s time. Let me first make one specific point about one of your arguments, the fatness argument; I just want to make one little comment on that because it’s relevant to what I’m going to say in a second. I think it’s important that even if we go with Hilary and the fission programme and we have Everettian – I’m going to call them probabilities, but that’s just a word – I mean these things that we orient our actions to. Even if we have them without uncertainty, without any ignorance, and so we can think of them as a measure of caring for future branches, we need to distinguish two kinds of caring. There’s a kind of caring that goes with ordinary wanting, desiring certain outcomes, and there’s a kind of caring that goes with expecting certain results. I don’t think you disagree about this. I mean, they’re both kinds of caring in the sense that they inform decisions without any uncertainty or ignorance around. But they’re still different; they have a different direction of fit to the world. The carings that go with wantings are kind of up to you, everybody to their own taste. Whereas the carings that go with expectings are supposed to be somehow responsive to features of the world. So, having made that distinction, your question is: can you rationally expect the fat future more than the thin one, when the squared moduli of the amplitudes of the two branches are just the same and you don’t actually care wanting-wise about fatness and thinness? That’s the question, and I don’t know what the answer to that is; I just wanted to clarify these different kinds of caring. I suspect you’re right and that the Deutsch-Wallace proof doesn’t force the Born rule on Everettians, but I’m not going to talk about that. 
What I want to try and show is that even if the Deutsch-Wallace proof doesn’t do the job it’s advertised to do, even so, Everettians are much better off with probability than orthodox metaphysics.

2. My view, and I don’t know if I’m the only one, has always been that we should accept Everett because of what it says about probability, not despite what it says about probability. I think that the issue here is a decision-theoretic one – I know you don’t think about it like this – I think the central issue is why should we maximise objective expected utility. Equally, why should we bet with the probabilities, subjective probabilities; why should we set our credences equal to the squared amplitudes, why should we conform to the Principal Principle – those are all variants of the same idea. There’s a chancy roulette wheel in front of us, red is two-to-one on, black is two-to-one against, you’re offered a bet of evens, you should bet on red. Why? Why should you bet on red? I don’t think you realise how much of a mess orthodoxy is in with respect to that question. Some of your remarks suggested that this is something to do with frequencies; we bet the way we do as a rational reaction to the frequencies of outcomes. I just don’t – I mean I’d like to see it spelt out –. So, we’re not sure to win this time but if we do it ten times, no, we’re not sure to win .. ten times. So you can go, you know, the infinite long run and frequency theories of probability and – I just don’t think it works. I think that once one looks through all that what one has to realise is that the principle of betting with the probabilities is a primitive principle, the Principal Principle is primitive. There’s no further explanation.

3. I think that means that Everettianism is no worse off than orthodoxy with respect to chancy decisions. Both of them just say it’s a primitive fact that you ought to set your credences to the squared amplitudes and act accordingly. Lev said to me yesterday that that’s an unorthodox view. And I guess it depends on where you come from. I was brought up in Cambridge by Richard Braithwaite and Hugh Mellor and Ian Rankin and I was taught it’s obvious that the Principal Principle is primitive. It’s obvious that you can’t get a further justification. And I take that to be – Anyway, the point is that if you think that then Everettianism is no worse off than orthodoxy. Actually, I think, having got this far, that I can show that not only is orthodoxy as badly off as Everett, it’s worse off than Everett. Think about an ordinary person …. I’m going to talk about an ordinary gambler but this is a model for decisions in general. Even after the ordinary gambler has embraced the Principal Principle as a primitive constraint on actions there’s still something funny about the ordinary decision makers. Here they are, maximising expected utility; they’ve committed themselves to choosing their actions to maximise objective expected utility, but that’s not what they want; what they want is the money. They don’t want to make the bet that will maximise expected utility over the two possible outcomes, they want actually to win. So what they’re doing is not what they’re after and one wants to know, well, why are you doing this thing given what you’re after is that? Now, maybe this is a bit of an obscure question; what I’m in fact saying is that we want a justification for the Principal Principle because there’s this gap between what they’re doing and what they’re after, and we haven’t got it. You might feel this is an obscure sort of question but still it is a question and I can bring out that it’s a real question by comparing the ordinary gambler with the dedicated gambler. 
Imagine a dedicated gambler; this is somebody that doesn’t care about money, he’s one of the world’s … he’s got a lot of money. What he cares about is making good bets. So, I mean gamblers have a notion of a bet that offers good value and my dedicated gambler is looking out for good value bets. He’s looking for cases where his bets will give him positive expected utility and when he finds such a case he makes a bet. But after that he doesn’t care; he thinks it’s vulgar to be upset if the bet doesn’t win. What satisfies him is that it’s a good bet; he’s found a good bet; he’s made it. Now, this person’s a bit odd maybe but people take pride in all kinds of different things, I can make sense of that. The point isn’t how odd this person is. But for this person this kind of betting behaviour isn’t chancy. There’s no issue after the person’s made the bet whether it worked out okay. For them, given their aims, all bets are sure-things so to speak.

4. Compare the dedicated gambler with the ordinary gambler. In a sense they’ve both made an arbitrary commitment. The ordinary gambler has committed themselves arbitrarily to setting their credences to match the squared amplitudes, say; they conform to the Principal Principle. The dedicated gambler has committed themselves to desiring good bets. There’s a kind of isomorphism between the two commitments, they’ll both act the same. But there is a difference; there’s a question that arises for the ordinary gambler that doesn’t arise for the dedicated gambler. Why are you acting in this way? Why are you maximising expected utility? That’s not what you want. You want something else. With the dedicated gambler that question has gone away. You can’t ask the dedicated gambler ‘Why are you maximising?’; that’s what the dedicated gambler is after ….. there’s no further question about it. No further …… Okay, well I hope you can see where I’m – the analogy’s not perfect but the point is that if we’re knowledgeable Everettians then we’ll be like the dedicated gambler, not the ordinary gambler, with respect to the question ‘Why are you behaving this way?’. If you’re a dedicated Everettian you will favour maximising expected utility over the future branches per se, not as a means to some further end. I mean, the whole idea of there being some further end assumes that either you’ll win or you won’t. But our knowledgeable Everettian doesn’t think like that. There’s a branch, two-thirds intensity, where you’ll win and a one-third intensity branch where you won’t. If the winning branch has two-thirds intensity that was a good bet. Once you’ve done the bet, which is good in this sense, there’s no further question of whether the bet worked out well in this particular case. So there’s no room to ask a knowledgeable Everettian, as you can ask the ordinary gambler, the person who thinks of themselves as in an orthodox universe, ‘Why are you maximising expected utility?’. 
There’s no further aim beyond maximising expected utility for the Everettian. There’s no further result that might or might not happen.

5. So I take the fact that this nasty philosophical question arises for the ordinary gambler and doesn’t arise for the Everettian to be a very strong point in favour of Everettians. That’s actually what got me to believe in Everett in the first place. I used to worry about this thing, I was brought up to worry about it and I saw that if you were an Everettian the question just went away. Okay, maybe not everyone is bugged by that puzzling philosophical question about maximising expected utility in the way I was taught to worry about it, so maybe this isn’t the place to make it. But at least I hope that I’ve been able to show that probabilities are a plus for Everett and not a minus.

Albert
6. Very, very briefly, look, on this first terminological issue, I’m inclined to follow Hilary. Hilary’s quite adamant, and I think rightly, that in the context of a fission picture expectation is absolutely the wrong word for this kind of caring. Hilary is pretty clear as I remember, in earlier papers. You want to know what to expect? Expect each of the results. Period, end of story. I think that’s an admirably clear way to put it and it seems to me to dangerously muddy the waters to use the language of expectation about that kind of caring. I think Hilary’s choice of ‘caring’ to describe this – these amplitudes, if I use them in the way I use probabilities in ordinary decision theory, are representing the degree to which I’m concerned about what goes on on this or that branch. That seems to me exactly the right way to put it.

7. As to your much larger second point, there’s a lot to say about that, let me just confine myself to one line, and there’ll be much more to say later, I know. Which is that it doesn’t seem a very compelling – yeah, probability’s hard and philosophy is hard and philosophy of probability is particularly hard and there are lots of mysteries about it. It’s never been for me a compelling way to argue for something to say, look, we have no idea what it is, why not say it’s this? It is hard, but there are things that we always thought we knew about it, even if they were far from everything, and one of those things was, well, in cases where probabilistic language is intelligible, you’re talking about cases of one outcome, or something like that. I don’t see any – of course things are going to get a lot easier if you say, oh, what I was really after was never the money, it was just maximising this function or something like that. That seems to be a way of avoiding seeing what’s hard about these problems rather than solving them. But we can talk more about that later, and thank you very much for your comment.

Lehner
8. I’ll save the big questions for later. Just a small question about your fat and skinny example. Is the fatness and skinniness supposed to be some physical property or supervene over some physical property?

Albert
9. Yeah, sure.

Lehner
10. So why isn’t that just part of the payoff?

Albert
11. It’s not, because I’ve said explicitly that this guy’s preferences are such that if you give him a choice between two deterministic evolutions, one in which he’s fat and one in which he’s thin, he’s indifferent between them.

Lehner
12. Look, I would personally rather be thin, right. So I think to me it doesn’t matter that the world is deterministic and I’m insecure? about the future or

Albert
13. No, no, I think you’re misunderstanding what I’m saying. There’s a simple test, okay, for what kind of role this is playing in his decision theory. Whether it’s playing the role of a weight, as it were (unfortunate comment), or the role of a preference. Here’s how to distinguish between them. If you offer the guy – you forget about branching – you say to the guy, you have a choice: if I press button A you’re gonna be fat, for sure, and if I press button B you’re gonna be thin, okay. If the guy is indifferent between those, which I’m positing he is, then his calculations are not coming out the way they are because of a preference to be fat or thin. They’re coming out the way they are because this fatness and thinness is playing the role of what Hilary calls the caring measure, rather than playing the role of a preference. So I’m positing that he’s indifferent between those; the claim that he’s indifferent has a perfectly definite measurable cash value and I’m telling you what that is.

[new question]

Ladyman
14. ……a question about the explanatory requirement of Everett ….. Why isn’t the claimed explanation that the Everettian approach gives of the frequencies we actually observe just a kind of standard one, that we say, look, so why do we get – a random shuffle of cards gives a very discernable pattern of suits or numbers because there are many more deals like that, than there are where everything comes out…………so why do we observe that the frequencies of a repeated series of Stern-Gerlach experiments are those predicted by the Born rule, well, because there are just many more branches the relative frequency … the Born rule …

Albert
15. Right. This was a conception that I mentioned at the very beginning of the talk. I just, I don’t understand how an account of the kind you just described can make sense without there being some element of either stochasticity or ignorance in the world, okay. And it’s both of those that are being resolutely denied when we start off with something like, say, the fission picture. That is, in the case of the cards, typically, it’s going to be a case of our ignorance of the exact initial conditions and we find that it turns out to be an empirically successful hypothesis to put a uniform probability over which of those might obtain and so on and so forth. In the case where you’re not ignorant of the initial conditions, as I say, and maybe this is what you’re doing in the back of your mind, there’s a strong semi-conscious temptation to say to yourself something like: well, I guess what I do, what the real me does is pick at random among these branches or something like that. But it’s important to see that the minute you catch yourself thinking like that you’ve got to pull back, because you’re going down – you’re either kidding yourself about what you’re doing, about whether you’re genuinely adhering to the Everett picture, or, if you do go down that road, you’re going to end up like Barry and me, in this single-minds picture, and you don’t want to go there.

[new question]

Wallace
16. Two quick points. One very quick point about the motivation for these semantic type arguments. They go something like this. Yes, absolutely, as you’ve said at various points, we know quite a lot about probability, uncertainty. We know quite a lot about the rules of epistemic confirmation and so on. That stuff we know in natural language and the danger is that how to phrase that in metaphysical language depends on how that language maps on. So we get one story as to how it works in a single universe and a different story about how it works in branching universes. So to assume that the correct metaphysical-level story is the single universe story and then criticise Everett for breaching that, on the grounds that we need a single universe story to be correct, is to beg the question.

Albert
17. But let me just ask you a question here because I’m really not sure I understand the situation very well. I want to know more about what these arguments aim to accomplish. You guys are very forthright when you make these arguments, about saying: you want to know what the metaphysics is? It’s the fission metaphysics. Period, end of story, okay. Now, as far as I can tell, but I may be misunderstanding my own reasoning here; as far as I can tell, once that’s granted all the worries I have about this stay in place. All the worries I have about this come from the fundamental metaphysics of the fission picture. So that, if somebody’s telling me about some manoeuvre they’re making that’s going to leave all that in place my immediate reaction is, gee, this manoeuvre isn’t going to be interesting vis a vis solving these problems. Do you think I’m missing something when I say this?

Wallace
18. I think you are. I think what you’re missing is that it does leave the metaphysics in place but it doesn’t leave the correct epistemology and confirmation theory in place that maps onto that metaphysical story.

Albert
19. Okay, we should talk about that more.

Saunders
20. As co-author here, perhaps David and I disagree about this. I don’t think it leaves the fission picture in place at all.

Albert
21. Oh, okay, but you say it does!

Saunders
22. No,…..[hubbub]

Albert
23. Well, one of you…the paper says it does. It says, look – it’s important for the reader to understand, and I’ll read the quote, that we’re not looking for deep metaphysics from our semantics we’re looking for serviceability.

Saunders
24. And in particular the fission picture is metaphysics. And we’re not looking to find truth about that

Albert
25. Oh, I see, so the way you regard it is that on metaphysical questions you’re being agnostic; is that it?

Saunders
26. Not quite.

Wallace
27. I think there’s a slight misunderstanding about what counts as semantics and what counts as metaphysics

[hubbub]

Saunders
28. Take the single universe, now take four-dimensionalism; we have a complete physics, suppose it’s General Relativity, we’ve known this theory, we know the kind of picture of the universe it represents – four-dimensionalism. Philosophers for a century, but especially in the last thirty or forty years, have debated how ordinary language usage should map on to that physics. There was never any question of the physics; the question was how to extend ordinary language beyond ordinary usage. That question is a metaphysical one; it is not a physical one.

Albert
29. Okay, good, now I see what David meant a second ago, good. The question is if someone’s coming to you in my position where the worry is, look, what’s left fixed by all these debates is already where the problem is then he’s not going to be impressed by these arguments. Now, David thinks that that’s not the case, that part of what’s in play here, part of what’s in flux here, are questions of epistemic strategy and so on and so forth. And, good, that’s worth talking about.

[new question]

Greaves
30. My question is about…whether or not …theory of rationality is relevant to….. You said that you agree that, yes, this narrow sense of the physical modelling of a system where it shouldn’t be relevant and isn’t in the sorts of ……[Albert saying ‘right,right,right’]…….You also agree that it should be relevant when what we’re talking about is confirmation theory. But you think, no, what we’re really talking about is this third thing, explanation, and it shouldn’t be relevant to that. So I’d just like to make a comment about that.

31. There’s always been this sort of minimal explanation you can get from an Everett interpretation where you just say, look, I had a theory which entailed that there would exist all these branches and there would exist all these observers and in particular there would exist an observer with precisely the record of sequence of actions [?] that you have in fact got…..[noise]…..and what you found out was that, indeed, there is……[noise]…….and the reply that one usually gives to the question is: here’s what more I want, a minimal explanation doesn’t give me a reason for having increased my degree of belief in the theory or regarding the theory as confirmed as a result of what I’ve seen. So the more that we want is precisely confirmation-theoretic and then we’re back to the second thing.

Albert
32. But no, that doesn’t seem the right way to put it. The right way to put it seems to be: what I want explained is why these frequencies as opposed to others are the ones that emerge; are the ones that I saw.

Greaves
33. Because that’s the branch that you’re on.

Albert
34. Good, but that’s not – Here’s a way to explain everything: everything happens with certainty. Let’s go home. You know, science is over. That’s not a good explanation. What we want is an explanation of why these particular frequencies emerged. I didn’t put that question in a way that involves observers or rationality or degrees of belief or anything like that. That’s what I want explained.

[new question]

Loewer
35. A question to David Papineau. I just didn’t understand how it is that you’re getting around the problem of providing some understanding of rationality?.........many-worlds……I mean, the question could arise that? somebody could be trained to minimise expected utility. Or to maximise on Monday, minimise on Tuesday, you could have all sorts of desires..

Papineau
36. I wasn’t trying to do that at all. I still take the principle that you should maximise expected utility to be primitive and not justified in terms of anything else. Both for the Everettian and for the orthodox metaphysician. My complaint about the orthodox metaphysician was that they were putting the primitive commitment in the wrong place. You were committing yourself to doing something that was detached from what outcome you hoped to achieve. Whereas for the Everettian you’re putting the primitive commitment in the right place. I wasn’t justifying the commitment at all.

Loewer
37. Why would anyone have such a primitive commitment?

Papineau
38. You tell me why you’ve got it as an orthodox – why are you maximising….

Loewer
39. ..as a step to explain – expected utility with? .a high chance of luck…….tell a story about chance….Okay, now, are there stories about chance……………..yeah, there are stories

Papineau
40. You kind of neo-Lewisians have “things that you can see dimly but well enough”. Now you can justify acting on the probabilities…….[laughter]……..I was presupposing in my argument that that wasn’t going to work. There’s a further issue about whether this Lewisian finite-frequentism business can give a justification of acting with the probabilities. I don’t believe it for a moment.

Albert
41. But if that’s what you’re doing, isn’t the rhetorical structure here, like I was saying, to get a lot of mileage – it’s some kind of mystery-mongering. Look, we don’t know what it is, it could be anything, it could be a hippopotamus, you know, let’s say it’s that.

Papineau
42. If you’re presenting an argument here it had better be something other than that the Everettian view of these matters is unfamiliar. Of course it’s different – what I’m happy to call probabilities isn’t a matter of a measure of some outcomes, chances of success, in the competition to become real. It’s a different kind of thing. So it’s different from what you think of, but is it incoherent? Unfamiliarity isn’t incoherence. You’ve got to do something to show it’s incoherent.

Albert
43. Well, I have tried to do something..

Papineau
44. Sure, sure

Vaidman
45. Just a brief reply to your complaints. First you said it’s avoidable. I don’t think it’s avoidable. Just to make it vivid, I put a sleeping pill in……in every quantum experiment the branching will happen much faster than your consciousness.. thinking?

Albert
46. Yeah, but there won’t be a period when you’re saying to yourself I wonder what’s going on.

Vaidman
47. But, again, conceptually in fact all your thinking? ..will be…

Albert
48. Well, okay, I don’t understand why that’s relevant but go ahead

Vaidman
49. …you say it’s too late. It’s too late to get ….chance…uncertainty….I don’t claim so, I think there’s no probability, no uncertainty, what there is is a kind of effective probability justification for this caring measure…That you have an ignorance probability you can define, and I don’t want to prove it, I just take the postulate, Born postulate, and define it, but I have to find some…

Albert
50. Okay, this is something I want to know more about. This hooks up with what David was talking about a few minutes ago. The view of this that I guess I understand best is the one that Hilary seems to have, which is adamant about justifying the caring measure having nothing to do with epistemic issues at all. This is a case of decision theory in the face of certainty; your job is to choose which branches – which branchings, excuse me, not branches – which branchings you prefer, and there are supposed to be arguments to the effect that you’d be crazy, given basic preferences for more money and so on and so forth, not to choose certain branchings over other branchings. And notions of ignorance on her view have nothing at all to do with it. Now, a number of people seem to think that if I deprive myself of this talk of ignorance, or if I deprive many-worlders of their talk of ignorance, I’m going to be depriving them of very important resources with which they can do a lot, even though the basic metaphysics remains the same. I guess I’m not understanding yet how that works. But I’m eager to.
