The word 'probability' is ambiguously used to refer to two distinct ideas:

(i) Chances, which are conjectured to be a property of Physical systems - a property, or propensity, independent of the state of human knowledge.
(ii) Degrees of belief or betting quotients, which describe the extent of human confidence in the truth of a claim.

Chancy systems are analysed following the approach of Von Mises and Popper: Real Physical systems behave in various ways; to describe some such systems' behaviour, humans devise a model in which a system has the property of being able to generate, in the long term, an infinite sequence of outcomes (a collective) with each outcome having a particular relative frequency of occurrence. This relative frequency provides the numerical value for a chance. In the short term no pattern appears in the outcomes.
An outline of the Physics of Poincaré, Hopf, and Engel, on arbitrary functions, is included, to indicate how systems, while fully determined by laws and initial conditions, come to behave in this characteristic way.

Subjective degree of belief has no necessary relationship to chance. It is methodological, concerning what people judge to be a reasonable extent of confidence in a claim, given the available evidence. The gulf between chances and reasonable degrees of belief can be bridged by an Inductive Presupposition, which itself seems to be unjustifiable, but is in universal use.

This Dual Theory fits unproblematically with all our intuitions concerning Probability. Traditional problems have arisen as a result of excessive emphasis either on objective chances, or on subjective degrees of belief.

(A Dual Theory of Probability)

24.3.1997; 17 512 words; Version 5.2


{I thank Rom Harré, David Papineau, Brian Skyrms, and John Welch, for valuable comments on a previous version of this paper}

"Distant stars are moving away from us". "This coin will land displaying a head". In everyday language, both of these claims are described as 'probablex'1 (Footnotes are collected at the end of the essay), meaning that people are not able to establish them as true or false. But, despite this common feature, they are very different: the first only concerns a human degree of belief (betting quotient) in a claim, given the available evidence; it applies to all types of conjectures; this is epistemic, more subjective. The second concerns both this, and a conjectured feature of the natural world: 'chance'; this is ontological, less subjective.
In everyday communication, context and empathy identify which idea we intend. To emphasise one aspect, as an Objectivist or a Subjectivist, is, though temptingly simple2, to oversimplify. Such an attempt is a 'sub-theory'. Consider: "People have rational degrees of belief in propensities, given the evidence of relative frequencies". Each of the highlighted words and phrases is associated with a sub-theory. Since our Dual Theory includes part of each sub-theory, it cannot usefully be categorised as either 'Subjective' or 'Objective'. It contains both elements, working harmoniously side by side.

Our Problem and Metamethodology

What is the Dual Theory a theory about? How can it be assessed? Our evidence - the facts for our theory to explain - is, firstly, the everyday uses of the word 'probabilityx', and, secondly, since we are typical users, our intuitions. Our problem is to provide an organised summary of these uses - to summarise explicitly and truthfully the implicit ideas which are guiding the usage.
Though we do not assume that there is one essential concept present, we do assume that people have some coherent ideas when they use the word 'probabilityx'.
We therefore assess our theory by its (a) consistency (b) accordance with our uses and intuitions. This is what is meant by 'analysing', 'unpacking', and 'giving an account' of, a concept.
The challenges that such a theory faces are typically that it cannot make sense of a particular familiar intuition: say, the intuition that I have a definite probability of dying in the next year, or the intuition that a 5 has a probability of 1/6 of being thrown, not say 1/2 (because it can come up either 5 or not-5). The response that the intuition is mistaken can only be made with caution3.

Our primary aim is to find the truth about these uses and intuitions. It is a reasonable subsidiary aim both to describe the facts of human use and intuition in a simple unitary system, and to mark the limits of such simplicity. But it is not reasonable to insist on such simplicity. Whether it exists, or not, is an empirical question.
Similarly, it is a reasonable subsidiary aim both to describe the extent of justification for these human uses and intuitions - and to mark the limits of such justification. But it is not reasonable to insist that such justification must always exist 4. Again, whether it does or not is an empirical question.

The Dual Theory

The word 'probabilityx' is used to refer to two complementary things, one relatively less subjective, the other relatively more subjective:

(a) Chance : Some particular Kinds of physical system are conjectured to have the property (propensity) of chanciness: they generate outcomes which are very hard for humans to predict in the short term, yet display steady relative frequencies in the long term.
This is a matter of Semantics and Ontology - the forming of a clear conjecture about Nature. It is relatively less subjective.
(b) Degree of belief : Any claim, including ones about chancy systems, can have a degree of human belief associated with it. This degree of belief is either personal (based on anything) or consensual (rational - based on an assessment of the available evidence).
This is a matter of Methodology - of how reliable our conjectures are, how much we should believe that they are true, given the evidence. It is relatively more subjective.

These apparently dissociated things have in common only that they both concern human uncertainty; in (a), because of the nature of the system, we are uncertain of its short-term outcome; in (b) because of general lack of evidence, we are uncertain of the truth of the claim. While this family resemblance fully justifies the everyday use of one word for both, it has confused Philosophers.

This dual theory is not original. There are repeated references to it in the literature. I have tried to complete the model by fitting together available parts - after removing some inappropriate excrescences, and constructing a few pieces to fill in the gaps. Nonetheless, to avoid exegetical criticism, I will call it my theory 5.
Given the considerable number of Philosophers who believe that one or other of the parts is the whole model, I fear that the Dual Theory (DT) may not be popular.
I now outline the two aspects of this theory.

Element 1: CHANCE
"This coin will land displaying a head"

Probabilityx has an objective aspect. Insurance companies, as Poincaré, for example, wrote, successfully pay out dividends on the basis of probabilitiesx; they could continue to do so, even if further information on the medical conditions of their clients was provided by unscrupulous doctors - indeed, even if total evidence was supplied. If probabilityx ascriptions were entirely subjective, dependent on human ignorance - and therefore not perceived in Nature by a super-being - then we would not be able to explain "Why chance obeys laws" (Poincaré p. 403). Why are we, as non-super-beings, able to use probabilityx assignments in cases where effects are being produced by certain kinds of causes, to "successfully foresee, if not their effects in each case, at least what their effects will be, on the average"?
Consider, he suggested, the Kinetic theory of gases: we are presently unable to compute, given initial conditions at a certain time, and physical laws, how many molecules would hit the side of a box 5 seconds later; we cannot even establish the initial conditions; yet, oddly, the very complexity of the motions leads us to simple predictions - which turn out to be true. Even if, with future technology - perhaps as superbeings - we could do the computation, and could establish the initial conditions - removing our ignorance - the predictions based on randomness and equiprobability would still be correct; chance would still obey laws. The natural system, consisting of a large number of molecules in a box, has a property, linked to the success of these predictions, which is independent of human beings in general, and of their ignorance in particular.
What is this property? Humans have experience of many systems which appear to have a characteristic Kind of behaviour: their outcomes, while seeming to occur in approximately constant ratios in the medium term, hop about unpredictably in the short term. Stimulated by this experience, humans have developed the concept of a property, which these systems might have. Without trying to make it absolutely precise6, we now give guidelines for a meaning of the concept of chance1, sufficient, for example, to help an alien from a non-chancy world to understand what we intend to mean by the word7.

The conjecture "The chance1 of an output of the system being 5 is 1/6" is taken to mean that we are conjecturing that the system has a characteristic property that displays8 itself thus:
(i) Long term limiting relative frequency : if, in a series9 of tests, certain aspects of the system repeat without change10 while other aspects vary11, then the output 5 would appear with a relative frequency of 1/6 in an infinite series of tests12
(ii) Short term randomness : The sequence does not obey any easily recognisable law, or fall into any computable pattern. There is therefore, for a human whose only evidence is the previous sequence of outcomes, an inescapable element of doubt concerning the next outcome13.

Chance1 is thus a theoretical concept, in the Physicist's sense. Humans can define any concept they like.
"To what extent does it apply to the external world?" is an important, separate, question. Compare defining the vis viva of a moving object as the product of its mass, volume, and speed cubed. We define the effect that this vis viva has, such that, when the object's speed relative to a target is within 0.000 000 1% of 0.5 of the speed of light in vacuo, vis viva, rather than momentum, is conserved in the collision. So, we have a nice clear concept. Now we need to obtain evidence to assess whether objects really have vis viva. And this will be difficult, because the property has consequences only in circumstances difficult to test.
This is the situation we are now in with chance1. We have defined a concept. We can see immediately that the claim that it applies to a particular real system will be difficult to test (i) because of the reference to the sequence being infinite (ii) because the non-existence of a pattern in a sequence is hard to establish.

This is for later. Our analytical, onto-semantic, task with respect to chance is complete. We are now at liberty to use the word, and the idea, freely.

Element 2: DEGREE OF BELIEF
"Distant stars are moving away from us"

As already explained, the only thing that this methodological, Epistemological, area has in common with the conjectures concerning Nature in Element 1 is that both involve uncertainty. Philosophically this family resemblance is unimportant. The two elements are conceptually unrelated. This is a vital source of long-standing confusion.
This element concerns the extent to which we have confidence in a claim C, given indecisive evidence E (insufficient to establish that C is true or false) - the extent to which we would bet that C is true - the extent of our degree of belief in C.
Recalling our meta-methodology, we are aiming to describe, in as simple a way as possible, the judgements that people make. Then we are aiming to describe the extent to which these judgements are justified.

Firstly we distinguish reasonable degree of belief mB and individual degree of belief iB. The former is characterised by consensus agreement; the word 'reasonable' does not imply that the judgement is going to be justified - it is merely a convenient label. The latter is any personal degree of belief, regardless of what the consensus may judge.

In this next section we discuss the best available principled description of human reasonable degrees of belief, which is that according to Bayes' theorem.

Degree of belief can be roughly classified as from 0 (no belief) to 1 (total confidence). mB is part of a triadic relationship between {claim, evidence, reasonable degree of belief}. Values of mB are intuitive, part of everyday life and scientific method; judging by the failure of many attempts, we do not think that there are simple rules to summarise them, other than Bayes' Theorem. This says that our reasonable degree of belief in a claim, given the evidence and background knowledge, is increased by:
(BTi) increases in our degree of belief in the claim, given background knowledge
(BTii) decreases in our degree of belief in the evidence, given background knowledge
(BTiii) increases in our degree of belief in the evidence, given the theory and background knowledge.
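In standard notation - writing h for the claim, e for the evidence, and b for background knowledge - these three clauses can be read directly off the familiar form of the theorem:

```latex
P(h \mid e, b) = \frac{P(e \mid h, b)\, P(h \mid b)}{P(e \mid b)}
```

BTi is the factor P(h|b) in the numerator, BTii is the denominator P(e|b), and BTiii is the factor P(e|h,b) in the numerator.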
As Howson and Urbach show convincingly in their (1993), this theorem satisfactorily summarises the facts of human use and intuitions concerning the triad {claim, evidence, reasonable degree of belief}. In particular, for example, it summarises the vital role of novel fact prediction, as a human method in the search for the truth about Nature.
Howson and Urbach also explain, with unusual clarity and firmness, that the theorem does not justify human behaviour14. Its only normative force, they insist, is that a person who denies a consequence of the theorem, but insists that he is using 'degree of belief' (or, loosely, 'probability') in the usual way, is guilty of inconsistency. In other words, the theorem captures a key aspect of the standard use of these words.

The role of chance in degree of belief : This is the first of the confusing links between the two elements of our dual theory. Our phrasing of Bayes' Theorem was odd, because we avoided the word 'probabilityx'. BTiii would usually be expressed as "the probability of the evidence, given the theory and background knowledge". But, given our theory, this means either 'degree of belief' or 'chance'. Which is it?
Consider the successful prediction by Fresnel, using his new wave-theory of light, of the spot of light in the middle of the shadow of a small object. This novel fact prediction increased Physicists' degree of belief in the wave theory, following their intuition that the chance of a random theory with no truth in it successfully predicting this observation was very small.
Physicists were comparing the reasonable degree of belief in two meta-theories: (MTt) Fresnel's theory contained some truth; (MTf) Fresnel's theory is totally false (entirely a human fiction). The degree of belief in each of these meta-theories, given background knowledge, is equal (say), and the degree of belief in the novel fact, similarly, is equal. But Physicists assessed the chance of the dot occurring, given MTf, as very small. This leads to a very low degree of belief that the dot would have occurred, given MTf. This then implies, inverting according to Bayes' theorem, a very low value for the degree of belief in MTf, given the dot occurring. In other words, it is not reasonable to believe that Fresnel's theory is entirely a human fiction, given this evidence.
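This inversion can be made numerical. The sketch below is illustrative only: the two likelihood values are assumptions chosen for the example, not historical estimates.

```python
def bayes_update(prior_t, likelihood_t, likelihood_f):
    """Reasonable degree of belief in MTt after the dot is observed,
    by Bayes' theorem over the two meta-theories MTt and MTf."""
    prior_f = 1 - prior_t
    evidence = prior_t * likelihood_t + prior_f * likelihood_f
    return prior_t * likelihood_t / evidence

# Equal prior degrees of belief; the dot is likely given MTt, and has
# a very small chance given MTf (both numbers are illustrative).
posterior_t = bayes_update(prior_t=0.5, likelihood_t=0.9, likelihood_f=0.001)
print(f"mB(MTt | dot) = {posterior_t:.4f}")
```

Even with equal priors, the very low chance of the dot given MTf drives mB(MTf | dot) close to zero, mirroring the reasoning attributed to the Physicists above.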

Summarising this use of Bayes' theorem: People are inclined to disbelieve a theory when events have been observed which, according to that theory, had a very low chance of occurring - and which, therefore, they would not have expected to observe.

This, we suggest, is well described by Bayes' theorem. It is, as I have indicated, an aspect of Bayes' theorem which is relevant to the indirect testing of any theory. The importance of this fact becomes apparent when we consider the potentially confusing situation where the theory to be tested is a chance; where Bayes' theorem describes a reasonable degree of belief in the truth of the conjecture that a Physical system possesses a chancy quality.
This almost self-referential situation, combined with refusal to accept that justification may be impossible, combined with a tendency to use 'probabilityx' to mean a mixture of objective chance and subjective degree of belief, has been the cause of some Philosophical confusion.

The Inductive Presupposition

In this section we consider the extent of justification for Bayes' theorem. Every second of their lives, humans make an Inductive Presupposition IP. As generalising creatures, they instinctively jump to general conclusions from particular experiences. They presume - they believe - that their spatio-temporally local experiences have been, and will continue to be, typical of the Universe. They presume that Nature will not use their experiences to mislead them. Sceptical Philosophers have long suspected that they lack justification for this presupposition - for this belief. As a key step in our analysis, these located sceptical doubts are now quarantined. We are not implying that they are solved; we are merely separating our variables. If the reader thinks that they are solved, good - she should insert her solution before the presupposition, and move on. If the reader, like us, thinks that they are not solved, then she should note that an unjustified presupposition has been made, and move on. Either way, the important point is that she should note the presupposition and move on. So we now define 'reasonable/IP belief' to mean a belief that is reasonable, conditional on the universally accepted Inductive Presupposition/IP.

Predicting, and Testing - the route to Bayes' theorem

Degrees of belief in consequences, given chances (Predicting): Suppose that we accept that the chance of event e occurring is 0.000 1. Then our reasonable/IP degree of belief in the event, given this chance, is 0.000 1. Degrees of belief in chances, given consequences (Testing): Inversely, suppose that we accept that an event e does occur. Then our reasonable/IP degree of belief in the chance of it occurring, given that it has occurred, is high. If an event occurs, then our reasonable/IP degree of belief in a theory which claims that the chance of this event occurring is extremely low, is extremely low (unless we have very strong other reasons for believing the theory).

To what extent are these two moves justified? Isn't a chance hypothesis consistent with any finite sequence of outcomes, however long? To what extent is it fair to bet 1:10 000 on e occurring in the next test, conditional on the chance of e being 1/10 000? It is not fair; it is not justified. But it is fair/IP and justified/IP. If our experiences are typical, a fair sample, of how Nature behaves, then: (i) we can reasonably/IP believe that we will not observe the occurrence of an event which has a very low chance of occurring (Predicting observed events). (ii) we can reasonably/IP disbelieve theories whose truth would require, unreasonably/IP, that our experience was untypical (Testing theories).

Substituting the first link into the second, we conclude that if an event occurs, then our reasonable/IP degree of belief in a theory according to which our reasonable/IP degree of belief in the occurrence of this event would have been extremely low, is extremely low (unless we have very strong other reasons for believing the theory). The two reasonable/IP degrees of belief are correlated. In notation:

mB(h/e) ∝ mB(e/h) × mB(h)

This discussion need not be prolonged. It is heading, qualitatively, towards Bayes' theorem, which we have already agreed to be an elegant summary of a cluster of human intuitions as to what is reasonable/IP to believe. Howson and Urbach's admirable book provides many examples.

But why, it may be asked, did we go to this trouble, when we were already using Bayes' theorem at the beginning? There are two reasons: (i) If 'probability' is ambiguous, then the calculus of probabilityx, and statements within it such as Bayes' theorem, will tend to be tarnished by confusions engendered by a unitary interpretation of P(x). (ii) We needed to establish not only a principled description, but also the extent of justification, for human behaviour.
The calculus, including Bayes' theorem, is a principled summary description of the use of the word 'probable' in everyday language - a summary of the idea (concept), as used and approved by the community of users. It summarises the use which is regarded as reasonable, that which is accepted by the consensus. The limit of its normative force is to impose consistency: if a user claims to be using the word in this accepted way, and then does not accept some result which follows within the calculus, he can be accused of inconsistency (as Howson and Urbach emphasise). But it provides no justification for these uses - nor does it aim to do so. Indeed, by following everyday consensus usage, it precisely replicates the human attitude to Philosophical doubts concerning Induction - it ignores them. Mathematicians, Statisticians, and those who use their results, instinctively ignore such doubts.
Our Philosophical aim, by contrast, required us to (i) separate the less subjective from the more subjective aspects of probabilityx, and (ii) assess the extent of justification for the judgements made in its name. In these tasks we could receive no assistance from the calculus, nor from Bayes' theorem. However, our aim also required us to capture, to reconstruct in a principled form, human intuitions. Since these are nicely summarised by Bayes' theorem, it was essential that we were able to reconstruct it.

We now turn to the classic hurdles which loom up, ready to trip a theory of probabilityx.

Hurdles for any theory of Probability

In the rest of this paper I indicate how this DT jumps the following hurdles:

(i) Are conjectures concerning chances1 empirical? {If no definite testable consequences can be derived from them, nor any evidence prove or disprove them, then they are metaphysical, and have no place in positive science}
(ii) Can we explain the application of probabilities to single events?
(iii) Can we explain how chances have arisen, on the supposition that Nature is deterministic? Can we explain how some systems lead to disorder, and then back to some kind of statistical order?
(iv) Can we explain what the chance1 that we associate with a system amounts to, other than the infinite sequence ratio? {Have we explained what a chance is - in the world?}
(v) Can we explain how chances seem to vary, depending on the choice of outcome space? {If we regard the die as having 6 outcomes, then the chance of getting 5 is 1/6; if we regard it as having 2 outcomes - 5 or not-5 - then the chance of getting 5 is 1/2}
(vi) Can we account for conditional probabilities?
(vii) To what extent do we have evidence that any real systems are approximately chancy?
(viii) Suppose that a die has been thrown, at t = 0, and a 5 has just been obtained. What was the probability of this event occurring? {Was it, for example, 1/6, or 1?}
(ix) Can we explain how the system, and indeed the outcome, is to be specified? After all, if the system is specified too precisely, there may be no variation in the outcome, while if the outcome is specified too precisely, every one will be unique. Doesn't ambiguity over the Unique Experimental Protocol - the specification of the system - make the objective probability unacceptably variable, for a quality that is supposed to exist in the external world?
(x) Wouldn't it be preferable to stick to reasonably definite, testable, things like degrees of belief (in the form of betting quotients, and utility), rather than conjecturing the real existence of peculiar qualities of systems, which are not positively testable?
(xi) Does ignorance lead to equiprobability?
(xii) What happens if we interpolate two throws of a fair die into a sequence of throws of a heavily loaded one?

I hope that this list includes your favourite hurdles for a theory of probabilityx. I will tackle them in the order that they would perhaps have occurred to you - an order therefore more pedagogical than logical. In the process I will repeat and develop aspects of the theory.

Hurdle 1. Are conjectures concerning chances empirical?

To what extent do we have evidence that there are some such systems, at least approximately, in Nature? To what extent can we establish values for the chances in these systems?

Since this concerns how conjectured chances are tested, it is a methodological question.
What methods are used to test particular conjectures, such as "This die has 6 sides"? This is direct testing; observation confirms or disconfirms it. This case is unfortunately not relevant to us.
What about general and theoretical conjectures? This is indirect testing; testable consequences are deduced from them. Verifying these does not, however, establish the conjecture as true, since many other conjectures could have had the same consequences. By observation of human behaviour (intuitions) we have already discovered that humans make the completely unjustified Inductive Presupposition (IP): If one theory T1 implies that a consequence C is very likely to occur, and T2 implies that C is very unlikely to occur, and then C is observed, then T1 is more likely to be true than T2. This is, of course, entangled with Induction; it is the presupposition that what we happen to observe is a fair sample of the consequences of the true general laws and theories; it is the presupposition that Nature does not deceive us, when we collect our measly fragments of information from her.
It is, of course, the presupposition that we identified before, in our discussion of Bayes' theorem. But at that stage we were merely noting its presence in a description of human intuitions concerning the triad {claim, evidence, reasonable degree of belief}, as it applied to all claims. Now we apply it to claims concerning chances.
Conjectures concerning chances are made empirically testable by the application of Cournot's Rule (see, for example, Gillies (1973)). Criticism of this rule is misconceived. His rule is not a part of the Ontology or Semantics of chance; it is not a part of the less subjective element 1 at all; it is part of element 2 - the methodology. Consider a familiar example: Our conjecture is C1: a die has a chance of 1/6 of coming up 5. We obtain evidence E1: we throw the die 6000 times, and get only 2 5s. We now have a claim, and some evidence. The evidence is logically compatible with the hypothesis - that being an inevitable feature of chance hypotheses. Now humans use Cournot's rule, which is that, roughly, very small probabilities are impossibilities. Our form of the rule is IP: "Do not believe in the truth of theories which imply that events you have observed have a very small chance of occurring". If our conjecture C1 is true, then E1 had a very low chance of occurring. We conclude that we should not believe that C1 is true.
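How small is 'very small' in this example? A log-space computation (direct floating-point evaluation would underflow) of the binomial probability of E1 given C1 gives the scale; the sketch assumes independent throws, each with chance 1/6 of a 5:

```python
import math

def log10_binom_pmf(n, k, p):
    """log10 of the probability of exactly k successes in n independent
    trials, each with chance p of success."""
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + k * math.log(p) + (n - k) * math.log(1 - p))
    return log_p / math.log(10)

def log10_binom_tail(n, k_max, p):
    """log10 of P(at most k_max successes), summed in log space."""
    logs = [log10_binom_pmf(n, k, p) for k in range(k_max + 1)]
    m = max(logs)
    return m + math.log10(sum(10 ** (l - m) for l in logs))

# E1: at most 2 fives in 6000 throws, given C1: chance of a 5 is 1/6.
print(f"log10 P(E1 | C1) = {log10_binom_tail(6000, 2, 1/6):.1f}")
```

The result is a probability of roughly 10^-469: on the IP form of Cournot's rule, ample grounds for not believing C1.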
Thus our conjecture concerning a chance is indeed empirical, and is tested in a familiar-sounding way15.

Have we justified this procedure? No. We judge that the extent of justification is zero16. This is an interesting fact about human indirect testing of claims. Can we define what counts as "very small"? No. That is another interesting observation, concerning the apparent roughness of this human methodology.

Our unpleasant conclusion is that our best model of chancy systems produces consequences which are compatible with any finite observed sequence of outcomes. Chancy conjectures have no justifiable empirical significance. But this is not a scandal; they merely 'join the club' of other hypotheses which, being too distant from direct testing, suffer from the problem of 'Inference To The Best Explanation'. Human intuition carelessly leaps the logical gap.
The reader should find this unsatisfactory. Of course it is. But it is the truth about our human situation - our reach exceeds our grasp.

Hurdle 2: Can we explain the application of probabilities to single events?

Following Von Mises and Howson (and not, for example, Miller (1994)), we suggest that single events, if not regarded as part of a collective, do not have chances1 associated with them. I can coherently, of course, express a degree of belief that I, as an individual, am going to die in the next year - and a fortiori I can express a probabilityx that I will die. I can coherently conjecture a chance of dying if I consider myself as a person, or as a man, or as a man aged 48, or as a man aged 48 who takes a bit of exercise, but if I insist on cutting adrift from all collectives, the sentence "I have a high probability of dying in the next year" can only express a degree of belief.
Von Mises' chance1 cannot, consistently with its meaning, apply to events at all, multiple or single. A single event is generated by the system; a collective of events is, hypothetically, generated by the system - it makes no difference; the property chance1 is a Physical quality of the system - the property of tending to generate collectives. In other words, the question: "What is the chance1 (for a 40 year-old person who has just signed on to the Life Insurance Company) of the event 'Me dying in the next 10 years'?" is a misposed question. This is unsurprising: "What is the weight1 of his troubles?", where 'weight1' is as defined in Physics, is similarly misposed.
In this case, we can continue to talk of our degree of belief that we will die in the next 10 years; we may even have a betting quotient associated with the degree of belief; but this has nothing to do with chance1.
In other words, if people ask: "What is the probabilityx associated with the event 'Obtaining 5 on the throw of this die at t+1'?", they are making either a concealed reference to the chance1 associated with the system generating a 5, or a reference to degree of belief.
Is there any reason why we might want chance1 to be a property of events, or indeed of a single event, rather than - or as well as - a Physical property of a single system?
(a) "Some patterns of everyday speech seem to have the form of an assignation of a probabilityx to an event" . This is trivial. It is typically a degree of belief.
(b) "We would like chancex to be a single-case propensity". This is meaningless. A propensity is a property of a Physical system.

Hurdle 3: Can we explain how chances have arisen, on the supposition that Nature is deterministic? Can we explain how some systems lead to disorder, and then back to some kind of statistical order?

We here summarise the approach of Arbitrary Functions, due to Poincaré. It provides a model for how the phenomena we call 'chances1 ' could arise naturally in a deterministic world.

The Experiment


On the bench in front of us is a gas container, connected to an electronic device, on which is a blank display. When we press the button beside the display, a number appears. We press it a couple of times; numbers between 1 and 6 appear. They show no immediately obvious pattern; they hop about. We record them for 360 presses (tests); each appears about 1/6 of the time17. We record 360 000 tests; each appears very nearly 1/6 of the time. This is an interesting phenomenon. We cannot predict individual outputs, despite our best efforts to find some pattern to the sequences. But we seem to have a physical law that we can use to predict ratios for large numbers of outputs.
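A minimal simulation reproduces the two-sided phenomenon. It assumes, purely for illustration, that the device behaves like a fair six-sided source; the point is only that individual outputs hop about while relative frequencies settle:

```python
import random
from collections import Counter

def relative_frequencies(n_tests, seed=0):
    """Simulate n_tests button presses and return the relative
    frequency with which each output 1..6 appeared."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_tests))
    return {face: counts[face] / n_tests for face in range(1, 7)}

for n in (360, 360_000):
    freqs = relative_frequencies(n)
    worst = max(abs(f - 1 / 6) for f in freqs.values())
    print(f"n = {n}: largest deviation from 1/6 is {worst:.4f}")
```

The deviations shrink roughly as 1/sqrt(n), matching the observation that 360 000 tests come much closer to 1/6 than 360 do.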

We now decide to study the system on the bench. We hope to devise a Physical model of the system, to see if our model might display, in the short and long term, the characteristic behaviour.

The computer simulation of a model of the gas in the container

We find that the gas container has a small pressure sensor inside it. This generates a voltage V, proportional to the pressure detected. When we press the button, the device samples this voltage. If it has value a the device outputs a 1 to the display. If the voltage is a + e, where e is a very small value, then it will output a 2; if it is a + 2e, a 3; and so on, in a cycling sequence18. In other words, the device's output is very sensitive to small variations in its input. Because different numbers of molecules hit the sensor per second, the pressure on the sensor varies, and therefore the voltage varies, giving outputs which hop about.
Since the voltage is the key variable which links the two parts of the system, we call it the 'Poincaré variable'.
Which part of the system is responsible for the short-term unpatterned, yet long-term patterned, variation in output? If it is not the electronic processor (the secondary system ), it must be the gas and sensor (the primary system ). The molecules in the box, starting with some initial conditions, and governed by, say, deterministic laws, display behaviour over time - including the entire sequence of outputs - which is determined19. Nonetheless, the pressure on the sensor is varying in a way which, though hard to predict, has some kind of long-term pattern in it.
The gas contains about 10^27 molecules, so we are unable, as humans in 1997, to calculate what will happen. We cannot even establish where all the molecules are, and what their speeds are, at t = 0, far less calculate, using classical mechanics, where they will all be at t = 5, how many of them will hit the sensor, how hard, and therefore what the pressure on the sensor will be. We are reduced to argument, and computer simulation, to work out what is happening.
Suppose, following Poincaré, that our first simulation is very simple - too simple. At t = 0, all the molecules are projected in the +x direction. They are in a perfect cube, with idealised smooth walls at which molecules rebound, with the angle of incidence i equalling the angle of reflection r. This simulation produces outcomes which time-repeat, giving a cyclic pattern to the outputs which we have not observed. It gives no sign of the characteristic behaviour we are seeking.
This failure is not surprising. Our idealisation left out essential aspects of the situation. The real container is not a perfect regular shape. The molecules will change direction after their first collision. They will hit a rough wall - at the molecular level - and not obey the law of reflection. An infinitesimal variation in their original path will cause a finite change in their succeeding velocity. Infinitesimal further changes, due to collisions with other molecules, will then produce further large changes in their direction.
Sometimes, as when Galileo initially left out air resistance, an idealised model can still give very accurate predictions, covering all the main aspects of the phenomenon - leaving only details to be tidied up. But other times, especially when positive feedback, or non-linear equations, govern aspects of the system, leaving one apparently small factor out of the model can lead to inability to derive major aspects of the behaviour of the real system20.
We run our second, less idealised, simulation, on a more powerful computer, with more realistic walls - consisting of lots of tiny bumps and dips - and realistic interactions between the molecules. We find that if all the molecules start off in one direction, at one speed, then, after a certain number of collisions, the order is lost. Very soon "their final distribution has no longer any relation to their original distribution" (Poincaré, p.401). If the simulation is run on, with 1 000 000 molecules, we find that a simulated sensor in the corner indicates a characteristic kind of variation in number of hits - a variation which recurs however we set the molecules off. The immensely complex simulated behaviour of the more realistic situation has led, in one respect, to a law-like simplicity.
The simplicity is this. The simulated sensor repeatedly records the number of molecules that hit it in successive 1 ms intervals. In the first interval it records, say, 1000 atoms. In the next, it records 890. After, say, 100 s of recording these numbers, it processes its data, to find the frequency of occurrence of intervals in which particular numbers of atoms arrived. It finds that between 986 and 995 atoms arrived, in 100 of these intervals; between 996 and 1005 atoms arrived, in 102 intervals; between 1006 and 1015 atoms arrived, in 101 intervals. In other words, closely similar numbers of arrivals are recorded almost equally often. The larger the number of molecules we simulate, the more accurately this holds true.
Suppose that we cannot yet explain, using only our objective theoretical description of the situation, why there is this characteristic similarity in the number of times that one number of hits are recorded, and the number of times that very similar numbers of hits are recorded. This present failure does not imply that the characteristic similarity is not objective ; it just implies that we have not yet managed to tease out decisive predictions, explanations, of every aspect of the behaviour of the system, by methods other than modelling. Why should we have done? Computer modelling is regarded by Physicists as a fully acceptable substitute for solving equations, in cases where the latter has proved impossible.
In our simulation, the ratio of occurrence of pressures in one interval, which we could label b , has come out very similar to the ratio of occurrence of pressures in a closely neighbouring interval21, which we label b + f . The longer we run the simulation, and the more particles we put into the box, the more accurately this result is obtained. We call this property of the pressure variable, 'Poincaré variation'22 (Poincaré pp.403-4).
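Poincaré variation can be exhibited in a purely deterministic setting with a few lines of code. The logistic map below is our own stand-in for the gas-and-sensor system, not the simulation described above; the seed and bin width are arbitrary. Its overall distribution of values is far from flat, yet any small interval b occurs about as often as its neighbour b + f.

```python
def neighbouring_bin_counts(n_steps=200_000, n_bins=100):
    # Deterministic chaotic map standing in for the primary system.
    x = 0.123456789
    counts = [0] * n_bins
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)              # logistic map: no randomness anywhere
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    # Poincaré variation: the ratio of occurrence of one small interval
    # of values is very similar to that of the adjoining interval.
    mid = n_bins // 2
    return counts[mid], counts[mid + 1]
```

Bins far apart have quite different counts (the distribution is not uniform); it is only adjacent bins that agree, which is exactly the property the arbitrary-functions argument needs.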
Comparing the simulation to the real world, we find evidence, by direct measurement of the pressure on a sensor in a real gas, that this variation is indeed occurring. Now we recall that the sensor converts this varying pressure to a varying voltage. Therefore the voltage will also vary such that if a occurs a certain proportion of the total number of times in a long sequence, then a + e will occur the same proportion of times, as will a + 2e. Similarly, if a + 5e occurs a certain proportion of the times in a long sequence, we know that a + 6e occurs the same proportion of times.
Suppose that a, a + 6e, a + 12e, and so on, cause the device to display 1. a + e, a + 7e, a + 13e, and so on, cause the device to display 2. And so on.
If therefore the electronic device continually recorded all the voltages received, it would end up with equal numbers of each output, from 1 to 6 [23]. We are closing in on the appearance of equichance for the six outcomes.
Suppose instead that the device does not produce an output until the button is pressed. If a is detected, then 1 is displayed as output. If a + e is detected, or a + 7e, then 2 is output. We now simulate, on our computer, different patterns of button pressing. We are facing the problem of sampling, in various ways, without appealing to randomness. We first try patterns where the button is pressed at regular intervals. We try patterns where the intervals increase and decrease according to periodic functions. We try every ordered pattern we can. We find that we always get the ratios 1/6 for outputs 1, 2, etc., just as we did in the real experiment. We are aware of the possibility that a sequence of presses could give a quite different ratio - we could get 1 every time the button is pressed. But every case we try gives these ratios. We can simulate the actual times of an experimental run. The functions that we have tried cover a large number of times of pressing. We find that, with our simulation, only once in, say, 10^6 trials do the ratios not come out to 1/6.
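One ordered pressing pattern is easy to test directly. In the sketch below the Poincaré variable is a simple deterministic rotation (our stand-in, chosen for brevity, not the gas pressure of the text), and the 'button' is pressed at perfectly regular intervals; the six output ratios still come out very close to 1/6.

```python
import math

def ratios_under_regular_pressing(presses=60_000):
    # Deterministic stand-in for the Poincaré variable: an irrational
    # rotation. No randomness is smuggled in anywhere - the presses
    # follow the most ordered pattern possible.
    alpha = (math.sqrt(5.0) - 1.0) / 2.0
    counts = [0] * 6
    for n in range(1, presses + 1):
        v = (n * alpha) % 1.0          # value sampled at the n-th press
        counts[int(v * 6.0)] += 1      # processor maps it to outputs 1..6
    return [c / presses for c in counts]
```

Running this gives six ratios each within a fraction of a percent of 1/6, despite the sampling pattern being wholly regular.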
Returning to the real world, this fits with the human case, where people, whether trying to follow simple patterns of pressing (every 1 s), following highly complex patterns, or pressing when they feel like doing so, get the ratios to come out to 1/6 about the same number of times (if they are patient enough).
'Sampling every 1 s' is an easy sampling pattern to model. 'Pressing when I feel like it' is a hard pattern to model. If we suppose that the human being does not introduce some new kind of randomness into the world - if we suppose that we are a complex, Natural, electrical system24 - we could roughly simulate the working of the brain. Viewed as a complex system producing an output at irregular intervals, we could model it as a second complete system of the above kind. Linked electrical sections of the brain (neurons) give a chaotic variation in a voltage at one point in the brain, acting as a Poincaré variable. Each time this variable, say, rises above 0.2 mV, the person presses the button; he also gets an inclination ("I feel like pressing it now") to press the button.


So, in a sequence very difficult to predict, the first system has its button pressed25. Run after run, the computer simulation gives output ratios of 1/6 for each output.
We can also link the sampling to other parts of the natural world which seem to display this characteristic feature. We could link it to another identical system, so arranged that when more than 1000 atoms hit the second sensor in 1 s, the second system outputs the command to push the button on the first. Alternatively, we could press the button if a Geiger counter detects more than 100 counts from a radioactive Cobalt-60 sample in the preceding second. One part of the Physical world, generating a certain kind of varying variable and linked to a sensitive processor, when it interacts with another such part, tends to produce, 99.9999% of the times, a characteristic patterned output.


The computer simulation shows that certain systems, even when micro-deterministic, could develop the kind of characteristic behaviour which we have agreed to call 'chancy'. We conjecture that these systems include not just the real case discussed above, but coin tossing, roulette, and other classic systems.
In each such system:
(i) There is a conjecturally deterministic primary system . Typically because it contains large numbers of moving independent molecules, governed by processes including positive feedback, it generates a physical quantity, the Poincaré variable, which displays characteristic Poincaré variation (the ratio of occurrence of one small interval of values is very similar to the ratio of occurrence of the adjoining small interval, over an infinitely long sequence of tests). This variable is the input to the next system.
(ii) The Poincaré variable is fed into the secondary system , which is a processor , which processes the varying input in a characteristic way, being sensitive to small changes in the Poincaré variable. It then produces a selection of specific outputs .

The vital feature of this description is that objective features of the world have merely led to further objective features. No aspect of the above involves reference to the limitations of human knowledge concerning the outcomes of the systems.
This completes our theory of chance1 - objective probabilityx (see diagram below).


The simulation model shows that micro-randomness can emerge from a micro-deterministic system. This fills in the step before Poincaré (see also Khintchine (quoted in Gillies)), who argued on from micro-randomness to macro-randomness26. We do not need to answer the objection that you can't get chances out unless you put them in. Unless the critic can explain in what sense he means 'can't', we can merely point to the above passage, and indicate that it shows that chances do, and therefore can, emerge from deterministic systems. This short-circuits any attempt at a conceptual argument that this is somehow impossible (perhaps because of the meanings of the words involved).
We are not claiming that personal probabilityx, reasonable degree of belief or betting quotient, emerge from the Physics of the external world; these concepts are person-relative, arising when we face events we have difficulty in predicting. We are claiming that the characteristic property of systems in the world, which leads them to produce such events, could be a predictable consequence of deterministic Physics. We suggest that our simulation presents the reader with an example defying his faith. To just state that this cannot be happening is like stating that a rocket cannot accelerate in space, because it has nothing to push against, even when presented with evidence of a rocket accelerating.
Where is the fault in the simulation? We suggest that it provides evidence that chancy1 behaviour can be generated from entirely deterministic micro-behaviour.
'Getting probabilityx' is vague. We cannot expect to get the concept of chance1 from a simulation; that is a category mistake. We can try to get chancy1 behaviour from a simulation.

Have we made some assumption about the chances of the presses? Not necessarily. The system, following deterministic laws, generates a sequence of numbers if pressed every second . This pressing is not chancy - it could be done by a mechanical clock. No part of the system, nor of the pressing, is smuggling chance or randomness into the system.
But couldn't it happen, by chance, that the pressing gives a 5 every time? Certainly, but this does not imply that we have failed to devise a chancy system; this is a typical property of a chancy system. Once again we are confusing the ontology with the methodology. The chanciness of the system is perfectly compatible with a particular observed finite sequence being apparently not chancy.

Is the preceding rather scientific section on molecules and chaos actually necessary to the view?
Maybe not. Suppose that the world had turned out, as far as we can tell, to be tychistic. Suppose, in other words, that the die tossing, the gas experiment, and so on, all behaved as they do on Earth, but that our further investigations of the structure of these systems did not unravel any deterministic laws. We could see that there were 6 outcomes, but beyond this we could not get. Either there seemed to be no lower-observability parts of the systems, or, if there were, they seemed to be all vague, loosely defined, and indeterministic. Perhaps we had reason to suspect that human ingenuity would be permanently blocked from discovering what, if anything, was actually going on.
We could still propose that the system was objectively chancy1, and that the value of the chance property attached to each outcome was 1/6. There need be no suggestion that the chance was a measure of human degree of belief, or betting quotient. We would be in the position of Galileo saying that a sheet has some colour property, which causes it to produce in us the sensation of whiteness; this is an objective property - but he had no idea what was causing or underpinning it. The chance would be a property of the system, just like any other property.
If we later found that the system actually had some hidden variables , whose variation was causing the chance1 property (as shown by computer models), then this would not alter our description of the system as possessing objective chanciness. And if instead we did not find this, then we could reasonably stick with the view that, say, Nature is essentially tychistic, such that randomness and chance1 outcomes are a fundamental aspect of macroscopic Natural systems because chanciness is built into them (what could be called 'genuine indeterministic chance').

Does our computer model approach fit with the basic theory of statistical mechanics and thermodynamics?
(See e.g. Sklar, Physics and Chance)
(i) Will our model explain how the gas reaches the equilibrium state in which gases in the actual world are mostly found?
(ii) Will our gas display Poincaré recurrence? At some stage it will return exactly to the state it was in when we released all the molecules. Does this matter? Does this disobey the law that entropy increases?

Our gas will also display time reversal symmetry, such that it could run backwards in time, consistent with physical laws. In other words, we do not establish an arrow for time. Does this matter?
Our inclination at present is to suggest that the laws of thermodynamics are themselves probabilistic, in the sense that they are not precisely true. What is our evidence that all systems always, to any degree of precision, tend to increase in entropy? We see no reason why a gas in a container should not start with its molecules all along one side, about to travel in the x-direction at a constant speed (low entropy), proceed, as the simulation indicates, rapidly to a state of disorganisation (increasing entropy), continue for some variable length of time in various disorganised states, and then, for an instant, return to its initial state (low entropy), and then begin the cycle again. Do we have any evidence that this does not happen? It seems a reasonable sequence of events.
Consider the classic experiment of James Joule, based on one by Gay-Lussac: a gas is in one of two containers, linked by a tube, but separated by a valve. The second container is evacuated. When the valve is opened, Joule found that the gas tended to fill both containers, though there was no change in the overall energy of the system. Entropy has increased, and the Second Law implies that the process is not reversible. But again, we could harmlessly suppose that the law is a version of our Quarantining of Inductive Doubt - it is saying that we do not tend to observe very low-chance1 events. We could say this, and still say that, according to our theories, they will occur.
Poincaré recurrence for a gas in our container will occur, in our simulation, after a certain amount of time. But if, more realistically, we change some tiny aspect of the container, or motion of some particles, before this happens, then the recurrence is spoiled. Such a change every hour will ensure that recurrence never occurs. In the real world, either as a result of the bombardment, or because of external changes, this is just what happens. The tiniest change in any aspect of the external Universe will produce this effect, because of, for example, the change in the gravitational force on the molecules. We conclude that the periodic recurrence of the behaviour of a gas in a container only occurs in an oversimplified model, not in the real world.

Hurdle 4: Can we explain what the chance that we associate with a system amounts to, other than the infinite sequence ratio? What is a chance?

Some systems, governed by positive feedback of small changes, tend to behave in a characteristic way. The system's tendency to behave in this way could be described as a property of a characteristic Kind (associated with a ratio) - the system has the power, or property, or quality, of tending, in an infinite series of independent tests, to generate a limiting frequency, and randomness. This objectivex aspect of the system, which causes it to behave in this characteristic way, is referred to, in everyday English, as 'chancex' ('games of chance'), or as 'probabilistic'. We will call such systems 'chancy1' (this being an ugly version of the more familiar 'probabilistic'.)
This aspect is objectivex, in that its existence has nothing to do with degrees of belief. The characteristic property, used to identify the Kind, is that:
(i) the outputs in the short term are hard for humans to predict - they seem to hop about lawlessly - yet:
(ii) the outputs in the long term appear to be governed by lawlike ratios we can associate with them.
To say that a system has a characteristic property is to say no more than "They do certain things, make certain things happen, in certain situations". When it is not in these situations, although it is not doing these things, it retains the ability to do them, when its situation changes. How we describe this is up to us. The particular choice of language of 'qualities' has always created difficulties. Some systems behave in a certain characteristic way in some situations. To say that this is because they possess a certain property, makes us seem to be explaining the behaviour, when we are doing no more than locating it. We are just finding a different way of describing what the system can do.
The characteristic short and long-term pattern in certain events, and the tendency of certain systems to produce this pattern, exist and have been explained. There is no further answer to the question "What is chance1 ?".

'Chance1 outcomes', the highest-observability objective aspect of chance, are the aspect of the world which is displayed by the outputs of a system of the Kind described above. A superbeing would regard our six outcomes as similar in a characteristic way. She would suggest the coining of the term 'chance1' to refer to the occurrence of a particular one of the six outcomes. In general, she would suggest the use of the word, objectively, to describe a particular outcome of a system which, with a very small change in its initial conditions - where such changes are occurring - would have led to another, considerably different, outcome. She classifies the outcome as 'the result of chance1'; the 5 is obtained 'by chance'.
She also suggests the use of the word to refer to the feature of the situation that, over finite repeated outcomes, using the Improbability Presupposition, a particular outcome will tend to be recorded a certain specific proportion of the times (the limiting frequency); the asymptotic ratio resulting from an infinite sequence of tests gives the true chance1. Thus she now refers to an outcome which has a chance1 of occurring of 0.5 [27].
The superbeing has identified a particular interesting kind of situation, characteristic of the present state of the Earth28, which she calls 'chancy1 '. She would be able to assign the same chances1 to the outcomes of trials of the die in a situation, even though she knows what all the outcomes are going to be.

Could we have done without chances, and made do with relative frequencies? We cannot, because the relative frequencies are not the property, they are the result of the property, that the system has. True, when we refer to the chance1 of a certain output being 0.5, we are referring to the relative frequency of its appearance in a putative infinite series of independent tests. This series cannot be undertaken. But when we refer to the system as 'chancy1', we are referring to a property that the system has, the property of generating outputs which hop about in the short term, yet obey certain relative frequency laws. The property is not a relative frequency; it is about relative frequencies that will ensue, if tested. The relative frequency is the outcome of the property displayed under test.

The chance in a coin-tossing system is not a property of the coin, it is a property of the system, including, for example, the skill of the person doing the tossing (a trained person could learn the skill to control the toss so as always to throw a head). This is a non-localised relational property .
Finally, therefore, there is no extra property called 'chance', in addition to the 'power to generate limiting relative frequencies in infinite sequences of outcomes of tests'29.

Hurdle 5: Can we explain how chances seem to vary, depending on the choice of outcome space?

If we regard the die as having 6 outcomes, then the chance of getting 5 is 1/6; but if we regard it as having 2 outcomes - 5 or not-5 - then the chance of getting 5 is 1/2.
This hurdle derives from the classical ignorance theory of probability, in which outcomes are assigned equiprobability. In Dual Theory the chances derive from the nature of the system, and do not display this alarming variability. Suppose that, after a brief look at the system, we conjecture that C1: the chance of getting a 5 is 0.5. We then find that, after the machine has displayed 6000 numbers, only 1010 are 5s. Using Cournot's Rule, this disproves the hypothesis C1.
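The rejection step can be made quantitative with a standard tail calculation. The arithmetic below (a normal approximation to the binomial) is our illustration, not anything in the text beyond Cournot's Rule itself: under C1 the chance of as few as 1010 fives in 6000 displays is fantastically small, so C1 is rejected.

```python
import math

def cournot_test(k=1010, n=6000, p=0.5, threshold=0.01):
    # Chance, under the hypothesised p, of observing k or fewer
    # successes in n trials (normal approximation, with continuity
    # correction).
    mean = n * p
    sd = math.sqrt(n * p * (1.0 - p))
    z = (k + 0.5 - mean) / sd
    tail = 0.5 * math.erfc(-z / math.sqrt(2.0))
    # Cournot's Rule: an outcome of sufficiently low chance counts as
    # disproving the hypothesis that assigned it that chance.
    return tail < threshold
```

With the figures in the text the tail chance is vanishingly small and C1 is rejected; with, say, 2990 fives in 6000 displays, the tail chance is large and C1 survives.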

Hurdle 6: Can we explain the use of conditional probabilitiesx ?

Chances are not relative to evidence; they are inhuman; they do not change as a result of changes in human background knowledge. There is no such thing as a conditional chance1.
However, a human conjecture as to the value of this chance1 does change, as a result of changes in evidence. And the human degree of belief that the value of the chance1 is true varies with the extent of evidence. Thus the degree of belief in a particular outcome is, for two reasons, relative to evidence, and human. There is conditional degree of belief.

Hurdle 7: To what extent do we have evidence that any real systems are approximately chancy?

(a) Outcome evidence

With the help of our concept, humans conjecturally classify some systems as examples of the Natural Kind called 'chancy1 ', on the basis of evidence of behaviour, and of structure. This conjecture cannot be proved true, just like many conjectures in Physics . Its claim is low-observability, not in the sense that we are conjecturing the existence of a low-observability entity, but in the sense that the truth of a universal law is low-observability, because it has an infinite number of consequences. We cannot prove beyond doubt that a chance is present , because, in claiming that it is, we are making a claim concerning an infinite sequence of outcomes.
Owners of casinos base their livelihood on the conjecture that their games of roulette, poker, and blackjack, are systems of this Kind - systems possessing the chance property30.
More precisely, they base their livelihood on this conjecture, combined with an unjustified presupposition, or prejudice - the prejudice that in the finite sequence of outcomes, generated by a week of play, the ratios of outcomes will be close to those associated with the conjectured chance property, and the prejudice that the finite sequence will display no pattern humanly-computable by place selection31. If they are reflective owners, they will be aware that this presupposition could be proved false, for a day, a week, or a year; in this event, they will go out of business. But they presume that the finite sequence will be a fair sample of the infinite one. By making this presupposition they are Quarantining Inductive Doubt, as all humans do in their everyday life.
They therefore presume that the finite sequence obtained each week in their casino will be typical , that it will be a fair sample of the infinite sequence, that Nature will not mislead, and deceive, them.
They accept that the ratiof in the finite sequence will not be exactly the ratioi of outcomes predicted for the infinite sequence. But they presume that the ratiof will only differ from the ratioi by an amount no greater than one would expect if the sequence had been selected by chance from all possible sequences, and that the chance of selecting that one was more than, say, 1%. In other words, they are presupposing that a sequence-event whose chance of occurring (in a human-selecting-sequence system) is less than 1/100 will not have occurred when they were sampling. In brief, they presume that events with a very low chance of occurring did not occur when they were sampling. This, as we have seen, is Cournot's Rule.

(b) Structural evidence

Some features of a system may lead us to conjecture that it will display chancy behaviour, even without outcome evidence. These features are those, for example, displayed by roulette, cards, coins, and dice. By studying the die-human-hand-table system, we conjecture that it has, say, 6 outcomes. We also conjecture that, if the hand is more than about 1 m above the table, then very small variations in the initial position and velocity of the die, will lead to considerable variation in the outcome. We hence conjecture that no pattern will emerge in the outcomes. We also conjecture that, in the long run, each outcome will occur 1/6th of the times.
We have observed that the chancy property of a system is linked to objective aspects of the structure of the system: the outcome of a roulette wheel hops around, in the short term, between rouge and noir, but in the long term we notice a rough ratio of 1:1 starts to appear; we also notice that the structural ratio of red to black slots in the wheel is 32:32, or 1:1. We do not judge that this is a coincidence.

Hurdle 8: Suppose that a die has been thrown, at t = 0, and a 5 has just been obtained. What was the probabilityx of this event occurring? {Was it, for example, 1/6, or 1?}

The chance of the 5 being obtained was 1/6. Our state of knowledge of the outcome is irrelevant, since the chance is a physical property of the system.
If, say, the initial conditions, and deterministic laws, caused the outcome to be entirely determined, then at t = -0.1 s the 5 was determined to occur - that event was going to happen. In other words, in an odd-sounding phrase: The 5 was definitely going to happen in that test, but the chance1 of getting a 5 in the test was 1/6.
The previous sentence, in ordinary English, feels contradictory. But a chance1 does not tell us whether a particular event is going to happen; it tells us, quite specifically, how often this outcome occurs in a long sequence of outcomes32. These are quite different claims.
My superbeing example below is an attempt to clarify this point; the superbeing, who can assess the initial conditions and do the calculations, can know for certain that the next test will give a 5 (at the instant of release), yet also say, consistently, that the chance1 of getting a 5 is 1/6.
We may persist: "Look, this is ridiculous. Are you saying that an insurance company can consistently say, on the one hand: "The chance1 of someone like you, who has signed on for Life Insurance at age 48, dying in the next year is 0.623" (hence your premium), yet also say, on the other hand: "We have no idea whether you will die in the next year"? Yes, we are - this is one of Von Mises' examples. The company knows very little about your individual circumstances - the equivalent of the initial conditions and laws in the die case. If you ask: "Look, am I likelyx to die in the next year?", the company can do no more than repeat what it said; 'likelyx' is just too vague - on the one hand, of 1000 people who sign on at your age, in general, about 623 die in the next year; on the other hand, you personally may well have an undiagnosed fatal disease, or be very accident prone, or whatever, and die tomorrow. Indeed, the murderer who is going to kill you tomorrow as a result of your past actions may be completing his final plans as you speak.
You could, in the search for a prediction as to how long you personally are going to live, try to narrow down the collective, alter the Unique Experimental Protocol. But either you will find that just as it gets interesting, the data runs out, or you will find that you end up, just as it gets interesting, with just you (in other words, there is no longer a collective).
We may still persist: "Look, what is a reasonable degree of belief in the claim 'I personally am going to die in the next year'? If it isn't 0.623, what is it?" This is a clear, meaningful, question. But unfortunately it is extraordinarily hard to answer, which is, of course, why people are fascinated by fortune-tellers, tarot cards, and astrology. Predicting the future life of a person, like predicting the future behaviour of one particular particle in the container, is beyond our present computational power. Perhaps one day it will be possible, in which case the time-traveller, or computer, will tell us that the reasonable degree of belief is 1; yet the chance1 determined by the insurance company will be unchanged33.

Hurdle 9: Can we explain how the system is to be specified?

If the system is specified too precisely, there may be no variation in the outcome. Doesn't ambiguity over the Unique Experimental Protocol (UEP) - the specification of the system - make the chance unacceptably variable, for a quality that is supposed to exist in the external world?
Consider coin-tossing. The system which we conjecture has a chance of 0.5 of giving a head needs to be specified. A normal person (i.e. one who has not specially trained in coin tossing) is tossing a coin more than 2 m above the floor, and projecting it upwards at least 2 m/s, with an angular velocity of 70 radians/s. This provides a UEP for the system.
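The role these launch figures play can be seen in a much-simplified toss model (in the spirit of Keller's analysis; the model, the symbols, and the convention that an even number of half-turns counts as a head are ours, not the paper's): the face showing depends only on how many half-turns the coin completes during its flight, so tiny changes in launch speed flip the outcome.

```python
import math

def coin_face(v, omega, g=9.81):
    # v: launch speed upwards (m/s); omega: spin (rad/s).
    # Flight time up and back down to the launch height:
    t = 2.0 * v / g
    half_turns = int(omega * t / math.pi)
    # Even number of half-turns: the coin lands as it started
    # (call that a head, by our convention).
    return "head" if half_turns % 2 == 0 else "tail"
```

At the UEP's values (v = 2 m/s, omega = 70 rad/s) the model gives one face; raise v by about a tenth and the face flips. Averaged over any smooth spread of launch conditions, the two faces come out near 1:1, which is why the loose UEP, not any exact launch state, carries the chance of 0.5.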
If we change the UEP, we change the system, and we change the chance. This is an aspect of the property of chanciness.
To take the classic example: if I am considered merely as a person aged 48, then my chance of dying in the next year, by reference to the relevant collective, is, say, 0.7. If instead, I am considered as a man aged 48, my chance of dying suddenly changes to 0.6. How can I have two different chances of dying?
I am placing myself into two different systems, like a coin in a system close to the bench (chance of heads is 0.9) and far above the bench (chance of heads is 0.5). In the first, the entities specified are just people. In the second, the specification has changed to men. The social/physical/biological system which, we conjecture, generates the collectives, differs. Since the chance is a property of the system which generates the collective, if the system changes, and the collective changes, then the chance changes.
There is no correct way to specify the system. I, as an individual as opposed to 'a person', 'a man', etc, have no chance of dying next year (as already discussed).
Suppose a deterministic macro-world. The specification of the system - the description of what aspects of it are to be repeated (to persist through change) - is as precise as it is. If it is very precise (S1), then the chance1 of getting a 5 will be 1; if our description of S1 includes not just the die, and the bench, but the exact position and velocity of the die as it is released, the air currents, the position of the Moon, and so on, repeats of the test would always give 5 ex hypothesi. S1, as specified, does not possess a chancy1 property.
If, instead, it is less precise (S2), such that our description merely specifies the die, and a human more than 1 m above a bench, and leaves all other aspects of the world unspecified, then we conjecture that repeats of the test would now give a 5 on 1/6 of occasions (in an infinite test sequence).
This is unproblematic.
Most outcomes of S2 are different; for example, the die ends up in a different position. What humans do is to regard "5 uppermost" as a Natural Kind, meaning that it is of causal significance in the world, in a way that "being 0.251 cm from the edge of the bench" is not.
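The contrast between S1 and S2 can be given a toy numerical illustration. The following sketch is my own, not a model from the text: a fully deterministic rule maps an initial condition to one of six outcomes, with such sensitivity that an exactly repeated initial condition (a precise S1-style specification) always gives the same outcome, while an initial condition left unspecified within a small interval (a loose S2-style specification) spreads the outcomes roughly evenly.

```python
import random

def outcome(x):
    """Deterministic 'die': maps an initial condition x to an outcome 1..6,
    with extreme sensitivity to x (magnification by a factor of 10**6)."""
    return int((x * 10**6) % 6) + 1

# S1: exact repetition of the initial condition -> no variation, no chance.
assert all(outcome(0.123456789) == outcome(0.123456789) for _ in range(100))

# S2: the initial condition is left unspecified within a small interval;
# repeats then spread roughly evenly over the six outcomes.
random.seed(0)
counts = {k: 0 for k in range(1, 7)}
n = 60_000
for _ in range(n):
    x = 0.1 + random.random() * 0.001   # tiny, 'arbitrary' variation
    counts[outcome(x)] += 1

for k in range(1, 7):
    print(k, round(counts[k] / n, 3))   # each close to 1/6, i.e. about 0.167
```

This is the mechanism of the Poincaré/Hopf/Engel arbitrary-function results gestured at earlier: almost any smooth distribution over the unspecified details yields the same stable relative frequencies.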

Hurdle 10: Wouldn't it be preferable to stick to reasonably definite, testable, things like degrees of belief (in the form of betting quotients, and utility)?

Howson writes, at the end of his review article (1995 p.27): "We clearly need a theory of objective probability, and science positively demands one", which, considering his important work on Bayesianism, is a weighty endorsement of the realist element in DT.
We have presented evidence for the existence of these chancy properties in systems, independent of human beings' existence and, a fortiori, of their degrees of belief. These properties are not 'as if' or 'in a manner of speaking'; they are conjectured to be real properties of Nature.
A way of emphasising the gulf that separates the broadly Positivist and Realist approaches is to consider the former's Principal Principle: this equates the value of chance, in the objective world, to that of a reasonable human degree of belief. If the chance of A is r, then the reasonable degree of belief in A is r. Hence one might try to derive the form, and the language, of chancesx from the subjective area of degrees of belief and betting quotients - without being committed to unacceptably low-testability chancy properties (metaphysics); one might hope to justify regarding (Howson 1995, p.25) "chance, as reasonable personal probability".
The Principle could thus be used by an Anti-realist as part of a theory according to which no objective chances exist in the external world - what exists are just unreasonable subjective probabilities, reasonable ones, and physical events. The value of chancesx is derived from the value of reasonable personal probabilities.
The danger of this view is that (Howson (1995) p.20) "the lack of an explicit argument ... for the existence of chance seems to leave the Principal Principle with an undetermined parameter 'the chancex of A', as the quantity which is supposed to determine our degrees of belief"; "a proof of existence ... is lacking here".
Unsurprisingly, the Principal Principle, on our Conjecturally Realist Dual theory, is misleading and unimportant. It merely expresses the aim, the hope, that mBe - the consensus degree of belief that an event will occur - should equal the objective chance1 Co that it will occur. We cannot know this, because the consensus cannot get its hands on methodology which infallibly gives 100% truth-credit to conjectures concerning chances - in other words, which establishes that mBc = 1. Therefore, in most situations, with limited evidence, we may conjecture that the true chance of an outcome is 0.5, when our degree of belief in this outcome is considerably less.

Hurdle 11: Does ignorance lead to equiprobability?

Suppose that we have no evidence at all about a system except that it has two possible outcomes, A and B. We have no relative frequencies, no structural evidence, no evidence of similar systems, nothing. This is not outcome ignorance - the short-term ignorance that is characteristic of a chancy system. This is system ignorance - ignorance of the nature of the system; the system could, for all we know, not be chancy.

What is our reasonable conjectured chance for the outcome being A? 0.5? No, this is unreasonable. We have no idea what the chance is.
What is our reasonable degree of belief in the outcome being A? In the absence of evidence, we do not reasonably believe either of them to any degree. After all, if we said 0.5, we could reasonably be asked why our degree of belief was not 0.1. What could we answer? Why don't we believe that the outcome is always going to be B? In this situation no choice is reasonable.
What is a reasonable betting quotient? In the total absence of evidence, if forced to bet, we can only guess. There is no fair bet, because we have no evidence to justify the fairness. The idea of 'fairness', as opposed to just guessing, is that we have made a conjecture as to the chance of the event; we have a long-term conjecture, despite our short-term ignorance. A 'fair' bet is then one which would ensure that, after an infinity of tests, the better-on has not definitely gained or lost money. Conditional on the Inductive Presupposition, it is one that would ensure that, after a fairly long sequence of tests, the better-on has not definitely gained or lost money. We conclude that while short-term ignorance of the next outcome is sometimes associated with equi-chance (in the card games and coloured ball selections of classical probabilityx), ignorance of the nature of the system is associated with neither equi-chance nor equi-reasonable degree of belief.

Hurdle 12: What happens if we interpolate two throws of a fair die into a sequence of throws of a heavily loaded one?

It could be objected that we are in the odd position of having to claim that the probabilityx of a 5 occurring on each of the two occasions of a throw of a fair die is equal to the probability of a 5 as estimated by the long-run relative frequency in the sequence in which those throws actually occur.
This is a successful criticism of a completely different, very unsatisfactory, theory - a kind of Actual Frequentism, in which chancex is a property of an event (such as 'Coming heads'), where the value of the property is determined by the actual frequency of the event in an observed sequence.
More importantly, if we regard 'the long-run' as being evidence for the relative frequency that would be obtained in an infinite sequence, then we have a successful criticism of a Hypothetical Frequentism, an anti-Realist version of our theory in which chancex is a property of a collective of events (ie. not of the system that generates it).
On our theory, there is no problem: system S1: {fair die, unaided human thrower releasing die more than 1 m above a table} has the chancy1 property of giving a 5 with chance 1/6, while system S2: {loaded die, etc} has the chancy1 property of giving a 5 with chance 1/2. Picking up the fair die immediately changes the system, and the chance1.

This completes my discussion of classic hurdles.

I now continue by illustrating the Dual Theory using some defined terms, and considering how the parts of the theory vary, as viewed by a superbeing, a cleverbeing, and a human being.

Further Notation, and Various Beings

We use Co ('o' for 'objective') to refer to the chance1 of a 5 being output on the next press of our device. It may be 1/6.
We use Cc to refer to the humanly conjectured value of the chance.
We use iBc ('i' for 'individual') to refer to an individual human's personal degree of belief, her betting quotient, that a chance1 conjecture is true. iBc can have any value between 0 and 1, regardless of the available evidence. The value of her individual degree of belief in the occurrence of a certain event, that a 5 will occur, iBe, follows as Cc X iBc. For example, she may be absolutely certain that the chance of a 5 occurring in the next press is 1/6 (iBc = 1), even though she has no evidence to support this. Or she may be 0.9 sure that the next press will give a 5 (iBe = 0.9) because of the evidence of the previous sequence of numbers, or because she is feeling lucky. iBe could be called her 'personal probability'.
Finally, we use mBc ('m' for 'methodological') to refer to the rough degree of belief in the truth of the conjecture (that the chance1 of 5 occurring is 1/6), given the evidence, prescribed by the present consensus. This is the reasonable degree of belief. The value of the consensus degree of belief in an event, say that a 5 will occur, mBe, follows as Cc X mBc.
The consensus is our touchstone of reasonableness. This is not ideal, but humanity has not yet been able to think of a criterion which is better. The 'reasonable degree of belief' is the degree of belief that the present consensus would have, given the evidence34.
Methodology is not yet an exact science; degrees of evidential support, though not perhaps merely qualitative (very poor, poor, OK, good, excellent), may well not be precisely quantitative either. Summarising, and attempting to justify, the consensus' rough degree of belief in a conjecture, given a certain amount of evidence - the task of Inductive Logic - is very difficult. This, however, applies to degrees of belief in all conjectures, not just those concerning objective chances35.
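Since the two products just defined are easy to confuse, the arithmetic may be worth setting out explicitly. This is only an illustration of the notation; the particular values of iBc and mBc are invented for the example.

```python
# The paper's notation, as variable names. Illustrative values only.
Cc = 1 / 6    # conjectured value of the objective chance of a 5 (Co may differ)
iBc = 0.2     # one individual's degree of belief that the chance conjecture is true
mBc = 0.1     # the consensus (reasonable) degree of belief in that conjecture

# Degrees of belief in the *event* (a 5 on the next press) then follow:
iBe = Cc * iBc    # the individual's 'personal probability' of a 5
mBe = Cc * mBc    # the reasonable degree of belief that a 5 occurs

print(f"iBe = {iBe:.4f}, mBe = {mBe:.4f}")   # iBe = 0.0333, mBe = 0.0167
```

Note that both event-beliefs fall below the conjectured chance of 1/6 whenever the belief in the conjecture itself falls below 1.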

Using this notation, we can express the following claims:

Cc = Co is the statement of the human aim that their conjectures should be true.
mBc = 1 is a statement of the human hope - not, we think, realised - that our consensus methods of justifying chance conjectures (the ones that define 'rational') are foolproof36.
iBe = mBe is a statement of the agreement between an individual's beliefs (judgment, and behaviour) and those sanctioned by the consensus as 'reasonable'. It states that the individual's degree of belief is reasonable.
iBe = mBe = Co is a statement of the human hope that the value of a personal probabilistic belief that an event will occur, if rationally supported, equals the true value of the objective chance1 of the event occurring.


To clarify the relationship between these terms, we now consider, in turn, a superbeing, a cleverbeing, and a human being, faced with a world which contains chancy systems.
Superbeing: The superbeing can either perceive all past and future events at one time, or perceive the finest details of all physical situations - the initial conditions - and then apply the true laws to predict its state at any later time. Either way, her evidence is accepted by the consensus to be conclusive. mBc, the reasonable (consensus) degree of belief, given the evidence, that her conjectured value for the objective chance1 is true, is 1.
Cc , her conjectured value of the objective chance1 of 5 occurring, is 1/6.
mBc, the reasonable degree of belief, for any being with this kind of evidence, that Cc is true, is 1 - as is mBe.
iBc is 1. iBe, her personal degree of belief, her betting quotient, that the next display will be a 5, can also be 137.
She can consistently say "the next display will definitely be a 5 (degree of belief = 1), and the chance1 of a 5 being generated is 1/6". The appearance of contradiction is due to the confusion of chance1 with degree of belief.

Law-but-not-initial-condition cleverbeing: This is a cleverbeing who can conclusively identify the value of the chance1, by evidence from study of the system, or by evidence from recording an infinite sequence, but cannot establish the initial conditions of each test sufficiently accurately to calculate the outcome - positive feedback, the butterfly effect, defeats him.
(i) Co is 1/6
(ii) Cc is 1/6 (his evidence indicates this value)
(iii) mBc is 1; the consensus degree of belief that the conjecture that the chance1 is 1/6 is true, given that amount of evidence, remains 1. So the reasonable degree of belief in, say, a 5 occurring in the next press, mBe, is 1/6.
(iv) iBc could be anything. He may have just developed an irrational obsession with the number 5, so that he is personally certain that the next press will give a 5.

Human being (1997 model):
(i) Co is 1/6.
(ii) Cc is 1/6 (his evidence indicates this value)
(iii) mBc is less than 1. The reasonable degree of belief that this conjectured value is the true value, depends, in some imprecise way, on the extent of the available evidence. Despite its best efforts, the human consensus cannot provide clear guidelines on what truth-credit to assign to a conjecture, given a particular amount of available evidence. In this case it might be assessed as, very roughly, 0.138.
(iv) The reasonable39 degree of belief, betting quotient, mBe, that a 5 will occur next, is therefore less than 1/6. It is Cc X mBc, which is, very roughly, 1/60 - or, more sensibly, distinctly small40.
(v) iBe, his personal degree of belief, could be anything. He might have looked at the system, pressed the button 10 times, obtained: 1, 4, 5, 3, 3, 2, 1, 5, 5, 5, and decided that the next press will definitely give him another 5, because he is gambling on it, and "he feels lucky". In this case iBe is 1, and is unreasonable.

If instead the individual had investigated the system more thoroughly, and the consensus judged that the new amount of evidence - extensive study of the system and extended relative frequency tests - gave the conjecture considerable truth-credit, then mBc, the methodological support, might be assessed as very roughly 0.99, in which case the reasonable41 betting quotient mBe on a 5 becomes very nearly 1/6.
This is the situation that humans always face when they are making conjectures about the world, on the basis of inadequate evidence - whether or not the conjecture concerns chances. In the simpler case, when the conjecture is a non-chancy1 fact or theory, the vague, rough, reasonable degree of belief mBc that the conjecture is true, given the evidence, is the only uncertainty involved, since the conjecture itself is a fact, or a theory, involving no reference to chances. mBe for the event occurring, as predicted by the theory, is therefore 1 X mBc, since if the conjecture is true, the event will occur.
In our, more complex, case, even if mBc were 1 (the cleverbeing case), the consensus still would not be able to predict the outcome of the event with certainty, because the hypothesis we now know to be true only gives the event a chance, mBe, of happening.
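For beings whose evidence bears on the chance but not on the particular outcome (the cleverbeing, and humans with varying amounts of evidence; the superbeing, who perceives the outcome itself, is a separate case), the arithmetic of the preceding cases is shared and can be tabulated. A sketch, using the text's rough values:

```python
Cc = 1 / 6   # the conjectured chance of a 5; assumed here to equal Co

# mBc for each kind of being, as roughly assessed in the text.
cases = [
    ("cleverbeing",               1.00),
    ("human, little evidence",    0.10),
    ("human, extensive evidence", 0.99),
]

for name, mBc in cases:
    mBe = Cc * mBc   # the reasonable degree of belief that a 5 occurs next
    print(f"{name:28s} mBc = {mBc:<5} mBe = {mBe:.4f}")
```

The table makes the point of the section visible: the conjectured chance never moves from 1/6, while the reasonable betting quotient on a 5 moves from 1/60 towards 1/6 as the evidence accumulates.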

We now return to element 2, the Epistemological part of the dual theory. We have argued that the characteristic methods used to obtain - I avoid 'justify' - degrees of belief in an hypothesis, given some evidence, are not specially relevant to chance hypotheses; these are just a special case. Nonetheless, we can now consider these methods in a bit more detail.


Judging the value of mBc, given a conjecture, and an amount of evidence, is a methodological, Epistemological, problem, the result of a difficulty in the human situation.
There are two ways humans can get evidence to support conjectures of a value for a chance1:
(a) (direct) study the situation closely, to find the characteristic features which make it of this Kind, and so behave this way
(b) (indirect) collect relative frequency evidence that the propensity is present.
The evidence will always be inadequate, for at least these three reasons:
(i) the careful study of a system can always leave vital features overlooked, so that our prediction of its behaviour turns out to be completely false
(ii) an ideal test sequence needs to continue independent tests to infinity
(iii) the experiment relies on the presumption that sceptical doubts are quarantined; otherwise finite sequences, however long, could be consistently misleading (ie. despite the true presence of the chance)
How good is such evidence? How much truth-credit does it give to the resulting conjecture? What is the chance1 that the conjecture is true, given the evidence?
There is no decisive set of justified methodological rules establishing values for mBc, for chance1 conjectures or for non-chance1 ones. The situations humans find themselves in are very varied, and not easily summarisable by rules. No generalisable methods govern the extent of truth credit indicated by direct evidence. Methods governing the truth credit indicated by ratios are the procedures of Hypothesis Testing in Statistics.
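As an illustration of those procedures (my sketch; the text does not commit itself to any particular method), here is an exact binomial test of the conjecture that the chance1 of a 5 is 1/6, given relative frequency evidence of k fives in n presses:

```python
from math import comb

def binomial_p_value(k, n, p=1/6):
    """If the conjectured chance p were true, how probable would a count
    of 5s at least as far from the expected n*p as the observed k be?
    (A small answer counts as evidence against the conjecture.)"""
    mean = n * p
    def prob(j):
        return comb(n, j) * p**j * (1 - p)**(n - j)
    return sum(prob(j) for j in range(n + 1) if abs(j - mean) >= abs(k - mean))

# 600 presses. 100 fives is exactly the expected count; 150 fives is about
# five standard deviations away, and tells heavily against the conjecture.
print(binomial_p_value(100, 600))   # close to 1
print(binomial_p_value(150, 600))   # very small
```

Such a test quantifies only how surprising the ratio evidence would be if the conjecture were true; turning that into a value for mBc still requires the consensus judgment the text describes.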
Is mBc subjective? It is the best assessment, by the consensus, of the justified degree of belief in a conjecture, the truth-credit the conjecture gains, given the amount of evidence. If it is regarded as the chance1 that a conjecture of that kind is true, given that amount of evidence, then it needs to be supported itself by meta-evidence of the success of such conjectures in the past. This pushes the problem of justifying extents of support to the meta-level.

How is rationality involved?

(a) Single values: The superbeing does not have to justify her propositions or beliefs concerning assignment of chances1, because she has decisive, direct, evidence; she knows them for sure, she is always right. Humans are not in this powerful, if slightly dull, situation. They want to make propositions concerning the value of the chances, but they only have indecisive, indirect, evidence. It is sufficient for them to justify/IP making a conjecture as to the value of the chance1, Cc - with some extent of justification, mBc. They can then make reasonable bets; they can have some reasonable degree of belief mBe that an outcome will occur.
The intellectual police cannot insist that the conjectured value Cc, nor indeed mBe, has to equal the true value of the chance1, Co. But they can insist that people do not pretend that the evidence strongly supports/IP a conjecture, if the consensus judges that it only very weakly supports/IP it; they can criticise an individual if iBe > mBe.
Unfortunately, humanity has found that establishing consensus guidelines for the amount of support (extent of truth-likeness, degree of belief) justified by a given amount of evidence, is very difficult. This is disappointing, but is not a problem for the Descriptive Epistemologist. He simply notes it, and passes on.
(b) Several linked values: The intellectual police can criticise people who are inconsistent in their assignments of chances1 (for instance, such that the conjectured chances of the six possible outcomes add up to more than 1). This is because such people, if committed to using language conventionally, are asserting both something and its contrary (in this case, both that a sequence of tests has a certain number of outcomes, and that it has more outcomes than this).

Don't Objectivists usually require Indeterminism?

If we started with a full description of the environment (the values of all relevant structural parameters at that time), which conditions (warrants) the personal degree of belief, and which is then unchanged by further conditioning (eg. by observed events), then we might seem to have to end up with output probabilitiesx of 0 or 1. The chance seems to be 0 or 1, because if the system is fully deterministic, and is fully specified, then it will have just one output, the determined state. To avoid this, Objectivists may try to include some indeterminism somewhere in the system. But this is unnecessary.
This is a widespread error. Pierre Laplace thought that if all events were physically necessary results of initial conditions and laws, then nothing could be probablex in itself - that probabilityx depended on ignorance. Writers state that in a deterministic world there would be no probabilisticx propensities - that all probabilityx is a way-station en route to real knowledge. Yet natural determinism is irrelevant to chance142.
A superbeing's full description gives iBe = 1, but still gives Co = 1/6. Each specific outcome is not only determined - the causal chains determine the exact output for any specific initial state - but also determinable by her. A human description does not obtain mBe = 1, because of our limitations. But both beings accept the same description of chancy events: infinitesimal changes in the initial conditions, at whatever time they were recorded, lead inexorably, by deterministic laws (in a chaotic system, displaying positive-feedback) to a certain Kind of variation in the Poincaré variable (the Poincaré variation), which inexorably leads, via a certain Kind of sensitive processor, to a certain Kind of output variation. No indeterminacy exists in the external world - yet the output shows a characteristic quality, identifiable by a superbeing, and such that limited humans are unable to predict specific outcomes. Each specific outcome is determined by the initial conditions, but it is not humanly determinable .
The world can thus be fully determinate, in the sense that each individual outcome is determined, governed by determinate laws acting on a system with certain initial conditions. At the same time, a feature of the world determines that such a system, repeatedly tested, would generate the characteristic short and long-term behaviour. Chance1 is a successful way of describing the outputs from this kind of system.

Is there a vicious regress in our approach, since chance reappears in the assessment of the degree of belief for a conjecture, which is itself about the assignment of a chance?

There is a reappearance. However, the onto-semantic analysis of chance1 is complete, before the epistemological analysis of degree of belief is undertaken. Therefore it is not vicious if chance1 reappears in this second analysis. Thus the reasonable extent of degree of belief in a conjecture, concerning the value of a chance in a system, could be partly based on an assessment of the chance, given structural and relative frequency evidence, that people get such conjectures right. This would need to follow the same rough criteria of such assessments. This is consistent rather than a vicious circle. If evidence began to lead us to think that our methodological guidelines were unsound, we would need to reconsider both basic and meta-assessments simultaneously.

If the degrees of belief in each outcome 1-6 all fall below 1/6, then they don't add up to 1. Does this matter?

There are two relevant degrees of belief to consider:
(a) mBc. This is the consensus degree of belief (betting quotient) that the conjectured chance is true.
(b) Cc X mBc. This is the consensus degree of belief that a particular outcome will be observed in the next test.

Values of mBc are assessments of the extent of evidence for the truth of the conjecture that the chances1 are 1/6. They are therefore bets on the truth of the conjecture that the chance1 of a 5 occurring is 1/6. The betting quotient on this truth can vary reasonably from 1 right down to 0. I can have a reasonable betting quotient of 0.1 that the true chance of a 5 occurring is 1/6 and also of 0.1 that the chance of a 4 occurring is 1/6, and so on.
Cc X mBc is different; it is betting not on chances but on outcomes - whether a 5 will appear or not.
So our question is: Can we consistently accept both of the following claims:
(i) I am sure that either outcome 1, 2, 3, 4, 5, or 6 will appear; my degree of belief in this composite outcome is 1.
(ii) I have very little evidence that my conjecture as to the value of Cc is true. I could easily have wrongly assessed the system. For all I know, the chance of 5 appearing is 0.99, or 0.11.
All that consistency requires, as summarised by the probability calculus, is that a complete set of beliefs on the outcomes 1 to 6 should add up to 1.
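That requirement can be put mechanically (my own sketch, with invented values). Beliefs over a complete set of mutually exclusive, jointly exhaustive outcomes are constrained to sum to 1; the beliefs mBc in the six separate chance conjectures are bets on six different propositions, and are not so constrained:

```python
def coherent(outcome_beliefs, tol=1e-9):
    """True iff a complete set of degrees of belief over mutually
    exclusive, jointly exhaustive outcomes obeys the probability
    calculus: each in [0, 1], and summing to 1."""
    return (all(0 <= b <= 1 for b in outcome_beliefs)
            and abs(sum(outcome_beliefs) - 1) < tol)

# Beliefs about the six possible outcomes of the next press:
print(coherent([1/6] * 6))                            # True
print(coherent([0.9, 0.02, 0.02, 0.02, 0.02, 0.02]))  # True: unreasonable, perhaps, but coherent
print(coherent([1/60] * 6))                           # False as a complete set: the missing
                                                      # mass lies on 'my chance conjecture is false'

# Beliefs mBc in the six conjectures 'the chance of outcome k is 1/6'
# concern six different propositions; their sum is unconstrained.
mBc_values = [0.1] * 6
print(round(sum(mBc_values), 2))                      # 0.6 - and that is fine
```

The sketch separates the two bets the text distinguishes: bets on outcomes, which the calculus constrains, and bets on the truth of chance conjectures, which it does not.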


Our problem was to provide an organised summary of the human uses, and intuitions, associated with the word 'probable' - to summarise explicitly and truthfully the implicit ideas which are guiding the usage. We were to suppose that people have some coherent ideas when they use this word, but we were not to assume that a single principle (concept) would suffice. We were to assess our theory by its (a) consistency (b) accordance with our uses and intuitions. We were not to presume that these human uses, even when regarded as typically reasonable, were justified - but instead to assess the extent of justifiability, be it high or low.

In this Dual description of aspects of probabilityx we have firstly explained chance, as a Physical property of a system. Degree of human ignorance, we have seen, is irrelevant to the description of the system; a superbeing would note exactly the same characteristic features of the system.
Secondly, we have described how humans obtain a reasonable degree of belief in the truth of any conjecture, and hence, in particular, in the chance1 of an outcome - using rough consensus guidelines on evidential support for conjectures. We have not tried to justify these guidelines.
We propose that this description solves many soluble extant problems in the Philosophy of probability.

Philip Thonemann

{For general references, consult pp.28-32 of Howson's excellent survey article (Howson 1995)}

Howson, C. (1995) 'Theories of Probability', BJPS 46, pp.1-32
Howson, C. and Urbach, P. (1993) Scientific Reasoning: The Bayesian Approach, Second Edition, Chicago, Open Court
Poincaré, H. (1905) The Foundations of Science, Science Press, Lancaster, Pa.
Engel, E. (1992) A Road to Randomness in Physical Systems (Lecture Notes in Statistics 71), Springer-Verlag
Von Mises, R. (1939) Probability, Statistics and Truth, London, George Allen and Unwin
Von Plato, J. 'The Method of Arbitrary Functions', BJPS 34, pp.34-47
Gillies, D.A. (1973) An Objective Theory of Probability, London, Methuen
Popper, K. (1959) The Logic of Scientific Discovery, London, Hutchinson
Popper, K. Conjectures and Refutations
Miller, D. (1994) Critical Rationalism, Open Court
Harré, R. Realism Rescued
Sklar, L. Physics and Chance

[Still to do: Sort out the references!]

Footnotes: (I don't think these can be in the body of the text in HTML...)

1 I am using the subscript 'x' to mean that the word is significantly vague, or ambiguous. Hence the subscript '1' means that I am now using the word in a more specific sense.

2 The unitary, perhaps linguistically essentialist, conjecture 'There is a single idea, unifying all human uses of the word "probabilityx"' is the core of a degenerating research programme.

3 Like a Physical theorist suggesting that an observation is mistaken, because it does not fit with her theory.

4 This is Complete Justificationism - a long-standing curse of Philosophy.

5 Howson, in his (1995) review, reckons that the key contemporary players are Bayesian theory of epistemic probability, and limiting relative frequency, propensity, prequentialist, and chance, theories of objective probability. Of these, I am not including the last two as sub-theories. Encouragingly, he writes (p.21): "a legitimate role for Von Mises' theory is that, combined with the Bayesian apparatus for constructing posterior distributions, it provides the final link between the model and reality".

6 Accepting (i) the certainty of border-region cases that it does not cover (ii) vagueness in various of the key terms in the description.

7 It will not, however, enable the alien to use the word 'chancex' in all everyday cases, where its use is vague and confused, and partly determined by context and empathy.

8 This is where we invoke a property, a propensity, to partner the relative frequency.

9 The series can be a function of time such that the relative frequency will not display this value. In this case, 'this machine or this being 1 + rest of system s' produces a system S1 which is not chancy, when 'another machine or another being 2 + rest of system s' produces a system S2 which is chancy.

10 Without this condition, there is no identifiable system, as an invariant, to have the various outcomes in tests (eg. the shape of the die).

11 eg. the air currents; the velocity of the throw.

12 We can be more specific, and say that if we have 100 tests, then the experimental ratio will lie in the range 1/6 ± some small value; if we have 1000 tests, it will lie in the range 1/6 ± some smaller value; and so on. Indeed, we can specify how often, in such a test run, the resulting ratio will lie outside these ranges.
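The shrinking of these ranges can be checked by simulation (my own sketch, with invented trial counts): estimate, at n = 100 and at n = 1000, the deviation from 1/6 that bounds the ratio of 5s in 95% of runs.

```python
import random

def typical_range(n, trials=1000, quantile=0.95, seed=1):
    """Simulate `trials` runs of n fair-die throws and return the
    deviation |ratio of 5s - 1/6| not exceeded in `quantile` of runs."""
    rng = random.Random(seed)
    devs = sorted(
        abs(sum(rng.randint(1, 6) == 5 for _ in range(n)) / n - 1 / 6)
        for _ in range(trials)
    )
    return devs[int(quantile * trials) - 1]

print(typical_range(100))    # roughly 0.07: the 'small value'
print(typical_range(1000))   # roughly 0.02: the 'smaller value'
```

The ranges shrink roughly as 1 over the square root of the number of tests, as the binomial calculation behind the footnote predicts.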

13 These commonsense descriptions were clearly expressed by Richard Von Mises, who developed (i) the definition of the collective (vaguely: chance as determining limiting relative frequencies) (ii) the idea that a physical system can be conjectured to have the property of tending to generate the collective (vaguely: chance as a propensity) (iii) the idea that this property can be initially defined any way we wish, because, like any other conjectured physical property, its appropriateness will be tested by experience of Nature (vaguely: 'chance as a theoretical entity'). We could alter the second description so that it merely refers to "1997 human inability to find a pattern". This would still define a perfectly respectable property of Nature - whose full description requires reference to a particular sensing being, just as 'whiteness' does. I am unsure if this gives any advantage, but it is a coherent option.

14 Thus preempting the criticism that Bayesians fail to provide justification for their helpful principled description (Miller (1994))

15 Howson, I suggest, uncharacteristically slips up when he writes (1995 p.16): "almost every hypothesis of use to Statistics is a priori declared false by it {Cournot's Rule}". This would only be true if the rule was interpreted non-methodologically, as a restriction within the onto-semantics (the model) on the possible consequences of the chance conjecture - as the claim that 2 5s is not a possible consequence of C1. Such an interpretation would indeed be incoherent interference with the model - which is why we have not considered it.

16 Howson tells us that the weak and strong laws of large numbers are logical consequences of Von Mises' axioms of convergence and randomness. They will not help us in our problem in this section. Howson, separately, hopes to prove, using Bayesian arguments, that (op.cit. p.18): "despite their infinitary character, Von Mises collectives satisfy a criterion of empirical significance". This enterprise, we can now see, is circular, because the presupposition IP that underpins the crucial theorem is the very one that we are trying to justify.

17 Or, possibly, 5 appears 360 000 times in a row - in which case the system would appear to be of no interest at all, being like looking at a die on a table which had 5 uppermost, recording its display, looking away, looking back, recording its display again, and continuing for a couple of months.

18 In other words, a + 6e produces output 1 again.

19 It may not be determinable by 1997 humans, but our extent of evidence - our degree of ignorance - is irrelevant.

20 As we have discovered in weather forecasting, tiny variations in the initial conditions lead to massive variations in the later state of the system.
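
This sensitivity can be seen in a deterministic toy system (a sketch using the standard logistic map, not a weather model; the initial values are arbitrary): two initial conditions differing by one part in a billion soon disagree by an amount comparable to the whole range of the system.

```python
def logistic_orbit(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1 - x), fully determined
    by its law and initial condition."""
    orbit = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.123456789, 60)
b = logistic_orbit(0.123456790, 60)   # initial conditions differ by 1e-9

print(abs(a[10] - b[10]))   # still small after 10 steps
print(max(abs(u - v) for u, v in zip(a[40:], b[40:])))   # later: large
```

The divergence grows roughly exponentially, so the tiny initial variation is amplified until prediction of the later state breaks down - the situation familiar from weather forecasting.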

21 Rather than always refer to intervals, we will return to referring only to particular values. This does not affect the argument.

22 He describes this property in terms of the analyticity of the probability function, where 'analytic' means that the slope of the function always exists and varies continuously. We avoid this formulation, but retain the concept.

23 We assume that a is the voltage associated with some value of pressure around atmospheric.

24 The consciousness of the being, the mind, may feel that something much freer is going on. This could be a delusion. Just as consciously undetermined, free actions (random slips of the tongue) may be the result of unconscious determined processes, so the self-consciously random button presses could be the result of a process in the neurons of the brain - one which does not generate any real randomness, but is just the kind of process being modelled by the second system.

25 As our Physical understanding of the brain develops, we might be able to substitute a better model here.

26 Which is interesting, but of no Philosophical significance.

27 We are assuming that she is fully aware of what every outcome of a test will be. This does not affect the usefulness of the natural classification 'chancy1' and the numerical measure 'chance1' to her.

28 The existence of these depends on the existence of gases, of rivers, of people, behaving as they behave on the planet Earth. If nothing displayed Poincaré variation, the concept of chance1 would have no reference.

29 Thanks to Rom Harré for making this point.

30 Again we follow Von Mises.

31 i.e. no gambler will be able to devise a system to beat the odds.

32 Von Mises makes this clear. He also makes clear his fear that it is difficult to understand - because of the hypnotic effect of everyday language.

33 Until such information causes people's behaviour to start changing, as it would presumably do.

34 It is open to individuals to try to change the view of the consensus. Thus Galileo's view of the degree of belief afforded by the evidence for the Copernican theory may have been inconsistent with that of the consensus at that time. His task, then, was to persuade the consensus to change. If he had failed to do so, then we would now regard him as a crank. But he succeeded, so we regard him as a great man. Whether we now judge him to have been 'reasonable' or not, is a measure of our own consensus view on the weight of his evidence.

35 We could, with some danger of regress, conjecture a value for the chance of us getting such a conjecture right, given such amounts of evidence. This takes us to the meta-level, at which we need to appeal to the consensus to judge what meta-evidence we have, from the history of investigations, as to the success-rate of such conjectures (when they had been made with this amount of evidence) - and hence, what truth-credit to assign to such a conjecture, what degree of belief or confidence.

36 i.e. not just 'reasonable, given the amount of evidence we have', but 'never fail'.

37 iBe could be 0.1, due to, say, an unreasonable crisis of confidence in her abilities.

38 He has not got much evidence.

39 'Reasonable' in the sense that consensus meta-evidence roughly suggests that conjectures of chances, based on that rough amount of evidence, tend to be right about 1 time in 10. This 'reasonable' is a matter of consensus human judgment in a situation of very limited information.

40 These numbers for mBc are misleading. The consensus realistically provides, at best, 'large', 'medium', and 'small' - or, at least, numbers with very large error bars.

41 A human who is judged, by a consensus, to have wildly overestimated the truth-credit supplied by the evidence, would produce a final subjective personal probability which would be larger than the reasonable one.

42 Donald Gillies writes (An Objective Theory Of Probability) that "probability theory is quite compatible with determinism". We can explain probabilities with an underlying deterministic theory. He rightly says (p. 136-7) that Khinchine's work, loosely following Poincaré, only shows how macro-random processes can be an amplification of micro-random ones. The question of what we mean by randomness is not answered by such work; nor is the question of how randomness originally arises.

Version History

Changes from v 2.8 (circulated June 1996)

1. Removed mistaken claim that Howson opposed a dual theory.

2. Added familiar 'long term' and 'short term' terminology (Rom Harré)

3. Included reference to Hopf and Engel on arbitrary functions (Brian Skyrms)

4. Restructured whole paper as Outline, 12 hurdles, various beings, and conclusion, reducing emphasis on Physical basis for chanciness in systems, since this is not essential to the dual theory. Removed two examples - horse racing and die tossing.

5. Removed all comparisons with contemporary theories, due to length.

Changes from v 5 (circulated March 1997)

1. Changed 'integrated theory' to 'dual theory' (John Welch)

2. Removed the potentially misleading terms 'objective' and 'subjective', substituting 'less subjective' and 'more subjective' (John Welch)

3. Corrected a serious inconsistency in, and considerably clarified, Element 2: Reasonable Degrees of Belief. The suggestion that chance and degree of belief could co-exist in Bayes' theorem is removed. In the process, I clarified how the Inductive Presupposition links both the chance of a consequence to belief in a consequence, and belief in a consequence to belief in a theory. (John Welch)

4. Changed 'probability is vague' to 'ambiguous' (Jane Hutton and John Welch)

5. Corrected an inconsistency in hurdle 11 on Ignorance and Equiprobability. If there is no evidence of the presence of a chance, then no reasonable degree of belief in a particular outcome exists, and no fair betting coefficient exists. (Again, thanks to John Welch)
