The word 'probability' is ambiguously used to refer to two
distinct ideas:

(i) Chances, which are conjectured to be a property of Physical
systems - a property, or propensity, independent of the state of
human knowledge.

(ii) Degrees of belief or betting quotients, which describe the
extent of human confidence in the truth of a claim.

Chancy systems are analysed following the approach of Von Mises and
Popper: Real Physical systems behave in various ways; to describe
some such systems' behaviour, humans devise a model in which a system
has the property of being able to generate, in the long term, an
infinite sequence of outcomes (a collective) with each outcome having
a particular relative frequency of occurrence. This relative
frequency provides the numerical value for a chance. In the short
term no pattern appears in the outcomes.
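This long-term/short-term contrast can be illustrated with a toy simulation (a sketch only: the pseudo-random die below merely stands in for a real chancy system, and the 60 000 tests for the beginning of a collective):

```python
import random

random.seed(1)  # fixed seed, so the illustration is reproducible

# A stand-in for a chancy system: repeated throws of a six-sided die.
outcomes = [random.randint(1, 6) for _ in range(60000)]

# Short term: the first few outcomes show no evident pattern.
print(outcomes[:12])

# Long term: the relative frequency of each outcome settles near 1/6,
# the value which the model assigns as the chance.
for face in range(1, 7):
    print(face, round(outcomes.count(face) / len(outcomes), 3))
```

No finite run fixes the limiting frequency, of course; the computation only displays the characteristic behaviour on which the concept is modelled.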

An outline of the Physics of Poincaré, Hopf, and Engel, on arbitrary
functions, is included, to indicate how systems, while fully
determined by laws and initial conditions, come to behave in this
characteristic way.

Subjective degree of belief has no necessary relationship to chance.
It is methodological, concerning what people judge to be a reasonable
extent of confidence in a claim, given the available evidence. The
gulf between chances and reasonable degrees of belief can be bridged
by an Inductive Presupposition, which itself seems to be
unjustifiable, but is in universal use.

This Dual Theory fits unproblematically with all our intuitions
concerning Probability. Traditional problems have arisen as a result
of excessive emphasis either on objective chances, or on subjective
degrees of belief.

(A Dual Theory of Probability)

24.3.1997; 17 512 words; Version 5.2

{I thank Rom Harré, David Papineau, Brian Skyrms, and John
Welch, for valuable comments on a previous version of this paper}

"Distant stars are moving away from us". "This coin will land
displaying a head". In everyday language, both of these claims are
described as 'probable_{x}'^{1}
(Footnotes are collected at the end of the essay), meaning that
people are not able to establish them as true or false. But, despite
this common feature, they are very different: the first only concerns
a human degree of belief (betting quotient) in a claim, given the
available evidence; it applies to all types of conjectures; this is
epistemic, more subjective. The second concerns both this, and a
conjectured feature of the natural world: 'chance'; this is
ontological, less subjective.

In everyday communication, context and empathy identify which idea we
intend. To emphasise one aspect, as an Objectivist or a Subjectivist,
is, though temptingly simple^{2}, to
oversimplify. Such an attempt is a 'sub-theory'. Consider: "People
have rational degrees of belief in **propensities**, given the
evidence of **relative frequencies**". Each of the highlighted
words and phrases is associated with a sub-theory. Since our Dual Theory includes part of each sub-theory, it cannot usefully be
categorised as either 'Subjective' or 'Objective'. It contains both
elements, working harmoniously side by side.

What is the Dual Theory a theory *about*? How can it be
assessed? Our evidence - the facts for our theory to explain - is,
firstly, the everyday uses of the word
'probability_{x}', and, secondly,
since we are typical users, our intuitions. Our problem is to provide
an organised summary of these uses - to summarise explicitly and
truthfully the implicit ideas which are guiding the usage.

Though we do not assume that there is *one* essential concept
present, we do assume that people have *some* coherent ideas
when they use the word
'probability_{x}'.

We therefore assess our theory by its (a) consistency (b) accordance
with our uses and intuitions. This is what is meant by 'analysing',
'unpacking', and 'giving an account' of, a concept.

The challenges that such a theory faces are typically that it cannot
make sense of a particular familiar intuition: say, the intuition
that I have a definite probability of dying in the next year, or the
intuition that a 5 has a probability of 1/6 of being thrown, not say
1/2 (because it can come up either 5 or not-5). The response that the
intuition is mistaken, can only be made with caution
^{3}.

Our primary aim is to find the truth about these uses and intuitions.
It is a reasonable subsidiary aim both to describe the facts of human
use and intuition in a simple unitary system, and to mark the
*limits* of such simplicity. But it is *not* reasonable to
insist on such simplicity. Whether it exists, or not, is an empirical
question.

Similarly, it is a reasonable subsidiary aim both to describe the
extent of justification for these human uses and intuitions - and to
mark the *limits* of such justification. But it is not
reasonable to insist that such justification must always exist
^{4}. Again, whether it does or not is
an empirical question.

**The Dual Theory**

The word 'probability

(a)

This is a matter of

(b)

This is a matter of

These apparently dissociated things have in common

This

Given the considerable number of Philosophers who believe that one or other of the parts

I now outline the two aspects of this theory.

Probability_{x} has an objective
aspect. Insurance companies, as Poincaré, for example, wrote,
successfully pay out dividends on the basis of
probabilities_{x}; they could continue
to do so, even if further information on the medical conditions of
their clients was provided by unscrupulous doctors - indeed, even if
*total evidence* was supplied. If probability_{x}
ascriptions were entirely subjective, dependent on human
ignorance - and therefore not perceived in Nature by a super-being -
then we would not be able to explain "Why chance obeys laws"
(Poincaré p. 403). Why are we, as non-super-beings, able to
use probability_{x} assignments in cases where effects are being
produced by certain kinds of causes, to "successfully foresee, if not
their effects in each case, at least what their effects will be, on
the average"?

Consider, he suggested, the Kinetic theory of gases: we are presently
unable to compute, given initial conditions at a certain time, and
physical laws, how many molecules would hit the side of a box 5
seconds later; we cannot even establish the initial conditions; yet,
oddly, the very complexity of the motions leads us to simple
predictions - which turn out to be true. Even if, with future
technology - perhaps as superbeings - we could do the computation,
and could establish the initial conditions - removing our ignorance -
the predictions based on randomness and equiprobability would still
be correct; chance would still obey laws. The natural system,
consisting of a large number of molecules in a box, has a property,
linked to the success of these predictions, which is independent of
human beings in general, and of their ignorance in particular.

What is this property? Humans have experience of many systems which
appear to have a characteristic Kind of behaviour: their outcomes,
while seeming to occur in approximately constant ratios in the medium
term, hop about unpredictably in the short term. Stimulated by this
experience, humans have developed the concept of a property, which
these systems might have. Without trying to make it absolutely
precise^{6}, we now give guidelines
for a meaning of the concept of
chance_{1}, sufficient, for example,
to help an alien from a non-chancy world to understand what we intend
to mean by the word^{7}.

The conjecture "The
*chance_{1}* of an output
of the system being 5 is 1/6" is taken to mean that we are
conjecturing that the system has a characteristic property that
displays

(i)

Chance

"To what extent does it apply to the external world?" is an important, separate, question. Compare defining the

This is the situation we are now in with chance

This is for later.

"Distant stars are moving away from us"

As already explained, the only thing that this methodological,
Epistemological, area has in common with the conjectures concerning
Nature in Element 1 is that both involve uncertainty. Philosophically
this family resemblance is unimportant. **The two elements are
conceptually unrelated.** This is a vital source of
long-standing confusion.

To what extent do we have confidence in a claim C, given indecisive evidence E (insufficient to establish that C is true or false)? It concerns the extent to which we would

Recalling our meta-methodology, we are aiming to describe, in as simple a way as possible, the judgements that people make. Then we are aiming to describe the extent to which these judgements are justified.

Firstly we distinguish

In this next section we discuss the best available principled description of human reasonable degrees of belief, which is that according to Bayes' theorem.

Degree of belief can be roughly classified as ranging from 0 (no belief) to 1 (total confidence).

(BTi) increases in our degree of belief in the claim, given background knowledge

(BTii) decreases in our degree of belief in the evidence, given background knowledge

(BTiii) increases in our degree of belief in the evidence, given the theory and background knowledge.
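The three factors can be put numerically (a sketch; the figures are invented, and `posterior` is simply shorthand for the reasonable degree of belief in the claim, given the evidence and background knowledge):

```python
# Bayes' theorem: P(C|E,B) = P(C|B) * P(E|C,B) / P(E|B)
def posterior(prior, likelihood, p_evidence):
    return prior * likelihood / p_evidence

p = posterior(prior=0.1, likelihood=0.9, p_evidence=0.2)
print(round(p, 4))  # 0.45

# (BTi)   a larger prior P(C|B) raises the posterior:
assert posterior(0.2, 0.9, 0.2) > p
# (BTii)  a smaller P(E|B) (less expected evidence) raises the posterior:
assert posterior(0.1, 0.9, 0.15) > p
# (BTiii) a larger likelihood P(E|C,B) raises the posterior:
assert posterior(0.1, 0.95, 0.2) > p
```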

As Howson and Urbach show convincingly in their (1993), this theorem satisfactorily summarises the facts of human use and intuitions concerning the triad {claim, evidence, reasonable degree of belief}. In particular, for example, it summarises the vital role of

Howson and Urbach also explain, with unusual clarity and firmness, that the theorem does not

Consider the successful prediction by Fresnel, using his new wave-theory of light, of the spot of light in the middle of the shadow of a small object. This novel fact prediction increased Physicists' degree of belief in the wave theory, following their intuition that the chance of a random theory with no truth in it successfully predicting this observation was very small.

Physicists were comparing the reasonable degree of belief in two meta-theories: ( MT

Summarising this use of Bayes' theorem: People are inclined not to believe in the truth of a theory if events which, according to the theory, have a very low chance of occurrence, and which therefore they would tend not to believe would be observed, have been observed.

This, we suggest, is well

This almost self-referential situation, combined with refusal to accept that justification may be impossible, combined with a tendency to use 'probability

To what extent are these two moves justified? Isn't a chance hypothesis consistent with any finite sequence of outcomes, however long? To what extent is it fair to bet 1:10 000 on e occurring in the next test, conditional on the chance of e being 1/10 000? It is not fair; it is not justified. But it is fair/IP and justified/IP. If our experiences are typical, a fair sample, of how Nature behaves, then: (i) we can reasonably/IP believe that we will not observe the occurrence of an event which has a very low chance of occurring (Predicting observed events). (ii) we can reasonably/IP disbelieve theories whose truth would require, unreasonably/IP, that our experience was untypical (Testing theories).

Substituting the first link into the second, we conclude that if an event occurs, then our reasonable/IP degree of belief in a theory according to which our reasonable/IP degree of belief in the occurrence of this event would have been extremely low, is extremely low (unless we have very strong other reasons for believing the theory). The two reasonable/IP degrees of belief are correlated. In notation:

This discussion need not be prolonged. It is heading, qualitatively, towards Bayes' theorem, which we have already agreed to be an elegant summary of a cluster of human intuitions as to what is reasonable/IP to believe. Howson and Urbach's admirable book provides many examples.

But why, it may be asked, did we go to this trouble, when we were already using Bayes' theorem at the beginning? There are two reasons: (i) If 'probability' is ambiguous, then the calculus of probability

We now turn to the classic hurdles which loom up, ready to trip a theory of probability

In the rest of this paper I indicate how this ** DT** jumps
the following hurdles:

(i) Are conjectures concerning chances

(ii) Can we explain the application of probabilities to single events?

(iii) Can we explain how chances have arisen, on the supposition that Nature is deterministic? Can we explain how some systems lead to disorder, and then back to some kind of statistical order?

(iv) Can we explain what the chance

(v) Can we explain how chances seem to vary, depending on the choice of outcome space? {If we regard the die as having 6 outcomes, then the chance of getting 5 is 1/6; if we regard it as having 2 outcomes - 5 or not-5 - then the chance of getting 5 is 1/2}

(vi) Can we account for conditional probabilities?

(vii) To what extent do we have evidence that any real systems are approximately chancy?

(viii) Suppose that a die has been thrown, at t = 0, and a 5 has just been obtained. What was the probability of this event occurring? {Was it, for example, 1/6, or 1?}

(ix) Can we explain how the system, and indeed the outcome, is to be specified? After all, if the system is specified too precisely, there may be no variation in the outcome, while if the outcome is specified too precisely, every one will be unique. Doesn't ambiguity over the Unique Experimental Protocol - the specification of the system - make the objective probability unacceptably variable, for a quality that is supposed to exist in the external world?

(x) Wouldn't it be preferable to stick to reasonably definite, testable, things like degrees of belief (in the form of betting quotients, and utility), rather than conjecturing the real existence of peculiar qualities of systems, which are not positively testable?

(xi) Does ignorance lead to equiprobability?

(xii) What happens if we interpolate two throws of a fair die into a sequence of throws of a heavily loaded one?

I hope that this list includes your favourite hurdles for a theory of probability

To what extent do we have **evidence** that there are some such
systems, at least approximately, in Nature? To what extent can we
establish values for the chances in these systems?

Since this concerns how conjectured chances are tested, it is a
**methodological** question.

What methods are used to test *particular* conjectures, such as
"This die has 6 sides"? This is **direct testing** ; observation
confirms or disconfirms it. This case is unfortunately not relevant
to us.

What about *general* and *theoretical* conjectures? This is
**indirect testing** ; testable consequences are deduced from
them. Verifying these does not, however, establish the conjecture as
true, since many other conjectures could have had the same
consequences. By observation of human behaviour (intuitions) we have
already discovered that humans make the completely unjustified
**Inductive Presupposition** (

It is, of course, the presupposition that we identified before, in our discussion of Bayes' theorem. But at that stage we were merely noting its presence in a description of human intuitions concerning the triad {claim, evidence, reasonable degree of belief}, as it applied to all claims. Now we apply it to claims concerning chances.

Conjectures concerning chances are made empirically testable by the application of Cournot's Rule (see, for example, Gillies (1973)). Criticism of this rule is misconceived. His rule is not a part of the Ontology or Semantics of chance; it is not a part of the less subjective

Thus our conjecture concerning a chance is indeed empirical, and is tested in a familiar-sounding way

Have we justified this procedure? No. We judge that the extent of justification is zero

Our unpleasant conclusion is that our best model of chancy systems produces consequences which are compatible with any finite observed sequence of outcomes. Chancy conjectures have no justifiable empirical significance. But this is not a scandal; they merely 'join the club' of other hypotheses which, being too distant from direct testing, suffer from the problem of 'Inference To The Best Explanation'. Human intuition carelessly leaps the logical gap.
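The familiar-sounding test can be sketched as follows. The observed data and the smallness threshold are illustrative assumptions; Cournot's Rule supplies the threshold by methodological convention, not by deduction from the meaning of chance:

```python
from math import comb

def binom_tail(n, k, p):
    """P(at least k successes in n trials) under the chance hypothesis p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Conjecture: the chance of a 5 on each throw is 1/6.
# Observation (illustrative): 50 fives in 120 throws.
tail = binom_tail(120, 50, 1/6)
threshold = 0.001  # fixed in advance; conventional, not deduced

print(tail < threshold)  # True: the observation counts as 'practically impossible'
# By Cournot's Rule we reject/IP the conjecture -- though, strictly,
# any finite sequence of outcomes remains logically compatible with it.
```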

The reader should find this unsatisfactory. Of course it is. But it is the truth about our human situation - our reach exceeds our grasp.

Following Von Mises and Howson (and not, for example, Miller
(1994)) we suggest that single events, if not regarded as part of a
collective, do not have chances_{1}
associated with them. I can coherently, of course, express a degree
of belief that I, as an individual, am going to die in the next year
- and *a fortiori* I can express a probability_{x} that I will die.
I can coherently conjecture a *chance* of dying if I consider
myself as a person, or as a man, or as a man aged 48, or as a man
aged 48 who takes a bit of exercise, but if I insist on cutting
adrift from all collectives, the sentence "I have a high probability
of dying in the next year" can only express a degree of belief.

Von Mises' chance_{1} cannot,
consistent with its meaning, apply to events at all, multiple or
single. A single event is generated by the system; a collective of
events is, hypothetically, generated by the system - it makes no
difference; the property chance_{1} is
a Physical quality of the *system* - the property of tending to
generate collectives. In other words, the question: "What is the
chance_{1}, (as a 40 year-old person
who has just signed on to the Life Insurance Company) of the event
'Me dying in the next 10 years'?" is a misposed question. This is
unsurprising: "What is the weight_{1}
of his troubles?", where 'weight_{1}' is as defined in Physics, is
similarly misposed.

In this case, we can continue to talk of our degree of belief that we
will die in the next 10 years; we may even have a betting quotient
associated with the degree of belief; *but this has nothing to do
with chance_{1}*.

In other words, if people ask: "What is the
*probability_{x} associated with the event* 'Obtaining 5 on the
throw of this die at *t*+1'?", they are making either a concealed
reference to the

Is there any reason why we might

(a) "Some patterns of everyday speech seem to have the form of an assignation of a probability

(b) "We would like chance

We here summarise the approach of Arbitrary Functions, due to
Poincaré. It provides a model for how the phenomena we call
'chances_{1} ' could arise naturally
in a deterministic world.

*The Experiment*

Fig.1

On the bench in front of us is a gas container, connected to an
electronic device, on which is a blank display. When we press the
button beside the display, a number appears. We press it a couple of
times; numbers between 1 and 6 appear. They show no immediately
obvious pattern; they hop about. We record them for 360 presses
(tests); each appears about 1/6 of the
time^{17}. We record 360 000 tests;
each appears very nearly 1/6 of the time. This is an interesting
phenomenon. We cannot predict individual outputs, despite our best
efforts to find some pattern to the sequences. But we seem to have a
physical law that we can use to predict ratios for large numbers of
outputs.

We now decide to study the system on the bench. We hope to devise a
Physical model of the system, to see if our model might display, in
the short and long term, the characteristic behaviour.

**The computer simulation of a model of the gas in the
container**

We find that the gas container has a small pressure sensor inside it.
This generates a voltage *V*, proportional to the
pressure detected. When we press the button, the device samples this
voltage. If it has value

Since the voltage is the key variable which links the two parts of the system, we call it the 'Poincaré variable'.

Which part of the system is responsible for the short-term unpatterned, yet long-term patterned, variation in output? If it is not the electronic processor (the

The gas contains about 10

Suppose, following Poincaré, that our first simulation is very simple - too simple. At

This failure is not surprising. Our idealisation left out

Sometimes, as when Galileo initially left out air resistance, an idealised model can still give very accurate predictions, covering all the main aspects of the phenomenon - leaving only details to be tidied up. But other times, especially when positive feedback, or non-linear equations, govern aspects of the system, leaving one apparently small factor out of the model can lead to inability to derive major aspects of the behaviour of the real system

We run our second, less idealised, simulation, on a more powerful computer, with more realistic walls - consisting of lots of tiny bumps and dips - and realistic interactions between the molecules. We find that if all the molecules start off in one direction, at one speed, then, after a certain number of collisions, the order is lost. Very soon "their final distribution has no longer any relation to their original distribution" (p.401). If the simulation is run on, with 1 000 000 molecules, we find that a simulated sensor in the corner indicates a characteristic kind of variation in number of hits - a variation which recurs however we set the molecules off. The immensely complex simulated behaviour of the more realistic situation has led, in one respect, to a law-like simplicity.

The simplicity is this. The simulated sensor repeatedly records the number of molecules that hit it in successive 1 ms intervals. In the first interval it records, say, 1000 atoms. In the next, it records 890. After, say, 100 s of recording these numbers, it processes its data, to find the frequency of occurrence of intervals in which particular numbers of atoms arrived. It finds that between 986 and 995 atoms arrived, in 100 of these intervals; between 996 and 1005 atoms arrived, in 102 intervals; between 1006 and 1015 atoms arrived, in 101 intervals. In other words, the number of times that similar numbers of molecules arrived, is very similar. The larger the number of molecules we simulate, the more accurately this holds true.
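The sensor's bookkeeping can be mimicked in a toy computation. Gaussian fluctuation about the mean count is an assumption here, standing in for the deterministic molecular dynamics of the full simulation:

```python
import random
from collections import Counter

random.seed(2)  # reproducible illustration

# Number of hits recorded in each of 50 000 successive intervals,
# fluctuating about a mean of 1000 (Gaussian stand-in, width 30).
counts = [round(random.gauss(1000, 30)) for _ in range(50000)]

# Bin the counts in tens, as in the text: 986-995, 996-1005, 1006-1015, ...
bins = Counter((c - 986) // 10 for c in counts)
near = [bins[0], bins[1], bins[2]]  # the three bins named above
print(near)

# Similar numbers of hits are recorded a very similar number of times:
assert max(near) / min(near) < 1.2
```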

Suppose that we cannot yet explain, using only our objective theoretical description of the situation, why there is this characteristic similarity in the number of times that one number of hits are recorded, and the number of times that very similar numbers of hits are recorded. This present failure does not imply that the characteristic similarity is

In our simulation, the ratio of occurrence of pressures in one interval, which we could label

Comparing the simulation to the real world, we find evidence, by direct measurement of the pressure on a sensor in a real gas, that this variation is indeed occurring. Now we recall that the sensor converts this varying pressure to a varying voltage. Therefore the voltage will also vary such that if

Suppose that

If therefore the electronic device continually recorded all the voltages received, it would end up with

Suppose instead that the device does not produce an output until the button is pressed. If

Returning to the real world, this fits with the human case, where people, whether trying to follow simple patterns of pressing (every 1 s), following highly complex patterns, or pressing when they feel like doing so, get the ratios to come out to 1/6 about the same number of times (if they are patient enough).
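This insensitivity to the pressing pattern, which is the heart of the arbitrary-functions approach, can be sketched in a toy simulation. The sawtooth voltage, its rate, and the random-gap model of 'pressing when I feel like it' are illustrative assumptions, not Poincaré's own equations:

```python
import math
import random

# A deterministic 'Poincaré variable': a voltage sweeping rapidly
# and repeatedly through its range.
def voltage(t):
    return (t * 1000.0 * math.sqrt(2)) % 1.0

# The processor: sensitive to small changes in the voltage, it maps
# each sampled value to one of six outputs.
def output(t):
    return int(voltage(t) * 6) + 1

def ratios(press_times):
    outs = [output(t) for t in press_times]
    return [outs.count(face) / len(outs) for face in range(1, 7)]

# Pattern 1: pressing regularly, every 1 s.
regular = [float(i) for i in range(1, 30001)]

# Pattern 2: 'pressing when I feel like it', modelled as irregular
# gaps of between 0.5 s and 2.5 s.
random.seed(3)
irregular, t = [], 0.0
for _ in range(30000):
    t += random.uniform(0.5, 2.5)
    irregular.append(t)

# Both patterns yield output ratios close to 1/6:
print([round(r, 2) for r in ratios(regular)])
print([round(r, 2) for r in ratios(irregular)])
```

The outputs come out near 1/6 for both patterns because the voltage runs through its whole range many times between any two presses, however the presses are spaced.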

'Sampling every 1 s' is an easy sampling pattern to model. 'Pressing when I feel like it' is a hard pattern to model. If we suppose that the human being does not introduce some new kind of randomness into the world - if we suppose that we are a complex, Natural, electrical system

Fig.2

So, in a sequence very difficult to predict, the first system has
its button pressed^{25}. Run after
run, the computer simulation gives output ratios of 1/6 for each
output.

We can also link the sampling to other parts of the natural world
which seem to display this characteristic feature. We could link it
to another identical system, so arranged that when more than 1000
atoms hit the second sensor in 1 s, the second system outputs the
command to push the button on the first. Alternatively, we could
press the button if a Geiger counter detects more than 100 counts
from a radioactive Cobalt-60 sample in the preceding second. One part
of the Physical world, generating a certain kind of varying variable
and linked to a sensitive processor, when it interacts with another
such part, tends to produce, 99.9999% of the time, a characteristic
patterned output.

**GENERAL PHYSICAL FEATURES OF SITUATIONS IN WHICH CHANCES
ARISE**

The computer simulation shows that certain systems, even when
micro-deterministic, could develop the kind of characteristic
behaviour which we have agreed to call 'chancy'. We conjecture that
these systems include not just the real case discussed above, but
coin tossing, roulette, and other classic systems.

In each such system:

(i) There is a conjecturally deterministic *primary system* .
Typically because it contains large numbers of moving independent
molecules, governed by processes including positive feedback, it
generates a physical quantity, the Poincaré variable, which
displays characteristic Poincaré variation (the ratio of
occurrence of one small interval of values is very similar to the
ratio of occurrence of the adjoining small interval, over an
infinitely long sequence of tests). This variable is the input to the
next system.

(ii) The Poincaré variable is fed into the *secondary
system* , which is a *processor* , which processes the
varying input in a characteristic way, being sensitive to small
changes in the Poincaré variable. It then produces a selection
of specific *outputs* .

The vital feature of this description is that objective features of
the world have merely led to further objective features. No aspect of
the above involves reference to the limitations of human
*knowledge* concerning the outcomes of the systems.

This completes our theory of chance_{1} -
objective probability_{x} (see diagram
below).

Fig.3

The simulation model shows that micro-randomness *can* emerge
from a micro-deterministic system. This fills in the step before
Poincaré (see also Knitchine (quoted in Gillies)), who argued
on from micro-randomness to
macro-randomness^{26}. We do not need
to answer the objection that you *can'tx* get chances out unless
you put them in. Unless the critic can explain in what sense he means
'can't', we can merely point to the above passage, and indicate that
it shows that chances *do* , and therefore *can* , emerge
from deterministic systems. This short-circuits any attempt at a
conceptual argument that this is somehow impossible (perhaps because
of the meanings of the words involved).

We are not claiming that personal
probability_{x}, reasonable degree of
belief or betting quotient, emerge from the Physics of the external
world; these concepts are person-relative, arising when we face
events we have difficulty in predicting. We are claiming that the
characteristic property of systems in the world, which leads them to
produce such events, could be a predictable consequence of
deterministic Physics. We suggest that our simulation presents the
reader with an example defying his faith. To just state that this
*cannot* be happening is like stating that a rocket cannot
accelerate in space, because it has nothing to push against, even
when presented with evidence of a rocket accelerating.

Where is the fault in the simulation? We suggest that it provides
evidence that chancy_{1} behaviour can
be generated from entirely deterministic micro-behaviour.

*Getting probability_{x}*

Have we made some assumption about the chances of the presses? Not necessarily. The system, following deterministic laws, generates a sequence of numbers

But couldn't it happen, by chance, that the pressing gives a 5 every time? Certainly, but this does not imply that we have failed to devise a chancy system; this is a typical property of a chancy system. Once again we are confusing the ontology with the methodology. The chanciness of the system is perfectly compatible with a particular observed finite sequence being apparently not chancy.

Maybe not. Suppose that the world had turned out, as far as we can tell, to be tychistic. Suppose, in other words, that the die tossing, the gas experiment, and so on, all behaved as they do on Earth, but that our further investigations of the structure of these systems did not unravel any deterministic laws. We could see that there were 6 outcomes, but beyond this we could not get. Either there seemed to be

We could still propose that the system was objectively chancy

If we later found that the system actually had some

(See e.g. Sklar

(i) Will our model explain how the gas reaches the equilibrium state in which gases in the actual world are mostly found?

(ii) Will our gas display Poincaré recurrence? At some stage it will reach

Our gas will also display time reversal symmetry, such that it could run backwards in time, consistent with physical laws. In other words, we do not establish an arrow for time. Does this matter?

Our inclination at present is to suggest that the laws of thermodynamics are themselves probabilistic, in the sense that they are not precisely true. What is our

Consider the classic experiment of James Joule, based on one by Gay-Lussac: a gas is in one of two containers, linked by a tube, but separated by a valve. The second container is evacuated. When the valve is opened, Joule found that the gas tended to fill both containers, though there was no change in the overall energy of the system. Entropy has increased, and the Second Law implies that the process is not reversible. But again, we could harmlessly suppose that the law is a version of our Inductive Doubt quarantine - it is saying that we do not tend to observe very low-chance

Poincaré recurrence for a gas in our container

Some systems, governed by positive feedback of small changes, tend to
behave in a characteristic way. The system's tendency to behave in
this way could be described as a *property* of a characteristic
Kind (associated with a ratio) - the system has the power, or
property, or quality, of tending, in an infinite series of
independent tests, to generate a limiting frequency, and randomness.
This objective_{x} aspect of the system, which causes it to behave in
this characteristic way, is referred to, in everyday English, as
'chance_{x}' ('games of chance'), or
as 'probabilistic'. We will call such systems
'chancy_{1}' (this being an ugly
version of the more familiar 'probabilistic').

This aspect is objective_{x}, in that
its existence has nothing to do with degrees of belief. The
characteristic property, used to identify the Kind, is that:

(i) the outputs in the short term are hard for humans to predict -
they seem to hop about lawlessly - yet:

(ii) the outputs in the long term appear to be governed by lawlike
ratios we can associate with them.

To say that a system has a characteristic *property* is to say
no more than "It does certain things, makes certain things happen, in
certain situations". When it is not *in* these situations,
although it is not *doing these* things, it retains the ability
to do them, when its situation changes. How we describe this is up to
us. The particular choice of language of 'qualities' has always
created difficulties. Some systems behave in a certain characteristic
way in some situations. To say that this is *because* they
possess a certain property, makes us seem to be *explaining* the
behaviour, when we are doing no more than *locating* it. We are
just finding a different way of *describing* what the system can
do.

The characteristic short and long-term pattern in certain events, and
the tendency of certain systems to produce this pattern, exist and
have been explained. There is no further answer to the question "What
*is* chance_{1}?".

'Chance_{1} outcomes', the most readily observable objective aspect
of chance, are the aspect of the world which is displayed by the
outputs of a system of the Kind
described above. A superbeing would regard our six outcomes as
similar in a characteristic way. She would suggest the coining of the
term 'chance_{1}' to refer to the
occurrence of a particular one of the six outcomes. In general, she
would suggest the use of the word, objectively, to describe a
particular outcome of a system which, with a very small change in its
initial conditions - where such changes are occurring - would have
led to another, considerably different, outcome. She classifies the
outcome as 'the result of chance_{1}';
the 5 is obtained 'by chance'.

She also suggests the use of the word to refer to the feature of the
situation that, over finite repeated outcomes, using the
Improbability Presupposition, a particular outcome will tend to be
recorded a certain specific proportion of the time (the limiting
frequency); the asymptotic ratio resulting from an infinite sequence
of tests gives the true chance_{1}.
Thus she now refers to an outcome which has a *chance_{1}* of
occurring of 0.5^{27}.

The superbeing has identified a particular interesting kind of
situation, characteristic of the present state of the
Earth^{28}, which she calls
'chancy_{1}'. She would be able to
assign the same chances_{1} to the outcomes of trials of the die in a
situation, even though she knows what all the outcomes are going to
be.

Could we do without chances, and make do with relative
frequencies? We cannot, because the relative frequencies are not the
property, they are the *result* of the property, that the system
has. True, when we refer to the chance_{1} of a certain output being
0.5, we are referring to the
relative frequency of its appearance in a putative infinite series of
independent tests. This series cannot be undertaken. But when we
refer to the system as 'chancy_{1}',
we are referring to a *property* that the system has, the
property of generating outputs which hop about in the short term, yet
obey certain relative frequency laws. The property *is* not a
relative frequency; it is *about* relative frequencies that will
ensue, if tested. The relative frequency is the outcome of the
property displayed under test.

The chance in a coin-tossing system is not a property of the coin; it
is a property of the system, including, for example, the skill of the
person doing the tossing (a person could train to
control the toss so as always to throw a head). This is a
*non-localised relational property*.

Finally, therefore, there is no *extra* property called
'chance', in addition to the 'power to generate limiting relative
frequencies in infinite sequences of outcomes of
tests'^{29}.

If we regard the die as having 6 outcomes, then the chance of getting
5 is 1/6; but if we regard it as having 2 outcomes - 5 or not-5 -
then the chance of getting 5 is 1/2.

This hurdle derives from the classical ignorance theory of
probability, in which outcomes are assigned equiprobability. In
**Dual Theory** the chances derive from the nature of the system,
and do not display this alarming variability. Suppose that, after a
brief look at the system, we conjecture that C1: the chance of
getting a 5 is 0.5. We then find that, after the machine has
displayed 6000 numbers, only 1010 are 5s. Using Cournot's Rule, this
disproves the hypothesis C1.
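The arithmetic behind this Cournot-style rejection can be sketched in a short simulation-free calculation (an illustration of ours, not part of the original argument; the figures 6000 and 1010 are from the text, while the use of a standard-deviation band to operationalise Cournot's Rule is our assumption):

```python
import math

# Conjecture C1: the chance of a 5 is 0.5; observed: 1010 fives in 6000 presses.
n, observed = 6000, 1010
c1 = 0.5

# Under C1 the expected count is n*c1, with binomial standard deviation
# sqrt(n*c1*(1-c1)). Cournot's Rule, methodologically read: treat an
# outcome many standard deviations out as a practical disproof.
expected = n * c1
sd = math.sqrt(n * c1 * (1 - c1))
deviation = abs(observed - expected) / sd
assert deviation > 10        # ~51 sigma: C1 is practically disproved

# The rival conjecture C2: chance = 1/6 fits comfortably.
c2 = 1 / 6
deviation2 = abs(observed - n * c2) / math.sqrt(n * c2 * (1 - c2))
assert deviation2 < 3        # well within an ordinary band
```

The observed count of 1010 lies roughly fifty standard deviations from what C1 predicts, but well within one standard deviation of what the 1/6 conjecture predicts.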

Chances are not relative to evidence; they are inhuman; they do
not change as a result of changes in human background knowledge.
There is no such thing as a conditional
chance_{1}.

However, a human conjecture as to the value of this
chance_{1} does change, as a result of
changes in evidence. And the human degree of belief that the
conjectured value of the chance_{1} is true varies with the
extent of evidence. Thus the degree of belief in a particular outcome
is, for two reasons, relative to evidence, and human. There is
conditional degree of belief.

**(a) Outcome evidence**

With the help of our concept, humans conjecturally classify
some systems as examples of the Natural Kind called
'chancy_{1}'.

Owners of casinos base their livelihood on the conjecture that their games of roulette, poker, and blackjack, are systems of this Kind - systems possessing the chance_{1} property.

More precisely, they base their livelihood on this conjecture, combined with an unjustified presupposition, or prejudice - the prejudice that in the finite sequences they actually observe, the long-term ratios will already be displayed.

They therefore presume that the finite sequence obtained each week in their casino will be representative of the limiting frequencies.

They accept that the ratio obtained in any particular week may nevertheless deviate from those frequencies.

We have observed that the chancy property of a system is linked to objective aspects of the structure of the system: the outcome of a roulette wheel hops around, in the short term, between rouge and noir, but in the long term we notice a rough ratio of 1:1 starts to appear; we also notice that the structural ratio of red to black slots in the wheel is 18:18, or 1:1. We do not judge that this is a coincidence.

The chance of the 5 being obtained was 1/6. Our state of knowledge of
the outcome is irrelevant, since the chance is a physical property of
the system.

If, say, the initial conditions, and deterministic laws, caused the
outcome to be entirely determined, then at **t** = -0.1 s
the 5 was determined to occur - that event was going to happen. In
other words, in an odd-sounding phrase: The 5 was definitely going to
happen in that test, but the chance_{1} of a 5 occurring was 1/6.

The previous sentence, in ordinary English, feels contradictory. But a chance_{1} is a property of the system, not a degree of belief about the next outcome, so no contradiction arises.

My superbeing example below is an attempt to clarify this point; the superbeing, who can assess the initial conditions and do the calculations, can know for certain that the next test will give a 5 (at the instant of release), yet also say, consistently, that the chance_{1} of a 5 is 1/6.

We may persist: "Look, this is ridiculous. Are you saying that an insurance company can consistently say, on the one hand: "The chance_{1} of a man of your age dying in the next year is 0.623", while denying that 0.623 is your personal probability of dying?"

You could, in the search for a prediction as to how long you personally are going to live, try to narrow down the collective, alter the Unique Experimental Protocol. But either you will find that just as it gets interesting, the data runs out, or you will find that you end up, just as it gets interesting, with just you (in other words, there is no longer a collective).

We may still persist: "Look, what is a reasonable degree of belief in the claim 'I personally am going to die in the next year'? If it isn't 0.623, what is it?" This is a clear, meaningful, question. But unfortunately it has no answer: there is no collective, and so no chance_{1}, for you as an individual.

If the system is specified too precisely, there may be no variation
in the outcome. Doesn't ambiguity over the Unique Experimental
Protocol (UEP) - the specification of the system - make the chance
unacceptably variable, for a quality that is supposed to exist in the
external world?

Consider coin-tossing. The system which we conjecture has a chance of
0.5 of giving a head needs to be specified. A normal person (ie. one
who is not specially trained in coin tossing) is tossing a coin more
than 2 m above the floor, and projecting it upwards at least 2
m/s, with an angular velocity of 70 radians/s. This provides a UEP
for the system.

If we change the UEP, we change the system, and we change the chance.
This is an aspect of the property of chanciness.

To take the classic example: if I am considered merely as a person
aged 48, then my chance of dying in the next year, by reference to
the relevant collective, is, say, 0.7. If instead, I am considered as
a man aged 48, my chance of dying suddenly changes to 0.6. How can I
have two different chances of dying?

I am placing myself into two different systems, like a coin in a
system close to the bench (chance of heads is 0.9) and far above the
bench (chance of heads is 0.5). In the first, the entities specified
are just people. In the second, the specification has changed to men.
The social/physical/biological system which, we conjecture, generates
the collectives, differs. Since the chance is a property of the
system which generates the collective, if the system changes, and the
collective changes, then the chance changes.

There is no correct way to specify the system. I, as an individual as
opposed to 'a person', 'a man', etc, have no chance of dying next
year (as already discussed).

Suppose a deterministic macro-world. The specification of the system
- the description of what aspects of it are to be repeated (to
persist through change) - is as precise as it is. If it is very
precise, (S1), then the chance_{1} of
getting a 5 will be 1; if our description of S1 includes not just the
die, and the bench, but the exact position and velocity of the die as
it is released, the air currents, position of the Moon, and so on,
repeats of the test would always give 5 *ex hypothesi*. S1, as
specified, does not possess a chancy_{1} property.

If, instead, it is less precise, (S2), such that our description
merely specifies the die, and a human more than 1 m above a bench,
and leaves all other aspects of the world unspecified, then we
conjecture that repeats of the test now would give a 5 on 1/6 of the
occasions (in an infinite test sequence).
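The contrast between S1 and S2 can be sketched with a toy deterministic map (our own construction, not from the text; the multiplier 1000 merely stands in for the positive-feedback sensitivity described earlier): each outcome is fully fixed by the initial condition, yet over a coarsely specified range of initial conditions each face appears about 1/6 of the time.

```python
def outcome(x):
    """Fully deterministic: the initial condition x fixes the face shown.
    The factor 1000 supplies the sensitivity: a change in x of 0.001
    can move the result to a different face."""
    return int(1000 * x) % 6 + 1

# S1: the initial condition is specified exactly, so every repeat is
# identical - the chance of a 5 is 1.
x1 = 0.8141
assert all(outcome(x1) == 5 for _ in range(100))

# S2: only a coarse range of initial conditions is specified; sampling
# that range finely, the 5 appears about 1/6 of the time.
n = 600_000
results = [outcome(i / n) for i in range(n)]
freq_5 = results.count(5) / n
assert abs(freq_5 - 1/6) < 0.01
```

No indeterminism is involved anywhere: S1 and S2 differ only in how precisely the system is specified, which is exactly the point of the passage above.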

This is unproblematic.

Most outcomes of S2 are different; for example, the die ends up in a
different position. What humans do is to regard "5 uppermost" as a
Natural Kind, meaning that it is of causal significance in the world,
in a way that "being 0.251 cm from the edge of the bench" is not.

Howson writes, at the end of his review article (1995 p.27): "We
clearly need a theory of objective probability, and science
positively demands one", which, considering his important work on
Bayesianism, is a weighty endorsement of the realist element in
**DT**.

We have presented evidence for the existence of these chancy properties in systems, independent of human beings' existence, and, in that sense, for the realist element in **DT**.

A way of emphasising the gulf that separates the broadly Positivist and Realist approaches, is to consider the former's Principal Principle: the principle that reasonable degree of belief in an outcome should be set equal to its known chance.

The Principle could thus be used by an anti-Realist as an implicit definition of chance_{x}, in terms of the degrees of belief it is supposed to constrain.

The danger of this view is that (Howson (1995) p.20) "the lack of an explicit argument ... for the existence of chance seems to leave the Principal Principle with an undetermined parameter 'the chancex of A', as the quantity which is supposed to determine our degrees of belief"; "a proof of existence .... is lacking here".

Unsurprisingly, the Principal Principle, on our account, is not a definition of chance_{1}: the chance exists independently, and the Principle merely expresses the human hope that reasonable degrees of belief track it.

Suppose that we have no evidence at all about a system except that it
has two possible outcomes, A and B. We have no relative frequencies,
no structural evidence, no evidence of similar systems, nothing. This
is not outcome ignorance - the short-term ignorance that is
characteristic of a chancy system. This is system ignorance -
ignorance of the nature of the system; the system could, for all we
know, not be chancy.

What is our reasonable conjectured chance for the outcome being A?
0.5? No, this is unreasonable. We have no idea what the chance
is.

What is our reasonable degree of belief, in the outcome being A? In
the absence of evidence, we do not reasonably believe either of them
to any degree. After all, if we said 0.5, we could reasonably be
asked why our degree of belief was not 0.1. What could we answer? Why
*don't* we believe that the outcome is always going to be B? In
this situation no choice is *reasonable*.

What is a reasonable betting quotient? In the total absence of
evidence, if forced to bet, we can only guess. *There is no fair
bet,* because we have no evidence to justify the fairness. **The
idea of 'fairness', as opposed to just guessing, is that we have made
a conjecture as to the chance of the event**; we have a long-term
conjecture, despite our short-term ignorance. A 'fair' bet is then
one which would ensure that, after an *infinity* of tests, the
bettor has not definitely gained or lost money. Conditional on the
Inductive Presupposition, it is one that would ensure that, after a
*fairly long* sequence of tests, the bettor has not definitely
gained or lost money. We conclude that while short-term ignorance of
the next outcome is sometimes associated with equi-chance (in the
card games and coloured ball selections of classical
probability_{x}), ignorance of the
nature of the system is associated with neither equi-chance nor
equi-reasonable degree of belief.
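The sense in which 'fairness' requires a chance conjecture can be put numerically (our illustration; the quotients chosen are hypothetical): at betting quotient q on an outcome of true chance p, the bettor's expected gain per unit stake is p - q, which vanishes only when q matches the chance. With no chance conjecture at all, no value of q can be defended as fair.

```python
def expected_gain(p, q):
    """Stake 1 at betting quotient q on an outcome of true chance p:
    pay q; receive 1 if the outcome occurs. Expected gain = p - q."""
    return p * (1 - q) + (1 - p) * (0 - q)

p = 1 / 6                                  # the true chance of a 5
assert abs(expected_gain(p, 1/6)) < 1e-12  # fair: quotient matches the chance
assert expected_gain(p, 0.5) < 0           # unfair against the bettor
# In total system ignorance there is no p to plug in, hence no fair q.
```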

It could be objected that we are in the odd position of having to
claim that the probability_{x} of a 5
occurring on each of the two occasions of a throw of a fair die is
equal to the probability of a 5 as estimated by the long-run relative
frequency in the sequence in which those throws actually occur.

This is a successful criticism of a completely different, *very*
unsatisfactory, theory - a kind of Actual Frequentism, in which
chance_{x} is a property of an event
(such as 'Coming heads'), where the value of the property is
determined by the actual frequency of the event in an observed
sequence.

More importantly, if we regard 'the long-run' as being evidence for
the relative frequency that would be obtained in an infinite
sequence, then we have a successful criticism of a Hypothetical
Frequentism, an anti-Realist version of our theory in which
chance_{x} is a property of a
collective of events (ie. not of the system that generates it).

On our theory, there is no problem: system S1: {fair die, unaided
human thrower releasing die more than 1 m above a table} has the
chancy_{1} property of 1/6 of giving
5, while system S2: {loaded die, etc} has the
chancy_{1} property of 1/2 of giving
5. Picking up the fair die immediately changes the system, and the
chance_{1}.

This completes my discussion of classic hurdles.

I now continue by illustrating the *Dual Theory* using some
defined terms, and considering how the parts of the theory vary, as
viewed by a superbeing, a cleverbeing, and a human being.

We use C_{o} ('o' for 'objective') to
refer to the chance_{1} of a 5 being
output on the next press of our device. It may be 1/6.

We use C_{c} to refer to the humanly
conjectured value of the chance.

We use _{i}B_{c} ('i' for 'individual'), to refer to an individual
human's personal degree of belief, her betting quotient, that a
chance_{1} conjecture is true. _{i}B_{c} can have any value between
0 and 1, regardless of the available evidence. The value of her
individual degree of belief in the occurrence of a certain event,
that a 5 will occur, _{i}B_{e}, follows as C_{c} X _{i}B_{c}. For
example, she may be absolutely certain that the chance of a 5
occurring in the next press is 1/6 (_{i}B_{c} = 1), even though she
has no evidence to support this. Or she may be 0.9 sure that the next
press will give a 5 (_{i}B_{e} = 0.9) because of the evidence of the
previous sequence of numbers, or because she is feeling lucky.
_{i}B_{e} could be called her 'personal probability'.

Finally, we use _{m}B_{c} ('m' for methodological), to refer to the
rough degree of belief in the truth of the conjecture (that the
chance_{1} of 5 occurring is 1/6), given the evidence, prescribed by
the present consensus. This is the *reasonable* degree of belief. The
value of the consensus degree of belief in an *event*, say that a 5
will occur, _{m}B_{e}, follows as C_{c} X _{m}B_{c}.
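The notation can be summarised in a short sketch (the numerical values for the beliefs are illustrative assumptions of ours; only the relation B_e = C_c X B_c is from the text):

```python
# C_c : the humanly conjectured value of the objective chance_1 (C_o).
# iB_c: an individual's degree of belief that the conjecture C_c is true.
# mB_c: the consensus ('reasonable') degree of belief that C_c is true.
C_c = 1 / 6

def belief_in_event(B_c, C_c):
    """Degree of belief that the event occurs: B_e = C_c * B_c."""
    return C_c * B_c

iB_c = 0.9          # an individual fairly sure the conjecture is true
mB_c = 0.7          # the consensus, given the same evidence (assumed value)
iB_e = belief_in_event(iB_c, C_c)
mB_e = belief_in_event(mB_c, C_c)
assert abs(iB_e - 0.15) < 1e-12      # 1/6 * 0.9
assert abs(mB_e - 0.7 / 6) < 1e-12   # 1/6 * 0.7
```

Here the individual is unreasonably confident (iB_c exceeds mB_c), so her personal probability of the event exceeds the consensus value.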

The *consensus* is our touchstone of reasonableness. This is not
ideal, but humanity has not yet been able to think of a criterion
which is better. The 'reasonable degree of belief' is the degree of
belief that the present consensus would have, given the
evidence^{34}.

Methodology is not yet an exact science; degrees of evidential
support, though not perhaps merely qualitative (very poor, poor, OK,
good, excellent), may well not be precisely quantitative either.
Summarising, and attempting to justify, the consensus' rough degree
of belief in a conjecture, given a certain amount of evidence - the
task of Inductive Logic - is very difficult. This, however, applies
to degrees of belief in *all* conjectures, not just those
concerning objective chances^{35}.

Using this notation, we can express the following claims:

C_{c} = C_{o} is the statement of the human aim that their
conjectures should be true.

_{m}B_{c} = 1 is a statement of the human hope - not, we think,
realised - that our consensus methods of justifying chance
conjectures (the ones that define 'rational') are
foolproof^{36}.

_{i}B_{e} = _{m}B_{e} is a statement of the agreement between an
individual's beliefs (judgment, and behaviour) and those sanctioned
by the consensus as 'reasonable'. It states that the individual's
degree of belief is reasonable.

_{i}B_{e} = _{m}B_{e} = C_{o} is a statement of the human hope that
the value of a personal probabilistic belief that an event will
occur, if rationally supported, equals the true value of the
objective chance_{1} of the event occurring.

Fig.4

To clarify the relationship between these terms, we now consider,
in turn, a superbeing, a cleverbeing, and a human being, faced with a
world which contains chancy systems.

**Superbeing** : The superbeing can either perceive all past and
future events at one time, or perceive the finest details of all
physical situations - the initial conditions - and then apply the
true laws to predict its state at any later time. Either way, her
evidence is accepted by the consensus to be conclusive.
_{m}B_{c}, the reasonable (consensus) degree of belief that her
conjectured value for the objective chance_{1}, given the evidence,
is true, is 1.

C_{c}, her conjectured value of the objective chance_{1} of 5
occurring, is 1/6.

_{m}B_{c}, the reasonable degree of belief, for any being with this
kind of evidence, that C_{c} is true, is 1 - as is _{m}B_{e}.

_{i}B_{c} is 1. _{i}B_{e}, her personal degree of belief, her
betting quotient, that the next display will be a 5, can also be
1^{37}.

She can consistently say "the next display will definitely be a 5
(degree of belief = 1), and the
chance_{1} of a 5 being generated is
1/6". The appearance of contradiction is due to the confusion of
chance_{1} with degree of belief.

**Law-but-not-initial-condition cleverbeing**: This is a
cleverbeing who can conclusively identify the value of the
chance_{1}, but cannot assess the initial conditions, and so cannot
predict individual outcomes. For him:

(i) C_{o} is 1/6.

(ii) C_{c}, his conjectured value of the chance_{1}, is 1/6.

(iii) _{m}B_{c}, the reasonable degree of belief that C_{c} is true, given his conclusive evidence, is 1.

(iv) _{m}B_{e}, the reasonable degree of belief that the next display will be a 5, is C_{c} X _{m}B_{c} = 1/6.

(v) _{i}B_{c} could be anything. He may have just developed an irrational obsession with the number 5, so that he is personally certain that the next press will give a 5.

**Human being**:

(i) C_{o} is 1/6.

(ii) C_{c}, the humanly conjectured value, is 1/6.

(iii) _{m}B_{c}, the reasonable degree of belief that C_{c} is true, given the limited evidence available^{38}, is low - say 0.1^{39}.

(iv) The reasonable _{m}B_{e} is C_{c} X _{m}B_{c}.

(v) _{i}B_{c} could, again, be anything.

If instead the individual had investigated the system more thoroughly, and consensus judged that the new amount of evidence - extensive study of the system and extended relative frequency tests - gave the conjecture much stronger support, then _{m}B_{c} would rise accordingly.

This is the situation that humans always face when they are making conjectures about the world, on the basis of inadequate evidence - whether or not the conjecture concerns chances. In the simpler case, when the conjecture is a straightforward claim about the world, only one degree of belief, that in the conjecture itself, is involved.

In our, more complex, case, even if _{m}B_{c} is high, the reasonable degree of belief in a particular outcome, _{m}B_{e}, remains low, because the conjectured chance_{1} itself is only 1/6.

We now return to the question of how conjectures about chances are supported by evidence.

Judging the value of _{m}B_{c} requires judging the weight of the available evidence.

There are two ways humans can get evidence to support conjectures of a value for a chance_{1}:

(a) (direct) study the situation closely, to find the characteristic features which make it of this Kind, and so behave in this way;

(b) (indirect) collect relative frequency evidence that the propensity is present.

The evidence will always be inadequate, for at least these three reasons:

(i) the careful study of a system can always leave vital features overlooked, so that our prediction of its behaviour turns out to be completely false;

(ii) an ideal test sequence needs to continue independent tests to infinity;

(iii) the experiment relies on the presumption that sceptical doubts are quarantined; otherwise finite sequences, however long, could be consistently misleading (ie. despite the true presence of the chance).

There is no decisive set of justified methodological rules establishing values for _{m}B_{c}.

Is _{m}B_{c} a single precise value?

(a) **Single values:** The consensus cannot prescribe a single precise value of _{m}B_{c} for a given body of evidence.

The intellectual police cannot insist that the conjectured value C_{c} be supported to one precise degree of belief.

Unfortunately, humanity has found that establishing consensus guidelines for the amount of support (extent of truth-likeness, degree of belief) justified by a given amount of evidence, is very difficult. This is disappointing, but is not a problem for the Descriptive Epistemologist. He simply notes it, and passes on.

(b)

If we started with a *full* description of the environment
(the values of all relevant structural parameters at that time),
which conditions (warrants) the personal degree of belief, and which
is then unchanged by further conditioning (eg. observed events), then
we might seem to *have* to end up with output
probabilities_{x} of 0 or 1. The
chance seems to be 0 or 1, because if the system is fully
deterministic, and is fully specified, then it will have just one
output, the determined state. To avoid this, Objectivists may try to
include some indeterminism somewhere in the system. But this is
unnecessary.

This is a widespread error. Pierre Laplace thought that if all events
were physically necessary results of initial conditions and laws,
then nothing could be probable_{x} in
itself - that probability_{x}
*depended* on ignorance. Writers state that in a deterministic
world there would be no
probabilistic_{x} propensities - that
all probability_{x} is a way-station
*en route* to real knowledge. Yet natural determinism is
irrelevant to
chance_{1}^{42}.

A *superbeing's* full description gives _{i}B_{e} = 1, but still
gives C_{o} = 1/6. Each specific outcome is not only determined - the
causal chains determine the exact output for any specific initial
state - but also determinable by her. A *human* description does not
obtain _{m}B_{e} = 1, because of our limitations. But both beings accept
the same description of chancy events: infinitesimal changes in the
initial conditions, at whatever time they were recorded, lead
inexorably, by deterministic laws (in a chaotic system, displaying
positive-feedback) to a certain Kind of variation in the
Poincaré variable (the Poincaré variation), which
inexorably leads, via a certain Kind of sensitive processor, to a
certain Kind of output variation. No indeterminacy exists in the
external world - yet the output shows a characteristic quality,
identifiable by a superbeing, and such that limited humans are unable
to predict specific outcomes. Each specific outcome is
deter*mined* by the initial conditions, but it is not humanly
deter*minable* .

The world can thus be fully determinate, in the sense that each
individual outcome is determined, governed by determinate laws acting
on a system with certain initial conditions. At the same time, a
feature of the world determines that such a system, repeatedly
tested, would generate the characteristic short and long-term
behaviour. Chance_{1} is a successful
way of describing the outputs from this kind of system.

There is a reappearance of chance_{1} in the epistemology. However,
the onto-semantic analysis of chance_{1} is complete, *before*
the epistemological analysis of degree of belief is undertaken.
Therefore it is not vicious if
chance_{1} reappears in this second
analysis. Thus the reasonable extent of degree of belief in a
conjecture, concerning the value of a chance in a system, could be
partly based on an assessment of the chance, given structural and
relative frequency evidence, that people get such conjectures right.
This would need to follow the same rough criteria of such
assessments. This is *consistent* rather than a vicious circle.
If evidence began to lead us to think that our methodological
guidelines were unsound, we would need to reconsider both basic and
meta-assessments simultaneously.

**If the degrees of belief in each outcome 1-5 all
fall below 1/6, then they don't add up to 1. Does this matter?**

There are two distinct quantities in play:

(a) C_{c}, the conjectured chance_{1} of each outcome, and

(b) _{m}B_{e}, the degree of belief in each outcome, which is C_{c} discounted by _{m}B_{c}.

The six values of C_{c} sum to 1; the six values of _{m}B_{e} sum to less than 1.

So our question is: Can we consistently accept both of the following claims:

(i) I am sure that either outcome 1, 2, 3, 4, or 5, will appear; my degree of belief in this composite outcome is 1.

(ii) I have very little evidence that my conjecture as to the value of Cc is true. I could easily have wrongly assessed the system. For all I know, the chance of 5 appearing is 0.99, or 0.11.

All that consistency requires, as summarised by the probability calculus, is that the degrees of belief in the outcomes, conditional on any one chance conjecture, obey the calculus; and, conditional on C_{c}, they do.
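One way to see the required consistency numerically (our illustrative figures, using the notation defined earlier; the value 0.3 for the belief in the conjecture is an assumption):

```python
# Conditional on the conjecture C_c = {1/6 for each face}, the six
# outcome-beliefs sum to 1, as the probability calculus requires.
conjectured = [1 / 6] * 6
assert abs(sum(conjectured) - 1) < 1e-12

# Belief that the conjecture itself is true may be low:
mB_c = 0.3

# The discounted outcome-beliefs B_e = C_c * mB_c then sum to mB_c,
# not to 1 - but these are beliefs in a different thing (outcome AND
# correct conjecture), so no inconsistency with the calculus arises.
discounted = [c * mB_c for c in conjectured]
assert abs(sum(discounted) - mB_c) < 1e-12
```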

Our problem was to provide an organised summary of the human uses,
and intuitions, associated with the word 'probable' - to summarise
explicitly and truthfully the implicit ideas which are guiding the
usage. We were to suppose that people have some coherent ideas when
they use this word, but we were not to assume that a single principle
(concept) would suffice. We were to assess our theory by its (a)
consistency (b) accordance with our uses and intuitions. We were not
to presume that these human uses, even when regarded as typically
reasonable, were justified - but instead to assess the extent of
justifiability, be it high or low.

In this Dual description of aspects of
probability_{x} we have firstly
explained chance, as a Physical property of a system. Degree of human
*ignorance* , we have seen, is irrelevant to the description of
the system; a superbeing would note exactly the same characteristic
features of the system.

Secondly, we have described how humans obtain a reasonable degree of
belief in the truth of any conjecture, and hence, in particular, in
the chance_{1} of an outcome - using
rough consensus guidelines on evidential support for conjectures. We
have not tried to justify these guidelines.

We propose that this description solves many soluble extant problems
in the Philosophy of probability.

Philip Thonemann

For general references, consult pp.28-32 of Howson's excellent survey
article (Howson, C. (1995)).

**References:**

Howson, C. (1995) 'Theories of Probability', *British Journal for the Philosophy of Science* 46, pp.1-32.

Howson, C. and Urbach, P. (1993) *Scientific Reasoning: The Bayesian Approach*, 2nd edn, Open Court.

Poincaré, H. (1905) *Science and Hypothesis*, Walter Scott.

Engel, E. (1992) *A Road to Randomness in Physical Systems*, Springer.

Von Mises, R. (1939) *Probability, Statistics and Truth*, Hodge.

Hopf, E. (1934) 'On Causality, Statistics and Probability', *Journal of Mathematics and Physics* 13, pp.51-102.

Von Plato, J. (1994) *Creating Modern Probability*, Cambridge University Press.

Gillies, D.A. (1973) *An Objective Theory of Probability*, Methuen.

Popper, K. (1959) 'The Propensity Interpretation of Probability', *British Journal for the Philosophy of Science* 10, pp.25-42.

Popper, K. (1990) *A World of Propensities*, Thoemmes.

Harré, R. and Madden, E.H. (1975) *Causal Powers*, Blackwell.

1 I am using the subscript 'x' to mean that the word is significantly vague, or ambiguous. Hence the subscript '1' means that I am now using the word in a more specific sense.

2 The unitary, perhaps linguistically essentialist, conjecture 'There is a single idea, unifying all human uses of the word 'probability_{x}'' is not assumed here.

3 Like a Physical theorist suggesting that an observation is mistaken, because it does not fit with her theory.

4 This is Complete Justificationism - a long-standing curse of Philosophy.

5 Howson, in his (1995) review, reckons that the key contemporary players are Bayesian theory of epistemic probability, and limiting relative frequency, propensity, prequentialist, and chance, theories of objective probability. Of these, I am not including the last two as sub-theories. Encouragingly, he writes (p.21): "a legitimate role for Von Mises' theory is that, combined with the Bayesian apparatus for constructing posterior distributions, it provides the final link between the model and reality".

6 Accepting (i) the certainty of border-region cases that it does not cover (ii) vagueness in various of the key terms in the description.

7 It will not, however, enable the alien to use the word 'chance_{x}' in all its everyday senses.

8 This is where we invoke a property, a propensity, to partner the relative frequency.

9 The series can be a function of time such that the relative frequency will not display this value. In this case, 'this machine or this being_{1} + rest of system s' produces a system S1 which is not chancy, when 'another machine or another being_{2} + rest of system s' produces a system S2 which is chancy.

10 Without this condition, there is no identifiable system, as an invariant, to have the various outcomes in tests (eg. the shape of the die).

11 eg. the air currents; the velocity of the throw.

12 We can be more specific, and say that if we have 100 tests, then the experimental ratio will lie in the range 1/6 ± some small value; if we have 1000 tests, it will lie in the range 1/6 ± some smaller value; and so on. Indeed, we can specify how often, in such a test run, the resulting ratio will lie outside these ranges.

13 These commonsense descriptions were clearly expressed by Richard Von Mises, who developed (i) the definition of the collective (vaguely: chance as determining limiting relative frequencies) (ii) the idea that a physical system can be conjectured to have the property of tending to generate the collective (vaguely: chance as a propensity) (iii) the idea that this property can be initially defined any way we wish, because, like any other conjectured physical property, its appropriateness will be tested by experience of Nature (vaguely: 'chance as a theoretical entity'). We could alter the second description so that it merely refers to "1997 human inability to find a pattern". This would still define a perfectly respectable property of Nature - whose full description requires reference to a particular sensing being, just as 'whiteness' does. I am unsure if this gives any advantage, but it is a coherent option.

14 Thus preempting the criticism that Bayesians fail to provide justification for their helpful principled description (Miller (1994))

15 Howson, I suggest, uncharacteristically slips up when he writes (1995 p.16): "almost every hypothesis of use to Statistics is a priori declared false by it {Cournot's Rule}". This would only be true if the rule was interpreted non-methodologically, as a restriction within the onto-semantics (the model) on the possible consequences of the chance conjecture - as the claim that 2 5s is not a possible consequence of C1. Such an interpretation would indeed be incoherent interference with the model - which is why we have not considered it.

16 Howson tells us that the weak and strong laws of large numbers are logical consequences of Von Mises' axioms of convergence and randomness. They will not help us in our problem in this section. Howson, separately, hopes to prove, using Bayesian arguments, that (op.cit. p.18): "despite their infinitary character, Von Mises collectives satisfy a criterion of empirical significance". This enterprise, we can now see, is circular, because the presupposition IP that underpins the crucial theorem is the very one that we are trying to justify.

17 Or, possibly, 5 appears 360 000 times in a row - in which case the system would appear to be of no interest at all, being like looking at a die on a table which had 5 uppermost, recording its display, looking away, looking back, recording its display again, and continuing for a couple of months.

18 In other words, a + 6e produces output 1 again.

19 It may not be determinable by 1997 humans, but our extent of evidence - our degree of ignorance - is irrelevant.

20 As we have discovered in weather forecasting, tiny variations in the initial conditions lead to massive variations in the later state of the system.

21 Rather than always refer to intervals, we will return to referring only to particular values. This does not affect the argument.

22 He describes this property in terms of the analyticity of the probability function, where 'analytic' means that the slope of the function always exists, and varies continuously. We avoid this formulation, but retain the concept.

23 We assume that a is the voltage associated with some value of pressure around atmospheric.

24 The consciousness of the being, the mind, may feel that something much freer is going on. This could be a delusion. Just as consciously undetermined, free, actions, (random slips of the tongue) may be the result of unconscious determined processes, so the self-consciously random button presses, could be the result of a process in the neurons of the brain, which does not generate some kind of real randomness, but is just the kind of process which is being modelled by the second system.

25 As our Physical understanding of the brain develops, we might be able to substitute a better model here.

26 Which is interesting, but of no Philosophical significance.

27 We are assuming that she is fully aware of what every outcome of a test will be. This does not affect the usefulness of the natural classification 'chancy_{1}'.

28 The existence of these depends on the existence of gases, of rivers, of people, behaving as they behave on the planet Earth. If nothing displayed Poincaré variation, the concept of chance would never have arisen.

29 Thanks to Rom Harré for making this point.

30 Again we follow Von Mises.

31 i.e. no gambler will be able to devise a system to beat the odds.
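
This can be sketched numerically with a simulated fair coin and one illustrative 'system' of our own invention (betting only after two heads in a row); in a random sequence, such a place selection leaves the relative frequency unchanged:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# One long sequence of fair-coin outcomes (1 = head).
seq = [random.randint(0, 1) for _ in range(200_000)]

# The gambler's 'system': bet only on tosses that follow two heads in a row.
selected = [seq[i] for i in range(2, len(seq)) if seq[i - 1] == 1 and seq[i - 2] == 1]

print(sum(seq) / len(seq))            # overall frequency of heads
print(sum(selected) / len(selected))  # frequency on the selected subsequence
```

Both frequencies come out close to 0.5: the selected tosses are no better for the gambler than the sequence as a whole, which is just what Von Mises' randomness axiom requires of a collective.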

32 Von Mises makes this clear. He also makes clear his fear that it is difficult to understand - because of the hypnotic effect of everyday language.

33 Until such information causes people's behaviour to start changing, as it would presumably do.

34 It is open to individuals to try to change the view of the consensus. Thus Galileo's view of the degree of belief afforded by the evidence for the Copernican theory may have been inconsistent with that of the consensus at that time. His task, then, was to persuade the consensus to change. If he had failed to do so, then we would now regard him as a crank. But he succeeded, so we regard him as a great man. Whether we now judge him to have been 'reasonable' or not, is a measure of our own consensus view on the weight of his evidence.

35 We could, with some danger of regress, conjecture a value for the chance of us getting such a conjecture right, given such amounts of evidence. This takes us to the meta-level, at which we need to appeal to the consensus to judge what meta-evidence we have, from the history of investigations, as to the success-rate of such conjectures (when they had been made with this amount of evidence) - and hence, what truth-credit to assign to such a conjecture, what degree of belief or confidence.

36 i.e. not just 'reasonable, given the amount of evidence we have', but 'never fail'.

37

38 He has not got much evidence.

39 'Reasonable' in the sense that consensus meta-evidence roughly suggests that conjectures of chances, based on that rough amount of evidence, tend to be right about 1 time in 10. This 'reasonable' is a matter of consensus human judgment in a situation of very limited information.

40 These numbers for

41 A human who is judged, by a consensus, to have wildly overestimated the truth-credit supplied by the evidence, would produce a final subjective personal probability which would be larger than the reasonable one.

42 Donald Gillies writes (An Objective Theory Of Probability) that "probability theory is quite compatible with determinism". We can explain probabilities with an underlying deterministic theory. He rightly says (pp. 136-7) that Khinchin's work, loosely following Poincaré, only shows how macro-random processes can be an amplification of micro-random ones. The question of what we mean by randomness is not answered by such work; nor is the question of how randomness originally arises.

Changes from v 2.8 (circulated June 1996)

1. Removed mistaken claim that Howson opposed a dual theory.

2. Added familiar 'long term' and 'short term' terminology (Rom Harré)

3. Included reference to Hopf and Engel on arbitrary functions (Brian Skyrms)

4. Restructured whole paper as Outline, 12 hurdles, various beings, and conclusion, reducing emphasis on Physical basis for chanciness in systems, since this is not essential to the dual theory. Removed two examples - horse racing and die tossing.

5. Removed all comparisons with contemporary theories, due to length.

6. Removed the potentially misleading terms 'objective' and 'subjective', substituting 'less subjective' and 'more subjective' (John Welch)

7. Corrected a serious inconsistency in, and considerably clarified, Element 2: Reasonable Degrees of Belief. The suggestion that chance and degree of belief could co-exist in Bayes' theorem is removed. In the process, I clarified how the Inductive Presupposition links both the chance of a consequence to belief in a consequence, and belief in a consequence to belief in a theory. (John Welch)

8. Changed 'probability is vague' to 'ambiguous' (Jane Hutton and John Welch)

9. Corrected an inconsistency in hurdle 11 on Ignorance and Equiprobability
