Return to Inductive Presuppositions.  On to Quarantining Sceptical Doubt



   This argument attempts to justify the inductive methods Mg and Mp which physicists use, on the basis of the characteristic aim of Truth alone. The pattern of the argument is interesting, and it does cut down the options for possible types of generalisation. But it does not justify the specific kind of generalisation that physicists and everyday people confidently make; it therefore fails to justify inductive methods.

   The weak justification: we might as well behave as though our evidence gives us guidance, in the absence of any evidence against it, since if it is misleading we have no chance of achieving our aim of generalised truth in any case.
   For example, a woman whose car has stopped working in the middle of the Australian desert might sit in the car considering the possible causes for the break-down: dirty spark-plugs, a flat battery, a loose electrical connection to the battery. Suppose she could think of no way of checking if the battery is flat. Then she is unjustified in making the statement "Either the spark plugs are dirty or there is a loose connection". But if she realises that she is going to die of heat and thirst unless she gets the car going, then she is justified in behaving as though the statement is true; she is justified in behaving as though a statement, that she is not justified in making, is true.
   The weak justification is this:
(i) We wish to achieve our aims by making successful claims concerning aspects of the world we have not experienced
(ii) Unless we generalise in a simple way from our experience, we cannot see any way of successfully predicting the unexperienced; without this kind of generalising, we are definitely doomed to failure
(iii) So we are justified in adopting this kind of generalising.
   Put another way: the primary aim of true generalisations provides no guidance for the decision; but perhaps we can show that if we do not prefer simple generalisations, we definitely have a negligible chance of achieving the aims, while if the preference is adopted, we possibly have a good chance of achieving them.
   Put another way: a robot is investigating the surface of a mysterious planet called 'Primus'.  It proposes that a particular value of one experienced variable (time - but it could equally be position, or motion, or force, or temperature) is associated with a change in basic sensory data from one value to another, from grey to black.  It knows that there was no information in its experience so far which led it to this suggestion; it knows that it could as easily generate an infinity of other suggestions of this type, and that it would then have no justification for choosing between them. If, then, Primus is so constructed that the simplest universal generalisations of basic sensory data tend always to be false (i.e. if all true generalisations involve complex relational properties between several aspects of the basic data), then hitting on the true generalisations will be infinitely unlikely. Therefore the robot is justified, given Ag, in starting its investigations using the method: "Prefer generalisations which are claims of universal links between just two basic sensory data classification sets, with no causal link with other basic sensory data".
   The robot could perfectly well choose any of the possible generalisations, relational in the most complex way, by using a random number generator. But it knows that as soon as it is reduced to choosing possible generalisations at random, its chance of hitting on a true one suddenly reduces to definitely negligible. Any alternative which holds out the possibility of increasing this chance is therefore justified.
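The shrinking chance of a random hit can be illustrated with a small sketch (the hypothesis space, its sizes, and the labelling of one candidate as 'true' are my own invented stand-ins, not anything in the argument itself): as the space of candidate generalisations grows, the chance that a random pick selects the true one tends towards zero.

```python
import random

def chance_of_random_hit(space_size, trials=100_000, seed=0):
    """Estimate the chance that a blind random pick from a hypothesis
    space of the given size selects the single true generalisation."""
    rng = random.Random(seed)
    true_index = 0  # arbitrarily label one candidate as the true one
    hits = sum(1 for _ in range(trials) if rng.randrange(space_size) == true_index)
    return hits / trials

# The estimated chance shrinks as the space of candidates grows;
# in the limit of infinitely many candidates it is negligible.
for size in (10, 1_000, 1_000_000):
    print(size, chance_of_random_hit(size))
```

With a finite space the chance is merely small; the argument in the text takes the limit, where 1/infinity is treated as definitely zero.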



   The failure of this argument is caused by a concealed assumption about the possible structures of nature.
   In step (ii) of the argument, we claimed that all other approaches will give us a definitely negligible chance of getting true generalisations. This bold claim is false. We only considered random number generators as ways of choosing alternative generalisations. We then assumed that the true one was just as likely as any of the others to be chosen, in which case the definite chance of hitting on the correct one was indeed 1/infinity, which is zero. This is true, but it depends on giving up to such an extent that we make no attempt at all to guess at another way in which nature might have organised its generalisations, and hence another way in which we might have a finite chance of finding them.
   Between these two extremes are other ways in which Primus could be constructed. If the lady in the desert knows that she can only fix the car if one particular thing has gone wrong, then she is justified in assuming that precisely that thing has gone wrong, and in behaving accordingly. But if she could fix N things, and definitely could not fix M things, then she is justified in rejecting the M, but she still has the problem of choosing between the N.
   This is OK. But does it help in the case of the unbiased robot investigating Primus? It means that if the robot assumes that Primus is the kind of planet which does not modify random number generators in accordance with the structure of the planet's generalisations, then the chance of using random numbers to pick true generalisations from an infinite collection is definitely 1/infinity, which is zero; it means that this assumption, if made, makes the approach unreasonable.
   It also means that the robot would be unreasonable to make, without being forced to by evidence, the assumption that Primus is the kind of planet such that there is no procedure that the robot can adopt which will give it some (unknown, but not definitely zero) chance of finding true generalisations. (This is equivalent to assuming, as in Maxwell (1989), that the universe is not humanly understandable.)
   This is - slight - progress. What assumptions could the robot make? 

(i) It could assume that Primus is so constructed that if a robot records the experiences it has on any randomly chosen day, expresses them in words referring to classification sets taken from basic sensory data, and generalises them regardless of all values of all other variables, it will tend to hit on the truth.
(ii) It could assume that Primus is so constructed that if the robot selects generalisations, mathematical laws, and theories, based on the indefinable desirable qualities of 'simplicity' and 'elegance', it will tend to hit on the truth; it could assume that Primus is simple.
(iii) It could assume the same, based on the indefinable desirable quality of 'understanding'; in other words, it could assume that Primus is understandable.
(iv) It could assume that Primus is so constructed that if a robot selects generalisations using a random number generator it will tend to hit on correct ones. (The reaction that this is ridiculous is irrelevant; it feels implausible because we do not happen to think that the world is constructed like this; but we could think so.  An equivalent assumption on Earth would be that "If we choose the first generalisation that comes into our head, it will tend to be correct".  Could there not be a pre-established harmony between us and the creator, or a genetically determined predisposition, which made this possible?)
(v) It could assume any other method of selection: choosing the direction of the Pole star, selecting a point in this direction at a distance equal to its number of sensors in cm, writing all the possible generalisations on pieces of paper, throwing them up, and choosing the one that lands closest to the point. Maybe Primus is constructed in such a way that this procedure tends to give correct generalisations.
   How can the robot reasonably choose between these possibilities, each one of which would enable it to proceed, and seems, other things being equal, to give it some unknown but not definitely zero chance of obtaining truth?
   It cannot. Without further aims to provide a basis for a selection, the robot would have to proceed on all of the infinitely many possibilities simultaneously.
   So the weak justification, although it eliminates certain possibilities, does not provide a reason for humans to prefer the kind of generalisation they use on Earth to any other.


   We do not have to program the robot to act on the basis of the generalisations it is proposing. And this programming decision has no connection with the robot's listed aims.
   In other words, the robot does not have to act. It could simply remain at the place, and in the position, in which it landed on Primus. But suppose that we wish to program into it the aim (not characteristic of the investigation) of Survival (Asu) - giving a new hierarchy of aims. We get:

Seek true generalised statements about your experience (Ag)
Record true experiences (Ae)
Seek your own convenience; minimise effort (Ac)
Try to survive (Asu)
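The hierarchy above can be written down as a simple ordered structure (the representation, not the aims themselves, is my own illustration); what the hierarchy adds is the ranking, not the individual aims.

```python
# The robot's aims in descending order of priority.
AIMS = [
    ("Ag",  "Seek true generalised statements about your experience"),
    ("Ae",  "Record true experiences"),
    ("Ac",  "Seek your own convenience; minimise effort"),
    ("Asu", "Try to survive"),
]

def priority(code):
    """Return the rank of an aim; lower rank = higher priority."""
    return next(i for i, (c, _) in enumerate(AIMS) if c == code)

# The characteristic aim Ag still outranks the newly added Asu.
print(priority("Ag") < priority("Asu"))
```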

   It is now justified in considering potential hazards to its well-being and avoiding them. Since every aspect of Primus is initially in this category, every experience is a potential basis for action; every experience needs assessing on the basis of its potential advantage or disadvantage to the robot.
   We will need to program some basis on which it acts. What are the possibilities?
(i) Program the robot to act on the basis of the generalisations it is proposing.
(ii) Program the robot to act at random, moving its sensors and its tracks on the basis of random numbers generated by the computer.
(iii) Program the robot to eliminate the possibility of acting in accordance with the generalisations it has proposed, and then to act at random.
   Can we justify choosing one of these methods for the robot? Remember that we have no idea whether the generalisations which the robot has so far come up with are likely to be true; if we had, there would be no problem. The three possibilities - converted to statements describing three possible ways in which Primus could be about to behave - are equally likely, in that we have no evidence for any of them.
   If Primus truly behaves according to the second or third possibilities, then the robot has a negligible chance of successfully predicting events; the possible future events are infinitely various in both cases, so the probability of hitting on the true prediction is 1/infinity, which is 0. This does not look good. The robot will have extreme difficulty in surviving.
   But if Primus truly behaves according to the first possibility, such that generalisations made from experiences at this time, in this region of the planet, by a robot with these sensors, are truly fairly reliable in that broad region of the planet over the medium-term future, then the robot has a significant, even perhaps an excellent, chance of successfully predicting events, manipulating them, and surviving.
   Could we argue that the random methods might have a finite or even good chance of success, because Primus could be so constructed that generalisations chosen at random tend to be true?  Why shouldn't Primus be so constructed that true generalisations are best obtained by writing all the possible generalisations on pieces of paper (why should the truth be constrained by our puny practical difficulties?), dropping them from an aeroplane, and choosing the one that lands closest to a particular stone?  With statements we can prefer ones which we like the look of, find convenient to work with, or whatever, with impunity, even if we have no particular reason to think that they are nearer to the truth than alternatives we are rejecting. But actions based on Asu and Ap (true predictive ability) are less of a game; if events do not turn out as we predict, then we will not survive. For actions, in other words, only the truth matters.
   Do we have any justification for claiming that generalisations, consistent with experience, chosen on the basis of Ac, As, or Asu, are any more likely to be true than ones chosen at random, or using the aeroplane method?  We are justified in preferring to work on, think about, and write down, these generalisations, because we like this kind of generalisation. But do we have any justification for acting in a way which assumes that the world truly is the way that we would like it to be? At the intellectual level the robot can sit on the fence, preferring one generalisation to an infinity of other possibilities for amusing reasons of its own, while accepting that all are equally likely to be true, given the evidence. But at the practical level the robot which wishes to survive wants the generalisation which is most likely to be true.
   The robot is not justified by the evidence. We would like to be able to program the robot so that it could act on the basis of generalisations which are true. But unfortunately we have no idea how to find such generalisations.  All we seem to be able to do is to eliminate false generalisations and then choose between the remainder - which are all equally likely to be true - on a basis which has nothing to do with truth.  I suggest that in this desperate situation the robot might as well act as though the generalisations it intellectually prefers are also more likely to be true.  By doing so it will retain a consistency between its intellectual preferences and its actions (Hookway (1992)). Perhaps this is merely an application of Ac; the intellectually preferred generalisation is to hand, so it is convenient to use it.
   I conclude that we are (very weakly) justified in programming the robot to use Mg both to produce generalisations (intellectually) and as a basis for action.
   Depression could be a danger for the robot when so many of its early generalisations turn out false - Ag, given At, might begin to seem unattainable. But fortunately we have programmed boundless natural confidence into the robot, so that no set-backs can dishearten it - hope springs eternal in its silicon breast. We, back at base, could be anxious on its behalf. But such anxiety would not be cause for self-criticism, or for guilt if the robot was destroyed. We did our best - no-one can do more. True, we came unstuck, but through no fault of our own. We are justified in being phlegmatic, indeed philosophical, about our failure.



   On Earth we prefer generalisations, laws, and theories, which make our experiences as high-chance as possible.
   Without Mp the robot will be submerged in an infinity of alternative generalisations making alternative predictions. It is another method which is justifiable only because without it we have negligible chance of obtaining true generalisations and making true predictions. It is justified as a method relative to At.
   Suppose that we have not programmed Mp into the robot. The robot experiences a sensation of hardness and support from the grey material it sees beneath it, at t = 0 and at t = 0.0001 s (E).  It records the true experience E.  Now, following Mg, it intends to generalise from the experience. It could generalise in two ways:
(i) The grey material always tends to be hard and to support robots (G1).
(ii) 99.99 % of the time the grey material does not tend to be hard and support robots but instead does other unspecified things; E was one of the 0.01 % of times when it is hard (G2).
G2 is the generalisation which defies Mp.  The robot's experience E has been virtually useless in guiding it towards the truth about Primus.
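What Mp's preference amounts to can be sketched as choosing, between candidate generalisations, the one that assigns the recorded experience the highest chance (the likelihood figures below simply restate G1 and G2; the function name is my own illustration):

```python
# Chance that each generalisation assigns to the recorded experience E
# (the grey material being hard and supporting at the sampled times).
likelihoods = {
    "G1": 1.0,     # under G1, E is exactly what we expect
    "G2": 0.0001,  # under G2, E was a freak 0.01% occurrence
}

def prefer_by_Mp(candidates):
    """Mp: prefer the generalisation which makes experience high-chance."""
    return max(candidates, key=candidates.get)

print(prefer_by_Mp(likelihoods))  # prefers G1 over G2
```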
   We could reject G2 because it does not give clear positive predictions (as to what will happen) (Ap); it lacks content.  We could reject it because it is complicated (As). We could reject it because it is inconvenient (Ac).  But none of these justifications feels completely correct.
   We reject G2 because we know that if we accept it, and the other similar generalisations which will follow from other experiences, we will have no chance of hitting on the truth about Primus (At).  Primus could be constructed according to one of the following:
(i) More often than not, the experiences of the robot are typical of events on Primus; the true generalisations are the ones obtained by using Mp.
(ii) The experiences of the robot are unfortunately untypical of events on Primus; they provide the robot with no guide to other events.
Note that (ii) does not provide an alternative prediction; it provides no prediction at all. This is an important asymmetry.
   If Primus is constructed according to (ii) then all the robot's efforts to obtain true generalisations are doomed; the chance of hitting on a true generalisation about the behaviour of the grey material is negligible (At and Ag).  But if Primus - by some stroke of luck - is truly constructed according to (i) then we can use Mp to lead us to the truth.
   This does not mean that Primus has to be so constructed that it is perfectly uniform. It certainly would be exceptionally convenient for us if Primus was an illustration of the principle "nature is uniform", but this is very extreme, and likely to be disproved in the first few milliseconds of experience, as it is on Earth.  All we are presupposing is that Primus possesses some degree of uniformity, of a kind which can be discovered by assuming that the experiences one is generalising are typical, probable, examples of the behaviour of things on Primus.
   We are not requiring perfect uniformity, and hence total success for our generalisations. We are prepared for some failure, for imperfections, for some blind alleys. But we are not prepared to investigate on the basis of an expectation of virtually certain failure, of a negligible chance of success.  Investigation on this basis would be unjustifiable.
   We are justified in programming the robot to use Mp to sift its generalisations, because if we do not use it, we assess the chance of achieving true generalisations (At and Ag) as 1/infinity, which is zero. If we do use it, we have no idea what our chance of achieving true generalisations is; it could still be zero (which doesn't matter, since it was zero anyway), but it could be much higher. It could be 1 (certainty), a perfectly uniformly behaving planet - we should be so lucky.
   By programming the robot with Mp, are we presuming anything about the nature of Primus?  Surely it is not justifiable to presume aspects of the planet which the robot is supposed to be investigating? We did not presume anything about Primus when we agreed to use Mg. We merely noticed that we would never obtain generalisations if we did not try to make them.  But in the case of Mp we do seem to be presuming that Primus is constructed in a certain way. After all, if Primus is so constructed that the robot will actually pick up experiences which are systematically misleading and improbable, then all the generalisations chosen on the basis of Mp will be wrong.
   I think that we are presuming this.  The proposal that our investigation should begin with no presumptions needs justification.  What is wrong with presumptions?  This presumption is necessary if our investigation is to have more than a negligible chance of success.  We are not presuming that Primus is so arranged that Mp will succeed. But we are presuming that it is not so arranged that Mp is bound to fail.
   The presumption does not ensure a non-negligible chance of success. But the alternative does ensure a negligible one. So we are justified by our ignorance, which is preferable to the virtual certainty of failure.
Summarising, the weak justification for programming the robot with Mp is:
(i) We have no prior evidence of the overall nature of Primus.
(ii) Not using Mp will ensure that our chance of achieving At + Ag is negligible.
(iii) Using Mp cannot reduce our chance of success, since it was already negligible.
(iv) If Primus is so constructed that Mp works, even to some extent, then we have a non-negligible chance of success.
(v) So we are justified in programming the robot to use Mp.

   This is the same argument I used above for Mg, and it is flawed in exactly the same way.  Primus could be so constructed that the events the robot experiences are very improbable, and not direct guides to typical behaviour, but so constructed that the robot should instead eliminate this possibility, and then use one of the other methods listed above for obtaining generalisations, or any other one that it thinks of.
   The flaw is to devise an artificial, 'set-up', situation, in which the choice is between, on the one hand, a suggested way in which our particular experience could be linked with general truth in nature (which, surprise, surprise, is the way we seem to think it is patterned on Earth), and, on the other hand, a suggestion that our particular experience is not linked at all with the general truth.  The flaw is that there are other possible ways of using particular experience, or even no experience, and still obtaining true generalisations.


Isn't there a justification based on the lack of any other option for limited human beings?

   This is a weak justification, because it gives no reason for thinking that the methods will lead us towards the aim of {True X}.  It is the suggestion that we can reasonably assume that certain things are true about nature, because if they are false, then our aim is definitely unachievable. There are two immediate conditions for such an assumption to be reasonable:
(i) we must not have reason to think that the assumption is false
(ii) since the truth could our aim must include.
   This argument probably has a long history; it has recently been supported by, for example, Maxwell (1984). The assumption must be very general, because its only rôle is to establish {True X} as achievable; it must eliminate types of universe in which our aim cannot be achieved by limited human beings. A suitable candidate might be the one expressed by Burtt in his (1967) p. 179: "Both primitive and civilised attempts to explain nature reveal at least one general interest and presupposition in common.  They both confidently believe that there is intelligible order in the Universe". The presupposition is that there is some way of reducing the chaos of experience to universal laws - patterns which we human beings can understand.
   This presupposition seems to constrain the aim of {True X}.  We seem to be saying that we are prepared to accept the possibility of almost anything being true, but not this. The argument works best if we suppose, as Maxwell does in his (1984?), that the characteristic aim of physicists is {Explanations}, {Understanding}. If the aim of discovering some kind of order, of removing the disorder of experience, is primary in physics, then physicists would be justified in presupposing that at least some such order exists, since otherwise they are presupposing that their enterprise is doomed, yet continuing it, which is irrational.  But we suggest that the characteristic aim of physicists is {True X}, the 'standard empiricist' aim that Maxwell argues against.  The aim of {Explanations} is not specially interesting, applying as it does to Religious Believers, Astrologists, and so on; the special aim is {True explanations}.  This aim implies no presuppositions; if we could not find any true explanations, if the truth appeared to be that there was no order in the Universe, then this truth would take priority over our desire for explanations.
To raise {Understanding} above {Truth} is to radically alter the nature of physics, so that evidence based on {Truth} which implied that {Understanding} was impossible, would be rationally ignored.  We should be clear that this is not 'absurd' or 'unacceptable'; it is just a different ordering of the priorities of aims.  Do human beings want {Truth} at all costs, however painful, disappointing, or unsettling?  Or are there some Truths which they are absolutely not prepared to accept, because their desire for truth is outweighed by their desire for, say, removal of the unpleasant disorder of experience?  Various discussions are possible:
(i) What are the sets of methods which rationally follow from adopting these different ordered sets of aims?
(ii) Which ordered set of aims seems to us to be the most valuable?
(iii) Which ordered set of aims is most widely valued?
(iv) Which ordered set of aims most accurately characterises the activity of physics?
   If we choose to investigate the methods which follow from the aim of {True X}, then the presupposition that the Universe is ordered is unjustified - and hence the sceptical doubts remain unanswered; this will be true in any schema in which Truth is the priority.
   Unfortunately for our task, but perhaps fortunately for humanity, the evidence that the universe is overall ordered or disordered, understandable or not understandable, simple or not simple, is incomplete and unclear.  Indeed, given our species' limited ability to collect information, it shows every sign of remaining so. As Poincaré pointed out, each level of complexity discovered seems to have eventually exposed simplicity, but each level of simplicity seems to expose complexity.  Since we do not - cannot? - know how many levels there are in nature, and since we cannot investigate the distant past, or the far future, our very ignorance enables us to hope that each new discovery of complexity is only temporary, the result of our own incompetence.  This makes it possible for physicists to continue to seek Understanding, however discouraging the evidence, while at the same time insisting - if they wish to - that their primary aim is {Truth}.  In other words, what ensures that we are not put on the spot and forced to choose between our desire for Truth and our desire for Understanding is our ignorance.  The development of chaos theory perhaps indicates that physicists are prepared to accept limitations to order in the interests of Truth; "If the world is truly governed by non-linear equations, such that its observable state is essentially unpredictable and disordered, then that is the truth" is what they have been prepared to say.  The appearance tomorrow of beings displaying super-powers, who told us, with demonstrations, that they had been playing with us for a few thousand years, concealing from us the truth that the actual universe is infinitely more complex and disordered than we could possibly comprehend, would put us on the spot. Would we accept the Truth, or would we insist on continuing our quest for understanding regardless?
This is the sharp question, because the latter option would not be irrational; it would merely demonstrate that humanity in general, or physicists in particular, give understanding priority over Truth - which they are free to do.  (It is not an objection to {Understanding} as an aim to be unable to specify exactly what it is.  A designer of games can rationally try to devise a new game while being quite unable to define what a game is.  We recognise typical cases of disorder when we experience them, and we recognise some ways of removing this unpleasant experience, such as using mathematical laws, ascribing human desires (other minds), and proposing externally existing objects.)
   We do not think that this presupposition helps us to justify inductive methods.  Any method of generalising experience, and methods not involving experience, would do, as long as they represented some kind of order.  Suppose that the true order was such that every ten thousand years almost all the simple regularities observed on planets like Earth change, in accordance with a grand wave-like fluctuation in superficial laws, itself an inevitable consequence of more fundamental laws; suppose that the next fluctuation is due, though we do not know it, tomorrow.  The universe has order, which we may, with luck, eventually discover; but inducing generalisations from our past experience is largely doomed, since the order is at a much deeper level.  We could be more specific; we could insist on presuming that the universe is such that we have a reasonable chance of finding the order in it.  We would be trying to eliminate universes in which order is there, but is concealed in massive superficial disorder; overall order, for example, to which the Earth and solar system happen to be most unusual exceptions, and which is only perceptible if observations are made which our particular sense organs and spatio-temporal location make us exceptionally unlikely to manage.  Suppose that we had reason to think that this presumption was false: maybe we experienced increasing failures of our predictions, tending to get worse; maybe we were visited by massive space-craft, clearly demonstrating, as evidence of superiority, technological mastery far beyond any we can imagine, whose occupants told us that we are pathetically limited in our understanding, for various reasons, and will sadly therefore be unable to grasp the true structure of the universe.  In these circumstances we could either plod gloomily on with our efforts, or abandon the attempt.  Yet we would continue with our use of inductive methods in our ordinary life, and in our science and technology.
   Why?  Because the alternative was too awful to consider.

Yet Another Try at the Weak Argument!

(i) Suppose that we have as our aim {Truth} about every aspect of nature, including, if possible, not only immediate experiencing, but also truth about generalisations, and about low-observability aspects of nature.  We see that we cannot prove such claims certainly true, using evidence.  But we want to be able to learn about such claims from our experiencing - to use it as indirect evidence.  We need a method, or methods, to link experiencing to the truth-credit of both generalisations (induction), and low-observability claims (abduction).
(ii) If Tp is true, we have methods which will give truth credit to such claims.
(iii) But if Tp is not true, we can invent other theories T1-n, any one of which, if true, would give truth credit to such claims. (We could, for example, eliminate the claims that Tp guides us to, and then choose whichever of the remainder feels best.)
(iv) Suppose that we have no reason for judging that Tp or any of its alternatives is true, and no reason for judging them false.  They are all equally unsupported.  Worse, since they provide methods for supporting general claims, and yet they are general claims, we suspect that they may all be unsupportable.
(v) Which T shall we choose?  We have, unfortunately, a completely free choice. Tp has two advantages: (a) it happens to be a method which fits with our natural instinct, so that we can adopt it with no cognitive dissonance; (b) in the case of generalisations, it happens to give very convenient information compression.
(vi) So we might as well suppose that Tp is true. We have nothing to lose.

   This is a weak justification because it gives us no confidence in the results of using Tp. We would understand intellectually why we were using it - but our confidence in its results would be unjustifiable.

Response: The flaw in the previous weak justifications was that they set up an unfair opposition between available alternatives.  Consider convenience: what could be more convenient than the theory that whatever I now think of, in the way of generalisations and low-observability claims, will be true?
   We could reply that this does not fit with the experiencing that we have already recorded.
The response is that perhaps that is the way the world behaves - that previous experiencing was misleading, uncharacteristic, caused by unusual circumstances, but now it is going to proceed in the characteristic way.
   All that this leaves, though the 'all' is quite a lot, is cognitive dissonance: our minds, instincts, and natural behaviour are entirely based on custom and habit - whether this supports the most convenient claims or not.  (This does not apply to the robot on Primus, which is therefore left with no argument at all.)
