Map, Metamethodology, Aims, Inductive Presuppositions, Quarantining Sceptical Doubt, Methods

Abstract: We assume that there is no definite observation-theory line, that sceptical doubts are quarantined, and that metamethodology is descriptive. A sensible form of Conjectural Realism can be constructed on this basis, which provides satisfactory answers to classic questions. Exciting, extreme, anti-realist criticism is flawed.


CONJECTURAL REALISM AND ANTI-REALISM


11 234 words; 10.4.1997; v.4


Introduction



What is our problem?

Are there areas of Nature in which investigators cannot find what exists, because their methods are inadequate?



Part 1: Three Assumptions



We begin with three assumptions, for which we argue only briefly:

(A1) There is no definite observation/theory line

(A2) Sceptical arguments, not convincingly countered, are best quarantined

(A3) Our metamethodology is descriptive

A1: The Observation-Theory Line

A1 is close to being part of the philosophical consensus; we propose it on the basis of the uncritical references to it in various elementary textbooks. All claims are located somewhere on a continuous line, an axis, which runs between two extremes: at one are claims such as "Red patch experiencing now (PHT)"; at the other, claims such as "The Universe began with a Big Bang" and "All matter is made of the 24 entities in the Standard Model: quarks, leptons, etc.". There is no natural dividing line at any point on the axis; there is no natural classification of claims into two qualitatively different types or kinds; there is just one kind, with a large quantitative variation of degree (see Fig.1).




   There is a very big difference between entities at the right-hand end and entities at the left-hand end. We are not suggesting that there is no difference between a hillock and a mountain, or no difference between touching your mother's leg and incest.
   I apologise if the reader becomes exasperated at the consequences of this assumption. But we cannot have our cake and eat it. If there is no line, then there is no definite point at which theory stops and evidence (facts) starts. This is not to suggest that these traditional words can no longer be used, so long as we use them for nothing more than simple communication of roughly where a claim lies on the axis: "The pointer on the meter on the bench in front of me is pointing to 3.1" can helpfully be called a 'fact', and "There are three kinds of quark in Nature" a 'theory', to distinguish them, roughly, as lying on very different parts of the axis. This is in the same way that 'chair' can be usefully distinguished from 'stool', or 'jazz' from 'pop' and 'classical'. But if someone now argues that jazz is a type of music which suffers from some unique nasty problem, which 'pop' and 'classical' do not have, and should be banned, we must then look back to the reality behind the words, and remember that there was no reason to judge that the classification reflected a fault-line in Nature - a natural kind - and therefore no reason to think that the things so classified had anything else in common. As music gets more jazzy it may get more and more nasty - and beyond some rough extent of nastiness we may judge that it should be banned; but this is very different. This is now clearly a judgement that we are imposing on a continuous reality; it is our choice of line, not one which exists in Nature. We cannot slough off the responsibility for the choice of line onto Nature - it becomes our choice, which we must justify on independent grounds.
   We therefore refuse to state our problem as: "Is it reasonable to claim that theoretical entities exist?", because there is no natural classification of entities into 'theoretical' and 'observable'. It would be like writing an essay on the problem: "How many mountains exist on Earth?". The widely accepted, digital, language of 'theory' and 'fact' is to blame for confusions(1). The phrase 'theoretical entity' is therefore not used again in this paper(2).

A2: Quarantining Sceptical Doubt

   A2 is less often explicitly stated - perhaps because, given how deeply attempts to solve it are entangled in Epistemology and Philosophy of Science, it carries a stigma of failure. Nonetheless, some thinkers accept the use of, say, Induction, without attempting to justify it(3). Popper claimed to solve the problem of induction by assuming that it is obviously insoluble. In Conjectures And Refutations he explicitly proposes the Inductive Presupposition (IP) as an unsupported claim which expresses the prejudice which unifies physicists' attitude to sceptical doubt. Earlier authors, such as Bacon, Herschel, Mill, Whewell, and Duhem, ignored sceptical doubt as sterile. Peter Lipton, in his Inference To The Best Explanation, uses the same strategy of isolating sceptical doubts.
This is not the place to consider all the attempts to defend knowledge against sceptical doubt(4). I suggest, following Lipton, that if the reader is aware of a successful argument, she simply slot it into the essay at the appropriate places, thus justifying a procedure which we otherwise regard as a human prejudice.
 We use the ending /IP to indicate that a word is being used conditional on the Inductive Presupposition. Thus 'justification/IP' and 'support/IP' mean 'justification (or support) given that inductive sceptical doubt is quarantined'.
For more detail, see the chapter on 'Quarantining Scepticism'.

A3: Metamethodology

  
Our investigative approach is Descriptive Methodology, which may be understood partly by what it is not. Descriptive Methodologists make no attempt to obtain results by:
(i) First philosophy - a priori reasoning, intuition;
(ii) Analytical philosophy - we do not attempt to achieve anything contentful by unpacking such concepts as 'scientific' or 'justified';
(iii) Naturalistic philosophy - we assume neither that the results or methods of physicists can be deployed in our philosophy, nor that Darwinian claims applied to humans can supply any otherwise missing justifications.
   Many philosophers are firmly committed to the analytical approach. Some, indeed, are so committed that they find it difficult to accept that alternative approaches are either possible, or worthwhile. Nonetheless, we propose that any analytical unpacking of concepts (i) is preliminary to our work (ii) concerns language, and its relation to the world, rather than the world itself. Some problems may be resolved by such careful unpacking, but not this one.
   Our aim is to describe the true aims and methods of physicists. In the process we lay bare the extent of justification both for the methods, and for the claims that have followed from their use. If the justification is less complete than we might have hoped, so much the worse for our hopes. We lay down no preconditions, we make no presuppositions about the form of this description - in particular, we do not expect, and certainly we do not require, that the methods will be foolproof. We make no precondition that, for example, our success requires that physicists' claims have been undeniably proved to be progressing towards the truth.
   If we find the results of this description unsatisfactory, but cannot think of any better way of investigating Nature, then we are left sadly, but pointlessly, wringing our hands. "What a pity that we are not living in a world where there are more certainties!", we cry(5). As philosophers, we should regularly ask ourselves if we are still seeking the truth, or if we have drifted into trying to force experiences to fit our private obsessions - in particular obsessions with precision, and with certainty.
   To suggest that this approach is not philosophy is merely to use persuasive definition to conceal a personal preference for an alternative approach(6).
   To illustrate this metamethodology, consider our philosophical investigation of the aims of the humans who investigate Nature - scientists. We may investigate what aims characterise these people. We may investigate whether these aims are rational - whether, for example, we have evidence that they are impossible to achieve. What we may not do is vaguely ask "What aim does scientific activity have?" (Van Fraassen p.18). Such a question is an invitation to confusion.
   Our metamethodology is descriptive. But the methods identified by this metamethodological investigation will be, at their own level, prescriptive. This is consistent. In other words, the methods used by physicists, which we try to describe, are judged, with reasons/IP, to be the best/IP methods available. They are therefore not optional for those who seek the truth; they are normative.
   For more detail, see my chapter on The Metamethodology Of Science.

   We now describe the conjectural realist account of physics. We then show how it negotiates some classic hurdles. Finally we consider extreme anti-realist criticisms of it.



Part 2: Conjectural Realism



   Using A3: We choose to disentangle, from the web of history, the collection of strands in which people have aimed to establish the truth about Nature. We make this choice because we are interested in this primary aim - we value it. (The strands locate the broad area usually labelled 'physics'.)
   The only investigations in which we are interested are those that share this primary aim; we choose to ignore the rest. We are interested in the aim of deriving - being able to predict - high-observability events. We are interested, therefore, in the aim of finding truths about Nature which are general (laws; natural kinds).
Investigators, while characterised by these primary aims, have had different preferences. Some have aimed to conjecture low-observability entities as true aspects of Nature (e.g. atoms). Others have preferred to conjecture abstract mathematical structures (e.g. thermodynamics). (Both subsidiary aims can be labelled with the word 'explanation'.) But, given the primary aim, all aspects of the truth need to be sought, including the truth about very-low-observability entities - however difficult this is, and however low our reasonable degree of belief in our claims about them. Only conclusive arguments that in some area this truth is completely unavailable to humans can justify an investigator in systematically restricting his existential claims in that area to higher-observability entities. We shall consider below whether there are any such arguments.
   The best available methods for achieving this aim can be disentangled(7). Of these, we specially emphasise the indirect methods M10-16 (from the chapter on Methods), using high-observability claims as evidence. Using A2, these are justified/IP methods of checking whether general claims, low-observability entities, and abstract mathematical structures either have some truth in them, or are entirely human fictions. In the case of entities, the hypothesis that they exist enables us to make very low-chance high-observability predictions that have turned out true; the chance of such success, if our hypothesis had no truth in it, is very small; so we reasonably/IP conclude that the entities are not just human fictions. It is this method which conjectural realists claim gives them the ability to reach far out, and far in, and far back, into Nature, and bring back information on which conjectures are human fictions, and which have some truth in them.
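The reasoning of this indirect method can be sketched as a toy Bayesian calculation. All the numbers below are hypothetical illustrations of my own, not drawn from the essay; the point is only the shape of the inference: a successful prediction that would be very improbable if the conjecture were pure fiction raises the reasonable/IP degree of belief sharply.

```python
# Toy Bayes update (all numbers hypothetical): how one successful
# low-chance prediction raises belief that a conjecture "has some truth in it".
def posterior(prior, p_success_if_true, p_success_if_fiction):
    """P(hypothesis has some truth | predicted event observed), by Bayes' theorem."""
    num = p_success_if_true * prior
    return num / (num + p_success_if_fiction * (1 - prior))

prior = 0.1          # modest initial degree of belief in the entity's existence
p_if_true = 0.9      # the entity-hypothesis makes the observed event likely
p_if_fiction = 0.01  # a pure human fiction makes it very unlikely

p = posterior(prior, p_if_true, p_if_fiction)
print(round(p, 3))   # belief rises sharply after a single against-the-odds success
```

The same schema applies to each of the historical examples considered later: the smaller `p_if_fiction` is, the stronger the feedback of truth-credit.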
   Using A1, the lower the observability of a conjectured entity, the less our reasonable/IP degree of belief in the entity's existence. If, according to present well-supported/IP knowledge, both of human senses and of Nature, an entity could never, under any circumstances, be experienced by the unaided human senses, and it is very different from anything that has been so experienced, then our reasonable degree of belief in its existence may remain quite low.
   This is a form of empiricism, broadly defined:
(i) a belief that observation is the solid ground of investigation of Nature, and that if you go too far from it, you risk fruitless, aimless, speculation
(ii) a concern that speculations very far from observations are sometimes unjustifiably claimed to be true.

   This is a boring, moderate, descriptive account of truth-seeking investigation of Nature. We propose that it is the default view of physics; it is the view which, if explained to lay people, physicists, and students, tends to be accepted unquestioningly. We suggest that the onus is therefore on critics to explain what is wrong with it.

   CR is a modest view. It makes no attempt to justify what seems so hard to justify. It defends itself by retreat: discretion is the better part of valour. The basic reaction of the conjectural realist to demonstration of his investigation's inadequacy, unreliability, and poor track-record is "Yes. Oh dear! It is awful, isn't it? Physicists seem to be doing their best, but investigating Nature is so difficult. What do you recommend as an improvement?" For example, the low-observability claims we philosopher-physicists are making may not definitely be true; never mind. Sometimes such claims are very risky; but a calculated risk is OK. We are doing the best we can to discover the truth.

   The move from CR to an exciting view (such as Van Fraassen's anti-realism) is equivalent to moving from viewing physics from inside, as though we are physicists(8), to viewing it from outside, as the battleground over which certain philosophical battles can be fought (especially ones concerning complete justification for knowledge). An indicator of this rather practical attitude is our decision to quarantine sceptical doubts (A2). Ordinary people quarantine them; physicists quarantine them; philosophers often do not quarantine them; but we do. Our justification is in our focussed descriptive aim.

   We now consider how CR copes with some classic hurdles - failure at any of which could be viewed as disqualifying the account.

Hurdle 1: The Duhem-Quine problem: To what extent can investigators judge when a low-observability entity does not exist, given unfavourable evidence?
Hurdle 2: Verisimilitude: To what extent can investigators compare their low-observability claims concerning entities with the true nature of the external world?
Hurdle 3: The Pessimistic Induction: Meta-evidence from the history of physics indicates that claims for the existence of entities below a certain degree of observability are too unreliable to be valuable.
Hurdle 4: How can physicists decide when a research programme has degenerated so far that it is no longer worth continuing?
Hurdle 5: Are there limits beyond which our best methods cannot reach - beyond which entity claims become pure speculation?

Hurdle 1: The Duhem-Quine problem: To what extent can investigators judge when a low-observability entity does not exist, given unfavourable evidence?

  
To what extent, if at all, can investigators justify their judgements as to when a research programme with a low-observability entity at its core has reached the end of the road - when, either relative to a competitor, or on its own terms, it is so unlikely to have captured the truth that it should be abandoned? To what extent must a programme be assessed holistically in its contact with high-observability claims, such that no particular low-observability entities within it can reasonably be claimed either to exist, or not to exist?
   Our strategy in jumping this hurdle is:
(a) We start by using A2 to argue that some truth-credit can feed back, via indirect use of evidence, to low-observability claims.
(b) Then we argue that, over time, investigators can establish the parts of a growing network of such claims, piecemeal, so that the only parts of the network being assessed against new evidence are the parts that are new, with all the rest of the network regarded as Background Knowledge.
   Thus far we will have argued that some assignment of truth-credit, and of falsity, is possible. But it will always be a difficult, and unreliable, business. So:
(c) Finally we invoke A3, arguing that this difficulty and unreliability comes with the territory - it is inescapable. Who is this a problem for? It is not a problem for the philosopher of science, it is a description of a difficulty in the investigation of Nature. We are not complete justificationists, trying to prove that physicists decisively discover the truth, and eliminate falsehoods, in regions of high generality and low observability (this being presumed to be true). If we were, being unable to prove this would be a problem. But we are descriptive methodologists, not presuming what investigators of Nature can achieve. "Investigating Nature is surprisingly tricky", we remark, "That's interesting".
   We now look at this argument in more detail:
   The A2 card: 'justification' means 'justification/IP'. A research programme which contains many aspects which have separately earned truth-credit by surviving severe tests - successfully predicting novel facts, or, more generally, achieving successes against the odds - is increasingly gaining truth-credit, which cannot be taken away from it. Though its force is weakened by A2, investigators can try to build up their structures of generalisations, and low-observability claims, in a gradual way, step by step, doing their best to check that each step has its own truth-credit. This dynamic description helps to explain some of the judgements that physicists make.

   Considering a static time-slice of the theory, and its evidence, makes this hurdle unrealistically high. At time t the situation is static, the theory and entities are conjectured, and the evidence fits round the edges. Logically the evidence underdetermines the theory; something is true about the theory, but who knows what.
This static view of theory-support is a misleading simplification . We should consider the time-dependency - the history - of a theory, the entities in it, and the evidence involved. Hypotheses are proposed gradually; a final theory, which may consist of many hypotheses, is the result of the gradual stitching-together of many pieces - even when the result is perhaps apparently seamless. One of the individual pieces may well have been accepted as having some truth-credit, before the creation of the complete theory, on the basis of separate evidence individually relevant to it. But once the theory is complete, the association of this evidence with this hypothesis may easily be lost to view.


   Figure 2 indicates this graphically. When the theory confronts the evidence, later, the fact that Evidence 1 was, and remains, unique support for Hypothesis 1 is lost.
   Thus the relationship between a completed theory and its evidence is misleading, in that the actual support for the individual parts of it is not as visible as it was during the process of its devising. This is as far as investigators have got in tackling the general inverse Duhem-Quine problem(9).
   Investigators since the time of Francis Bacon - notably Galileo - have taken his advice in proceeding gradually in their construction of theories. They are, we suggest, sensitive to the problem; if they are not, they should be. They try to ensure that new aspects of a theory are not taken seriously unless experiments can be devised which target the new aspect - in combination only with well-supported background hypotheses. The more that a theory is built up step by step, from hypotheses which are each individually tested before they are stitched into the fabric, the less this problem will arise.
   If, instead, investigators devise wholesale, intricate, comprehensive theories, involving many interlinking hypotheses, in such a way that the combination fits the evidence - the procedure Bacon warned against - then the ascription of truth-credit is extremely difficult. If this were the situation in an area of, for example, Physics, then we would be fully justified in criticising methodologically careless claims that the evidence supports every aspect of the theory, every hypothesis within it - including, for instance, the existence of very low-observability entities.
The step-by-step, dynamic method avoids this trap. We now illustrate this method, using anecdotal history of atomic theory merely to flesh out the abstract bones:
   By the time physicists had travelled as far as detailed kinetic theory explanations of the Ideal Gas Equation, and as much further as Rutherford, Bohr, and Schrodinger, the links between their low-observability claims and the data were very complex - like the right-hand side of Fig.2. Take the Bohr atom. It is a set of linked hypotheses concerning the nature of matter, including, amongst others, the following:
(i) Matter - solids, liquids, and gases - consists of many, low-observability, particles, of approximate diameter 10⁻¹⁰ m. A few grams of matter contains about 10²³ of these particles.
(ii) These particles, which we call atoms, are very closely related to the smallest units that take part in chemical reactions - Dalton's atoms. In chemistry they are conjectured to be indivisible.
(iii) The particles have an electrically positive core, a nucleus, of approximate diameter 10⁻¹⁵ m. This contains about 99.95% of the mass of the particle.
(iv) Around this nucleus are electrons, which carry negative charge.
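The orders of magnitude in hypotheses (i) and (iii) can be arithmetic-checked with Avogadro's number. Helium is my own illustrative choice of element; the essay says only "a few grams of matter".

```python
# Sanity-check of hypotheses (i) and (iii). Helium is an illustrative
# choice of element, not taken from the essay.
AVOGADRO = 6.022e23            # particles per mole
MOLAR_MASS_HE = 4.0            # grams per mole of helium

grams = 4.0
atoms = grams / MOLAR_MASS_HE * AVOGADRO
print(f"{atoms:.1e} atoms in {grams} g")   # of order 10^23, as hypothesis (i) says

# Hypothesis (iii): a nucleus of roughly 1e-15 m sits inside an atom of
# roughly 1e-10 m, so the nucleus is about 100,000 times smaller across.
ratio = 1e-10 / 1e-15
print(f"atom/nucleus diameter ratio: {ratio:.0f}")
```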
   Suppose that we take (iv), considered like Hypothesis 1 in our Fig.2 above. Bohr knew that there was specific evidence (Evidence 1) for this, back in time: Thomson, for example, had shown that cathode rays, which were conjectured to be electrons, had the same properties regardless of the elements of which the gas, and the cathode, were made. He argued that this was evidence that the electrons were in the matter, and that the same electrons were in all kinds of matter(10). He also showed that the direction of deflection was that of negative particles - as was the direction of acceleration towards the anode. Bohr could therefore argue that Thomson's evidence shows that matter has electrons in it, so if matter is made up of atoms, then the atoms have electrons in them.
   If we take (iii), then Bohr knew of the earlier specific evidence of Rutherford's experiments (Geiger and Marsden) on the deflection of alpha particles by gold foil, which indicated(11) that the atom contained a small, positive, massive, core (rather than the smeared-out pudding of Thomson's Plum-pudding atom).
   If we take (ii), then Bohr knew of the evidence from chemistry investigations, constant proportions of mass of matter in chemical reactions, electrochemistry, and developments since Dalton.
   If we take (i), Bohr knew of the evidence of Brownian motion, as explained quantitatively by Einstein. He also knew of the success of the kinetic theory of gases, not only in leading to the Ideal Gas Equation, but also in making successful novel fact predictions, such as that the viscosity of a gas is independent of its pressure.
In other words, Bohr was building his theory on foundations already laid by previous investigators. This commonplace observation has relevance to the inverse Duhem-Quine problem: when a theory consists of many pieces, each of which has its own independent line of evidential support, there is no support problem. Investigators know which bits of evidence support which bits of the theory.

   Meanwhile, a programme which began with little truth-credit, and has then developed, in the face of further high-observability agreed claims, by a series of adjustments, none of which has gained any further truth-credit by succeeding against the odds, has only the little truth-credit it originally had. No particular part of the programme, and certainly not the low-observability entities, has gained truth-credit.
   And now we play the descriptive methodology card A3:  If the methods we are describing investigators as using don't provide them with decisive algorithms for locating the truth or falsity of low-observability claims, then such claims will just not be very reliable. Investigators will make mistakes. Don't hold the mistakes against them. We can only hope that they place no more than the justified/IP amount of confidence in their claims.
Instead of the critic's opponent being a philosopher trying to interpret the achievements of physics, it is now the physicists' opponent - Nature. The critic is recruited as a potential ally in this quest for victory - asked not to interpret physics from the outside, but to do physics: at a highly theoretical, methodological level, perhaps, but physics nonetheless.
   From this perspective, a traditional methodological problem, such as the Duhem-Quine problem, is not a weakness in a philosophical account of human investigation of Nature (conjectural realism) - a puzzle which must be solved, because our account must be made consistent with the true solidity of this investigation. It is a weakness in the best available human methods of investigation - it exposes the true shakiness of the game.

Hurdle 2: How can we hope to compare our low-observability claims with the true nature of the external world? (Verisimilitude)

   The correspondence interpretation of truth requires that we are checking on the link between our claims and the world; we look at the claim, we look at the world, and we see if they are related such that the property 'true' can be ascribed to the claim. 'Snow is white' is true, if and only if snow is white. Checking this relationship may be OK for high-observability claims, but if we have no high-observability check on how the world is, one side of the relationship is missing.
   True, of course. We have to fall back on indirect evidence , which is where the trouble starts.
The critic proposes that we can offer no justification for claimed degrees of verisimilitude for low-observability claims. We can offer no justification for claiming that low-observability claims make realistic progress towards the truth.
   This is a key problem for Popper and Lakatos - the criticism that their methodologies have no rational force - no right to claim to be normative. Feyerabend, amongst others, claimed that Lakatos was using 'degeneration' and 'progress', just as Popper had used 'severe test', without justifying the approving and disapproving connotations of these words(12). In other words, they had provided no justification for claiming that theories or research programmes that developed in certain ways were closer to, or further from, the truth, than others.
   Two problems with Conjectural Realism are here raised:
(i) Can we explain precisely what 'closer to the truth' means?
(ii) Can we justify the suggestion that a low-observability claim is close to the truth - or perhaps that one such claim is closer to the truth than another?
Response :
   Problem (i) is an analytical problem, of a kind which has not been solved in many everyday areas. We cannot explain precisely what 'closer to the truth' means, but we cannot explain what 'belief', 'game', 'chair', and 'mirror' mean either - yet we are able to use these words, and the associated vague ideas, successfully in thought and communication.
   Like any investigator seeking the truth, we only need to use words which are sufficiently precise to communicate our purpose. The onus is not on us to be more precise, but on the reader to explain why they have a problem understanding what we are saying. Suppose that the world truly consists of many localised objects, each made of a solid sphere about 0.1 nm across. We consider two claims: (i) the world is continuous and indivisible (ii) the world consists of many localised objects, about 0.1 nm across, each an orbiting system of smaller objects. We want to say that the second claim is 'closer to the truth'. Everyone agrees that this is correct; they understand what is being said. If we cannot establish, using other words, necessary and sufficient conditions for the correct use of this phrase, this does not represent some kind of black mark against it; it merely indicates that a particular aim, characteristic of analytic philosophy, has not yet been achieved.
We want to use the phrase to describe a certain relationship met often in everyday life. Suppose that the truth is that we went to the Yorkshire Dales for our family holiday in 1995. Is the reader actually unable to use the phrase 'closer to the truth' in connection with the following two claims: (i) we had our holiday in the Lake District in 1995 (ii) we had no holiday in 1995?  They are both false, but (i) is closer to the truth. It is very difficult to find an algorithm, in alternative, more precise, words, which could be substituted for all such uses of the phrase 'closer to the truth', and retain the same meaning in the sentences in normal English. So what?  Is the reader actually claiming not to understand what we mean?
   Absolute precision here is an unreasonable request. We are suggesting that this is an obsessive demand that we solve another problem, one that we have not set ourselves. We have not aimed for absolute precision in our language; we were always prepared to accept borderline cases. If describing the truth can be done effectively with somewhat vague words - fine.
   We note that the most severe problems with approximate truth arise for philosophers who are logicist - determined to consider propositions related to other propositions, rather than considering the models, the images of reality, that the propositions are about. Aronson, Harré, and Way make this point strongly in Realism Rescued.
   We now play cards A2 and A3. As Descriptive Methodologists, we have one considerable advantage over complete justificationists, be they First philosophers, naturalists, or analysts. We are not attempting to defend some specific amount of justification. We will be satisfied with whatever extent of justification we find. In other words, what we are seeking is the truth about physics. If the true extent of justification is very slight, if we have remarkably little reason for thinking that our best low-observability claims are any more reliable than random guesses, so it goes. The truth is the truth - and it is pointless to rail against it.
   If physicists are doing the best they can , such that we cannot actually suggest any better methods that they could be using - and if our only criticism of them turns out to be that they ignore sceptical doubts, and therefore display varying degrees of belief, varying degrees of confidence, in their low-observability claims, which are unjustifiable, then we have no reason for criticising them(13).
   We are here considering evidence, as opposed to meta-evidence; we postpone discussion of historical evidence for and against the claim that our best-supported claims have a good track-record.
   Feyerabend's criticisms are sceptical, as are most of the arguments that give realists that sick feeling in their stomachs. So, if sceptical doubts are quarantined, then these criticisms can be noted, located, accepted, and then effectively ignored.
   Our conjectural realism thus becomes the true account of physics by being bolstered by a "whiff of inductivism" - by the unjustified (perhaps unjustifiable, but it does not matter to us) inductive presupposition (IP), the claim that true generalisations, the true low-observability nature of things, make recorded high-observability events ones which have a high chance of occurring(14). It is the claim that our experiences give us a fair sample of Nature. It is the claim that our experiences are not systematically deceiving us as to the world's true general, or more hidden, nature.
   Consider some examples of the application of IP to claims with various degrees of observability:
Example 1: Thomson finally obtains high-observability evidence that a cathode ray has been deflected by an electric field. He has no evidence, no background claims, to suggest that one cathode ray is different from another. He concludes that "all cathode rays can be deflected by electric fields". He thus discounts the possibility that 99.99% of cathode rays cannot be so deflected, but that, by extreme ill-luck, he just happens to have been experimenting on one that can.
   In this example, IP supports an inductive generalisation.
Example 2 : Newton discovers that the low-observability claim of universal gravitation not only predicts Kepler's laws, given certain assumptions, but also successfully predicts the precession of the equinoxes and the tides, cases for which it was not designed. Later it also predicts successfully the existence of a new planet, Neptune. He concludes that such successes are not due to chance - the claim has some truth in it.
Example 3 : Young and Fresnel use the low-observability claim that light is a wave, in some unspecified medium, to explain various forms of interference, and diffraction. They show that the wavelength of the wave comes out the same, whichever phenomenon they choose. They also explain colours, reflection, refraction, and rectilinear propagation. They explain double refraction in Iceland spar crystals, by the idea of light as a transverse wave, which can be polarised. They successfully predict that the shadow of a small spherical object has a faint dot of light at its centre. They conclude that such successes are not due to chance - the claim has some truth in it.
Example 4 : Boltzmann and Maxwell use the low-observability claim of the existence of atoms to obtain the Ideal Gas Equation. Using the idea, they successfully predict that the viscosity of a gas is independent of its pressure. Chemists have been using the idea successfully for years. Einstein uses it to explain successfully, in mathematical detail, Brownian motion. They conclude that such successes are not due to chance - the claim that matter is made up of atoms has some truth in it.
   In this case, and the previous one, IP supports, via the successful prediction of novel facts, a very low-observability claim.
    These are cases sometimes described as abduction, or as Inference To The Best/IP Explanation (where 'best' is taken to mean 'that which makes the events most probable').
   How much truth, exactly, do these physicists think their low-observability claims gain from these successes? It would be nice if the investigators could put a numerical measure to it, but it is very difficult. They are not using any form of calculation; they are using impression, judgement. It is not the fault of the descriptive methodologist that we cannot answer this question; it is also not the fault of the investigators, who are doing their human best. It is disappointing, but we have to live with it - unless our critic comes up with a bright idea. It is no-one's fault, it is just the case.
   Inability to answer this question does not damn the Conjectural Realist.
   Bayes' theorem approximately describes the assignment of 'probability of containing truth' to a claim.
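A hedged illustration of this point: if H is a low-observability claim and E the recorded high-observability evidence, the direction (though not the magnitude) of the investigators' judgement can be indicated by Bayes' theorem:

```latex
% Bayes' theorem, read here only as a qualitative description of the
% assignment of truth-credit:
%   P(H|E) - credence in claim H after evidence E (posterior)
%   P(E|H) - how probable H makes the recorded events (the inductive
%            presupposition IP links a true H to a high value here)
%   P(H)   - prior credence in H
%   P(E)   - prior probability of the evidence
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

On this reading, a successful novel prediction is evidence E with a low prior P(E) but a high P(E|H), which raises the posterior P(H|E). None of these numbers is actually computed by investigators; the formula merely sketches, approximately, the shape of their impressionistic judgement.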

   Which part, exactly, of the low-observability claims is supposed to have gained the truth-credit?  Again it would be nice to be able to report that investigators had found a way of decisively locating the source of the truth-credit, but they have not. The question has no precise answer, more's the pity.
   The same applies to the Duhem-Quine problem, of which this is the inverse.
   Once again, the inability of the conjectural realist to answer this question more decisively is no discredit to him, since the investigators cannot answer it either.
   If investigating Nature were easy, if nice, tidy, precise algorithms gave the extent of truth-credit, and where it was located, there would be far less controversy in investigation. The existence of physics controversy is evidence for the non-existence of definitive answers to these questions. What the critic is asking for is a methodological system for investigating Nature of a quality which does not exist - because no human being knows how to invent one. If he thinks that this is not good enough, he is welcome to point this out to humanity - though he may receive a dusty answer.
   The critic might respond that what these feeble answers demonstrate is that the investigators do not know for sure where the truth is in the low-observability area of claims. The investigators agree.
   The critic might persist that this means that investigators should not pretend that claims like "Therefore light is a wave-motion in the aether" can be made with confidence. The investigators agree - and if they have been overconfident in the past, they apologise, and promise to try harder.
   The critic continues that claims like "There are quarks" are so far from high-observability claims that they do not deserve to be held with much confidence. The investigators re-run the available evidence; they judge that the deep inelastic scattering of electrons by protons, and the hadron jets, are pretty convincing, but they accept that the evidence is hardly decisive. They agree that they would not, for example, be prepared to wager their life savings, or their children's lives, on the truth of a further untested consequence of the quark hypothesis.
Finally the exasperated critic insists that the whole game is too shaky(15). This uncertainty, vagueness, and lack of justification, is unacceptable. A proper, positively testable, investigation, worthy of the name of a 'Science', would reject all this nonsense, and settle on properly observable evidence, and theories for predicting it. We'd give up pretending that we had any serious idea what was going on in the unobservable regions of Nature, and be satisfied with empirical adequacy.
   It is an understandable response. But, beyond being a plea that investigators should be careful - it is impossible to sustain. It is merely a denial of the messy business of investigating Nature. It is just a plea for a better, simpler, world. A1 ensures that it remains just a dream; there is no line marking a natural classification into observable and unobservable aspects of Nature. There are just claims that gradually become harder and harder to test, as they move further from immediate experience.

Hurdle 3: The Pessimistic Induction: Meta-evidence from the history of physics indicates that realistic claims are too dangerous.

   Meta-evidence does provide support/IP, because of A2. Note that any user of this argument is automatically accepting that the method of indirect evidence provides support/IP for low-observability claims - at the meta-level; he is therefore implicitly committed to the very methods which uniquely support/IP low-observability claims at the physics investigation level. This is not an inconsistency. But suppose he concluded that the meta-evidence supported the meta-claim that low-observability claims by physicists, despite being apparently supported/IP by evidence at the time, tend to be judged very unreliable, and not likely to be true, on the basis of later high-observability claims. He will then have to explain why this failure of the method is localised at the base level of physics, for some reason; otherwise the failure will equally operate at the meta-level, making him unable to support/IP the meta-claim with meta-evidence(16).
   Bill Newton-Smith, for example, uses meta-evidence to support/IP the claim that low-observability claims have often been successful.
   How dangerous is too dangerous?  The only level of support/IP at which low-observability claims become unacceptable in an investigation aimed at {Truth} is that at which the claims are no better supported/IP than a guess. Otherwise any level of support/IP is safe, as long as it is correctly and openly assessed - with no pretence that there is more support/IP than there is.
   Critics suggest that the evidence indicates (as marshalled by Laudan, for example) that modern physicists, placed in some historical situations, would have made low-observability existential claims, with some considerable degree of confidence; but, they continue, after later events - in particular high-observability results - they would now deny these claims, with equal, or greatly increased, confidence. If they accept that they would have been wrong in the past, why should we believe them now, when they make similar claims?
   Significant historical evidence of realistic mistakes is primarily an argument against provable realism - which is already fatally flawed. The impact of one definitive failure, one conclusive historical case where Nature has deceived us, foiled our best efforts at uncovering low-observability aspects of it, is disastrous only for the view that our methods are foolproof , that we are definitely making progress.
Response : A few significant mistakes are predictable. Why should we expect that our best investigative methods, applied to Nature, should lead us always towards the truth, with no journeys into blind alleys?  Is it surprising that some of Nature is constructed such that sometimes we have been deceived?  Random kinetic energy spreads from the hot end of an iron bar remarkably similarly to the flow of a fluid; anyone could be forgiven for having supposed that heat was a fluid. The planets are very nearly on circular orbits; anyone can be forgiven for supposing that they were circles. All the waves we experience normally are disturbances in a medium; anyone can be forgiven for having supposed that light waves were also in a medium. The speed of light is so much faster than all the speeds we ordinarily observe; anyone can be forgiven for having supposed that mass was independent of speed. And our ordinary experiences are in a reference frame dominated by apparent stability - of the Earth, the Sun, or the 'fixed stars'; anyone can be forgiven for having supposed that there was a unique, privileged, Absolute, reference frame for position and time.
   It would be more surprising if our local experiences always gave us infallible indications as to the low-observability aspects of Nature.
   We must fear that these kinds of mistakes will happen again, and again. All we can do is to guard against them as best we can . Unimpressive, perhaps, but no-one can do more.
   If realists have accepted that they do not know various of the lower-observability claims for sure, then it will hardly be a revelation to them that mistakes have been made in the past.
   The critic will only have significant evidence of Natural deception if:
(i) she has provided a decisive assessment of the extent of confidence of the consensus in, say, crystal spheres.
(ii) she establishes that the best available methods, if applied at that time, would still have led a modern consensus to the mistake.
   If, say, Fresnel argued that the waves explanation for optical phenomena only made it "highly likely" that there was a hidden, material, substance, the aether, in which the waves occurred, then a later, decisive, high-observability demonstration that there is no aether would not discredit his claim. Only certainty would lead to the most severe criticism.
   Did Newton, and the consensus, actually think that they had firmly established/IP that light was truly a beam of particles, or were they just working on it as the best available hypothesis?
   Methods have improved, perhaps. Present claims should not therefore automatically be tarred with the brush of past errors. Was Nature deceptive, in the sense that it would have deceived the present consensus, or did our predecessors not apply proper critical methods?
   If, say, there was no equivalent of the modern consensus, no dissemination by journals with refereeing, then the degree of confidence indicated by Ptolemy in his crystal spheres is irrelevant as evidence that Nature would have misled physicists operating present methods.
   Did Newton have arguments that Space and Time were absolute - arguments that would have satisfied the modern consensus, knowing what Newton knew?  Did he, for example, have successful novel fact predictions which followed specifically from this claim, as opposed to those that followed from it in combination with his laws?
   Some number of mistakes, some evidence of the inadequacy - or at least imperfection - of our current methods, is predictable and harmless to our justification.
  
We are not claiming that the present methods of physics are infallible - able, if carefully used, to spot truths at a hundred metres. We are claiming that they are the best - the least bad - methods we can devise. All we need to show is that there are no known better methods, and that the methods are reasonably/IP likely to be better than guesswork.
   We suggest that the historical record, mixed as it is, contains enough lasting success for past low-observability claims which were initially judged to have truth-credit on the basis of indirect evidence, to justify/IP investigators in using indirect evidence to assess the truth-credit of broadly similar claims today. There are enough failures to make us cautious, but not enough to damn the method overall.


Hurdle 4: How can Physicists decide when a research programme has degenerated so far that it is no longer worth continuing?

Response:
The conjectural realist jumps this classic hurdle by insisting, using A3, that it is not a hurdle. Methods M10-16 provide guidelines for the consensus judgement. That is all. If anyone can think of any better way of deciding, they should let Physicists know.


Hurdle 5 : Are there limits beyond which our best methods cannot reach - beyond which entity claims become pure speculation?

(We intend this as a serious hurdle. The associated anti-realist criticism, which is based on unsound arguments, is discussed in the next section)
Response: The most promising area for which this case could be argued is the one with which Duhem was primarily concerned - and which we should let overshadow his understandable, but unwise, rejection of the somewhat ridiculous mechanical atoms of Kelvin. He argued that physics, by its nature, despite its best methods, could not penetrate beyond the veil of appearance to things in themselves. Only by either mysticism or religion could we understand the true nature of the world.
   Human physics may indeed, because of limitations in its methods, be unable to capture some aspects of Nature. Perhaps it will always be stuck with appearances, with properties rather than substance - generating Harré's epistemological slide, where of any entity, however fundamental, we seem able to ask: "But what is it made of?  What is inside it?". Perhaps humans are fundamentally limited in their ability to describe things, stuck with the stock of concepts that macroscopic experience provides for them (Bohr's argument concerning quantum theory); beyond these, all we can hope to do is to devise mathematical structures for our explanations. Perhaps our particular sense organs permanently limit our ability to appreciate the existence of areas of Nature which they do not detect. Perhaps physicists will be permanently unable to integrate consciousness, the sensation of blueness, and the sensation of free will, into their physical picture of Nature, because, following Locke, Boyle, Galileo, and Newton, they have systematically excluded all secondary qualities from Nature - and there is now no way that they can derive them from the primary-quality model they have projected out into Nature.
   Only a prejudiced extreme realist would refuse to consider the possibility that such arguments may exist, for some regions of Nature. There may be weighty arguments for the existence of regions to which, even with the use of our very best investigative methods, our physics cannot penetrate; if the arguments are valid, then the game is up for entity-realism in these regions. Once again, the case-by-case approach is the correct one. We consider the arguments, and we make a judgement in each case. (I think that at least two of the above possibilities are worth serious consideration)
   These arguments for limits to entity-realism may be valid. As we will see, those presented by Van Fraassen are not.



Part 3: Criticisms of Conjectural Realism



Criticism 1 : You can conjecture that low-observability entities exist, but you cannot prove that they do.

Response
: True. A2, and our descriptive response to the three hurdles, are fatal to a provable realist. He thinks that the low-observability claims of physicists can be proved beyond doubt to be true, using indirect evidence; he may be a mythical beast. But the methods that physicists use are fallible; they become more so, the less observable the claims are. Even conditional on the inductive presupposition, the methods are still inconclusive. They establish/IP that a low-observability claim is not entirely a human fiction, but they do not prove which aspect resembles the truth.
   Conjectural realists - viewing themselves as defenders of the common-sense attitude taken by physicists to their claims - have been put on the defensive by the correct accusation that M are unjustified. They have been tarred with the provable realist's brush. But since they do not pretend that very low-observability hypotheses are provably true, they can watch the collapse of provable realism with equanimity.

Criticism 2: Physics investigators can only reasonably claim knowledge of entities in the high-observability area of Nature (Anti-realism).

  
It is suggested that in the low-observability area, physicists should be restricted to claiming that high-observability events occur as if the low-observability entities truly exist, since the available evidence, plus M , give no truth credit to the entities. Such entities are therefore purely speculative, human fictions; they are metaphysics masquerading as positive science.
   This criticism captures exciting anti-realism, of which we will take Bas Van Fraassen's The Scientific Image to be an example. It firmly contradicts the conjectural realist's argument, that we can hope to gain some indirect evidence, using M , that entities exist in all areas. Examples of the entities with which Van Fraassen (VF) is uneasy include space time, magnetic fields, and elementary particles (perhaps including atoms and molecules). Of these, it is the particles that provide the simplest battle-ground.
Response 1 : Aronson, Harré, and Way, (AHW) provide an important defence of CR against this criticism. They do not use A1 to force linguistic adaptation; they accept the traditional words and associated classifications (Kinds) provided by Van Fraassen; they accept, for instance, three kinds of entities: directly observable ones, ones accidentally unobservable because of time and technology, and ones wholly unobservable.
   They argue that investigators, having (Step 1) already induced within each kind that the methods M work (with meta-inductive evidential support), can then (Step 2) reasonably induce across the Kinds that the methods M which are known to work for the first two kinds, will also work for wholly unobservable entities.
In slightly more detail: (Step 1) As time has passed, and as technology has advanced, entities such as bacteria have passed from being conjectured on the basis of M alone - as the best explanation of the high-observability phenomena - to being seen through microscopes. We have meta-inductive evidence, in other words, that M are able truly to locate both observable, and accidentally unobservable entities.
(Step 2) What reason can VF give for saying that M can establish, indirectly, the true existence of an accidentally unobservable bacterium, but cannot establish the existence of a wholly unobservable electron? Does VF have any evidence that ease in observing putative entities is a significant factor linked to their existence? The key idea is that human observability is a quality that some entities have - but we have no reason to think that it is linked to any other quality - it is not a natural classification. Therefore there is no reason to doubt the effectiveness of M in establishing the true existence of unobservable entities, because in going from observable, to accidentally unobservable, to wholly unobservable, we have no reason to judge that we are changing natural kind.

   This powerful AHW argument can be simplified if, firstly (A1) we abandon the traditional language of observables and unobservables, and secondly (A2), we regard M as justified/IP, so that inductions within, and over, kinds, are not necessary.

Response 2 : Investigators seeking the truth - the whole truth - about Nature, must , for consistency, try to make claims about very low-observability entities. They can only avoid this necessity by arguing either:
(i) that there are no such entities to be found, or
(ii) that, according to present background knowledge, the best available methods could not provide any degree of support/IP for speculations concerning such entities.
VF does not attempt to argue for (i).
   To establish (ii) he needs to criticise M . But in criticising M , he faces two unattractive alternatives:
(i) Deny the general effectiveness of M . This is a doomsday weapon. Certainly it destroys the case for conjectural realism. It destroys almost everything. It does uncontrolled damage to the structure of human knowledge. He does not accept this alternative.
(ii) Justify what is particular about the area where M do not provide truth-credit for entities. This is the alternative VF wants to adopt, but because of A1, he is unable to provide the justification.
   Instead of doing so, at the key point he merely restates that physicists could be interpreted as just claiming that it is as-if low-observability entities exist. This is true, but the question is: Are they - would they be - justified in claiming that these entities exist?
   In more detail:

Step 1 : Language : VF cannot have his cake and eat it. He accepts A1; he accepts that there is a continuum of entities, for which our evidence gradually gets more and more indirect; he accepts that there is no observation/theory line. For our part, we agree that there is a big difference between the extremes - worth coining words like 'observable' and 'unobservable' for vague communication. This is the burden of the Sorites paradox; there is no clear line between a mountain and a hillock, yet there is a very big difference.
   Since philosophers are attempting precise work, we need to use language which helps us to remember the continuum; we must not use the digital words , because of the danger that they coax our thought back into the presupposition that there are natural kinds to which the words refer. If we continue to use the word 'unobservable', we will tend - forgetting A1- to assume that it refers to a kind of entity, which therefore has other common aspects - such as 'being impossible to establish as truly existing' or 'being only able to be established by indirect methods'. {Since AHW accept the digital words, they end up with the associated Kinds - which do not, they argue, have differences significant with respect to M . It is simpler not to accept the words in the first place; then the kinds never appear, and the "induction over types" is unnecessary}
   The only prophylactic is to use a single, appropriately qualified, word for the whole continuum. In the case of geographical protuberances, we can use 'protuberance(250)', where the number gives its height in metres; in our case we should use 'observability' - with appropriate qualifiers such as 'low observability' and 'high observability'. Van Fraassen's claims must be translated into this language. Any that lose their plausibility were surreptitiously trading on the digital aura of the word 'observable', 'unobservable', 'theoretical', 'factual'.
   For example, the proposal that investigators should restrict their claims concerning the existence of entities to those that are observable, translates into ...those that are more observable . But this is no restriction at all - it now provides no guideline. Investigators are merely being asked to be more cautious in their claims, the less observable the conjectured entity is - conjectural realism with the usual health warning.
   With the removal of the linguistic prop of 'observables' and 'unobservables', VF's criticism collapses.

Step 2 : If the agreed aim is {Truth about Nature}, then this implies a search for the whole truth. Unless we have evidence that, as we move to lower observability regions of Nature, there are no further entities, investigators must do their best to probe these regions.

Step 3: So VF has to argue that in these regions investigators doing their best can obtain nothing of value; the best methods M are powerless to establish any degree of confidence in the existence of these entities. He must argue that, despite A1, there are regions, identifiable by arguments A, where M are unable to provide any truth-credit. He needs to argue this, despite the fact that the indirect methods M used to establish the existence of entities remain the same through the border region; they just steadily reduce in effectiveness as the evidence becomes weaker - there is no sudden transition from using satisfactory methods M1 to using unsatisfactory ones M2 .
   Arguments A would show that as we pass across the loosely demarcated border region between the higher and lower observability entities, there is a radical reduction in the reasonable degree of confidence our methods M provide, to a level where all entities beyond the border region are merely unsupportable/IP speculations.
VF provides no arguments A .

Step 4 : His only alternative would be to deny M in general. But this is impossible, as he accepts. To do so would be to part company with too much human knowledge.
   In summary, either M work, or they do not. If they work, they work for entity-claims in all areas, however low-observability. If they do not work, human knowledge falls apart. VF has no way out.

   VF says that to propose that in general (p.71) "the {low-observability} evidence never warrants a conclusion that goes beyond it...is already quite unacceptable" and would (p.70) "lead to a self-defeating scepticism". He accepts Step 4. Yet he still wants to propose that, in a particular area of cases, high-observability evidence does not warrant a low-observability conclusion; (p.71) "when the theory has implications about what is not observable, the {high-observability} evidence does not warrant the conclusion that it {low-observability claim} is true". So he needs to defend the view that although, in general, high-observability claims do warrant conclusions that go beyond themselves, in some areas they do not. He needs to provide a justification for making an exception to Step 4.
   He agrees that a physicist will, at least sometimes, (p.71) "choose to accept that theory {set of low-observability claims} which is the best explanation {of the higher-observability evidence}" (My translations). Then he reminds us of his favoured alternative, that physicists could just be accepting the set of claims "as empirically adequate", and not claiming that the entities exist. True. But they could be accepting them, and claiming that the entities do truly exist. He needs to give a reason why the second case - which is what they say they are doing - is not justified. In the next sentence he reminds us of his alternative again. Then he reminds us of it again (p.72). Then ... he proceeds to a discussion of epistemology. He has provided no argument; he has merely stated three times that physicists could be agreeing with him.
   The reader should check on pp. 70-72, to see if I have missed something.
   VF is consistent in his inconsistency, since earlier in the book he criticises the miracle argument, which is again based on M . He wants to achieve the impossible: to deny the use of M when they are being used to support low-observability entities, but to retain their use in the ordinary cases that support the rest of human knowledge.

   Van Fraassen fatally fails to disprove that physicists are justified/IP, if they wish, in making inferences to the best explanation - in claiming that even very low-observability entity claims have some truth-credit. In this, they are not doing anything special - they are doing what comes naturally to all human beings.




Criticism 3: The aim of physics is the search for empirical adequacy, not for realistic entities.
The problem, this critic proposes, shouldn't have been that mealy-mouthed 'Can investigators reasonably carry on making certain low-observability claims, as long as they do not display inappropriate confidence in their truth?'; the problem is "What is the nature of science?". And the answer is either "It is an investigation to uncover the hidden entities in Nature", or "It is an investigation to obtain empirically adequate devices". One of these is right, and the other is wrong. Which is it?
Response: This formulation of the problem is misleading. And because it is misleading, a solution has been elusive - leading some commentators close to despair(17). The request to "find the nature of science" is an invitation to argument by persuasive definition, masquerading as conceptual analysis. We can argue about the reasonableness of conjecturing that entities in some area are real, regardless of whether any physicist has ever made such a conjecture. Appeals to 'what science is' are valueless in these arguments; we can ask what is reasonable, what is possible, and what is valued - but we cannot appeal to 'the true nature of physics' to support our case.
   Physics is not given to us, as an enterprise for us to investigate; it is not a concept presented for us to analyse. We have to highlight, from a complex interwoven network of human activities, certain strands that interest us. Using A3, we argued that we were interested in highlighting a type of investigation based on the primary aim of seeking the truth about Nature. This is what we are choosing physics to be. We could have chosen something else. We could, in particular, have chosen to highlight strands based on the primary aim of obtaining empirically adequate descriptions of high-observability phenomena (defined somehow). We did not do so, because we are not interested in this aim (and nor is Van Fraassen, and nor was Duhem).

Conclusion


   Conjectural Realism, a boring, moderate, view, supported by three assumptions, satisfactorily describes physics. It can be satisfactorily defended against criticism. All that survives of the anti-realist criticism is the boring caution, that as investigators' claims become lower-observability, the evidence for their truth becomes more indirect, and the justifiable/IP degree of confidence in them reduces; eventually the risk of falsity - the difficulty of checking against the evidence - is so great that claims should be labelled with a methodologists' health warning.
   We finish with the Thonemann-Levy diagram, indicating this boring situation: Investigators are welcome to conjecture entities; but as they become further from observation, their existence becomes less reliably supported/IP.
   The vertical dotted lines indicate the arbitrary limits hopefully imposed by extremists. No such limits exist.



   Philosophical views, like vampires, have a tendency to refuse to die. Andre Kukla (BJPS 1994) was surprised that (p.958) "there is very little to show for a generation of extensive examination of the realism issue".

   Chopping the top of the weed is not enough; it will grow again; we need to expose the roots. We need to be tough on intellectual crime, but also tough on the causes of crime. We need to understand the motives behind the anti-realist criticism - because it is a rationalisation of these motives. Comte, for example, though not an exciting anti-realist, worried about the low-observability extremes of investigation because he was determined to exclude realistically-interpreted religion from positive science. Van Fraassen's anti-realism is motivated by his desire to justify an interpretation of a quantum theory that appears unsatisfactory when read realistically, to which he repeatedly refers, just as Osiander's was by his desire to justify an interpretation of a realistically unsatisfactory cosmological theory.
   Boring conjectural realists will continue to go about their investigations of Nature. Whenever they strike serious problems, moderate thinkers will wonder whether humanity has finally reached an area into which its methods of investigation cannot realistically penetrate. They will adduce reasons, which must be taken seriously. Meanwhile, some extreme thinkers will rediscover the exciting idea that this area is nothing special, because actually humanity - despite appearances - does not truly penetrate beyond 'observation statements' into a whole larger area, labelled 'unobservable' or 'theoretical'.

Philip Thonemann

 

Footnotes:


1 It is ironic that philosophers of science should suffer from a desire to do what is not only typically human, but characteristic of the very investigation that they are investigating. Physicists - and philosophers - have looked for natural kinds, types of objects which, once identified by one quality, then have other deeply linked qualities (the proprium of classic taxonomy). True generalisations have then, they claim, been made about these abstracted kinds, because of the linked qualities. The difference is that physicists seem to have successfully found natural classifications, while philosophers of science have not. Without them, room for abstract claims, and then for symbolic representations of these claims, is limited. In particular, logical notation becomes misleading, carrying the inescapable implicit presumption that T1 and T2 (two theories) are two particular examples of a general, abstract, natural kind which possesses some characteristic properties. Being unable to deploy the powerful tools that work on abstractions - Formal Logic and Mathematics - is disappointing.
2 The idea that the use of everyday language implies acceptance of a set of theoretical presuppositions - and that these may blur thought, and obstruct change - is familiar in philosophy (see, for example, Paul Churchland's "Neolithic legacy", p.35). We are merely suggesting that philosophical language displays the same feature.
3 Bill Newton-Smith, for example, in his The Rationality Of Science, uses Induction in a meta-argument from the history of physics to support Realism, arguing that if it is OK for physicists to use it, it is OK for him. The implication is, more or less, to dare the reader to object. To refuse to allow its use opens the reader to the charge of hypocrisy, since she is undoubtedly using it all the time in her everyday life.
4 I am persuaded by, for example, Christopher Hookway's book Scepticism that the arguments of the Greek philosophers remain unanswered.
5 Is the history of philosophy the story of people trying to prove things that cannot be proven - people never prepared to take "No" for an answer?
6 Several philosophers have been Descriptive Methodologists. For example, John Stuart Mill makes the approach explicit in the opening of his System of Logic, and Peter Lipton expresses it clearly in his Inference To The Best Explanation.
7 See, for example, the 25 methods in my essay on 'Demarcation'.
8 This attitude is very similar to Arthur Fine's.
9 Which is not to suggest that every part of a theory is linked to its own individual piece of confirming evidence. This is as false as its mirror-image extreme. Sometimes, doubtless, there are serious problems for investigators in judging which parts of theory are being confirmed.
10 Our point is not whether, in making these claims, Thomson was justified.
11 Did the investigators have undue confidence in these claims? Our historical reading suggests that they did not; they were sensibly doubtful of the truth-credit of their claims. Thomson, for example, would not have suggested that he had sufficient evidence to claim that the atom was definitely like his plum-pudding model. It was his best guess; it was a speculation. Still, this is not relevant to our investigation, but only to the assessment of the extent of rationality of past physics.
12 It was pointed out, for instance, that Falsification depends on A2; otherwise a counter-example to a generalisation could be the one contrary instance, which we have unluckily come across, to a law which will otherwise work perfectly until the end of the Universe.
13 Given the agreed sterility, and paralysis, consequent on taking Sceptical Doubt seriously.
14 This assumption has, of course, been spotted by many philosophers. Poincaré refers to it; Popper makes it, somewhat unwillingly, explicit in his Conjectures And Refutations.
15 Arthur Fine's phrase.
16 This is the inverse of the circular argument which purports to show that a methodologist can use the success of abduction at the base level as meta-evidence for the claim that physics methods in general can uncover true low-observability claims. It is method-circular because it uses abduction at the meta-level - that this is the best explanation of the historical success, which is otherwise improbable - to try to justify the claim that abduction at the base level is justified. Arthur Fine, for example, criticises this argument on pp. 112-115 of The Shaky Game. It - an Inductive justification of Induction - continues to tempt people; cp. David Papineau in Philosophical Naturalism - though he presents it carefully as not persuasive to someone who does not already accept abduction. We do not feel that it provides any helpful justification of A2.
17 See, for example, Kukla.