Inductivism is the traditional and still commonplace philosophy of scientific method to develop scientific theories.James Ladyman, Understanding Philosophy of Science (London & New York: Routledge, 2002), pp 51–58Alan Chalmers, What is this Thing Called Science?, 3rd edn (St Lucia: University of Queensland Press, 1999), pp 49–58, particularly 49–50, 53–54 & 58. Inductivism aims to neutrally observe a domain, infer laws from examined cases—hence, inductive reasoning—and thus objectively discover the sole naturally true theory of the observed.John Pheby, Methodology and Economics: A Critical Introduction (Armonk, NY: M. E. Sharpe, 1988), p 3.
Inductivism's basis is, in sum, "the idea that theories can be derived from, or established on the basis of, facts". Evolving in phases, inductivism's conceptual reign spanned four centuries from Francis Bacon's 1620 Novum Organum, proposed against Western Europe's prevailing model, scholasticism, which reasoned deductively from preconceived beliefs.
In the 19th and 20th centuries, inductivism succumbed to hypotheticodeductivism—sometimes termed deductivism—as scientific method's realistic idealization. Yet scientific theories themselves are now widely attributed to occasions of inference to the best explanation, IBE, which, like scientists' actual methods, are diverse and not formally prescribable.
Near 1740, David Hume, in Scotland, identified multiple obstacles to inferring causality from experience. Hume noted the formal illogicality of enumerative induction—unrestricted generalization from particular instances to all instances to state a universal law—since humans observe sequences of sensory events, not cause and effect. Perceiving neither logical nor natural necessity or impossibility among events, humans tacitly postulate the uniformity of nature, unproved. Later philosophers would select, highlight, and nickname Humean principles—Hume's fork, the problem of induction, and Hume's law—although Hume respected and accepted the empirical sciences as inevitably inductive, after all.
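Hume's formal point can be put schematically. The following sketch, in modern notation rather than Hume's own, shows the shape of enumerative induction and why it is deductively invalid:

```latex
% Enumerative induction: from observed instances to a universal law.
%   Premises:   each observed A (a_1, ..., a_n) has been B.
%   Conclusion: all A are B.
\frac{A(a_1) \wedge B(a_1), \;\ldots,\; A(a_n) \wedge B(a_n)}
     {\forall x \,\bigl( A(x) \rightarrow B(x) \bigr)}
% The premises are consistent with the conclusion's falsity: some
% unobserved A may fail to be B. The step is therefore not deductively
% valid; it tacitly presupposes the uniformity of nature, itself unproved.
```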
Immanuel Kant, in Germany, alarmed by Hume's seemingly radical empiricism, identified its apparent opposite, rationalism, in Descartes, and sought a middle ground. Kant intuited that necessity indeed exists, bridging the ding-an-sich to human experience, and that it is the mind, having innate constants that determine space, time, and substance, that ensures the empirically correct physical theory's universal truth.Hans Reichenbach, in Maria Reichenbach & R. S. Cohen, eds., Vienna Circle Collection, Vol. 4B: Hans Reichenbach Selected Writings 1909–1953 (Dordrecht: Springer, 1978). Thus shielding Newtonian physics by discarding scientific realism, Kant's view limited science to tracing appearances, mere phenomena, never unveiling external reality, the noumena. Kant's transcendental idealism launched German idealism, a family of speculative metaphysics.
While philosophers widely continued awkward confidence in empirical sciences as inductive, John Stuart Mill, in England, proposed five methods to discern causality, the way genuine inductivism purportedly exceeds enumerative induction. In the 1830s, opposing metaphysics, Auguste Comte, in France, explicated positivism, which, unlike Bacon's model, emphasizes making predictions, confirming them, and laying down scientific laws irrefutable by theology or metaphysics. Mill, viewing experience as affirming uniformity of nature and thus justifying enumerative induction, endorsed positivism—the first modern philosophy of science—which, also a political philosophy, upheld scientific knowledge as the only genuine knowledge.
The logical positivists arose in the 1920s, rebuked metaphysical philosophies, accepted hypotheticodeductivist theory origin, and sought to objectively vet scientific theories—or any statement beyond emotive—as provably false or true as to merely empirical facts and logical relations, a campaign termed verificationism. In its milder variant, Rudolf Carnap tried, but always failed, to find an inductive logic whereby a universal law's truth via observational evidence could be quantified by "degree of confirmation". Karl Popper, asserting since the 1930s a strong hypotheticodeductivism called falsificationism, attacked inductivism and its positivist variants, then in 1963 called enumerative induction "a myth", a deductive inference from a tacit, explanatory theory. In 1965, Gilbert Harman explained enumerative induction as a masked IBE.
Thomas Kuhn's 1962 book, a cultural landmark, explains that periods of normal science, as but phases of science, are each overturned by revolutionary science, whose radical paradigm becomes the new normal science. Kuhn's thesis dissolved logical positivism's grip on Western academia, and inductivism fell. Besides Popper and Kuhn, other postpositivist philosophers of science—including Paul Feyerabend, Imre Lakatos, and Larry Laudan—have all but unanimously rejected inductivism. Those who assert scientific realism—which interprets scientific theory as reliably and literally, if approximately, true regarding nature's unobservable aspects—generally attribute new theories to IBE. And yet IBE, which, so far, cannot be trained, lacks particular rules of inference. By the 21st century's turn, inductivism's heir was Bayesianism.Nola & Sankey, Popper, Kuhn and Feyerabend (Kluwer, 2000), p xi.
Particularly after the 1960s, scientists became unfamiliar with the historical and philosophical underpinnings of their own research programs, and often unfamiliar with logic. Scientists thus often struggle to evaluate and communicate their own work against question or attack or to optimize methods and progress. In any case, during the 20th century, philosophers of science accepted that scientific method's truer idealization is hypotheticodeductivism, which, especially in its strongest form, Karl Popper's falsificationism, is also termed deductivism.Achinstein, Science Rules (JHU P, 2004), pp 127, 130.
Extending inductivism, Auguste Comte's positivism explicitly aims to oppose metaphysics, shuns imaginative theorizing, emphasizes observation, then makes predictions, confirms them, and states laws.
Logical positivism accepted hypotheticodeductivism in theory development, but sought an inductive logic to objectively quantify a theory's confirmation by empirical evidence and, additionally, objectively compare rival theories.
Inductivism
Confirmation
Scientific method cannot ensure that scientists will imagine, much less will or even can perform, inquiries or experiments inviting disconfirmations. Further, any data collection projects a horizon of expectation—how even objective facts, direct observations, are theory-laden—whereby incompatible facts may go unnoticed. And the experimenter's regress permits disconfirmation to be rejected by inferring that unnoticed entities or aspects unexpectedly altered the test conditions.Harry Collins & Trevor Pinch, The Golem: What You Should Know About Science, 2nd edn (New York: Cambridge University Press, 1998), p 3. A hypothesis can be tested only conjoined to countless auxiliary hypotheses, mostly neglected until disconfirmation.
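The point about auxiliary hypotheses, often called the Duhem–Quine problem, can be sketched in logical form:

```latex
% A hypothesis H yields a testable prediction O only together with
% auxiliary hypotheses A_1, ..., A_n (instruments work, conditions
% are normal, background theory holds, ...):
(H \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow O
% A failed prediction refutes only the whole conjunction:
\neg O \;\vdash\; \neg (H \wedge A_1 \wedge \cdots \wedge A_n)
% Logic alone cannot say whether H or some auxiliary A_i is false,
% which is what lets a disconfirmation be deflected onto the test
% conditions, as in the experimenter's regress.
```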
In Popperian hypotheticodeductivism, sometimes called falsificationism, although one aims for a true theory, one's main tests of the theory are efforts to empirically refute it. Falsificationism values confirmations mainly when they result from testing risky predictions that seem likeliest to fail. If the theory's bizarre prediction is empirically confirmed, then the theory is strongly corroborated, but, never upheld as metaphysically true, it is granted simply verisimilitude, the appearance of truth and thus a likeness to truth.Achinstein, Science Rules (JHU P, 2004), pp 127, 130, 132–33.
Auguste Comte, in France of the early 19th century, opposing metaphysics, introduced positivism as, in essence, refined inductivism and a political philosophy. The contemporary urgency of the positivists and of the neopositivists—the logical positivists, emerging in Germany and Vienna in World War I's aftermath, and attenuating into the logical empiricists in America and England after World War II—reflected the sociopolitical climate of their own eras. The philosophers perceived dire threats to society from metaphysical theories, which they associated with religious and sociopolitical, and thereby social and military, conflicts.
In Novum Organum, Bacon uses the term hypothesis rarely, and usually in pejorative senses, as prevalent in Bacon's time.McMullin, ch 2 in Lindberg & Westman, eds, Reappraisals of the Scientific Revolution (Cambridge U P, 1990), p 48. Yet ultimately, as applied, Bacon's term axiom is now more similar to the term hypothesis than to the term law, while the term law is now nearer to an axiom, a rule of inference. By the 20th century's close, historians and philosophers of science generally agreed that Bacon's actual counsel was far more balanced than it had long been stereotyped, while some assessments even ventured that Bacon had described falsificationism, presumably as far from inductivism as one can get. In any case, Bacon was not a strict inductivist and included aspects of hypotheticodeductivism, but those aspects of Bacon's model were neglected by others, and the "Baconian model" was regarded as true inductivism—which it mostly was.McMullin, ch 2 in Lindberg & Westman, eds, Reappraisals of the Scientific Revolution (Cambridge U P, 1990), p 54.
In Bacon's estimation, during this repeating process of modest axiomatization confirmed by extensive and minute observations, axioms expand in scope and deepen in penetrance tightly in accord with all the observations. This, Bacon proposed, would open a clear and true view of nature as it exists independently of human preconceptions. Ultimately, the general axioms concerning observables would render matter's unobservable structure and nature's causal mechanisms discernible by humans.McMullin, ch 2 in Lindberg & Westman, eds, Reappraisals of the Scientific Revolution (Cambridge U P, 1990), p 52: "Bacon rejects atomism because he believes that the corollary doctrines of the vacuum and the unchangeableness of the atoms are false (II, 8). But he asserts the existence of real imperceptible particles and other occult constituents of bodies (such as 'spirit'), upon which the observed properties of things depend (II, 7). But how are these to be known? He asks us not to be 'alarmed at the subtlety of the investigation', because 'the nearer it approaches to simple natures, the easier and plainer will everything become, the business being transferred from the complicated to the simple...as in the case of the letters of the alphabet and the notes of music' (II, 8). And then, somewhat tantalizingly, he adds: 'Inquiries into nature have the best result when they begin with physics and end with mathematics'. Bacon believes that the investigator can 'reduce the non-sensible to the sensible, that is, make manifest things not directly perceptible by means of others which are' (II, 40)". But, as Bacon provides no clear way to frame axioms, let alone develop principles or theoretical constructs universally true, researchers might observe and collect data endlessly. 
For this vast venture, Bacon advised precise record keeping and collaboration among researchers—a vision resembling today's research institutes—while the true understanding of nature would permit technological innovation, heralding a New Atlantis.
In 1666, Isaac Newton fled London from the bubonic plague. Isolated, he applied rigorous experimentation and mathematics, including development of calculus, and reduced both terrestrial motion and celestial motion—that is, both physics and astronomy—to one theory stating Newton's laws of motion, several corollary principles, and law of universal gravitation, set in a framework of postulated absolute space and absolute time. Newton's unification of celestial and terrestrial phenomena overthrew vestiges of Aristotelian physics, and disconnected physics from chemistry, which each then followed its own course.Stahl et al., Webs of Reality (Rutgers U P), ch 2 "Newtonian revolution". Newton became the exemplar of the modern scientist, and the Newtonian research program became the modern model of knowledge. Although absolute space, revealed by no experience, and a force acting at a distance discomforted Newton, he and physicists for some 200 years more would seldom suspect the fictional character of the Newtonian foundation, as they believed that physical concepts and laws were not "free inventions of the human mind", as Einstein in 1933 called them, but could be inferred logically from experience.Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), p 436. Supposedly, Newton maintained that toward his gravitational theory he had "framed" no hypotheses.
For Hume, humans experience sequences of events, not cause and effect, by pieces of sensory data whereby similar experiences might exhibit merely constant conjunction—first an event like A, and then always an event like B—but there is no revelation of causality to reveal either necessity or impossibility. Although Hume apparently enjoyed the scandal that trailed his explanations, Hume did not view them as fatal,Flew, Dictionary (St Martin's, 1984), "Hume", p 156. and interpreted enumerative induction to be among the mind's unavoidable customs, required in order for one to live.Gattei, Karl Popper's Philosophy of Science (Routledge, 2009), pp 28–29. Rather, Hume sought to counter the Copernican displacement of humankind from the Universe's center, and to redirect intellectual attention to human nature as the central point of knowledge.Flew, Dictionary (St Martin's, 1984), "Hume", p 154: "Like Kant, Hume sees himself as conducting an anti-Copernican counter-revolution. Through his investigations of the heavens, Copernicus knocked the Earth, and by implication man, from the centre of the Universe. Hume's study of our human nature was to put that at the centre of every map of knowledge".
Hume accepted inductivism not only toward enumerative induction but also toward unobservable aspects of nature. Not demolishing Newton's theory, then, Hume placed his own philosophy on par with it.Schliesser, "Hume's Newtonianism and anti-Newtonianism", § intro, in SEP. Though skeptical of common metaphysics or theology, Hume accepted "genuine Theism and Religion" and found that a rational person must believe in God to explain the structure of nature and order of the universe.Redman, Rise of Political Economy as a Science (MIT P, 1997), p 183. Still, Hume had urged, "When we run over libraries, persuaded of these principles, what havoc must we make? If we take into our hand any volume—of divinity or school metaphysics, for instance—let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion".Flew, Dictionary (St Martin's, 1984), "Hume's fork", p 156.
Kant sorted statements, rather, into two types, analytic versus synthetic. The analytic, true by their terms' syntax and semantics, are tautologies, mere logical truths—thus true by logical truth—whereas the synthetic apply meanings toward factual states, which are contingent. Yet some synthetic statements, presumably contingent, are necessarily true, because of the mind, Kant argued.McWherter, The Problem of Critical Ontology (Palgrave, 2013), p 38: "Since Hume reduces objects of experience to spatiotemporally individuated instances of sensation with no necessary connection to each other (atomistic events), the closest they can come to a causal relation is a regularly repeated succession (constant conjunction), while for Kant the task of transcendental synthesis is to bestow unity and necessary connections upon the atomistic and contingently related contributions of sensibility". Kant's synthetic a priori, then, buttressed both physics—at the time, Newtonian—and metaphysics, too, but discarded scientific realism. This realism regards scientific theories as literally true descriptions of the external world. Kant's transcendental idealism triggered German idealism, including G W F Hegel's absolute idealism.
According to Comte, scientific method constrains itself to observations, but frames predictions, confirms them, and states laws—positive statements—irrefutable by theology and by metaphysics, and then lays the laws as foundations of knowledge.Antony Flew, A Dictionary of Philosophy, 2nd edn (New York: St Martin's Press, 1984), "positivism", p 283. Later, concluding science insufficient for society, however, Comte launched the Religion of Humanity, whose churches, honoring eminent scientists, led worship of humankind. Comte coined the term altruism, and emphasized applied science for humankind's social welfare, which would be revealed by Comte's spearheaded science, sociology. Comte's influence is prominent in Herbert Spencer of England and in Émile Durkheim of France, both establishing modern empirical, functionalist sociology. Influential in the latter 19th century, positivism was often linked to evolutionary theory, yet was eclipsed in the 20th century by neopositivism: logical positivism or logical empiricism.
Before Germany's lead in science, France's was upended by the first French Revolution, whose Reign of Terror beheaded Lavoisier, reputedly for selling diluted beer, and led to the Napoleonic Wars. Amid such crisis and tumult, Auguste Comte inferred that society's natural condition is order, not change. As in Saint-Simon's industrial utopianism, Comte's vision, as later upheld by modernity, positioned science as the only objective true knowledge and thus also as industrial society's secular spiritualism, whereby science would offer political and ethical guidance.
Positivism reached Britain well after Britain's own lead in science had ended. British positivism, as witnessed in Victorian era ethics of utilitarianism—for instance, in J S Mill's utilitarianism and later in Herbert Spencer's social evolutionism—associated science with moral improvement, but rejected science for political leadership. For Mill, all explanations held the same logical structure—thus, society could be explained by natural laws—yet Mill criticized "scientific politics". From its outset, then, sociology was pulled between moral reform and administrative policy.
Herbert Spencer helped popularize the word sociology in England, and compiled vast data aiming to infer general theory through empirical analysis. Spencer's 1850 book Social Statics shows Comtean as well as Victorian concern for social order. Yet whereas Comte's social science was a social physics, as it were, Spencer took biology—later by way of Darwinism, so called, which arrived in 1859—as the model of science, a model for social science to emulate. Spencer's functionalist, evolutionary account identified social structures as functions that adapt, such that analysis of them would explain social change.
In France, Comte's sociological influence shows in Émile Durkheim, whose The Rules of Sociological Method (1895) likewise posed natural science as sociology's model. For Durkheim, social phenomena are functions without psychologism—that is, operating without consciousness of individuals—while sociology is antinaturalist, in that social facts differ from natural facts. Still, per Durkheim, social representations are real entities observable, without prior theory, by assessing raw data. Durkheim's sociology was thus realist and inductive, whereby theory would trail observations while scientific method proceeds from social facts to hypotheses to causal laws discovered inductively.
Also optimistic, some of the appalled German and Austrian intellectuals were inspired by breakthroughs in philosophy,Crucial influences were Wittgenstein's philosophy of language in Tractatus Logico-Philosophicus, Bertrand Russell's logical atomism, and Ernst Mach's phenomenalism as well as Machian positivism. mathematics,NonEuclidean geometries—that is, geometry on curved surfaces or in "curved space"—were the first major advances in geometry since Euclid in ancient Greece. symbolic logic,In the 1870s, through vast work, Peirce as well as Gottlob Frege independently resolved deductive inference, which had not been developed since antiquity, as equivalent to mathematical proof. Later, Gottlob Frege and Bertrand Russell launched the program logicism to reconstruct mathematics wholly from logic—a reduction of mathematics to logic as the foundation of mathematics—and thereby render irrelevant such idealist or Platonic realist suppositions of independent mathematical truths, abstract objects real and yet nonspatial and nontemporal. Frege abandoned the program, yet Russell continued it with Whitehead before they, too, abandoned it. and physics,In particular, Einstein's general theory of relativity was their paradigmatic model of science, although questions provoked by emergence of quantum mechanics also drew some focus. and sought to lend humankind a transparent, universal language competent to vet statements for either logical truth or empirical truth, ending confusion and irrationality.
In their envisioned, radical reform of Western philosophy to transform it into scientific philosophy, they studied exemplary cases of empirical science in their quest to turn philosophy into a special science, like biology and economics.According to an envisioned unity of science, within the empirical sciences—but not the formal sciences, which are abstract—there is fundamental science as fundamental physics, whereas all other sciences—including chemistry, biology, astronomy, geology, psychology, economics, sociology, and so on—are the special sciences, in principle derivable from as well as reducible to fundamental science. The Vienna Circle, including Otto Neurath, was led by Moritz Schlick, and had been converted to the ambitious program by its member Rudolf Carnap, whom the Berlin Circle's leader Hans Reichenbach had introduced to Schlick. Carl Hempel, who had studied under Reichenbach and would be a Vienna Circle alumnus, would later lead the movement from America, which, along with England, received many logical positivists emigrating during Hitler's regime.
The Berlin Circle and the Vienna Circle became called—or, soon, were often stereotyped as—the logical positivists or, in a milder connotation, the logical empiricists or, in any case, the neopositivists. Rejecting Immanuel Kant's synthetic a priori, they asserted Hume's fork.Concerning metaphysics, the logical truth is a state true in all possible worlds—mere logical validity—whereas the contingent hinges on the way the particular world is.
Pursuing both Bertrand Russell's program of logical atomism, which aimed to deconstruct language into supposedly elementary parts, and Russell's endeavor of logicism, which would reduce swaths of mathematics to symbolic logic, the neopositivists envisioned both natural language and mathematics—thus physics, too—sharing a logical syntax in symbolic logic. To gain cognitive meaningfulness, theoretical terms would be translated, via correspondence rules, into observational terms—thus revealing any theory's actually empirical claims—and then operationalism would verify them within the observational structure, related to the theoretical structure through the logical syntax. Thus, a logical calculus could be operated to objectively verify the theory's truth value. With this program termed verificationism, logical positivists battled the Marburg school's neo-Kantianism, Edmund Husserl's phenomenology, and, as their very epitome of philosophical transgression, Martin Heidegger's "existential hermeneutics", which Rudolf Carnap accused of the most flagrant "pseudostatements".Godfrey-Smith, Theory and Reality (U Chicago P, 2003), pp 24–25.
Popper accepted hypotheticodeductivism, sometimes terming it deductivism, but restricted it to denying the consequent and, thereby refuting verificationism, reframed the criterion as falsifiability. As to law or theory, Popper held confirmation of probable truth to be untenable, as any number of confirmations is finite: empirical evidence approaches 0% probability of truth against a universal law's predictive run to infinity. Popper even held that a scientific theory is better if its truth appears most improbable.Mary Hesse, "Bayesian methods and the initial probabilities of theories", pp 50–105, in Maxwell & Anderson, eds (U Minnesota P, 1975), p 100: "There are two major contending concepts for the task of explicating the simplicity of hypotheses, which may be described respectively as the concepts of content and economy. First, the theory is usually required to have high power or content; to be at once general and specific, and to make precise and detailed claims about the state of the world; that is, in Popper's terminology, to be highly falsifiable. This, as Popper maintains against all probabilistic theories of induction, has the consequence that good theories should be in general improbable, since the more claims a theory makes on the world, other things being equal, the less likely it is to be true. On the other hand, as would be insisted by inductivists, a good theory is one that is more likely than its rivals to be true, and in particular it is frequently assumed that simple theories are preferable because they require fewer premises and fewer concepts, and hence would appear to make fewer claims than more complex rivals about the state of the world, and hence be more probable". Logical positivism, Popper asserted, "is defeated by its typically inductivist prejudice".Karl Popper, The Two Fundamental Problems of the Theory of Knowledge (Abingdon & New York: Routledge, 2009), p 20.
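Popper's restriction of the hypotheticodeductive scheme to denying the consequent is the valid inference form modus tollens; verification, by contrast, would require the invalid form affirming the consequent. Schematically:

```latex
% Modus tollens (valid): a failed prediction refutes the theory.
(T \rightarrow O), \; \neg O \;\vdash\; \neg T
% Affirming the consequent (invalid): a successful prediction does
% not establish the theory, since rival theories may entail O as well.
(T \rightarrow O), \; O \;\nvdash\; T
```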
Rather than validate enumerative induction—the futile task of showing it a deductive inference—some sought simply to vindicate it. Herbert Feigl as well as Hans Reichenbach, apparently independently, thus sought to show enumerative induction simply useful, either a "good" or the "best" method for the goal at hand, making predictions.Grover Maxwell, "Induction and empiricism: A Bayesian-frequentist alternative", in pp 106–65, Maxwell & Anderson, eds (U Minnesota P, 1975), pp 111–17. Feigl posed it as a rule, thus neither a priori nor a posteriori but a fortiori. Reichenbach's treatment, similar to Pascal's wager, posed it as entailing greater predictive success versus the alternative of not using it.
In 1936, Rudolf Carnap switched the goal of scientific statements' verification, clearly impossible, to the goal of simply their confirmation. Meanwhile, similarly, ardent logical positivist A J Ayer identified two types of verification—strong versus weak—the strong being impossible, but the weak being attained when the statement's truth is probable.Wilkinson & Campbell, Philosophy of Religion (Continuum, 2009), p 16. Ayer, Language, Truth and Logic, 2nd edn (Gollancz/Dover, 1952), pp 9–10. In this mission, Carnap sought to apply probability theory to formalize inductive logic by discovering an algorithm that would reveal "degree of confirmation". Employing abundant logical and mathematical tools, Carnap never attained the goal: his formulations of inductive logic always held a universal law's degree of confirmation at zero.Murzi, "Rudolf Carnap", IEP.
Kurt Gödel's incompleteness theorem of 1931 made the neopositivists' logicism, the reduction of mathematics to logic, doubtful. But then Alfred Tarski's undefinability theorem of 1934 made it hopeless.Hintikka, "Logicism", in Philosophy of Mathematics (North Holland, 2009), pp 283–84. Some, including logical empiricist Carl Hempel, argued for its possibility anyway. After all, nonEuclidean geometry had shown that even geometry's truth via axioms occurs among postulates, by definition unproved. Meanwhile, as to mere formalism, which converts everyday talk into formal logic but does not reduce it to logic, the neopositivists, though accepting hypotheticodeductivist theory development, upheld symbolic logic as the language to justify, by verification or confirmation, its results. But then Hempel's paradox of confirmation highlighted that, in formal logic, confirmatory evidence of the hypothesized universal law All ravens are black—implying All nonblack things are not ravens—defines a white shoe, in turn, as a case confirming All ravens are black.Bechtel, Philosophy of Science (Lawrence Erlbaum, 1988), pp 24–27.
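The raven paradox turns on an elementary logical equivalence, which can be sketched as:

```latex
% The law and its contrapositive are logically equivalent:
\forall x \, \bigl( R(x) \rightarrow B(x) \bigr)
  \;\equiv\;
\forall x \, \bigl( \neg B(x) \rightarrow \neg R(x) \bigr)
% If instances confirm a universal law (Nicod's criterion), then a
% white shoe, being a nonblack nonraven, instantiates the contrapositive
% and thereby "confirms" that all ravens are black.
```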
Once one observes the facts, William Whewell held, "there is introduced some general conception, which is given, not by the phenomena, but by the mind". Whewell called this "colligation", uniting the facts with a "hypothesis"—an explanation—that is an "invention" and a "conjecture". In fact, one can colligate the facts via multiple, conflicting hypotheses. So the next step is testing the hypothesis. Whewell seeks, ultimately, four signs: coverage, abundance, consilience, and coherence.
First, the idea must explain all phenomena that prompted it. Second, it must predict more phenomena, too. Third, in consilience, it must be discovered to encompass phenomena of a different type. Fourth, the idea must nest in a theoretical system that, not framed all at once, developed over time and yet became simpler meanwhile. On these criteria, the colligating idea is naturally true, or probably so. Although devoting several chapters to "methods of induction" and mentioning a "logic of induction", Whewell stressed that the colligating "superinduction" lacks rules and cannot be trained. Whewell also held that Bacon, not a strict inductivist, "held the balance, with no partial or feeble hand, between phenomena and ideas".
Amid increasingly apparent contradictions in neopositivism's central tenets—the verifiability principle, the analytic/synthetic division, and the observation/theory gap—Hempel in 1965 abandoned the program for a far wider conception of "degrees of significance". This signaled neopositivism's official demise.Fetzer, "Carl Hempel", §3 "Scientific reasoning", in SEP: "The need to dismantle the verifiability criterion of meaningfulness together with the demise of the observational/theoretical distinction meant that logical positivism no longer represented a rationally defensible position. At least two of its defining tenets had been shown to be without merit. Since most philosophers believed that Quine had shown the analytic/synthetic distinction was also untenable, moreover, many concluded that the enterprise had been a total failure. Among the important benefits of Hempel's critique, however, was the production of more general and flexible criteria of cognitive significance in Hempel (1965b), included in a famous collection of his studies, Aspects of Scientific Explanation (1965d). There he proposed that cognitive significance could not be adequately captured by means of principles of verification or falsification, whose defects were parallel, but instead required a far more subtle and nuanced approach".
In 1958, Norwood Hanson's book Patterns of Discovery subverted the putative gap between observational terms and theoretical terms, a putative gap whereby direct observation would permit neutral comparison of rival theories. Hanson explains that even direct observations, the scientific facts, are theory-laden, which guides the collection, sorting, prioritization, and interpretation of direct observations, and even shapes the researcher's ability to apprehend a phenomenon.Caldwell, Beyond Positivism (Routledge, 1994), p 47–48. Meanwhile, even as to general knowledge, Quine's thesis eroded foundationalism, which retreated to modesty.Poston, "Foundationalism", § intro, in IEP: "The Otto Neurath–Moritz Schlick debate transformed into a discussion over nature and role of observation sentences within a theory. Quine (1951) extended this debate with his metaphor of the semantic holism in which observation sentences are able to confirm or disconfirm a hypothesis only in connection with a larger theory. Peter Sellars (1963) criticizes foundationalism as endorsing a flawed model of the cognitive significance of experience. Following the work of Quine and Sellars, a number of people arose to defend foundationalism (see section below on modest foundationalism). This touched off a burst of activity on foundationalism in the late 1970s to early 1980s. One of the significant developments from this period is the formulation and defense of reformed epistemology, a foundationalist view that took, as the foundations, beliefs such as there is a God (see Alvin Plantinga (1983)). While the debate over foundationalism has abated in recent decades, new work has picked up on neglected topics about the architecture of knowledge and justification".
Structure explains science as puzzle-solving toward a vision projected by the "ruling class" of a scientific specialty's community, whose "unwritten rulebook" dictates acceptable problems and solutions, altogether normal science. The scientists reinterpret ambiguous data, discard anomalous data, and try to stuff nature into the box of their shared paradigm—a theoretical matrix or fundamental view of nature—until compatible data become scarce, anomalies accumulate, and scientific "crisis" ensues. Some young scientists, newly in training, defect to revolutionary science, which, by simultaneously explaining both the normal data and the anomalous data, resolves the crisis by setting a new "exemplar" that contradicts normal science.
Kuhn explains that rival paradigms, having incompatible languages, are incommensurable. Trying to resolve conflict, scientists talk past each other, as even direct observations—for example, that the Sun is "rising"—get fundamentally conflicting interpretations. Some working scientists convert by a perspectival shift that—to their astonishment—snaps the new paradigm, suddenly obvious, into view. Others, never attaining such a switch, remain holdouts, committed for life to the old paradigm. One by one, holdouts die. Thus, the new exemplar—the new, unwritten rulebook—settles in as the new normal science. The old theoretical matrix becomes so shrouded by the meanings of terms in the new theoretical matrix that even philosophers of science misread the old science.
And thus, Kuhn explains, a revolution in science is fulfilled. Kuhn's thesis critically destabilized confidence in foundationalism, which was generally, although erroneously, presumed to be one of logical empiricism's key tenets.Friedman, Reconsidering Logical Positivism (Cambridge, 1999), p 2. As logical empiricism was extremely influential in the social sciences,Novick, That Noble Dream (Cambridge U P, 1988), p 546. Kuhn's ideas were rapidly adopted by scholars in disciplines well outside the natural sciences, where Kuhn's analysis occurs.Novick, That Noble Dream (Cambridge U P, 1988), pp 526–27. Kuhn's thesis in turn was attacked, however, even by some of logical empiricism's opponents. In Structure's 1970 postscript, Kuhn asserted, mildly, that science at least lacks an algorithm. On that point, even Kuhn's critics agreed.Okasha, Philosophy of Science (Oxford U P, 2002), pp 91–93, esp pp 91–92: "In rebutting the charge that he had portrayed paradigm shifts as non-rational, Kuhn made the famous claim that there is 'no algorithm' for theory choice in science. What does this mean? An algorithm is a set of rules that allows us to compute the answer to a particular question. For example, an algorithm for multiplication is a set of rules that when applied to any two numbers tells us their product. (When you learn arithmetic in primary school, you in effect learn algorithms for addition, subtraction, multiplication, and division.) So an algorithm for theory choice is a set of rules that when applied to two competing theories would tell us which we should choose. Much positivist philosophy of science was in effect committed to the existence of such an algorithm. The positivists often wrote as if, given a set of data and two competing theories, the 'principles of scientific method' could be used to determine which theory was superior. This idea was implicit in their belief that although discovery was a matter of psychology, justification was a matter of logic.
"Kuhn's insistence that there is no algorithm for theory choice in science is almost certainly correct. Lots of philosophers and scientists have made plausible suggestions about what to look for in theories—simplicity, broadness of scope, close fit to the data, and so on. But these suggestions fall far short of providing a true algorithm, as Kuhn well knew". Reinforcing Quine's assault on logical empiricism, Kuhn ushered American and English academia into postpositivism or postempiricism.
Kuhn's influential thesis was soon attacked for portraying science as irrational—as cultural relativism akin to religious experience. Postpositivism's poster child became Popper's view of human knowledge as hypothetical, continually growing, always tentative, open to criticism and revision. But then even Popper became unpopular, his view allegedly unrealistic.
Popper's conceptual model of theory evolution is a superficially stepwise but otherwise cyclical process: Problem1 → Tentative Solution → Critical Test → Error Elimination → Problem2. The tentative solution is improvised, an imaginative leap unguided by inductive rules, and the resulting universal law is deductive, an entailed consequence of all the included explanatory considerations. Popper calls enumerative induction, then, "a kind of optical illusion" that shrouds steps of conjecture and refutation during a problem shift. Still, debate continued over the problem of induction, or whether it even poses a problem to science.
Some have argued that, although inductive inference is often obscured by language—as in news reports that experiments have "proved" a substance safe—and although enumerative induction ought to be tempered by proper clarification, inductive inference is used liberally in science, that science requires it, and that Popper is obviously wrong.Okasha, Philosophy of Science (Oxford U P, 2002), p 23, virtually admonishes Popper: "Most philosophers think it's obvious that science relies heavily on inductive reasoning, indeed so obvious that it hardly needs arguing for. But, remarkably, this was denied by philosopher Karl Popper, whom we met in the last chapter. Popper claimed that scientists only need to use deductive inferences. This would be nice if it were true, for deductive inferences are much safer than inductive ones, as we have seen.
"Popper's basic argument is this. Although it is not possible to prove that a scientific theory is true from a limited data sample, it is possible to prove that a theory is false. . . . So if a scientist is only interested in demonstrating that a given theory is false, she may be able to accomplish her goal without the use of inductive inferences. "The weakness of Popper's argument is obvious. For scientists are not only interested in showing that certain theories are false. When a scientist collects experimental data, her aim might be to show that a particular theory—her arch-rival's theory, perhaps—is false. But much more likely, she is trying to convince people that her own theory is true. And in order to do that, she will have to resort to inductive reasoning of some sort. So Popper's attempt to show that science can get by without induction does not succeed".And yet immediately before this, pp 22–23, Okasha explains that when reporting scientists' work, news media ought to report it correctly as attainment of scientific evidence, not proof: "The central role of induction in science is sometimes obscured by the way we talk. For example, you might read a newspaper report that says that scientists have found 'experimental proof' that genetically modified maize is safe for humans. What this means is that the scientists have tested the maize on a large number of humans, and none of them have come to any harm. But strictly speaking, this doesn't prove that maize is safe, in the same sense in which mathematicians can prove Pythagoras' theorem, say. For the inference from 'the maize didn't harm any of the people on whom it was tested' to 'the maize will not harm anyone' is inductive, not deductive. The newspaper report should really have said that scientists have found extremely good evidence that the maize is safe for humans. The word 'proof' should strictly be used only when we are dealing with deductive inferences.
In this strict sense of the word, scientific hypotheses can rarely, if ever be proved true by the data".
Likewise, Popper maintains that, properly, scientists do not try to mislead people into believing that any theory, law, or principle is proved either naturally real (ontic truth) or universally true (epistemic truth).
In a 1965 paper now classic, Gilbert Harman explained enumerative induction as a masked effect of what C. S. Peirce had termed abduction, that is, inference to the best explanation, or IBE.Poston, "Foundationalism", §b "Theories of proper inference", §§iii "Liberal inductivism", in IEP: "Strict inductivism is motivated by the thought that we have some kind of inferential knowledge of the world that cannot be accommodated by deductive inference from epistemically basic beliefs. A fairly recent debate has arisen over the merits of strict inductivism. Some philosophers have argued that there are other forms of nondeductive inference that do not fit the model of enumerative induction. C S Peirce describes a form of inference called 'abduction' or 'inference to the best explanation'. This form of inference appeals to explanatory considerations to justify belief. One infers, for example, that two students copied answers from a third because this is the best explanation of the available data—they each make the same mistakes and the two sat in view of the third. Alternatively, in a more theoretical context, one infers that there are very small unobservable molecules because this is the best explanation of Brownian motion. Let us call 'liberal inductivism' any view that accepts the legitimacy of a form of inference to the best explanation that is distinct from enumerative induction. For a defense of liberal inductivism, see Gilbert Harman's classic (1965) paper. Harman defends a strong version of liberal inductivism according to which enumerative induction is just a disguised form of inference to the best explanation". Philosophers of science who espouse scientific realism have usually maintained that IBE is how scientists develop approximately true scientific theories about the putative mind-independent world. Thus, calling Popper obviously wrong—since scientists use induction in an effort to "prove" their theories true—reflects conflicting semantics.
By now, enumerative induction has been shown to exist, but is found rarely, as in programs of machine learning in artificial intelligence. Likewise, machines can be programmed to operate on probabilistic inference of near certainty.Gauch, Scientific Method in Practice (Cambridge, 2003), p 159. Yet sheer enumerative induction is overwhelmingly absent from science conducted by humans.Gillies, in Rethinking Popper (Springer, 2009), p 111: "I argued earlier that there are some exceptions to Popper's claim that rules of inductive inference do not exist. However, these exceptions are relatively rare. They occur, for example, in the machine learning programs of AI. For the vast bulk of human science both past and present, rules of inductive inference do not exist. For such science, Popper's model of conjectures which are freely invented and then tested out seems to me more accurate than any model based on inductive inferences. Admittedly, there is talk nowadays in the context of science carried out by humans of 'inference to the best explanation' or 'abductive inference', but such so-called inferences are not at all inferences based on precisely formulated rules like the deductive rules of inference. Those who talk of 'inference to the best explanation' or 'abductive inference', for example, never formulate any precise rules according to which these so-called inferences take place. In reality, the 'inferences' which they describe in their examples involve conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules". Although much talked of, IBE proceeds by human imagination and creativity, unguided by rules of inference, and its discussants provide nothing resembling such rules.
Logical empiricists indeed conceived the unity of science to network all special sciences and to reduce the special sciences' laws—by stating boundary conditions, supplying bridge laws, and heeding the deductive-nomological model—to, at least in principle, the fundamental science, that is, fundamental physics.Bem & de Jong, Theoretical Issues in Psychology (SAGE, 2006), pp 45–47. And Rudolf Carnap sought to formalize inductive logic to confirm universal laws through probability as "degree of confirmation". Yet the Vienna Circle had pioneered nonfoundationalism, a legacy especially of its member Otto Neurath, whose coherentism—the main alternative to foundationalism—likened science to a boat that scientists must rebuild at sea without ever touching shore.Poston, "Foundationalism", § intro, in IEP: "The debate over foundationalism was reinvigorated in the early part of the twentieth century by the debate over the nature of the scientific method. Otto Neurath (1959; original 1932) argued for a view of scientific knowledge illuminated by the raft metaphor according to which there is no privileged set of statements that serve as the ultimate foundation; rather knowledge arises out of a coherence among the set of statements we accept. In opposition to this raft metaphor, Moritz Schlick (1959; original 1932) argued for a view of scientific knowledge akin to the pyramid image in which knowledge rests on a special class of statements whose verification doesn't depend on other beliefs". And neopositivists did not seek rules of inductive logic to regulate scientific discovery or theorizing, but to verify or confirm laws and theories once scientists pose them.Torretti, Philosophy of Physics (Cambridge U P, 1999), p 221: "Twentieth-century positivists would maintain, of course, that the rules of inductive logic are not meant to preside over the process of discovery, but to control the validity of its findings".
Practicing what Popper had preached—conjectures and refutations—neopositivism simply ran its course. So its chief rival, Popper, initially a contentious misfit, emerged from interwar Vienna vindicated.
Popper's prime example, already made by the French classical physicist and philosopher of science Pierre Duhem decades earlier, was Kepler's laws of planetary motion, long famed to be, and yet not actually, reducible to Newton's law of universal gravitation.Oberheim, Feyerabend's Philosophy (Walter de Gruyter, 2006), pp 80–82. For Feyerabend, the sham of inductivism was pivotal. Feyerabend investigated, eventually concluding that even in the natural sciences, the unifying method is Anything goes—often rhetoric, circular reasoning, propaganda, deception, and subterfuge—methodological lawlessness, scientific anarchy. At persistent claims that faith in induction is a necessary precondition of reason, Feyerabend's 1987 book bids Farewell to Reason.
A research programme stakes out a hard core of principles, such as the Cartesian rule No action at a distance, that resists falsification: attempted refutations are deflected by a protective belt of malleable theories that advance the hard core via theoretical progress, spreading the hard core into new empirical territories.
Corroborating the new theoretical claims is empirical progress, making the research programme progressive—or else it degenerates. But even an eclipsed research programme may linger, Lakatos finds, and can resume progress by later revisions to its protective belt.
In any case, Lakatos concluded that inductivism is rather farcical and was never actually practiced in the history of science. Lakatos alleged that Newton had fallaciously posed his own research programme as inductivist in order to legitimize it publicly.
Concerning epistemology, the a priori is knowable before or without, whereas the a posteriori is knowable only after or through, relevant experience.
Concerning propositions, the analytic is true via terms' syntax and semantics, thus a tautology—true by logical necessity but uninformative about the world—whereas the synthetic adds reference to a state of facts, a contingency.
In 1739, David Hume cast a fork aggressively dividing "relations of ideas" from "matters of fact and real existence", such that all truths are of one type or the other. Truths by relations among ideas (abstract) all align on one side (analytic, necessary, a priori). Truths by states of actualities (concrete) always align on the other side (synthetic, contingent, a posteriori). Of any treatise containing neither, Hume orders, "Commit it then to the flames, for it can contain nothing but sophistry and illusion".
Flew, Dictionary (St Martin's, 1984), p 156
Mitchell, Roots (Wadsworth, 2011), pp 249–50. Staking their program at the analytic/synthetic gap, the logical positivists sought to dissolve confusions by freeing language from "pseudostatements". And appropriating Ludwig Wittgenstein's verifiability criterion, many asserted that only statements logically or empirically verifiable are cognitively meaningful, whereas the rest are merely emotively meaningful. Further, they presumed a semantic gulf between observational terms versus theoretical terms.Fetzer, "Carl Hempel", §2 "The critique of logical positivism", in SEP: "However surprising it may initially seem, contemporary developments in the philosophy of science can only be properly appreciated in relation to the historical background of logical positivism. Carl Hempel himself attained a certain degree of prominence as a critic of this movement. Language, Truth and Logic (1936; 2nd edition, 1946), authored by A J Ayer, offers a lucid exposition of the movement, which was—with certain variations—based upon the analytic/synthetic distinction, the observational/theoretical distinction, and the verifiability criterion of meaningfulness". Altogether, then, many withheld credence from science's claims about nature's unobservable aspects.Challenges to scientific realism are captured succinctly by Bolotin, Approach to Aristotle's Physics (SUNY P, 1998), pp 33–34, commenting about modern science, "But it has not succeeded, of course, in encompassing all phenomena, at least not yet. For its laws are mathematical idealizations, idealizations, moreover, with no immediate basis in experience and with no evident connection to the ultimate causes of the natural world. For instance, Newton's first law of motion (the law of inertia) requires us to imagine a body that is always at rest or else moving aimlessly in a straight line at a constant speed, even though we never see such a body, and even though according to his own theory of universal gravitation, it is impossible that there can be one.
This fundamental law, then, which begins with a claim about what would happen in a situation that never exists, carries no conviction except insofar as it helps to predict observable events. Thus, despite the amazing success of Newton's laws in predicting the observed positions of the planets and other bodies, Einstein and Infeld are correct to say, in The Evolution of Physics, that 'we can well imagine another system, based on different assumptions, might work just as well'. Einstein and Infeld go on to assert that 'physical concepts are free creations of the human mind, and are not, however it may seem, uniquely determined by the external world'. To illustrate what they mean by this assertion, they compare the modern scientist to a man trying to understand the mechanism of a closed watch. If he is ingenious, they acknowledge, this man 'may form some picture of a mechanism which would be responsible for all the things he observes'. But they add that he 'may never quite be sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility or the meaning of such a comparison'. In other words, modern science cannot claim, and it will never be able to claim, that it has the definite understanding of any natural phenomenon". Thus rejecting scientific realism,Chakravartty, "Scientific realism", §1.2 "The three dimensions of realist commitment", in SEP: "Semantically, realism is committed to a literal interpretation of scientific claims about the world. In common parlance, realists take theoretical statements at 'face value'. According to realism, claims about scientific entities, processes, properties, and relations, whether they be observable or unobservable, should be construed literally as having truth values, whether true or false. 
This semantic commitment contrasts primarily with those of so-called instrumentalist epistemologies of science, which interpret descriptions of unobservables simply as instruments for the prediction of observable phenomena, or for systematizing observation reports. Traditionally, instrumentalism holds that claims about unobservable things have no literal meaning at all (though the term is often used more liberally in connection with some antirealist positions today). Some antirealists contend that claims involving unobservables should not be interpreted literally, but as elliptical for corresponding claims about observables". many embraced instrumentalism, whereby scientific theory is simply useful to predict human observations, while sometimes regarding talk of unobservables as either metaphoricalOkasha, Philosophy of Science (Oxford U P, 2002), p 62: "Strictly we should distinguish two sorts of anti-realism. According to the first sort, talk of unobservable entities is not to be understood literally at all. So when a scientist puts forward a theory about electrons, for example, we should not take him to be asserting the existence of entities called 'electrons'. Rather, his talk of electrons is metaphorical. This form of anti-realism was popular in the first half of the 20th century, but few people advocate it today. It was motivated largely by a doctrine in the philosophy of language, according to which it is not possible to make meaningful assertions about things that cannot in principle be observed, a doctrine that few contemporary philosophers accept. The second sort of anti-realism accepts that talk of unobservable entities should be taken at face value: if a theory says that electrons are negatively charged, it is true if electrons do exist and are negatively charged, but false otherwise. But we will never know which, says the anti-realist. So the correct attitude towards the claims that scientists make about unobservable reality is one of total agnosticism.
They are either true or false, but we are incapable of finding out which. Most modern anti-realism is of this second sort". or meaningless.Chakravartty, "Scientific realism", §4 "Antirealism: Foils for scientific realism", §§4.1 "Empiricism", in SEP: "Traditionally, instrumentalism maintains that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Rudolf Carnap and Carl Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, 'electron' might mean 'white streak in a cloud chamber'), or with demonstrable laboratory procedures (a view called 'operationalism'). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions 'external' to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950)".
"Hempel suggested multiple criteria for assessing the cognitive significance of different theoretical systems, where significance is not categorical but rather a matter of degree: 'Significant systems range from those whose entire extralogical vocabulary consists of observation terms, through theories whose formulation relies heavily on theoretical constructs, on to systems with hardly any bearing on potential empirical findings' (Hempel 1965b: 117).
"The criteria Hempel offered for evaluating the 'degrees of significance' of theoretical systems (as conjunctions of hypotheses, definitions, and auxiliary claims) were (a) the clarity and precision with which they are formulated, including explicit connections to observational language; (b) the systematic—explanatory and predictive—power of such a system, in relation to observable phenomena; (c) the formal simplicity of the systems with which a certain degree of systematic power is attained; and (d) the extent to which those systems have been confirmed by experimental evidence (Hempel 1965b). The elegance of Hempel's study laid to rest any lingering aspirations for simple criteria of 'cognitive significance' and signaled the demise of logical positivism as a philosophical movement.
"Precisely what remained, however, was in doubt. Presumably, anyone who rejected one or more of the three principles defining positivism—the analytic/synthetic distinction, the observational/theoretical distinction, and the verifiability criterion of significance—was not a logical positivist. The precise outlines of its philosophical successor, which would be known as 'logical empiricism', were not entirely evident. Perhaps this study came the closest to defining its intellectual core. Those who accepted Hempel's four criteria and viewed cognitive significance as a matter of degree were members, at least in spirit. But some new problems were beginning to surface with respect to Hempel's covering-law explication of explanation, and old problems remained from his studies of induction, the most remarkable of which was known as 'the paradox of confirmation'". Neopositivism became mostly maligned,Misak, Verificationism (Routledge, 1995), p viii. while credit for its fall generally has gone to W V O Quine and to Thomas S Kuhn, although its "murder" had been prematurely confessed to by Karl Popper in the 1930s.
"Indeed, on the assumption that a sentence S is meaningful if and only if its negation is meaningful, Hempel demonstrated that the criterion produced consequences that were counterintuitive if not logically inconsistent. The sentence, 'At least one stork is red-legged', for example, is meaningful because it can be verified by observing one red-legged stork; yet its negation, 'It is not the case that even one stork is red-legged', cannot be shown to be true by observing any finite number of red-legged storks and is therefore not meaningful. Assertions about God or The Absolute were meaningless by this criterion, since they are not observation statements or deducible from them. They concern entities that are non-observable. That was a desirable result. But by the same standard, claims that were made by scientific laws and theories were also meaningless.
"Indeed, scientific theories affirming the existence of gravitational attractions and of electromagnetic fields were thus rendered comparable to beliefs about transcendent entities such as an omnipotent, omniscient, and omni-benevolent God, for example, because no finite sets of observation sentences are sufficient to deduce the existence of entities of those kinds. These considerations suggested that the logical relationship between scientific theories and empirical evidence cannot be exhausted by means of observation sentences and their deductive consequences alone, but needs to include observation sentences and their inductive consequences as well (Hempel 1958). More attention would now be devoted to the notions of testability and of confirmation and disconfirmation as forms of partial verification and partial falsification, where Hempel would recommend an alternative to the standard conception of scientific theories to overcome otherwise intractable problems with the observational/theoretical distinction".
"This example is by no means isolated. In effect, scientists use inductive reasoning whenever they move from limited data to a more general conclusion, which they do all the time. Consider, for example, Newton's principle of universal gravitation, encountered in the last chapter, which says that every body in the universe exerts a gravitational attraction on every other body. Now obviously, Newton did not arrive at this principle by examining every single body in the whole universe—he couldn't possibly have. Rather, he saw that the principle held true for the planets and the Sun, and for objects of various sorts moving near the Earth's surface. From this data, he inferred that the principle held true for all bodies. Again, this inference was obviously an inductive one: the fact that Newton's principle holds true for some bodies doesn't guarantee that it holds true for all bodies".
Some pages later, however, Okasha finds enumerative induction insufficient to explain phenomena, a task for which scientists employ IBE, guided by no clear rules, although parsimony, that is, simplicity, is a common heuristic despite no particular assurance that nature is "simple". Okasha then notes the unresolved dispute among philosophers over whether enumerative induction is a consequence of IBE, a view that Okasha, omitting Popper from mention, introduces by noting, "The philosopher Gilbert Harman has argued that IBE is more fundamental" (p 32). Yet other philosophers have asserted the converse—that IBE derives from enumerative induction, the more fundamental—and, although inference could in principle work both ways, the dispute is unresolved.