Philosophy of Science
Related terms:
Epistemology
Social Sciences
History of Science
Positivism
Some Issues Concerning the Nature of Economic Explanation
Harold Kincaid, in Philosophy of Economics, 2012
8 Conclusion
Naturalism in the philosophy of science suggests that philosophy of science has to be continuous with science itself and that it cannot produce useful a priori conceptual truths about explanation. Issues about the nature of explanation are scientific issues, albeit ones that can certainly gain from careful attention to clarifying the claims involved. Not surprisingly, the scientific issues surrounding explanation in economics vary according to the part of economics that is under scrutiny. Clarifying claims about economic explanation in the concrete can shed light both on the economics and on our philosophical understanding of explanation.
How to Deal with Knowledge of Complexity Microeconomics
Wolfram Elsner, ... Henning Schwardt, in The Microeconomics of Complex Economies, 2015
18.2 Positivism and Critical Rationalism
The particular philosophy of science that is based on the view that information derived from measuring logical and mathematical variables and relations through some objective sensory experience or test is not only possible but the exclusive source of authoritative knowledge has been called positivism since the early nineteenth century. According to this view, there is valid knowledge ("truth") only in scientific knowledge attained that way, i.e., through strict and objective empirical evidence. In this tradition, it has always been assumed that science is indeed able to provide its theories in a form that can be strictly, if not verified, at least straightforwardly and objectively falsified, once and for all, against reality. The strength of a scientific theory, it is claimed, lies in the very fact that it is or can be made open to such falsification. This means that if a theory cannot, in principle, be falsified, it does not constitute a scientific system. For that purpose, a scientific system must demarcate the areas of application of its propositions, i.e., the specific implications of its more abstract theories, in order not to be immune to such empirical refutation.
Popper further developed this epistemology and coined the term "critical rationalism" to characterize it. Logically, he argued, no number of confirming outcomes at the level of experimental testing can finally confirm a scientific theory, since a contradicting instance may always occur in future testing; but a single counterexample proves the theory from which the specific implication was derived to be false. In this view, theories are only tentative propositions, valid for the time being; they are just hypotheses or "conjectures." While classical nineteenth-century positivist rationalism held that one should prefer the theory most likely to be true, Popper's critical rationalism held that one should prefer, and put to the test, the least likely, i.e., most easily falsifiable theory (the simplest and at the same time most general one) that explains the known facts, in order to generate scientific progress through falsification and subsequent theory improvement. In this view, it is more important to make falsification as easy as possible than to reveal truth via induction, let alone to immunize theory against such testing and potential falsification.
Mental Models and the Mind
Gottfried Vosgerau, in Advances in Psychology, 2006
2.2 THE RELATION BETWEEN A MODEL AND ITS REPRESENTED
Following Craik (1943) and Johnson-Laird (1983), a model preserves the structure of its represented. It is able to react to changes in the way the representandum would when undergoing the corresponding changes. A prerequisite for this ability is that the model contains parts that represent parts of the modeled situation. These parts have to be connected in the same way as their "real" counterparts. This approach to structure preservation remains quite intuitive.
In the philosophy of science, scientific theories are often viewed as models. Although there is a debate on whether models can be characterized as being isomorphic to reality, many authors defend this view.11 In psychology, there is a long tradition of discussing whether mental representations can be viewed as isomorphic to their representanda or not. However, there have been only a few attempts to define the different concepts properly (cf. Palmer 1978, Gurr 1998). Therefore, I will start with the mathematical notion of isomorphism.
In mathematics, structures are sets over which one or more functions and/or relations are defined. Two structures A and B are said to be isomorphic if there is a bijective mapping I between the ai ∈ A and the bi ∈ B, such that
– for each function f: I〈fA(a1, …, an)〉 = fB(I〈a1〉, …, I〈an〉), and
– for every relation R: RA(a1, …, an) iff RB(I〈a1〉, …, I〈an〉).12
The definition requires that for each member of one set there is exactly one corresponding member in the other set. Moreover, for every function defined on one set there must be a function defined on the other set that picks out the corresponding element given the corresponding arguments, and for every relation that holds in one set, there must be a relation holding for the corresponding elements of the other set. Now, one of the two structures can be a certain part of the world, for example a house. In the architect's model of the house (which is then the other structure), every piece of the house can be assigned a corresponding piece of the model, and every relation between those elements of the house will correspond to some relation in the model. However, since there are more elements and more relations in the world than in the model, this example does not satisfy the definition: Not every single brick is modeled. I will return to this matter shortly. Nevertheless, taking isomorphism as a requirement for models, it follows that if X is a suitable model of Y, then for every element of Y there must be exactly one element of X corresponding to it. Johnson-Laird expresses this requirement by the idea that mental models represent each individual taking part in a situation by a single part of the model. The appropriate model for the sentence "The apple is on the left of the banana" hence involves two tokens, one for the apple and one for the banana (see Figure 1).

Fig. 1. The isomorphism between the world and the model in the example (see page 257)
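To make the isomorphism condition concrete, here is a minimal Python sketch (mine, not the author's) that checks whether a given bijection between two finite structures preserves all of their named relations, using the apple/banana example of Figure 1. The encoding of structures as element sets plus dictionaries of relation tuples, and the names left_of and right_of, are illustrative assumptions.

```python
# Minimal sketch (not from the chapter): checking whether a bijection I between two
# finite structures preserves their relations, i.e., whether it is an isomorphism.
# A structure is encoded as a set of elements plus a dict of named relations, each
# relation a set of tuples; the encoding and the relation names are illustrative only.

def is_isomorphism(I, world, model):
    """I maps each element of `world` onto exactly one element of `model`;
    a relation must hold in the world iff it holds of the images in the model."""
    if set(I) != world["elements"] or set(I.values()) != model["elements"]:
        return False                                  # I is not a mapping onto the model
    if len(set(I.values())) != len(I):
        return False                                  # I is not injective
    if set(world["relations"]) != set(model["relations"]):
        return False                                  # the two structures differ in signature
    for name, rel_w in world["relations"].items():
        mapped = {tuple(I[x] for x in t) for t in rel_w}
        if mapped != model["relations"][name]:        # R_A(a1,...,an) iff R_B(I(a1),...,I(an))
            return False
    return True

# The example of Figure 1: "The apple is on the left of the banana."
world = {"elements": {"apple", "banana"},
         "relations": {"left_of": {("apple", "banana")},
                       "right_of": {("banana", "apple")}}}
model = {"elements": {"a", "b"},
         "relations": {"left_of": {("a", "b")},
                       "right_of": {("b", "a")}}}

print(is_isomorphism({"apple": "a", "banana": "b"}, world, model))  # True
print(is_isomorphism({"apple": "b", "banana": "a"}, world, model))  # False: structure not preserved
```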
However, the mathematical notion of isomorphism is too strong a requirement for most models. It is obvious that, for example, the architect's model of a house does not have as many elements as the real house. Similarly, there are many relations between the apple and the banana in the real situation (concerning their color or size, for example) which are very unlikely to be contained in a mental model used in reasoning about the spatial relations. It is thus useful to introduce the notion 'relevant part of a structure,' which is determined by the usage of the representation. If I want to reason about the spatial relation of fruits, a mental model containing spatial relations will suffice. On the other hand, if I want to decide which fruit to eat, there certainly will be more relevant relations to represent (for example, is sweeter than). More technically, the relevant part of a structure is determined by the function in which the representandum is involved (see page 261), based on which relations and functions are taken as arguments.13 If A = 〈A, RA, fA〉 is the structure of a situation, the relevant part of the structure A′ will consist of the same set A, a subset of the relations RA, and a subset of the functions fA. These subsets are the sets of relations and functions that are taken as arguments by the function in which the model plays its role. We can therefore speak of a partial isomorphism which holds between the relevant part of the represented structure and the full model.14
According to this definition, models are structures that are isomorphic to the relevant part of the structure of the represented object; the relevant part is determined by the function in which the represented object plays its role. Although often proposed, the weaker criterion of homomorphism cannot do the job, for two reasons. Firstly, homomorphism does not involve a bijective mapping. Therefore, although the mapping of the representandum's parts to parts of the model may be unequivocal, the inverse mapping may not be. Hence, the output of the function would not be guaranteed to be applicable to the represented object. Secondly, since homomorphism does not specify parts of structures, even very small parts can establish homomorphism. Therefore, for each structure there are many trivially homomorphic structures that are much too unspecific to be called models. The second point applies as well to partial isomorphism (as introduced by Bueno 1997, French & Ladyman 1999), unless the parts are otherwise specified (as I have done above). Moreover, partial isomorphism in the sense presented above allows for an evaluation of models: A perfect model is a model which is isomorphic to the whole relevant structure of the represented. If it is isomorphic to more or to less of the relevant structure (or even contains parts that are not shared by the represented), it is a bad model. A model containing too much will be more difficult to construct and to manipulate; therefore, it will make reasoning less effective. On the other hand, a model containing too little obviously will not be able to take over the functional role adequately, since relevant pieces of information are missing. Moreover, the 'more' and 'less' of the deviation from the perfect model can be measured: If the relevant part of the represented structure A′ contains mA relations and nA functions, and the model B contains mB+ relations and nB+ functions fulfilling the conditions of the partial isomorphism, and mB− relations and nB− functions not fulfilling the conditions, then the deviation δ+ of the model from A′ can be defined as δ+ = |(mA + nA) − (mB+ + nB+)|, and the amount of irrelevant information δ− as δ− = mB− + nB−. The adequacy ε of the model can then be defined as
ε = (1 − δ+/(mA + nA)) · (1 − δ−/(mA + nA)).
This leads at least to a relative measurement of model adequacy, i.e. it allows for an evaluation of models.15
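As a rough numerical illustration of this adequacy measure (my own sketch, using the reconstructed formula above and invented counts), the following computes δ+, δ− and ε from the numbers of relevant and irrelevant relations and functions.

```python
# Illustrative sketch of the adequacy measure; all counts are hypothetical.
def adequacy(m_A, n_A, m_B_plus, n_B_plus, m_B_minus, n_B_minus):
    """m_A, n_A: relations/functions in the relevant part A' of the represented structure.
    m_B_plus, n_B_plus: relations/functions of the model satisfying the partial isomorphism.
    m_B_minus, n_B_minus: relations/functions of the model that do not."""
    relevant = m_A + n_A
    delta_plus = abs(relevant - (m_B_plus + n_B_plus))   # missing (or excess) relevant structure
    delta_minus = m_B_minus + n_B_minus                  # irrelevant structure carried along
    epsilon = (1 - delta_plus / relevant) * (1 - delta_minus / relevant)
    return delta_plus, delta_minus, epsilon

# A perfect model reproduces all 5 relevant relations/functions and nothing else:
print(adequacy(m_A=4, n_A=1, m_B_plus=4, n_B_plus=1, m_B_minus=0, n_B_minus=0))  # (0, 0, 1.0)
# A model missing one relevant relation and adding two irrelevant ones scores lower:
print(adequacy(m_A=4, n_A=1, m_B_plus=3, n_B_plus=1, m_B_minus=2, n_B_minus=0))  # (1, 2, 0.48)
```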
Isomorphism is a relation between structures. Hence, a model is itself a structure, i.e. a set over which functions and relations are defined. Thus, the appropriate model of the example (see page 257) can be written as
〈{a, b}, left-of = {〈a, b〉}, right-of = {〈b, a〉}〉.
The crucial point is that a model does not represent the relations involved as symbols (or labels); it itself contains relations which hold between its elements regardless of whether it is used as a model or not. Since the relations have the same logical features16 as the relations of the real situation (see the definition of isomorphism), they exhibit the same structure. This is why the isomorphism theory is so attractive: It explains straightforwardly why our conclusions are correct (given that we have a good model and no capacity limitations). Nevertheless, as argued for in section 2.1, isomorphism theories have to be embedded in a functional theory in order to explain the phenomenon of mental representation; partial isomorphism is just one part of the representation relation for models, namely their adequacy relation.
One possible objection to isomorphism addresses the representation of non-existing situations: In reasoning, I usually construct models of situations that are merely supposed and do not actually have any counterpart in the world. To what should these models be isomorphic? To answer this question, let me recall that isomorphism is a relation between structures. The mental model is hence not isomorphic to a situation but to the structure of a situation. Structures themselves are abstract entities (consider, for example, the structure of natural numbers with the relation '≥'). The structure of a non-actual situation is as unproblematic a notion as the set of natural numbers is. Therefore, it is possible to have an adequate model of the situation described by the sentence "There is a golden mountain in Africa," since there is a straightforward notion of a structure of this situation, even though it is not an actual situation. To illustrate this, it might be helpful to note that we can agree on structural "facts" about non-existing entities (e.g., we can agree that unicorns have four legs). Thus, the representation of non-existing situations is explained in my picture without committing myself to some problematic ontology (like realism about possible worlds, for example).
Stenning (2002) points out that mental models are not special with respect to isomorphism. Equally, other forms of deduction systems, such as Euler Circles and fragments of natural deduction systems, stand in this relation to their represented objects. They are all "members of a family of abstract individual identification algorithms" (Stenning & Yule 1997, 109). Therefore, structure preservation is not the crucial feature of the theory of mental models that distinguishes it from other theories of reasoning; rather, the constraint of naturalness plays the distinctive role. However, I will not go deeper into this debate but rather discuss some major implications of my analysis, particularly the use of symbols in mental models.
Measurement Theory: History and Philosophy
J. Michell, in International Encyclopedia of the Social & Behavioral Sciences, 2001
2.2 The Philosophy of Science
Developments in the philosophy of science have also affected measurement theory. For psychology, the most significant factor was operationism (Bridgman 1927). Operationism was developed into a theory of measurement by Stevens (1951). If, as Bridgman held, the meaning of a concept is the set of operations used to specify it, then measurement is the 'assignment of numerals to objects or events according to rules' (Stevens 1951, p. 1) and the attribute measured via any such assignment is defined by the rules used. This approach proved useful in the social and behavioral sciences before it was known how to test the hypothesis that such attributes are quantitative.
Michell (1999) contains further material on the history and philosophy of measurement in psychology.
Models and Modelling in Economics
Mary S. Morgan, Tarja Knuuttila, in Philosophy of Economics, 2012
2.2.4 Models as Autonomous Objects
From a naturalist philosophy of science viewpoint, the way that economists work with models suggests that they are regarded, and so may be understood, as autonomous working objects. Whereas the approaches mentioned above located the constructedness of models in relation to the assumed real or imaginary target systems, the independent nature of models can fruitfully be considered also from the perspectives of theory and data. Without doubt, many models are renderings of theories rather than of any target systems, and some are considered proto-theories that do not yet have the status of theory. On the other hand, econometric models have at times been considered versions of data.
In a more recent account, economic models are understood to be constructed out of elements of both theory and the world (or its data) and thus able to function with a certain degree of independence from both. The divide between theoretical models and econometric models seems misleading here since, from this perspective on model construction, both kinds of models are heterogeneous ensembles of diverse elements (see [Boumans, this volume]). This account understands models as autonomous objects within the "models as mediators" view of the role of models, which analyses them as means to carry out investigations on both the theoretical and the empirical sides of scientific work; in particular, it treats them as instruments of investigation (see [Morrison and Morgan, 1999]). This power to act as instruments that enable the scientist to learn about the world or about their theories depends not only on their functional independence, built in at the construction stage, but also on another construction feature, namely that models are devices made to represent something in the world, or some part of our theory, or perhaps both at once. These two features, functional independence and representing quality (loosely defined), make it possible to use models as epistemic mediators (see section 3.3 below). Even the artificial world models of Lucas, which are constructed as analogues to represent the outputs of the system rather than its behaviour, can be understood under this account, though their functions in investigations may be more limited. In this sense the models as mediators view also takes a mediating position with respect to the models-as-idealizations vs. models-as-constructions divide (itself of course partly an idealization made up for expository reasons), since it takes a liberal attitude both as to what models are supposed to represent and also to the mode of their making, whether via idealization and de-idealization or via a process of construction.
Philosophy of Econometrics
Aris Spanos, in Philosophy of Economics, 2012
3.3 The New Experimentalism
An important disconnect between philosophy of science and scientific practice was that practitioners have always known from experience that establishing e (or not-e) as observational facts constitutes one of the most difficult tasks in scientific research, because the raw data x0 contain uncertainties and noise and never come in the plenitude needed. Indeed, the raw data x0 usually need to be perceptively modeled to separate the systematic (signal) from the non-systematic (noise) information, as well as to provide a measure of the reliability of inference based on x0. Such modeling is often vulnerable to numerous errors that would render e far from being 'objectively given facts'.
The first concerted effort in philosophy of science to study the process generating the raw data x0 and secure observational facts e (or not-e) was made by the "new experimentalist" tradition [Hacking, 1983; Mayo, 1997]; see [Chalmers, 1999] for a summary. Using the piece-meal activities involved and the strategies used in successful experiments, Hacking [1983] argued persuasively against the theory-dominated view of experiment. He made a strong case that in scientific research an experiment can have a 'life of its own' that is independent of 'large-scale theory', thus alleviating the problem of the theory-dependence of observation. Mayo [1996] argued that scientists employ a panoply of practical step-by-step strategies for eliminating error and establishing the 'factual basis of experimental effects' without any 'tainting' from large-scale theory.
Case Study: Methods and Analysis
A. Bennett, in International Encyclopedia of the Social & Behavioral Sciences, 2001
3.1 Case Studies and the Philosophy of Science
With regard to the philosophy of science, the 'scientific realist' school of thought has emphasized that causal mechanisms, or independent stable factors that under certain conditions link causes to effects, are important to causal explanation (Little 1998). This has resonated with case study researchers' use of process tracing to uncover evidence of causal mechanisms at work. It has also provided a philosophical counterpoint to attempts by researchers from the statistical tradition to place 'causal effects,' or the expected difference in outcomes brought about by the change in a single independent variable, at the center of causal explanation (King et al. 1994). Case study researchers have argued that both causal mechanisms, which are more easily addressed by case studies, and causal effects, which are best assessed through statistical means, are essential to the development of causal theories and causal explanations (George and Bennett 2001).
Another relevant development in the philosophy of science has been the resurgence of interest in Bayesian logic, or the logic of using new data to update prior confidence levels assigned to hypotheses. Bayesian logic differs from that of most statistics, which eschew reliance on prior probabilities. Eckstein's crucial, most likely, and least likely case study designs implicitly use a Bayesian logic, assigning prior probabilities to the likelihood of particular outcomes (McKeown 1999). One new development here is the refinement of Eckstein's approach, taking into consideration the likelihood of an outcome not just in view of one theory, but in the presence of alternative hypotheses. If a case is 'most likely' for a theory, and if the alternative hypotheses make the same prediction, then the theory will be strongly impugned if the prediction does not prove true. The failure of the theory cannot be blamed on the influence of the variables highlighted by the alternative hypotheses. Conversely, if a theory makes only a weak prediction in a 'least likely' case, the alternative hypotheses make a different prediction, but if the first theory's prediction proves true, this is the strongest possible evidence in favor of the theory (Van Evera 1997). This helps address the central problem of a Bayesian approach—that of assigning and justifying prior probabilities—even if it does not fully resolve it.
The continuing development of the logic of hypothesis testing has also been relevant to case study methods (see Hypothesis Testing: Methodology and Limitations). On this topic, Imre Lakatos argued that a theory can be considered progressive only if it predicts and later corroborates 'new facts,' or novel empirical content not anticipated by other theories (Lakatos 1976). This criterion helps provide a standard for judging whether process tracing, the designation of new subtypes, and the proposal of new theories from heuristic case studies are being done in a progressive or regressive way. It also provides a philosophical basis for arguing that a hypothesis can be derived from one set of observations within a case and then to some extent tested against the 'new facts' or previously unexamined or unexpected data that it predicts within that same case, although independent corroboration in other cases is usually advisable as well (Collier 1993).
Mathematical Models in Philosophy of Science
Z. Domotor, in International Encyclopedia of the Social & Behavioral Sciences, 2001
4 Conclusion
The most immediate connection between philosophy of science and mathematics is via the structure of scientific theories. Although all three approaches examined here are employed frequently in actual scientific work, they harbor serious difficulties. The popular set-theoretical predicate approach does not draw a line between empirical and mathematical theories. A far more specialized topological state space method fails to provide sufficient support for the empirical meaning of its topological structure. Finally, the extensive complexity found in the structuralist methodology generates pessimism regarding the reconstruction of actual scientific theories. Be that as it may, close analysis of these different approaches will affirm that the semantic method has cast a good deal of light on the nature of scientific theories and therefore will stay for the present. There is, however, a strong possibility that in the foreseeable future the set-theoretic approach will be replaced gradually by the methods of category theory.
Pattern Matching: Methodology
W.N. Dunn, in International Encyclopedia of the Social & Behavioral Sciences, 2001
1.2 The Correspondence Theory of Truth
Until the late 1950s, philosophy of science was dominated by the correspondence theory of truth. The correspondence theory, the core epistemological doctrine of logical positivism (see Ayer 1936), asserts that propositions are true if and only if they correspond with facts. The correspondence theory also requires that factually true propositions are logically validated against formal rules of logic such as modus ponens (p⊃q, p, ∴q) and modus tollens (p⊃q, ∼q, ∴∼p). To be verified, however, propositions must match facts (reality, nature). The correspondence version of pattern matching assumes a strict separation between two kinds of propositions—analytic and synthetic, logical and empirical, theoretical and observational—a separation that was abandoned after Quine (1951) and others showed that the two kinds of propositions are interdependent. Because observations are theory dependent, there is no theory-neutral observational language. Theories do not and cannot simply correspond to the 'facts.'
Naturalism and the Nature of Economic Evidence
Harold Kincaid, in Philosophy of Economics, 2012
1 Naturalism and Evidence
In this section I sketch a general framework for thinking about evidence, viz. naturalism, that has widespread acceptance in the philosophy of science. I describe the basic idea, trace out some of its implications, and illustrate it by discussing two philosophy of science controversies—about the role of prediction vs. accommodation and the dispute between Bayesians and frequentists — that are particularly relevant to economics.
The dominant trend in current philosophy of science is naturalized epistemology. Naturalism has various interpretations, some logically weaker and some stronger. A weak version is the claim that
1. empirical evidence is relevant to philosophical accounts of epistemic concepts.
A stronger version of naturalism claims that:
2. philosophical accounts of epistemic concepts are to be judged solely by the methods, standards and results of the sciences.
Of course, each of these is open to multiple interpretations, and (2) in particular is a slogan summarizing numerous related but distinct ideas.
A good representative of the weak form of naturalism is found in reliabilism as developed in traditional analytic epistemology. Reliabilism of this form holds that a belief is justified iff it is produced by a reliable process, i.e., one that reliably produces true beliefs. Obviously, on this view empirical evidence about which processes are reliable is essential. However, this view does not instantiate the second thesis. Why? The evidence for the reliabilist account is the considered judgments of philosophers about what is and is not justified.1 Typically these judgments are taken to provide a priori truths about our fundamental epistemic concepts. Strong naturalism denies 1) that there are any a priori truths about epistemology that do real work in telling us what to believe (x is justified iff it is justified is perhaps an a priori truth but an unhelpful one) and 2) that the considered judgments of philosophers provide any very useful evidence about anything.2
Strong naturalism thus argues that developing accounts of good evidence and other epistemic notions is part and parcel of the scientific understanding of the world. This immediately makes psychology, the social sciences, and history of science fundamental to understanding the nature and role of evidence. This project undergirds much of the work in contemporary science studies. That research goes in a variety of different directions, some purely descriptive and others both descriptive and normative. I will not canvass all the various positions.3 However, there are some general morals common to most of this work that I want to rely on in this chapter in analyzing issues about evidence in economics. They include:
1. looking for a logic of science (a universal, formal, a priori set of inference rules) is often misguided
2. claims about good inferences often depend on domain-specific substantive assumptions; for example, "simplicity" is often not a purely logical notion but a substantive assertion about the world4
3. identifying the social processes involved in a given discipline can be an important part of understanding how evidence works in that field and can sometimes reveal that the rhetoric of a discipline does not match its practice
4. evidence claims are arguments in that they rest on a variety of background assumptions which are essential for their force
I want to flesh out this general perspective somewhat by applying it to two standard debates in the philosophy of science which are particularly relevant to controversies in economic methodology. My general argument will be that these debates turn on presuppositions that ought to be rejected on naturalist grounds.
There is a long-standing dispute in statistics between Bayesian and frequentist approaches. General positions have emerged in the philosophy of science in the last two decades refining these stances [Howson and Urbach, 2005; Mayo, 1996]. Both are entirely accounts of confirmation and have little or nothing to say about other scientific virtues such as explanation. Both, I will argue, still embody the logic of science ideal.
The Bayesian approach of course starts from Bayes' theorem, perhaps best put as
p(H/E) = [p(H) × p(E/H)] / [p(H) × p(E/H) + p(H1) × p(E/H1) + … + p(Hn) × p(E/Hn)]
which tells us that the probability of a hypothesis, given a set of data, depends on:
– the prior probability of the hypothesis and its competing alternatives (the priors), and
– the probability of the data given the hypothesis and given its competing alternatives (the likelihoods).
Probability takes a value between 0 and 1, the likelihoods being equal to 1 when the hypothesis logically entails the evidence. Given this information, we can theoretically determine whether the data support the hypothesis by asking whether p(H/E) > p(H). We can determine how much the evidence supports the hypothesis over the alternatives, measured as [p(E/H) × p(H)] / [p(E/not-H) × p(not-H)], which is known as the likelihood ratio.
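For illustration, here is a small Python sketch (mine, with invented hypotheses and probabilities) that computes the posterior p(H/E) from priors and likelihoods over an exhaustive set of rival hypotheses, together with the ratio the text calls the likelihood ratio.

```python
# Hypothetical numbers; the hypotheses and probabilities are invented for illustration.
def posterior(priors, likelihoods):
    """priors[h] = p(h), likelihoods[h] = p(e/h) for an exhaustive, exclusive set of hypotheses.
    Returns p(h/e) for every h via Bayes' theorem."""
    p_e = sum(priors[h] * likelihoods[h] for h in priors)    # total probability of the evidence
    return {h: priors[h] * likelihoods[h] / p_e for h in priors}

priors      = {"H": 0.3, "H1": 0.5, "H2": 0.2}
likelihoods = {"H": 0.9, "H1": 0.2, "H2": 0.1}   # p(e/h): how well each hypothesis fits the data

post = posterior(priors, likelihoods)
print(post["H"])                                  # p(H/E) ≈ 0.69 > p(H) = 0.3, so E supports H

# Ratio of p(E/H)·p(H) to p(E/not-H)·p(not-H), as in the text:
not_H = priors["H1"] * likelihoods["H1"] + priors["H2"] * likelihoods["H2"]
print((priors["H"] * likelihoods["H"]) / not_H)   # ≈ 2.25
```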
"Probability" can be read in different ways. In the starkest form probability is taken as "degree of belief" or "level of confidence," producing what is known as (misleadingly I shall argue) subjective Bayesianism. Objective Bayesianism would eschew the degree of belief talk and apply Bayes' theorem using known relative frequencies such as prevalence in a given population.
The Bayesian approach has several things going for it. Bayes' theorem is a logical truth and violating it is inconsistent. Moreover, various intuitive judgments about evidence can seemingly be understood in a Bayesian framework. For example, it provides a natural explanation for why diverse evidence is to be preferred. Since the probability of getting the same result after one test goes up, further evidence of the same sort has a decreasing marginal effect. Getting positive evidence from an entirely different case does not have this problem.
There are various other issues about confirmation that can be illuminated by looking at them through the lens of Bayes' theorem [Howson and Urbach, 2005].
There are various criticisms of Bayesian approaches to confirmation, but the most fundamental one concerns the key difference between Bayesians and frequentists: the nature of probability. Frequentists want an account of confirmation based on objective error probabilities, where objective error probabilities are defined by the relative frequencies in indefinitely many repetitions of a testing procedure. Single hypotheses thus do not have probabilities, but we can be assured that a particular claim is warranted if it passes a test that, in repeated uses, has a high probability of accepting the hypothesis when it is true and rejecting it when it is false [Mayo, 1996].5 According to the frequentists, subjective Bayesian approaches make confirmation hopelessly subjective, a problem their own approach does not have.
Related to the Bayesian vs. frequentist controversy is the important question whether hypotheses that have been designed to accommodate known data are less well confirmed than those that predict novel data. Those who deny there is a difference argue that all that matters is the content of the hypothesis and evidence — when the investigator came to know the evidence is irrelevant. A scientist who advances a hypothesis that predicts data he did not know about is in no different situation than one who advances the same hypothesis knowing the evidence. Either the evidence does or does not support the hypothesis.
The argument for a difference turns on the fact that accommodation seems much easier than prediction. A compelling example is curve fitting: one can find a hypothesis that predicts the data simply by drawing a line through all the data points. That is much easier and less informative, the idea goes, than predicting a new data point you do not yet have.
The naturalist response to these debates that I would favor is essentially to curse both houses. Implicit in arguments from either side is the logic of science ideal and much appeal to intuitions. Either accommodation is always inferior or it is always equivalent. Confirmation can be captured by logical rules — described by Bayes' theorem in one case and by the deductive asymptotic traits of sampling from a frequency distribution in the other. Naturalists deny that we can get so much work out of so little. Confirmation is a complex, contextual process that involves a host of factors that cannot be captured in simple rules such as "update beliefs according to Bayes' theorem" or "minimize Type I and Type II errors."
Not surprisingly, advocates on both sides resort to some unhelpful rhetoric in the face of complexity. This is illustrated by the ironic fact that Bayesians [Howson and Urbach, 2005] typically deny that facts about psychological states are relevant in the accommodation vs. prediction debate, yet want to relativize confirmation entirely to an individual's prior beliefs, while frequentists accuse Bayesians of subjectivism, yet advocate the view that psychological states can matter because they influence the characteristics of a test [Mayo, 1996].
Let's look first at the Bayesian vs. frequentist debate. The charge of "subjectivism" against the Bayesians is overplayed. Every use of "objective" statistical measures depends on a host of prior assumptions, not all of which can be evaluated by the favored measure of long run error characteristics. The most obvious such assumption is about the possible hypotheses to consider in the first place. Bayesians are in fact much more explicit about those assumptions and can report the effects they have on the inferences drawn. In that sense they are more "objective." Moreover, they can certainly use the "objective" facts about sampling from populations to estimate likelihoods.
Perhaps the most serious issue raised by Bayesians against the frequentists concerns the propriety of making decisions about what to believe in specific instances from long term error characteristics. The problem is that the asymptotic characteristics of a test are an uncertain guide to its use in a single instance. Long-run frequencies in infinite repetitions by themselves are consistent with finite samples entirely lacking the asymptotic characteristics. It is not clear that there is an adequate frequentist response to this problem.6
A second important problem is that deciding what to believe based on a stringent frequentist test can easily be shown to result in error, because it embodies the fallacy of ignoring the base rate. Consider a diagnostic test that has both a very low false negative rate and a very low false positive rate. This means that p(E/H)/p(E/not-H) is very large and that the test is stringent. Yet if the disease in question is very rare in the population and no more is known than that the patient comes from this population (no other risk factors are known), the probability that the patient has the disease, given a positive result, is extremely small, and a positive result is thus not at all good evidence that they have the disease. The base rate has been ignored.
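A small worked example (my own, with invented numbers) makes the point: even a test whose false negative and false positive rates are both 1%, so that p(E/H)/p(E/not-H) = 99, yields only about a 1% posterior probability of disease when the prevalence is 1 in 10,000.

```python
# Illustration of the base-rate point; all rates are invented for the example.
def prob_disease_given_positive(prevalence, false_neg, false_pos):
    """Posterior probability of disease after a positive result, via Bayes' theorem."""
    sensitivity = 1 - false_neg                     # p(positive / disease)
    p_pos = sensitivity * prevalence + false_pos * (1 - prevalence)
    return sensitivity * prevalence / p_pos

# Very stringent test (p(E/H)/p(E/not-H) = 0.99 / 0.01 = 99), very rare disease:
print(prob_disease_given_positive(prevalence=0.0001, false_neg=0.01, false_pos=0.01))
# ≈ 0.01: a positive result still leaves the disease very unlikely, because the base rate is low.
```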
What is clear is that both the Bayesians and frequentists claim more than their methods warrant. For the frequentists that is fairly obvious from the problems mentioned above resulting from the need to assess the base rate, to specify the set of competing hypotheses, and to assure that the criteria (e.g. independent draws) of the frequentist tests are satisfied.7 The urge to go beyond what your formalism can warrant is also strong in the Bayesian tradition. Something which has not generally been noted is that Bayes' theorem is silent on many fundamental methodological controversies. Inference to the best explanation [Day and Kincaid, 1994] is a prime example in this regard. It is commonly thought that explanatory power has some role to play in confirmation. If that is to mean anything more than the trivial claim that predictive power is important in confirmation, then some substantive notion of explanation needs to be invoked. Citing causes is a natural direction to go. However, factoring in that kind of information requires considerable background knowledge, knowledge that will arguably be domain specific. Bayes' theorem is silent on these issues.
So the proper naturalist response is that the place of evidence in science is not going to be captured by Bayes' theorem or by focusing on Type I and II errors alone. Both of these elements must be incorporated in a complex argument employing contextual background knowledge.
Thus the naturalist approach to the accommodation/prediction debate is again to reject the terms in which it is framed. In its usual format, the debate assumes that there is one right answer to the question that holds in all epistemic situations and that the right way to decide the issues is by means of intuitions about examples and counterexamples. Assuming one right answer implicitly presupposes the logic of science ideal. Deciding the issue by appeal to intuitions assumes the conceptual analysis approach that naturalism rejects in favor of an epistemology that is continuous with the sciences. Simply put, the naturalist stance on the prediction/accommodation debate is that it is an empirical issue, one that is unlikely to have a uniform answer. Some kinds of accommodating may produce reliable results, others not. Some kinds or instances of predicting unknown data may provide especially compelling evidence, some may not. It is easy enough to construct cases where the fact that the investigator knew the data in advance suggests bias, and equally easy to find cases where it is irrelevant. Typically, Bayesians deny that there is anything wrong with accommodation and frequentists believe there is [Howson and Urbach, 2005; Mayo, 1996]. However, in their more subtle moments, both acknowledge that context matters and that there is no universal answer to the question as generally framed.8 This is in keeping with the general naturalist view sketched above about the Bayesian-frequentist debate. I will illustrate this approach in the concrete in the next section when we discuss data mining in econometrics.
In the philosophy of science today there are several models of scientific explanation: the syntactic (deductive-nomological), the semantic (functional-teleological), and the pragmatic model of explanation. The last is related to the concept of paradigm and is obviously shared by theoreticians once we accept the complexity of the systems under observation, especially social systems. If pedagogy is a science of complexity, then we can assume, as a working hypothesis, that certain segments of its area, such as school culture, might be described and studied with the help of the pragmatic model of explanation.
Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics
Volume 52, Part B, November 2015, Pages 317-327
Symmetries and the philosophy of language
Neil Dewar
https://doi.org/10.1016/j.shpsb.2015.09.004
Highlights
• Only structures which are invariant under symmetry have direct physical significance.
• Not all structures in a theory need be interpreted realistically.
• There are useful analogies between philosophy of physics and philosophy of language.
Abstract
In this paper, I consider the role of exact symmetries in theories of physics, working throughout with the example of gravitation set in Newtonian spacetime. First, I spend some time setting up a means of thinking about symmetries in this context; second, I consider arguments from the seeming undetectability of absolute velocities to an anti-realism about velocities; and finally, I claim that the structure of the theory licences (and perhaps requires) us to interpret models which differ only with regards to the absolute velocities of objects as depicting the same physical state of affairs. In defending this last claim, I consider how ideas and resources from the philosophy of language may usefully be brought to bear on this topic.