Armchair Science and Armchair Philosophy

2019 ◽  
Vol 56 (2) ◽  
pp. 43-45
Author(s):  
Anton V. Kuznetsov ◽  

Williamson defends armchair philosophy by likening it to armchair science: both yield results of the same standing, and both use a priori methods such as model building and conditional analysis. Moreover, if a priori methods are accepted within science, then they are acceptable in philosophy, and armchair philosophy is thereby justified. However, I am not swayed by this reasoning: there could be non-armchair philosophers who use these same a priori methods. So there are two options: revise the notion of armchair philosophy, or add more detail to the aforementioned reasoning.


2019 ◽  
Vol 56 (2) ◽  
pp. 53-59
Author(s):  
Vadim V. Vasilyev ◽  

In this paper I discuss Timothy Williamson’s panel paper “Armchair Philosophy”, the objections raised by the participants of the panel discussion, and other possible reactions to it. I show how the content of Williamson’s paper corresponds to the main themes of his book “Doing Philosophy”, and how the paper places greater emphasis on the method of model building, on which he rests his hopes for the future of armchair philosophy. An analysis of the responses to the paper by Daniel Stoljar, Joshua Knobe, Daniel Dennett, and Anton Kuznetsov shows, however, that the version of armchair philosophy proposed by Williamson does not raise many objections among principal opponents of the armchair approach, and thus does little to advance the a priori methodology that this kind of philosophy is supposed to defend. A more effective defense would require a conceptual analysis that promises to yield a priori or conceptual truths. Williamson, however, doubts the prospects for productive conceptual analysis. Nevertheless, I try to show that traditional conceptual analysis can be improved, and that such an improved analysis might perform its function of promoting radical armchair philosophy much more effectively. Instead of clarifying some more or less interesting concepts, conceptual analysis might aim at clarifying our natural beliefs, such as the belief in the causal dependence of ordinary events, in the independent existence of the objects of our experience, in the identity of some objects, in other minds, etc. In the course of such clarification we can also try to understand some non-trivial relations between our natural beliefs. I provide an example of such an analysis, which yields a truth bearing all the marks of a necessary conceptual truth, and I claim that many similar truths remain to be found.



2019 ◽  
Vol 56 (2) ◽  
pp. 19-25
Author(s):  
Timothy Williamson ◽  

The article presents an anti-exceptionalist view of philosophical methodology, on which it is much closer to the methodology of other disciplines than many philosophers like to think. Like mathematics, philosophy is a science, but not a natural science. Its methods are not primarily experimental, though it can draw on the results of natural science. Like foundational mathematics, its methods are abductive as well as deductive. As in the natural sciences, much progress in philosophy consists in the construction of better models rather than in the discovery of new laws. We should not worry about whether philosophy is a priori or a posteriori, because the distinction is epistemologically superficial.



2020 ◽  
pp. 030573561989641
Author(s):  
William J Coppola ◽  
Anita B Kumar ◽  
Joshua N Hook

The purpose of this study was to construct and validate a psychometric measure of humility in musical contexts. Using confirmatory factor analysis (CFA; N = 423), we demonstrated initial evidence for the validity of a theoretical model of musical humility. We used CFA to test an a priori model, building from prior research; the analysis confirmed five factors: purposeful musical engagement and collaboration, other-orientedness, lack of superiority, acknowledgment of shortcomings and learnability, and healthy pride. The resulting Musical Humility Scale comprises 30 items that may be further tested alongside other psychometric batteries for investigating predictors and correlates of humility in musical participation. We discuss limitations and directions for future research, including strategies for refining the testing criteria and suggestions for establishing convergent and discriminant validity.
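
Readers wanting to reproduce this kind of analysis could start from a sketch like the following, which fits a five-factor CFA in Python with the semopy library. The item names (mh1 through mh30) and the assignment of six items per factor are illustrative assumptions, not the published scale's item-factor mapping.

```python
# Minimal CFA sketch, assuming 30 item columns mh1..mh30 in a DataFrame.
# The item-to-factor assignment below is hypothetical, not the published scale.
import pandas as pd
import semopy

MODEL_DESC = """
engagement       =~ mh1 + mh2 + mh3 + mh4 + mh5 + mh6
other_oriented   =~ mh7 + mh8 + mh9 + mh10 + mh11 + mh12
lack_superiority =~ mh13 + mh14 + mh15 + mh16 + mh17 + mh18
shortcomings     =~ mh19 + mh20 + mh21 + mh22 + mh23 + mh24
healthy_pride    =~ mh25 + mh26 + mh27 + mh28 + mh29 + mh30
"""

def fit_cfa(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the five-factor model and return semopy's fit statistics
    (chi-square, CFI, TLI, RMSEA, ...)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)  # responses: N rows, one column per scale item
    return semopy.calc_stats(model)
```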



1994 ◽  
Vol 24 (1) ◽  
pp. 39-54
Author(s):  
C. Asteasu ◽  
K. Maiora ◽  
J. Etxaniz


2018 ◽  
Vol 66 (3) ◽  
pp. 303-315 ◽  
Author(s):  
Alberto Viglione ◽  
Magdalena Rogger ◽  
Herbert Pirkl ◽  
Juraj Parajka ◽  
Günter Blöschl

Abstract Since the beginning of hydrological research, hydrologists have developed models that reflect their perception of how catchments work and that make use of the available information in the most efficient way. In this paper we develop hydrologic models based on field-mapped runoff generation mechanisms as identified by a geologist. For four different catchments in Austria, we identify four different lumped model structures and constrain their parameters based on the field-mapped information. In order to understand the usefulness of geologic information, we test the models' capability to predict river discharge in two cases: (i) without calibration and (ii) using the standard split-sample calibration/validation procedure. All models are compared against each other. Results show that, when no calibration is involved, using the right model structure for the catchment of interest is valuable. A priori information on model parameters does not always improve the results but allows for more realistic model parameters. When all parameters are calibrated to the discharge data, the different model structures do not matter, i.e., the differences can largely be compensated by the choice of parameters. When parameters are constrained based on field-mapped runoff generation mechanisms, the results are not better but are more consistent between different calibration periods. Models selected by runoff generation mechanisms are expected to be more robust and more suitable for extrapolation to conditions outside the calibration range than models that are purely based on parameter calibration to runoff data.
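
As an illustration of the split-sample test used in case (ii), the sketch below calibrates a toy single-parameter lumped model on the first half of a record and validates it on the second half, scoring both with Nash-Sutcliffe efficiency. The linear-reservoir structure and parameter bounds are illustrative assumptions, not the four field-informed model structures of the paper.

```python
# Split-sample calibration/validation sketch with a toy lumped model.
import numpy as np
from scipy.optimize import minimize_scalar

def linear_reservoir(precip: np.ndarray, k: float) -> np.ndarray:
    """Toy runoff model: a storage fed by precipitation, drained as S/k."""
    storage, q = 0.0, np.zeros(len(precip))
    for t, p in enumerate(precip):
        storage += p
        q[t] = storage / k
        storage -= q[t]
    return q

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency (1 = perfect, <= 0 = no better than mean)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def split_sample_test(precip: np.ndarray, q_obs: np.ndarray):
    """Calibrate k on the first half of the record, validate on the second."""
    half = len(precip) // 2
    result = minimize_scalar(
        lambda k: -nse(q_obs[:half], linear_reservoir(precip[:half], k)),
        bounds=(1.0, 100.0), method="bounded")
    k_cal = result.x
    return k_cal, nse(q_obs[half:], linear_reservoir(precip[half:], k_cal))
```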



Author(s):  
Harikumar Iyer ◽  
Xiao Tang ◽  
Sundar Krishnamurty

Abstract This paper deals with two major issues that are central to the development and implementation of decision-based approaches to engineering design, namely, the treatment of constraints and accurate preference representation. A decision analysis based constraint handling technique is introduced that will be particularly useful in constrained engineering problems with multiple attributes. Recognizing the iterative nature of engineering design, where design alternatives may not always be known a priori (unlike in traditional engineering design), this paper introduces the concept of knowledge-influenced attribute model building as a means to update preference representation so as to accurately reflect the designer's intent throughout the evolving design process. These two concepts are then used to extend the TRED (Trade-off based Robust Engineering Design) framework, which has been developed as a formal design strategy through the integration of utility theory based multiattribute models into a Taguchi philosophy based design of experiments setup. Here, to find robust optimal solutions to constrained engineering design problems, this paper presents the development of a second-order design space reduction technique based on Response Surface Methodology (RSM). Its application to engineering problems is illustrated through a simple case study, and the results are discussed.
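
For readers unfamiliar with second-order RSM, the sketch below fits a full quadratic response surface to experimental data for two design variables and locates its stationary point, the candidate region around which a design space could be reduced. The two-variable setup and plain least-squares fit are simplifying assumptions, not the paper's technique in full.

```python
# Second-order response surface fit and stationary point, two variables.
import numpy as np

def design_matrix(x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Full quadratic model: columns 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack(
        [np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

def fit_surface(x1, x2, y):
    """Least-squares estimates of b0, b1, b2, b11, b22, b12."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
    return coeffs

def stationary_point(coeffs):
    """Solve grad(y) = 0 for the fitted quadratic; the solution is the
    candidate optimum around which the design space can be shrunk."""
    _, b1, b2, b11, b22, b12 = coeffs
    hessian = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(hessian, -np.array([b1, b2]))
```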



2001 ◽  
Vol 43 (7) ◽  
pp. 105-113 ◽  
Author(s):  
L. Van Vooren ◽  
M. Van De Steene ◽  
J.-P. Ottoy ◽  
P. A. Vanrolleghem

In this paper, buffer capacity profiles are used in the framework of automatic monitoring of water quality. The aim of the proposed methodology is to build buffer capacity models automatically and stepwise for each titrated sample, and to quantify the individual buffer systems that constitute the total buffer capacity. An automatic and robust model building algorithm has been developed and applied to many titration curves of effluent and river water samples. It is illustrated that automatically built buffer capacity models mostly yield similar or better estimates of ammonium and ortho-phosphate in the samples than a priori fixed buffer capacity models. The automatic modelling approach is also advantageous for alarm generation purposes on, e.g., river waters, because unexpected buffers are easily detected.
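
A minimal sketch of such stepwise model building, under simplifying assumptions: candidate monoprotic buffer systems (hypothetical pKa values, water term omitted) are added greedily while they keep reducing the residual of a non-negative least-squares fit to the measured buffer capacity curve. The paper's actual algorithm is more elaborate.

```python
# Greedy stepwise buffer capacity model building (simplified sketch).
import numpy as np
from scipy.optimize import nnls

def buffer_term(pH: np.ndarray, pKa: float) -> np.ndarray:
    """Buffer capacity per unit concentration of a monoprotic system:
    beta/C = ln(10) * Ka*[H+] / (Ka + [H+])^2 (water term omitted)."""
    h, ka = 10.0 ** (-pH), 10.0 ** (-pKa)
    return np.log(10.0) * ka * h / (ka + h) ** 2

def stepwise_build(pH, beta_obs, candidate_pKas, tol=0.01):
    """Add the candidate buffer that most reduces the residual; stop when
    the relative improvement drops below tol. Returns (pKas, concentrations)."""
    selected, best_conc = [], np.array([])
    best_resid = float(np.linalg.norm(beta_obs))
    while True:
        trials = []
        for pKa in candidate_pKas:
            if pKa in selected:
                continue
            A = np.column_stack([buffer_term(pH, p) for p in selected + [pKa]])
            conc, resid = nnls(A, beta_obs)  # concentrations kept >= 0
            trials.append((resid, pKa, conc))
        if not trials:
            break
        resid, pKa, conc = min(trials, key=lambda t: t[0])
        if best_resid - resid < tol * best_resid:
            break
        selected.append(pKa)
        best_resid, best_conc = resid, conc
    return selected, best_conc
```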



2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Paolo Fusar-Poli ◽  
Dominic Stringer ◽  
Alice M. S. Durieux ◽  
Grazia Rutigliano ◽  
Ilaria Bonoldi ◽  
...  

Abstract Predicting the onset of psychosis in at-risk individuals relies on robust prognostic model building methods, using either a priori clinical knowledge (also termed clinical-learning) to preselect predictors or machine-learning methods to select predictors automatically. To date, there is no empirical research comparing the prognostic accuracy of these two methods for the prediction of psychosis onset. In a first experiment, no improved performance was observed when machine-learning methods (LASSO and RIDGE) were applied, using the same predictors, to an individualised, transdiagnostic, clinically based risk calculator previously developed on the basis of clinical-learning (predictors: age, gender, age by gender, ethnicity, ICD-10 diagnostic spectrum) and externally validated twice. In a second experiment, two refined versions of the published model, which expanded the granularity of the ICD-10 diagnosis, were introduced: ICD-10 diagnostic categories and ICD-10 diagnostic subdivisions. Although these refined versions showed an increase in apparent performance, their external performance was similar to that of the original model. In a third experiment, the three refined models were analysed under machine-learning and clinical-learning with a variable events per variable ratio (EPV). The best performing model under low EPVs was obtained through machine-learning approaches. The development of prognostic models on the basis of a priori clinical knowledge, large samples and adequate events per variable is a robust clinical prediction method for forecasting psychosis onset in at-risk patients, and is comparable to machine-learning methods, which are more difficult to interpret and implement. Machine-learning methods should be preferred for high-dimensional data when no a priori knowledge is available.
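
The comparison described in the first experiment could be sketched as follows: a logistic model restricted to the clinically preselected predictors versus L1- (LASSO) and L2- (RIDGE) penalised logistic models given the full design matrix, all scored by cross-validated AUC. The column names, and the assumption that categorical predictors are already numerically encoded, are illustrative; this is not the published calculator.

```python
# Clinical-learning vs machine-learning sketch (scikit-learn).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Predictors of the published calculator, per the abstract; names assumed.
CLINICAL_PREDICTORS = ["age", "gender", "age_x_gender",
                       "ethnicity", "icd10_spectrum"]

def compare_learning(X: pd.DataFrame, y: pd.Series) -> dict:
    """Mean 5-fold cross-validated AUC for each modelling strategy.
    Assumes all columns of X are already numerically encoded."""
    models = {
        "clinical": (LogisticRegression(max_iter=1000),
                     X[CLINICAL_PREDICTORS]),
        "lasso": (LogisticRegression(penalty="l1", solver="liblinear"), X),
        "ridge": (LogisticRegression(penalty="l2", max_iter=1000), X),
    }
    return {name: cross_val_score(est, feats, y, cv=5,
                                  scoring="roc_auc").mean()
            for name, (est, feats) in models.items()}
```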



2018 ◽  
Vol 6 ◽  
pp. 183-203
Author(s):  
Lydia McGrew

Thomas Crisp has attempted to revive something akin to Alvin Plantinga’s Principle of Dwindling Probabilities to argue that the historical case for the resurrection of Jesus does not make the posterior probability of the resurrection very high. I argue that Crisp’s argument fails because he is attempting to evaluate a concrete argument in an a priori manner. I show that the same moves he uses would be absurd in other contexts, as applied both to our acquaintance with human beings and to evidence for divine intervention. Crisp’s attempt to relate the evidence for a specific act of God such as the resurrection to generic theism, thereby creating skepticism about the power of the evidence, is symptomatic of a larger problem in the philosophy of religion which I dub “separationism” and which has characterized the work of both advocates of classical apologetics and philosophers of science.


