Machine Learning for First-Order Theorem Proving

2014 ◽  
Vol 53 (2) ◽  
pp. 141-172 ◽  
Author(s):  
James P. Bridge ◽  
Sean B. Holden ◽  
Lawrence C. Paulson
1980 ◽  
Vol 3 (2) ◽  
pp. 235-268
Author(s):  
Ewa Orłowska

The central method employed today for theorem proving is the resolution method introduced by J. A. Robinson in 1965 for the classical predicate calculus. Since then, many improvements to the resolution method have been made. In parallel, work has begun on automated theorem-proving techniques for non-classical logics, in connection with applications of these logics in computer science. In this paper a generalization of the notion of the resolution principle is introduced and discussed. A certain class of first-order logics is considered, and deductive systems for these logics with a resolution principle as an inference rule are investigated. Necessary and sufficient conditions for the so-called resolution completeness of such systems are given. A generalized Herbrand property for a logic is defined and its connections with resolution completeness are presented. A class of binary resolution systems is investigated and a kind of normal form for derivations in such systems is given. On the basis of the methods developed, the resolution system for the classical predicate calculus is described and resolution systems for some non-classical logics are outlined. A method of program synthesis based on the resolution system for the classical predicate calculus is presented. A notion of resolution-interpretability of a logic L in another logic L′ is introduced. The method of resolution-interpretability consists in establishing a relation between formulas of the logic L and certain sets of formulas of the logic L′, with the intention of using the resolution system for L′ to prove theorems of L. It is shown how the method of resolution-interpretability can be used to prove decidability of sets of unsatisfiable formulas of a given logic.
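
To make the resolution rule concrete, the following is a minimal sketch of binary resolution on ground (variable-free) clauses. It illustrates only the propositional core of Robinson's rule; full first-order resolution additionally requires unification of complementary literals, which is omitted here. The clause encoding and function names are illustrative, not taken from the paper.

```python
# Minimal sketch: binary resolution on ground clauses.
# A clause is a frozenset of literals; a literal is a string, negation marked with '~'.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one complementary literal pair."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def saturate(clauses):
    """Saturate a clause set under binary resolution.
    Returns True if the empty clause (a contradiction) is derived, i.e. the set is unsatisfiable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolvents(a, b):
                    if not r:          # empty clause derived
                        return True
                    new.add(r)
        if new <= clauses:             # no new clauses: saturated without contradiction
            return False
        clauses |= new

# Example: {P}, {~P or Q}, {~Q} is unsatisfiable.
print(saturate([{"P"}, {"~P", "Q"}, {"~Q"}]))   # True
```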


2021 ◽  
pp. 197140092199897
Author(s):  
Sarv Priya ◽  
Caitlin Ward ◽  
Thomas Locke ◽  
Neetu Soni ◽  
Ravishankar Pillenahalli Maheshwarappa ◽  
...  

Objectives To evaluate the diagnostic performance of multiple machine learning classifier models derived from first-order histogram texture parameters extracted from T1-weighted contrast-enhanced images in differentiating glioblastoma and primary central nervous system lymphoma. Methods Retrospective study of 97 glioblastoma and 46 primary central nervous system lymphoma patients. Thirty-six different combinations of classifier models and feature selection techniques were evaluated. Five-fold nested cross-validation was performed. Model performance was assessed for the whole tumour and the largest single slice using the receiver operating characteristic curve. Results The cross-validated performance was similar for the top performing models for both whole tumour and largest single slice (area under the curve 0.909–0.924). However, there was a considerable difference between the worst performing model (logistic regression with the full feature set, area under the curve 0.737) and the highest performing model for whole tumour (least absolute shrinkage and selection operator model with correlation filter, area under the curve 0.924). For the largest single slice, the multilayer perceptron model with correlation filter had the highest performance (area under the curve 0.914). No significant difference was seen between the diagnostic performance of the top performing models for whole tumour and largest single slice. Conclusions First-order texture analysis derived from T1 contrast-enhanced images can differentiate between glioblastoma and primary central nervous system lymphoma with good diagnostic performance. Machine learning performance can vary considerably depending on the model and feature selection method. Largest single slice and whole tumour analysis show comparable diagnostic performance.
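
A minimal sketch of the evaluation design described above: five-fold nested cross-validation of an L1-penalised (LASSO-style) classifier on first-order texture features, scored by area under the receiver operating characteristic curve. The feature matrix here is synthetic and the hyperparameter grid, scaler, and filter steps are assumptions; the study's actual feature-selection methods and model settings may differ.

```python
# Sketch of five-fold nested cross-validation with an L1-penalised classifier.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(143, 20))          # 143 patients x 20 first-order texture features (synthetic)
y = np.r_[np.ones(97), np.zeros(46)]    # 97 glioblastoma vs 46 lymphoma (labels only)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear")),  # LASSO-style classifier
])
param_grid = {"clf__C": [0.01, 0.1, 1.0, 10.0]}

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)   # tunes regularisation
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)   # estimates generalisation

search = GridSearchCV(pipe, param_grid, scoring="roc_auc", cv=inner)
auc = cross_val_score(search, X, y, scoring="roc_auc", cv=outer)
print(f"nested-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```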


BJS Open ◽  
2021 ◽  
Vol 5 (1) ◽  
Author(s):  
F Torresan ◽  
F Crimì ◽  
F Ceccato ◽  
F Zavan ◽  
M Barbot ◽  
...  

Abstract Background The main challenge in the management of indeterminate, incidentally discovered adrenal tumours is to differentiate benign from malignant lesions. In the absence of clear signs of invasion or metastases, imaging techniques do not always precisely define the nature of the mass. The present pilot study aimed to determine whether radiomics may predict malignancy in adrenocortical tumours. Methods CT images in unenhanced, arterial, and venous phases were reviewed from 19 patients who had undergone resection of adrenocortical tumours and from a cohort who had undergone surveillance for at least 5 years for incidentalomas. A volume of interest was drawn for each lesion using dedicated software, and, for each phase, first-order (histogram) and second-order (grey-level co-occurrence matrix and run-length matrix) radiological features were extracted. Data were then analysed with an unsupervised machine learning approach using the K-means clustering technique. Results Of the operated patients, nine had a non-functional adenoma and 10 a carcinoma. There were 11 patients in the surveillance group. Two first-order features in unenhanced CT and one in arterial CT, together with 14 second-order parameters in unenhanced and venous CT and 10 second-order features in arterial CT, were able to differentiate adrenocortical carcinoma from adenoma (P < 0.050). After excluding two malignant outliers, the unsupervised machine learning approach correctly predicted malignancy in seven of eight adrenocortical carcinomas in all phases. Conclusion Radiomics with CT texture analysis was able to discriminate malignant from benign adrenocortical tumours, even with an unsupervised machine learning approach, in nearly all patients.
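
The unsupervised step can be illustrated with a short K-means sketch: per-lesion radiomic feature vectors are standardised and clustered into two groups, which can then be cross-tabulated against the histological diagnosis. The feature values below are synthetic placeholders; the study extracted its first- and second-order features from CT with dedicated software.

```python
# Sketch: K-means clustering of per-lesion radiomic feature vectors into two groups.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
features = rng.normal(size=(30, 14))    # 30 lesions x 14 texture features (synthetic stand-in)

X = StandardScaler().fit_transform(features)          # texture features live on very different scales
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Cluster membership would then be compared with the benign/malignant labels
# (adenoma vs adrenocortical carcinoma) to check how well the clusters separate them.
print(labels)
```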


1994 ◽  
Vol 5 (3-4) ◽  
pp. 193-212 ◽  
Author(s):  
Leo Bachmair ◽  
Harald Ganzinger ◽  
Uwe Waldmann

Author(s):  
Benedikt Knüsel ◽  
Christoph Baumberger ◽  
Marius Zumwald ◽  
David N. Bresch ◽  
Reto Knutti

Due to ever larger volumes of environmental data, environmental scientists can increasingly use machine learning to construct data-driven models of phenomena. Data-driven environmental models can provide useful information to society, but this requires that their uncertainties be understood. However, new conceptual tools are needed for this, because existing approaches assess the uncertainty of environmental models in terms of specific locations, such as model structure and parameter values. These locations are not informative for an assessment of the predictive uncertainty of data-driven models. Rather than the model structure or model parameters, we argue that it is the behavior of a data-driven model that should be subject to an assessment of uncertainty.

In this paper, we present a novel framework that can be used to assess the uncertainty of data-driven environmental models. The framework uses argument analysis and focuses on epistemic uncertainty, i.e. uncertainty that is related to a lack of knowledge. It proceeds in three steps. The first step consists in reconstructing the justification of the assumption that the model used is fit for the predictive task at hand. Arguments for this justification may, for example, refer to sensitivity analyses and model performance on a validation dataset. In the second step, this justification is evaluated to identify how conclusively the fitness-for-purpose assumption is justified. In the third step, the epistemic uncertainty is assessed on the basis of this evaluation of the arguments. Epistemic uncertainty emerges from insufficient justification of the fitness-for-purpose assumption, i.e. if the model is less than maximally fit for purpose. This lack of justification translates into predictive uncertainty, or first-order uncertainty. Uncertainty also emerges if it is unclear how well the fitness-for-purpose assumption is justified. We refer to this uncertainty as "second-order uncertainty". In other words, second-order uncertainty is the uncertainty that researchers face when assessing first-order uncertainty.

We illustrate how the framework is applied by discussing a case study from environmental science in which data-driven models are used to make long-term projections of soil selenium concentrations. We highlight that in many applications the lack of system understanding and the lack of transparency of machine learning can introduce a substantial level of second-order uncertainty. We close by sketching how the framework can inform uncertainty quantification.
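
Two of the evidence types named for the first step (validation-set performance and sensitivity analysis) can be sketched concretely. The model and data below are synthetic placeholders, not the soil-selenium models from the case study; the sketch only shows what such supporting evidence might look like in code.

```python
# Hypothetical sketch: two pieces of evidence for a fitness-for-purpose argument.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                  # synthetic environmental predictors
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Evidence 1: predictive skill on data the model has not seen.
print("validation MAE:", mean_absolute_error(y_val, model.predict(X_val)))

# Evidence 2: one-at-a-time sensitivity -- how much predictions move when a
# single input is perturbed by one standard deviation.
for j in range(X.shape[1]):
    X_pert = X_val.copy()
    X_pert[:, j] += X_val[:, j].std()
    shift = np.abs(model.predict(X_pert) - model.predict(X_val)).mean()
    print(f"mean prediction shift for input {j}: {shift:.3f}")
```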

