elimination procedure
Recently Published Documents

TOTAL DOCUMENTS: 73 (FIVE YEARS: 9)
H-INDEX: 13 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Savino Cilla ◽  
Gabriella Macchia ◽  
Jacopo Lenkowicz ◽  
Elena H. Tran ◽  
Antonio Pierro ◽  
...  

Abstract Objectives: Radiomics is a quantitative method for the high-throughput extraction of minable imaging features. Herein, we aim to develop a CT angiography-based radiomics analysis and machine-learning model for carotid plaques to discriminate vulnerable from non-vulnerable plaques.
Methods: Thirty consecutive patients with carotid atherosclerosis were enrolled in this pilot study. At surgery, a binary classification of plaques was adopted (“hard” vs “soft”). Feature extraction was performed using the R software package Moddicom. Pairwise feature interdependencies were evaluated using the Spearman rank correlation coefficient. A univariate analysis was performed to assess the association between each feature and the plaque classification and to choose the top-ranked features. The predictive value of each feature was investigated using binary logistic regression. A stepwise backward elimination procedure was performed to minimize the Akaike information criterion (AIC). The final significant features were used to build models for the binary classification of carotid plaques, including logistic regression (LR), support vector machine (SVM), and classification and regression tree (CART) analysis. All models were cross-validated using 5-fold cross-validation. Class-specific accuracy, precision, recall, and F-measure evaluation metrics were used to quantify classifier output quality.
Results: A total of 230 radiomics features were extracted from each plaque. Pairwise Spearman correlation between features revealed a high level of correlation, with more than 80% of features correlating with at least one other feature at |ρ| > 0.8. After the stepwise backward elimination procedure, the entropy and volume features were found to be most significantly associated with the two plaque groups (p < 0.001), with AUCs of 0.92 and 0.96, respectively. The best performance was achieved by the SVM classifier with the RBF kernel, with accuracy, precision, recall, and F-score equal to 86.7%, 92.9%, 81.3%, and 86.7%, respectively. The CART model built on the entropy and volume features correctly classified 86.7% of plaques, with an AUC of 0.987.
Conclusion: This pilot study highlighted the potential of CTA-based radiomics and machine learning to discriminate plaque composition. This new approach may provide a reliable method to improve risk stratification in patients with carotid atherosclerosis.
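The feature-selection step described above (backward elimination minimizing the AIC of a logistic-regression model) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature names, the gradient-descent fit, and the greedy first-improvement stopping rule are all assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Gradient-ascent logistic regression (with intercept); returns weights and log-likelihood."""
    Xb = np.hstack([np.ones((len(y), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w += lr * Xb.T @ (y - p) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30))), 1e-12, 1 - 1e-12)
    return w, float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def aic(X, y):
    # AIC = 2k - 2 ln L, with k = number of estimated coefficients (incl. intercept).
    _, loglik = fit_logistic(X, y)
    return 2 * (X.shape[1] + 1) - 2 * loglik

def backward_eliminate(X, y, names):
    # Stepwise backward elimination: drop one feature at a time while the AIC decreases.
    keep = list(range(X.shape[1]))
    best = aic(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for i in list(keep):
            trial = [j for j in keep if j != i]
            a = aic(X[:, trial], y)
            if a < best:
                best, keep, improved = a, trial, True
                break
    return [names[i] for i in keep], best

# Hypothetical demo: two informative "radiomics" features plus three noise features.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
logits = 3.0 * X[:, 0] + 3.0 * X[:, 1]
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
names = ["entropy", "volume", "noise_1", "noise_2", "noise_3"]
full_aic = aic(X, y)
selected, final_aic = backward_eliminate(X, y, names)
```

Since a feature is only dropped when the drop improves the criterion, the final AIC is never worse than that of the full model.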


Studia Logica ◽  
2021 ◽  
Author(s):  
Martin Fischer

Abstract In this paper we discuss sequent calculi for the propositional fragment of the logic of HYPE. The logic of HYPE was recently suggested by Leitgeb (Journal of Philosophical Logic 48:305–405, 2019) as a logic for hyperintensional contexts. On the one hand we introduce a simple $$\mathbf{G1}$$-system employing rules of contraposition. On the other hand we present a $$\mathbf{G3}$$-system with an admissible rule of contraposition. The two systems are equivalent, and each is a sound and complete proof system for HYPE. In order to provide a cut-elimination procedure, we expand the calculus by connections as introduced in Kashima and Shimura (Mathematical Logic Quarterly 40:153–172, 1994).
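For orientation, a contraposition rule in sequent format might take the following shape. This is a schematic sketch of the kind of rule involved, not necessarily the paper's exact formulation:

```latex
% Schematic contraposition rule (CP) for single-succedent sequents:
\[
\frac{A \Rightarrow B}{\neg B \Rightarrow \neg A}\;(\mathrm{CP})
\]
% In the G1-style system such a rule is taken as primitive; in the
% G3-style system contraposition is instead shown to be admissible,
% i.e. every sequent derivable with (CP) is derivable without it.
```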


Author(s):  
Masahiro Hamano

Abstract We construct a geometry of interaction (GoI: a dynamic modelling of Gentzen-style cut elimination) for multiplicative-additive linear logic (MALL) by employing Bucciarelli–Ehrhard indexed linear logic MALL(I) to handle the additives. Our construction extends to the additives the Haghverdi–Scott categorical formulation (a multiplicative GoI situation in a traced monoidal category) of Girard’s original GoI 1. The indices are shown to serve not only at their original denotational level, but also at a finer-grained dynamic level, so that the peculiarities of additive cut elimination, such as superposition, erasure of subproofs, and additive (co-)contraction, can be handled with the explicit use of indices. Proofs are interpreted as indexed subsets in the category Rel, but without the explicit relational composition; instead, execution formulas are run pointwise on the interpretation at each index, with respect to symmetries of cuts, in a traced monoidal category with a reflexive object and a zero morphism. The sets of indices diminish overall as an execution formula is run, corresponding to the erasure performed by the additive cut-elimination procedure and allowing recovery of the relational composition. The main theorem is the invariance of the execution formulas along cut elimination, so that the formulas converge to the denotations of (cut-free) proofs.
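For orientation, Girard's original execution formula from GoI 1, which the pointwise per-index runs above generalize, can be written in its standard form (our notational gloss, not a quotation from the paper):

```latex
\[
\mathrm{EX}(u,\sigma)
  \;=\; (1-\sigma^{2})\Bigl(\sum_{n\ge 0} u\,(\sigma u)^{n}\Bigr)(1-\sigma^{2})
\]
% u     : the operator interpreting the proof,
% sigma : the partial symmetry encoding the cuts;
% the series is finite (u sigma nilpotent) exactly when cut
% elimination terminates, and EX then interprets the cut-free proof.
```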


PLoS ONE ◽  
2020 ◽  
Vol 15 (8) ◽  
pp. e0238145
Author(s):  
Vinicius Francisco Rofatto ◽  
Marcelo Tomio Matsuoka ◽  
Ivandro Klein ◽  
Maurício Roberto Veronez ◽  
Luiz Gonzaga da Silveira


In this paper we evaluate the effects of hard and soft constraints on Iterative Data Snooping (IDS), an iterative outlier elimination procedure. Here, the measurements of a geodetic levelling network were classified according to the local redundancy and the maximum absolute correlation between the outlier test statistics, referred to as clusters. We highlight that the larger the relaxation of the constraints, the higher the sensitivity indicators MDB (Minimal Detectable Bias) and MIB (Minimal Identifiable Bias), for both the clustering of measurements and the clustering of constraints. There are circumstances in which increasing the family-wise error rate (FWE) of the test statistics increases the performance of the IDS. Under a scenario of soft constraints, one should set out at least three soft constraints in order to identify an outlier in the constraints. In general, hard constraints should be used in the data pre-processing stage for the purpose of identifying and removing possible outlying measurements. In that process, one should opt to set out redundant hard constraints. After identifying and removing possible outliers, the soft constraints should be employed to propagate their uncertainties to the model parameters during least-squares estimation.
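The IDS loop itself can be sketched as follows: adjust by least squares, compute Baarda-style w-test statistics from the normalized residuals, remove the most suspicious observation if its statistic exceeds a critical value, and repeat. This is a minimal unit-weight sketch on an invented toy network; the noise level `sigma`, the critical value `k`, and the design matrix are all assumptions, not the paper's data.

```python
import numpy as np

def iterative_data_snooping(A, y, sigma=0.1, k=3.29):
    """Iterative outlier elimination (data snooping) for the model y = A x + e.

    Unit-weight version: w-test statistic w_i = |v_i| / (sigma * sqrt(q_vv_ii)),
    where v are the residuals and the q_vv_ii are the redundancy numbers.
    """
    idx = list(range(len(y)))          # indices of observations still in use
    removed = []
    x = None
    while len(idx) > A.shape[1]:       # keep the problem over-determined
        Ai, yi = A[idx], y[idx]
        x, *_ = np.linalg.lstsq(Ai, yi, rcond=None)
        v = yi - Ai @ x                                   # residuals
        H = Ai @ np.linalg.pinv(Ai.T @ Ai) @ Ai.T         # hat matrix
        q = np.clip(1.0 - np.diag(H), 1e-12, None)        # redundancy numbers
        w = np.abs(v) / (sigma * np.sqrt(q))              # w-test statistics
        j = int(np.argmax(w))
        if w[j] <= k:                  # nothing exceeds the critical value: stop
            break
        removed.append(idx.pop(j))     # eliminate the most suspicious observation
    return x, removed

# Toy "levelling" example: two unknown heights, six observations,
# with a gross error of +5 injected into observation 2.
A = np.array([[1, 0], [0, 1], [1, 1], [1, 0], [0, 1], [1, 1]], dtype=float)
x_true = np.array([10.0, 20.0])
e = np.array([0.01, -0.02, 0.015, -0.01, 0.02, -0.005])   # small measurement noise
y = A @ x_true + e
y[2] += 5.0                                               # gross error (outlier)
x_hat, removed = iterative_data_snooping(A, y)
```

With the gross error removed, the remaining adjustment recovers the true parameters to within the noise level.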


2019 ◽  
Vol 29 (8) ◽  
pp. 1009-1029 ◽  
Author(s):  
Federico Aschieri ◽  
Stefan Hetzl ◽  
Daniel Weller

Abstract Herbrand’s theorem is one of the most fundamental insights in logic. From the syntactic point of view, it suggests a compact representation of proofs in classical first- and higher-order logic by recording which instances have been chosen for which quantifiers. This compact representation is known in the literature as Miller’s expansion tree proof. It is inherently analytic and hence corresponds to a cut-free sequent calculus proof. Recently, several extensions of such proof representations to proofs with cuts have been proposed. These extensions are based on graphical formalisms similar to proof nets and are limited to prenex formulas. In this paper, we present a new syntactic approach that directly extends Miller’s expansion trees by cuts and also covers non-prenex formulas. We describe a cut-elimination procedure for our expansion trees with cut that is based on the natural reduction steps, and we show that it is weakly normalizing.
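As a toy illustration of the idea (our own example, not taken from the paper): the valid formula ∃x(¬P(x) ∨ P(f(x))) needs two quantifier instances, c and f(c), which an expansion tree records as labelled branches:

```latex
% Expansion of the existential quantifier with witnesses c and f(c):
\[
\exists x\,\bigl(\neg P(x) \lor P(f(x))\bigr)
  \;+^{c}\; \bigl(\neg P(c) \lor P(f(c))\bigr)
  \;+^{f(c)}\; \bigl(\neg P(f(c)) \lor P(f(f(c)))\bigr)
\]
% The "deep" formula, i.e. the disjunction of the two instances,
%   (~P(c) \/ P(f(c))) \/ (~P(f(c)) \/ P(f(f(c)))),
% contains the complementary pair P(f(c)), ~P(f(c)) and is therefore
% a propositional tautology, witnessing validity (Herbrand's theorem).
```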

