feature values
Recently Published Documents

TOTAL DOCUMENTS: 318 (five years: 131)
H-INDEX: 15 (five years: 5)

Diagnostics, 2022, Vol 12 (1), pp. 165
Author(s): Mohamed T. Ali, Yaser ElNakieb, Ahmed Elnakib, Ahmed Shalaby, Ali Mahmoud, et al.

This study proposes a Computer-Aided Diagnostic (CAD) system to diagnose subjects with autism spectrum disorder (ASD). The CAD system identifies morphological anomalies within the brain regions of ASD subjects. Cortical features are scored according to their contribution, based on a trained machine-learning (ML) model, to classifying a subject as ASD or typically developed (TD). This approach opens the door to developing a new CAD system for early, personalized diagnosis of ASD. We propose a framework to extract the cerebral cortex from structural MRI and to identify the altered areas within it. The framework consists of the following six main steps: (i) extraction of the cerebral cortex from structural MRI; (ii) cortical parcellation to a standard atlas; (iii) identification of ASD-associated cortical markers; (iv) adjustment of feature values according to sex and age; (v) construction of tailored neuro-atlases to identify ASD; and (vi) training of artificial neural networks (NNs) to classify ASD. The system is tested on the Autism Brain Imaging Data Exchange (ABIDE I) sites, achieving an average balanced accuracy score of 97±2%. This paper demonstrates the ability to develop an objective CAD system using structural MRI and tailored neuro-atlases that describe specific developmental patterns of the brain in autism.
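The reported 97±2% figure is a balanced accuracy. As a minimal illustration (not the authors' code), balanced accuracy averages per-class recall, so it is not inflated by an imbalance between ASD and TD subjects:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall, so the score is not inflated
    by class imbalance (e.g. many more TD than ASD subjects)."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# 3 of 4 TD correct (recall 0.75), 1 of 1 ASD correct (recall 1.0)
print(balanced_accuracy(["TD", "TD", "TD", "TD", "ASD"],
                        ["TD", "TD", "TD", "ASD", "ASD"]))  # 0.875
```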


Symmetry, 2022, Vol 14 (1), pp. 60
Author(s): Kun Gao, Hassan Ali Khan, Wenwen Qu

Density clustering has been widely used in many research disciplines to determine the structure of real-world datasets. Existing density clustering algorithms, however, only work well on complete datasets, while real-world data may have missing feature values due to technical limitations. Many imputation methods used before density clustering cause an aggregation phenomenon. To solve this problem, a novel two-stage density peak clustering approach for data with missing features is proposed. First, the density peak clustering algorithm is applied to the data points with complete features, and the labeled core points, which represent the overall data distribution, are used to train a classifier. Second, a symmetrical FWPD distance matrix is calculated for the incomplete data points, which are then imputed using this matrix and assigned to clusters by the trained classifier. Experimental results show that the proposed approach performs well on both synthetic and real-world datasets.
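The first stage builds on the classic density peak idea: cluster centers are points with high local density that lie far from any denser point. A minimal sketch of that idea (a cutoff-kernel variant, not the paper's exact formulation or its FWPD stage):

```python
import numpy as np

def density_peaks(X, d_c):
    """Minimal density peak clustering sketch: rho is the local density
    (neighbors within cutoff d_c), delta the distance to the nearest
    denser point. Points with large rho * delta are center candidates."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < d_c).sum(axis=1) - 1            # exclude the point itself
    delta = np.zeros(len(X))
    nearest_denser = np.full(len(X), -1)
    order = np.argsort(-rho)                   # densest first
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = D[i].max()              # global density peak
        else:
            denser = order[:rank]
            j = denser[np.argmin(D[i, denser])]
            delta[i] = D[i, j]
            nearest_denser[i] = j
    return rho, delta, nearest_denser

# Two tight blobs: the top two rho * delta points fall in different blobs.
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
rho, delta, _ = density_peaks(X, 0.5)
print(np.argsort(-(rho * delta))[:2])          # one center per blob
```

Remaining points inherit the label of their nearest denser neighbor; the paper's second stage then handles incomplete points via the symmetrical FWPD distance matrix and the trained classifier.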


2022, Vol 19 (1), pp. 1719
Author(s): Saravanan Arumugam, Sathya Bama Subramani

With the increase in the amount of data and documents on the web, text summarization has become one of the significant fields that cannot be avoided in today's digital era. Automatic text summarization provides the user with a quick summary of the information presented in text documents. This paper presents automated single-document summarization by constructing similitude graphs from the extracted text segments. After extracting the text segments, feature values are computed for each segment by comparing it with the title and the entire document, and segment significance is computed using the information gain ratio. Based on the computed features, the similarity between segments is evaluated to construct a graph in which the vertices are the segments and the edges specify the similarity between them. Segments are ranked for inclusion in the extractive summary by computing the graph score and the sentence segment score. The experimental analysis has been performed using ROUGE metrics, and the results are analyzed for the proposed model. Compared with various existing models on 4 different datasets, the proposed model ranked in the top two positions by average rank across metrics such as precision, recall, and F-score.

HIGHLIGHTS
- Presents automated single-document summarization by constructing similitude graphs from the extracted text segments
- Utilizes the information gain ratio, graph construction, and graph score and sentence segment score computation
- Results analysis has been performed using ROUGE metrics on 4 popular datasets in the document summarization domain
- The model ranked in the top two positions by average rank across metrics such as precision, recall, and F-score
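The graph-ranking step can be illustrated with a toy sketch, with bag-of-words cosine similarity standing in for the paper's feature-based similarity; the function names are hypothetical, not from the paper:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values()))
    den *= math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_segments(segments, threshold=0.1):
    """Similitude-graph ranking sketch: vertices are segments, edges carry
    cosine similarity, each segment's graph score is the sum of its edge
    weights, and segments are returned in descending score order."""
    bags = [Counter(s.lower().split()) for s in segments]
    n = len(segments)
    score = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            w = cosine(bags[i], bags[j])
            if w >= threshold:                 # keep only meaningful edges
                score[i] += w
                score[j] += w
    return sorted(range(n), key=lambda i: -score[i])

# The off-topic segment ends up ranked last.
print(rank_segments(["the cat sat on the mat",
                     "the cat lay on the mat",
                     "quantum chromodynamics rules"]))  # [0, 1, 2]
```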


Author(s): Leopoldo Bertossi

Abstract: We propose answer-set programs that specify and compute counterfactual interventions on entities that are inputs to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models, and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as maximum-responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP solver, and run with it, are shown.
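The responsibility notion itself can be illustrated outside ASP. The brute-force Python sketch below is an illustration of counterfactual responsibility over small discrete domains, not the paper's DLV programs:

```python
from itertools import combinations, product

def responsibility(clf, entity, domains):
    """Brute-force responsibility scores for a black-box classifier.
    Feature i gets score 1/(1+k), where k is the size of the smallest
    contingency set W of other features such that, with W reassigned
    (without flipping the prediction by itself), changing feature i
    alone flips the classifier's output."""
    base = clf(entity)
    n = len(entity)
    scores = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        done = False
        for k in range(n):                         # contingency size
            for W in combinations(others, k):
                for w_vals in product(*(domains[j] for j in W)):
                    ctx = list(entity)
                    for j, v in zip(W, w_vals):
                        ctx[j] = v
                    if clf(ctx) != base:
                        continue                   # W alone must not flip
                    for v_new in domains[i]:
                        if v_new != entity[i]:
                            cand = list(ctx)
                            cand[i] = v_new
                            if clf(cand) != base:  # counterfactual flip
                                scores[i] = 1.0 / (1 + k)
                                done = True
                                break
                    if done:
                        break
                if done:
                    break
            if done:
                break
    return scores

# AND of the first two binary features: each is fully responsible,
# the third feature is irrelevant.
clf = lambda e: int(e[0] == 1 and e[1] == 1)
print(responsibility(clf, [1, 1, 0], [[0, 1]] * 3))  # [1.0, 1.0, 0.0]
```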


Sensors, 2021, Vol 21 (24), pp. 8178
Author(s): Irfan Azhar, Muhammad Sharif, Mudassar Raza, Muhammad Attique Khan, Hwan-Seung Yong

The recent developments in IoT technologies are likely to be implemented extensively in the next decade. With a great increase in the crime rate, handling officers are responsible for dealing with a broad range of cyber and Internet issues during investigation. IoT technologies are helpful in the identification of suspects, but few technologies are available that use IoT and deep learning together for face sketch synthesis. Convolutional neural networks (CNNs) and other deep learning constructs have become major tools in recent approaches. A new neural network architecture, called Spiral-Net, is presented in this work; it is a modified version of U-Net that performs face sketch synthesis (this phase is known here as the compiler network C). Spiral-Net works in combination with a pre-trained Vgg-19 network, the feature extractor F, which first identifies the top n matches among viewed sketches for a given photo. F is used again to formulate a feature map based on the cosine distance of a candidate sketch produced by C from the top n matches. A customized CNN configuration (the discriminator D) then computes loss functions based on differences between the candidate sketch and the feature map; the values of these loss functions alternately update C and F. The ensemble of these networks is trained and tested on selected datasets, including CUFS, CUFSF, and part of the IIT photo–sketch dataset. Results of this modified U-Net, evaluated with the legacy NLDA (1998) face recognition scheme and its newer counterpart OpenBR (2013), demonstrate an improvement of 5% over the current state of the art in the relevant domain.


2021, Vol 118 (49), pp. e2025993118
Author(s): Francis Mollica, Geoff Bacon, Noga Zaslavsky, Yang Xu, Terry Regier, et al.

Functionalist accounts of language suggest that forms are paired with meanings in ways that support efficient communication. Previous work on grammatical marking suggests that word forms have lengths that enable efficient production, and work on the semantic typology of the lexicon suggests that word meanings represent efficient partitions of semantic space. Here we establish a theoretical link between these two lines of work and present an information-theoretic analysis that captures how communicative pressures influence both form and meaning. We apply our approach to the grammatical features of number, tense, and evidentiality and show that the approach explains both which systems of feature values are attested across languages and the relative lengths of the forms for those feature values. Our approach shows that general information-theoretic principles can capture variation in both form and meaning across languages.


2021, Vol 9 (4B)
Author(s): Hongliang Yu, Weiwei Wang, Shulin Duan, Peiting Sun, et al.

The methane (CH4) burning interruption factor and the characteristic values describing the flame combustion state in the engine cylinder were defined, and the logical mapping between image feature values and combustion conditions was established within the framework of iconology. Results show that there are two periods, combustion instability and combustion stability, during the combustion of dual fuel. The high-temperature region with an in-cylinder temperature greater than 1800 K is largest at 17°CA after top dead center (TDC), accounting for 73.25% of the combustion chamber area. During flame propagation, the radial and axial flame velocities are "unimodal" and "wavy," respectively. During the combustion process, the CH4 burning interruption factor first increases and then decreases. The combustion duration in dual-fuel mode is 21.25°CA, which is 15.5°CA shorter than in pure diesel mode.


2021, Vol 6 (3)
Author(s): Vijeeta Patil, Shanta Kallur, Vani Hiremani

Face recognition has attracted many researchers because of its unique benefits, for example, non-contact measurement for feature acquisition. Variations in illumination, pose, and expression are significant challenges for face recognition, especially when images are captured in grayscale. To partially mitigate these challenges, many research works have considered color images and have yielded better face recognition rates. A method for recognizing faces using color local texture features is described. Experimental results show that face identification approaches using color local texture features yield noticeably better recognition rates than approaches using only color or texture information. In particular, compared with grayscale texture features, the proposed color local texture features can provide good matching rates for face images taken under severe variations in illumination and also for low-resolution face images. The other biometric system uses the palmprint as the trait for identification and authentication of individuals. The main aim is to extract Haralick features and use probabilistic neural networks for verification with the palmprint biometric trait. Samples are taken from around 200 users of the PolyU database, with 2 samples acquired per user. This palmprint biometric identifies fake palmprints made of POP (plaster of Paris) and distinguishes between living and non-living samples based on the entropy feature. Experimental results show that the eleven Haralick feature values are obtained in the execution phase and efficient accuracy is achieved.
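Haralick features are statistics of the gray-level co-occurrence matrix (GLCM). A minimal sketch of the GLCM and three of the classic features, including the entropy feature used above to flag fake palmprints; the function names are hypothetical, not the paper's:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized symmetric gray-level co-occurrence matrix for one
    pixel offset; the basis of Haralick texture features."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    P = P + P.T                                # make symmetric
    return P / P.sum()

def haralick_subset(P):
    """Three classic Haralick features: contrast, energy, entropy."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    nz = P[P > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return contrast, energy, entropy

# A flat image has zero contrast, energy 1, zero entropy; a checkerboard
# maximizes horizontal contrast for two gray levels.
flat = np.zeros((4, 4), dtype=int)
board = np.indices((4, 4)).sum(axis=0) % 2
print(haralick_subset(glcm(flat, 2)))
print(haralick_subset(glcm(board, 2)))
```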


Electronics, 2021, Vol 10 (22), pp. 2862
Author(s): Dipankar Mazumdar, Mário Popolin Neto, Fernando V. Paulovich

Machine Learning prediction algorithms have made significant contributions in today's world, leading to increased usage in various domains. However, as ML algorithms surge, the need for transparent and interpretable models becomes essential. Visual representations have been shown to be instrumental in addressing this issue, allowing users to grasp models' inner workings. Despite their popularity, visualization techniques still present visual scalability limitations, mainly when applied to analyze popular and complex models such as Random Forests (RF). In this work, we propose the Random Forest Similarity Map (RFMap), a scalable interactive visual analytics tool designed to analyze RF ensemble models. RFMap focuses on explaining the inner working mechanism of models through different views that describe individual data instance predictions, provide an overview of the entire forest of trees, and highlight instance input feature values. The interactive nature of RFMap allows users to visually interpret model errors and decisions, establishing the necessary confidence and user trust in RF models and improving performance.
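RFMap itself is a visualization tool, but the per-instance data an instance-level forest view summarizes (how the individual trees vote on each instance) can be sketched directly; the helper below is a hypothetical illustration, not part of RFMap:

```python
import numpy as np

def vote_profile(tree_preds, n_classes):
    """Per-instance class-vote fractions across a forest: the raw data an
    instance-level forest view summarizes. tree_preds is an
    (n_trees, n_instances) array of predicted class labels."""
    n_trees, n_inst = tree_preds.shape
    profile = np.zeros((n_inst, n_classes))
    for t in range(n_trees):
        for i in range(n_inst):
            profile[i, tree_preds[t, i]] += 1
    return profile / n_trees

# 3 trees, 2 instances: instance 0 is contested (2:1), instance 1 unanimous.
preds = np.array([[0, 1], [0, 1], [1, 1]])
print(vote_profile(preds, 2))
```

Contested instances (rows far from a one-hot vote vector) are exactly the ones an analyst would inspect for model errors and decision boundaries.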

