Shapley Homology: Topological Analysis of Sample Influence for Neural Networks

2020 ◽  
Vol 32 (7) ◽  
pp. 1355-1378
Author(s):  
Kaixuan Zhang ◽  
Qinglong Wang ◽  
Xue Liu ◽  
C. Lee Giles

Data samples collected for training machine learning models are typically assumed to be independent and identically distributed (i.i.d.). Recent research has demonstrated that this assumption can be problematic, as it simplifies the manifold of structured data. This observation has motivated different research areas such as data poisoning, model improvement, and the explanation of machine learning models. In this work, we study the influence of a sample on determining the intrinsic topological features of its underlying manifold. We propose the Shapley homology framework, which provides a quantitative metric for the influence of a sample on the homology of a simplicial complex. Our proposed framework consists of two main parts: homology analysis, where we compute the Betti number of the target topological space, and Shapley value calculation, where we decompose the topological features of a complex built from data points among the individual points. By interpreting the influence as a probability measure, we further define an entropy that reflects the complexity of the data manifold. Furthermore, we provide a preliminary discussion of the connection between Shapley homology and the Vapnik-Chervonenkis dimension. Empirical studies show that, when zero-dimensional Shapley homology is used on neighboring graphs, samples with higher influence scores have a greater impact on the accuracy of neural networks that determine graph connectivity, and on several regular grammars whose higher entropy values imply greater difficulty in being learned.
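To make the zero-dimensional case concrete, here is a minimal sketch of Shapley-style influence scores on a toy neighborhood graph, where the characteristic function of a vertex subset is its Betti-0 number, i.e. the count of connected components of the induced subgraph. The exact computation below enumerates all subsets, so it is exponential in the number of vertices and suitable only for toy sizes; the example graph, the use of networkx, and all names are illustrative assumptions, not the authors' implementation.

```python
# A toy-scale, exact computation: the characteristic function v(S) is the
# Betti-0 number (connected components) of the subgraph induced by S, and
# each vertex's influence is its Shapley value for v. Exponential in |V|.
from itertools import combinations
from math import factorial

import networkx as nx

def beta0(G, S):
    """Betti-0 of the subgraph induced by vertex subset S (v(empty) = 0)."""
    return nx.number_connected_components(G.subgraph(S)) if S else 0

def shapley_influence(G):
    V = list(G.nodes)
    n = len(V)
    phi = {v: 0.0 for v in V}
    for v in V:
        others = [u for u in V if u != v]
        for k in range(n):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                phi[v] += w * (beta0(G, S + (v,)) - beta0(G, S))
    return phi

# Hypothetical neighboring graph: a path, so vertex 2 bridges the two halves.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4)])
print(shapley_influence(G))  # scores sum to beta0(G) = 1
```

Normalizing these scores into a probability distribution over the samples gives the measure whose entropy the framework uses as a complexity index of the data manifold.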

SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A164-A164
Author(s):  
Pahnwat Taweesedt ◽  
JungYoon Kim ◽  
Jaehyun Park ◽  
Jangwoon Park ◽  
Munish Sharma ◽  
...  

Introduction: Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder, estimated to affect one billion people. Full-night polysomnography is considered the gold standard for OSA diagnosis; however, it is time-consuming, expensive, and not readily available in many parts of the world. Many screening questionnaires and scores have been proposed for OSA prediction, typically with high sensitivity but low specificity. The present study aims to develop models with various machine learning techniques to predict the severity of OSA by incorporating features from multiple questionnaires.

Methods: Subjects who underwent full-night polysomnography at the Torr sleep center, Texas, and completed 5 OSA screening questionnaires/scores were included. OSA was diagnosed using an Apnea-Hypopnea Index ≥ 5. We trained five different machine learning models: a deep neural network with scaled principal component analysis (DNN-PCA), Random Forest (RF), Adaptive Boosting classifier (ABC), K-Nearest Neighbors classifier (KNC), and Support Vector Machine classifier (SVMC). A training:testing subject ratio of 65:35 was used. All features, including demographic data, body measurements, and snoring and sleepiness history, were obtained from the 5 OSA screening questionnaires/scores (STOP-BANG questionnaire, Berlin questionnaire, NoSAS score, NAMES score, and No-Apnea score). Performance metrics were used to compare the machine learning models.

Results: Of 180 subjects, 51.5% were male, with a mean (SD) age of 53.6 (15.1) years. One hundred and nineteen subjects were diagnosed with OSA. The areas under the receiver operating characteristic curve (AUROC) of DNN-PCA, RF, ABC, KNC, SVMC, the STOP-BANG questionnaire, the Berlin questionnaire, the NoSAS score, the NAMES score, and the No-Apnea score were 0.85, 0.68, 0.52, 0.74, 0.75, 0.61, 0.63, 0.61, 0.58, and 0.58, respectively. DNN-PCA showed the highest AUROC, with a sensitivity of 0.79, specificity of 0.67, positive predictive value of 0.93, F1 score of 0.86, and accuracy of 0.77.

Conclusion: Our results showed that DNN-PCA outperforms the OSA screening questionnaires, scores, and the other machine learning models.
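As an illustration of the model comparison described above, the following is a minimal sketch using scikit-learn. It assumes a numeric feature matrix `X` assembled from the five questionnaires/scores and a binary label `y` (AHI ≥ 5); the 65:35 split follows the abstract, while DNN-PCA is approximated here as a scaled-PCA-plus-MLP pipeline and every hyperparameter is illustrative rather than the study's.

```python
# A minimal sketch, assuming X (features from the five questionnaires/scores)
# and y (binary OSA label, AHI >= 5) already exist as arrays.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# 65:35 training:testing split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.35, stratify=y)

models = {
    # DNN-PCA approximated as scaled PCA followed by a multilayer perceptron.
    "DNN-PCA": make_pipeline(StandardScaler(), PCA(n_components=10),
                             MLPClassifier(max_iter=1000)),
    "RF": RandomForestClassifier(),
    "ABC": AdaBoostClassifier(),
    "KNC": KNeighborsClassifier(),
    "SVMC": SVC(probability=True),  # probability=True enables AUROC scoring
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.2f}")
```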


2021 ◽  
Vol 11 (19) ◽  
pp. 9296
Author(s):  
Talha Mahboob Alam ◽  
Mubbashar Mushtaq ◽  
Kamran Shaukat ◽  
Ibrahim A. Hameed ◽  
Muhammad Umer Sarwar ◽  
...  

Lack of education is a major concern in underdeveloped countries because it leads to poor human and economic development. The level of education in public institutions varies across regions around the globe, and current disparities in access to education worldwide are mostly due to systemic regional differences and the distribution of resources. Previous research has focused on evaluating students' academic performance, but less has been done to measure the performance of educational institutions, whose key performance indicators differ from student performance indicators. There is a dire need to evaluate educational institutions' performance at a large scale based on their disparities and academic results. This study proposes a model to measure institutional performance based on key performance indicators using data mining techniques. Various feature selection methods were used to extract the key performance indicators. Several machine learning models, namely the J48 decision tree, support vector machine, random forest, rotation forest, and artificial neural network, were employed to build an efficient model. The results of the study were based on factors such as the number of schools in a region, teachers, school locations, enrolment, and the availability of the facilities that contribute to school performance. It was also observed that urban regions performed well compared to rural regions due to the better availability of educational facilities and resources. The results showed that the artificial neural network outperformed the other models, achieving an accuracy of 82.9% when the relief-F-based feature selection method was used. This study will help support governance efforts in performance monitoring, policy formulation, target setting, evaluation, and reform to address the issues and challenges in education worldwide.
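A minimal sketch of the best-performing configuration reported above, relief-F feature selection followed by an artificial neural network, assuming the third-party skrebate package for ReliefF and a numeric feature matrix `X` of institutional indicators with performance labels `y`; all names and parameter values are illustrative.

```python
# A minimal sketch, assuming the skrebate package provides ReliefF and that
# X (institutional indicators) and y (performance classes) already exist.
import numpy as np
from skrebate import ReliefF
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Relief-F-based selection of the top key performance indicators.
selector = ReliefF(n_features_to_select=10, n_neighbors=100)
X_sel = selector.fit_transform(np.asarray(X), np.asarray(y))

# Artificial neural network on the selected features; sizes are illustrative.
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
print(cross_val_score(ann, X_sel, np.asarray(y), scoring="accuracy").mean())
```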


2021 ◽  
Author(s):  
Victor Fung ◽  
Jiaxin Zhang ◽  
Eric Juarez ◽  
Bobby Sumpter

Graph neural networks (GNNs) have received intense interest as a rapidly expanding class of machine learning models remarkably well-suited for materials applications. To date, a number of successful GNNs have been proposed and demonstrated for systems ranging from crystal stability to electronic property prediction to surface chemistry and heterogeneous catalysis. However, a consistent benchmark of these models remains lacking, hindering the development and consistent evaluation of new models in the materials field. Here, we present a workflow and testing platform, MatDeepLearn, for quickly and reproducibly assessing and comparing GNNs and other machine learning models. We use this platform to optimize and evaluate a selection of top-performing GNNs on several representative datasets in computational materials chemistry. From our investigations we note the importance of hyperparameter selection, and we find roughly similar performances for the top models once optimized. We identify several strengths of GNNs over conventional models, including their performance on compositionally diverse datasets and their overall flexibility with respect to inputs, owing to learned rather than predefined representations. We also observe several weaknesses of GNNs, including high data requirements, and we propose suggestions for further improvement for applications in materials chemistry.
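For orientation, here is a minimal sketch of the kind of graph neural network such a platform benchmarks: a graph-level property regressor built with PyTorch Geometric. This is not the MatDeepLearn code; the layer choices, dimensions, and the target named in the comment are illustrative assumptions.

```python
# A minimal sketch of a graph-level materials property regressor. `data` is
# assumed to follow torch_geometric's Data/Batch convention (x, edge_index,
# batch); architecture and sizes are illustrative.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class SimpleGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)  # one scalar target, e.g. formation energy

    def forward(self, data):
        # Message passing builds learned (rather than predefined) node features.
        h = F.relu(self.conv1(data.x, data.edge_index))
        h = F.relu(self.conv2(h, data.edge_index))
        h = global_mean_pool(h, data.batch)  # pool nodes into a graph representation
        return self.out(h).squeeze(-1)
```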


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Haoran Zhu ◽  
Lei Lei

Purpose: Previous research concerning the automatic extraction of research topics mostly used rule-based or topic modeling methods, which are challenged by limited rules, interpretability issues, and a heavy dependence on human judgment. This study aims to address these issues by proposing a new method that integrates machine learning models with linguistic features for the identification of research topics.

Design/methodology/approach: First, dependency relations were used to extract noun phrases from research article texts. Second, the extracted noun phrases were classified into topics and non-topics via machine learning models using linguistic and bibliometric features. Lastly, a trend analysis was performed to identify hot research topics, i.e., topics with increasing popularity.

Findings: The new method was tested on a large dataset of COVID-19 research articles and achieved satisfactory results in terms of F-measures, accuracy, and AUC values. Hot topics of COVID-19 research were also detected based on the classification results.

Originality/value: This study demonstrates that information retrieval methods can help researchers gain a better understanding of the latest trends in both COVID-19 and other research areas. The findings are significant to both researchers and policymakers.
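A minimal sketch of the first two steps (noun-phrase extraction from dependency-parsed text, then topic/non-topic classification), assuming spaCy for parsing and a TF-IDF-plus-logistic-regression classifier as a simple stand-in for the paper's models and linguistic/bibliometric features; `texts` and `labels` are assumed inputs.

```python
# A minimal sketch: noun chunks from spaCy's dependency parse stand in for
# the paper's noun-phrase extraction; `texts` (article texts) and `labels`
# (1 = topic, 0 = non-topic, one per extracted phrase) are assumed to exist.
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nlp = spacy.load("en_core_web_sm")

def noun_phrases(text):
    """Noun phrases derived from the dependency parse."""
    return [chunk.text.lower() for chunk in nlp(text).noun_chunks]

phrases = [p for text in texts for p in noun_phrases(text)]
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(phrases, labels)  # classify each phrase as topic vs. non-topic
```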


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. It has primarily been used for 3D seismic data processing, seismic facies analysis, and well log data correlation. The rapid development of technology, with open-source artificial intelligence libraries and the accessibility of affordable graphics processing units (GPUs), makes the application of machine learning in the geosciences increasingly tractable. However, the value of artificial intelligence in structural interpretation workflows for subsurface datasets is still unclear. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms for discriminating between geological image datasets. Four image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers, and basement), fold types with three classes (buckle, chevron, and conjugate), fault types with three classes (normal, reverse, and thrust), and fold-thrust geometries with three classes (fault bend fold, fault propagation fold, and detachment fold). These image datasets are used to investigate three machine learning models: one feedforward linear neural network and two convolutional neural network models (a sequential model of 2D convolutional layers, and a residual-block model, i.e., ResNet with 9, 34, and 50 layers). The validation and testing datasets form a critical part of assessing model accuracy. Of the models tested, the ResNet model records the highest accuracy. Our CNN image classification analysis provides a framework for applying machine learning to increase the efficiency of structural interpretation, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
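As a sketch of the residual-block classifier described above, the following uses torchvision's ResNet-34 adapted to the five-class seismic-character dataset; this is an illustrative setup, not the authors' exact architecture, and data loading and augmentation are omitted.

```python
# A minimal sketch: a torchvision ResNet-34 with its classification head
# replaced for the five seismic-character classes. Hyperparameters are
# illustrative; data loading/augmentation are omitted.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # faults, folds, salt, flat layers, basement
model = models.resnet34(weights=None)  # train from scratch on geological images
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```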


2021 ◽  
Author(s):  
Chih-Kuan Yeh ◽  
Been Kim ◽  
Pradeep Ravikumar

Understanding complex machine learning models such as deep neural networks through explanations is crucial in various applications. Many explanations stem from the model's perspective and may not effectively communicate why the model is making its predictions at the right level of abstraction. For example, assigning importance weights to individual pixels in an image can only express which parts of that particular image are important to the model, whereas humans may prefer an explanation framed in terms of concepts. In this work, we review the emerging area of concept-based explanations. We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts using vectors in appropriate spaces of neural activations; we then discuss the properties of useful concepts and approaches to measuring the usefulness of concept vectors. Next, we discuss approaches for automatically extracting concepts and for addressing some of their caveats. Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
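A minimal sketch of computing a Concept Activation Vector in the spirit of the CAV approach: fit a linear classifier that separates activations of concept examples from random counterexamples, and take the unit-normalized normal to its decision boundary as the concept direction. The arrays `concept_acts` and `random_acts`, the activation vector `act`, and the head `logit_fn` are assumed inputs, not part of any library API.

```python
# A minimal sketch, assuming `concept_acts` and `random_acts` are
# (n_samples, n_units) activation arrays from a chosen layer, and `logit_fn`
# maps an activation vector to the class logit of interest.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit concept direction

# Conceptual sensitivity of one prediction: finite-difference directional
# derivative of the class logit along the CAV.
eps = 1e-3
sensitivity = (logit_fn(act + eps * cav) - logit_fn(act)) / eps
```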


2011 ◽  
pp. 81-104 ◽  
Author(s):  
G. Camps-Valls ◽  
J. F. Guerrero-Martinez

In this chapter, we review the vast field of application of artificial neural networks in cardiac pathology discrimination based on electrocardiographic signals. We discuss advantages and drawbacks of neural and adaptive systems in cardiovascular medicine and catch a glimpse of forthcoming developments in machine learning models for the real clinical environment. Some problems are identified in the learning tasks of beat detection, feature selection/extraction, and classification, and some proposals and suggestions are given to alleviate the problems of interpretability, overfitting, and adaptation. These have become important problems in recent years and will surely constitute the basis of some investigations in the immediate future.


2021 ◽  
Author(s):  
Amit Kumar Srivastava ◽  
Nima Safaei ◽  
Saeed Khaki ◽  
Gina Lopez ◽  
Wenzhi Zeng ◽  
...  

Abstract Crop yield forecasting depends on many interacting factors, including crop genotype, weather, soil, and management practices. This study analyzes the performance of machine learning and deep learning methods for winter wheat yield prediction using extensive datasets of weather, soil, and crop phenology. We propose a convolutional neural network (CNN) that uses one-dimensional convolutions to capture the time dependencies of environmental variables. The proposed CNN, evaluated along with other machine learning models for winter wheat yield prediction in Germany, outperformed all other models tested. To address seasonality, weekly features were used that explicitly take soil moisture and meteorological events into account. Our results indicate that nonlinear models such as deep learning models and XGBoost are more effective at finding the functional relationship between crop yield and the input data than linear models, and that deep neural networks had a higher prediction accuracy than XGBoost. One of the main limitations of machine learning models is their black-box property. We therefore moved beyond prediction and performed feature selection, as it provides key results towards explaining yield prediction (variable importance by time). As such, our study indicates which variables have the most significant effect on winter wheat yield.
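A minimal sketch of a one-dimensional CNN over weekly environmental features, as described above, written with Keras; the input shape, layer sizes, and loss are illustrative assumptions rather than the authors' exact architecture.

```python
# A minimal sketch: 1D convolutions over a season of weekly environmental
# features predict a scalar yield. Shapes and hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers

n_weeks, n_features = 52, 10  # assumed: one season of weekly weather/soil features
model = tf.keras.Sequential([
    layers.Input(shape=(n_weeks, n_features)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # local time dependencies
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # predicted winter wheat yield
])
model.compile(optimizer="adam", loss="mse")
```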


Author(s):  
Dario Guidotti

Deep Neural Networks (DNNs) are popular machine learning models which have found successful application in many different domains across computer science. Nevertheless, providing formal guarantees on the behaviour of neural networks is hard and therefore their reliability in safety-critical domains is still a concern. Verification and repair emerged as promising solutions to address this issue. In the following, I will present some of my recent efforts in this area.


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
I Korsakov ◽  
A Gusev ◽  
T Kuznetsova ◽  
D Gavrilov ◽  
R Novitskiy

Background: Advances in precision medicine will require increasingly individualized prognostic evaluation of patients in order to provide each patient with appropriate therapy. The traditional statistical methods of predictive modeling, such as SCORE, PROCAM, and Framingham, recommended by the European guidelines for the prevention of cardiovascular disease, are not adapted to all patients and require significant human involvement in the selection, transformation, and imputation of predictive variables. In ROC analysis for the prediction of significant cardiovascular disease (CVD), the areas under the curve are 0.62–0.72 for Framingham, 0.66–0.73 for SCORE, and 0.60–0.69 for PROCAM. To improve on this, we applied machine learning and deep learning models that rely on conventional risk factors to 10-year CVD event prediction using longitudinal electronic health records (EHR).

Methods: For machine learning, we applied logistic regression (LR); as a deep learning algorithm, we used recurrent neural networks with long short-term memory (LSTM) units. From the longitudinal EHR we extracted the following features: demographics, vital signs, diagnoses (ICD-10-CM: I21-I22.9, I61-I63.9), and medication. A challenge at this step is that nearly 80 percent of the clinical information in EHRs is unstructured and contains errors and typos, and handling missing data is important for correctly training machine learning and deep learning algorithms. The study cohort included patients between the ages of 21 and 75 with a dynamic observation window. In total, the dataset contained 31,517 individuals, but only 3,652 individuals had all features present or missing values that could easily be imputed. Among these 3,652 individuals, 29.4% had a CVD; the mean age was 49.4 years, and 68.2% were female.

Evaluation: We randomly divided the dataset into a training and a test set with an 80/20 split. The LR was implemented with Python Scikit-Learn, and the LSTM model was implemented with Keras using TensorFlow as the backend.

Results: We applied the machine learning and deep learning models using the same features as the traditional risk scales and using longitudinal EHR features, respectively. The machine learning model (LR) achieved an AUROC of 0.74–0.76 and the deep learning model (LSTM) 0.75–0.76. Using features from the EHR improved the AUROC of both the logistic regression and deep learning models to 0.78–0.79.

Conclusion: The machine learning models outperformed traditional clinically used predictive models for CVD risk (i.e., the SCORE, PROCAM, and Framingham equations). This approach was used to create a clinical decision support system (CDSS) that uses both traditional risk scales and models based on neural networks. Especially important is that the system can calculate cardiovascular disease risk automatically and recalculate it immediately after new information is added to the EHR. The results are delivered to the user's personal account.
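A minimal sketch of the LSTM branch described in the Methods, assuming EHR features arranged as fixed-length patient sequences of shape (time steps, features); the sequence length and layer sizes are illustrative, and the LR baseline would use flattened features with scikit-learn instead.

```python
# A minimal sketch: an LSTM over per-visit EHR feature vectors predicts the
# probability of a 10-year CVD event. Shapes and sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers

n_steps, n_features = 20, 30  # assumed: visits in the observation window x EHR features
model = tf.keras.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # 10-year CVD event probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```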

