Validation of the web-based LUMINA questionnaire for recruiting large cohorts of migraineurs

Cephalalgia ◽  
2011 ◽  
Vol 31 (13) ◽  
pp. 1359-1367 ◽  
Author(s):  
WPJ van Oosterhout ◽  
CM Weller ◽  
AH Stam ◽  
F Bakels ◽  
T Stijnen ◽  
...  

Objective: To assess the validity of a self-administered, web-based migraine questionnaire in diagnosing migraine aura for use in epidemiological and genetic studies. Methods: Self-reported migraineurs were enrolled via the LUMINA website and completed a web-based questionnaire on headache and aura symptoms after fulfilling screening criteria. Diagnoses were calculated using an algorithm based on the International Classification of Headache Disorders (ICHD-2), and semi-structured telephone interviews were performed for final diagnoses. Logistic regression generated a prediction rule for aura. Algorithm-based diagnoses and predicted diagnoses were subsequently compared with the interview-derived diagnoses. Results: In 1 year, we recruited 2397 migraineurs, of whom 1067 were included in the validation. A seven-question subset provided higher sensitivity (86% vs. 45%), slightly lower specificity (75% vs. 95%), and similar positive predictive value (86% vs. 88%) in assessing aura compared with the ICHD-2-based algorithm. Conclusions: This questionnaire is accurate and reliable in diagnosing migraine aura among self-reported migraineurs and enables detection of more aura cases with a low false-positive rate.
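The sensitivity, specificity, and positive predictive value reported above all follow directly from a 2×2 confusion matrix. A minimal Python sketch, using hypothetical counts chosen only to illustrate the arithmetic (not the study's actual data), shows the computation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute sensitivity, specificity, and PPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # proportion of true aura cases detected
    specificity = tn / (tn + fp)   # proportion of non-aura cases correctly excluded
    ppv = tp / (tp + fp)           # proportion of positive calls that are correct
    return sensitivity, specificity, ppv

# Hypothetical counts for illustration only:
sens, spec, ppv = diagnostic_metrics(tp=86, fp=14, fn=14, tn=42)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
```

The trade-off described in the abstract (higher sensitivity at the cost of some specificity) corresponds to moving cases between the `fn` and `fp` cells of this matrix.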

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chiaki Kuwada ◽  
Yoshiko Ariji ◽  
Yoshitaka Kise ◽  
Takuma Funakoshi ◽  
Motoki Fukuda ◽  
...  

Abstract Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of this study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique, with and without normal data in the learning process, to verify its performance in comparison to human observers, and to clarify some characteristic appearances probably related to the performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP) or without cleft palate (CA only) and 210 patients without CA (normal) were used to create two models on the DetectNet. Models 1 and 2 were developed based on the data without and with normal subjects, respectively, to detect the CAs and classify them as with or without CP. Model 2 reduced the false-positive rate (1/30) compared with model 1 (12/30). The overall accuracy of model 2 was higher than that of model 1 and of the human observers. The model created in this study appears to have the potential to detect and classify CAs on panoramic radiographs, and might be useful in assisting human observers.


Author(s):  
Anil B. Gavade ◽  
Vijay S. Rajpurohit

Over the last few decades, multiple advances have been made in the classification of vegetation areas through land cover and land use. However, classification is one of the most complicated and contested problems and has received considerable attention. To tackle this problem, this paper proposes a new Firefly-Harmony Search based Deep Belief Neural Network method (FHS-DBN) for the classification of land cover and land use. The segmentation process is done using Bayesian Fuzzy Clustering, and a feature matrix is developed. The feature matrix is given to the proposed FHS-DBN method, which distinguishes land cover from land use in multispectral satellite images for analyzing the vegetation area. The proposed FHS-DBN method is designed by training the DBN using the FHS algorithm, which is developed by combining the Firefly Algorithm (FA) and the Harmony Search (HS) algorithm. The performance of the FHS-DBN model is evaluated using three metrics: Accuracy, True Positive Rate (TPR), and False Positive Rate (FPR). From the experimental analysis, it is concluded that the proposed FHS-DBN model achieves high classification accuracies of 0.9381, 0.9488, 0.9497, and 0.9477 on the Indian Pines, Salinas scene, Pavia Centre and University, and Pavia University scene datasets.


2016 ◽  
Vol 46 (4) ◽  
pp. 524-548 ◽  
Author(s):  
Shrawan Kumar Trivedi ◽  
Shubhamoy Dey

Purpose Email is an important medium for sharing information rapidly. However, spam, being a nuisance in such communication, motivates the building of a robust filtering system with high classification accuracy and good sensitivity towards false positives. In that context, this paper aims to present a combined classifier technique using a committee selection mechanism, where the main objective is to identify a set of classifiers so that their individual decisions can be combined by a committee selection procedure for accurate detection of spam. Design/methodology/approach For training and testing of the relevant machine learning classifiers, text mining approaches are used in this research. Three data sets (Enron, SpamAssassin and LingSpam) have been used to test the classifiers. Initially, pre-processing is performed to extract the features associated with the email files. In the next step, the extracted features are taken through a dimensionality reduction method where non-informative features are removed. Subsequently, an informative feature subset is selected using genetic feature search. Thereafter, the proposed classifiers are tested on those informative features and the results compared with those of other classifiers. Findings For building the proposed combined classifier, three different studies have been performed. The first study identifies the effect of boosting algorithms on two probabilistic classifiers: Bayesian and Naïve Bayes. In that study, AdaBoost was found to be the best algorithm for performance boosting. The second study was on the effect of different kernel functions on the support vector machine (SVM) classifier, where SVM with a normalized polynomial (NP) kernel was observed to be the best. The last study was on combining classifiers with committee selection, where the committee members were the best classifiers identified by the first study, i.e. Bayesian and Naïve Bayes with AdaBoost, and the committee president was selected from the second study, i.e. SVM with the NP kernel. Results show that combining the identified classifiers to form a committee machine gives excellent accuracy with a low false-positive rate. Research limitations/implications This research is focused on the classification of email spam written in the English language. Only the body (text) parts of the emails have been used; image spam has not been included in this work. The work is restricted to email messages only: other message types, such as short message service (SMS) or multimedia messaging service (MMS), were not part of this study. Practical implications This research proposes a method of dealing with the issues and challenges faced by internet service providers and organizations that use email. The proposed model provides not only better classification accuracy but also a low false-positive rate. Originality/value The proposed combined classifier is a novel classifier designed for accurate classification of email spam.
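The committee mechanism described above, where member classifiers vote and a "president" classifier settles disagreements, can be sketched in plain Python. The per-classifier label lists here are hypothetical stand-ins; the paper's actual Bayesian, Naïve Bayes, and SVM components are not reproduced:

```python
def committee_predict(member_votes, president_votes):
    """Combine member decisions by majority vote; the president breaks ties.

    member_votes: list of per-classifier label lists (0 = ham, 1 = spam)
    president_votes: labels predicted by the committee president classifier
    """
    decisions = []
    for i, president in enumerate(president_votes):
        votes = [m[i] for m in member_votes]
        spam, ham = votes.count(1), votes.count(0)
        if spam > ham:
            decisions.append(1)
        elif ham > spam:
            decisions.append(0)
        else:
            decisions.append(president)  # tie: the president's vote prevails
    return decisions

# Hypothetical outputs from two members and a president on four emails:
members = [[1, 0, 1, 0], [1, 1, 0, 0]]
president = [0, 1, 1, 0]
print(committee_predict(members, president))  # → [1, 1, 1, 0]
```

With an even number of members, as in the paper's two-member committee, the president decides every split vote, which is why its standalone quality matters.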


2021 ◽  
Author(s):  
Rahul B Adhao ◽  
Vinod K Pachghare

Abstract Intrusion detection systems have long been a worthwhile area for researchers, and many have worked on increasing their efficiency. Still, modern intrusion detection systems face many challenges, one of the major ones being control of the false-positive rate. In this paper, we present an efficient soft-computing framework for the classification of an intrusion detection dataset that diminishes the false-positive rate. The proposed processing steps are as follows: the input data are first pre-processed by normalization. Afterward, optimal features are chosen for dimensionality reduction using krill herd optimization; this effective feature selection is used to enhance classification accuracy. A support value is then estimated from the optimally chosen features and, lastly, a support-value-based graph is created for classifying the data as intrusion or normal. Experimental results demonstrate that the presented technique outperforms existing techniques on several performance measures, such as execution time, accuracy, and false-positive rate, and that the intrusion detection model increases the detection rate while decreasing the false-alarm rate.
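The normalization step that opens the pipeline above is typically min-max scaling of each feature column to [0, 1], so that features with large numeric ranges do not dominate the later feature selection. A minimal sketch, assuming min-max normalization (the paper does not name the exact scheme):

```python
def min_max_normalize(rows):
    """Scale each numeric column of a dataset to the [0, 1] range."""
    cols = list(zip(*rows))                 # transpose rows into columns
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        [(v - lo) / (hi - lo) if hi != lo else 0.0   # constant column -> 0.0
         for v, lo, hi in zip(row, mins, maxs)]
        for row in rows
    ]

data = [[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]]
print(min_max_normalize(data))  # → [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```

In practice the minima and maxima are computed on the training split only and reused on the test split, so that test data cannot leak into the preprocessing.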


Author(s):  
Koohong Chung ◽  
Offer Grembek ◽  
Jinwoo Lee ◽  
Keechoo Choi

Two safety management tools have recently been developed for the California Department of Transportation (Caltrans). One is the continuous risk profile (CRP) approach, which is a network screening procedure, and the other is the California Safety Analyst (CASA), a web-based application designed to assist state safety engineers in conducting safety investigations and in documenting their findings. This paper provides a qualitative description of the two tools and summarizes feedback from more than 100 Caltrans safety engineers who attended demonstrations of the web-based application. Findings from both empirical analysis and the survey indicate that CRP can significantly reduce the false positive rate and that CASA can greatly improve the efficiency of traffic safety investigations. However, misunderstandings remain about the relationship between the CRP approach, other methods explained in the Highway Safety Manual, and different safety management tools. The misunderstandings create challenges for the deployment of CRP and CASA in California.


2022 ◽  
pp. 453-479
Author(s):  
Layla Mohammed Alrawais ◽  
Mamdouh Alenezi ◽  
Mohammad Akour

The growth of web-based applications has been tremendous over the last two decades. While these applications bring huge benefits to society, they also suffer from various security threats. Although various techniques exist to ensure the security of web applications, a large number of applications still suffer from a wide variety of attacks, resulting in financial losses. In this article, a security-testing framework for web applications is proposed, with the argument that the security of an application should be tested at every stage of the software development life cycle (SDLC). Security testing is initiated from the requirement engineering phase using a keyword-analysis phase, and the output of the first phase serves as input to the next phase. Different case-study applications indicate that the framework assists in early detection of security threats and in applying appropriate security measures. The results obtained from the implementation of the proposed framework demonstrate a high detection ratio with a low false-positive rate.


2021 ◽  
Author(s):  
Chiaki Kuwada ◽  
Yoshiko Ariji ◽  
Motoki Fukuda ◽  
Tsutomu Kuwada ◽  
Kenichi Gotoh ◽  
...  

Abstract Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of the present study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique, with and without normal data in the learning process, to verify its performance, and to clarify some characteristic appearances probably related to the performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP group) or without cleft palate (CA only group) and 210 patients without CA (normal group) were used to create two learning models on the DetectNet. Models 1 and 2 were developed based on the data with and without normal subjects, respectively, to detect the CAs and classify them into the CA only and CA with CP groups. Model 2 reduced the false-positive rate (1/30) compared with model 1 (12/30). Model 2 performed better than model 1 on almost all measures, although there was no difference in recall for the CA with CP group. The model created in the present study appears to have the potential to detect and classify CAs on panoramic radiographs.


2017 ◽  
Vol 56 (04) ◽  
pp. 308-318 ◽  
Author(s):  
Asli Bostanci ◽  
Murat Turhan ◽  
Selen Bozkurt

Summary Objectives: The goal of this study is to evaluate machine learning methods for classifying the OSA severity of patients with suspected sleep-disordered breathing as normal, mild, moderate, or severe, based on non-polysomnographic variables: 1) clinical data, 2) symptoms, and 3) physical examination. Methods: To produce classification models for OSA severity, five different machine learning methods (Bayesian network, Decision Tree, Random Forest, Neural Networks, and Logistic Regression) were trained, while relevant variables and their relationships were derived empirically from observed data. Each model was trained and evaluated using 10-fold cross-validation; to evaluate the classification performance of all methods, the true-positive rate (TPR), false-positive rate (FPR), positive predictive value (PPV), F-measure, and area under the receiver operating characteristic curve (ROC-AUC) were used. Results: The results of 10-fold cross-validated tests with different variable settings promisingly indicated that the OSA severity of suspected OSA patients can be classified using non-polysomnographic features, with a true-positive rate as high as 0.71 and a false-positive rate as low as 0.15. Moreover, the test results for different variable settings revealed that the accuracy of the classification models improved significantly when physical examination variables were added to the model. Conclusions: The study results showed that machine learning methods can be used to estimate the probabilities of no, mild, moderate, and severe obstructive sleep apnea; such approaches may improve initial OSA screening and help refer only suspected moderate or severe OSA patients to sleep laboratories for the expensive tests.
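For a four-class severity problem like this one, TPR and FPR are usually computed per class in one-vs-rest fashion. A short Python sketch, with hypothetical labels purely for illustration, shows how each class gets its own rate pair:

```python
def per_class_rates(y_true, y_pred, labels):
    """One-vs-rest TPR and FPR for each class label."""
    rates = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t != c and p != c)
        rates[c] = (tp / (tp + fn) if tp + fn else 0.0,   # TPR
                    fp / (fp + tn) if fp + tn else 0.0)   # FPR
    return rates

# Hypothetical predictions for six patients:
truth = ["normal", "mild", "moderate", "severe", "mild", "severe"]
pred  = ["normal", "mild", "mild",     "severe", "mild", "moderate"]
for c, (tpr, fpr) in per_class_rates(truth, pred,
                                     ["normal", "mild", "moderate", "severe"]).items():
    print(f"{c}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

Figures such as "0.71 TPR, 0.15 FPR" in the abstract are then the best values observed across classes and variable settings, not a single aggregate.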


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Fu-Hau Hsu ◽  
Chih-Wen Ou ◽  
Yan-Ling Hwang ◽  
Ya-Ching Chang ◽  
Po-Ching Lin

Web-based botnets are popular nowadays. A Web-based botnet is a botnet whose C&C (Command-and-Control) server and bots use HTTP, the most universal and widely supported network protocol, to communicate with each other. Because attackers can easily hide botnet communication behind the relatively massive volume of HTTP traffic, administrators of network equipment such as routers and switches cannot block such suspicious traffic outright, regardless of cost. Based on the client composition of a Web server and the characteristics of the HTTP responses the server sends to clients, this paper proposes a traffic inspection solution called Web-based Botnet Detector (WBD). WBD is able to detect suspicious C&C servers of HTTP botnets regardless of whether the botnet commands are encrypted or hidden in normal Web pages. More than 500 GB of real network traces collected from 11 backbone routers were used to evaluate our method. Experimental results show that the false-positive rate of WBD is 0.42%.


2020 ◽  
Vol 10 (11) ◽  
pp. 3706 ◽  
Author(s):  
Hossam Faris ◽  
Maria Habib ◽  
Iman Almomani ◽  
Mohammed Eshtay ◽  
Ibrahim Aljarah

Nowadays, smartphones are an essential part of people’s lives and a sign of the contemporary world. Although smartphones bring numerous conveniences, they also form a wide gateway to personal and financial information. In recent years, a substantially increasing rate of malicious efforts to attack smartphone vulnerabilities has been noticed. A serious common threat is the ransomware attack, which locks the system or the users’ data and demands a ransom to decrypt or unlock them. In this article, a framework based on metaheuristics and machine learning is proposed for the detection of Android ransomware. Raw sequences of the applications’ API calls and permissions were extracted to capture the ransomware pattern of behavior and build the detection framework. Then, a hybrid of the Salp Swarm Algorithm (SSA) and the Kernel Extreme Learning Machine (KELM) is modeled, where the SSA is used to search for the best subset of features and to optimize the KELM hyperparameters, while the KELM algorithm is utilized for identifying and classifying the apps as benign or ransomware. The performance of the proposed SSA-KELM exhibits noteworthy advantages on several evaluation measures, including accuracy, recall, true-negative rate, precision, g-mean, and area under the curve, with values of 98% and a false-positive rate of 2%. In addition, it has competitive convergence ability. Hence, the proposed SSA-KELM algorithm represents a promising approach for efficient ransomware detection.
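The SSA-KELM design above follows the wrapper pattern: a metaheuristic proposes candidate feature subsets, and a classifier scores each one. A much-simplified stand-in, random subset search scored by a toy fitness function, sketches that pattern; the salp swarm update rule and KELM itself are deliberately not reproduced, and `toy_score` is an invented illustration:

```python
import random

def wrapper_search(n_features, score_fn, iterations=200, seed=0):
    """Sample random feature subsets and keep the best-scoring one."""
    rng = random.Random(seed)
    best_subset, best_score = None, float("-inf")
    for _ in range(iterations):
        # Candidate subset: each feature included with probability 0.5
        subset = frozenset(i for i in range(n_features) if rng.random() < 0.5)
        if not subset:
            continue  # skip empty candidates
        score = score_fn(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score

# Toy fitness: pretend features 1 and 3 are informative, and every
# extra feature adds a small penalty (mimicking a parsimony term).
def toy_score(subset):
    return sum(1 for f in subset if f in (1, 3)) - 0.1 * len(subset)

subset, score = wrapper_search(6, toy_score)
print(sorted(subset), round(score, 2))
```

A real metaheuristic such as SSA replaces the blind random sampling with guided position updates, but the evaluate-and-keep-best loop is the same.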

