Higgs analysis with quantum classifiers

2021 ◽  
Vol 251 ◽  
pp. 03070
Author(s):  
Vasilis Belis ◽  
Samuel González-Castillo ◽  
Christina Reissel ◽  
Sofia Vallecorsa ◽  
Elías F. Combarro ◽  
...  

We have developed two quantum classifier models for the ttH classification problem, both of which fall into the category of hybrid quantum-classical algorithms for Noisy Intermediate-Scale Quantum (NISQ) devices. Our results, along with other studies, serve as a proof of concept that Quantum Machine Learning (QML) methods can perform similarly to, or better than, conventional ML methods in specific cases with few training samples, even with the limited number of qubits available in current hardware. To use algorithms with a low number of qubits, accommodating the limitations of both simulation hardware and real quantum hardware, we investigated different feature reduction methods and assessed their impact on the performance of both the classical and quantum models. We addressed different implementations of two QML models, representative of the two main approaches to supervised quantum machine learning today: a Quantum Support Vector Machine (QSVM), a kernel-based method, and a Variational Quantum Circuit (VQC), a variational approach.
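The kernel-based branch described above can be illustrated classically: for a simple product-state angle encoding, the state overlap has a closed form, so the resulting fidelity kernel can be computed with NumPy and passed to an ordinary SVM as a precomputed kernel. This is a minimal sketch of the idea, not the paper's actual QSVM; the encoding and the toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def fidelity_kernel(A, B):
    # |<phi(a)|phi(b)>|^2 for a product-state angle encoding:
    # each feature x_i is encoded as cos(x_i)|0> + sin(x_i)|1>,
    # so the state overlap factorises as prod_i cos(a_i - b_i).
    diff = A[:, None, :] - B[None, :, :]          # (nA, nB, d)
    return np.prod(np.cos(diff), axis=2) ** 2     # squared overlap

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 2))           # toy "event" features
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.25).astype(int)

K_train = fidelity_kernel(X, X)                   # Gram matrix
clf = SVC(kernel="precomputed").fit(K_train, y)
print(clf.score(K_train, y))
```

For test data, the kernel must be evaluated between test and training points, `fidelity_kernel(X_test, X)`, before calling `predict`.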

2020 ◽  
Author(s):  
Nalika Ulapane ◽  
Karthick Thiyagarajan ◽  
Sarath Kodagoda

Classification has become a vital task in modern machine learning and artificial intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification, and numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider the case of a given supervised learning classification task that has to be performed using continuous-valued features. It is assumed that an optimal subset of features has already been selected, so no further feature reduction or feature addition is to be carried out. We then attempt to improve the classification performance by passing the given feature set through a transformation that produces a new feature set, which we have named the "Binary Spectrum". Via a case study on Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation's potential for broader usage.
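The abstract does not define the Binary Spectrum transform itself. As a purely hypothetical stand-in, the sketch below expands each continuous feature into a bank of threshold indicator bits and feeds the binarized features to an SVM; the transform, data, and labels are all illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.svm import SVC

def binary_spectrum(X, n_levels=8):
    # Hypothetical stand-in for the paper's transform: expand each
    # continuous feature into a bank of threshold indicator bits.
    lo, hi = X.min(axis=0), X.max(axis=0)
    thresholds = np.linspace(lo, hi, n_levels)        # (n_levels, d)
    bits = X[None, :, :] >= thresholds[:, None, :]    # (n_levels, n, d)
    return bits.transpose(1, 0, 2).reshape(len(X), -1).astype(float)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))                          # toy sensor features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

clf = SVC(kernel="linear").fit(binary_spectrum(X), y)
```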


2021 ◽  
Author(s):  
Mohammad Hassan Almaspoor ◽  
Ali Safaei ◽  
Afshin Salajegheh ◽  
Behrouz Minaei-Bidgoli

Abstract Classification is one of the most important and widely used tasks in machine learning; its purpose is to create a rule, learned from a training set, for assigning data to pre-existing categories. Employed successfully in many scientific and engineering areas, the Support Vector Machine (SVM) is among the most promising classification methods in machine learning. With the advent of big data, many machine learning methods have been challenged by big data characteristics. The standard SVM was proposed for batch learning, in which all data are available at the same time. The SVM has a high time complexity, i.e., increasing the number of training samples intensifies the need for computational resources and memory. Hence, many attempts have been made to adapt the SVM to online learning conditions and large-scale data. This paper focuses on the analysis, identification, and classification of existing methods for SVM compatibility with online conditions and large-scale data. These methods might be employed to classify big data and suggest research areas for future studies. Considering its advantages, the SVM can be among the first options for compatibility with big data and classification of big data. For this purpose, appropriate techniques should be developed for data preprocessing in order to convert data into an appropriate form for learning. The existing frameworks should also be employed for parallel and distributed processing so that SVMs can be made scalable and properly online to be able to handle big data.
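One standard way to make a linear SVM online, in the spirit of the methods surveyed here, is stochastic gradient descent on the hinge loss, trained one mini-batch at a time so the full dataset never has to fit in memory. A minimal sketch with scikit-learn's `SGDClassifier` on a synthetic stream:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Online linear SVM: hinge loss + SGD, updated per mini-batch.
clf = SGDClassifier(loss="hinge", random_state=0)

rng = np.random.default_rng(0)
classes = np.array([0, 1])           # must be declared on the first call
for _ in range(50):                  # simulated stream of mini-batches
    Xb = rng.normal(size=(32, 4))
    yb = (Xb[:, 0] + Xb[:, 1] > 0).astype(int)
    clf.partial_fit(Xb, yb, classes=classes)

X_test = rng.normal(size=(200, 4))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print(clf.score(X_test, y_test))
```

Kernel SVMs need different machinery (e.g., budgeted or approximate-kernel methods); this sketch covers only the linear case.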


2020 ◽  
Author(s):  
Wenjie Liu ◽  
Ying Zhang ◽  
Zhiliang Deng ◽  
Jiaojiao Zhao ◽  
Lian Tong

Abstract As an emerging field that aims to bridge the gap between human activities and computing systems, human-centered computing (HCC) in cloud, edge, and fog environments has had a huge impact on artificial intelligence algorithms. The quantum generative adversarial network (QGAN) is considered one of the quantum machine learning algorithms with great application prospects, and it should also be improved to conform to the human-centered paradigm. The generation process of a QGAN is relatively random, and the generated model does not conform to the human-centered concept, so it is not well suited to real scenarios. To solve these problems, a hybrid quantum-classical conditional generative adversarial network (QCGAN) algorithm is proposed, which is a knowledge-driven human-computer interaction computing mode in the cloud. Stabilizing the generation process and enabling interaction between humans and the computing process are achieved by inputting conditional information into the generator and discriminator. The generator uses a parameterized quantum circuit with an all-to-all connected topology, which facilitates tuning the network parameters during training. The discriminator uses a classical neural network, which effectively avoids the "input bottleneck" of quantum machine learning. Finally, the BAS training set is selected to conduct experiments on a quantum cloud computing platform. The results show that the QCGAN algorithm can effectively converge to the Nash equilibrium point after training and perform human-centered classification generation tasks.
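The "conditional" mechanism described above, in any CGAN-style setup, amounts to appending the condition (e.g., a class label) to both the generator input and the discriminator input. A tiny NumPy shape sketch, with a toy linear "generator" standing in for the parameterized quantum circuit:

```python
import numpy as np

def one_hot(labels, n_classes):
    # Encode integer labels as one-hot condition vectors.
    return np.eye(n_classes)[labels]

rng = np.random.default_rng(0)
batch, latent_dim, n_classes = 4, 8, 2
z = rng.normal(size=(batch, latent_dim))            # generator noise
cond = one_hot(rng.integers(n_classes, size=batch), n_classes)

gen_input = np.hstack([z, cond])                    # condition steers generation
fake = np.tanh(gen_input @ rng.normal(size=(10, 6)))  # toy "generator"
disc_input = np.hstack([fake, cond])                # discriminator sees condition too
```

The QCGAN's quantum generator and classical discriminator replace the toy matrices here; only the conditioning pattern is illustrated.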


2021 ◽  
Author(s):  
Jorge Cabrera Alvargonzalez ◽  
Ana Larranaga Janeiro ◽  
Sonia Perez ◽  
Javier Martinez Torres ◽  
Lucia Martinez Lamas ◽  
...  

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been, and remains, one of the major challenges humanity has faced thus far. Over the past few months, large amounts of information have been collected that are only now beginning to be assimilated. In the present work, the existence of residual information in the massive numbers of rRT-PCRs that tested positive, out of the almost half a million tests performed during the pandemic, is investigated. This residual information is believed to be highly related to a pattern in the number of cycles necessary to detect positive samples as such. Thus, a database of more than 20,000 positive samples was collected, and two supervised classification algorithms (a support vector machine and a neural network) were trained to temporally locate each sample based solely and exclusively on the number of cycles determined in the rRT-PCR of each individual. Finally, the classification results show that the appearance of each wave coincides with the surge of each of the variants present in the region of Galicia (Spain) during the development of the SARS-CoV-2 pandemic, and that the waves are clearly identified by the classification algorithm.
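The task's shape, classifying a sample's epidemic wave from its cycle-threshold (Ct) values alone, can be sketched on synthetic data. The per-wave Ct distributions below are invented for illustration; the paper's real database and labels are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in: three waves, each with an assumed mean Ct value, and
# three Ct measurements (e.g., gene targets) per positive sample.
rng = np.random.default_rng(0)
waves = {0: 24.0, 1: 28.0, 2: 21.0}    # hypothetical per-wave mean Ct
X = np.vstack([rng.normal(m, 1.5, size=(100, 3)) for m in waves.values()])
y = np.repeat(list(waves), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_tr, y_tr)            # SVM temporally locates each sample
print(clf.score(X_te, y_te))
```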


2019 ◽  
Vol 11 (24) ◽  
pp. 2948 ◽  
Author(s):  
Hoang Minh Nguyen ◽  
Begüm Demir ◽  
Michele Dalponte

Tree species classification at the individual tree crown (ITC) level, using remote-sensing data, requires the availability of a sufficient number of reliable reference samples (i.e., training samples) to be used in the learning phase of the classifier. The classification performance for tree species is mainly affected by two issues: (i) an imbalanced distribution of the tree species classes, and (ii) the presence of unreliable samples due to field collection errors, coordinate misalignments, and ITC delineation errors. To address these problems, in this paper we present a weighted Support Vector Machine (wSVM)-based approach for the detection of tree species at the ITC level. The proposed approach initially computes (i) different weights associated with the tree species classes, to mitigate the effect of their imbalanced distribution; and (ii) different weights associated with the training samples according to their importance for the classification problem, to reduce the effect of unreliable samples. Then, in order to exploit these weights in the learning phase of the classifier, a wSVM algorithm is used. The features characterizing the tree species at the ITC level are extracted from both the elevation and the intensity of airborne light detection and ranging (LiDAR) data. Experimental results obtained on two study areas located in the Italian Alps show the effectiveness of the proposed approach.
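Both kinds of weighting described above, per-class weights against imbalance and per-sample weights against unreliable samples, are directly supported by scikit-learn's SVM. The sketch below uses synthetic data and does not reproduce the paper's weight-computation scheme:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy data: 90 samples of class 0, 10 of class 1.
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)), rng.normal(2.5, 1.0, (10, 2))])
y = np.array([0] * 90 + [1] * 10)

# Per-sample weights let unreliable samples count for less;
# here five dubious samples are down-weighted arbitrarily.
sample_w = np.ones(len(y))
sample_w[:5] = 0.2

# class_weight="balanced" counters the class imbalance.
clf = SVC(class_weight="balanced")
clf.fit(X, y, sample_weight=sample_w)
```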


2012 ◽  
Vol 198-199 ◽  
pp. 1333-1337 ◽  
Author(s):  
San Xi Wei ◽  
Zong Hai Sun

Gaussian processes (GPs) are a very promising technology that has been applied to both regression and classification problems. In recent years, models based on Gaussian process priors have attracted much attention in machine learning. Binary (or two-class, C=2) classification using Gaussian processes is a very well-developed method. In this paper, a multi-classification (C>2) method is illustrated, which is based on binary GP classification. A good accuracy can be obtained through this method. Meanwhile, a comparison of decision time and accuracy between this method and the Support Vector Machine (SVM) is made in the experiments.
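Building a C>2 classifier from binary GPs is what scikit-learn's one-vs-rest mode does: it trains one Laplace-approximated binary GP per class and picks the class with the highest predicted probability. A minimal sketch on a standard 3-class dataset (the paper's data and SVM comparison are not reproduced):

```python
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = load_iris(return_X_y=True)      # C = 3 classes

# One binary GP classifier per class, combined one-vs-rest.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0),
                                multi_class="one_vs_rest")
gpc.fit(X, y)
print(gpc.score(X, y))
```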


2014 ◽  
Vol 21 (4) ◽  
pp. 569-605 ◽  
Author(s):  
F. CANAN PEMBE ◽  
TUNGA GÜNGÖR

Abstract In this paper, we study the problem of structural analysis of Web documents, aiming at extracting the sectional hierarchy of a document. In general, a document can be represented as a hierarchy of sections and subsections with corresponding headings and subheadings. We developed two machine learning models: a heading extraction model and a hierarchy extraction model. Heading extraction was formulated as a classification problem, whereas a tree-based learning approach was employed in hierarchy extraction. For this purpose, we developed an incremental learning algorithm based on support vector machines and perceptrons. The models were evaluated in detail with respect to the performance of the heading and hierarchy extraction tasks. For comparison, a baseline rule-based approach was used that relies on heuristics and HTML document object model tree processing. The machine learning approach, which is fully automatic, outperformed the rule-based approach. We also analyzed the effect of document structuring on automatic summarization in the context of Web search. The results of the task-based evaluation on TREC queries showed that structured summaries are superior to unstructured summaries in terms of both accuracy and user ratings, and enable users to determine the relevancy of search results more accurately than search engine snippets.
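Formulating heading extraction as classification means mapping each candidate line to surface features and training a classifier on labeled examples. The features and examples below are hypothetical illustrations, not the paper's actual feature set:

```python
from sklearn.svm import LinearSVC

def features(line):
    # Hypothetical heading cues: length, title case, no final period.
    return [len(line), float(line.istitle()), float(line.endswith("."))]

lines = ["Introduction", "Related Work", "We first describe the corpus.",
         "Methods", "The model is trained with SGD.", "Results"]
labels = [1, 1, 0, 1, 0, 1]            # 1 = heading, 0 = body text

clf = LinearSVC().fit([features(l) for l in lines], labels)
```

The paper's incremental SVM/perceptron algorithm for hierarchy extraction operates on trees and is not sketched here.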


2019 ◽  
Vol 8 (2) ◽  
pp. 4800-4807

Recently, engineers have been concentrating on designing effective prediction models for the rate of student admission in order to raise the educational growth of the nation. Predicting student admission to higher education is a challenging task for any educational organization, and the admission rate is a major risk to the educational society worldwide. Student admission greatly affects the economic, social, academic, and cultural growth of the nation; it also depends on the admission procedures and policies of the educational institutions and on the feedback given by all the stakeholders of the educational sector. Forecasting student admission is therefore a major task for any educational institution seeking to protect the profit and wealth of the organization. This paper analyzes the performance of student admission prediction using machine learning dimensionality reduction algorithms. The Admission Predict dataset from the Kaggle machine learning repository is used for prediction analysis, and the features are reduced by feature reduction methods. The prediction of the chance of admit is achieved in four steps. First, the correlations between the dataset attributes are found and depicted as a histogram. Second, the most highly correlated features, which contribute directly to predicting the chance of admit, are identified. Third, the Admission Predict dataset is subjected to dimensionality reduction methods: principal component analysis (PCA), Sparse PCA, Incremental PCA, Kernel PCA, and Mini-Batch Sparse PCA. Fourth, the dimensionality-reduced dataset is used to analyze and compare the mean squared error (MSE), mean absolute error (MAE), and R2 score of each method.
The implementation is done in Python in the Anaconda Spyder IDE. Experimental results show that CGPA, GRE Score, and TOEFL Score are the most highly correlated features in predicting the chance of admit. The performance analysis shows that Incremental PCA achieved the most effective prediction of the chance of admit, with a minimum MSE of 0.09, an MAE of 0.24, and a reasonable R2 score of 0.26.
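The evaluation loop described above, reduce the features with several PCA variants, regress the target on the reduced features, and compare MSE/MAE/R2, can be sketched on synthetic data (the Kaggle dataset is not bundled here, and only three of the five variants are shown):

```python
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA, IncrementalPCA, KernelPCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Admission Predict features and target.
X, y = make_regression(n_samples=400, n_features=7, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, reducer in [("PCA", PCA(n_components=3)),
                      ("IncrementalPCA", IncrementalPCA(n_components=3)),
                      ("KernelPCA", KernelPCA(n_components=3, kernel="rbf"))]:
    Z_tr = reducer.fit_transform(X_tr)           # reduce features
    Z_te = reducer.transform(X_te)
    pred = LinearRegression().fit(Z_tr, y_tr).predict(Z_te)
    results[name] = (mean_squared_error(y_te, pred),
                     mean_absolute_error(y_te, pred),
                     r2_score(y_te, pred))
    print(name, results[name])
```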


2012 ◽  
Vol 10 (10) ◽  
pp. 547
Author(s):  
Mei Zhang ◽  
Gregory Johnson ◽  
Jia Wang

A takeover success prediction model aims at predicting the probability that a takeover attempt will succeed, using publicly available information at the time of the announcement. We perform a thorough study using machine learning techniques to predict takeover success. Specifically, we model takeover success prediction as a binary classification problem, which has been widely studied in the machine learning community. Motivated by recent advances in machine learning, we empirically evaluate and analyze many state-of-the-art classifiers, including logistic regression, artificial neural networks, support vector machines with different kernels, decision trees, random forests, and AdaBoost. The experiments validate the effectiveness of applying machine learning to takeover success prediction, and we found that the support vector machine with linear kernel and AdaBoost with stump weak classifiers perform best for the task. This result is consistent with the general observations about these two approaches.
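A comparison of this shape can be sketched with scikit-learn: fit several classifier families on the same binary task and compare held-out accuracy. The data is synthetic, standing in for the paper's takeover features; AdaBoost's default weak learner is a depth-1 decision stump, matching the configuration named above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Binary "takeover succeeds?" stand-in task with synthetic features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "linear SVM": SVC(kernel="linear"),
    "AdaBoost (stumps)": AdaBoostClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print(scores)
```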


Author(s):  
Sadhana Patidar ◽  
Priyanka Parihar ◽  
Chetan Agrawal

Nowadays, with growing applications over the internet, security issues over networks are increasing. Many security applications are designed to cope with such concerns, but more attention is still required to improve both speed and accuracy. With the advancement of technologies, new threats and attacks also evolve in networks, so it is necessary to design detection systems that can handle new threats. One of the network security tools is the intrusion detection system, which is used to detect malicious data packets; machine learning tools are also used to improve the efficiency of network-based intrusion detection systems. In this paper, an intrusion detection system is proposed with an application of machine learning tools. The proposed model integrates feature reduction, affinity clustering, and a multilevel ensemble Support Vector Machine. The model's performance is analyzed on two datasets, NSL-KDD and UNSW-NB15, and achieves approximately 12% greater efficiency than other existing work.
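The overall shape of such a pipeline, feature reduction followed by an ensemble of SVMs voting on each traffic record, can be sketched as below. The affinity clustering stage is omitted, the data is synthetic rather than NSL-KDD or UNSW-NB15, and bagging stands in for the paper's multilevel ensemble scheme:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for network traffic records (label: attack vs normal).
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature reduction, then an ensemble of SVMs votes on each record.
ids = make_pipeline(PCA(n_components=8),
                    BaggingClassifier(SVC(), n_estimators=5, random_state=0))
ids.fit(X_tr, y_tr)
print(ids.score(X_te, y_te))
```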

