A Linearly Involved Generalized Moreau Enhancement of ℓ2,1-Norm with Application to Weighted Group Sparse Classification

Algorithms ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 312
Author(s):  
Yang Chen ◽  
Masao Yamagishi ◽  
Isao Yamada

This paper proposes a new group-sparsity-inducing regularizer to approximate the ℓ2,0 pseudo-norm. The regularizer is nonconvex and can be seen as a linearly involved generalized Moreau enhancement of the ℓ2,1-norm, yet the overall convexity of the corresponding group-sparsity-regularized least squares problem can still be achieved. The model can handle general group configurations such as weighted group sparse problems and can be solved by a proximal splitting algorithm. Among the applications, considering that the bias of a convex regularizer may lead to incorrect classification results, especially for unbalanced training sets, we apply the proposed model to the (weighted) group sparse classification problem. The proposed classifier can exploit the label, similarity, and locality information of samples, and it suppresses the bias of convex-regularizer-based classifiers. Experimental results demonstrate that the proposed classifier improves on convex ℓ2,1-regularizer-based methods, especially when the training data set is unbalanced. This paper enhances the potential applicability and effectiveness of nonconvex regularizers within the framework of convex optimization.
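For illustration, the following is a minimal sketch of group (block) soft-thresholding, i.e., the proximal operator of the weighted ℓ2,1-norm that serves as the convex baseline the proposed enhancement builds upon; the GME regularizer and the paper's full proximal splitting iteration are not reproduced here.

```python
import numpy as np

def prox_weighted_l21(X, step, weights):
    """Group soft-thresholding: proximal operator of the weighted l2,1-norm.

    X       : (n_groups, group_size) array, one row per group.
    step    : proximal step size (gamma > 0).
    weights : (n_groups,) nonnegative group weights.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)           # per-group l2 norms
    thresh = step * weights.reshape(-1, 1)                     # per-group thresholds
    scale = np.maximum(1.0 - thresh / np.maximum(norms, 1e-12), 0.0)
    return scale * X                                           # shrink each group toward zero

# Toy usage: the large groups survive the shrinkage, the small one is zeroed out.
X = np.array([[3.0, 4.0], [0.1, 0.2], [1.0, 0.0]])
print(prox_weighted_l21(X, step=0.5, weights=np.ones(3)))
```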

2018 ◽  
Vol 13 (3) ◽  
pp. 408-428 ◽  
Author(s):  
Phu Vo Ngoc

We have surveyed many significant approaches over the years because sentiment classification makes crucial contributions that can be applied in everyday life, such as political activities, commodity production, and commercial activities. We propose a novel model using Latent Semantic Analysis (LSA) and a Dennis Coefficient (DNC) for big-data sentiment classification in English. Many LSA vectors (LSAVs) are successfully reformed by using the DNC. We use the DNC and the LSAVs to classify the 11,000,000 documents of our testing data set against the 5,000,000 documents of our training data set in English. This novel model uses many sentiment lexicons of our basis English sentiment dictionary (bESD). We have tested the proposed model in both a sequential environment and a distributed network system; the results of the sequential system are not as good as those of the parallel environment. We achieved 88.76% accuracy on the testing data set, which is better than the accuracies of many previous semantic-analysis models. We have also compared the novel model with previous models, and the experimental results of our proposed model are better than those of the previous models. Many different fields can use the results of the novel model in commercial applications and surveys of sentiment classification.
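As an illustration of the LSA step, here is a minimal scikit-learn sketch that builds LSA vectors and classifies a test document by its nearest training document; the corpus is a toy placeholder, and cosine similarity stands in for the Dennis Coefficient used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

train_docs = ["the film was wonderful", "a terrible boring movie",
              "great acting and a great plot", "awful script, very dull"]
train_labels = ["positive", "negative", "positive", "negative"]
test_docs = ["wonderful film with great acting"]

# LSA: TF-IDF followed by truncated SVD yields low-rank document vectors (LSAVs).
tfidf = TfidfVectorizer()
svd = TruncatedSVD(n_components=2, random_state=0)
V_train = svd.fit_transform(tfidf.fit_transform(train_docs))
V_test = svd.transform(tfidf.transform(test_docs))

# Classify each test LSAV by its most similar training LSAV
# (cosine similarity is a stand-in for the Dennis Coefficient).
sims = cosine_similarity(V_test, V_train)
print([train_labels[i] for i in np.argmax(sims, axis=1)])
```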


2019 ◽  
Vol 15 (1) ◽  
pp. 155014771882052 ◽  
Author(s):  
Bowen Qin ◽  
Fuyuan Xiao

Due to its efficiency in handling uncertain information, Dempster–Shafer evidence theory has become the most important tool in many information fusion systems. However, how to determine the basic probability assignment, which is the first step in evidence theory, is still an open issue. In this article, a new method integrating interval number theory and the k-means++ clustering method is proposed to determine the basic probability assignment. First, k-means++ clustering is used to calculate the lower and upper bound values of the interval numbers from training data. Then, the differentiation degree, based on the distance and similarity of interval numbers between the test sample and the constructed models, is defined to generate the basic probability assignment. Finally, Dempster's combination rule is used to combine the multiple basic probability assignments into the final basic probability assignment. Experiments on the Iris data set, which is widely used in classification problems, illustrate that the proposed method is effective for determining the basic probability assignment and for classification, with the classification accuracy reaching 96.7%.
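The final combination step can be illustrated by a minimal sketch of Dempster's rule; the interval-number and k-means++ construction of the basic probability assignments is not reproduced, and the masses below are invented for a toy Iris frame.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 : dict mapping frozenset focal elements to masses summing to 1.
    Returns the combined BPA, or raises if the evidence fully conflicts.
    """
    combined, conflict = {}, 0.0
    for B, p in m1.items():
        for C, q in m2.items():
            A = B & C
            if A:
                combined[A] = combined.get(A, 0.0) + p * q
            else:
                conflict += p * q            # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Toy usage on the Iris frame {setosa, versicolor, virginica}.
m1 = {frozenset({"setosa"}): 0.6, frozenset({"setosa", "versicolor"}): 0.4}
m2 = {frozenset({"setosa"}): 0.5, frozenset({"versicolor", "virginica"}): 0.5}
print(dempster_combine(m1, m2))
```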


2019 ◽  
Vol 45 (2) ◽  
pp. 267-292 ◽  
Author(s):  
Akiko Eriguchi ◽  
Kazuma Hashimoto ◽  
Yoshimasa Tsuruoka

Neural machine translation (NMT) has shown great success as a new alternative to the traditional statistical machine translation model in multiple languages. Early NMT models are based on sequence-to-sequence learning, which encodes a sequence of source words into a vector space and generates another sequence of target words from the vector. In those NMT models, sentences are simply treated as sequences of words without any internal structure. In this article, we focus on the role of the syntactic structure of source sentences and propose a novel end-to-end syntactic NMT model, which we call a tree-to-sequence NMT model, extending a sequence-to-sequence model with the source-side phrase structure. Our proposed model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. We have empirically compared the proposed model with sequence-to-sequence models in various settings on Chinese-to-Japanese and English-to-Japanese translation tasks. Our experimental results suggest that the use of syntactic structure can be beneficial when the training data set is small, but is not as effective as using a bi-directional encoder. As the size of the training data set increases, the benefits of using a syntactic tree tend to diminish.
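A hedged sketch of the core attention idea, assuming dot-product scoring and illustrative dimensions (not the authors' implementation): the decoder attends jointly over word and phrase annotations, so the context vector can draw on either level of the source-side structure.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8                                   # hidden size (illustrative)
word_states = torch.randn(5, d)         # encoder states for 5 source words
phrase_states = torch.randn(3, d)       # encoder states for 3 source phrases (tree nodes)
decoder_state = torch.randn(1, d)       # current decoder hidden state

# Tree-to-sequence attention: score words and phrases together so the decoder
# softly aligns with both levels of the source sentence.
annotations = torch.cat([word_states, phrase_states], dim=0)   # (8, d)
scores = annotations @ decoder_state.T                         # dot-product scores
alpha = F.softmax(scores, dim=0)                               # attention weights
context = (alpha * annotations).sum(dim=0)                     # context vector
print(alpha.squeeze(), context.shape)
```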


Electronics ◽  
2019 ◽  
Vol 8 (7) ◽  
pp. 736
Author(s):  
Mondol ◽  
Lee

A successful Hearing-Aid Fitting (HAF) is more than just selecting an appropriate Hearing Aid (HA) device for a patient with Hearing Loss (HL). The initial fitting is given by the prescription based on the user's hearing loss; however, it is often necessary for the audiologist to readjust some parameters to satisfy the user's demands. Therefore, in this paper, we concentrate on a new application of a Neural Network (NN) combined with a Transfer Learning (TL) strategy to develop a fitting algorithm from the prescription database for hearing loss and the readjusted gains, minimizing the gap in fitting satisfaction. As prior information, we generated the data set from two popular hearing-aid fitting software packages, fed the training data to our proposed model, and verified the performance of the architecture. Considering real-life circumstances, where numerous fitting records may not always be accessible, we first investigated the minimum number of fitting records required for sufficient training. After that, we evaluated the performance of the proposed algorithm in two phases: (a) the NN with refined hyperparameters showed enhanced performance compared to a state-of-the-art DNN approach, and (b) the TL approach broadly boosted the performance of the NN algorithm. Altogether, our model provides a pragmatic and promising tool for HAF.
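A minimal sketch of the transfer-learning strategy under assumed input/output shapes (8 audiogram frequencies in, 8 gains out, both invented): pretrain a small regression network on abundant prescription-rule data, then fine-tune only the output layer on the few readjusted fitting records available for a user.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Assumed shapes: audiogram thresholds at 8 frequencies in, gains at 8 frequencies out.
net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
loss_fn = nn.MSELoss()

def fit(model, X, y, params, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Phase 1: pretrain on abundant prescription-rule data (synthetic placeholders here).
X_rule, y_rule = torch.randn(1000, 8), torch.randn(1000, 8)
fit(net, X_rule, y_rule, net.parameters())

# Phase 2: transfer -- freeze the feature layer and fine-tune only the output
# layer on a handful of readjusted fitting records.
for p in net[0].parameters():
    p.requires_grad = False
X_fit, y_fit = torch.randn(20, 8), torch.randn(20, 8)
fit(net, X_fit, y_fit, net[2].parameters(), epochs=100)
```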


2021 ◽  
Author(s):  
Nguyen Ha Huy Cuong

Abstract In agriculture, a timely and accurate estimate of ripeness in the orchard improves the post-harvest process. Choosing fruits based on their maturity stages can reduce storage costs and increase market returns. In addition, estimating fruit ripeness from detected input and output indicators has had practical effects in the harvesting process, as well as in determining the amount of water needed for irrigation and the appropriate amount of fertilizer for the end of the season. In this paper, we propose a technical solution for a model to detect persimmon and green grapefruit fruit at agricultural farms in Vietnam. An aggregation model and a transfer learning method are used. The proposed model contains two object-detection sub-models, and the decision model consists of the pre-processed model, the transfer model, and the corresponding aggregation model. An improved YOLO algorithm, trained on more than one hundred object types over a total of 500,000 images from the COCO image data set, is used as the preprocessing model. The aggregation model and the transfer learning method are also used as an initial step to train the model transferred by the transfer learning technique; only images are used for transfer-model training. Finally, the aggregation model selects the best results from the pre-trained model and the transfer model to make decisions. Using our proposed model reduces both the size of the training data set that must be analyzed and the training time. The accuracy of the model union is 98.20%. The classifier was tested on a data set of 10,000 images per class, achieving a sensitivity of 98.2%, a specificity of 97.2%, an accuracy of 96.5%, and 0.98 in training for all grades.
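A hedged transfer-learning sketch using a COCO-pretrained torchvision detector (torchvision ≥ 0.13 assumed) as a stand-in for the improved YOLO preprocessing model; the class labels and count are illustrative, not taken from the paper.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# A COCO-pretrained detector plays the role of the preprocessing model;
# only the box-predictor head is replaced for the fruit classes.
num_classes = 3   # background + persimmon + green grapefruit (assumed labels)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Fine-tuning would then feed orchard images and box annotations through the
# standard torchvision detection training recipe (omitted here).
```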


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2767
Author(s):  
Wenqiong Zhang ◽  
Yiwei Huang ◽  
Jianfei Tong ◽  
Ming Bao ◽  
Xiaodong Li

Low-frequency multi-source direction-of-arrival (DOA) estimation has been challenging for micro-aperture arrays. Deep learning (DL)-based models have been introduced to this problem. Generally, existing DL-based methods formulate DOA estimation as a multi-label multi-classification problem. However, the accuracy of these methods is limited by the number of grid points, and their performance is overly dependent on the training data set. In this paper, we propose an off-grid DL-based DOA estimation method. The backbone is based on circularly fully convolutional networks (CFCN), trained on a data set labeled by space-frequency pseudo-spectra, and provides on-grid DOA proposals. A regressor is then developed to estimate the precise DOAs from the corresponding proposals and features. In this framework, spatial phase features are extracted by circular convolution. The improvement in spatial resolution is converted into an increase in feature dimensionality by the rotating convolutional networks. This model ensures that the DOA estimates at different sub-bands have the same interpretation ability and effectively reduces the number of network model parameters. Simulation and semi-anechoic chamber experiment results show that CFCN-based DOA estimation is superior to existing methods in terms of generalization ability, resolution, and accuracy.
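A speculative sketch of the two key ingredients, circular convolution over the angular grid and an off-grid regression head; layer sizes, grid resolution, and feature dimensions are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class CircularDOANet(nn.Module):
    """Toy CFCN-style model: circular 1-D convolutions over an angular grid
    produce per-grid-cell DOA proposal scores plus an off-grid offset."""
    def __init__(self, n_feat=4, n_grid=36):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_feat, 16, kernel_size=3, padding=1, padding_mode="circular"),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1, padding_mode="circular"),
            nn.ReLU(),
        )
        self.cls_head = nn.Conv1d(16, 1, kernel_size=1)   # on-grid proposal scores
        self.reg_head = nn.Conv1d(16, 1, kernel_size=1)   # off-grid offset per cell

    def forward(self, x):                                  # x: (batch, n_feat, n_grid)
        h = self.backbone(x)
        probs = torch.sigmoid(self.cls_head(h))            # pseudo-spectrum
        offsets = torch.tanh(self.reg_head(h)) * 0.5       # offset in grid-cell units
        return probs.squeeze(1), offsets.squeeze(1)

x = torch.randn(2, 4, 36)              # two samples, 4 phase features, 36-cell grid
probs, offsets = CircularDOANet()(x)
print(probs.shape, offsets.shape)      # (2, 36) each
```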


Author(s):  
Yadala Sucharitha ◽  
Y. Vijayalata ◽  
V. Kamakshi Prasad

Introduction: In the present scenario, social media networks play a significant role in sharing information between individuals. This includes information about news and events that are currently occurring in the real world. Anticipating election results through social media is becoming a fascinating research topic. In this article, we propose a strategy to anticipate election results by combining sub-event discovery and sentiment analysis in micro-blogs to analyze and visualize the political inclinations uncovered by those social media users.

Methodology: This approach discovers and investigates sentiment data from micro-blogs to anticipate the popularity of contestants. In general, many organizations and media houses conduct pre-poll reviews and gather experts' perspectives to anticipate the outcome of an election; our model instead uses Twitter data, gathering tweets about the contestants and analyzing their sentiment to predict the result of the election.

Results: The number of seats won by the first, second, and third parties in the 2019 AP Assembly Election was determined from the PSSs of these parties by means of Equations (2), (3), and (4), respectively. Table 2 shows the actual election results alongside our model's predictions, and these outcomes are very close to the actual results. We used an SVM with 15-fold cross-validation for sentiment-polarity classification on our training set, which gives a precision of 94.2%. There are 7,500 tuples in our training data set, with 3,750 positive tweets and 3,750 negative tweets.

Conclusions: Our results show that the proposed model can accurately forecast the election results with an accuracy of 94.2% over the given baselines. The experimental outcomes are very close to the actual election results, and comparison with the conventional strategies used by various survey agencies for exit polls showed that social media data can forecast with better accuracy.

Discussion: In the future, we would like to extend this work to other areas and countries where Twitter is gaining popularity as a political campaigning tool and where politicians and people are turning to micro-blogs for political communication and information. We would also extend this research to fields other than general elections, and from politicians to state organizations.
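A minimal sketch of the sentiment-polarity classification step with scikit-learn; the feature matrix is a random placeholder for a reduced subset of the labeled tweets, which in practice would be TF-IDF or embedding features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder features and balanced positive/negative labels, as in the paper.
X = rng.normal(size=(750, 50))
y = np.array([1] * 375 + [0] * 375)

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=15)   # 15-fold cross-validation
print(scores.mean())
```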


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Feng Hu ◽  
Xiao Liu ◽  
Jin Dai ◽  
Hong Yu

The classification problem for imbalanced data has received increasing attention. So far, many significant methods have been proposed and applied in many fields, but more efficient methods are still needed. Although the hypergraph is an efficient tool for knowledge discovery, it may not be powerful enough to deal with data in the boundary region. In this paper, the neighborhood hypergraph is presented, combining rough set theory and the hypergraph. After that, a novel classification algorithm for imbalanced data based on the neighborhood hypergraph is developed, which is composed of three steps: initialization of hyperedges, classification of the training data set, and substitution of hyperedges. In 10-fold cross-validation experiments on 18 data sets, the proposed algorithm achieved higher average accuracy than the compared methods.
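For orientation, a minimal sketch of the δ-neighborhood relation from rough set theory on which a neighborhood hypergraph can be built; the paper's hyperedge initialization and substitution steps are not reproduced.

```python
import numpy as np

def delta_neighborhoods(X, delta):
    """For each sample, return the indices of samples within distance delta.
    Each neighborhood can be read as a hyperedge over the training samples."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return [np.flatnonzero(row <= delta) for row in dists]

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
print(delta_neighborhoods(X, delta=0.2))   # the first two points share a neighborhood
```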


2019 ◽  
Vol 28 (1) ◽  
pp. 13-23
Author(s):  
Jarosław Kurek ◽  
Joanna Aleksiejuk-Gawron ◽  
Izabella Antoniuk ◽  
Jarosław Górski ◽  
Albina Jegorowa ◽  
...  

In this paper we introduce an enhanced drill wear recognition method, based on a classifier ensemble obtained using transfer learning and data augmentation. Red, green, and yellow classes are used to describe the current drill state. The first corresponds to the case when the drill should be immediately replaced, the second denotes a tool that is still in good condition, and the final class refers to the case when a drill is suspected of being worn out and a human expert evaluation would be required. The proposed algorithm uses three different pretrained network models and adjusts them to the drill wear classification problem. To ensure satisfactory results, each of the methods used was required to achieve accuracy above 90% on the given classification task. The final evaluation is obtained by voting of all three classifiers. Since the initial data set was small (242 instances), data augmentation was used to artificially increase the total number of drill-hole images. The experiments performed confirmed that the presented approach can achieve high accuracy, even with such a limited set of training data.
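A hedged sketch of the voting step over three fine-tuned pretrained CNNs; ResNet backbones are assumed stand-ins for the paper's models (torchvision ≥ 0.13 assumed), and the fine-tuning loop on the augmented drill-hole images is omitted.

```python
import torch
import torchvision.models as models

def adapt(backbone, n_classes=3):
    """Replace the final layer of a pretrained backbone for the 3 drill-state classes."""
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, n_classes)
    return backbone

# Three different pretrained architectures, each adapted via transfer learning.
ensemble = [adapt(models.resnet18(weights="DEFAULT")),
            adapt(models.resnet34(weights="DEFAULT")),
            adapt(models.resnet50(weights="DEFAULT"))]

def predict_by_vote(x):
    """Majority vote over the ensemble; x is a batch of normalized images."""
    votes = torch.stack([m(x).argmax(dim=1) for m in ensemble])   # (3, batch)
    return torch.mode(votes, dim=0).values

x = torch.randn(2, 3, 224, 224)        # two placeholder images
for m in ensemble:
    m.eval()
with torch.no_grad():
    print(predict_by_vote(x))
```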


2019 ◽  
Vol 20 (9) ◽  
pp. 755-765
Author(s):  
Arshpreet Kaur ◽  
Karan Verma ◽  
Amol P. Bhondekar ◽  
Kumar Shashvat

Background: Deciphering EEG (electroencephalography) to locate inter-ictal and ictal discharges, in order to support the diagnosis of epilepsy and locate the seizure focus, is a critical task. The aim of this work was to find out how an ensemble model distinguishes between two different sets of problems, group 1: inter-ictal vs. ictal, and group 2: controlled vs. inter-ictal, using approximate entropy as a parameter.

Methods: This work addresses the classification problem for two groups. Group 1, "inter-ictal vs. ictal", includes case 1 (C-E) and case 2 (D-E); Group 2, "activity from controlled vs. inter-ictal activity", considers four cases: case 3 (A-C), case 4 (B-C), case 5 (A-D), and case 6 (B-D). The DWT (Discrete Wavelet Transform) was used to divide the EEG into sub-bands, and approximate entropy was extracted from all five sub-bands of the EEG for each case. A bagged SVM was used to classify the different groups considered.

Results: The highest accuracy for Group 1 using the bagged SVM ensemble model was observed for case 1: 96.83% with testing data, similar to the 97% achieved with training data. For case 2 (D-E), 93.92% accuracy with training data and 84.83% with testing data were obtained. For Group 2, there was a large disparity between the SVM and the bagged ensemble model: testing accuracies of 76%, 81.66%, 72.835%, and 71.16% were obtained for case 3, case 4, case 5, and case 6, respectively, while on the training data set accuracies of 92.87%, 91.74%, 92%, and 92.64% were attained. The results obtained by the SVM for Group 2 showed a large gap from the highest accuracy achieved by the bagged SVM for both the training and the test data.

Conclusion: The bagged ensemble model outperformed the SVM model in every case, by a large margin on both training and test data sets for Group 2 and marginally for Group 1.
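A minimal sketch of the feature-extraction and classification pipeline, assuming a db4 wavelet at level 4 (giving five sub-bands) and toy placeholder signals: approximate entropy per DWT sub-band feeds a bagged SVM.

```python
import numpy as np
import pywt
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal, r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)  # Chebyshev distance
        c = (d <= r).mean(axis=1)
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

def features(signal, wavelet="db4", level=4):
    """Approximate entropy of each of the five DWT sub-bands (A4, D4..D1)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [approximate_entropy(c) for c in coeffs]

# Toy EEG segments: a noisy class vs. an oscillatory class (placeholders only).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
X = np.array([features(rng.normal(size=256)) for _ in range(20)] +
             [features(np.sin(40 * np.pi * t) + 0.1 * rng.normal(size=256)) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = BaggingClassifier(SVC(), n_estimators=10, random_state=0).fit(X, y)
print(clf.score(X, y))
```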

