COMPACT AND HYBRID FEATURE DESCRIPTION FOR BUILDING EXTRACTION

Author(s):  
Z. Li ◽  
Y. Liu ◽  
Y. Hu ◽  
P. Li ◽  
Y. Ding

Building extraction from aerial orthophotos is crucial for a variety of applications. Deep learning has been shown to address building extraction with high accuracy and robustness; however, training such a classifier requires a large number of samples. For accurate, semi-interactive labelling, the quality of the feature description is crucial, as it has a significant effect on classification accuracy. In this paper, we propose a compact and hybrid feature description method that guarantees desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, by benefiting from binary description and making full use of the color channels, the descriptor is not only computationally frugal but also more accurate than SURF for building extraction.
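A descriptor built from binary intensity tests can be sketched in a few lines. This is a hypothetical, BRIEF-style approximation of the idea, not the paper's actual four test sets: it samples random pixel pairs across the color channels of a patch and packs the comparison results into a compact bit string.

```python
import numpy as np

def binary_descriptor(patch, n_tests=64, seed=0):
    """BRIEF-style binary description of an H x W x 3 patch.

    Illustrative sketch: the paper's 4 sets of binary intensity tests
    are approximated by random intensity-pair comparisons that also
    draw on all color channels.
    """
    rng = np.random.default_rng(seed)
    h, w, c = patch.shape
    ys = rng.integers(0, h, size=(n_tests, 2))   # row coords of each pair
    xs = rng.integers(0, w, size=(n_tests, 2))   # column coords
    ch = rng.integers(0, c, size=n_tests)        # color channel per test
    bits = patch[ys[:, 0], xs[:, 0], ch] < patch[ys[:, 1], xs[:, 1], ch]
    return np.packbits(bits.astype(np.uint8))    # 64 tests -> 8 bytes

patch = np.arange(16 * 16 * 3, dtype=np.uint8).reshape(16, 16, 3)
desc = binary_descriptor(patch)
```

Because the description is a bit string, matching two descriptors reduces to a cheap Hamming distance, which is where the computational frugality comes from.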

2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text that determines the viewpoints of users with respect to sentimental topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinions on any topic in the form of tweets. These tweets can be examined with various sentiment classification methods to determine user opinion. Traditional sentiment analysis methods use manually extracted features for opinion classification; manual feature extraction is a complicated task because it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representational capacity than traditional methods. Objective: The main aim of this paper is to improve sentiment classification accuracy while reducing computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bi-directional long short-term memory network is introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets, and statistical analysis validates its efficacy. Conclusion: Sentiment classification accuracy can be improved by building well-designed hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
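The CNN-plus-BiLSTM hybrid described here has a standard shape: convolutions extract local n-gram features from embeddings, and a bidirectional LSTM models their sequence. The following PyTorch sketch shows that wiring with illustrative layer sizes; none of the dimensions are taken from the paper.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Illustrative CNN + BiLSTM sentiment classifier (sizes assumed)."""

    def __init__(self, vocab_size=5000, embed_dim=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, 16, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(32, n_classes)  # 2 directions x 16 hidden units

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.embed(tokens)                        # (batch, seq, embed)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # conv over time axis
        x = x.transpose(1, 2)                         # back to (batch, seq, feat)
        out, _ = self.lstm(x)                         # BiLSTM over conv features
        return self.fc(out[:, -1])                    # last timestep -> logits

logits = CNNBiLSTM()(torch.randint(0, 5000, (4, 20)))
```

The design choice behind such hybrids is complementary: the CNN is cheap and captures local phrase patterns, while the BiLSTM captures longer-range ordering that a CNN alone would miss.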


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Sunil Kumar Prabhakar ◽  
Dong-Ok Won

To unlock the information present in clinical descriptions, automatic medical text classification is highly useful in natural language processing (NLP). Machine learning techniques are quite effective for medical text classification tasks; however, they require extensive human effort to create labeled training data. For clinical and translational research, a huge quantity of detailed patient information, such as disease status, lab tests, medication history, side effects, and treatment outcomes, has been collected in electronic format, and it serves as a valuable data source for further analysis. Processing this volume of medical text efficiently is therefore a considerable challenge. In this work, a medical text classification paradigm using two novel deep learning architectures is proposed to mitigate the human effort. In the first approach, a quad-channel hybrid long short-term memory (QC-LSTM) deep learning model is implemented utilizing four channels; in the second, a hybrid bidirectional gated recurrent unit (BiGRU) deep learning model with multihead attention is developed and implemented. The proposed methodology is validated on two medical text datasets with a comprehensive analysis. The best classification accuracy, 96.72%, is obtained with the proposed QC-LSTM model, while the proposed hybrid BiGRU model reaches 95.76%.
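The second architecture, a BiGRU with multihead attention, can be sketched as follows in PyTorch. This is a hypothetical illustration of the general pattern only; the vocabulary size, hidden sizes, head count, and pooling are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    """Illustrative hybrid BiGRU + multihead-attention classifier."""

    def __init__(self, vocab=8000, embed=64, hidden=32, heads=4, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.gru = nn.GRU(embed, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens):
        x = self.embed(tokens)
        h, _ = self.gru(x)            # (batch, seq, 2 * hidden)
        a, _ = self.attn(h, h, h)     # self-attention over GRU states
        return self.fc(a.mean(dim=1))  # mean-pool attended states -> logits

logits = BiGRUAttention()(torch.randint(0, 8000, (2, 12)))
```

Attention over the recurrent states lets the classifier weight the clinically salient tokens (e.g. a drug name or lab result) instead of relying only on the final hidden state.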


2021 ◽  
Author(s):  
Tong Guo

In industrial NLP applications, manually labeled data contains a certain amount of noise. We present a simple method to find the noisy examples and relabel them manually, collecting the correction information in the process. We then present a novel method to incorporate this human correction information into a deep learning model: humans know how to correct noisy data, so the correction information can be injected into the model. We experiment on our own manually labeled text classification dataset, in which we relabeled the noisy data for our industrial application. The results show that our method improves classification accuracy from 91.7% to 92.5%, where the 91.7% baseline, BERT trained on the corrected dataset, is hard to surpass.
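The find-relabel-collect loop described here can be sketched in plain Python. Everything below is a hypothetical stand-in: `human_relabel` represents the manual correction step, and the stored `(text, noisy label, corrected label)` triples are the "correction information" that would later be fed to the model.

```python
def find_noisy(examples, model_predict):
    """Flag examples whose stored label disagrees with the current model."""
    return [ex for ex in examples if model_predict(ex["text"]) != ex["label"]]

def apply_corrections(examples, human_relabel):
    """Relabel examples in place and keep each correction as a triple."""
    corrections = []
    for ex in examples:
        fixed = human_relabel(ex["text"], ex["label"])
        if fixed != ex["label"]:
            corrections.append((ex["text"], ex["label"], fixed))
            ex["label"] = fixed
    return corrections

# Toy data; the lambda stands in for a human annotator fixing label noise.
data = [{"text": "great product", "label": 0},
        {"text": "terrible", "label": 0}]
fixes = apply_corrections(
    data, lambda text, label: 1 if "great" in text else label)
```

After this loop, `data` holds the corrected dataset (the 91.7% baseline's training set), while `fixes` holds the extra correction signal the paper's method additionally injects into the model.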



2020 ◽  
Author(s):  
Noor Ayesha ◽  
Saleha Yurf ◽  
Syed Mohammad Mehmood Abbas ◽  
Ali Haider Bangash ◽  
Adil Baloch ◽  
...  

NAFLD is reported to be the only hepatic ailment whose prevalence is increasing concurrently with both obesity and T2DM. In the wake of the massive strain on global health resources due to the COVID-19 pandemic, NAFLD is bound to be neglected and shelved. Abdominal ultrasonography, used for NAFLD screening diagnosis, carries a high monetary cost. We present a deep learning model that requires only easy-to-measure anthropometric measures to produce a screening diagnosis for NAFLD with very high accuracy. Further studies are suggested to validate the generalization of the presented model.


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Yuliang Ma ◽  
Bin Chen ◽  
Rihui Li ◽  
Chushan Wang ◽  
Jun Wang ◽  
...  

The rapid development of the automotive industry has brought great convenience to our lives, but it has also led to a dramatic increase in traffic accidents, a large proportion of which are caused by driving fatigue. EEG is considered a direct, effective, and promising modality for detecting driving fatigue. In this study, we present a novel feature extraction strategy based on a deep learning model to achieve high classification accuracy and efficiency when using EEG for driving fatigue detection. EEG signals were recorded from six healthy volunteers in a simulated driving experiment. The feature extraction strategy integrates principal component analysis (PCA) with a deep learning model called PCA network (PCANet). In particular, PCA is used to preprocess the EEG data and reduce its dimension, overcoming the dimension explosion otherwise caused by PCANet and making the approach feasible for EEG-based driving fatigue detection. Results demonstrate high and robust performance of the proposed modified PCANet method, with classification accuracy up to 95%, outperforming conventional feature extraction strategies in the field. We also identified that the parietal and occipital lobes of the brain are strongly associated with driving fatigue. To the best of our knowledge, this is the first study to practically apply the modified PCANet technique to EEG-based driving fatigue detection.
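The PCA preprocessing step, which projects high-dimensional EEG feature vectors onto their top principal components before the PCANet stage, can be written directly with an SVD. The data shape below is an illustrative stand-in, not the study's recording dimensions.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto their top principal components.

    Stands in for the PCA preprocessing that shrinks the EEG feature
    dimension before the (not shown) PCANet stage.
    """
    Xc = X - X.mean(axis=0)                       # center the features
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T               # scores in reduced space

# Illustrative stand-in data: 6 trials x 128 EEG-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 128))
Z = pca_reduce(X, n_components=4)
```

Reducing the input dimension this way is what keeps PCANet's filter bank from blowing up: the network's later stages grow with the input size, so feeding it a handful of principal components instead of raw channels makes the pipeline tractable.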


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
MoungHo Yi ◽  
MyungJin Lim ◽  
Hoon Ko ◽  
JuHyun Shin

With the rising number of Internet users, there has been a rapid increase in cyberbullying. Among its forms, verbal abuse is emerging as the most serious problem, and to prevent it, profanity is identified and blocked. However, users employ words cleverly to avoid blocking. Existing profanity discrimination methods can detect deliberate typos and profanity written with special characters with high accuracy; however, because they cannot grasp the meaning of words or the flow of sentences, standard words such as “Sibaljeom” (starting point, a Korean word that sounds similar to a swear word) and “Saekkibalgalag” (little toe, a Korean word that sounds similar to another swear word) are discriminated less accurately. To solve this problem, this study proposes a method of discriminating profanity using a deep learning model that can grasp the meaning and context of words after separating Hangul into the onset, nucleus, and coda.
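The onset/nucleus/coda separation is a standard Unicode operation: each precomposed Hangul syllable in the block starting at U+AC00 encodes its three jamo arithmetically. The sketch below shows that decomposition step only; how the resulting jamo sequence is fed to the paper's deep learning model is not reproduced here.

```python
# Jamo inventories in Unicode's canonical order.
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"                  # 19 onsets
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"             # 21 nuclei
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 28 codas

def decompose(text):
    """Split precomposed Hangul syllables into onset, nucleus, coda jamo."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                         # Hangul Syllables block
            out += [CHO[code // 588], JUNG[(code % 588) // 28]]
            if code % 28:                             # coda only if present
                out.append(JONG[code % 28])
        else:
            out.append(ch)                            # pass other chars through
    return out
```

For example, `decompose("강")` yields `["ㄱ", "ㅏ", "ㅇ"]`. Working at the jamo level is what lets a model notice that a harmless standard word and a swear word share almost the same jamo sequence, so context must decide.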


2020 ◽  
Author(s):  
Ethan Schonfeld ◽  
Edward Vendrow ◽  
Joshua Vendrow ◽  
Elan Schonfeld

Identification and study of human-essential genes has become of practical importance with the realization that disruption or loss of nearby essential genes can introduce latent vulnerabilities to cancer cells. Essential genes have been studied through copy-number variants and deletion events, which are associated with introns. The premise of our work is that the introns of essential genes have characteristic properties that are distinct from the introns of nonessential genes. We support this premise by training a deep learning model on the introns of essential and nonessential genes, demonstrating that introns alone can be used to classify essential and nonessential genes with high accuracy (AUC of 0.846). We further demonstrate that the same deep learning model restricted to first introns performs at an increased level, underscoring the critical importance of introns, and particularly first introns, to gene essentiality. Using a computational approach, we identified several novel properties of the introns of essential genes, finding that their structure protects against deletion and intron-loss events, and that these traits are especially centered on the first intron. We show that GC density is increased in the first introns of essential genes, allowing for increased enhancer activity, protection against deletions, and improved splice-site recognition. Furthermore, we found that the first introns of essential genes are remarkably smaller than their nonessential counterparts, and that, to protect against common 3' end deletion events, essential genes carry an increased number of (smaller) introns. To demonstrate the importance of the seven features we identified, we trained a feature-based model using only information from these features and achieved high accuracy (AUC of 0.787).
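A feature-based model over a handful of intron properties is typically a simple linear classifier. The sketch below is a minimal logistic-regression fit on toy data standing in for such features (e.g. first-intron length, GC density); the feature values, and the choice of logistic regression itself, are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, steps=500):
    """Fit logistic regression by gradient descent on features X, labels y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)             # current probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on weights
        b -= lr * (p - y).mean()           # gradient step on bias
    return w, b

# Toy stand-in: 2 features per gene, label 1 = essential.
X = np.array([[0.2, 0.9], [0.1, 0.8], [0.9, 0.2], [0.8, 0.1]])
y = np.array([1, 1, 0, 0])
w, b = train_logreg(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

The appeal of the feature-based model over the sequence-level deep model is interpretability: each learned weight directly reports how one intron property pushes a gene toward or away from essentiality.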

