feature expression
Recently Published Documents


TOTAL DOCUMENTS: 49 (FIVE YEARS: 36)

H-INDEX: 4 (FIVE YEARS: 2)

2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Chunhua Zhao ◽  
Zhangwen Lin ◽  
Jinling Tan ◽  
Hengxing Hu ◽  
Qian Li

To address the difficulty of acquiring wear particle data from large-modulus gear teeth and the scarcity of training data, an integrated LCNNE model based on transfer learning is proposed in this paper. First, wear particles are diagnosed and classified by combining a new joint loss function with two pretrained models, VGG19 and GoogLeNet. Wear particles in gearbox lubricating oil are then chosen as the experimental object for comparison. Compared with the experimental results of four other models, the superiority of the proposed model in wear particle identification and classification is verified. Using the five models as feature extractors and support vector machines as classifiers, the experimental results and comparative analysis show that the LCNNE model outperforms the other four models because its feature expression ability is stronger.
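
The "CNN feature extractor + SVM classifier" comparison described above can be prototyped along the following lines. This is a minimal sketch assuming torchvision backbones and scikit-learn, with illustrative layer choices and hypothetical data names; it is not the authors' LCNNE code.

```python
# Sketch: pretrained VGG19 and GoogLeNet as fixed feature extractors feeding an SVM.
# Assumes torchvision >= 0.13 (older versions use pretrained=True instead of weights=).
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

vgg = models.vgg19(weights="DEFAULT").eval()
goog = models.googlenet(weights="DEFAULT").eval()
vgg.classifier = nn.Identity()   # now outputs the 25088-d flattened conv features
goog.fc = nn.Identity()          # now outputs the 1024-d pooled features

def extract_features(batch):
    """Concatenate the two backbones' features for a batch of images."""
    with torch.no_grad():
        return torch.cat([vgg(batch), goog(batch)], dim=1)

# Hypothetical data: wear-particle images of shape (N, 3, 224, 224) and integer labels.
# feats = extract_features(images).numpy()
# svm = SVC(kernel="rbf").fit(feats, labels)
```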


2021 ◽  
Author(s):  
Yan Miao ◽  
Fu Liu ◽  
Tao Hou ◽  
Qiaoliang Liu ◽  
Tian Dong ◽  
...  

A metagenome contains all DNA sequences from an environmental sample, including those of viruses, bacteria, fungi, actinomycetes and other organisms. Because viruses are highly abundant and, as major pathogens, have historically caused enormous mortality and morbidity, detecting viruses in metagenomes plays a crucial role in analysing the viral component of samples and is the first step towards clinical diagnosis. However, detecting viral fragments directly from metagenomes remains difficult because of the huge number of short sequences. In this paper, a hybrid Deep lEarning model for idenTifying vIral sequences fRom mEtagenomes (DETIRE) is proposed to solve this problem. First, a graph-based nucleotide sequence embedding strategy is used to enrich the representation of DNA sequences by training an embedding matrix. Then, spatial and sequential features are extracted by trained CNN and BiLSTM networks, respectively, to improve the feature expression of short sequences. Finally, the two sets of features are combined with learned weights for the final decision. Trained on 220,000 sequences of 500 bp subsampled from the Virus and Host RefSeq genomes, DETIRE identifies more short viral sequences (<1,000 bp) than three recent methods: DeepVirFinder, PPR-Meta and CHEER. DETIRE is freely available at https://github.com/crazyinter/DETIRE.
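
A minimal PyTorch sketch of the CNN + BiLSTM fusion idea follows. The vocabulary size, dimensions, kernel size and the learned fusion weight are illustrative assumptions, not the released DETIRE architecture.

```python
# Two feature branches over an embedded DNA sequence: a 1-D CNN for spatial
# features and a BiLSTM for sequential features, fused with a learned weight.
import torch
import torch.nn as nn

class HybridViralClassifier(nn.Module):
    def __init__(self, vocab_size=64, embed_dim=100, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # could be loaded from a pretrained matrix
        self.cnn = nn.Sequential(
            nn.Conv1d(embed_dim, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.bilstm = nn.LSTM(embed_dim, 64, batch_first=True, bidirectional=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))      # learned weight for combining the two feature sets
        self.fc = nn.Linear(128, n_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len) integer-encoded bases/k-mers
        x = self.embed(tokens)                 # (batch, seq_len, embed_dim)
        spatial = self.cnn(x.transpose(1, 2)).squeeze(-1)   # (batch, 128)
        _, (h, _) = self.bilstm(x)
        sequential = torch.cat([h[0], h[1]], dim=1)         # (batch, 128)
        fused = self.alpha * spatial + (1 - self.alpha) * sequential
        return self.fc(fused)
```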


2021 ◽  
Vol 2078 (1) ◽  
pp. 012034
Author(s):  
Xuemei Hou ◽  
Fei Gao ◽  
Jianping Wu ◽  
Minghui Wu

Abstract Traditional pathological grading of hepatocellular carcinoma (HCC) depends on biopsy, which is invasive and therefore not suitable for every patient. The purpose of this paper is to study the pathological grading of liver tumors on MRI images using a deep learning algorithm, so as to further improve the accuracy of HCC pathological grading. An improved network model based on SE-DenseNet is proposed. The nonlinear relationships between feature channels are modeled and recalibrated with an attention mechanism, and rich deep features are extracted, improving the feature expression ability of the network. The proposed method is validated on a dataset of 197 patients, split into 130 training cases and 67 test cases. The experimental results are evaluated with the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). The improved SE-DenseNet achieves good results, with an AUC of 0.802 on the test set. The experimental results show that the proposed method can predict the pathological grade of HCC well.
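
The channel recalibration step can be illustrated with a standard squeeze-and-excitation (SE) block of the kind SE-DenseNet builds on; the reduction ratio below is an illustrative choice, not a detail taken from the paper.

```python
# Squeeze-and-excitation block: global pooling summarizes each channel, a small
# bottleneck MLP models channel interdependence, and a sigmoid reweights channels.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                   # excitation: nonlinear channel recalibration
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # reweight each feature channel
```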


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Weixian Song ◽  
Junlong Fang ◽  
Runtao Wang ◽  
Kezhu Tan ◽  
Marwan Aouad

Abstract The behaviour of pigs is often closely related to their health, so pig recognition is important for pig behaviour analysis and digital breeding. Currently, the early signs and abnormal behaviours of sick pigs on breeding farms are mainly identified by human observation. However, visual inspection is labour intensive and time-consuming, and it is affected by individual experience and varying environments. In this study, an improved ResNet model based on deep learning was proposed and applied to detect individual pigs. The developed model captures pig features through cross-layer connections, and its feature expression ability is improved by adding a new residual module. The number of layers was reduced to minimise network complexity. In general, the ResNet framework was developed by reducing the number of convolution layers, constructing different types of residual modules and increasing the number of convolution kernels. With the improved model, the training accuracy and testing accuracy reached 98.2% and 96.4%, respectively. The experimental results show that the proposed method has potential for monitoring the living conditions of commercial pigs and supporting disease prevention on pig farms.
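
A basic residual module with a cross-layer skip connection, the kind of building block such an improved ResNet modifies, looks roughly as follows; the channel counts and projection shortcut are illustrative, not the paper's exact module.

```python
# A residual block: two conv-BN stages plus an identity (or 1x1 projection) skip path.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection keeps the skip path shape-compatible when dimensions change
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))
```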


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1398
Author(s):  
Taian Guo ◽  
Tao Dai ◽  
Ling Liu ◽  
Zexuan Zhu ◽  
Shu-Tao Xia

Convolutional Neural Networks (CNNs) have been widely used in video super-resolution (VSR). Most existing VSR methods focus on how to utilize the information of multiple frames while neglecting the correlations among intermediate features, thus limiting the feature expression of the models. To address this problem, we propose a novel Scale-and-Attention-Aware (SAA) network that applies different attention to streams of different temporal lengths, and further explores both spatial and channel attention on separate streams with a newly proposed Criss-Cross Channel Attention Module (C3AM). Experiments on public VSR datasets demonstrate the superiority of our method over other state-of-the-art methods in terms of both quantitative and qualitative metrics.
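
The abstract gives no internal details for C3AM, so the sketch below only illustrates the surrounding idea of giving separate temporal-length streams their own learned attention weights before fusion; the class name and weighting scheme are assumptions for illustration.

```python
# Learned per-stream attention: each temporal-length stream is pooled to a channel
# descriptor, scored, softmax-normalized across streams, and used to weight the fusion.
import torch
import torch.nn as nn

class StreamAttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Linear(channels, 1)        # one attention score per stream

    def forward(self, streams):                    # list of (batch, C, H, W) stream features
        pooled = torch.stack([s.mean(dim=(2, 3)) for s in streams], dim=1)   # (batch, S, C)
        weights = torch.softmax(self.score(pooled), dim=1)                   # (batch, S, 1)
        stacked = torch.stack(streams, dim=1)                                # (batch, S, C, H, W)
        return (stacked * weights.unsqueeze(-1).unsqueeze(-1)).sum(dim=1)    # (batch, C, H, W)
```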


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Lingjing Chen

Facial features are an effective representation of a student's fatigue state, and the eyes are the features most closely related to fatigue. However, there are three main problems in existing research: (1) eye localization is vulnerable to the external environment; (2) ocular features must be manually defined and extracted for state judgment; and (3) although student fatigue detection based on convolutional neural networks achieves high accuracy, it is difficult to deploy on terminal devices in real time. In view of these problems, a method for judging student fatigue state is proposed that combines face detection with lightweight deep learning. First, the AdaBoost algorithm is used to detect faces in the input images, and the images marked with face regions are saved to a local folder, which serves as the sample dataset for the eye open/closed judgment. Second, a novel reconstructed pyramid structure is proposed to improve MobileNetV2-SSD and thereby raise the accuracy of target detection. Then, a feature enhancement and suppression mechanism based on the SE-Net module is introduced to effectively improve feature expression ability. The final experimental results show that, compared with commonly used target detection networks, the proposed method classifies eye state better and improves both real-time performance and accuracy.
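
The AdaBoost face-detection step can be prototyped with OpenCV's Haar-cascade detector (a classic AdaBoost-based classifier); the cascade file, parameters and file paths below are illustrative rather than the authors' pipeline.

```python
# Detect faces with an AdaBoost-trained Haar cascade and crop the face regions,
# which could then be saved as samples for the eye open/closed classifier.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(image_bgr):
    """Detect faces in a BGR image and return the cropped face regions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in boxes]

# Hypothetical usage:
# faces = crop_faces(cv2.imread("frame_0001.jpg"))
# for i, face in enumerate(faces):
#     cv2.imwrite(f"faces/face_{i}.jpg", face)   # saved to the local sample folder
```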


2021 ◽  
Author(s):  
Chunyan Zeng ◽  
Yao Yang ◽  
Zhifeng Wang ◽  
Shuai Kong ◽  
Shixiong Feng ◽  
...  

Abstract Digital audio tampering detection can be applied to verify the authenticity of digital audio. However, current methods are mostly based on visual comparison of the continuity of the electric network frequency (ENF) of digital audio against a standard ENF database. Such a database is usually difficult to obtain, and the feature expression of the visualization method is weak, leading to low detection accuracy. To solve this problem, this paper proposes an audio tampering detection method based on the fusion of shallow and deep features. First, the audio signal is band-pass filtered to obtain the ENF component, and the discrete Fourier transform and Hilbert transform are applied to obtain the phase and instantaneous frequency of the ENF component. Second, shallow features are extracted by framing and curve-fitting the estimated phase and instantaneous frequency. Then, a designed convolutional neural network is used to obtain deep features, and an attention mechanism is applied to fuse the shallow and deep features. Finally, after dimensionality reduction through a fully connected layer, a Softmax layer performs classification to detect tampered audio. The method achieves 97.03% accuracy on three classic databases: Carioca 1, Carioca 2, and New Spanish. In addition, it achieves 88.31% accuracy on the newly constructed database GAUDI-DI. Experimental results show that the proposed method is superior to state-of-the-art methods.
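
The shallow-feature front end (band-pass filtering followed by Hilbert-based phase and instantaneous-frequency estimation) can be sketched with SciPy as follows; the nominal ENF of 50 Hz, the filter order and the bandwidth are assumptions, and the framing/fitting step is omitted.

```python
# Estimate ENF phase and instantaneous frequency from an audio signal:
# narrow band-pass around the nominal mains frequency, then Hilbert transform.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def enf_phase_and_freq(audio, fs, nominal=50.0, half_band=0.5):
    sos = butter(4, [nominal - half_band, nominal + half_band],
                 btype="bandpass", fs=fs, output="sos")
    enf = sosfiltfilt(sos, audio)                  # narrow-band ENF component
    analytic = hilbert(enf)                        # analytic signal
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
    return phase, inst_freq
```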


Algorithms ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 290
Author(s):  
Kai Ma ◽  
Ming-Jun Nie ◽  
Sen Lin ◽  
Jianlei Kong ◽  
Cheng-Cai Yang ◽  
...  

Accurate identification of insect pests is key to improving crop yield and ensuring quality and safety. However, under varying environmental conditions, pests of the same species can show obvious intraclass differences, while pests of different species can appear highly similar. Traditional methods struggle with such fine-grained pest identification and are rarely practical to deploy. To solve this problem, this paper uses a variety of terminal devices in the agricultural Internet of Things to collect a large number of pest images and proposes FPNT, a fine-grained pest identification model based on a probability fusion network. The model designs a fine-grained feature extractor based on an optimized CSPNet backbone, mining local feature expressions at different levels that can distinguish subtle differences. After a NetVLAD aggregation layer, a gated probability fusion layer exploits the information complementarity and confidence coupling of multi-model fusion. Comparative tests show that the FPNT model achieves an average recognition accuracy of 93.18% across all pest classes, outperforming other deep-learning methods, with the average processing time reduced to 61 ms. The model can therefore meet the needs of fine-grained pest image recognition in agricultural and forestry Internet of Things practice and provide a technical reference for intelligent early warning and prevention of pests.
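
A hedged sketch of gated probability fusion follows: per-model class probabilities are combined through confidence-dependent gates. The exact gating form used by FPNT is not specified in the abstract, so this is an illustrative variant.

```python
# Gated probability fusion: a small gating layer looks at all models' class
# probabilities and assigns each model a weight before averaging their outputs.
import torch
import torch.nn as nn

class GatedProbabilityFusion(nn.Module):
    def __init__(self, n_models, n_classes):
        super().__init__()
        self.gate = nn.Linear(n_models * n_classes, n_models)

    def forward(self, probs):                      # probs: (batch, n_models, n_classes)
        b, m, c = probs.shape
        gates = torch.softmax(self.gate(probs.reshape(b, m * c)), dim=1)   # (batch, n_models)
        return (probs * gates.unsqueeze(-1)).sum(dim=1)                    # fused class probabilities
```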


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6382
Author(s):  
Weizheng Qiao ◽  
Xiaojun Bi

Recently, deep convolutional neural networks (CNNs) with inception modules have attracted much attention due to their excellent performance across diverse domains. Nevertheless, a basic CNN captures only a univariate, essentially linear feature, which weakens feature expression and leads to insufficient feature mining. In response, researchers have kept deepening their networks, which brings parameter redundancy and model over-fitting. Hence, whether this efficient deep neural network architecture can be employed to improve CNNs and enhance their image recognition capacity remains unknown. In this paper, we introduce spike-and-slab units into a modified inception module, enabling our model to capture dual latent variables together with mean and covariance information. This operation further enhances the robustness of our model to variations in image intensity without increasing the number of model parameters. Results on several tasks demonstrate that dual-variable operations can be well integrated into inception modules and achieve excellent results.
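
As a deterministic caricature of a spike-and-slab unit, one can gate a real-valued ("slab") convolutional response with a sigmoid ("spike") gate, giving the dual latent variables referred to above; this analogy is an assumption for illustration, not the paper's exact formulation.

```python
# Spike-and-slab style unit: one branch produces a real-valued response (slab),
# the other a gate in (0, 1) (spike); the output is their elementwise product.
import torch
import torch.nn as nn

class SpikeAndSlabConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.slab = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # real-valued response
        self.spike = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # gate logits

    def forward(self, x):
        return torch.sigmoid(self.spike(x)) * self.slab(x)   # spike gates the slab
```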

