Train Fast While Reducing False Positives: Improving Animal Classification Performance Using Convolutional Neural Networks

Geomatics ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 34-49
Author(s):  
Mael Moreni ◽  
Jerome Theau ◽  
Samuel Foucher

The combination of unmanned aerial vehicles (UAVs) with deep learning models has the capacity to replace manned aircraft for wildlife surveys. However, the scarcity of animals in the wild often leads to highly unbalanced, large datasets for which even a good detection method can return a large number of false detections. Our objectives in this paper were to design a training method that would reduce training time, decrease the number of false positives, and alleviate the fine-tuning effort of an image classifier in the context of animal surveys. We acquired two highly unbalanced datasets of deer images with a UAV and trained a ResNet-18 classifier using hard-negative mining and a series of recent techniques. Our method achieved very low false positive rates on two test sets (1 false positive per 19,162 and 213,312 negatives, respectively), while training on small but relevant fractions of the data. The resulting training times were therefore significantly shorter than they would have been using the whole datasets. This high level of efficiency was achieved with little tuning effort and using simple techniques. We believe this parsimonious approach to dealing with highly unbalanced, large datasets could be particularly useful to projects with either limited resources or extremely large datasets.
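The hard-negative mining loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a 1-D score stands in for the ResNet-18 classifier, and the subset size and thresholding rule are assumptions.

```python
import random

def train_with_hard_negative_mining(positives, negatives, rounds=3, seed=0):
    # Train on all positives plus a small random negative subset, then
    # repeatedly add only the negatives the current model gets wrong
    # (the "hard" ones), instead of training on the whole dataset.
    rng = random.Random(seed)
    train_neg = rng.sample(negatives, min(100, len(negatives)))
    threshold = 0.5
    for _ in range(rounds):
        # "Training" here just places a threshold between the positive
        # scores and the current negative scores; a real pipeline would
        # fit the CNN classifier at this step.
        threshold = (min(positives) + max(train_neg)) / 2
        hard = [n for n in negatives if n >= threshold and n not in train_neg]
        if not hard:          # no remaining false positives to mine
            break
        train_neg.extend(hard)
    return threshold, train_neg
```

Because each round trains only on the negatives that currently fool the model, the final training set stays a small fraction of the full dataset, which is where the training-time savings come from.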

Author(s):  
Huiwu Luo ◽  
Yuan Yan Tang ◽  
Robert P. Biuk-Aghai ◽  
Xu Yang ◽  
Lina Yang ◽  
...  

In this paper, we propose a novel scheme to learn high-level representative features and conduct classification for hyperspectral image (HSI) data in an automatic fashion. The proposed method is a collaboration of a wavelet-based extended morphological profile (WTEMP) and a deep autoencoder (DAE) (“WTEMP-DAE”), with the aim of exploiting the discriminative capability of the DAE when using WTEMP features as the input. Each part of WTEMP-DAE contributes to the final classification performance. Specifically, in WTEMP-DAE, the spatial information is extracted from the WTEMP, which is then joined with the wavelet-denoised spectral information to form the spectral-spatial description of the HSI data. The obtained features are fed into the DAE as the original input, where the weights and biases of the network are initialized through unsupervised pre-training. Once the pre-training is completed, the reconstruction layers are discarded and a logistic regression (LR) layer is added to the top of the network to perform supervised fine-tuning and classification. Experimental results on two real HSI data sets demonstrate that the proposed strategy improves classification performance in comparison with other state-of-the-art hand-crafted feature extractors and their combinations.
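The pre-train-then-fine-tune recipe (unsupervised autoencoder pre-training, discard the reconstruction layer, add an LR head) can be sketched in NumPy. This is a toy single-layer, tied-weight version, not the WTEMP-DAE itself; all dimensions, learning rates, and epoch counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_autoencoder(X, hidden=8, epochs=100, lr=0.1):
    # Unsupervised pre-training: a tied-weight autoencoder minimizing
    # squared reconstruction error by plain gradient descent.
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    for _ in range(epochs):
        H = sigmoid(X @ W)              # encoder
        err = H @ W.T - X               # linear decoder reconstruction error
        grad = err.T @ H + X.T @ ((err @ W) * H * (1 - H))
        W -= lr * grad / n
    return W

def train_logistic_head(H, y, epochs=500, lr=1.0):
    # The reconstruction layer is discarded; a logistic-regression layer
    # on top of the encoded features is trained with supervision.
    n, h = H.shape
    w, b = np.zeros(h), 0.0
    for _ in range(epochs):
        g = sigmoid(H @ w + b) - y      # gradient of the logistic loss
        w -= lr * (H.T @ g) / n
        b -= lr * g.mean()
    return w, b
```

A real WTEMP-DAE would stack several such layers and fine-tune the whole network end to end after adding the LR layer; the sketch shows only the two-phase training idea.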


Genes ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 618
Author(s):  
Yue Jin ◽  
Shihao Li ◽  
Yang Yu ◽  
Chengsong Zhang ◽  
Xiaojun Zhang ◽  
...  

A mutant of the ridgetail white prawn, which exhibited a rare orange-red body color with a higher free astaxanthin (ASTX) concentration than the wild-type prawn, was obtained in our lab. In order to understand the underlying mechanism of this high level of free astaxanthin, transcriptome analysis was performed to identify the differentially expressed genes (DEGs) between the mutant and wild-type prawns. A total of 78,224 unigenes were obtained, and 1863 were identified as DEGs, of which 902 unigenes showed higher expression levels and 961 showed lower expression levels in the mutant in comparison with the wild-type prawns. Based on Gene Ontology and Kyoto Encyclopedia of Genes and Genomes analyses, as well as further investigation of annotated DEGs, we found that the biological processes related to astaxanthin binding, transport, and metabolism differed significantly between the mutant and the wild-type prawns. Some genes related to these processes, including crustacyanin, apolipoprotein D (ApoD), cathepsin, and cuticle proteins, were identified as DEGs between the two types of prawns. These data may provide important information for understanding the molecular mechanism underlying the high level of free astaxanthin in this prawn.
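A minimal sketch of how up- and down-regulated DEGs can be called from expression values by log2 fold change. Real pipelines (e.g. DESeq2 or edgeR) also test statistical significance and normalize library sizes; the threshold and the expression numbers below are illustrative, with crustacyanin and ApoD used only as example gene names from the abstract.

```python
import math

def call_degs(expr_mutant, expr_wildtype, min_abs_log2fc=1.0):
    # Classify genes as up- or down-regulated in the mutant by log2
    # fold change; a pseudocount of 1 avoids log(0).
    up, down = [], []
    for gene, m in expr_mutant.items():
        lfc = math.log2((m + 1) / (expr_wildtype[gene] + 1))
        if lfc >= min_abs_log2fc:
            up.append(gene)
        elif lfc <= -min_abs_log2fc:
            down.append(gene)
    return up, down
```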


2021 ◽  
Vol 9 (6) ◽  
pp. 1290
Author(s):  
Natalia Alvarez-Santullano ◽  
Pamela Villegas ◽  
Mario Sepúlveda Mardones ◽  
Roberto E. Durán ◽  
Raúl Donoso ◽  
...  

Burkholderia sensu lato (s.l.) species have a versatile metabolism. The aims of this review are the genomic reconstruction of the metabolic pathways involved in the synthesis of polyhydroxyalkanoates (PHAs) by Burkholderia s.l. genera, and the characterization of the PHA synthases and the organization of the pha genes. Reports of PHA synthesis from different substrates by Burkholderia s.l. strains were reviewed. Genome-guided metabolic reconstruction involving the conversion of sugars and fatty acids into PHAs by 37 Burkholderia s.l. species was performed. Sugars are metabolized via the Entner–Doudoroff (ED), pentose-phosphate (PP), and lower Embden–Meyerhof–Parnas (EMP) pathways, which produce reducing power through NAD(P)H synthesis and PHA precursors. Fatty acid substrates are metabolized via β-oxidation and de novo fatty acid synthesis into PHAs. The analysis of 194 Burkholderia s.l. genomes revealed that all strains have the phaC, phaA, and phaB genes for PHA synthesis, wherein the phaC gene is generally present in ≥2 copies. PHA synthases were classified into four phylogenetic groups belonging to classes I, II, and III PHA synthases and one outlier group. The reconstruction of PHA synthesis revealed a high level of gene redundancy, probably reflecting complex regulatory layers that provide fine-tuning according to diverse substrates and physiological conditions.


Author(s):  
Mehdi Bahri ◽  
Eimear O’Sullivan ◽  
Shunwang Gong ◽  
Feng Liu ◽  
Xiaoming Liu ◽  
...  

Abstract Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are multifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model. Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.
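The "improved point cloud encoder" mentioned above builds on PointNet-style encoders; a generic sketch of that idea is shown below: a shared per-point MLP followed by symmetric max pooling, so the global feature is invariant to point order. The weights and layer sizes here are random placeholders, not SMF's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)   # shared MLP, layer 1
W2, b2 = rng.normal(size=(16, 32)), np.zeros(32)  # shared MLP, layer 2

def encode_point_cloud(points):
    # Apply the same small MLP to every 3-D point, then max-pool across
    # points; the pooling is symmetric, so the global feature does not
    # depend on the order of the points in the scan.
    h = np.maximum(points @ W1 + b1, 0.0)   # ReLU
    h = np.maximum(h @ W2 + b2, 0.0)        # ReLU
    return h.max(axis=0)                    # order-invariant pooling
```

Order invariance matters because raw scans list points in arbitrary order; the symmetric pooling removes that arbitrariness before decoding.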


Author(s):  
Xuhai Xu ◽  
Ebrahim Nemati ◽  
Korosh Vatanparvar ◽  
Viswam Nathan ◽  
Tousif Ahmed ◽  
...  

The prevalence of ubiquitous computing enables new opportunities for lung health monitoring and assessment. In the past few years, there have been extensive studies on cough detection using passively sensed audio signals. However, the generalizability of a cough detection model when applied to external datasets, especially in real-world implementation, is questionable and has not been explored adequately. Beyond detecting coughs, researchers have looked into how cough sounds can be used to assess lung health. However, due to the challenges in collecting both cough sounds and lung health condition ground truth, previous studies have been hindered by limited datasets. In this paper, we propose Listen2Cough to address these gaps. We first build an end-to-end deep learning architecture using public cough sound datasets to detect coughs within raw audio recordings. We employ a pre-trained MobileNet and integrate a number of augmentation techniques to improve the generalizability of our model. Without additional fine-tuning, our model is able to achieve an F1 score of 0.948 when tested against a new clean dataset, and 0.884 on another in-the-wild noisy dataset, leading to an advantage of 5.8% and 8.4% on average over the best baseline model, respectively. Then, to mitigate the issue of limited lung health data, we propose to transform the cough detection task into lung health assessment tasks so that the rich cough data can be leveraged. Our hypothesis is that these tasks extract and utilize similar effective representations from cough sounds. We embed the cough detection model into a multi-instance learning framework with an attention mechanism and further tune the model for lung health assessment tasks. Our final model achieves an F1 score of 0.912 on healthy vs. unhealthy, 0.870 on obstructive vs. non-obstructive, and 0.813 on COPD vs. asthma classification, outperforming the baseline by 10.7%, 6.3%, and 3.7%, respectively. Moreover, the weight values in the attention layer can be used to identify important coughs highly correlated with lung health, which can potentially provide interpretability for expert diagnosis in the future.
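Attention-based multi-instance pooling, the mechanism named above, can be sketched generically: each instance (e.g. one cough embedding) gets a score, a softmax turns the scores into weights, and the bag representation is the weighted sum. The linear scoring vector `v` is a stand-in for the paper's learned attention network, not its actual parameterization.

```python
import numpy as np

def attention_mil_pool(instances, v):
    # Score each instance, softmax the scores into attention weights,
    # and return the weighted sum as the bag representation together
    # with the weights (which indicate the most influential instances).
    scores = instances @ v
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ instances, w
```

Because the weights sum to one, they can be read directly as the relative importance of each cough in the bag, which is the interpretability property the abstract points to.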


2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
Yizhe Wang ◽  
Cunqian Feng ◽  
Yongshun Zhang ◽  
Sisan He

Precession is a common micromotion form of space targets, introducing additional micro-Doppler (m-D) modulation into the radar echo. Effective classification of space targets is of great significance for further micromotion parameter extraction and identification. Feature extraction is a key step during the classification process, largely influencing the final classification performance. This paper presents two methods for classifying different types of space precession targets from their high-resolution range profiles (HRRPs). We first establish the precession model of space targets and analyze the scattering characteristics, and then compute electromagnetic data of the cone target, cone-cylinder target, and cone-cylinder-flare target. Experimental results demonstrate that the support vector machine (SVM) using histograms of oriented gradient (HOG) features achieves a good result, whereas the deep convolutional neural network (DCNN) obtains a higher classification accuracy. The DCNN combines the feature extractor and the classifier itself, automatically mining the high-level signatures of HRRPs through a training process. In addition, the efficiency of the two classification processes is compared using the same dataset.
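The core of a HOG feature is a gradient-orientation histogram, which can be sketched as follows. This is a simplified illustration: the cell/block decomposition and block normalization of full HOG are omitted, and the bin count is just the common default.

```python
import numpy as np

def orientation_histogram(img, bins=9):
    # Image gradients binned by unsigned orientation (0-180 degrees),
    # each pixel weighted by its gradient magnitude, then normalized.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-9)
```

Feeding such histograms (computed per cell and concatenated) to an SVM is the classic HOG+SVM pipeline the paper compares against the DCNN.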


2019 ◽  
Vol 152 (Supplement_1) ◽  
pp. S35-S36
Author(s):  
Hadrian Mendoza ◽  
Christopher Tormey ◽  
Alexa Siddon

Abstract In the evaluation of bone marrow (BM) and peripheral blood (PB) for hematologic malignancy, positive immunoglobulin heavy chain (IG) or T-cell receptor (TCR) gene rearrangement results may be detected despite unrevealing results from morphologic, flow cytometric, immunohistochemical (IHC), and/or cytogenetic studies. The significance of positive rearrangement studies in the context of otherwise normal ancillary findings is unknown, and as such, we hypothesized that gene rearrangement studies may be predictive of an emerging B- or T-cell clone in the absence of other abnormal laboratory tests. Data from all patients who underwent IG or TCR gene rearrangement testing at the authors’ affiliated VA hospital between January 1, 2013, and July 6, 2018, were extracted from the electronic medical record. Date of testing; specimen source; and morphologic, flow cytometric, IHC, and cytogenetic characterization of the tissue source were recorded from pathology reports. Gene rearrangement results were categorized as true positive, false positive, false negative, or true negative. Lastly, patient records were reviewed for subsequent diagnosis of hematologic malignancy in patients with positive gene rearrangement results but negative ancillary testing. A total of 136 patients, who had 203 gene rearrangement studies (50 PB and 153 BM), were analyzed. In TCR studies, there were 2 false positives and 1 false negative in 47 PB assays, as well as 7 false positives and 1 false negative in 54 BM assays. Regarding IG studies, 3 false positives and 12 false negatives in 99 BM studies were identified. Sensitivity and specificity, respectively, were calculated for PB TCR studies (94% and 93%), BM IG studies (71% and 95%), and BM TCR studies (92% and 83%). Analysis of PB IG gene rearrangement studies was not performed due to the small number of tests (3; all true negative). None of the 12 patients with false-positive IG/TCR gene rearrangement studies later developed a lymphoproliferative disorder, although 2 patients were later diagnosed with acute myeloid leukemia. Of the 14 false negatives, 10 (71%) were related to a diagnosis of plasma cell neoplasms. Results from the present study suggest that positive IG/TCR gene rearrangement studies are not predictive of lymphoproliferative disorders in the context of otherwise negative BM or PB findings. As such, when faced with equivocal pathology reports, clinicians can be practically advised that isolated positive IG/TCR gene rearrangement results may not indicate the need for closer surveillance.
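The sensitivity and specificity figures follow the standard definitions; a small helper makes the computation explicit. The TP/TN counts in the usage below are illustrative values that happen to reproduce the reported 94% and 93% for PB TCR studies given the reported 2 false positives and 1 false negative; the exact TP/TN tallies are not stated in the abstract.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    # Sensitivity: fraction of true clones detected, TP / (TP + FN).
    # Specificity: fraction of clone-free samples called negative,
    # TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)
```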


2006 ◽  
Vol 72 (6) ◽  
pp. 3924-3932 ◽  
Author(s):  
Erik Lysøe ◽  
Sonja S. Klemsdal ◽  
Karen R. Bone ◽  
Rasmus J. N. Frandsen ◽  
Thomas Johansen ◽  
...  

ABSTRACT Zearalenones are produced by several Fusarium species and can cause reproductive problems in animals. Some aurofusarin mutants of Fusarium pseudograminearum produce elevated levels of zearalenone (ZON), one of the estrogenic mycotoxins in this family. An analysis of transcripts from polyketide synthase genes identified in the Fusarium graminearum database was carried out for these mutants. PKS4 was the only gene with an enoyl reductase domain that had a higher level of transcription in the aurofusarin mutants than in the wild type. An Agrobacterium tumefaciens-mediated transformation protocol was used to replace the central part of the PKS4 gene with a hygB resistance gene through double homologous recombination in an F. graminearum strain producing a high level of ZON. PCR and Southern analysis of transformants were used to identify isolates with single insertional replacements of PKS4. High-performance liquid chromatography analysis showed that the PKS4 replacement mutant did not produce ZON. Thus, PKS4 encodes an enzyme required for the production of ZON in F. graminearum. Barley root infection studies revealed no alteration in the pathogenicity of the PKS4 mutant compared to that of the wild type. The expression of PKS13, which is located in the same cluster as PKS4, decreased dramatically in the mutant, while transcription of PKS4 was unchanged. This differential expression may indicate that ZON or its derivatives do not regulate expression of PKS4 and that the PKS4-encoded protein or its product stimulates expression of PKS13. Furthermore, the lack of both aurofusarin and ZON influenced the expression of other polyketide synthases, demonstrating that one polyketide can influence the expression of others.


Author(s):  
Bo Wang ◽  
Xiaoting Yu ◽  
Chengeng Huang ◽  
Qinghong Sheng ◽  
Yuanyuan Wang ◽  
...  

The excellent feature extraction ability of deep convolutional neural networks (DCNNs) has been demonstrated in many image processing tasks, by which image classification can achieve high accuracy with only raw input images. However, the specific image features that influence the classification results are not readily determinable, and what lies behind the predictions is unclear. This study proposes a method combining the Sobel and Canny operators and an Inception module for ship classification. The Sobel and Canny operators obtain enhanced edge features from the input images. A convolutional layer is replaced with the Inception module, which can automatically select the proper convolution kernel for ship objects in different image regions. The principle is that the high-level features abstracted by the DCNN and the features obtained by multi-convolution concatenation of the Inception module must ultimately derive from the edge information of the preprocessed input images. This indicates that the classification results are based on the input edge features, which indirectly interprets the classification results to some extent. Experimental results show that the combination of the edge features and the Inception module improves DCNN ship classification performance. The original model with the raw dataset has an average accuracy of 88.72%, whereas the model using enhanced edge features as input achieves the best performance, 90.54%, among all models. The model that replaces the fifth convolutional layer with the Inception module has the best performance of 89.50%. It performs close to VGG-16 on the raw dataset and is significantly better than other deep neural networks. The results validate the functionality and feasibility of the idea posited.
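A minimal Sobel edge map, the kind of enhanced edge input described above, can be computed by convolving with the two 3×3 Sobel kernels. This is a naive loop version for clarity, assuming a single-channel image; the Canny stage and any library-specific API are omitted.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal
KY = KX.T                                                          # vertical

def sobel_magnitude(img):
    # Naive valid-mode convolution with the two Sobel kernels; the
    # output is the gradient magnitude, i.e. an enhanced edge map
    # that can replace raw pixels as classifier input.
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * KX).sum(), (patch * KY).sum())
    return out
```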


2018 ◽  
Vol 156 (5) ◽  
pp. 234 ◽  
Author(s):  
Karen A. Collins ◽  
Kevin I. Collins ◽  
Joshua Pepper ◽  
Jonathan Labadie-Bartz ◽  
Keivan G. Stassun ◽  
...  
