A Dynamic Learning Method for the Classification of the HEp-2 Cell Images

Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 850 ◽  
Author(s):  
Caleb Vununu ◽  
Suk-Hwan Lee ◽  
Oh-Jun Kwon ◽  
Ki-Ryong Kwon

The complete analysis of images of human epithelial type 2 cells, commonly referred to as HEp-2 cells, is one of the most important tasks in the diagnosis of various autoimmune diseases. The automatic classification of these images has been widely discussed since the advent of deep learning-based methods. Certain HEp-2 cell image datasets exhibit extreme complexity due to their significant heterogeneity. In this work, we propose a method that specifically tackles the problem related to this disparity. A dynamic learning process is conducted with different networks taking different input variations in parallel. To emphasize localized changes in intensity, the discrete wavelet transform is used to produce different versions of the input image. The approximation and detail coefficients are fed to four different deep networks in a parallel learning paradigm in order to efficiently homogenize the features extracted from images that have different intensity levels. The feature maps from these different networks are then concatenated and passed to the classification layers to predict the final class of the cellular image. The proposed method was tested on a public dataset comprising images from two intensity levels. The significant heterogeneity of this dataset limits the discrimination results of some state-of-the-art deep learning-based methods. We conducted a comparative study with these methods to demonstrate how the proposed dynamic learning significantly mitigates this heterogeneity-related problem, thus boosting the discrimination results.
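To make the described pipeline concrete, here is a minimal sketch of the parallel-learning idea using PyTorch and PyWavelets: a single-level 2D DWT yields the approximation and three detail sub-bands, each of which feeds its own small CNN branch before the feature maps are concatenated for classification. The branch sizes and six-class output are illustrative assumptions, not the authors' exact architecture.

```python
import pywt
import torch
import torch.nn as nn

def small_branch():
    # One small CNN branch; each DWT sub-band gets its own copy.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class ParallelDWTNet(nn.Module):
    def __init__(self, num_classes=6):  # six HEp-2 patterns (assumption)
        super().__init__()
        # Four parallel branches: approximation + H/V/D detail coefficients.
        self.branches = nn.ModuleList(small_branch() for _ in range(4))
        self.classifier = nn.Linear(4 * 32, num_classes)

    def forward(self, images):  # images: (B, H, W) float tensor on CPU
        cA, (cH, cV, cD) = pywt.dwt2(images.numpy(), 'haar')
        feats = [branch(torch.from_numpy(c).float().unsqueeze(1))
                 for branch, c in zip(self.branches, (cA, cH, cV, cD))]
        # Concatenate per-branch feature vectors, then classify.
        return self.classifier(torch.cat(feats, dim=1))
```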

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5813
Author(s):  
Muhammad Umair ◽  
Muhammad Shahbaz Khan ◽  
Fawad Ahmed ◽  
Fatmah Baothman ◽  
Fehaid Alqahtani ◽  
...  

The COVID-19 outbreak began in December 2019 and has dreadfully affected our lives since then. This newest member of the coronavirus family has claimed more than three million lives. With the emergence of continuously mutating variants of this virus, it remains indispensable to diagnose the virus successfully at an early stage. Although the primary diagnostic technique is the PCR test, non-contact methods utilizing chest radiographs and CT scans are often preferred. Artificial intelligence, in this regard, plays an essential role in the early and accurate detection of COVID-19 using pulmonary images. In this research, a transfer learning technique with fine-tuning was utilized for the detection and classification of COVID-19. Four pre-trained models, i.e., VGG16, DenseNet-121, ResNet-50, and MobileNet, were used. These deep neural networks were trained on a dataset (available on Kaggle) of 7232 (COVID-19 and normal) chest X-ray images. An indigenous dataset of 450 chest X-ray images of Pakistani patients was collected and used for testing and prediction. Various important metrics, e.g., recall, specificity, F1-score, precision, loss graphs, and confusion matrices, were calculated to validate the accuracy of the models. The achieved accuracies of VGG16, ResNet-50, DenseNet-121, and MobileNet were 83.27%, 92.48%, 96.49%, and 96.48%, respectively. To display feature maps depicting how an input image is decomposed across the various filters, a visualization of the intermediate activations was performed. Finally, the Grad-CAM technique was applied to create class-specific heatmaps highlighting the features extracted from the X-ray images. Various optimizers were used for error minimization. DenseNet-121 outperformed the other three models in terms of both accuracy and prediction performance.
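The fine-tuned transfer-learning recipe can be sketched in Keras with the best-performing backbone, DenseNet-121; the input size, head design, unfreezing depth, and learning rates below are illustrative assumptions, not the paper's exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # stage 1: train only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                    # illustrative dropout rate
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Recall(),
                       tf.keras.metrics.Precision()])
# ... fit the head on the training set, then fine-tune:
base.trainable = True
for layer in base.layers[:-30]:  # unfreezing depth is an assumption
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # lower LR for fine-tuning
              loss="binary_crossentropy", metrics=["accuracy"])
```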


2020 ◽  
Vol 12 (3) ◽  
pp. 582 ◽  
Author(s):  
Rui Li ◽  
Shunyi Zheng ◽  
Chenxi Duan ◽  
Yang Yang ◽  
Xiqi Wang

In recent years, researchers have paid increasing attention to hyperspectral image (HSI) classification using deep learning methods. To improve accuracy and reduce the number of required training samples, we propose a double-branch dual-attention mechanism network (DBDA) for HSI classification in this paper. Two branches are designed in DBDA to capture the abundant spectral and spatial features contained in HSI. Furthermore, a channel attention block and a spatial attention block are applied to these two branches, respectively, enabling DBDA to refine and optimize the extracted feature maps. A series of experiments on four hyperspectral datasets shows that the proposed framework achieves performance superior to state-of-the-art algorithms, especially when training samples are severely limited.
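The dual-attention idea can be illustrated with CBAM-style channel and spatial attention blocks; this is a generic PyTorch stand-in, not the authors' exact DBDA blocks, and the channel count, reduction ratio, and kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Re-weights feature channels from globally pooled descriptors.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):                       # x: (B, C, H, W)
        b, c = x.shape[:2]
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                          self.mlp(x.amax(dim=(2, 3))))
        return x * w.view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    # Re-weights spatial positions from channel-pooled maps.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))
```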


Author(s):  
D. Zhang ◽  
J. Lv ◽  
Z. Cheng ◽  
Y. Bai ◽  
Y. Cao

Abstract. With the development of deep learning object tracking methods in recent years, the fully convolutional siamese network tracker SiamFC has become a classic deep learning object tracking algorithm. To address the problem that the tracking accuracy of SiamFC degrades against complex backgrounds, this paper introduces an attention mechanism into SiamFC that performs channel and spatial weighting on the feature maps obtained by convolving the input image. At the same time, the CNN backbone of the algorithm is adjusted, and a siamese network combined with an attention mechanism is proposed for object tracking. This strengthens the effectiveness of feature extraction and enhances the network's ability to discriminate targets. In this paper, the algorithm is tested on the OTB2015, VOT2016, and VOT2017 datasets and compared with multiple object tracking algorithms. Experimental results show that the proposed algorithm better handles the complex-background problem in object tracking and has certain advantages over the other algorithms.
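The core of SiamFC-style tracking is the cross-correlation of exemplar (template) features with search-region features from a shared backbone. Below is a hedged PyTorch sketch of that step; `embed` stands for any shared backbone (optionally wrapped with attention blocks as described above), and the grouped-convolution trick is a common implementation pattern, not necessarily the authors' code.

```python
import torch
import torch.nn.functional as F

def siamese_score_map(embed, exemplar, search):
    # Shared-backbone embeddings of the template and the search region.
    z = embed(exemplar)   # (B, C, Hz, Wz)
    x = embed(search)     # (B, C, Hx, Wx), with Hx > Hz and Wx > Wz
    b, c, h, w = z.shape
    # Slide each template over its own search map (grouped convolution),
    # then sum over channels to get a single-channel response map.
    score = F.conv2d(x.reshape(1, b * c, *x.shape[-2:]),
                     z.reshape(b * c, 1, h, w), groups=b * c)
    return score.reshape(b, c, *score.shape[-2:]).sum(1, keepdim=True)
```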


2022 ◽  
Vol 22 (3) ◽  
pp. 1-14
Author(s):  
K. Shankar ◽  
Eswaran Perumal ◽  
Mohamed Elhoseny ◽  
Fatma Taher ◽  
B. B. Gupta ◽  
...  

The COVID-19 pandemic has led to a significant loss of life and severe economic damage worldwide. To prevent and control COVID-19, a range of smart, complex, spatially heterogeneous control solutions and strategies have been employed. Early classification of 2019 novel coronavirus disease (COVID-19) is needed to cure and control the disease, which creates a need for secondary diagnosis models, since no precise automated toolkits exist. Recent findings obtained using radiological imaging techniques highlight that such images hold noticeable details regarding the COVID-19 virus. The application of recent artificial intelligence (AI) and deep learning (DL) approaches to radiological images has proven useful for accurately detecting the disease. This article introduces a new synergic deep learning (SDL)-based smart health diagnosis of COVID-19 using chest X-ray images. The SDL makes use of dual deep convolutional neural networks (DCNNs) that engage in a mutual learning process. In particular, the image representations learned by both DCNNs are provided as input to a synergic network, which has a fully connected structure and predicts whether a pair of input images belongs to the same class. In addition, the proposed SDL model employs a fuzzy bilateral filtering (FBF) model to pre-process the input image. The integration of FBF and SDL results in the effective classification of COVID-19. To investigate the classification performance of the SDL model, a detailed set of simulations was conducted, confirming the effective performance of the FBF-SDL model over the compared methods.
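The synergic pairing step can be sketched as follows; the embedding dimension and hidden width are illustrative assumptions, and the two DCNN backbones are omitted.

```python
import torch
import torch.nn as nn

class SynergicHead(nn.Module):
    # Fully connected head over the concatenated embeddings of the two
    # DCNNs; it outputs a logit for "the input pair shares a class".
    def __init__(self, feat_dim=512):  # feat_dim is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, f1, f2):
        return self.fc(torch.cat([f1, f2], dim=1))

# Training would combine each DCNN's own classification loss with this
# synergic same/different loss, so the two networks learn from each other.
```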


2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Tulika Kakati ◽  
Dhruba K. Bhattacharyya ◽  
Jugal K. Kalita ◽  
Trina M. Norden-Krichmar

Abstract Background A limitation of traditional differential expression analysis on small datasets is the possibility of false positives and false negatives due to sample variation. Considering the recent advances in deep learning (DL) based models, we wanted to expand the state of the art in disease biomarker prediction from RNA-seq data using DL. However, applying DL to RNA-seq data is challenging due to the absence of appropriate labels and sample sizes that are small compared to the number of genes. Deep learning coupled with transfer learning can improve prediction performance on novel data by incorporating patterns learned from other related data. With the emergence of new disease datasets, biomarker prediction would be facilitated by a generalized model that can transfer the knowledge of trained feature maps to the new dataset. To the best of our knowledge, there is no Convolutional Neural Network (CNN)-based model coupled with transfer learning to predict the significant upregulating (UR) and downregulating (DR) genes from both trained and untrained datasets. Results We implemented a CNN model, DEGnext, to predict UR and DR genes from gene expression data obtained from The Cancer Genome Atlas database. DEGnext uses biologically validated data along with logarithmic fold change values to classify differentially expressed genes (DEGs) as UR and DR genes. We applied transfer learning to our model to leverage the knowledge of trained feature maps on untrained cancer datasets. DEGnext's results (ROC scores between 88% and 99%) were competitive with those of five traditional machine learning methods: Decision Tree, K-Nearest Neighbors, Random Forest, Support Vector Machine, and XGBoost. DEGnext was robust and effective in transferring learned feature maps to facilitate classification of unseen datasets. Additionally, we validated that the DEGs predicted by DEGnext map to significant Gene Ontology terms and pathways related to cancer. Conclusions DEGnext can classify DEGs into UR and DR genes from RNA-seq cancer datasets with high performance. This type of analysis, using biologically relevant fine-tuning data, may aid in the exploration of potential biomarkers and can be adapted for other disease datasets.
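A hedged sketch of the transfer-learning pattern described here, with an illustrative 1-D CNN standing in for DEGnext's published architecture; the layer sizes, class count, and feature count are assumptions.

```python
import torch
import torch.nn as nn

class DEGClassifier(nn.Module):
    # Illustrative 1-D CNN over per-gene expression features.
    def __init__(self, n_features, n_classes=3):  # UR / DR / other (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (B, n_features)
        return self.head(self.features(x.unsqueeze(1)))

# Transfer step: copy the trained feature extractor to a model for a new
# cancer dataset and fine-tune only the classification head.
source = DEGClassifier(n_features=1000)
# ... train `source` on the source dataset ...
target = DEGClassifier(n_features=1000)
target.features.load_state_dict(source.features.state_dict())
for p in target.features.parameters():
    p.requires_grad = False           # freeze transferred feature maps
```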


2020 ◽  
Vol 12 (20) ◽  
pp. 3453
Author(s):  
Liming Pu ◽  
Xiaoling Zhang ◽  
Zenan Zhou ◽  
Jun Shi ◽  
Shunjun Wei ◽  
...  

Phase filtering is a key issue in interferometric synthetic aperture radar (InSAR) applications, such as deformation monitoring and topographic mapping. The accuracy of the derived deformation and terrain height depends highly on the quality of phase filtering, and researchers are committed to continuously improving its accuracy and efficiency. Inspired by the successful application of neural networks to SAR image denoising, in this paper we propose a deep learning-based phase filtering method that efficiently filters out the noise in the interferometric phase. In this method, the real and imaginary parts of the interferometric phase are filtered using a scale recurrent network, which includes three single-scale subnetworks based on the encoder-decoder architecture. The network can utilize the global structural phase information contained in the different-scaled feature maps, because RNN units connect the three different-scaled subnetworks and transmit current state information among them. The encoder part extracts the phase features, and the decoder part restores detailed information from the encoded feature maps and makes the size of the output image the same as that of the input image. Experiments on simulated and real InSAR data show that the proposed method is superior to three widely used phase filtering methods in both qualitative and quantitative comparisons. In addition, on the same simulated dataset, the overall performance of the proposed method is better than that of another deep learning-based method (DeepInSAR). The runtime of the proposed method is only about 0.043 s for a 1024×1024-pixel image, a significant computational-efficiency advantage for practical applications that require real-time processing.
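A rough PyTorch sketch of the coarse-to-fine, state-passing idea behind a scale recurrent network applied to the real and imaginary parts of the phase; this simplified version replaces the paper's RNN units and exact encoder-decoder design with minimal stand-ins, and all sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleStage(nn.Module):
    # One single-scale encoder-decoder stage. A plain convolution over the
    # concatenated hidden state stands in for the paper's RNN units.
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2 + ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(ch, 2, 3, padding=1)

    def forward(self, noisy, state):
        h = self.enc(torch.cat([noisy, state], dim=1))
        return self.dec(h), h  # filtered (real, imag) and new hidden state

def filter_phase(stages, phase):   # phase: (B, H, W) wrapped phase, radians
    x = torch.stack([torch.cos(phase), torch.sin(phase)], dim=1)
    state = torch.zeros(x.size(0), 32, x.size(2) // 4, x.size(3) // 4)
    for k, stage in zip((4, 2, 1), stages):      # coarse to fine
        xs = F.avg_pool2d(x, k) if k > 1 else x
        state = F.interpolate(state, size=xs.shape[-2:])
        out, state = stage(xs, state)            # pass state across scales
    return torch.atan2(out[:, 1], out[:, 0])     # filtered phase

stages = nn.ModuleList(ScaleStage() for _ in range(3))  # three scales
```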


Author(s):  
Virender Ranga ◽  
Shivam Gupta ◽  
Priyansh Agrawal ◽  
Jyoti Meena

Introduction: The major area of work of pathologists is detecting diseases and supporting patients in their healthcare and well-being. The method pathologists currently use for this purpose is manually viewing slides using a microscope and other instruments, but this approach suffers from several problems: there is no standard way of diagnosing, it is prone to human error, and it puts a heavy load on laboratory personnel, who must diagnose a large number of slides daily. Method: The widely used slide-viewing method is converted into digital form to produce high-resolution images, which enables deep learning and machine learning to enter this field of medical science. In the present study, a neural network-based model is proposed for the classification of blood cell images into various categories. When an input image is passed through the proposed architecture, with all hyperparameters and dropout ratios set in accordance with the proposed algorithm, the model classifies the blood images with an accuracy of 95.24%. Result: After training the models for 20 epochs, the training accuracy, testing accuracy, and corresponding training and testing losses for the proposed model were plotted using matplotlib and their trends examined. Discussion: The performance of the proposed model is better than that of existing standard architectures and of other work done by various researchers. The proposed model thus enables the development of a pathological system that will reduce human error and the daily load on laboratory personnel, which can in turn help pathologists carry out their work more efficiently and effectively. Conclusion: In the present study, a neural network-based model has been proposed for the classification of blood cell images into categories that have significance in the medical sciences. When an input image is passed through the proposed architecture, with all hyperparameters and dropout ratios set in accordance with the proposed algorithm, the model classifies the images with an accuracy of 95.24%. This accuracy is better than that of standard architectures, and the proposed neural network also performs better than related work carried out by various researchers.
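A minimal Keras stand-in for a dropout-regularized CNN classifier of this kind; the filter counts, input size, dropout ratios, and four-category output are illustrative assumptions, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),                    # illustrative dropout ratio
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                     # illustrative dropout ratio
    layers.Dense(4, activation="softmax"),   # assumed blood-cell categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# history = model.fit(train_ds, validation_data=val_ds, epochs=20)
# Accuracy/loss curves can then be plotted from history.history via matplotlib.
```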


2021 ◽  
Author(s):  
Jan Lansky ◽  
Mokhtar Mohammadi ◽  
Adil Hussein Mohammed ◽  
Sarkhel H.Taher Karim ◽  
Shima Rashidi ◽  
...  

Abstract The ever-increasing complexity and severity of security attacks on computer networks have inspired security researchers to apply various machine learning methods to protect organizations' data and reputation. Deep learning is one of the exciting techniques that have recently been widely employed by intrusion detection systems (IDS) to secure computer networks and hosts. This survey article focuses on signature-based IDS using deep learning techniques and puts forward an in-depth survey and classification of these schemes. For this purpose, it first presents the essential background concepts of IDS architecture and the various deep learning techniques. It then classifies the schemes according to the type of deep learning method applied in each of them and describes how deep learning networks are utilized in the misuse detection process to recognize intrusions accurately. Finally, a complete analysis of the investigated IDS frameworks is provided, and concluding remarks and future directions are highlighted.


2021 ◽  
Vol 503 (2) ◽  
pp. 2369-2379
Author(s):  
Anna M M Scaife ◽  
Fiona Porter

ABSTRACT Weight sharing in convolutional neural networks (CNNs) ensures that their feature maps will be translation-equivariant. However, although conventional convolutions are equivariant to translation, they are not equivariant to other isometries of the input image data, such as rotation and reflection. For the classification of astronomical objects such as radio galaxies, which are expected statistically to be globally orientation invariant, this lack of dihedral equivariance means that a conventional CNN must learn explicitly to classify all rotated versions of a particular type of object individually. In this work we present the first application of group-equivariant convolutional neural networks to radio galaxy classification and explore their potential for reducing intra-class variability by preserving equivariance for the Euclidean group E(2), containing translations, rotations, and reflections. For the radio galaxy classification problem considered here, we find that classification performance is modestly improved by the use of both cyclic and dihedral models without additional hyper-parameter tuning, and that a D16 equivariant model provides the best test performance. We use the Monte Carlo Dropout method as a Bayesian approximation to recover epistemic uncertainty as a function of image orientation and show that E(2)-equivariant models are able to reduce variations in model confidence as a function of rotation.
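A minimal sketch of the Monte Carlo Dropout procedure mentioned above, written in PyTorch as an assumption (the paper's own model and implementation are not shown here): dropout is left active at test time and the predictive distribution is averaged over repeated stochastic forward passes, with the spread serving as an epistemic-uncertainty estimate.

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    # Keep dropout active at test time (note: model.train() also affects
    # batch-norm layers; a careful implementation would re-enable only the
    # dropout modules) and average over stochastic forward passes.
    model.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    # Predictive mean plus per-class spread as an uncertainty proxy.
    return probs.mean(0), probs.std(0)
```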

