learning networks
Recently Published Documents


TOTAL DOCUMENTS: 1281 (FIVE YEARS: 562)
H-INDEX: 39 (FIVE YEARS: 12)

Author(s): Layth Kamil Adday Almajmaie, Ahmed Raad Raheem, Wisam Ali Mahmood, Saad Albawi

Segmenting brain tissues from magnetic resonance images (MRI) remains a substantive challenge for the clinical research community, particularly when precise estimation of those tissues is required. In recent years, advances in deep learning, and in fully convolutional networks (FCN) in particular, have yielded strong results in segmenting brain tumour tissue with high accuracy and precision. A new hybrid deep learning architecture that combines SegNet and U-Net to segment brain tissue is proposed here. The skip connections of the U-Net are exploited so that the multi-scale information generated by the SegNet encoder can be used to recover precise tissue boundaries from the brain images. To further sharpen the delineated contours, the segmentation output is incorporated as a level-set layer in the network. The method was evaluated on the brain tumour segmentation (BraTS) 2017 and BraTS 2018 datasets, which are dedicated MRI brain tumour benchmarks, and the results indicate better segmentation performance than existing methods.
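The abstract does not include an implementation, and the exact SegNet/U-Net hybrid and level-set layer are not specified. As a rough illustration of the general idea, an encoder-decoder segmentation network with U-Net-style skip connections, a minimal PyTorch sketch might look like the following; layer sizes, channel counts, and module names are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as used in SegNet- and U-Net-style encoders
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class HybridSegNet(nn.Module):
    """Illustrative encoder-decoder with U-Net-style skip connections (not the published model)."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)           # 256 = 128 (upsampled) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection from encoder
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# logits = HybridSegNet()(torch.randn(1, 1, 240, 240))  # e.g. one BraTS-sized axial slice
```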


2022, Vol 122, pp. 108304
Author(s): Zhengping Hu, Zijun Li, Xueyu Wang, Saiyue Zheng

Plant Methods, 2022, Vol 18 (1)
Author(s): Lili Li, Jiangwei Qiao, Jian Yao, Jie Li, Li Li

Abstract
Background: Freezing injury is a devastating yet common form of damage to winter rapeseed during the overwintering period; it directly reduces yield and causes heavy economic loss. Identifying freezing-tolerant rapeseed materials is therefore an important and urgent task for crop breeders. Existing large-scale methods for recognizing freezing-tolerant rapeseed rely mainly on field investigation by agricultural experts using professional equipment. These methods are time-consuming, inefficient, and laborious, and their accuracy depends heavily on the knowledge and experience of the experts.
Methods: To address these problems, we propose a low-cost approach to recognizing freezing-tolerant rapeseed materials using deep learning and images captured by a consumer unmanned aerial vehicle (UAV). We formulate freezing-tolerant material recognition as a binary classification problem, which deep learning can solve well, so the method can automatically and efficiently identify freezing-tolerant rapeseed materials among a large number of candidates. To train the networks, we first manually construct a real dataset from UAV images of rapeseed materials captured with a DJI Phantom 4 Pro V2.0. Five classic deep learning networks (AlexNet, VGGNet16, ResNet18, ResNet50 and GoogLeNet) are then used to perform the recognition.
Results and conclusion: All five networks achieve an accuracy above 92%, with ResNet50 giving the best accuracy (93.33%). We also compare the deep learning networks with traditional machine learning methods; the deep learning-based methods significantly outperform them on this task. The experimental results show that recognizing freezing-tolerant rapeseed from UAV images with deep learning is feasible.
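The abstract frames the task as fine-tuning standard classifiers on UAV image crops. A minimal sketch of what such a setup could look like with torchvision's ResNet50 is shown below; it assumes a recent torchvision, and the preprocessing, learning rate, and two-class head are illustrative choices, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative fine-tuning setup (assumes torchvision >= 0.13 for the weights API).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: freezing-tolerant vs. not

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: a batch of preprocessed UAV plot crops; labels: 0/1 tolerance class
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```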


2022, Vol 2022, pp. 1-21
Author(s): Kalyani Dhananjay Kadam, Swati Ahirrao, Ketan Kotecha

With the technological advancements of the modern era, the easy availability of image-editing tools has dramatically lowered the cost, effort, and expertise needed to create and spread persuasive visual tampering. Through widely used online platforms such as Facebook, Twitter, and Instagram, manipulated images are distributed worldwide, and users of these platforms may be unaware of their existence and spread. Such images have a significant impact on society and can mislead decision-making in areas such as health care, sports, and crime investigation. Altered images can also propagate misleading information that interferes with democratic processes (e.g., elections and government legislation) and crisis situations (e.g., pandemics and natural disasters). There is therefore a pressing need for effective methods to detect and identify forgeries. Traditional techniques depend on handcrafted or shallow-learning features; selecting features from images is challenging, because the researcher must decide which features matter, and when the number of features to extract is large, feature extraction becomes time-consuming and tedious. Deep learning networks have recently shown remarkable performance in extracting complicated statistical characteristics from large inputs and efficiently learn the underlying hierarchical representations, but the networks typically used for forgery detection are expensive in terms of parameter count, storage, and computational cost. This work presents Mask R-CNN with a MobileNet backbone, a lightweight model, to detect and identify copy-move and image-splicing forgeries. We compare the proposed model with ResNet-101 on seven standard datasets. Our lightweight model outperforms on the COVERAGE and MICCF2000 datasets for copy-move forgery and on the COLUMBIA dataset for image splicing. The method also provides a forged-percentage score for a region in an image.
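The paper's exact configuration is not given in the abstract, but pairing torchvision's generic Mask R-CNN head with a MobileNetV2 feature extractor is one way to build a lightweight instance-segmentation model of the kind described. The sketch below follows that pattern; the class count (background + forged region), anchor sizes, and pooling settings are assumptions for illustration only.

```python
import torch
import torchvision
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator

# MobileNetV2 features as a single-level backbone; MaskRCNN needs its output channel count.
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)
mask_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"], output_size=14, sampling_ratio=2)

model = MaskRCNN(backbone,
                 num_classes=2,  # background + forged region (assumed labelling)
                 rpn_anchor_generator=anchor_generator,
                 box_roi_pool=roi_pooler,
                 mask_roi_pool=mask_pooler)

model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])  # list of dicts with boxes, labels, scores, masks
```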


2022, Vol 14
Author(s): Oshri Avraham, Pan-Yue Deng, Dario Maschi, Vitaly A. Klyachko, Valeria Cavalli

Among the most prevalent deficits in individuals with Fragile X syndrome (FXS) are hypersensitivity to sensory stimuli and somatosensory alterations. Whether dysfunction in the peripheral sensory system contributes to these deficits remains poorly understood. Satellite glial cells (SGCs), which envelop the somata of sensory neurons, play critical roles in regulating neuronal function and excitability, yet their potential contribution to sensory deficits in FXS remains unexplored. Here we found major structural defects in the sensory neuron-SGC association in the dorsal root ganglia (DRG), manifested as aberrant coverage of the neuron and gaps between SGCs and the neuron along their contact surface. Single-cell RNA-seq analyses demonstrated transcriptional changes in both neurons and SGCs, indicative of defects in neuronal maturation and altered SGC vesicular secretion. We validated these changes using fluorescence microscopy, qPCR, and high-resolution transmission electron microscopy (TEM) combined with computational analyses based on deep learning networks. These results reveal a disrupted neuron-glia association at both the structural and functional levels. Given the well-established role of SGCs in regulating sensory neuron function, altered neuron-glia association may contribute to the sensory deficits in FXS.


RSC Advances, 2022, Vol 12 (3), pp. 1769-1776
Author(s): Ruizhao Yang, Yun Li, Binyi Qin, Di Zhao, Yongjin Gan, ...

We propose a WGAN-ResNet method, which combines two deep learning networks, the Wasserstein generative adversarial network (WGAN) and the residual neural network (ResNet), to detect carbendazim based on terahertz spectroscopy.
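The abstract does not specify how the two networks are combined. A common pattern is to use a WGAN to generate synthetic 1-D spectra for data augmentation and a ResNet for classification. The sketch below illustrates only the Wasserstein objective on 1-D spectra; the spectrum length, layer widths, and overall role of the WGAN are assumptions, not the published design.

```python
import torch
import torch.nn as nn

SPEC_LEN, LATENT = 512, 64  # assumed spectrum length and latent size

generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, SPEC_LEN))
critic    = nn.Sequential(nn.Linear(SPEC_LEN, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

def critic_loss(real, fake):
    # Wasserstein critic objective: maximise critic(real) - critic(fake), i.e. minimise the
    # negated difference (weight clipping or a gradient penalty would enforce 1-Lipschitz).
    return critic(fake).mean() - critic(real).mean()

def generator_loss(fake):
    return -critic(fake).mean()

real = torch.randn(8, SPEC_LEN)           # a batch of measured spectra (placeholder data)
fake = generator(torch.randn(8, LATENT))  # synthetic spectra, e.g. for augmenting training data
d_loss, g_loss = critic_loss(real, fake), generator_loss(fake)
```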


2022, Vol 70 (2), pp. 3589-3607
Author(s): Sengul Bayrak, Eylem Yucel, Hidayet Takci

Biology, 2021, Vol 11 (1), pp. 15
Author(s): Khalil ur Rehman, Jianqiang Li, Yan Pei, Anaa Yasin, Saqib Ali, ...

Architectural distortion (AD) is the third most suspicious appearance on a mammogram indicating abnormal regions. Detecting AD in mammograms is challenging because of its subtle and varying asymmetry on the breast mass and its small size. Automatic detection of abnormal AD regions with computer algorithms at an early stage could assist radiologists and doctors. Detection of the star-shaped AD regions of interest (ROIs), noise removal, and object localization all affect classification performance and can reduce accuracy. Computer-vision techniques automatically remove noise and locate objects across varying patterns. This study addresses the gap in detecting AD ROIs from mammograms using computer vision, and proposes an automated computer-aided diagnostic system based on architectural distortion that uses computer vision and deep learning to predict breast cancer from digital mammograms. The proposed mammogram classification framework comprises four steps: image preprocessing, augmentation and pixel-wise image segmentation, AD ROI detection, and training deep learning and machine learning networks to classify the AD ROIs into malignant and benign classes. The method was evaluated on three databases of mammogram images, PINUM, CBIS-DDSM, and DDSM, using computer vision and a depth-wise 2D V-net 64 convolutional neural network, achieving accuracies of 0.95, 0.97, and 0.98, respectively. Experimental results show that the proposed method outperforms ShuffleNet, MobileNet, SVM, K-NN, RF, and previous studies.
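The "depth-wise 2D V-net 64" architecture is not detailed in the abstract. As a small illustration of the depth-wise idea it names, the sketch below shows a depthwise-separable 2-D convolution block (a per-channel spatial filter followed by a 1x1 pointwise mix); the channel count of 64 echoes the abstract, and everything else is an assumption rather than the authors' network.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative depthwise-separable 2-D convolution block (not the published architecture)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution operate on each channel independently
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # mixes channels at each pixel
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparableConv(64, 64)
features = block(torch.randn(1, 64, 128, 128))  # e.g. feature maps from a mammogram ROI
```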

