Multimodal Semisupervised Deep Graph Learning for Automatic Precipitation Nowcasting

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Kaichao Miao ◽  
Wei Wang ◽  
Rui Hu ◽  
Lei Zhang ◽  
Yali Zhang ◽  
...  

Precipitation nowcasting plays a key role in land security and the emergency management of natural disasters. Most existing deep learning-based techniques realize precipitation nowcasting by learning a deep nonlinear function from a single information source, e.g., weather radar. In this study, we propose a novel multimodal semisupervised deep graph learning framework for precipitation nowcasting. Unlike existing studies, different modalities of observation data (including both meteorological and nonmeteorological data) are modeled jointly, thereby benefiting each other. All information is converted into image structures, and precipitation nowcasting is then treated as a computer vision task to be optimized. To handle areas where precipitation observations are unavailable, we convert all observation information into a graph structure and introduce a semisupervised graph convolutional network with a sequence-connect architecture to learn the features of all local areas. With the learned features, precipitation is predicted through a multilayer fully connected regression network. Experiments on real datasets confirm the effectiveness of the proposed method.
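As a rough illustration of the graph-convolution step described above (a sketch, not the authors' implementation; the toy graph, features, and weights are invented for demonstration), a single symmetrically normalized GCN layer over local areas can be written as:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph convolution: normalized adjacency x features x weights, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)  # D^(-1/2)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU activation

# Toy graph: 3 local areas in a line, each with a 2-dim observation vector.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
weight = np.eye(2)  # identity weights keep the example easy to inspect
hidden = gcn_layer(adj, feats, weight)
```

Stacking such layers and feeding the node features into a fully connected regression head would mirror the pipeline the abstract outlines.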

2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Yue Liu ◽  
Junqi Ma ◽  
Xingzhen Tao ◽  
Jingyun Liao ◽  
Tao Wang ◽  
...  

In the era of digital manufacturing, the huge amounts of image data generated by manufacturing systems cannot be processed promptly to extract valuable information, owing to the limitations (e.g., time) of traditional image processing techniques. In this paper, we propose a novel self-supervised self-attention learning framework, TriLFrame, for image representation learning. TriLFrame is based on a hybrid architecture of convolutional networks and Transformers. Experiments show that TriLFrame outperforms state-of-the-art self-supervised methods on the ImageNet dataset and achieves competitive performance when transferring features learned on ImageNet to other classification tasks. Moreover, these results validate the proposed hybrid architecture, which combines powerful local convolutional operations with long-range nonlocal self-attention and works effectively in image representation learning tasks.
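To make the local-versus-nonlocal distinction concrete, here is a minimal numpy sketch (not the TriLFrame code; the toy inputs and identity projections are assumptions) contrasting a local convolution, where each output sees only a small window, with scaled dot-product self-attention, where every output mixes every position:

```python
import numpy as np

def local_conv1d(x, kernel):
    """Local operation: valid 1-D convolution over a sliding window."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def self_attention(tokens):
    """Nonlocal operation: scaled dot-product self-attention, identity projections."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over all positions
    return w @ tokens                   # every output attends to every input

signal = np.array([1., 2., 3., 4.])
conv_out = local_conv1d(signal, np.array([1., 1.]))  # -> [3., 5., 7.]
tokens = np.array([[1., 0.], [0., 1.], [1., 1.]])
attn_out = self_attention(tokens)
```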


BMC Cancer ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhongjian Ju ◽  
Wen Guo ◽  
Shanshan Gu ◽  
Jin Zhou ◽  
Wei Yang ◽  
...  

Abstract

Background: Accurately delineating the clinical target volume (CTV) on a patient's three-dimensional CT image is very important in the radiotherapy process. Limited by the scarcity of clinical samples and the difficulty of automatic delineation, progress on the automatic delineation of cervical cancer CTVs from CT images for new patients has been slow. This study aimed to assess the value of the Dense-Fully Connected Convolution Network (Dense V-Net) in predicting CTV pre-delineation in cervical cancer patients for radiotherapy.

Methods: In this study, we used Dense V-Net, a dense and fully connected convolutional network suited to feature learning from small samples, to automatically pre-delineate the CTV of cervical cancer patients based on computed tomography (CT) images, and then assessed the outcome. CT data from 133 patients with stage IB and IIA postoperative cervical cancer with a comparable delineation scope were enrolled in this study. One hundred and thirteen patients were randomly designated as the training set to adjust the model parameters, and twenty cases were used as the test set to assess the network performance. The 8 most representative parameters were used to assess the pre-delineation accuracy from 3 aspects: delineation similarity, delineation offset, and delineation volume difference.

Results: The DSC, DC/mm, HD/cm, MAD/mm, ∆V, SI, IncI, and JD of the CTV were 0.82 ± 0.03, 4.28 ± 2.35, 1.86 ± 0.48, 2.52 ± 0.40, 0.09 ± 0.05, 0.84 ± 0.04, 0.80 ± 0.05, and 0.30 ± 0.04, respectively, superior to the results of a single network.

Conclusions: Dense V-Net can correctly predict the CTV pre-delineation of cervical cancer patients and can be applied in clinical practice after simple modifications.
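Of the reported metrics, the DSC (Dice similarity coefficient) is the standard overlap measure for comparing a predicted contour with a reference one. A minimal sketch of its computation on binary masks (the toy masks are invented for illustration):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred  = np.array([[1, 1, 0],
                  [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.666...
```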


2021 ◽  
Author(s):  
Seshadri Ramana K ◽  
Bala Chowdappa K ◽  
Obulesu ooruchintala ◽  
Deena Babu Mandru ◽  
kallam suresh

Abstract: Cancer is uncontrolled cell growth in any part of the body. Early cancer detection aims to identify patients who exhibit symptoms early on, in order to maximise their chances of successful treatment. Cancer mortality is reduced through early detection and treatment. Numerous researchers have proposed a variety of image processing and machine learning approaches for cancer detection; however, existing systems have not sufficiently improved detection accuracy or efficiency. A Deep Convolutional Neural Learning Classifier Model based on the Least Mean Square Filterative Ricker Wavelet Transform (L-DCNLC) is proposed to address these issues. The L-DCNLC Model's primary objective is to detect cancer earlier by utilising a fully connected max pooling deep convolutional network with increased accuracy and reduced time consumption. The fully connected max pooling deep convolutional network is composed of one input layer, three hidden layers, and one output layer. Initially, the input layer of the L-DCNLC Model takes the patient images in the database as input.
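As a small illustration of the max pooling operation in the network described above (a generic sketch, not the L-DCNLC implementation; the toy image is invented), non-overlapping 2x2 pooling keeps the strongest response in each window:

```python
import numpy as np

def max_pool2d(img, size=2):
    """Non-overlapping max pooling over a single-channel image."""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]  # crop to a multiple of the window size
    return img.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.array([[1., 2., 0., 1.],
                [3., 4., 1., 0.],
                [0., 1., 5., 6.],
                [2., 0., 7., 8.]])
pooled = max_pool2d(img)  # -> [[4., 1.], [2., 8.]]
```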


2020 ◽  
Author(s):  
Sam Gelman ◽  
Philip A. Romero ◽  
Anthony Gitter

Abstract: The mapping from protein sequence to function is highly complex, making it challenging to predict how sequence changes will affect a protein's behavior and properties. We present a supervised deep learning framework that learns the sequence-function mapping from deep mutational scanning data and makes predictions for new, uncharacterized sequence variants. We test multiple neural network architectures, including a graph convolutional network that incorporates protein structure, to explore how a network's internal representation affects its ability to learn the sequence-function mapping. Our supervised learning approach outperforms physics-based and unsupervised prediction methods. We find that networks which capture nonlinear interactions and share parameters across sequence positions are important for learning the relationship between sequence and function. Further analysis of the trained models reveals the networks' ability to learn biologically meaningful information about protein structure and mechanism. Our software is available from https://github.com/gitter-lab/nn4dms.
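A common first step in learning a sequence-function mapping is encoding each variant numerically; a minimal sketch of one-hot encoding a protein sequence (a generic illustration, not necessarily the exact encoding used in nn4dms):

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot(seq):
    """Encode a protein sequence as a flat (len(seq) * 20) one-hot vector."""
    index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    enc = np.zeros((len(seq), len(AMINO_ACIDS)))
    enc[np.arange(len(seq)), [index[aa] for aa in seq]] = 1.0
    return enc.ravel()

x = one_hot("ACDW")  # 4 residues -> an 80-dim feature vector with four ones
```

Vectors like this, one per mutational-scanning variant, can then be regressed against measured functional scores.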


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Fang Su ◽  
Hai-Yang Shang ◽  
Jing-Yan Wang

In this paper, we propose a novel multitask learning method based on a deep convolutional network. The proposed network has four convolutional layers, three max-pooling layers, and two parallel fully connected layers. To adapt the network to the multitask learning problem, we learn a low-rank deep network so that the relations among different tasks can be explored. Specifically, we minimize the number of independent rows of one fully connected layer's parameter matrix, measured by its nuclear norm, to seek a low-rank parameter matrix that captures the relations among tasks. Meanwhile, we regularize the other fully connected layer with a sparsity penalty so that the useful features learned by the lower layers can be selected. The learning problem is solved by an iterative algorithm based on gradient descent and back-propagation. The proposed algorithm is evaluated on benchmark datasets for multiple face attribute prediction, multitask natural language processing, and joint economic index prediction. The evaluation results show the advantage of the low-rank deep CNN model on multitask problems.
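The two regularizers described above can be sketched directly: the nuclear norm (sum of singular values) promotes a low-rank parameter matrix, while an L1 penalty promotes sparsity. A minimal numpy illustration (the toy weight matrix is invented for demonstration):

```python
import numpy as np

def nuclear_norm(w):
    """Low-rank penalty: sum of the singular values of the parameter matrix."""
    return np.linalg.svd(w, compute_uv=False).sum()

def sparsity_penalty(w):
    """L1 penalty encouraging feature selection in the other fully connected layer."""
    return np.abs(w).sum()

w = np.array([[1., 0.],
              [0., 1.]])
low_rank_term = nuclear_norm(w)    # singular values [1, 1] -> 2.0
sparse_term = sparsity_penalty(w)  # -> 2.0
```

In training, both terms would be added to the task losses with trade-off weights and minimized jointly by gradient descent.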


Information ◽  
2020 ◽  
Vol 11 (11) ◽  
pp. 525
Author(s):  
Franz Hell ◽  
Yasser Taha ◽  
Gereon Hinz ◽  
Sabine Heibei ◽  
Harald Müller ◽  
...  

Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. Adapting these methods to pharmacy product cross-selling recommendation tasks with a million products and hundreds of millions of sales remains a challenge, due to the intricate medical and legal properties of pharmaceutical data. To tackle this challenge, we developed a graph convolutional network (GCN) algorithm called PharmaSage, which uses graph convolutions to generate embeddings for pharmacy products that are then used in a downstream recommendation task. In the underlying graph, we incorporate cross-sales information from sales transactions in the graph structure, as well as product information as node features. Via modifications to the sampling involved in the network optimization process, we address a common phenomenon in recommender systems, the so-called popularity bias: popular products are frequently recommended, while less popular items are often neglected and recommended seldom or not at all. We deployed PharmaSage using real-world sales data and trained it on 700,000 articles represented as nodes in a graph, with edges representing approximately 100 million sales transactions. By exploiting pharmaceutical product properties, such as indications, ingredients, and adverse effects, and combining these with large sales histories, we achieved better results than with a purely statistics-based approach. To our knowledge, this is the first application of deep graph embeddings to pharmacy product cross-selling recommendation at this scale to date.
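One common way to soften popularity bias in sampling, shown here as a generic sketch rather than PharmaSage's actual modification, is to flatten the item distribution by raising sale counts to an exponent below 1 before normalizing:

```python
import numpy as np

def flattened_sampling_probs(sale_counts, alpha=0.75):
    """Raise counts to alpha < 1 so popular items dominate sampling less."""
    p = np.asarray(sale_counts, dtype=float) ** alpha
    return p / p.sum()

counts = np.array([1000, 100, 10, 1])  # highly skewed popularity
probs = flattened_sampling_probs(counts)
# The most/least popular sampling ratio shrinks from 1000:1 to about 178:1.
```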


Author(s):  
Richard Kurle ◽  
Stephan Günnemann ◽  
Patrick Van der Smagt

Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of the various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between the individual sources' posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.
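For Gaussian posterior approximations, one standard way to integrate per-source beliefs, offered here as a generic sketch rather than the paper's exact method, is precision-weighted (product-of-experts) fusion, where more confident sources pull the combined belief harder:

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Precision-weighted (product-of-experts) fusion of per-source 1-D Gaussians."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Two equally confident sources that disagree: the fused belief sits between them
# and is tighter than either individual belief.
mean, var = fuse_gaussians([0.0, 2.0], [1.0, 1.0])  # -> mean 1.0, var 0.5
```

A large divergence between the individual beliefs, as in this example, is exactly the kind of signal a conflict-detection mechanism could exploit.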


Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Peixian Liang ◽  
Zhuo Zhao ◽  
...  

3D image segmentation plays an important role in biomedical image analysis. Many 2D and 3D deep learning models have achieved state-of-the-art segmentation performance on 3D biomedical image datasets. Yet, 2D and 3D models have their own strengths and weaknesses, and by unifying them, one may be able to achieve more accurate results. In this paper, we propose a new ensemble learning framework for 3D biomedical image segmentation that combines the merits of 2D and 3D models. First, we develop a fully convolutional network-based meta-learner that learns how to improve the results from the 2D and 3D models (base-learners). Then, to minimize over-fitting for our sophisticated meta-learner, we devise a new training method that uses the results of the base-learners as multiple versions of "ground truths". Furthermore, since our new meta-learner training scheme does not depend on manual annotation, it can utilize abundant unlabeled 3D image data to further improve the model. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our approach is effective under fully-supervised, semisupervised, and transductive settings, and attains superior performance over state-of-the-art image segmentation methods.
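The meta-learner training idea, using base-learner outputs as multiple versions of "ground truth", can be sketched generically (the toy probability maps are invented; this is not the authors' code):

```python
import numpy as np

def pseudo_ground_truths(pred_2d, pred_3d):
    """Treat each base-learner's probability map as one version of 'ground truth',
    plus their average as an extra soft target for the meta-learner."""
    return [pred_2d, pred_3d, (pred_2d + pred_3d) / 2.0]

p2d = np.array([[0.9, 0.1],
                [0.2, 0.8]])  # 2D base-learner probabilities for one slice
p3d = np.array([[0.7, 0.3],
                [0.4, 0.6]])  # 3D base-learner probabilities for the same region
targets = pseudo_ground_truths(p2d, p3d)
```

Because such targets come from model outputs rather than annotations, the scheme extends naturally to unlabeled volumes, as the abstract notes.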


2017 ◽  
Vol 77 (8) ◽  
pp. 9943-9957
Author(s):  
Wangjie Sun ◽  
Shuxia Pan
