Deblending and classifying astronomical sources with Mask R-CNN deep learning

2019 ◽  
Vol 490 (3) ◽  
pp. 3952-3965 ◽  
Author(s):  
Colin J Burke ◽  
Patrick D Aleo ◽  
Yu-Ching Chen ◽  
Xin Liu ◽  
John R Peterson ◽  
...  

ABSTRACT We apply a new deep learning technique to detect, classify, and deblend sources in multiband astronomical images. We train and evaluate the performance of an artificial neural network built on the Mask Region-based Convolutional Neural Network image processing framework, a general code for efficient object detection, classification, and instance segmentation. After evaluating the performance of our network against simulated ground truth images for star and galaxy classes, we find a precision of 92 per cent at 80 per cent recall for stars and a precision of 98 per cent at 80 per cent recall for galaxies in a typical field with ∼30 galaxies arcmin^-2. We investigate the deblending capability of our code, and find that clean deblends are handled robustly during object masking, even for significantly blended sources. This technique, or extensions using similar network architectures, may be applied to current and future deep imaging surveys such as the Large Synoptic Survey Telescope and the Wide-Field Infrared Survey Telescope. Our code, astro r-cnn, is publicly available at https://github.com/burke86/astro_rcnn.
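As a rough illustration of the kind of framework the abstract describes, the sketch below builds a Mask R-CNN detector with torchvision for a three-class problem (background, star, galaxy) and runs it on a dummy multiband cutout. This is not the authors' astro r-cnn code; the class ordering, band count, and score threshold are assumptions.

```python
# Minimal Mask R-CNN sketch (torchvision), assuming 3 classes and a 3-band cutout.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

NUM_CLASSES = 3  # 0 = background, 1 = star, 2 = galaxy (assumed ordering)

model = maskrcnn_resnet50_fpn(num_classes=NUM_CLASSES)
model.eval()

# Pretend multiband cutout: 3 bands stacked as channels, values scaled to [0, 1].
image = torch.rand(3, 512, 512)

with torch.no_grad():
    predictions = model([image])[0]

# Each detection carries a box, class label, confidence score, and a soft
# instance mask that can be thresholded to separate overlapping sources.
for box, label, score, mask in zip(predictions["boxes"], predictions["labels"],
                                   predictions["scores"], predictions["masks"]):
    if score > 0.5:
        print(label.item(), score.item(), box.tolist())
```

In practice the network would first be trained on simulated ground-truth masks, as the abstract describes, before the per-source masks become useful for deblending.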

2021 ◽  
pp. 1-12
Author(s):  
Gaurav Sarraf ◽  
Anirudh Ramesh Srivatsa ◽  
MS Swetha

With the ever-rising threat to security, multiple industries are always in search of safer communication techniques for data both at rest and in transit. Multiple security institutions agree that any system's security can be modeled around three major concepts: Confidentiality, Availability, and Integrity. We try to reduce the holes in these concepts by developing a deep learning based steganography technique. In our study, we have seen that data compression has to be at the heart of any sound steganography system. In this paper, we show that it is possible to compress and encode data efficiently to solve critical problems of steganography. The deep learning technique, which comprises an autoencoder with a convolutional neural network as its building block, not only compresses the secret file but also learns how to hide the compressed data in the cover file efficiently. The proposed technique can encode secret files of the same size as the cover or, in some rare cases, even larger files. We also show that the same model architecture can theoretically be applied to any file type. Finally, we show that our proposed technique surreptitiously evades all popular steganalysis techniques.
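A minimal Keras sketch of the general idea, not the authors' exact architecture: a small convolutional encoder compresses the secret, a hiding network embeds it in the cover, and a reveal network recovers it. The image size, layer widths, and equal loss weighting are illustrative assumptions.

```python
# Sketch of an autoencoder-style steganography model, assuming 64x64 RGB images.
import tensorflow as tf
from tensorflow.keras import layers, Model

H = W = 64  # assumed image size

secret = layers.Input(shape=(H, W, 3), name="secret")
cover = layers.Input(shape=(H, W, 3), name="cover")

# Encoder: compress the secret into a compact feature map.
x = layers.Conv2D(32, 3, padding="same", activation="relu")(secret)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)

# Hiding network: merge the compressed secret with the cover to form the stego image.
merged = layers.Concatenate()([cover, x])
h = layers.Conv2D(32, 3, padding="same", activation="relu")(merged)
stego = layers.Conv2D(3, 3, padding="same", activation="sigmoid", name="stego")(h)

# Reveal network: reconstruct the secret from the stego image alone.
r = layers.Conv2D(32, 3, padding="same", activation="relu")(stego)
revealed = layers.Conv2D(3, 3, padding="same", activation="sigmoid", name="revealed")(r)

model = Model(inputs=[secret, cover], outputs=[stego, revealed])
# Two reconstruction losses: keep the stego image close to the cover, and
# the revealed output close to the original secret.
model.compile(optimizer="adam", loss={"stego": "mse", "revealed": "mse"})
model.summary()
```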


2021 ◽  
Vol 14 (2) ◽  
pp. 93
Author(s):  
Kristina Gorshkova ◽  
Victoria Zueva ◽  
Maria Kuznetsova ◽  
Larisa Tugashova

2019 ◽  
Vol 26 (6) ◽  
pp. 580-581
Author(s):  
Anne Cocos ◽  
Alexander G Fiks ◽  
Aaron J Masino

Abstract We appreciate the detailed review provided by Magge et al1 of our article, “Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts.”2 In their letter, they present a subjective criticism that rests on concerns about our dataset composition and potential misinterpretation of comparisons to existing methods. Our article underwent two rounds of extensive peer review and has been cited 28 times1 in the nearly 2 years since it was published online (February 2017). Neither the reviewers nor the citing authors raised similar concerns. There are, however, portions of the commentary that highlight areas of our work that would benefit from further clarification.


2018 ◽  
Author(s):  
Yan Yan ◽  
Douglas H. Roossien ◽  
Benjamin V. Sadis ◽  
Jason J. Corso ◽  
Dawen Cai

Abstract Neuronal morphology reconstruction in fluorescence microscopy 3D images is essential for analyzing neuronal cell type and connectivity. Manual tracing of neurons in these images is time consuming and subjective. Automated tracing is highly desired yet is one of the foremost challenges in computational neuroscience. The multispectral labeling technique Brainbow utilizes high-dimensional spectral information to distinguish intermingled neuronal processes. It is particularly interesting to develop new algorithms that include the spectral information in the tracing process. Recently, deep learning approaches have achieved state-of-the-art results in different computer vision and medical imaging applications. To benefit from the power of deep learning, in this paper we propose an automated neural tracing approach for multispectral 3D Brainbow images based on a recurrent neural network. We first adopt the VBM4D approach to denoise the multispectral 3D images. Then we generate cubes as training samples along the ground-truth, manually traced paths. These cubes are the input to the recurrent neural network. The proposed approach is simple and effective, and can be implemented with the deep learning toolbox 'Keras' in 100 lines. Finally, to evaluate our approach, we computed the average and standard deviation of the DIADEM metric from the ground truth to our tracing results, and from our tracing results to the ground truth. Extensive experimental results on the collected dataset demonstrate that the proposed approach performs well on Brainbow-labeled mouse brain images.
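For concreteness, here is a minimal Keras sketch of the kind of recurrent tracer the abstract describes: a sequence of cubes sampled along a path is encoded by a time-distributed 3D CNN and fed to an LSTM that predicts the next tracing step. The cube size, sequence length, channel count, and displacement output are assumptions, not the authors' exact settings.

```python
# Recurrent tracing sketch: 3D CNN per cube, LSTM along the path (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN = 8     # number of cubes along the path (assumed)
CUBE = 16       # cube edge length in voxels (assumed)
CHANNELS = 3    # spectral channels in the Brainbow image (assumed)

inputs = layers.Input(shape=(SEQ_LEN, CUBE, CUBE, CUBE, CHANNELS))

# Encode each cube independently with a small 3D CNN.
x = layers.TimeDistributed(layers.Conv3D(16, 3, activation="relu", padding="same"))(inputs)
x = layers.TimeDistributed(layers.MaxPooling3D(2))(x)
x = layers.TimeDistributed(layers.Flatten())(x)

# The LSTM integrates context along the traced path.
x = layers.LSTM(64)(x)

# Predict the displacement (dx, dy, dz) to the next point on the trace.
outputs = layers.Dense(3, name="next_step")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```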


2019 ◽  
Vol 1 (Supplement_1) ◽  
pp. i20-i21
Author(s):  
Min Zhang ◽  
Geoffrey Young ◽  
Huai Chen ◽  
Lei Qin ◽  
Xinhua Cao ◽  
...  

Abstract BACKGROUND AND OBJECTIVE: Brain metastases have been found to account for one-fourth of all cancer metastases seen in clinics. Magnetic resonance imaging (MRI) is widely used for detecting brain metastases. Accurate detection of brain metastases is critical for designing radiotherapy to treat the cancer and for monitoring their progression, response to therapy, and prognosis. However, finding metastases on brain MRI is very challenging, as many metastases are small and manifest as objects of weak contrast on the images. In this work we present a deep learning approach integrated with a classification scheme to detect cancer metastases to the brain on MRI. MATERIALS AND METHODS: We retrospectively extracted 101 patients with metastases, corresponding to 1535 metastases on 10192 image slices in a total of 336 scans, from our PACS and manually marked the lesions on T1-weighted contrast-enhanced MRI as the ground truth. We then randomly separated the cases into training, validation, and test sets for developing and optimizing the deep learning neural network. We designed a 2-step computer-aided detection (CAD) pipeline by first applying a fast region-based convolutional neural network (R-CNN) method to sequentially process each slice of an axial brain MRI to find abnormal hyper-intensities that may correspond to brain metastases and, second, applying a random under-sampling boost (RUSBoost) classification method to reduce the false positive detections. RESULTS: The computational pipeline was tested on real brain images. The proposed method achieved a sensitivity of 97.28% and a false positive rate of 36.25 per scan. CONCLUSION: Our results demonstrate that the deep learning-based method can detect metastases in very challenging cases and can serve as a CAD tool to help radiologists interpret brain MRIs in a time-constrained environment.
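The second stage of the pipeline can be sketched with the RUSBoostClassifier from imbalanced-learn: candidate detections are described by a few features and the booster learns to reject false positives despite the heavy class imbalance. The feature set and the synthetic data below are illustrative assumptions, not the study's actual features.

```python
# RUSBoost false-positive reduction sketch on placeholder candidate features.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend candidate features: [detection score, lesion area, mean intensity].
n_candidates = 2000
X = rng.random((n_candidates, 3))
# Imbalanced labels: most candidates are false positives (label 0).
y = (rng.random(n_candidates) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# RUSBoost combines random under-sampling of the majority class with boosting,
# which suits the heavy false-positive imbalance of a detection pipeline.
clf = RUSBoostClassifier(random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```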


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Jordan Ott ◽  
Mike Pritchard ◽  
Natalie Best ◽  
Erik Linstead ◽  
Milan Curcic ◽  
...  

Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search of more than one hundred candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the model’s emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of the optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability, including some with reduced error, for an especially challenging training dataset.
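The Keras side of the workflow might look like the sketch below: a small fully connected emulator is trained in Python and saved to HDF5, after which FKB's conversion utility (not shown here) translates the saved model for use from Fortran. The layer sizes, placeholder data, and file name are assumptions for illustration.

```python
# Keras-side sketch: train a toy emulator and save it to HDF5 for conversion.
import numpy as np
from tensorflow.keras import layers, models

# Toy subgrid-physics emulator: map a column of inputs to a column of tendencies.
model = models.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64),
])
model.compile(optimizer="adam", loss="mse")

# Fit on placeholder data just to produce trained weights.
X = np.random.rand(1024, 64).astype("float32")
y = np.random.rand(1024, 64).astype("float32")
model.fit(X, y, epochs=1, batch_size=64, verbose=0)

# Save in HDF5 format; FKB provides a converter that turns this file into a
# representation its Fortran neural-network library can load at run time.
model.save("emulator.h5")
```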


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Jin-Woong Lee ◽  
Woon Bae Park ◽  
Jin Hee Lee ◽  
Satendra Pal Singh ◽  
Kee-Sun Sohn

Abstract Here we report a facile, prompt protocol based on deep-learning techniques to sort out intricate phase identification and quantification problems in complex multiphase inorganic compounds. We simulate plausible powder X-ray diffraction (XRD) patterns for 170 inorganic compounds in the Sr-Li-Al-O quaternary compositional pool, wherein promising LED phosphors have recently been discovered. Finally, 1,785,405 synthetic XRD patterns are prepared by combinatorially mixing the simulated powder XRD patterns of the 170 inorganic compounds. Convolutional neural network (CNN) models are built and eventually trained using this large prepared dataset. The fully trained CNN model promptly and accurately identifies the constituent phases in complex multiphase inorganic compounds. Although the CNN is trained using the simulated XRD data, a test with real experimental XRD data returns an accuracy of nearly 100% for phase identification and 86% for three-step phase-fraction quantification.
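A minimal sketch of the two ingredients described above, under assumed sizes: synthetic multiphase patterns are generated as random convex mixtures of single-phase patterns, and a small 1D CNN is trained to predict which phases are present as a multi-label output. The pattern length, mixing rule, and network depth are illustrative assumptions.

```python
# Synthetic multiphase XRD mixing plus a 1D CNN phase classifier (assumed sizes).
import numpy as np
from tensorflow.keras import layers, models

N_PHASES = 170     # phases in the compositional pool
N_POINTS = 2048    # 2-theta sampling points (assumed)

# Placeholder single-phase patterns; in practice these come from simulation.
single_phase = np.random.rand(N_PHASES, N_POINTS).astype("float32")

def mix_patterns(n_samples, max_phases=3, rng=np.random.default_rng(0)):
    """Create synthetic multiphase patterns as random convex mixtures."""
    X = np.zeros((n_samples, N_POINTS), dtype="float32")
    y = np.zeros((n_samples, N_PHASES), dtype="float32")
    for i in range(n_samples):
        phases = rng.choice(N_PHASES, size=rng.integers(1, max_phases + 1), replace=False)
        weights = rng.dirichlet(np.ones(len(phases)))
        X[i] = weights @ single_phase[phases]
        y[i, phases] = 1.0
    return X[..., None], y  # add a channel axis for the 1D CNN

X_train, y_train = mix_patterns(4096)

model = models.Sequential([
    layers.Input(shape=(N_POINTS, 1)),
    layers.Conv1D(32, 9, strides=2, activation="relu"),
    layers.Conv1D(64, 9, strides=2, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_PHASES, activation="sigmoid"),  # multi-label phase presence
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=1, batch_size=64, verbose=0)
```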


2021 ◽  
Author(s):  
Ghassan Mohammed Halawani

The main purpose of this project is to modify a convolutional neural network for image classification, based on a deep-learning framework. A transfer learning technique is used, via the MATLAB interface to AlexNet, to retrain the parameters in the last two fully connected layers of AlexNet on a new dataset and perform classification of thousands of images. First, the general common architecture of most neural networks and its benefits are presented. The mathematical models and the role of each part of a neural network are explained in detail. Second, different neural networks are studied in terms of architecture, application, and working method to highlight the strengths and weaknesses of each network. The final part presents a detailed study of one of the most powerful deep-learning networks for image classification, the convolutional neural network, and how it can be modified to suit different classification tasks by using a transfer learning technique in MATLAB.
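The project itself works in MATLAB; a conceptually analogous transfer-learning setup in PyTorch (an assumption for illustration, not the author's code) would load a pretrained AlexNet, freeze the convolutional features, and replace the last two fully connected layers for the new task:

```python
# Transfer-learning sketch with a pretrained AlexNet (requires a recent torchvision).
import torch.nn as nn
from torchvision.models import alexnet

NUM_CLASSES = 10  # assumed number of classes in the new dataset

model = alexnet(weights="IMAGENET1K_V1")  # downloads ImageNet weights

# Freeze the pretrained convolutional feature extractor.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last two fully connected layers so they can be retrained.
model.classifier[4] = nn.Linear(4096, 4096)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Only unfrozen parameters are updated during fine-tuning.
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```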

