Bars formed in galaxy merging and their classification with deep learning

2020 ◽  
Vol 641 ◽  
pp. A77
Author(s):  
M. K. Cavanagh ◽  
K. Bekki

Context. Stellar bars are a common morphological feature of spiral galaxies. While it is known that they can form in isolation or be induced tidally, few studies have explored the production of stellar bars in galaxy merging. We investigate bar formation in galaxy merging using methods from deep learning to analyse our N-body simulations.
Aims. The primary aim is to determine the constraints on the mass ratio and orientations of merging galaxies that are most conducive to bar formation. We further aim to explore whether it is possible to classify simulated barred spiral galaxies based on the mechanism of their formation. We test the feasibility of this new classification schema with simulated galaxies.
Methods. Using a set of 29,400 images obtained from our simulations, we first trained a convolutional neural network to distinguish between barred and non-barred galaxies. We then tested the network on simulations with different mass ratios and spin angles. We adapted the core neural network architecture for use with our additional aims.
Results. We find a strong inverse relationship between the mass ratio and the number of bars produced. We also identify two distinct phases in the bar formation process: (1) the initial, tidally induced formation pre-merger, and (2) the destruction and/or regeneration of the bar during and after the merger.
Conclusions. Mergers with low mass ratios and closely aligned orientations are considerably more conducive to bar formation than equal-mass mergers. We demonstrate the flexibility of our deep learning approach by showing that it is feasible to classify bars based on their formation mechanism.

2020 ◽  
Vol 6 (3) ◽  
pp. 501-504
Author(s):  
Dennis Schmidt ◽  
Andreas Rausch ◽  
Thomas Schanze

Abstract
The Institute of Virology at the Philipps-Universität Marburg is currently researching possible drugs to combat the Marburg virus. This involves classifying cell structures based on fluorescence microscopy image sequences. Conventionally, cell membranes must be marked for better analysis, which is time-consuming. In this work, an approach is presented to identify cell structures in images in which only subviral particles are marked. It could be shown that there is a correlation between the distribution of subviral particles in an infected cell and the position of the cell's structures. The segmentation is performed with a Mask R-CNN algorithm, presented in this work. The model (a region-based convolutional neural network) is applied to enable robust and fast recognition of cell structures. Furthermore, the network architecture is described. The proposed method is tested on data evaluated by experts. The results show high potential and demonstrate that the method is suitable.


Neurosurgery ◽  
2020 ◽  
Vol 67 (Supplement_1) ◽  
Author(s):  
Syed M Adil ◽  
Lefko T Charalambous ◽  
Kelly R Murphy ◽  
Shervin Rahimpour ◽  
Stephen C Harward ◽  
...  

Abstract
INTRODUCTION
Opioid misuse persists as a public health crisis affecting approximately one in four Americans. Spinal cord stimulation (SCS) is a neuromodulation strategy for treating chronic pain, with one goal being decreased opioid consumption. Accurate prognostication about SCS success is key to optimizing surgical decision-making for both physicians and patients. Deep learning, using neural network models such as the multilayer perceptron (MLP), enables accurate prediction of non-linear patterns and has widespread applications in healthcare.
METHODS
The IBM MarketScan® database was queried for all patients ≥18 years old undergoing SCS from January 2010 to December 2015. Patients were categorized into opioid dose groups as follows: No Use, ≤20 morphine milligram equivalents (MME), 20–50 MME, 50–90 MME, and >90 MME. We defined “opiate weaning” as moving into a lower opioid dose group (or remaining in the No Use group) during the 12 months following permanent SCS implantation. After pre-processing, there were 62 predictors spanning demographics, comorbidities, and pain medication history. We compared an MLP with four hidden layers to a logistic regression (LR) model with L1 regularization. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) with 5-fold nested cross-validation.
RESULTS
Ultimately, 6,124 patients were included, of whom 77% had used opioids for >90 days within the year before SCS and 72% had used >5 types of medications during the 90 days prior to SCS. The mean age was 56 ± 13 years. Collectively, 2,037 (33%) patients experienced opiate weaning. The AUC was 0.74 for the MLP and 0.73 for the LR model.
CONCLUSION
To our knowledge, we present the first use of deep learning to predict opioid weaning after SCS. Model performance was slightly better than regularized LR. Future efforts should focus on optimizing the neural network architecture and hyperparameters to further improve model performance. Models should also be calibrated and externally validated on an independent dataset. Ultimately, such tools may assist both physicians and patients in predicting opioid dose reduction after SCS.
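The comparison described above can be sketched with scikit-learn: an L1-regularized logistic regression against a four-hidden-layer MLP, scored by 5-fold cross-validated AUC. This is a minimal illustration on synthetic data, not the study's actual cohort or 62 predictors, and it uses plain (not nested) cross-validation with assumed layer widths.

```python
# Minimal sketch: MLP vs. L1-regularized logistic regression, 5-fold CV AUC.
# Synthetic data and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 62 predictors, mirroring the abstract's feature count.
X, y = make_classification(n_samples=1000, n_features=62, n_informative=20,
                           random_state=0)

# Baseline: logistic regression with L1 regularization.
lr = make_pipeline(StandardScaler(),
                   LogisticRegression(penalty="l1", solver="liblinear", C=1.0))

# A four-hidden-layer MLP; the layer widths here are assumptions.
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32, 16, 8),
                                  max_iter=500, random_state=0))

lr_auc = cross_val_score(lr, X, y, cv=5, scoring="roc_auc").mean()
mlp_auc = cross_val_score(mlp, X, y, cv=5, scoring="roc_auc").mean()
print(f"LR AUC: {lr_auc:.2f}, MLP AUC: {mlp_auc:.2f}")
```

On this synthetic task the two models score similarly, echoing the abstract's narrow 0.74 vs. 0.73 margin.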


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they are navigating. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a heavy cost in time and computational resources. Collecting large input datasets, pre-processing steps such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
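Input data augmentation of the kind proposed above can be illustrated with a few standard image transformations. The abstract does not specify which transformations the authors use, so the flips, 90-degree rotations, and brightness jitter below are illustrative assumptions.

```python
# Sketch of simple image augmentations; the specific transformations are
# assumptions, not the paper's actual mission-specific techniques.
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Return a randomly augmented copy of an HxWxC image with values in [0, 1]."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                          # horizontal flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # random 90-degree rotation
    out = np.clip(out + rng.uniform(-0.1, 0.1), 0.0, 1.0)  # brightness jitter
    return out

image = rng.random((64, 64, 3))
augmented = augment(image)
```

Applying several such random transformations to each captured frame multiplies the effective size of the training set without further data collection, which is the cost saving the study targets.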


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning features that help capture complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, with accuracy good enough to work on lightweight computational devices. The proposed model is efficient at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance was compared against other state-of-the-art models, including Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost half the computation of the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
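The grey-level co-occurrence matrix (GLCM) mentioned above is a texture descriptor: it counts how often pairs of grey levels occur at a fixed spatial offset. A minimal sketch, counting only horizontally adjacent pixels with an assumed 8-level quantization (the paper's offsets and quantization are not given in the abstract):

```python
# Minimal GLCM: co-occurrence counts of grey levels for horizontal neighbours.
# The offset (right neighbour) and 8-level quantization are assumptions.
import numpy as np

def glcm(image: np.ndarray, levels: int = 8) -> np.ndarray:
    """Normalized co-occurrence matrix for horizontally adjacent pixels.

    `image` holds values in [0, 1); entry (a, b) of the result is the
    probability that a pixel of level a has a right neighbour of level b.
    """
    q = (image * (levels - 1)).astype(int)  # quantize to discrete grey levels
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    total = m.sum()
    return m / total if total else m

image = np.random.default_rng(1).random((32, 32))
p = glcm(image)
```

Statistics derived from this matrix (contrast, homogeneity, energy) summarize texture, which is how a GLCM can track the changing appearance of a diseased region over time.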


Author(s):  
Ranganath Singari ◽  
Karun Singla ◽  
Gangesh Chawla

Deep learning has offered new avenues in the field of industrial management. Traditional methods of quality inspection, such as acceptance sampling, rely on a probabilistic measure derived from inspecting a sample of finished products. Evaluating a fixed number of products to derive the quality level for the complete batch is not a robust approach. Visual inspection solutions based on deep learning can be employed in large manufacturing units to improve quality inspection for steel surface defect detection. This leads to optimization of human capital due to the reduction in manual intervention and turnaround time in the overall supply chain of the industry. Consequently, the sample size in acceptance sampling can be increased with minimal effort, with a corresponding increase in the overall accuracy of the inspection. This work builds on a convolutional neural network, which is used to extract feature representations from grayscale images and classify the inputs into six types of surface defects. The neural network architecture is compiled in the Keras framework with a TensorFlow backend, using the state-of-the-art Nesterov-accelerated Adam (NADAM) optimizer. The proposed classification algorithm holds the potential to identify the dominant flaws in the manufacturing system responsible for leaking costs.
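A setup of the kind described can be sketched in Keras: a small CNN over grayscale inputs with a six-way softmax head, compiled with the NADAM optimizer. The layer sizes and the 128x128 input resolution are assumptions for illustration; the abstract does not give the paper's exact architecture.

```python
# Sketch of a six-class grayscale defect classifier compiled with NADAM.
# Layer widths and input size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),       # grayscale surface image
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(6, activation="softmax"),   # six surface-defect types
])
model.compile(optimizer=tf.keras.optimizers.Nadam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

NADAM is Adam with Nesterov momentum folded into the moment update, which often converges in fewer steps than plain Adam on noisy image data.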


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Zohreh Gholami Doborjeh ◽  
Nikola Kasabov ◽  
Maryam Gholami Doborjeh ◽  
Alexander Sumich

2004 ◽  
Vol 220 ◽  
pp. 277-278
Author(s):  
Glen Petitpas ◽  
Mousumi Das ◽  
Peter Teuben ◽  
Stuart Vogel

Two-dimensional velocity fields have been used to determine the dark matter properties of a sample of barred galaxies taken from the BIMA Survey of Nearby Galaxies (SONG). Preliminary results indicate that the maximal disk model is not appropriate in several galaxies in our sample, but higher resolution results will be needed to confirm this.


2020 ◽  
Vol 4 (s1) ◽  
pp. 45-46
Author(s):  
Carol Tran ◽  
Orit Glenn ◽  
Christopher Hess ◽  
Andreas Rauschecker

OBJECTIVES/GOALS
We seek to develop an automated deep learning-based method for segmentation and volumetric quantification of the fetal brain on T2-weighted fetal MRIs. We will evaluate the performance of the algorithm by comparing it to gold-standard manual segmentations. The method will be used to create a normative sample of brain volumes across gestational ages.
METHODS/STUDY POPULATION
We will adapt a U-Net convolutional neural network architecture for fetal brain MRIs using 3D volumes. After re-sampling 2D fetal brain acquisitions to 3 mm³ 3D volumes using linear interpolation, the network will be trained to perform automated brain segmentation on 40 randomly sampled, normal fetal brain MRI scans of singleton pregnancies. Training will be performed in 3 acquisition planes (axial, coronal, sagittal). Performance will be evaluated on 10 test MRIs (in 3 acquisition planes, 30 total test samples) using Dice scores, compared to radiologists’ manual segmentations. The algorithm’s performance on measuring total brain volume will also be evaluated.
RESULTS/ANTICIPATED RESULTS
Based on the success of prior U-Net architectures for volumetric segmentation tasks in medical imaging (e.g., Duong et al., 2019), we anticipate that the convolutional neural network will accurately provide segmentations and associated volumetry of fetal brains in fractions of a second. We anticipate median Dice scores greater than 0.8 across our test sample. Once validated, the method will retrospectively generate a normative database of over 1,500 fetal brain volumes across gestational ages (18 weeks to 30 weeks) collected at our institution.
DISCUSSION/SIGNIFICANCE OF IMPACT
Quantitative estimates of brain volume, and deviations from normative data, would be a major advancement in objective clinical assessments of fetal MRI. Such data can currently only be obtained through laborious manual segmentations; automated deep learning methods have the potential to reduce the time and cost of this process.
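The Dice score used to compare automated and manual segmentations is twice the overlap of the two masks divided by their total size, so 1.0 means perfect agreement and 0.0 means no overlap. A minimal sketch on binary masks:

```python
# Dice coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Two 4x4 squares offset by one pixel: 16 pixels each, 9 overlapping.
a = np.zeros((10, 10), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((10, 10), dtype=int); b[3:7, 3:7] = 1
print(dice(a, b))  # → 0.5625
```

The anticipated median above 0.8 therefore corresponds to segmentations whose volume overlap with the radiologist's mask is substantially larger than in this offset example.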


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6666
Author(s):  
Kamil Książek ◽  
Michał Romaszewski ◽  
Przemysław Głomb ◽  
Bartosz Grabowski ◽  
Michał Cholewa

In recent years, growing interest in deep learning neural networks has raised the question of how they can be used for effective processing of the high-dimensional datasets produced by hyperspectral imaging (HSI). HSI, traditionally viewed as being within the scope of remote sensing, is also used in non-invasive substance classification. One area of potential application is forensic science, where substance classification at the scene is important. An example problem from that area, blood stain classification, serves as a case study for the evaluation of methods that process hyperspectral data. To investigate deep learning classification performance for this problem, we performed experiments on a dataset which has not previously been tested using this kind of model. The dataset consists of several images with blood and blood-like substances such as ketchup, tomato concentrate, and artificial blood. To test both the classic approach to hyperspectral classification and a more realistic, application-oriented scenario, we prepared two different sets of experiments. In the first, Hyperspectral Transductive Classification (HTC), both the training and test sets come from the same image. In the second, Hyperspectral Inductive Classification (HIC), the test set is derived from a different image, which is more challenging for classifiers but more useful from the point of view of forensic investigators. We conducted the study using several architectures: 1D, 2D and 3D convolutional neural networks (CNN), a recurrent neural network (RNN) and a multilayer perceptron (MLP). The performance of the models was compared with baseline results from a Support Vector Machine (SVM). We also present a model evaluation method based on t-SNE and confusion matrix analysis that allows us to detect and eliminate some cases of model undertraining.
Our results show that in the transductive case, all models, including the MLP and the SVM, have comparable performance, with no clear advantage for the deep learning models. The Overall Accuracy across all models is 98–100% for the easier image set, and 74–94% for the more difficult one. In the more challenging inductive case, however, selected deep learning architectures offer a significant advantage; their best Overall Accuracy is in the range of 57–71%, improving on the baseline set by the non-deep models by up to 9 percentage points. We present a detailed analysis of the results and a discussion, including a summary of conclusions for each tested architecture. An analysis of per-class errors shows that the score for each class is highly model-dependent. Considering this, and the fact that the best-performing models come from two different architecture families (3D CNN and RNN), our results suggest that tailoring deep neural network architectures to hyperspectral data is still an open problem.
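The transductive/inductive distinction above can be sketched with an SVM baseline on synthetic "pixel spectra": in the transductive protocol, train and test pixels come from the same image; in the inductive protocol, the test pixels come from a held-out image whose spectra are shifted to mimic acquisition differences. The data generation here is an illustrative assumption, not the blood-stain dataset.

```python
# Transductive vs. inductive evaluation of an SVM on synthetic spectra.
# The two-class Gaussian "images" and the shift value are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_image(shift):
    """Two-class pixel spectra; `shift` mimics a different acquisition."""
    X = np.concatenate([rng.normal(0, 1, (200, 20)),
                        rng.normal(2, 1, (200, 20))]) + shift
    y = np.array([0] * 200 + [1] * 200)
    return X, y

X1, y1 = make_image(shift=0.0)   # image used for training
X2, y2 = make_image(shift=1.0)   # held-out image for the inductive test

# Transductive: train and test pixels drawn from the same image.
Xtr, Xte, ytr, yte = train_test_split(X1, y1, test_size=0.5, random_state=0)
svm = SVC().fit(Xtr, ytr)
transductive_acc = svm.score(Xte, yte)

# Inductive: the test pixels come from a different image entirely.
inductive_acc = svm.score(X2, y2)
print(f"transductive: {transductive_acc:.2f}, inductive: {inductive_acc:.2f}")
```

The gap between the two scores is the phenomenon the abstract reports: models that look near-perfect transductively can degrade sharply when tested on an unseen image.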

