AutoProf – I. An automated non-parametric light profile pipeline for modern galaxy surveys

2021 ◽  
Vol 508 (2) ◽  
pp. 1870-1887
Author(s):  
Connor J Stone ◽  
Nikhil Arora ◽  
Stéphane Courteau ◽  
Jean-Charles Cuillandre

ABSTRACT We present an automated non-parametric light profile extraction pipeline called autoprof. All steps for extracting surface brightness (SB) profiles are included in autoprof, allowing streamlined analyses of galaxy images. autoprof improves upon previous non-parametric ellipse fitting implementations with fit-stabilization procedures adapted from machine learning techniques. Additional advanced analysis methods are included in the flexible pipeline for the extraction of alternative brightness profiles (along radial or axial slices), smooth axisymmetric models, and the implementation of decision trees for arbitrarily complex pipelines. Detailed comparisons with widely used photometry algorithms (photutils, xvista, and galfit) are also presented. These comparisons rely on a large collection of late-type galaxy images from the PROBES catalogue. The direct comparison of SB profiles shows that autoprof can reliably extract fainter isophotes than other methods on the same images, typically by >2 mag arcsec−2. Contrasting non-parametric elliptical isophote fitting with simple parametric models also shows that two-component fits (e.g. Sérsic plus exponential) are insufficient to describe late-type galaxies with high fidelity. It is established that elliptical isophote fitting, and in particular autoprof, is ideally suited for a broad range of automated isophotal analysis tasks. autoprof is freely available to the community at: https://github.com/ConnorStoneAstro/AutoProf.
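Full isophote fitting as performed by autoprof solves for the centre, ellipticity, and position angle of each isophote. As a much-reduced illustration (not the autoprof algorithm: the ellipse geometry is held fixed and the function name is hypothetical), a surface brightness profile can be read off as the median intensity in elliptical annuli:

```python
import numpy as np

def sb_profile(image, x0, y0, ellip, pa, radii):
    """Median intensity in elliptical annuli: a crude stand-in for
    full isophote fitting, with ellipticity and PA held fixed."""
    yy, xx = np.indices(image.shape)
    dx, dy = xx - x0, yy - y0
    # rotate pixel coordinates into the ellipse frame
    xr = dx * np.cos(pa) + dy * np.sin(pa)
    yr = -dx * np.sin(pa) + dy * np.cos(pa)
    # elliptical (semi-major-axis) radius of each pixel
    r = np.sqrt(xr**2 + (yr / (1.0 - ellip))**2)
    prof = []
    for r_in, r_out in zip(radii[:-1], radii[1:]):
        mask = (r >= r_in) & (r < r_out)
        prof.append(np.median(image[mask]))
    return np.array(prof)

# toy face-on exponential disk with a 20-pixel scale length
n = 201
yy, xx = np.indices((n, n))
img = np.exp(-np.hypot(xx - 100, yy - 100) / 20.0)
radii = np.linspace(0, 80, 9)
prof = sb_profile(img, 100.0, 100.0, 0.0, 0.0, radii)
print(prof)
```

The recovered profile falls off monotonically, as expected for the exponential disk; a real pipeline would additionally fit the centre and ellipse shape per isophote and convert intensities to mag arcsec−2.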

Author(s):  
Afshin Rahimi ◽  
Mofiyinoluwa O. Folami

As the number of satellite launches increases each year, it is only natural that interest in the safety and monitoring of these systems increases as well. However, as a system becomes more complex, generating a high-fidelity model that accurately describes it becomes complicated. Employing a data-driven method can therefore prove more beneficial for such applications. This research proposes a novel data-driven machine learning approach to fault detection and isolation in nonlinear systems, with a case study of an in-orbit closed-loop-controlled satellite with reaction wheels as actuators. High-fidelity models of the 3-axis-controlled satellite are employed to generate data for both nominal and faulty conditions of the reaction wheels. The generated simulation data are used as input for the isolation method, after which the data are pre-processed through feature extraction from the temporal, statistical, and spectral domains. The pre-processed features are then fed into various machine learning classifiers. Isolation results are validated with cross-validation, and model parameters are tuned using hyperparameter optimization. To validate the robustness of the proposed method, it is tested on three characterized datasets and three reaction wheel configurations: standard four-wheel, three-orthogonal, and pyramid. The results show superior isolation accuracy for the system under study compared to previous studies using alternative methods (Rahimi & Saadat, 2019, 2020).
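The pre-processing step described above turns each telemetry window into a fixed-length feature vector. A minimal numpy sketch (the specific feature choices here are illustrative assumptions, not the paper's feature set) with one feature from each domain family:

```python
import numpy as np

def extract_features(signal, dt=0.1):
    """Temporal, statistical, and spectral features from one
    telemetry window (an illustrative selection only)."""
    diffs = np.diff(signal)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    return {
        # temporal: average slope and signal energy
        "mean_slope": float(diffs.mean() / dt),
        "energy": float(np.sum(signal**2)),
        # statistical: spread and asymmetry
        "std": float(signal.std()),
        "skew": float(((signal - signal.mean())**3).mean()
                      / signal.std()**3),
        # spectral: frequency bin carrying the most power
        "peak_freq": float(freqs[np.argmax(spectrum)]),
    }

# a clean 0.5 Hz oscillation, e.g. a wheel-speed ripple surrogate
t = np.arange(0, 10, 0.1)
feats = extract_features(np.sin(2 * np.pi * 0.5 * t), dt=0.1)
print(feats)
```

Stacking such vectors over many windows yields the tabular input that the downstream classifiers consume.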


Author(s):  
Sudeepta Mondal ◽  
Michael M. Joly ◽  
Soumalya Sarkar

Abstract In aerodynamic design, accurate and robust surrogate models are important to accelerate computationally expensive CFD-based optimization. Machine learning techniques can also enable affordable exploration of high-dimensional design spaces with targeted selection of sparse high-fidelity data. In this paper, a multi-fidelity global-local approach is presented and applied to the surrogate-based design optimization of a highly loaded transonic compressor rotor. The key idea is to train multi-fidelity surrogates with fewer high-fidelity RANS predictions and more rapid and inexpensive lower-fidelity RANS evaluations. The framework also introduces a global-local search algorithm that can spin off multiple local optimization threads over narrow and targeted design spaces, concurrently with a constantly adapting global optimization thread. The approach is demonstrated with an optimization of the transonic NASA rotor 37, yielding a significant increase in performance within a dozen optimization iterations.
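The multi-fidelity idea above can be sketched in one dimension with analytic stand-ins for the two RANS fidelities (the functions and the additive-correction form below are illustrative assumptions, not the paper's surrogate): evaluate the cheap model everywhere, and learn only the discrepancy from a few expensive samples.

```python
import numpy as np

# Hypothetical 1-D stand-ins for the two CFD fidelities: the
# low-fidelity model captures the trend, the high-fidelity model
# adds a smooth correction the surrogate must learn.
def f_lo(x):
    return np.sin(2 * np.pi * x)

def f_hi(x):
    return np.sin(2 * np.pi * x) + 0.3 * x + 0.1 * x**2

# Additive-correction surrogate: fit a cheap polynomial to the
# discrepancy f_hi - f_lo at a handful of high-fidelity samples.
x_hi = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # sparse HF data
delta = f_hi(x_hi) - f_lo(x_hi)
coeffs = np.polyfit(x_hi, delta, deg=2)

def surrogate(x):
    return f_lo(x) + np.polyval(coeffs, x)

x_test = np.linspace(0, 1, 11)
err = np.max(np.abs(surrogate(x_test) - f_hi(x_test)))
print(err)
```

Because the discrepancy is smoother than either fidelity, it needs far fewer high-fidelity samples to fit, which is where the cost saving comes from.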


Author(s):  
Mark Wallis ◽  
Kuldeep Kumar ◽  
Adrian Gepp

Credit ratings are an important metric for business managers and a contributor to economic growth. Forecasting such ratings might be a suitable application of big data analytics. As machine learning is one of the foundations of intelligent big data analytics, this chapter presents a comparative analysis of traditional statistical models and popular machine learning models for the prediction of Moody's long-term corporate debt ratings. Machine learning techniques such as artificial neural networks, support vector machines, and random forests generally outperformed their traditional counterparts in terms of both overall accuracy and the Kappa statistic. The parametric models may be hindered by missing variables and restrictive assumptions about the underlying distributions in the data. This chapter reveals the relative effectiveness of non-parametric big data analytics to model a complex process that frequently arises in business, specifically determining credit ratings.
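The Kappa statistic used above to compare models corrects raw agreement for the agreement expected by chance from the label marginals. A self-contained sketch of Cohen's kappa (the toy Moody's-style labels are made up for illustration):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement between predicted and
    actual ratings, corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)
    p_obs = np.mean(y_true == y_pred)
    # chance agreement: product of marginal label frequencies
    p_exp = sum(np.mean(y_true == c) * np.mean(y_pred == c)
                for c in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

# toy ratings on an ordinal Aaa/Aa/A scale
true = ["Aaa", "Aa", "A", "A", "Aa", "Aaa", "A", "Aa"]
pred = ["Aaa", "Aa", "A", "Aa", "Aa", "Aaa", "A", "A"]
kappa = cohens_kappa(true, pred)
print(kappa)
```

Here 6 of 8 ratings agree (75%), but chance agreement is already 34.4%, so kappa lands at roughly 0.62; this penalty for chance is why kappa is a stricter yardstick than overall accuracy for imbalanced rating classes.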


Energies ◽  
2020 ◽  
Vol 13 (17) ◽  
pp. 4565 ◽  
Author(s):  
Himakar Ganti ◽  
Manu Kamin ◽  
Prashant Khare

This study focuses on establishing a surrogate model based on machine learning techniques to predict the time-averaged spatially distributed behaviors of vaporizing liquid jets in turbulent air crossflow for momentum flux ratios between 5 and 120. This surrogate model extends a previously developed Gaussian-process-based framework applicable to laminar flows to accommodate turbulent flows, and demonstrates that in addition to detailed fields of primitive variables, second-order turbulence statistics can also be predicted using machine learning techniques. The framework proceeds in three steps: (1) design-of-experiment studies to identify training points and high-fidelity calculations to build the training dataset; (2) Gaussian process regression (supervised training) over the range of operating conditions under consideration for gaseous and dispersed-phase quantities; and (3) error quantification of the surrogate model by comparing the machine learning predictions with the truth model for test conditions (i.e., conditions not used for training). The framework was trained using data generated by high-fidelity large eddy simulation (LES)-based calculations (also referred to as the truth model), which solve the complete set of conservation equations for mass, momentum, energy, and species in an Eulerian reference frame, coupled with a Lagrangian solver that tracks the dispersed phase. Simulations were conducted for momentum flux ratios between 5 and 120 for liquid water injected into crossflowing air at a pressure of 1 atm and a temperature of 600 K. Results from the machine-learned surrogate model, also called emulations, were compared with the truth model under testing conditions identified by momentum flux ratios of 7 and 40.
L1 errors for time-averaged field quantities, including velocity magnitudes, pressure, temperature, vapor fraction of the evaporated liquid, and turbulent kinetic energy in the gas phase, and for spray penetration and Sauter mean diameters in the dispersed phase, are reported. A speedup of 65 was achieved with this emulator compared against an LES simulation of the same test conditions, with errors for all quantities below 14%, demonstrating the potential benefits of using machine learning techniques for design-space exploration of devices based on turbulent multiphase fluid flows. This is the first effort of its kind in the literature demonstrating the application of machine learning techniques to turbulent multiphase flows.
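Step (2) of the framework, Gaussian process regression, reduces at its core to a kernel solve. A minimal one-dimensional numpy sketch of the posterior mean with an RBF kernel (the toy "truth model", hyperparameters, and function names are assumptions for illustration, not the paper's trained emulator):

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=0.15, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    def rbf(a, b):
        # squared-exponential covariance between two 1-D point sets
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = rbf(X_train, X_train) + noise * np.eye(X_train.size)
    K_s = rbf(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# toy "truth model": train on a few samples, emulate in between,
# mimicking interpolation across momentum flux ratios
X = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * X)
X_new = np.array([0.3, 0.6])
pred = gp_predict(X, y, X_new)
print(pred)
```

The emulated values at the held-out inputs track the truth function closely, which is the same train-then-test-on-unseen-conditions pattern the abstract describes at much larger scale.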


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang

2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, we show that the prediction accuracy is improved.
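Before any of LSTM, CNN, or SVR can be fit, a price series has to be reshaped into (predictor window, next-value target) pairs. A numpy sketch of that common windowing step (the window construction and lookback length are illustrative assumptions; the paper's predictors also include open/high/low/volume columns):

```python
import numpy as np

def make_windows(series, lookback):
    """Sliding windows: each row holds `lookback` past values as
    predictors, paired with the next value as the target."""
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

# toy daily close prices
prices = np.array([10.0, 10.5, 10.2, 10.8, 11.0, 10.9, 11.3])
X, y = make_windows(prices, lookback=3)
print(X.shape, y)
```

The resulting X matrix feeds SVR directly; for LSTM or CNN the same windows would simply gain a trailing channel axis.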


Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 389-P
Author(s):  
SATORU KODAMA ◽  
MAYUKO H. YAMADA ◽  
YUTA YAGUCHI ◽  
MASARU KITAZAWA ◽  
MASANORI KANEKO ◽  
...  

Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical, as it may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently used in various classification problems due to their accurate prediction performance. Different techniques may provide different accuracies, and it is therefore imperative to use the method that provides the best results. This research provides a comparative analysis of Support Vector Machine, Naïve Bayes, J48 Decision Tree, and neural network classifiers on breast cancer and diabetes datasets.
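Of the classifiers compared above, Naïve Bayes is simple enough to sketch in full. A minimal Gaussian Naïve Bayes in numpy (the two-cluster toy data below stands in for diabetes-style tabular features and is not either benchmark dataset):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and
    variances, prediction by maximum log-posterior."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9
                             for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(c) + sum_j log N(x_j | mu_cj, var_cj), argmax over c
        ll = (np.log(self.prior)
              - 0.5 * np.sum(np.log(2 * np.pi * self.var), axis=1)
              - 0.5 * np.sum((X[:, None, :] - self.mu)**2 / self.var,
                             axis=2))
        return self.classes[np.argmax(ll, axis=1)]

# toy two-class dataset with well-separated clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(3.0, 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
acc = np.mean(GaussianNB().fit(X, y).predict(X) == y)
print(acc)
```

The "naïve" conditional-independence assumption is exactly what a comparative study like this one probes: it makes the model fast and data-efficient, but SVMs or neural networks can win when features interact.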

