An automated method to detect transiting circumbinary planets

2019 ◽  
Vol 490 (1) ◽  
pp. 1313-1324 ◽  
Author(s):  
Diana Windemuth ◽  
Eric Agol ◽  
Josh Carter ◽  
Eric B Ford ◽  
Nader Haghighipour ◽  
...  

ABSTRACT To date, a dozen transiting ‘Tatooines’, or circumbinary planets (CBPs), have been discovered, by eye, in the data from the Kepler mission; by contrast, thousands of confirmed circumstellar planets orbiting single stars have been detected using automated algorithms. Automated detection of CBPs is challenging because their transits are strongly aperiodic with irregular profiles. Here, we describe an efficient and automated technique for detecting circumbinary planets that transit their binary hosts in Kepler light curves. Our method accounts for the large transit timing variations (TTVs) and transit duration variations (TDVs) induced by binary reflex motion in two ways: (1) we directly correct for large-scale TTVs and TDVs in the light curves by using Keplerian models to approximate the binary and CBP orbits; and (2) we allow for additional aperiodicity in the corrected light curves by employing the Quasi-periodic Automated Transit Search (QATS) algorithm. We demonstrate that our method dramatically improves detection significance using simulated data and two previously identified CBP systems, Kepler-35 and Kepler-64.
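The quasi-periodic search step can be illustrated with a toy dynamic program that picks a fixed number of transit centres whose consecutive spacings may drift within a window, maximizing the total dip depth. This is a minimal sketch of the QATS idea only, not the authors' implementation; the function name and scoring are invented for illustration.

```python
def qats_like_search(depth, n_transits, p_min, p_max):
    """Pick n_transits transit centres with consecutive spacings in
    [p_min, p_max] cadences, maximizing total dip depth -- a toy
    stand-in for the QATS likelihood maximization."""
    n = len(depth)
    NEG = float("-inf")
    # best[k][i]: best score using k+1 transits, the last at index i
    best = [[NEG] * n for _ in range(n_transits)]
    prev = [[-1] * n for _ in range(n_transits)]
    for i in range(n):
        best[0][i] = depth[i]
    for k in range(1, n_transits):
        for i in range(n):
            for j in range(max(0, i - p_max), i - p_min + 1):
                if best[k - 1][j] > NEG and best[k - 1][j] + depth[i] > best[k][i]:
                    best[k][i] = best[k - 1][j] + depth[i]
                    prev[k][i] = j
    last = max(range(n), key=lambda i: best[-1][i])
    if best[-1][last] == NEG:
        return None                      # no spacing-feasible solution
    idx = [last]
    for k in range(n_transits - 1, 0, -1):
        idx.append(prev[k][idx[-1]])
    return idx[::-1]
```

The spacing window `[p_min, p_max]` plays the role of the allowed TTV drift after the Keplerian correction described above.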

2021 ◽  
Vol 11 (10) ◽  
pp. 4438
Author(s):  
Satyendra Singh ◽  
Manoj Fozdar ◽  
Hasmat Malik ◽  
Maria del Valle Fernández Moreno ◽  
Fausto Pedro García Márquez

It is expected that large-scale producers of wind energy will become dominant players in the future electricity market. However, wind power output is intermittent in nature and subject to numerous fluctuations. Because of this uncertainty in wind power production, constructing a detailed bidding strategy is becoming more complicated for the industry. Therefore, in view of these uncertainties, a competitive bidding approach in a pool-based day-ahead energy marketplace is formulated in this paper for traditional generation together with wind power utilities. The profit of the generating utility is optimized by the modified gravitational search algorithm, and the Weibull distribution function is employed to represent the stochastic properties of the wind speed profile. The proposed method is evaluated on the IEEE-30 and IEEE-57 bus systems, and the results are compared with those obtained with other optimization methods to validate the approach.
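The Weibull treatment of wind speed, combined with a turbine power curve, can be sketched as follows. This is a generic illustration, not the paper's model; the power-curve parameters are hypothetical defaults.

```python
import math
import random

def weibull_wind_speed(shape, scale, n, seed=0):
    """Inverse-transform sampling of Weibull wind speeds:
    v = scale * (-ln(1 - u)) ** (1 / shape), u ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def turbine_power(v, cut_in=3.0, rated_speed=12.0, cut_out=25.0,
                  rated_mw=2.0):
    """Piecewise turbine power curve (MW): zero below cut-in and above
    cut-out, linear ramp between cut-in and rated speed."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_speed:
        return rated_mw
    return rated_mw * (v - cut_in) / (rated_speed - cut_in)
```

Sampled speeds mapped through the power curve give the stochastic wind power scenarios over which a bidding strategy would be optimized.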


Genetics ◽  
2003 ◽  
Vol 165 (4) ◽  
pp. 2269-2282
Author(s):  
D Mester ◽  
Y Ronin ◽  
D Minkov ◽  
E Nevo ◽  
A Korol

Abstract This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with ∼50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology.
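As a minimal illustration of TSP-style multilocus ordering, the evolution-strategy optimizer described above can be stood in for by a simple 2-opt local search that minimizes the total adjacent recombination fraction. This is a sketch only: real maps would apply a mapping function (e.g. Haldane's) rather than summing raw recombination fractions.

```python
def map_length(order, rf):
    """Map length: sum of recombination fractions between adjacent loci
    (rf is a symmetric matrix of pairwise recombination fractions)."""
    return sum(rf[order[i]][order[i + 1]] for i in range(len(order) - 1))

def two_opt_order(rf):
    """2-opt local search: reverse segments while the map shortens."""
    order = list(range(len(rf)))
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            for j in range(i + 1, len(order)):
                cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if map_length(cand, rf) < map_length(order, rf):
                    order, improved = cand, True
    return order
```

The underlying assumption, as in the article, is that the true locus order minimizes total map length; 2-opt only finds a local optimum, which is why stronger optimizers and bootstrap verification are used in practice.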


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Carolina Lagos ◽  
Guillermo Guerrero ◽  
Enrique Cabrera ◽  
Stefanie Niklander ◽  
Franklin Johnson ◽  
...  

A novel matheuristic approach is presented and tested on a well-known optimisation problem, namely, the capacitated facility location problem (CFLP). The algorithm combines local search and mathematical programming: the local search selects a subset of promising facilities, while mathematical programming strategies are used to solve the resulting subproblem to optimality. The proposed local search is guided by instance-specific information such as installation costs and the distances between customers and facilities. The algorithm is tested on large instances of the CFLP, for which neither local search nor mathematical programming alone is able to find good-quality solutions within acceptable computational times. Our approach is shown to be a very competitive alternative for solving large-scale instances of the CFLP.
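A toy version of the matheuristic loop might look like the following, with the exact mathematical-programming subproblem replaced by a greedy unit-demand assignment for brevity. The structure (local search over open-facility sets, subproblem solved per candidate) mirrors the description above, but every detail here is illustrative.

```python
def assign_cost(open_set, cap, demand, dist, fixed):
    """Greedily assign customers to the nearest open facility with
    spare capacity -- a cheap stand-in for the exact subproblem the
    paper solves with mathematical programming. Returns total cost,
    or None if the open facilities cannot serve all demand."""
    remaining = {f: cap[f] for f in open_set}
    total = sum(fixed[f] for f in open_set)
    for c in range(len(demand)):
        feasible = [f for f in open_set if remaining[f] >= demand[c]]
        if not feasible:
            return None
        f = min(feasible, key=lambda g: dist[c][g])
        remaining[f] -= demand[c]
        total += dist[c][f]
    return total

def flip_search(cap, demand, dist, fixed):
    """1-flip local search over the set of open facilities
    (assumes the all-open starting point is feasible)."""
    best_set = set(range(len(cap)))            # start with all open
    best = assign_cost(best_set, cap, demand, dist, fixed)
    improved = True
    while improved:
        improved = False
        for f in range(len(cap)):
            cand = best_set ^ {f}              # toggle facility f
            if not cand:
                continue
            cost = assign_cost(cand, cap, demand, dist, fixed)
            if cost is not None and cost < best:
                best_set, best, improved = cand, cost, True
    return best_set, best
```

Swapping the greedy `assign_cost` for an exact solver of the restricted assignment problem recovers the matheuristic pattern described in the abstract.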


2021 ◽  
Vol 13 (14) ◽  
pp. 2671
Author(s):  
Xiaoqin Zang ◽  
Tianzhixi Yin ◽  
Zhangshuan Hou ◽  
Robert P. Mueller ◽  
Zhiqun Daniel Deng ◽  
...  

Adult American eels (Anguilla rostrata) are vulnerable to hydropower turbine mortality during outmigration from growth habitat in inland waters to the ocean where they spawn. Imaging sonar is a reliable and proven technology for monitoring fish passage and migration; however, there is no efficient automated method for eel detection. We designed a deep learning model for automated detection of adult American eels from sonar data. The method employs a convolutional neural network (CNN) to distinguish between images of eels and non-eel objects. Prior to image classification with the CNN, background subtraction and wavelet denoising were applied to enhance the sonar images. The CNN model was first trained and tested on data obtained from a laboratory experiment, which yielded overall accuracies of >98% for image-based classification. Then, the model was trained and tested on field data obtained near the Iroquois Dam on the St. Lawrence River; the accuracy achieved was commensurate with that of human experts.
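The background-subtraction preprocessing step can be sketched as a per-pixel median over the frame stack; wavelet denoising and the CNN itself are omitted here, and this is an illustration rather than the authors' code.

```python
def median(vals):
    s = sorted(vals)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def subtract_background(frames):
    """Treat the per-pixel median over the frame stack as the static
    background; subtract it and clip at zero so only moving targets
    (e.g. a passing eel) remain. frames is a list of 2-D intensity
    grids of identical shape."""
    rows, cols = len(frames[0]), len(frames[0][0])
    bg = [[median([f[r][c] for f in frames]) for c in range(cols)]
          for r in range(rows)]
    return [[[max(0.0, f[r][c] - bg[r][c]) for c in range(cols)]
             for r in range(rows)] for f in frames]
```

The median is robust to a target crossing a pixel in a minority of frames, which is why it is a common choice for static-background estimation in sonar and video.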


2021 ◽  
pp. 0958305X2110148
Author(s):  
Mojtaba Shivaie ◽  
Mohammad Kiani-Moghaddam ◽  
Philip D Weinsier

In this study, a new bilateral equilibrium model was developed for the optimal bidding strategy of both price-taker generation companies (GenCos) and distribution companies (DisCos) that participate in a joint day-ahead energy and reserve electricity market. This model, from a new perspective, simultaneously takes into account such techno-economic-environmental measures as market power, security constraints, and environmental and loss considerations. The mathematical formulation of this new model, therefore, falls into a nonlinear, two-level optimization problem. The upper-level problem maximizes the quadratic profit functions of the GenCos and DisCos under incomplete information and passes the obtained optimal bidding strategies to the lower-level problem, which clears the joint day-ahead energy and reserve electricity market. A locational marginal pricing mechanism was also considered for settling the electricity market. To solve this newly developed model, a competent multi-computational-stage, multi-dimensional, multiple-homogeneous enhanced melody search algorithm (MMM-EMSA), referred to as a symphony orchestra search algorithm (SOSA), was employed. Case studies using the IEEE 118-bus test system, a part of the American electrical power grid in the Midwestern U.S., are provided in this paper to illustrate the effectiveness and capability of the model on a large-scale power grid. According to the simulation results, several conclusions can be drawn in comparison with the unilateral bidding strategy: competition among the GenCos and DisCos is facilitated; the performance of the electricity market is improved; polluting atmospheric emission levels are mitigated; and the total profits of the GenCos and DisCos increase.
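The lower-level market clearing can be illustrated, in a heavily simplified single-bus form without network or reserve constraints, by merit-order dispatch in which the clearing price is the marginal accepted bid; the paper's locational marginal pricing generalizes this price per bus.

```python
def clear_market(bids, demand):
    """Single-bus merit-order clearing: accept cheapest offers first.
    bids is a list of (price, quantity); returns (dispatch, price),
    where price is the marginal accepted bid -- a network-free
    simplification of locational marginal pricing."""
    dispatch, remaining, price = [], demand, 0.0
    for p, q in sorted(bids):
        take = min(q, remaining)
        if take > 0:
            dispatch.append((p, take))
            remaining -= take
            price = p
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("insufficient offered capacity")
    return dispatch, price
```

In the bilevel setting, each GenCo's upper-level problem would choose its bid `(p, q)` to maximize profit given how this clearing responds.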


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3586 ◽  
Author(s):  
Sizhou Sun ◽  
Jingqi Fu ◽  
Ang Li

Given the large-scale exploitation and utilization of wind power, the problems caused by the highly stochastic and random characteristics of wind speed have led researchers to develop more reliable and precise wind power forecasting (WPF) models. To obtain better predicting accuracy, this study proposes a novel compound WPF strategy based on the optimal integration of four base forecasting engines. In the forecasting process, density-based spatial clustering of applications with noise (DBSCAN) is first employed to identify meaningful information and discard abnormal wind power data. To eliminate the adverse influence of missing data on forecasting accuracy, the Lagrange interpolation method is used to obtain corrected values for the missing points. Then, a two-stage decomposition (TSD) method comprising ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT) is utilized to preprocess the wind power data. In the decomposition process, the empirical wind power data are disassembled into different intrinsic mode functions (IMFs) and one residual (Res) by EEMD, and the highest-frequency time series, IMF1, is further broken into different components by WT. After determination of the input matrix by a partial autocorrelation function (PACF) and normalization into [0, 1], these decomposed components are used as the input variables of all the base forecasting engines, namely least squares support vector machine (LSSVM), wavelet neural networks (WNN), extreme learning machine (ELM), and autoregressive integrated moving average (ARIMA), to perform the multistep WPF. To avoid local optima and improve forecasting performance, the parameters of LSSVM, ELM, and WNN are tuned by the backtracking search algorithm (BSA). On this basis, BSA is also employed to optimize the weighting coefficients of the individual forecasting results produced by the four base forecasting engines, generating an ensemble forecast. Finally, case studies for a wind farm in China are carried out to assess the proposed forecasting strategy.
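The final ensembling step can be sketched as follows, with the backtracking search algorithm replaced by a plain random search over simplex weights for brevity; the function names and the optimizer choice are illustrative, not the paper's.

```python
import random

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def combine(forecasts, w):
    """Weighted sum of the base engines' forecasts at each time step."""
    return [sum(wi * f[t] for wi, f in zip(w, forecasts))
            for t in range(len(forecasts[0]))]

def tune_weights(forecasts, actual, iters=2000, seed=1):
    """Random search over simplex weights (nonnegative, summing to 1),
    standing in for BSA: minimize the MSE of the combined forecast
    against validation data."""
    rng = random.Random(seed)
    m = len(forecasts)
    best_w = [1.0 / m] * m
    best = mse(combine(forecasts, best_w), actual)
    for _ in range(iters):
        raw = [rng.random() for _ in range(m)]
        s = sum(raw)
        w = [x / s for x in raw]
        err = mse(combine(forecasts, w), actual)
        if err < best:
            best_w, best = w, err
    return best_w, best
```

Whatever the optimizer, the point of the weighted ensemble is the same: engines that track the validation data better receive larger coefficients.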


Author(s):  
Mohamed Estai ◽  
Marc Tennant ◽  
Dieter Gebauer ◽  
Andrew Brostek ◽  
Janardhan Vignarajan ◽  
...  

Objective: This study aimed to evaluate an automated detection system to detect and classify permanent teeth on orthopantomogram (OPG) images using convolutional neural networks (CNNs). Methods: In total, 591 digital OPGs were collected from patients older than 18 years. Three qualified dentists performed individual tooth labelling on the images to generate the ground-truth annotations. A three-step procedure, relying upon CNNs, was proposed for automated detection and classification of teeth. First, U-Net, a type of CNN, performed preliminary segmentation of tooth regions, i.e. detection of regions of interest (ROIs), on the panoramic images. Second, Faster R-CNN, an advanced object detection architecture, identified each tooth within the ROIs determined by the U-Net. Third, a VGG-16 architecture classified each tooth into one of 32 categories, and a tooth number was assigned. A total of 17,135 teeth cropped from the 591 radiographs were used to train and validate the tooth detection and tooth numbering modules; 90% of the OPG images were used for training and the remaining 10% for validation. 10-fold cross-validation was performed to measure performance. The intersection over union (IoU), F1 score, precision, and recall (i.e. sensitivity) were used as metrics to evaluate the resultant CNNs. Results: The ROI detection module had an IoU of 0.70. The tooth detection module achieved a recall of 0.99 and a precision of 0.99. The tooth numbering module had a recall, precision, and F1 score of 0.98. Conclusion: The resultant automated method achieved high performance for automated tooth detection and numbering from OPG images. Deep learning can be helpful in the automatic filing of dental charts in general dentistry and forensic medicine.
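The evaluation metrics referenced above (IoU, precision, recall, F1) are standard and can be computed as follows; this is a generic sketch, not the study's evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over union of axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)
```

A detection counts as a true positive only when its IoU with a ground-truth box exceeds a chosen threshold, which links the two functions in practice.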


Author(s):  
Young Hyun Kim ◽  
Eun-Gyu Ha ◽  
Kug Jin Jeon ◽  
Chena Lee ◽  
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs were collected from 746 subjects who had 2 to 17 DPRs each, with various changes in image characteristics due to dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development). The test dataset comprised the latest DPR of each subject (746 images), and the remaining DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM) visualizations. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. All rank-1 accuracy values remained above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model identified individuals by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite the differing image characteristics of DPRs acquired from the same patients. Our model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
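Rank-k accuracy, as used above, counts a query as correct when the true identity appears among the k top-scoring candidates; a generic sketch, not the study's code:

```python
def rank_k_accuracy(score_maps, true_ids, k):
    """Fraction of queries whose true identity is among the k
    highest-scoring candidates; score_maps[i] maps candidate -> score
    for query i."""
    hits = 0
    for scores, true_id in zip(score_maps, true_ids):
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        hits += true_id in top_k
    return hits / len(true_ids)
```

Since rank-k accuracy is monotone in k, the reported rank-1 ≤ rank-3 ≤ rank-5 values (82.84% ≤ 89.14% ≤ 92.23%) are consistent with this definition.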


Author(s):  
Hongli Wang ◽  
Bin Guo ◽  
Jiaqi Liu ◽  
Sicong Liu ◽  
Yungang Wu ◽  
...  

Deep Neural Networks (DNNs) have made massive progress in many fields, and deploying DNNs on end devices has become an emerging trend that brings intelligence closer to users. However, it is challenging to deploy large-scale, computation-intensive DNNs on end devices because of their limited size and computing resources. To this end, model partition, which aims to split a DNN into multiple parts so that several devices can compute it collaboratively, has received extensive research attention. To find the optimal partition, most existing approaches must run from scratch under given resource constraints. However, they ignore that device resources (e.g., storage, battery power) and performance requirements (e.g., inference latency) often change continuously, so the optimal partition solution changes constantly during processing. It is therefore very important to reduce the tuning latency of model partition in order to adapt in real time to the changing processing context. To address these problems, we propose the Context-aware Adaptive Surgery (CAS) framework, which actively perceives the changing processing context and adaptively finds an appropriate partition solution in real time. Specifically, we construct a partition state graph that comprehensively models different partition solutions of a DNN by incorporating context resources. We then propose "the neighbor effect", which provides a heuristic rule for the search process. When the processing context changes, CAS adopts a runtime search algorithm, Graph-based Adaptive DNN Surgery (GADS), to quickly find an appropriate partition that satisfies the resource constraints under the guidance of the neighbor effect. The experimental results show that CAS adaptively retunes the model partition solution on a 10 ms scale even for large DNNs (a 2.25x to 221.7x search-time improvement over state-of-the-art approaches), while total inference latency remains at the same level as the baselines.
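For a linear chain of layers, the core partition decision reduces to choosing a cut point that trades device compute against transmission and server compute. A brute-force sketch follows; CAS's partition state graph and neighbor-effect search are not reproduced here, and all numbers are illustrative.

```python
def best_partition(device_ms, server_ms, trans_kb, bandwidth_kb_per_s):
    """Enumerate cut points of a layer chain: layers [0, k) run on the
    device, trans_kb[k] kilobytes cross the link at cut k, and layers
    [k, n) run on the server. trans_kb has n + 1 entries (cut n means
    fully on-device, so trans_kb[n] is typically 0).
    Returns (best cut, latency in ms)."""
    n = len(device_ms)
    best = None
    for k in range(n + 1):
        latency = (sum(device_ms[:k])
                   + trans_kb[k] / bandwidth_kb_per_s * 1000.0
                   + sum(server_ms[k:]))
        if best is None or latency < best[1]:
            best = (k, latency)
    return best
```

When bandwidth or device load changes, only the inputs to this function change; CAS's contribution is finding the new optimum without re-searching from scratch.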


2018 ◽  
Vol 2018 ◽  
pp. 1-23 ◽  
Author(s):  
Hao Chen ◽  
Shu Yang ◽  
Jun Li ◽  
Ning Jing

With the development of aerospace science and technology, Earth observation satellite clusters, which consist of heterogeneous satellites carrying many kinds of payloads, have gradually appeared. Compared with traditional satellite systems, a satellite cluster has particular characteristics, such as large scale, heterogeneous satellite platforms, various payloads, and the capacity to perform all observation tasks. How to select a subset of a satellite cluster that performs all observation tasks effectively at low cost is a new challenge arising in the field of aerospace resource scheduling. This is the agent team formation problem for an observation task-oriented satellite cluster. A mathematical scheduling model is built, and three novel algorithms, i.e., a complete search algorithm, a heuristic search algorithm, and a swarm intelligence optimization algorithm, are proposed to solve the problem at different scales. Finally, experiments are conducted to validate the effectiveness and practicability of our algorithms.
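The satellite-subset selection resembles weighted set cover, for which a greedy heuristic is a natural baseline. This is a generic sketch, not the paper's algorithms; satellite and task identifiers are invented for illustration.

```python
def select_satellites(tasks, coverage, cost):
    """Greedy weighted set cover: repeatedly pick the satellite with
    the lowest cost per newly covered task until every task is covered.
    coverage maps satellite -> set of observable tasks."""
    uncovered = set(tasks)
    chosen, total = [], 0.0
    while uncovered:
        def ratio(s):
            new = len(coverage[s] & uncovered)
            return cost[s] / new if new else float("inf")
        sat = min(coverage, key=ratio)
        if not coverage[sat] & uncovered:
            raise ValueError("remaining tasks cannot be covered")
        chosen.append(sat)
        total += cost[sat]
        uncovered -= coverage[sat]
    return chosen, total
```

Greedy selection by cost per newly covered task is only a logarithmic-factor approximation, which is why exact approaches such as the complete search algorithm above are needed when guaranteed optima matter.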

