The CARMENES search for exoplanets around M dwarfs

2020 ◽  
Vol 642 ◽  
pp. A22 ◽  
Author(s):  
V. M. Passegger ◽  
A. Bello-García ◽  
J. Ordieres-Meré ◽  
J. A. Caballero ◽  
A. Schweitzer ◽  
...  

Existing and upcoming instrumentation is collecting large amounts of astrophysical data, which require efficient and fast analysis techniques. We present a deep neural network architecture to analyze high-resolution stellar spectra and predict stellar parameters such as effective temperature, surface gravity, metallicity, and rotational velocity. With this study, we first demonstrate the capability of deep neural networks to precisely recover stellar parameters from a synthetic training set. Second, we analyze the application of this method to observed spectra and the impact of the synthetic gap (i.e., the difference between observed and synthetic spectra) on the estimation of stellar parameters, their errors, and their precision. Our convolutional network is trained on synthetic PHOENIX-ACES spectra in different optical and near-infrared wavelength regions. For each of the four stellar parameters, Teff, log g, [M/H], and v sin i, we constructed a separate neural network model to estimate that parameter independently. We then applied this method to 50 M dwarfs with high-resolution spectra taken with CARMENES (Calar Alto high-Resolution search for M dwarfs with Exo-earths with Near-infrared and optical Échelle Spectrographs), which operates in the visible (520–960 nm) and near-infrared (960–1710 nm) wavelength ranges simultaneously. Our results are compared with literature values for these stars. They show mostly good agreement within the errors, but also exhibit large deviations in some cases, especially for [M/H], highlighting the importance of a better understanding of the synthetic gap.
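As an illustration only, the per-parameter setup described above could look roughly like the following PyTorch sketch, in which one small 1-D convolutional regressor is trained per stellar parameter on synthetic spectra. The class name, layer sizes, and pixel count are assumptions for this sketch, not the architecture used in the paper.

```python
# Minimal sketch of one per-parameter regression network (here, e.g., Teff),
# assuming PyTorch; layer sizes and names are illustrative only.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_pixels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_pixels // 16), 128), nn.ReLU(),
            nn.Linear(128, 1),          # one scalar output, e.g. Teff
        )

    def forward(self, x):               # x: (batch, 1, n_pixels)
        return self.head(self.features(x))

# One independent model per parameter (Teff, log g, [M/H], v sin i),
# each trained on synthetic spectra with a mean-squared-error loss.
model = SpectrumCNN(n_pixels=4096)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```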

2020 ◽  
Vol 28 (5-6) ◽  
pp. 275-286 ◽  
Author(s):  
S Assadzadeh ◽  
CK Walker ◽  
LS McDonald ◽  
P Maharjan ◽  
JF Panozzo

A global predictive model was developed for protein, moisture, and grain type using near-infrared (NIR) spectra. The model is a deep convolutional neural network, trained on NIR spectral data captured from wheat, barley, field pea, and lentil whole grains. The deep learning model performs multi-task learning to simultaneously predict grain protein, moisture, and type, with a significant reduction in prediction errors compared to linear approaches (e.g., partial least squares regression). Moreover, it is shown that the convolutional network architecture learns much more efficiently than simple feedforward neural network architectures of the same size. Thus, in addition to improved accuracy, the presented deep network is very efficient to implement, both in terms of model development time and the required computational resources.
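A hedged sketch of such a multi-task arrangement is shown below: a shared 1-D convolutional trunk over the NIR spectrum with separate heads for protein, moisture, and grain type, trained with a joint loss. The dimensions and names are illustrative and do not reproduce the published network.

```python
# Multi-task 1-D CNN sketch (PyTorch): shared trunk, three task heads.
import torch
import torch.nn as nn

class MultiTaskNIRNet(nn.Module):
    def __init__(self, n_wavelengths: int, n_grain_types: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(16 * (n_wavelengths // 4), 64), nn.ReLU(),
        )
        self.protein_head = nn.Linear(64, 1)      # regression
        self.moisture_head = nn.Linear(64, 1)     # regression
        self.type_head = nn.Linear(64, n_grain_types)  # classification

    def forward(self, x):                         # x: (batch, 1, n_wavelengths)
        h = self.trunk(x)
        return self.protein_head(h), self.moisture_head(h), self.type_head(h)

# Joint loss: MSE for the two regression targets, cross-entropy for grain type.
def multitask_loss(outputs, protein, moisture, grain_type):
    p, m, t = outputs
    return (nn.functional.mse_loss(p.squeeze(1), protein)
            + nn.functional.mse_loss(m.squeeze(1), moisture)
            + nn.functional.cross_entropy(t, grain_type))
```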


2020 ◽  
Vol 492 (4) ◽  
pp. 5470-5507
Author(s):  
E Marfil ◽  
H M Tabernero ◽  
D Montes ◽  
J A Caballero ◽  
M G Soto ◽  
...  

With the purpose of assessing classic spectroscopic methods on high-resolution and high signal-to-noise ratio spectra in the near-infrared wavelength region, we selected a sample of 65 F-, G-, and K-type stars observed with CARMENES, the new, ultra-stable, double-channel spectrograph at the 3.5 m Calar Alto telescope. We computed their stellar atmospheric parameters (Teff, log g, ξ, and [Fe/H]) by means of the StePar code, a Python implementation of the equivalent width method that employs the 2017 version of the MOOG code and a grid of MARCS model atmospheres. We compiled four Fe I and Fe II line lists suited to metal-rich dwarfs, metal-poor dwarfs, metal-rich giants, and metal-poor giants that cover the wavelength range from 5300 to 17 100 Å, thus substantially increasing the number of identified Fe I and Fe II lines, up to 653 and 23, respectively. We examined the impact of the near-infrared Fe I and Fe II lines upon our parameter determinations after an exhaustive literature search, placing special emphasis on the 14 Gaia benchmark stars contained in our sample. Even though our parameter determinations remain in good agreement with the literature values, the increase in the number of Fe I and Fe II lines when the near-infrared region is taken into account reveals a deeper Teff scale that might stem from a higher sensitivity of the near-infrared lines to Teff.
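For readers unfamiliar with the equivalent width method that StePar implements, the underlying quantity is simply the integral of the fractional line absorption over wavelength. The snippet below computes it numerically for a synthetic Gaussian line; the array names and normalisation step are assumptions of this sketch, not StePar code.

```python
# Equivalent width: EW = integral of (1 - F/F_c) dlambda across the line.
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """Equivalent width in the same units as `wavelength`."""
    line_depth = 1.0 - flux / continuum          # fractional absorption profile
    return np.trapz(line_depth, wavelength)

# Example: a Gaussian absorption line of depth 0.4 and sigma 0.05 Å
# on a flat, normalised continuum.
wl = np.linspace(5300.0, 5302.0, 400)            # Å
line = 1.0 - 0.4 * np.exp(-0.5 * ((wl - 5301.0) / 0.05) ** 2)
print(equivalent_width(wl, line, np.ones_like(wl)))  # ≈ 0.05 Å, i.e. ≈ 50 mÅ
```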


Author(s):  
Sophia Bano ◽  
Francisco Vasconcelos ◽  
Emmanuel Vander Poorten ◽  
Tom Vercauteren ◽  
Sebastien Ourselin ◽  
...  

Purpose: Fetoscopic laser photocoagulation is a minimally invasive surgery for the treatment of twin-to-twin transfusion syndrome (TTTS). By using a lens/fibre-optic scope inserted into the amniotic cavity, the abnormal placental vascular anastomoses are identified and ablated to regulate blood flow to both fetuses. A limited field of view, occlusions due to the presence of the fetus, and low visibility make it difficult to identify all vascular anastomoses. Automatic computer-assisted techniques may provide a better understanding of the anatomical structure during surgery for risk-free laser photocoagulation and may help improve mosaicking from fetoscopic videos.
Methods: We propose FetNet, a combined convolutional neural network (CNN) and long short-term memory (LSTM) recurrent neural network architecture for the spatio-temporal identification of fetoscopic events. We adapt an existing CNN architecture for spatial feature extraction and integrate it with the LSTM network for end-to-end spatio-temporal inference. We introduce differential learning rates during model training to effectively utilise the pre-trained CNN weights. This may support computer-assisted interventions (CAI) during fetoscopic laser photocoagulation.
Results: We perform a quantitative evaluation of our method using 7 in vivo fetoscopic videos captured from different human TTTS cases. The total duration of these videos was 5551 s (138,780 frames). To test the robustness of the proposed approach, we perform 7-fold cross-validation in which each video is treated as a hold-out (test) set and training is performed using the remaining videos.
Conclusion: FetNet achieved superior performance compared to existing CNN-based methods and provided improved inference because of its spatio-temporal information modelling. Online testing of FetNet on a Tesla V100-DGXS-32GB GPU achieved a frame rate of 114 fps. These results show that our method could potentially provide a real-time solution for CAI and automate occlusion and photocoagulation identification during fetoscopic procedures.
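A rough PyTorch sketch of the CNN + LSTM pattern and the differential learning rates described above follows. The backbone choice, hidden size, and class name are assumptions of this sketch, not the published FetNet implementation.

```python
# CNN feature extractor per frame, LSTM over the frame sequence, and an
# optimizer with a smaller learning rate for the pre-trained CNN weights.
import torch
import torch.nn as nn
from torchvision import models

class FetNetSketch(nn.Module):
    def __init__(self, n_events: int, hidden: int = 256):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_events)

    def forward(self, clips):                     # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)   # (batch*time, 512)
        seq, _ = self.lstm(feats.view(b, t, -1))
        return self.classifier(seq[:, -1])        # event prediction for the clip

# Differential learning rates: small LR for pre-trained CNN weights,
# larger LR for the newly initialised LSTM and classifier.
model = FetNetSketch(n_events=4)
optimizer = torch.optim.Adam([
    {"params": model.cnn.parameters(), "lr": 1e-5},
    {"params": model.lstm.parameters(), "lr": 1e-3},
    {"params": model.classifier.parameters(), "lr": 1e-3},
])
```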


Geosciences ◽  
2018 ◽  
Vol 8 (8) ◽  
pp. 289 ◽  
Author(s):  
Serena Benatti

Exoplanet research has shown incessant growth since the first claim of a hot giant planet around a solar-like star in the mid-1990s. Today, new facilities are working to spot the first habitable rocky planets around low-mass stars as a forerunner to the detection of the long-awaited Sun–Earth analog system. All the achievements in this field would not have been possible without the constant development of technology and of new methods to detect ever more challenging planets. After the consolidation of top-level instrumentation for high-resolution spectroscopy in the visible wavelength range, a huge effort is now dedicated to reaching the same precision and accuracy in the near-infrared. In fact, observations in this range present several advantages in the search for exoplanets around M dwarfs, known to be the most favorable targets for detecting potentially habitable planets. These stars are also characterized by intense stellar activity, which hampers planet detection, but its impact on the radial velocity modulation is mitigated in the infrared. Simultaneous observations in the visible and near-infrared ranges appear to be an even more powerful technique, since they provide combined and complementary information that is also useful for many other exoplanetary science cases.


2020 ◽  
Vol 640 ◽  
pp. A50 ◽  
Author(s):  
F. F. Bauer ◽  
M. Zechmeister ◽  
A. Kaminski ◽  
C. Rodríguez López ◽  
J. A. Caballero ◽  
...  

The high-resolution, dual channel, visible and near-infrared spectrograph CARMENES offers exciting opportunities for stellar and exoplanetary research on M dwarfs. In this work we address the challenge of reaching the highest radial velocity precision possible with a complex, actively cooled, cryogenic instrument, such as the near-infrared channel. We describe the performance of the instrument and the work flow used to derive precise Doppler measurements from the spectra. The capability of both CARMENES channels to detect small exoplanets is demonstrated with the example of the nearby M5.0 V star CD Cet (GJ 1057), around which we announce a super-Earth (4.0 ± 0.4 M⊕) companion on a 2.29 d orbit.
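To illustrate the kind of signal being measured, the snippet below evaluates the radial-velocity curve of a companion on a circular orbit, RV(t) = γ + K sin(2π(t − t0)/P), using the 2.29 d period quoted above. The semi-amplitude and systemic velocity are placeholders for this sketch, not the fitted CD Cet values.

```python
# Toy radial-velocity model for a companion on a circular orbit.
import numpy as np

def rv_circular(t, period, semi_amplitude, t0=0.0, gamma=0.0):
    """Radial velocity (same units as semi_amplitude) at times t (days)."""
    return gamma + semi_amplitude * np.sin(2.0 * np.pi * (t - t0) / period)

t = np.linspace(0.0, 10.0, 200)                       # ten days of epochs
rv = rv_circular(t, period=2.29, semi_amplitude=2.0)  # K of a few m/s, typical of a super-Earth
```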


Author(s):  
L. Xue ◽  
C. Liu ◽  
Y. Wu ◽  
H. Li

Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, classifying roads, vegetation, buildings, and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, only a few works have applied CNNs to ground-object segmentation, and their results can be further improved. This paper uses a convolutional neural network, U-Net, whose structure consists of a contracting path and an expansive path to produce high-resolution output. We added batch normalization (BN) layers to the network, which benefits the backward pass, and added dropout layers after the upsampling convolutions to prevent overfitting. Together, these changes yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieves good performance compared with other architectures, especially on high-resolution remote sensing imagery.
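A minimal sketch of the described modification, assuming PyTorch, is given below: U-Net style double-convolution blocks with batch normalisation, and dropout applied after the up-sampling (expansive-path) convolutions. Channel counts and names are illustrative only.

```python
# U-Net building blocks with BatchNorm, and dropout on the expansive path.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
    )

class UpBlock(nn.Module):
    """Expansive-path step: upsample, concatenate the skip connection,
    convolve, then apply dropout to reduce overfitting."""
    def __init__(self, in_ch, out_ch, p_drop=0.5):
        super().__init__()
        # Assumes in_ch == 2 * out_ch, i.e. the skip map has out_ch channels.
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = conv_block(in_ch, out_ch)
        self.drop = nn.Dropout2d(p_drop)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)
        return self.drop(self.conv(x))
```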


2019 ◽  
Vol 11 (24) ◽  
pp. 2970 ◽  
Author(s):  
Ziran Ye ◽  
Yongyong Fu ◽  
Muye Gan ◽  
Jinsong Deng ◽  
Alexis Comber ◽  
...  

Automated methods to extract buildings from very high resolution (VHR) remote sensing data have many applications in a wide range of fields. Many convolutional neural network (CNN) based methods have been proposed and have achieved significant advances in the building extraction task. In order to refine predictions, many recent approaches fuse features from earlier layers of CNNs to introduce abundant spatial information, a strategy known as skip connections. However, reusing earlier features directly, without further processing, can reduce the performance of the network. To address this problem, we propose a novel fully convolutional network (FCN) that adopts attention-based re-weighting to extract buildings from aerial imagery. Specifically, we consider the semantic gap between features from different stages and leverage the attention mechanism to bridge the gap prior to the fusion of features. The inferred attention weights along the spatial and channel-wise dimensions make the low-level feature maps adaptive to the high-level feature maps in a target-oriented manner. Experimental results on three publicly available aerial imagery datasets show that the proposed model (RFA-UNet) achieves comparable or improved performance relative to other state-of-the-art models for building extraction.
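The attention-based re-weighting idea can be sketched as follows in PyTorch: channel and spatial attention maps are inferred from the high-level features and used to re-weight the low-level features before the two are fused. The module name, reduction ratio, and kernel size are assumptions of this sketch, not the released RFA-UNet code.

```python
# Re-weight low-level (earlier-layer) features with attention derived from
# high-level (deeper) features before skip-connection fusion.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention from globally pooled high-level features.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        # Spatial attention from the (upsampled) high-level features.
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, low, high):
        # `low`: earlier-layer features; `high`: deeper features, already
        # upsampled to the same spatial size and channel count as `low`.
        low = low * self.channel_gate(high)      # channel-wise re-weighting
        low = low * self.spatial_gate(high)      # spatial re-weighting
        return torch.cat([low, high], dim=1)     # fuse for further decoding
```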


Author(s):  
S Safinaz ◽  
AV Ravi kumar

In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture that eliminates blurriness in images and video frames and provides better reconstruction quality when scaling large datasets from low-resolution to high-resolution frames. We compare our results with multiple existing algorithms. Extensive results for the proposed technique, RemCNN (Reconstruction error minimization Convolutional Neural Network), show that our model outperforms existing approaches such as bicubic, bilinear, and MCResNet, and provides better reconstructed motion images and video frames. The experimental results show that, on the Myanmar dataset, our average PSNR is 47.80474 dB for upscale 2, 41.70209 dB for upscale 3, and 36.24503 dB for upscale 4, which is much higher than other existing techniques. These results demonstrate the high efficiency and better performance of the proposed real-time video scaling architecture based on a convolutional neural network.
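For reference, the PSNR metric quoted above can be computed as shown below; this is the standard definition, not code from the paper.

```python
# Peak signal-to-noise ratio between a reconstructed frame and its reference.
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """PSNR in dB between two images (arrays) of equal shape."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf                      # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```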


2018 ◽  
Vol 29 (7) ◽  
pp. 1073-1097 ◽  
Author(s):  
Gurinderpal Singh ◽  
VK Jain ◽  
Amanpreet Singh

The photovoltaic thermal greenhouse system strongly supports the production of biogas. The system's prime advantage is biogas heating and crop drying through varied directions of air flow; furthermore, it diminishes the upward loss of the system. This paper aims to model a practical greenhouse system to obtain a precise estimation of the heating efficiency provided by the solar radiance. The simulation model adopts a self-adaptive firefly neural network model that is applied to known experimental data, so that the error function between the model outcome and the experimental outcome is substantially minimized. The performance analysis involves a comparative study of the root mean square error between the adopted self-adaptive firefly (SAFF) neural network model and conventional models such as the Levenberg–Marquardt neural network and the firefly neural network. Finally, the impact of the self-adaptive behaviour, the firefly (FF) update, and the learning performance on the characteristics of the SAFF algorithm is analysed to yield better performance.
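A schematic of the firefly update rule applied to a flattened parameter (weight) vector against a training-error objective is sketched below. The decaying step size stands in for the self-adaptive behaviour and is a common variant, an assumption of this sketch rather than the exact SAFF rule.

```python
# Firefly optimisation of a parameter vector against an RMSE-style objective.
import numpy as np

def firefly_optimise(objective, dim, n_fireflies=20, n_iter=100,
                     alpha=0.5, beta0=1.0, gamma=1.0, decay=0.97, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_fireflies, dim))   # candidate weight vectors
    cost = np.array([objective(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:                      # j is "brighter" (lower error)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)     # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                    cost[i] = objective(x[i])
        alpha *= decay                                     # step-size shrinkage (self-adaptive stand-in)
    best = np.argmin(cost)
    return x[best], cost[best]

# Example objective: RMSE of a tiny linear model on synthetic data.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
weights, rmse = firefly_optimise(lambda w: np.sqrt(np.mean((X @ w - y) ** 2)), dim=3)
```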


2019 ◽  
Vol 11 (23) ◽  
pp. 2813 ◽  
Author(s):  
Wenchao Kang ◽  
Yuming Xiang ◽  
Feng Wang ◽  
Hongjian You

Automatic building extraction from high-resolution remote sensing images has many practical applications, such as urban planning and supervision. However, the fine details and varied scales of building structures in high-resolution images bring new challenges to building extraction. An increasing number of neural network-based models have been proposed to handle these issues, but they are not efficient enough and still suffer from erroneous ground-truth labels. To this end, we propose an efficient end-to-end model, EU-Net, in this paper. We first design the dense spatial pyramid pooling (DSPP) module to extract dense and multi-scale features simultaneously, which facilitates the extraction of buildings at all scales. Then, the focal loss is used in reverse to suppress the impact of erroneous labels in the ground truth, making the training stage more stable. To assess the universality of the proposed model, we tested it on three public aerial remote sensing datasets: the WHU aerial imagery dataset, the Massachusetts buildings dataset, and the Inria aerial image labeling dataset. Experimental results show that the proposed EU-Net is superior to state-of-the-art models on all three datasets and increases prediction efficiency by two to four times.
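A loose sketch of a densely connected spatial pyramid pooling block is given below: parallel dilated convolutions whose inputs grow by dense concatenation, so each branch sees the outputs of the earlier ones. Dilation rates and channel counts are assumptions, and this is not the EU-Net DSPP configuration.

```python
# Densely connected dilated-convolution pyramid for multi-scale features.
import torch
import torch.nn as nn

class DenseSPP(nn.Module):
    def __init__(self, in_ch, branch_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for r in rates:
            self.branches.append(nn.Sequential(
                nn.Conv2d(ch, branch_ch, 3, padding=r, dilation=r),
                nn.BatchNorm2d(branch_ch), nn.ReLU(),
            ))
            ch += branch_ch                       # dense concatenation grows the input

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)            # multi-scale, densely fused features
```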

