Lost Horizon: Quantifying the Effect of Local Topography on Global 21 cm Cosmology Data Analysis

2021 ◽  
Vol 923 (1) ◽  
pp. 33
Author(s):  
Neil Bassett ◽  
David Rapetti ◽  
Keith Tauscher ◽  
Bang D. Nhan ◽  
David D. Bordenave ◽  
...  

Abstract: We present an investigation of the horizon and its effect on global 21 cm observations and analysis. We find that the horizon cannot be ignored when modeling low-frequency observations. Even if the sky and antenna beam are known exactly, forward models cannot fully describe the beam-weighted foreground component without accurate knowledge of the horizon. When fitting data to extract the 21 cm signal, a single time-averaged spectrum or independent multi-spectrum fits may be able to compensate for the bias imposed by the horizon. However, these types of fits lack constraining power on the 21 cm signal, leading to large uncertainties on the signal extraction, in some cases larger in magnitude than the 21 cm signal itself. A significant decrease in uncertainty can be achieved by performing multi-spectrum fits in which the spectra are modeled simultaneously with common parameters. The cost of this greatly increased constraining power, however, is that the time dependence of the horizon’s effect, which is more complex than its spectral dependence, must be precisely modeled to achieve a good fit. To aid in modeling the horizon, we present an algorithm and Python package for calculating the horizon profile from a given observation site using elevation data. We also address several practical concerns such as pixelization error, uncertainty in the horizon profile, and foreground obstructions such as surrounding buildings and vegetation. We demonstrate that our training-set-based analysis pipeline can account for all of these factors to model the horizon well enough to precisely extract the 21 cm signal from simulated observations.
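The horizon-profile calculation described above can be sketched as follows: for each azimuth, march outward through the elevation data and keep the largest apparent elevation angle, including the drop due to Earth curvature. This is a minimal illustration rather than the authors' package; the sampling step, the maximum range, and the `elevation_at` helper that samples a digital elevation model are all assumptions.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0

def horizon_profile(site_alt_m, elevation_at, azimuths_deg,
                    max_range_m=50_000.0, step_m=30.0):
    """For each azimuth, return the apparent elevation angle (degrees) of
    the highest terrain point, including the Earth-curvature drop.

    `elevation_at(azimuth_deg, distance_m)` is an assumed helper that
    samples a digital elevation model along the given bearing.
    """
    profile = np.full(len(azimuths_deg), -90.0)
    distances = np.arange(step_m, max_range_m, step_m)
    for i, az in enumerate(azimuths_deg):
        for d in distances:
            terrain = elevation_at(az, d)
            # Terrain height relative to the site, corrected for the
            # curvature drop d^2 / (2R) of a spherical Earth.
            dh = (terrain - site_alt_m) - d**2 / (2.0 * EARTH_RADIUS_M)
            angle = np.degrees(np.arctan2(dh, d))
            profile[i] = max(profile[i], angle)
    return profile

# Example: a flat 100 m ridge 10 km away in every direction.
# ridge = horizon_profile(300.0,
#                         lambda az, d: 400.0 if d > 10_000 else 300.0,
#                         np.arange(0.0, 360.0, 1.0))
```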

1997 ◽  
Vol 22 (19) ◽  
pp. 1485 ◽  
Author(s):  
Ke-Xun Sun ◽  
Martin M. Fejer ◽  
Eric K. Gustafson ◽  
Robert L. Byer

2007 ◽  
Vol 38 (7) ◽  
pp. 11-17
Author(s):  
Ronald M. Aarts

Conventionally, the ultimate goal in loudspeaker design has been to obtain a flat frequency response over a specified frequency range. This can be achieved by carefully selecting the main loudspeaker parameters, such as the enclosure volume, the cone diameter, the moving mass and the crucial “force factor”. For loudspeakers in small cabinets, this design procedure yields rather inefficient systems, especially at low frequencies. This paper describes a new solution to the problem: the combination of highly non-linear preprocessing of the audio signal with a so-called low-force-factor loudspeaker. This combination yields a strongly increased efficiency, at least over a limited frequency range, at the cost of somewhat altered sound quality. An analytically tractable optimality criterion has been defined and verified through the design of an experimental loudspeaker, which has much higher efficiency and sensitivity than current low-frequency loudspeakers while requiring a much smaller cabinet.
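The broadband trade-off alluded to here can be seen in the classic reference-efficiency formula eta0 = rho0 (Bl)^2 Sd^2 / (2 pi c Re Mms^2): efficiency scales with the square of the force factor Bl and falls with the square of the moving mass. Below is a minimal sketch with assumed driver parameters, not the paper's experimental design; note that the paper's gain comes from operating a low-Bl driver near resonance with nonlinear preprocessing, which this broadband figure does not capture.

```python
import math

RHO0 = 1.18   # air density, kg/m^3
C = 345.0     # speed of sound, m/s

def reference_efficiency(Bl, Sd, Re, Mms):
    """Reference efficiency eta0 of a direct-radiator loudspeaker
    (classic Thiele/Small small-signal result)."""
    return (RHO0 / (2.0 * math.pi * C)) * (Bl * Sd) ** 2 / (Re * Mms ** 2)

# Illustrative (assumed) parameters: a conventional woofer vs. a
# low-force-factor design with a lighter motor but a similar cone.
woofer = reference_efficiency(Bl=7.5, Sd=0.022, Re=6.0, Mms=0.020)
low_bl = reference_efficiency(Bl=3.0, Sd=0.022, Re=6.0, Mms=0.010)
print(f"conventional: {100 * woofer:.2f}%   low-Bl: {100 * low_bl:.2f}%")
```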


2010 ◽  
Vol 6 (S274) ◽  
pp. 268-273
Author(s):  
N. Mandolesi ◽  
C. Burigana ◽  
A. Gruppuso ◽  
P. Procopio ◽  
S. Ricciardi

Abstract: This paper provides an overview of the ESA Planck mission and its scientific promise. Planck is equipped with a 1.5 m effective-aperture telescope carrying two actively cooled instruments that observe the sky in nine frequency channels from 30 GHz to 857 GHz: the Low Frequency Instrument (LFI), operating at 20 K with pseudo-correlation radiometers, and the High Frequency Instrument (HFI), with bolometers operating at 100 mK. After its successful launch in May 2009, Planck had already mapped the sky twice at the time of writing, performing as expected, and it is planned to complete at least two further all-sky surveys. The first scientific results, consisting of an Early Release Compact Source Catalog (ERCSC) and about twenty papers on in-flight instrument performance, the data analysis pipeline, and the main astrophysical results, will be released in January 2011. The first publications on the main cosmological implications are expected in 2012.


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Huaping Guo ◽  
Xiaoyu Diao ◽  
Hongbing Liu

Rotation Forest is an ensemble learning approach that outperforms Bagging and Boosting by building accurate and diverse classifiers on rotated feature spaces. However, like other conventional classifiers, Rotation Forest does not work well on imbalanced data, which have far fewer examples of one class (the minority class) than the other (the majority class); misclassifying a minority-class example is typically far more costly than the reverse. This paper proposes a novel method called Embedding Undersampling Rotation Forest (EURF) to handle this problem by (1) sampling subsets from the majority class and learning a projection matrix from each subset, and (2) obtaining training sets by projecting re-undersampled subsets of the original data set into the new spaces defined by these matrices, then constructing an individual classifier from each training set. In the first step, undersampling forces each rotation matrix to better capture the features of the minority class without harming the diversity between individual classifiers. In the second step, undersampling aims to improve the performance of individual classifiers on the minority class. The experimental results show that EURF achieves significantly better performance compared to other state-of-the-art methods.
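A minimal sketch of this two-step construction follows, using PCA as the rotation and a decision tree as the individual classifier. Both are stand-ins: the real Rotation Forest applies PCA to random feature subsets, which this simplification omits, and it assumes binary labels in {0, 1}.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def undersample(X, y, rng, minority=1):
    """Return a balanced sample: all minority examples plus an
    equal-sized random draw from the majority class."""
    min_idx = np.where(y == minority)[0]
    maj_idx = np.where(y != minority)[0]
    pick = rng.choice(maj_idx, size=len(min_idx), replace=False)
    idx = np.concatenate([min_idx, pick])
    return X[idx], y[idx]

class EURFSketch:
    """Illustrative Embedding Undersampling Rotation Forest; details
    (global PCA rotation, tree base learner) are assumptions."""

    def __init__(self, n_estimators=10, seed=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.members = []
        for _ in range(self.n_estimators):
            # (1) learn a rotation matrix from an undersampled subset
            Xu, _ = undersample(X, y, self.rng)
            rot = PCA().fit(Xu)
            # (2) re-undersample, rotate, and train an individual classifier
            Xt, yt = undersample(X, y, self.rng)
            clf = DecisionTreeClassifier().fit(rot.transform(Xt), yt)
            self.members.append((rot, clf))
        return self

    def predict(self, X):
        votes = np.mean([clf.predict(rot.transform(X))
                         for rot, clf in self.members], axis=0)
        return (votes >= 0.5).astype(int)  # majority vote over members
```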


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nisha Kanwar ◽  
Celia Blanco ◽  
Irene A. Chen ◽  
Burckhard Seelig

Abstract: Advances in sequencing technology have allowed researchers to sequence DNA with greater ease and at decreasing cost. Major developments have focused on sequencing either many short sequences or fewer large ones. Methods for sequencing mid-sized sequences of 600–5,000 bp are currently less efficient. For example, the PacBio Sequel I system yields ~100,000–300,000 reads with a per-base-pair accuracy of 90–99%. We sought to sequence several DNA populations of ~870 bp in length with a sequencing accuracy of 99% and to the greatest depth possible. We optimised a simple, robust method to concatenate genes of ~870 bp five times and then sequenced the resulting ~5,000 bp DNA by PacBio SMRT long-read sequencing. Our method improved upon previously published concatenation attempts, delivering greater sequencing depth, high-quality reads and limited sample preparation at little expense. We applied this efficient concatenation protocol to sequence nine DNA populations from a protein engineering study. The improved method is accompanied by a simple and user-friendly analysis pipeline, DeCatCounter, for sequencing medium-length sequences efficiently at one-fifth of the cost.
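A heavily simplified sketch of the deconcatenation-and-counting idea: split each long read at a linker sequence and tally the resulting single-gene units. The linker sequence, length tolerance, and exact-match counting are placeholders; the real DeCatCounter pipeline is more involved, since it must also handle sequencing errors and partial units.

```python
from collections import Counter

LINKER = "GGTCTCGAGT"   # placeholder linker/spacer sequence (assumption)
UNIT_LEN = 870          # expected gene length, bp
TOL = 0.1               # accept units within +/-10% of UNIT_LEN

def deconcatenate(read, linker=LINKER):
    """Split a concatemer read at linker occurrences and keep only
    plausible single-gene units."""
    units = read.split(linker)
    lo, hi = UNIT_LEN * (1 - TOL), UNIT_LEN * (1 + TOL)
    return [u for u in units if lo <= len(u) <= hi]

def count_variants(reads):
    """Tally identical units across all reads (a crude stand-in for the
    clustering/counting a real pipeline would perform)."""
    counts = Counter()
    for read in reads:
        counts.update(deconcatenate(read))
    return counts

# usage: counts = count_variants(open("ccs_reads.txt").read().split())
```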


Geophysics ◽  
2017 ◽  
Vol 82 (5) ◽  
pp. P61-P73 ◽  
Author(s):  
Lasse Amundsen ◽  
Ørjan Pedersen ◽  
Are Osen ◽  
Johan O. A. Robertsson ◽  
Martin Landrø

The source depth influences the frequency band of seismic data. Due to the source ghost effect, it is advantageous to deploy sources deep to enhance the low-frequency content of seismic data. But for a given source volume, the bubble period decreases with source depth, thereby degrading the low-frequency content; at the same time, deep sources reduce the seismic bandwidth. Deploying sources at shallower depths has the opposite effects: a shallow source provides improved high-frequency content at the cost of degraded low-frequency content due to the ghosting effect, whereas the bubble period increases at shallower depths, slightly improving the low-frequency content. A solution to the challenge of extending the bandwidth on both the low- and high-frequency sides is to deploy over/under sources, in which sources are towed at two depths. We have developed a mathematical ghost model for over/under point sources fired in sequential and simultaneous modes, and we have found an inverse model which, on common-receiver gathers, can jointly perform designature and deghosting of the over/under source measurements. We relate the model for simultaneous-mode shooting to recent work on general multidepth-level array sources with previously known solutions. Two numerical examples related to over/under sequential shooting demonstrate the main principles and the viability of the method.
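The frequency dependence behind this trade-off can be illustrated with the textbook vertical-incidence ghost model g(f) = 1 + R exp(-i 4 pi f z / c) for a point source at depth z beneath a free surface with reflection coefficient R near -1. This is only a sketch of why summing sources at two depths fills the spectral notches, not the authors' full over/under inverse model; the depths below are assumptions.

```python
import numpy as np

C = 1500.0  # sound speed in water, m/s
R = -1.0    # idealised sea-surface reflection coefficient

def ghost(freq_hz, depth_m):
    """Complex vertical-incidence ghost response 1 + R exp(-i 2 pi f 2z/c)
    of a point source at depth z below the free surface."""
    return 1.0 + R * np.exp(-2j * np.pi * freq_hz * 2.0 * depth_m / C)

f = np.linspace(1.0, 125.0, 500)
over, under = ghost(f, 6.0), ghost(f, 15.0)
# A simultaneously fired over/under pair sums the two ghosted wavefields;
# each source's notches (multiples of c/2z: 125 Hz for the 6 m source,
# 50 and 100 Hz for the 15 m source) fall where the other has energy.
combined = np.abs(over + under)
```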


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yanduo Ren ◽  
Jiangbo Qian ◽  
Yihong Dong ◽  
Yu Xin ◽  
Huahui Chen

Nearest neighbour search (NNS) is at the core of large-scale data retrieval. Learning to hash is an effective way to solve this problem by representing high-dimensional data as compact binary codes. However, existing learning-to-hash methods need long codes to ensure query accuracy, and long codes incur a large storage cost, which severely restricts their use in big-data applications. An asymmetric learning-to-hash algorithm with variable bit encoding (AVBH) is proposed to solve this problem. AVBH uses two types of hash mapping functions to encode the dataset and the query set into bit strings of different lengths. For the dataset, the frequencies of the hash codes obtained after random Fourier feature encoding are analysed statistically: high-frequency codes are compressed into a longer representation and low-frequency codes into a shorter one. Each query point is quantized to a long hash code and compared against data codes cascade-concatenated to the same length. Experiments on public datasets show that the proposed algorithm effectively reduces storage cost and improves query accuracy.
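A rough sketch of the three ingredients named above: sign codes from random Fourier features, a frequency-based variable-length code assignment, and an asymmetric comparison that cascades a short data code up to the query length. The frequency cutoff and the prefix-based compression rule are assumptions, not the paper's exact scheme.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

def rff_sign_codes(X, n_bits):
    """Binary codes from the signs of random Fourier features
    cos(X @ W + b); a generic stand-in for the paper's encoder."""
    W = rng.normal(size=(X.shape[1], n_bits))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_bits)
    return (np.cos(X @ W + b) > 0).astype(np.uint8)

def variable_length_codes(codes, long_bits, short_bits):
    """Simplified frequency rule: codes whose short prefix occurs often
    keep the long representation; rare codes keep only the short prefix."""
    prefixes = [tuple(map(int, c[:short_bits])) for c in codes]
    freq = Counter(prefixes)
    cutoff = np.median(list(freq.values()))
    return [c[:long_bits] if freq[p] >= cutoff else c[:short_bits]
            for c, p in zip(codes, prefixes)]

def asymmetric_distance(query_code, data_code):
    """Compare a long query code with a possibly shorter data code by
    cascading (tiling) the data code to the query's length."""
    reps = -(-len(query_code) // len(data_code))  # ceiling division
    tiled = np.tile(data_code, reps)[:len(query_code)]
    return int(np.count_nonzero(query_code != tiled))
```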


2019 ◽  
Author(s):  
Sam Ronan Finnegan ◽  
Leslie Nitsche ◽  
Matteo Mondani ◽  
M Florencia Camus ◽  
Kevin Fowler ◽  
...  

Abstract: Male mate preferences have been demonstrated across a range of species, including the Malaysian stalk-eyed fly, Teleopsis dalmanni. This species is subject to sex-ratio (SR), an X-linked male meiotic driver that causes the dysfunction of Y-bearing sperm and the production of all-female broods. While there has been work on female avoidance of meiotic drive males, the mating decisions of drive-bearing males have not been considered previously. Drive males may be less able to bear the cost of choice, as SR is associated with a low-frequency inversion that reduces organismal fitness. Drive males may also experience weaker selection for preference maintenance if they are avoided by females. Using binary choice trials across two experiments, we confirmed male preference for large (fecund) females but found no evidence that the strength of male preference differs between drive and standard males. We showed that large-eyespan males displayed strong preference for large females, whereas small-eyespan males showed no preference. Taken together, these results suggest that, even though meiotic drive is associated with lower genetic quality, it does not directly interfere with male mate preference among available females. However, as drive males tend to have smaller eyespan (albeit only ~5% smaller on average), this will weaken their strength of preference, though only to a minor extent.


2015 ◽  
Vol 2015 ◽  
pp. 1-8
Author(s):  
Chao Huang ◽  
Xin Xu ◽  
Dunge Liu ◽  
Wanhua Zhu ◽  
Xiaojuan Zhang ◽  
...  

Effectively removing the influence of magnetic noise near the receiving sensors is a technical challenge for low-frequency magnetic communication. Traditional denoising methods struggle to extract high-quality original signals at low signal-to-noise ratio (SNR). In this paper, we analyze the numerical characteristics of the low-frequency magnetic field and propose two algorithms: fast optimization of blind source separation (FOBSS), which builds on blind source separation (BSS), and frequency-domain correlation extraction (FDCE). Together, FOBSS and FDCE enable signal extraction at low SNR. The method is verified in multiple field experiments, in which it suppresses magnetic noise by about 25 dB or more.
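Neither FOBSS nor FDCE is spelled out in the abstract, but the overall flow can be illustrated with an off-the-shelf ICA as a generic stand-in for the BSS stage, followed by a crude frequency-domain selection of the component carrying a known carrier. The band width and scoring rule are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_and_select(sensors, carrier_hz, fs):
    """Blind-source-separate multi-sensor data (FastICA as a generic
    stand-in for FOBSS), then keep the component with the largest share
    of spectral energy near the known carrier (in the spirit of FDCE).

    sensors: array of shape (n_samples, n_sensors).
    """
    sources = FastICA(n_components=sensors.shape[1],
                      random_state=0).fit_transform(sensors)
    spectra = np.abs(np.fft.rfft(sources, axis=0))
    freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fs)
    band = np.abs(freqs - carrier_hz) < 0.5  # +/- 0.5 Hz around carrier
    scores = spectra[band].sum(axis=0) / spectra.sum(axis=0)
    return sources[:, np.argmax(scores)]

# usage (synthetic): fs = 1000.0; t = np.arange(0.0, 10.0, 1.0 / fs)
# mix = np.c_[np.sin(2 * np.pi * 25 * t),
#             np.random.randn(t.size)] @ np.random.rand(2, 3)
# recovered = separate_and_select(mix, carrier_hz=25.0, fs=fs)
```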


2020 ◽  
Author(s):  
Velimir Ilić ◽  
Alessandro Bertolini ◽  
Fabio Bonsignorio ◽  
Dario Jozinović ◽  
Tomasz Bulik ◽  
...  

The analysis of low-frequency gravitational-wave (GW) data is a crucial mission of GW science, and the performance of Earth-based GW detectors is largely determined by the ability to mitigate low-frequency ambient seismic noise and other seismic influences. This task requires multidisciplinary research in the fields of seismic sensing, signal processing, robotics, machine learning and mathematical modeling.

In practice, this kind of research is conducted by large teams of researchers with different expertise, so project management emerges as an important real-life challenge in projects for the acquisition, processing and interpretation of seismic data from GW detector sites. A prominent example that successfully deals with this aspect is the COST Action G2Net (CA17137 - A network for Gravitational Waves, Geophysics and Machine Learning) and its seismic research group, which counts more than 30 members.

In this talk we will review the structure of the group, present its goals and recent activities, and present new methods for mitigating seismic influences at GW detector sites that will be developed and applied within this collaboration.

This publication is based upon work from CA17137 - A network for Gravitational Waves, Geophysics and Machine Learning, supported by COST (European Cooperation in Science and Technology).

