Measurement of geophysical effects on the large-scale gravitational-wave interferometer

2020 ◽  
Vol 29 (07) ◽  
pp. 2050050
Author(s):  
A. V. Gusev ◽  
E. Majorana ◽  
V. N. Rudenko ◽  
V. D. Yushkin

Geophysical applications of large free-mass laser interferometers, originally designed solely for the detection of gravitational radiation of astrophysical origin, are considered. Despite their suspended mass-mirrors, these interferometers act as rather accurate two-coordinate distance meters even at very low frequencies. In this regime, the measurement of geodynamic deformations becomes a parallel product of the long-term observations dictated by the blind search for gravitational waves (GW) of extraterrestrial origin. Compared with conventional laser strain meters, gravitational interferometers offer an increased absolute value of the deformation signal owing to their 3–4 km baseline. The magnitude of the tidal variations of the baseline is 150–200 microns, which suggests that the fine structure of geodynamic disturbances can be observed. This paper presents the results of processing geophysical measurements made on the Virgo interferometer during test (technical) series of observations in 2007–2009. The specific design of the mass-mirror suspensions in the Virgo gravitational interferometer also creates a unique possibility of separating gravitational and deformation perturbations by recording the mutual angular deviations of the suspensions of its central and end mirrors. This yields a measurement of the spatial derivative of the gravity acceleration along the Earth's geoid. In this mode, the physics of the interferometer is analyzed and the achievable sensitivity is estimated for the classical problem of detecting oscillations of the Earth's inner core.
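A quick back-of-the-envelope check of the strain scale implied by the numbers quoted above (the baseline and tidal amplitude are taken from the abstract; everything else is an illustrative sketch, not the authors' processing chain):

```python
# Rough estimate of the tidal strain seen by a km-scale gravitational-wave
# interferometer used as a geodynamic strain meter.  Baseline and tidal
# amplitude are the values quoted in the abstract; this is only a sketch.

baseline_m = 3.0e3           # Virgo arm length, lower end of the 3-4 km range
tidal_amplitude_m = 175e-6   # mid-range of the quoted 150-200 micron variation

tidal_strain = tidal_amplitude_m / baseline_m
print(f"tidal strain amplitude ~ {tidal_strain:.1e}")  # about 6e-8
```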

2021 ◽  
Vol 11 (9) ◽  
pp. 3868
Author(s):  
Qiong Wu ◽  
Hairui Zhang ◽  
Jie Lian ◽  
Wei Zhao ◽  
Shijie Zhou ◽  
...  

Energy harvested from renewable sources has shown great potential as a source of electricity for many years; however, several challenges still limit output performance, such as packaging and the low frequency of the waves. This paper proposes a bistable vibration system for harvesting low-frequency renewable energy. The bistable vibration model consists of an inverted cantilever beam with a mass block at its tip, operating in a random wave environment; a vibration energy harvesting system is developed by attaching a piezoelectric element to the surface of the cantilever beam. Experiments were carried out by reproducing the random wave environment with laboratory equipment. The results show that the mass block's response indeed changed from single-well vibration to bistable oscillation when a random wave signal and a periodic signal were applied together. It is shown that stochastic resonance can be activated reliably using the proposed bistable motion system, and, correspondingly, large-scale bistable responses can be generated to achieve effective amplitude enlargement of the input signals. Furthermore, the influence of the periodic excitation signal, an important design factor, on large-scale bistable motion is discussed in detail, laying a solid foundation for practical energy harvesting applications.
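The bistable, stochastic-resonance behaviour described above can be illustrated with a minimal double-well (Duffing-type) oscillator driven by noise plus a weak periodic signal. This is a sketch under assumed parameter values, not the experimental apparatus or the authors' model:

```python
import numpy as np

# Minimal double-well (Duffing-type) oscillator driven by random "wave" noise
# plus a weak periodic signal -- the setting in which stochastic resonance can
# trigger large inter-well (bistable) motion.  All parameters are illustrative.

a, b = 1.0, 1.0        # potential U(x) = -a*x^2/2 + b*x^4/4, wells at x = +/-1
gamma = 0.5            # damping
A, f0 = 0.3, 0.1       # weak periodic forcing: amplitude and frequency (Hz)
D = 0.4                # noise intensity of the random excitation
dt, n_steps = 1e-3, 200_000

rng = np.random.default_rng(0)
x, v = 1.0, 0.0        # start at rest in one well
xs = np.empty(n_steps)
for i in range(n_steps):
    t = i * dt
    restoring = a * x - b * x**3                # -dU/dx
    drive = A * np.cos(2 * np.pi * f0 * t)      # weak periodic signal
    noise = np.sqrt(2 * D / dt) * rng.standard_normal()
    v += (-gamma * v + restoring + drive + noise) * dt
    x += v * dt
    xs[i] = x

# Sign changes of x mark jumps between the two wells (large bistable response).
jumps = int(np.sum(np.abs(np.diff(np.sign(xs))) > 0))
print("inter-well transitions:", jumps)
```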


Symmetry ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 845
Author(s):  
Dongheun Han ◽  
Chulwoo Lee ◽  
Hyeongyeop Kang

The neural-network-based human activity recognition (HAR) technique is increasingly used for activity recognition of virtual reality (VR) users. The major issue with such a technique is the collection of large-scale training datasets, which are key to deriving a robust recognition model. However, collecting large-scale data is a costly and time-consuming process, and increasing the number of activities to be classified requires an even larger training dataset. Since training a model on a sparse dataset provides only limited features to the recognition model, it can cause problems such as overfitting and suboptimal results. In this paper, we present a data augmentation technique named gravity control-based augmentation (GCDA) to alleviate the sparse data problem by generating new training data from existing data. The benefit of exploiting the symmetrical structure of the data is that the amount of data is increased while its properties are preserved. The core concept of GCDA is two-fold: (1) decomposing the acceleration data obtained from the inertial measurement unit (IMU) into zero-gravity acceleration and gravitational acceleration and augmenting them separately, and (2) exploiting gravity as a directional feature and controlling it to augment the training data. Through comparative evaluations, we validated that applying GCDA to the training data yields a larger improvement in classification accuracy (96.39%) than typical data augmentation methods (92.29%) or no augmentation (85.21%).
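A minimal sketch of the core GCDA step described above, assuming a simple low-pass split of the IMU signal and a rotation of the gravity component; the filter constant, rotation axis, and function names are illustrative, not the authors' implementation:

```python
import numpy as np

# Split raw IMU acceleration into a gravitational component (low-pass) and a
# zero-gravity component (residual), then augment the gravity direction by a
# small rotation.  Filter constant, rotation axis and names are illustrative.

def split_gravity(acc, alpha=0.9):
    """Separate acc (N x 3) into gravity and zero-gravity acceleration."""
    gravity = np.empty_like(acc)
    g = acc[0]
    for i, sample in enumerate(acc):
        g = alpha * g + (1.0 - alpha) * sample   # simple IIR low-pass estimate
        gravity[i] = g
    return gravity, acc - gravity

def rotate_about_x(vecs, angle_rad):
    """Tilt 3-axis samples about the x-axis to control the gravity direction."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, c, -s],
                  [0.0, s, c]])
    return vecs @ R.T

# Example: one synthetic IMU sequence, augmented with a 15-degree gravity tilt.
rng = np.random.default_rng(1)
acc = rng.normal(scale=0.5, size=(128, 3)) + np.array([0.0, 0.0, 9.81])
gravity, zero_g = split_gravity(acc)
augmented = rotate_about_x(gravity, np.deg2rad(15.0)) + zero_g
print(augmented.shape)   # (128, 3): a new training sample
```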


Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1474
Author(s):  
Ruben Tapia-Olvera ◽  
Francisco Beltran-Carbajal ◽  
Antonio Valderrabano-Gonzalez ◽  
Omar Aguilar-Mejia

This proposal aims to overcome the problem that arises when diverse regulation devices and control strategies are involved in electric power system regulation design. When new devices are added to an electric power system after its topology and regulation goals have been defined, a new design stage is generally needed to obtain the desired outputs. Moreover, if the initial design is based on a model linearized around an equilibrium point, the new conditions might degrade the performance of the whole system. Our proposal demonstrates that power system performance can be guaranteed with a single design stage when an adequate adaptive scheme updates some critical controllers' gains. For large-scale power systems, this feature is illustrated with time-domain simulations showing the dynamic behavior of the significant variables. The transient response is enhanced in terms of maximum overshoot and settling time, as demonstrated by comparing the behavior of key variables with a StatCom, with and without a PSS. A B-spline neural network algorithm is used to define the best controllers' gains to efficiently attenuate low-frequency oscillations when a short-circuit event occurs. This strategy avoids dependency on power system parameters and models; only a dataset of typical variable measurements is required to achieve the expected behavior. The inclusion of a PSS and a StatCom with positive interaction enhances the dynamic performance of the system while illustrating the ability of the strategy to add different controllers in only one design stage.
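The adaptive-gain idea can be sketched with a small B-spline network whose weights are updated online from the regulation error; the basis order, learning rate, and signal names below are assumptions for illustration, not the controllers designed in the paper:

```python
import numpy as np

# A tiny B-spline network: the controller gain is a weighted sum of spline
# basis functions of a measured signal, and the weights adapt online from the
# regulation error.  Basis order, learning rate and signals are illustrative.

centers = np.linspace(-1.0, 1.0, 9)   # knot grid over the normalized input
width = centers[1] - centers[0]
weights = np.zeros_like(centers)      # network weights = adaptive gain table
eta = 0.05                            # learning rate

def basis(x):
    """First-order (triangular) B-spline basis functions evaluated at x."""
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, None)

def adaptive_gain(x_meas, error):
    """Return the current gain for x_meas and adapt the weights from error."""
    global weights
    phi = basis(x_meas)
    gain = float(weights @ phi)
    weights += eta * error * phi      # LMS-style online update
    return gain

# Example: shape the gain with a decaying oscillatory "speed deviation" signal,
# as would follow a short-circuit event in a time-domain simulation.
for t in np.linspace(0.0, 10.0, 1000):
    dw = 0.5 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 0.8 * t)
    adaptive_gain(dw, error=-dw)
print("learned gain at dw = 0.2:", round(adaptive_gain(0.2, 0.0), 4))
```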


2021 ◽  
Vol 11 (15) ◽  
pp. 6688
Author(s):  
Jesús Romero Leguina ◽  
Ángel Cuevas Rumin ◽  
Rubén Cuevas Rumin

The goal of digital marketing is to connect advertisers with users who are interested in their products. This means serving ads to users, which can lead to a user receiving hundreds of impressions of the same ad. Consequently, advertisers can define a maximum threshold on the number of impressions a user can receive, referred to as the frequency cap. However, low frequency caps mean many users do not engage with the advertiser, whereas with high frequency caps users may receive so many ads that they become annoyed and budget is wasted. We build a robust and reliable methodology to define the number of ads that should be delivered to different users to maximize the return on ad spend (ROAS) and reduce the possibility that users become annoyed with the advertised brand. The methodology uses a novel technique to find the optimal frequency cap based on the number of non-clicked impressions rather than the traditional number of received impressions. It is validated using simulations and large-scale datasets obtained from real ad campaign data. In summary, our work proves that it is feasible to address frequency capping optimization as a business problem, and we provide a framework that can be used to configure efficient frequency capping values.
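A minimal sketch of the capping rule described above, assuming a simple chronological impression log and hypothetical value/cost figures; the paper's actual ROAS optimization is more elaborate:

```python
from collections import defaultdict

# Cap on *non-clicked* impressions: estimate how click probability decays with
# the number of non-clicked impressions already shown to a user, and stop once
# one more impression is no longer expected to pay for itself.  The log format
# and the value/cost figures are hypothetical.

def optimal_frequency_cap(events, value_per_click, cost_per_impression):
    """events: chronological list of (user_id, clicked) impression records."""
    non_clicked = defaultdict(int)   # per-user non-clicked impressions so far
    shown = defaultdict(int)         # impressions served at each count n
    clicks = defaultdict(int)        # clicks obtained at each count n
    for user, clicked in events:
        n = non_clicked[user]
        shown[n] += 1
        if clicked:
            clicks[n] += 1
            non_clicked[user] = 0    # reset after engagement
        else:
            non_clicked[user] += 1
    # Smallest n at which the expected value of one more impression
    # no longer covers its cost.
    for n in sorted(shown):
        if (clicks[n] / shown[n]) * value_per_click < cost_per_impression:
            return n
    return max(shown) + 1

# Tiny synthetic log (hypothetical, not campaign data).
log = [("u1", True), ("u2", False), ("u2", True),
       ("u3", False), ("u3", False), ("u3", False)]
print(optimal_frequency_cap(log, value_per_click=2.0, cost_per_impression=0.01))  # 2
```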


1998 ◽  
Vol 58 (3) ◽  
pp. 3768-3776 ◽  
Author(s):  
B. Weyssow ◽  
J. D. Reuss ◽  
J. Misguich

2018 ◽  
Vol 22 (6) ◽  
pp. 3105-3124 ◽  
Author(s):  
Zilefac Elvis Asong ◽  
Howard Simon Wheater ◽  
Barrie Bonsal ◽  
Saman Razavi ◽  
Sopan Kurkute

Abstract. Drought is a recurring extreme climate event and among the most costly natural disasters in the world. This is particularly true over Canada, where drought is both a frequent and damaging phenomenon with impacts on regional water resources, agriculture, industry, aquatic ecosystems, and health. However, nationwide drought assessments are currently lacking and hampered by limited ground-based observations. This study provides a comprehensive analysis of historical droughts over the whole of Canada, including the role of large-scale teleconnections. Drought events are characterized by the Standardized Precipitation Evapotranspiration Index (SPEI) over various temporal scales (1, 3, 6, and 12 consecutive months, 6 months from April to September, and 12 months from October to September) applied to different gridded monthly data sets for the period 1950–2013. The Mann–Kendall test, rotated empirical orthogonal function, continuous wavelet transform, and wavelet coherence analyses are used, respectively, to investigate the trend, spatio-temporal patterns, periodicity, and teleconnectivity of drought events. Results indicate that southern (northern) parts of the country experienced significant trends towards drier (wetter) conditions, although substantial variability exists. Two spatially well-defined regions with different temporal evolution of droughts were identified – the Canadian Prairies and northern central Canada. The analyses also revealed the presence of a dominant periodicity of between 8 and 32 months in the Prairie region and between 8 and 40 months in the northern central region. These cycles of low-frequency variability are found to be associated principally with the Pacific–North American (PNA) pattern and the Multivariate El Niño–Southern Oscillation Index (MEI) relative to other considered large-scale climate indices. This study is the first of its kind to identify dominant periodicities in drought variability over the whole of Canada in terms of when the drought events occur, their duration, and how often they occur.
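As an illustration of one analysis step mentioned above, a minimal Mann–Kendall trend test applied to a synthetic monthly drought-index series (the series, its drift, and the 5% threshold are assumptions, not the study's gridded SPEI data):

```python
import numpy as np
from math import erf, sqrt

# Mann-Kendall trend test applied to a synthetic monthly drought-index series.
# The series, its drift and the significance threshold are illustrative only.

def mann_kendall(x):
    """Return the MK S statistic and a two-sided p-value (normal approximation,
    ties ignored)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return float(s), float(p)

# 50 years of monthly values with a weak drying (negative) drift added.
rng = np.random.default_rng(42)
spei_like = rng.normal(size=50 * 12) - 0.002 * np.arange(50 * 12)
s, p = mann_kendall(spei_like)
print(f"S = {s:.0f}, p = {p:.3f} ->", "significant trend" if p < 0.05 else "no trend")
```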


Author(s):  
Paolo Bergamo ◽  
Conny Hammer ◽  
Donat Fäh

ABSTRACT We address the relation between seismic local amplification and topographical and geological indicators describing the site morphology. We focus on parameters that can be derived from layers of diffuse information (e.g., digital elevation models, geological maps) and do not require in situ surveys; we term these parameters “indirect” proxies, as opposed to “direct” indicators (e.g., f0, VS30) derived from field measurements. We first compiled an extensive database of indirect parameters covering 142 and 637 instrumented sites in Switzerland and Japan, respectively; we collected topographical indicators at various spatial extents and focused on shared features in the geological descriptions of the two countries. We paired this proxy database with a companion dataset of site amplification factors at 10 frequencies within 0.5–20 Hz, empirically measured at the same Swiss and Japanese stations. We then assessed the robustness of the correlation between individual site-condition indicators and local response by means of statistical analyses; we also compared the proxy-site amplification relations at Swiss versus Japanese sites. Finally, we tested the prediction of site amplification by feeding ensembles of indirect parameters to a neural network (NN) structure. The main results are: (1) indirect indicators show higher correlation with site amplification in the low-frequency range (0.5–3.33 Hz); (2) topographical parameters primarily relate to local response not because of topographical amplification effects but because topographical features correspond to the properties of the subsurface, hence to stratigraphic amplification; (3) large-scale topographical indicators relate to low-frequency response, and smaller-scale indicators to higher-frequency response; (4) site amplification versus indirect proxy relations show a more marked regional variability when compared with direct indicators; and (5) the NN-based prediction of site response is best achieved in the 1.67–5 Hz band, with both geological and topographical proxies provided as input; topographical indicators alone perform better than geological parameters.
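The final prediction step can be sketched as a small multi-output regression network mapping indirect proxies to amplification factors at several frequencies; the features, network size, and synthetic data below are assumptions, not the authors' trained model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Small multi-output network mapping "indirect" proxies (topographic and
# geological descriptors needing no field survey) to amplification factors at
# ten frequencies.  Features, network size and data are synthetic placeholders.

rng = np.random.default_rng(0)
n_sites, n_freqs = 500, 10

# Hypothetical proxies: slope, smoothed elevation, distance to basin edge,
# coarse geology class (a one-hot encoding would be used for real classes).
X = rng.normal(size=(n_sites, 4))
# Synthetic "amplification" targets loosely tied to the proxies, demo only.
Y = (1.0 + 0.5 * np.tanh(X[:, :1] - 0.3 * X[:, 1:2])
     + 0.1 * rng.normal(size=(n_sites, n_freqs)))

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], Y[:400])                     # train on 400 sites
print("held-out R^2:", round(model.score(X[400:], Y[400:]), 3))
```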


2014 ◽  
Vol 115 (14) ◽  
pp. 144107 ◽  
Author(s):  
E. Todd Ryan ◽  
Stephen M. Gates ◽  
Stephan A. Cohen ◽  
Yuri Ostrovski ◽  
Ed Adams ◽  
...  
