Unconstrained global simulations of ocean tides up to degree 3 for satellite gravimetry

Author(s):  
Roman Sulzbach ◽  
Henryk Dobslaw ◽  
Maik Thomas

Tidal de-aliasing of satellite gravimetric data is a critical task for correctly extracting gravimetric signatures of climate signals such as glacier melting or groundwater depletion, and it places high demands on the accuracy of the employed tidal solutions (Flechtner et al., 2016). Modern tidal atlases that are constrained by altimetry data achieve a high level of accuracy, especially for partial tides exhibiting large open-ocean signals (e.g. M2, K1). Since the achievable precision depends directly on the density and quality of the available altimetry data, the accuracy relative to the tidal amplitude drops for minor tidal excitations (worse signal-to-noise ratio) as well as in polar latitudes (sparse satellite data). This drop in relative accuracy can be reduced by employing an unconstrained tidal model that acts independently of altimetric data.

We will present recent results from the purely hydrodynamic, barotropic tidal model TiME (Weis et al., 2008) that benefit from a set of recently implemented upgrades. Among others, these include a revised scheme for dynamic feedbacks of self-attraction and loading; energy dissipation by parametrized internal wave drag; partial tide excitations by the tide-generating potential up to degree 3; and a pole-rotation scheme allowing for simulations dedicated to polar areas. Benefiting from these updates, the obtained solutions for major tides reach the same level of accuracy as comparable modern unconstrained tidal models. Furthermore, we show that the relative accuracy drops only moderately for tidal excitations with small excitation strength (e.g. for minor tides), thus narrowing the accuracy gap to data-constrained tidal atlases. As an example, we introduce solutions for minor tidal excitations of degrees 2 and 3 that represent valuable constraints on the expected ocean tide dynamics. While they are currently not considered for GRACE-FO de-aliasing, we demonstrate that third-degree tides can lead to relevant aliasing of satellite gravity fields and correspond closely to recently published empirical solutions (Ray, 2020).
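For context, the tide-generating potential that drives the partial tides discussed above is conventionally expanded in spherical harmonics; the following is the standard textbook form (not a formula quoted from the abstract), with M_b and d the mass and geocentric distance of the perturbing body, r and ψ the geocentric radius and zenith angle of the evaluation point, and P_n the Legendre polynomials:

```latex
V_{\mathrm{tid}}(r,\psi) \;=\; \frac{G M_b}{d} \sum_{n=2}^{\infty} \left(\frac{r}{d}\right)^{n} P_n(\cos\psi)
```

Each additional degree contributes an extra factor r/d (about 1/60 for the Moon at the Earth's surface), which explains why degree-3 tides are much weaker than their degree-2 counterparts yet, as argued above, still relevant for satellite gravimetry.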

2007 ◽  
Vol 50 (1) ◽  
pp. 116-123 ◽  
Author(s):  
Jiang-Cun ZHOU ◽  
He-Ping SUN

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1196 ◽  
Author(s):  
Seulah Lee ◽  
Babar Jamil ◽  
Sunhong Kim ◽  
Youngjin Choi

Myoelectric prostheses assist users in their daily lives. However, most users are forearm amputees, because the surface electromyography (sEMG) signals that convey motion intent must be acquired from a residual limb to control the myoelectric prosthesis. This study proposes a novel fabric vest socket with embroidered electrodes suitable for high-level upper-limb amputees, especially for shoulder disarticulation. The fabric vest socket consists of a rigid support and a fabric vest with embroidered electrodes. Several experiments were conducted to verify the practicality of the developed vest socket with embroidered electrodes. The sEMG signals were also measured with commercial Ag/AgCl electrodes to assess the performance of the embroidered electrodes in terms of signal amplitude, skin-electrode impedance, and signal-to-noise ratio (SNR). The results showed that the embroidered electrodes were as effective as the commercial electrodes. Posture classification was then carried out with able-bodied subjects to evaluate the usability of the developed vest socket. The average classification accuracy reached 97.92% for individual subjects and 93.2% across all subjects. In other words, the fabric vest socket with embroidered electrodes can measure sEMG signals with high accuracy. It is therefore expected to be readily wearable by high-level amputees for controlling their myoelectric prostheses, while also being more cost effective to fabricate than a traditional socket.
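The abstract does not state which SNR convention was used; one common choice for sEMG is the ratio of the RMS amplitude during a contraction to the RMS amplitude of the resting baseline, expressed in dB. A minimal Python sketch under that assumption (function name and segments are hypothetical, not taken from the paper):

```python
import numpy as np

def semg_snr_db(active: np.ndarray, rest: np.ndarray) -> float:
    """SNR estimate in dB: RMS of the contraction segment over RMS of the resting baseline."""
    rms_active = np.sqrt(np.mean(active ** 2))
    rms_rest = np.sqrt(np.mean(rest ** 2))
    return 20.0 * np.log10(rms_active / rms_rest)

# Hypothetical usage with two recorded sample arrays:
# print(semg_snr_db(emg_during_posture, emg_at_rest))
```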


Author(s):  
Michael Radermacher ◽  
Teresa Ruiz

Biological samples are radiation-sensitive and require imaging under low-dose conditions to minimize damage. As a result, images contain a high level of noise and exhibit signal-to-noise ratios that are typically significantly smaller than 1. Averaging techniques, either implicit or explicit, are used to overcome the limitations imposed by the high level of noise. Averaging of 2D images showing the same molecule in the same orientation results in highly significant projections. A high-resolution structure can be obtained by combining the information from many single-particle images to determine a 3D structure. Similarly, averaging of multiple copies of macromolecular assembly subvolumes extracted from tomographic reconstructions can lead to a virtually noise-free high-resolution structure. Cross-correlation methods are often used in the alignment and classification steps of averaging processes for both 2D images and 3D volumes. However, the high noise level can bias alignment and certain classification results. While other approaches may be implicitly affected, sensitivity to noise is most apparent in multireference alignments, 3D reference-based projection alignments and projection-based volume alignments. Here, the influence of the image signal-to-noise ratio on the value of the cross-correlation coefficient is analyzed and a method for compensating for this effect is provided.
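The core noise effect the abstract refers to can be illustrated independently of the paper's compensation method: additive noise attenuates the expected cross-correlation coefficient between a noisy image and a noise-free reference by the factor sqrt(SNR/(1+SNR)). The short Python sketch below (a toy 1-D Monte Carlo, not the authors' code) checks this relation numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def attenuated_ccc(snr: float, n: int = 100_000) -> tuple[float, float]:
    """Empirical vs. predicted correlation between a reference signal and a noisy copy."""
    s = rng.standard_normal(n)                         # reference "image" with unit variance
    noisy = s + rng.standard_normal(n) / np.sqrt(snr)  # additive noise with variance 1/SNR
    empirical = np.corrcoef(s, noisy)[0, 1]
    predicted = np.sqrt(snr / (1.0 + snr))             # sqrt(SNR/(1+SNR)) attenuation factor
    return empirical, predicted

for snr in (0.1, 0.5, 1.0):
    emp, pred = attenuated_ccc(snr)
    print(f"SNR={snr}: empirical CCC={emp:.3f}, predicted={pred:.3f}")
```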


2012 ◽  
Vol 108 (10) ◽  
pp. 2641-2652 ◽  
Author(s):  
K. Heimonen ◽  
E.-V. Immonen ◽  
R. V. Frolov ◽  
I. Salmela ◽  
M. Juusola ◽  
...  

In dim light, the scarcity of photons typically leads to poor vision. Nonetheless, many animals show visually guided behavior in dim environments. We investigated the signaling properties of photoreceptors of the dark-active cockroach (Periplaneta americana) using intracellular and whole-cell patch-clamp recordings to determine whether they show selective functional adaptations to darkness. As expected, dark-adapted photoreceptors generated large and slow responses to single photons. However, when light adapted, responses of both phototransduction and the nontransductive membrane to white noise (WN)-modulated stimuli remained slow, with corner frequencies of ∼20 Hz. This promotes temporal integration of light inputs and maintains high sensitivity of vision. Adaptive changes in dynamics were limited to dim conditions. Characteristically, both step and frequency responses stayed effectively unchanged for intensities >1,000 photons/s/photoreceptor. The signal-to-noise ratio (SNR) of the light responses was transiently higher at frequencies <5 Hz for ∼5 s after light onset but deteriorated to a lower value upon longer stimulation. Naturalistic light stimuli, as opposed to WN, evoked markedly larger responses with higher SNRs at low frequencies. This allowed realistic estimates of information transfer rates, which saturated at ∼100 bits/s at low light intensities. We therefore found selective adaptations beneficial for vision in dim environments in cockroach photoreceptors: large single-photon response amplitudes, a constantly high level of temporal integration of light inputs, saturation of response properties at low intensities, and only transiently efficient encoding of light contrasts. The results also suggest that the sources of the large functional variability among photoreceptors reside mostly in the phototransduction processes and not in the properties of the nontransductive membrane.
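Information transfer rates of the kind quoted above are commonly estimated in photoreceptor studies from the frequency-resolved SNR via the Shannon formula R = ∫ log2(1 + SNR(f)) df; whether this exact estimator was used here is not stated in the abstract. A minimal numerical sketch with hypothetical SNR values, not measured data:

```python
import numpy as np

def shannon_rate_bits_per_s(freqs_hz: np.ndarray, snr: np.ndarray) -> float:
    """Shannon estimate of the information rate: integrate log2(1 + SNR(f)) over frequency."""
    integrand = np.log2(1.0 + snr)
    df = np.diff(freqs_hz)
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * df))  # trapezoidal rule

# Hypothetical example: a flat SNR of 3 up to a 20 Hz corner frequency
f = np.linspace(0.0, 20.0, 201)
print(shannon_rate_bits_per_s(f, np.full_like(f, 3.0)))  # 20 Hz * log2(4) = 40 bits/s
```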


2020 ◽  
Vol 221 (2) ◽  
pp. 1190-1210 ◽  
Author(s):  
Anna F Purkhauser ◽  
Christian Siemes ◽  
Roland Pail

SUMMARY The GRACE and GRACE-FO missions have been observing time variations of the Earth's gravity field for more than 15 yr. For a possible successor mission, the need to continue mass change observations has to be balanced with the ambition for monitoring capabilities with an enhanced spatial and temporal resolution that will enable improved scientific results and will serve operational services and applications. Various study groups have performed individual simulations to analyse different aspects of possible NGGMs from a scientific and technical point of view. As these studies are not directly comparable due to different assumptions regarding mission design and instrumentation, the goal of this paper is to systematically analyse and quantify the key mission parameters (number of satellite pairs, orbit altitude, sensors) and the impact of various error sources (AO, OT models, post-processing) in a consistent simulation environment. Our study demonstrates that a single-pair mission with laser interferometry in a low orbit with a drag compensation system would be the only single-pair option that increases the performance compared to GRACE/GRACE-FO. Tailored post-processing is not able to achieve the same performance as a double-pair mission without post-processing, and such a mission concept does not solve the problem of temporal aliasing due to the observation geometry. In contrast, double-pair concepts have the potential to retrieve the full AOHIS signal and in some cases even double the performance relative to the comparable single-pair scenario. When combining a double pair with laser interferometry and an improved accelerometer, the sensor noise is, apart from the ocean tide modelling errors, one of the limiting factors. Therefore, the next big step in observing the gravity field globally with a satellite mission can only be taken by launching a double-pair mission. With this quantification of key architecture features of a future satellite gravity mission, the study aims to improve the available information to allow for informed decision making and to give an indication of priority for the different mission concepts.
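Temporal aliasing of imperfectly modelled tides, mentioned above as a limiting error source, can be illustrated with a simple frequency-folding argument. The Python sketch below assumes an idealized fixed revisit interval and ignores the orbit-geometry effects that dominate the alias periods of a real mission:

```python
def alias_period_days(tide_period_hours: float, sampling_interval_days: float = 1.0) -> float:
    """Alias period of a tidal line that is effectively sampled once per revisit interval."""
    f_tide = 24.0 / tide_period_hours                    # tidal frequency in cycles per day
    f_s = 1.0 / sampling_interval_days                   # sampling frequency in cycles per day
    f_alias = abs(f_tide - round(f_tide / f_s) * f_s)    # fold onto the nearest sampling harmonic
    return float("inf") if f_alias == 0 else 1.0 / f_alias

# M2 (period 12.4206 h) sampled once per day folds to an alias period of roughly 14.8 days
print(alias_period_days(12.4206))
```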


2019 ◽  
Vol 9 (5) ◽  
pp. 1009 ◽  
Author(s):  
Hui Fan ◽  
Meng Han ◽  
Jinjiang Li

Image degradation caused by shadows is likely to cause problems in image segmentation and target recognition. Existing shadow removal methods suffer from issues such as poor handling of small and thin shadows, a scarcity of end-to-end automatic methods, and the neglect of illumination and of high-level semantic information such as materials. An end-to-end deep convolutional neural network is proposed to further improve shadow removal. The network mainly consists of two sub-networks, an encoder–decoder network and a small refinement network. The former predicts the alpha shadow scale factor, and the latter refines the result to obtain sharper edge information. In addition, a new image database (remove shadow database, RSDB) is constructed, and qualitative and quantitative evaluations are made on databases such as UIUC, UCF and the newly created RSDB with various real images. Using the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) for quantitative analysis, the proposed algorithm shows a substantial improvement in both PSNR and SSIM over other methods. In qualitative comparisons, the network produces a cleaner, shadow-free image that is consistent with the original image's color and texture, with much better handling of detail. The experimental results show that the proposed algorithm is superior to other algorithms and is more robust in terms of both subjective visual quality and objective metrics.
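The PSNR and SSIM scores used for the quantitative comparison can be reproduced with standard library routines; the sketch below assumes 8-bit RGB images held as NumPy arrays and uses scikit-image (≥ 0.19 for the channel_axis argument) rather than the authors' own evaluation code:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_shadow_removal(result: np.ndarray, ground_truth: np.ndarray) -> tuple[float, float]:
    """PSNR (dB) and SSIM between a de-shadowed result and its shadow-free ground truth."""
    psnr = peak_signal_noise_ratio(ground_truth, result, data_range=255)
    ssim = structural_similarity(ground_truth, result, data_range=255, channel_axis=-1)
    return psnr, ssim
```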


2021 ◽  
Author(s):  
Lars Erik Sjöberg ◽  
Majid Abrehdary

This chapter describes a theory and an application of satellite gravity and altimetry data for determining the Moho constituents (i.e. Moho depth and density contrast), with support from a seismic Moho model in a least-squares adjustment. It presents and applies the Vening Meinesz-Moritz gravimetric-isostatic model in recovering the global Moho features. Internal and external uncertainty estimates are also determined. Special emphasis is devoted to methods for eliminating the so-called non-isostatic effects, i.e. gravimetric signals originating below the crust, signals from partly unknown density variations within the crust, and effects of delayed Glacial Isostatic Adjustment, as well as to capturing Moho features not related to isostatic balance. The global means of the computed Moho depths and density contrasts are 23.8 ± 0.05 km and 340.5 ± 0.37 kg/m³, respectively. The two Moho features vary between 7.6 and 70.3 km and between 21.0 and 650.0 kg/m³. Validation checks were performed for our modeled crustal depths using a recently published seismic model, yielding an RMS difference of 4 km.
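For a single point with two independent observations, the least-squares combination of a gravimetric and a seismic Moho depth estimate reduces to the familiar inverse-variance weighting shown below; this is a generic illustration of a least-squares adjustment, not the Vening Meinesz-Moritz solution itself:

```latex
\hat{D} \;=\; \frac{D_{\mathrm{grav}}/\sigma_{\mathrm{grav}}^{2} \;+\; D_{\mathrm{seis}}/\sigma_{\mathrm{seis}}^{2}}
               {1/\sigma_{\mathrm{grav}}^{2} \;+\; 1/\sigma_{\mathrm{seis}}^{2}},
\qquad
\sigma_{\hat{D}}^{2} \;=\; \left(\frac{1}{\sigma_{\mathrm{grav}}^{2}} + \frac{1}{\sigma_{\mathrm{seis}}^{2}}\right)^{-1}
```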


2021 ◽  
Author(s):  
Qihang Wang ◽  
Feng Liu ◽  
Guihong Wan ◽  
Ying Chen

Monitoring the depth of unconsciousness during anesthesia is useful both in clinical settings and in neuroscience investigations of brain mechanisms. The electroencephalogram (EEG) has been used as an objective means of characterizing altered brain arousal and/or cognition states induced by anesthetics in real time. Different general anesthetics affect cerebral electrical activity in different ways. However, the performance of conventional machine learning models on EEG data is unsatisfactory due to the low signal-to-noise ratio (SNR) of the EEG signals, especially in the office-based anesthesia setting. Deep learning models have been used widely in the field of Brain-Computer Interfaces (BCI) for classification and pattern recognition tasks due to their good generalization capability and their ability to handle noise. Compared with other BCI applications, where deep learning has demonstrated encouraging results, deep learning approaches for classifying different states of consciousness under anesthesia have been much less investigated. In this paper, we propose a new meta-learning framework based on deep neural networks, named Anes-MetaNet, to classify brain states under anesthetics. Anes-MetaNet is composed of convolutional neural networks (CNNs) to extract power spectrum features, a temporal sequence model based on Long Short-Term Memory (LSTM) networks to capture temporal dependencies, and a meta-learning framework to handle large cross-subject variability. We use a multi-stage training paradigm to improve performance, which is justified by visualizing the high-level feature mapping. Experiments on an office-based anesthesia EEG dataset demonstrate the effectiveness of the proposed Anes-MetaNet in comparison with existing methods.
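As a rough structural illustration of the CNN-plus-LSTM portion of such an architecture (the meta-learning stage is omitted, and all layer sizes and names are invented for the example, so this is not the published Anes-MetaNet), consider the following PyTorch sketch:

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    """Toy CNN+LSTM stack for EEG spectrogram windows shaped (batch, time, freq_bins)."""
    def __init__(self, freq_bins: int = 64, n_states: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.lstm = nn.LSTM(input_size=16 * 16, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_states)

    def forward(self, x):                            # x: (batch, time, freq_bins)
        b, t, f = x.shape
        feats = self.cnn(x.reshape(b * t, 1, f))     # per-time-step spectral features
        feats = feats.reshape(b, t, -1)
        out, _ = self.lstm(feats)                    # temporal dependencies across the window
        return self.head(out[:, -1])                 # classify from the last time step
```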


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Yuxin Huang

Modulation recognition of communication signals plays an important role in both civil and military applications. Neural-network-based modulation recognition methods can extract high-level abstract features that can be used to classify modulation types. Compared with traditional recognition methods based on manually defined features, they have the advantage of a higher recognition rate. However, in actual modulation recognition scenarios, due to inaccurate estimation of receiver parameters and other reasons, the input signal samples may exhibit large phase offsets, frequency offsets, and time-scale changes. Existing deep-learning-based modulation recognition methods have not considered these effects, resulting in a decreased recognition rate. A modulation recognition method based on a spatial transformation network is proposed in this paper. In the proposed network, prior models for synchronization in communication systems are introduced and realized through a spatial transformation subnetwork, so as to reduce the influence of phase offsets, frequency offsets, and time-scale differences. Experiments on simulated datasets show that, compared with the traditional CNN, ResNet, and CLDNN, the recognition rate of the proposed method increases by 8.0%, 5.8%, and 4.6%, respectively, when the signal-to-noise ratio is greater than 0. Moreover, the proposed network is also easier to train: the training time required for convergence is reduced by 4.5% and 80.7% compared with ResNet and CLDNN, respectively.
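The synchronization "prior models" referred to above amount to a parametric transform of the received IQ samples; a non-learned Python illustration of such a transform (hypothetical parameters, standing in for values the spatial transformation subnetwork would predict) is sketched below:

```python
import numpy as np

def apply_sync_correction(iq: np.ndarray, freq_offset_hz: float, phase_rad: float,
                          time_scale: float, fs_hz: float) -> np.ndarray:
    """Undo a carrier phase/frequency offset and a time-scale change on complex baseband samples."""
    n = np.arange(iq.size)
    derotated = iq * np.exp(-1j * (2.0 * np.pi * freq_offset_hz * n / fs_hz + phase_rad))
    t_new = np.arange(iq.size) * time_scale          # resample to compensate the time-scale change
    t_new = t_new[t_new <= n[-1]]
    return np.interp(t_new, n, derotated.real) + 1j * np.interp(t_new, n, derotated.imag)
```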


Author(s):  
Ghada Mohammad Tahir Kasim Aldabagh

The rapid growth of various technologies makes secure data transmission a very important issue. Cryptography is the most popular technique in many security scenarios. The major challenge is putting security in place without disturbing the personal data. In such cases, steganography can be used as a suitable alternative technique. Essentially, steganography is the art of hiding important data that we want to send over a channel or transmission medium. The information is hidden inside a carrier, which makes it difficult for an attacker to extract the data, and thus provides a high level of security and privacy. In general, there are five types of steganography: text, audio, video, image, and protocol. The carrier plays an important role in steganography, and its selection depends on the required level of security. We propose an algorithm that is useful for multi-level steganography and is more advantageous than the standard least significant bit (LSB) algorithm. In this paper, we used multi-level steganography, focusing on image steganography along with a fish algorithm to secure the text. We aimed to compare the performance of a basic algorithm with the proposed algorithm. The parameters considered for comparing the two algorithms are execution time and peak signal-to-noise ratio (PSNR). The results confirm the expected level of security for the important data. We used MATLAB 10 to obtain these results.
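For reference, the least significant bit (LSB) baseline mentioned above can be written in a few lines; the Python sketch below is a plain single-level LSB embed/extract pair for an 8-bit cover image, not the proposed multi-level scheme:

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: list[int]) -> np.ndarray:
    """Embed a bit sequence into the least significant bits of a flattened 8-bit cover image."""
    stego = cover.copy().reshape(-1)
    if len(bits) > stego.size:
        raise ValueError("message too long for this cover")
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit        # clear the LSB, then set it to the message bit
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> list[int]:
    """Recover the first n_bits least significant bits from the stego image."""
    return [int(v & 1) for v in stego.reshape(-1)[:n_bits]]
```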

