Machine Learning Based MIMO Equalizer for High Frequency (HF) Communications

Author(s):  
Samuel Spillane ◽  
Kristopher H. Jung ◽  
Kate Bowers ◽  
Ture Peken ◽  
Michael H. Marefat ◽  
...  
2020 ◽  
Author(s):  
Mark B. Green ◽  
Linda H. Pardo ◽  
Scott W. Bailey ◽  
John L. Campbell ◽  
William H. McDowell ◽  
...  

2021 ◽  
Author(s):  
Jasmina Arifovic ◽  
Xue-Zhong 'Tony' He ◽  
Lijian Wei

2021 ◽  
Vol 40 (10) ◽  
pp. 759-767
Author(s):  
Rolf H. Baardman ◽  
Rob F. Hegge

Machine learning (ML) has proven its value in the seismic industry with successful implementations in areas of seismic interpretation such as fault and salt dome detection and velocity picking. The field of seismic processing research is also shifting toward ML applications in areas such as tomography, demultiple, and interpolation. Here, a supervised ML deblending algorithm is illustrated on a dispersed source array (DSA) data example in which both high- and low-frequency vibrators were deployed simultaneously. Training data pairs of blended and corresponding unblended data were constructed from conventional (unblended) data from another survey. From this training data, the method automatically learns a deblending operator that is then used to deblend the data from both the low- and the high-frequency vibrators of the DSA survey. The results obtained on the DSA data are encouraging and show that the ML deblending method can offer a well-performing, less user-intensive alternative to existing deblending methods.
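
To make the training-data construction concrete, here is a minimal sketch (not the authors' implementation) of how blended/unblended training pairs could be generated from conventional shot records by summing each record with a randomly delayed interfering shot; a supervised model would then learn the mapping from the blended input back to the unblended target. The array shapes, delay range, and blending scheme are illustrative assumptions.

```python
# Hypothetical sketch: build (blended, unblended) training pairs from
# conventional (unblended) shot records, in the spirit of the abstract.
import numpy as np

def blend_pairs(unblended, max_delay=200, rng=None):
    """Sum each shot record with a randomly time-shifted interfering shot.

    unblended : array of shape (n_shots, n_traces, n_samples)
    max_delay : maximum random firing-time delay in samples (assumed value)
    """
    rng = np.random.default_rng(rng)
    n_shots, n_traces, n_samples = unblended.shape
    blended = np.empty_like(unblended)
    for i in range(n_shots):
        # choose a different shot to act as the interfering (blended-in) source
        j = (i + rng.integers(1, n_shots)) % n_shots
        delay = int(rng.integers(0, max_delay))
        interferer = np.zeros_like(unblended[i])
        interferer[:, delay:] = unblended[j][:, :n_samples - delay]
        blended[i] = unblended[i] + interferer
    return blended, unblended

# Synthetic stand-in for real shot records: 8 shots, 64 traces, 1000 time samples
shots = np.random.randn(8, 64, 1000).astype(np.float32)
X, y = blend_pairs(shots, rng=0)  # a network would be trained to map X -> y
```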


2021 ◽  
Author(s):  
Lea Himmer ◽  
Zoé Bürger ◽  
Leonie Fresz ◽  
Janina Maschke ◽  
Lore Wagner ◽  
...  

Reactivation of newly acquired memories during sleep across hippocampal and neocortical systems is proposed to underlie systems memory consolidation. Here, we investigate spontaneous memory reprocessing during sleep by applying machine learning to source-space-transformed magnetoencephalographic data in a two-step exploratory and confirmatory study design. We decode memory-related activity from slow oscillations in the hippocampus, frontal cortex, and precuneus, indicating parallel memory processing during sleep. Moreover, we show complementary roles of the hippocampus and neocortex: whereas gamma activity indicates memory reprocessing in the hippocampus, delta and theta frequencies allow decoding of memory in the neocortex. The neocortex and hippocampus are linked through coherent activity and the modulation of high-frequency gamma oscillations by theta, a dynamic similar to memory processing during wakefulness. Overall, we noninvasively demonstrate localized, coordinated memory reprocessing in human sleep.
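
As a rough illustration of this style of decoding analysis (not the authors' pipeline), the sketch below trains a cross-validated linear classifier on log band power in delta, theta, and gamma bands computed from simulated source-level epochs; the sampling rate, band limits, epoch layout, and labels are assumptions for demonstration only.

```python
# Illustrative sketch: decode a binary memory condition from band-limited
# power of simulated source-level epochs with a cross-validated classifier.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

fs = 250  # sampling rate in Hz (assumed)
bands = {"delta": (1, 4), "theta": (4, 8), "gamma": (30, 80)}  # Hz

def band_power_features(epochs):
    """epochs: (n_epochs, n_sources, n_samples) -> (n_epochs, n_sources * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # mean power per band and source
    return np.log(np.concatenate(feats, axis=-1))   # log power, flattened

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 3, fs * 2))  # 120 epochs, 3 sources, 2 s each
labels = rng.integers(0, 2, size=120)           # memory vs. control condition

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, band_power_features(epochs), labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # ~0.5 for random data
```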


2019 ◽  
Vol 2019 ◽  
Author(s):  
Sy Taffel

Decision-making machines are today 'trusted' to perform or assist with a rapidly expanding array of tasks. Indeed, many contemporary industries could not now function without them. Nevertheless, this trust in and reliance upon digital automation is far from unproblematic. This paper combines insights drawn from qualitative research with creative industries professionals, with approaches derived from software studies and media archaeology, to critically interrogate three ways that digital automation is currently employed and the accompanying questions that relate to trust. Firstly, digital automation is examined as a way of saving time and/or reducing human labor, such as when programmers use automated build tools or graphical user interfaces. Secondly, automation enables new types of behavior by operating at more-than-human speeds, as exemplified by high-frequency trading algorithms. Finally, the mode of digital automation associated with machine learning attempts to both predict and influence human behaviors, as epitomized by personalization algorithms within social media and search engines. While creative machines are increasingly trusted to underpin industries, culture and society, we should at least query the desirability of increasing dependence on these technologies as they are currently employed. These for-profit, corporate-controlled tools performatively reproduce a neoliberal worldview. Discussing misplaced trust in digital automation frequently conjures an imagined binary opposition between humans and machines; however, this reductive fantasy conceals the far more concrete conflict between differing technocultural assemblages composed of humans and machines. Across the examples explored in this paper, what emerges are numerous ways in which creative machines are used to perpetuate social inequalities.


2021 ◽  
Author(s):  
Rodrigo Rivera Martinez ◽  
Diego Santaren ◽  
Olivier Laurent ◽  
Ford Cropley ◽  
Cecile Mallet ◽  
...  

Deploying a dense network of sensors around emitting industrial facilities makes it possible to detect and quantify possible CH4 leaks and to monitor the emissions continuously. Building such a monitoring network from highly precise instruments is limited by their elevated cost and by power-consumption and maintenance requirements. Low-cost, low-power metal oxide sensors could be a convenient alternative for deploying this kind of network at a fraction of the cost, with measurement quality that is satisfactory for such applications.

Recent studies have tested metal oxide sensors (MOx) under natural and controlled conditions for measuring atmospheric methane concentrations and have shown fair agreement with high-precision instruments such as cavity ring-down spectrometers (CRDS). Such results open perspectives regarding the potential of MOx sensors as an alternative for measuring and quantifying CH4 emissions at industrial facilities. However, such sensors are known to drift with time, to be highly sensitive to the water vapor mole fraction, to have poor selectivity with several known cross-sensitivities to other species, and to be significantly sensitive to environmental factors such as temperature and pressure. Different approaches for deriving CH4 mole fractions from the MOx signal and ancillary parameter measurements have been employed to overcome these problems, ranging from traditional approaches such as linear or multilinear regression to machine learning (ANN, SVM, or random forest).

Most studies have focused on deriving ambient CH4 concentrations under different conditions, but few tests have assessed the ability of these sensors to capture high-frequency CH4 variations with peaks of elevated concentration, which correspond well to the signal observed from point sources at industrial sites with leaks and isolated methane emissions. We conducted a continuous controlled experiment over four months (November 2019 to February 2020) at LSCE, Saclay, France, in which three types of MOx sensors from Figaro® measured high-frequency CH4 peaks with concentrations varying from atmospheric background levels up to 24 ppm. We developed a calibration strategy including a two-step baseline correction and compared different approaches for reconstructing CH4 spikes, such as linear, multilinear, and polynomial regression as well as ANN and random forest algorithms. We found that baseline correction in the pre-processing stage improved the reconstruction of CH4 concentrations in the spikes. The random forest models performed better than the other methods, achieving a mean RMSE of 0.25 ppm when reconstructing peak amplitudes over windows of 4 days. In addition, we conducted tests to determine the minimum amount of data required to train successful models for predicting CH4 spikes and the frequency of re-calibration/re-training needed under these controlled circumstances. We concluded that for a target RMSE <= 0.3 ppm at a measurement frequency of 5 s, 4 days of training are required, and re-calibration/re-training is recommended every 30 days.

Our study presents a new approach for processing and reconstructing observations from low-cost CH4 sensors and highlights their potential for quantifying high-concentration releases at industrial facilities.
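
The sketch below illustrates the general shape of such a workflow on simulated data: a simple rolling-minimum baseline removal (a stand-in for the two-step baseline correction, whose details are not reproduced here) followed by a random-forest reconstruction of CH4 from the corrected MOx signal plus ancillary variables. All signals, parameter values, and feature choices are illustrative assumptions.

```python
# Hedged sketch: baseline removal + random-forest reconstruction of CH4 spikes
# from a simulated MOx signal and ancillary variables (temperature, humidity).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def remove_baseline(signal, window=720):
    """Subtract a slowly varying baseline estimated as a rolling minimum
    (720 samples ~ 1 hour at 5 s sampling; assumed window length)."""
    baseline = pd.Series(signal).rolling(window, min_periods=1).min()
    return signal - baseline.to_numpy()

rng = np.random.default_rng(1)
n = 4 * 17280                                   # ~4 days of 5 s samples
spikes = rng.gamma(2.0, 3.0, n) * (rng.random(n) < 0.01)
ch4 = 2.0 + spikes                              # ppm: background plus sparse peaks
mox = 1.0 / (1.0 + 0.1 * ch4) + 0.02 * rng.standard_normal(n)  # fake sensor response
temp = 20 + 5 * np.sin(2 * np.pi * np.arange(n) / 17280)       # daily cycle, degC
rh = 50 + 10 * rng.standard_normal(n)                          # relative humidity, %

X = np.column_stack([remove_baseline(mox), temp, rh])
split = int(0.8 * n)                            # train on the first ~3 days
model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X[:split], ch4[:split])
rmse = np.sqrt(mean_squared_error(ch4[split:], model.predict(X[split:])))
print(f"hold-out RMSE: {rmse:.2f} ppm")
```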


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Wei Yin ◽  
Qian Chen ◽  
Shijie Feng ◽  
Tianyang Tao ◽  
Lei Huang ◽  
...  

The multi-frequency temporal phase unwrapping (MF-TPU) method, a classical phase unwrapping algorithm for fringe projection techniques, can eliminate phase ambiguities even when measuring spatially isolated scenes or objects with discontinuous surfaces. In the simplest and most efficient case of MF-TPU, two groups of phase-shifting fringe patterns with different frequencies are used: the high-frequency one is applied for 3D reconstruction of the tested object, and the unit-frequency one is used to assist unwrapping of the high-frequency wrapped phase. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that its absolute phase can be successfully recovered without any fringe order errors. However, due to non-negligible noise and other error sources in actual measurements, the frequency of the high-frequency fringes is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns allows phases of higher frequency to be unwrapped, but at the expense of a prolonged pattern sequence. Building on recent developments and advances in machine learning for computer vision and computational imaging, this work demonstrates that deep learning techniques can realize TPU automatically through supervised learning, an approach termed deep learning-based temporal phase unwrapping (DL-TPU), which substantially improves unwrapping reliability compared with MF-TPU even under different types of error sources, e.g., intensity noise, low fringe modulation, projector nonlinearity, and motion artifacts. Furthermore, to the best of our knowledge, we demonstrate experimentally that a high-frequency phase with 64 periods can be directly and reliably unwrapped from one unit-frequency phase using DL-TPU. These results highlight that challenging issues in optical metrology can potentially be overcome through machine learning, opening new avenues for designing powerful and highly accurate high-speed 3D imaging systems that are ubiquitous in today's science, industry, and multimedia.
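
For reference, the classical MF-TPU step that DL-TPU is compared against can be written compactly: the fringe order of the high-frequency wrapped phase is estimated from the unit-frequency absolute phase and rounded to the nearest integer, and it is exactly this rounding that becomes unreliable at high fringe counts under noise. The sketch below demonstrates the computation on an ideal 1-D example; the variable names and simulated phases are illustrative.

```python
# Minimal sketch of classical multi-frequency temporal phase unwrapping (MF-TPU):
# a unit-frequency absolute phase guides unwrapping of the high-frequency phase.
import numpy as np

def mf_tpu_unwrap(phi_high_wrapped, phi_unit, f_high):
    """phi_high_wrapped : wrapped high-frequency phase in (-pi, pi]
    phi_unit            : absolute unit-frequency phase in [0, 2*pi)
    f_high              : number of fringe periods in the high-frequency pattern"""
    # Fringe order k; noise in either phase can flip the rounding (fringe order errors)
    k = np.round((f_high * phi_unit - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * k

# Ideal 1-D example with a 16-period high-frequency pattern
x = np.linspace(0, 1, 1000, endpoint=False)
f_high = 16
phi_abs = 2 * np.pi * f_high * x              # ground-truth absolute phase
phi_wrapped = np.angle(np.exp(1j * phi_abs))  # wrap into (-pi, pi]
phi_unit = 2 * np.pi * x                      # unit-frequency absolute phase
recovered = mf_tpu_unwrap(phi_wrapped, phi_unit, f_high)
assert np.allclose(recovered, phi_abs)
```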

