Signal Processing Techniques for Optical Transmission Based on Eigenvalue Communication

Author(s):  
Jonas Koch ◽  
Ken Chan ◽  
Christian G. Schaeffer ◽  
Stephan Pachnicke

A minimum mean squared error (MMSE) equalizer can effectively increase transmission performance for nonlinear Fourier transform (NFT) based communication systems. Other equalization schemes, based on nonlinear equalizer approaches or neural networks, are interesting for NFT transmission due to their ability to deal with nonlinear correlations of the NFT's eigenvalues and their coefficients. We experimentally investigated single- and dual-polarization long-haul transmission with several modulation schemes and compared different equalization techniques, including joint detection equalization and the use of neural networks. We observed that joint detection equalization provides range increases for shorter transmission distances while having low numerical complexity. With the different equalizers we could further achieve bit error rates (BER) below the HD-FEC threshold for significantly longer transmission distances in comparison to no equalization.

Manuscript received August 6, 2020; revised November 2, 2020; accepted December 8, 2020. Date of publication December 16, 2020.
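To make the equalization idea concrete, the following minimal sketch (an assumption-laden illustration, not the authors' implementation) applies a one-tap linear MMSE equalizer to noisy QPSK symbols standing in for modulated NFT spectral coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: QPSK symbols standing in for modulated NFT spectral
# coefficients, distorted by an unknown complex gain and additive noise.
n_train, n_data = 1000, 10000
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, n_train + n_data)]

h = 0.8 * np.exp(1j * 0.3)                                   # assumed channel rotation/attenuation
noise = 0.05 * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))
y = h * x + noise

# One-tap linear MMSE (Wiener) coefficient estimated from the training block:
#   w = E[x y*] / E[|y|^2]
y_tr, x_tr = y[:n_train], x[:n_train]
w = np.vdot(y_tr, x_tr) / np.vdot(y_tr, y_tr)

x_hat = w * y[n_train:]                                      # equalized data symbols

# Hard decision back onto the QPSK alphabet and symbol error rate
decisions = qpsk[np.argmin(np.abs(x_hat[:, None] - qpsk[None, :]), axis=1)]
ser = np.mean(decisions != x[n_train:])
print(f"symbol error rate after MMSE equalization: {ser:.4f}")
```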


2021 ◽  
Author(s):  
Youjie Ye ◽  
Yunfei Chen

Deep learning (DL) methods have proven effective in improving the performance of channel estimation and signal detection. In this work, we propose three DL algorithms: a fully connected deep neural network (FCDNN), a convolutional neural network (CNN), and a long short-term memory (LSTM) neural network for signal processing in multiuser orthogonal frequency-division multiplexing (OFDM) communication systems. The bit error rates (BERs) of these DL methods are compared with the conventional linear minimum mean squared error (LMMSE) detector. Additionally, the relationships between the BER and the signal-to-interference ratio (SIR), signal-to-noise ratio (SNR), number of interfering users (NoI), and modulation type are investigated. Numerical results show that all DL methods outperform LMMSE under different multiuser interference conditions, and that FCDNN and LSTM give the best and most robust anti-multiuser-interference performance. This work shows that the FCDNN and LSTM networks have strong anti-interference ability and are useful in multiuser OFDM systems.
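As a rough illustration of the fully connected detector idea (layer sizes, subcarrier count, and training details are assumptions, not taken from the paper), a minimal PyTorch sketch could look like this:

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions): 64 subcarriers, QPSK -> 128 bits
# per OFDM symbol; input = real/imag parts of received frequency-domain samples.
N_SUBCARRIERS = 64
N_BITS = 2 * N_SUBCARRIERS

class FCDNNDetector(nn.Module):
    """Fully connected detector: received samples in, bit probabilities out."""
    def __init__(self, n_in=2 * N_SUBCARRIERS, n_out=N_BITS, hidden=(512, 256, 128)):
        super().__init__()
        layers, prev = [], n_in
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        layers += [nn.Linear(prev, n_out), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)

model = FCDNNDetector()
criterion = nn.BCELoss()                         # bitwise cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for (received samples, transmitted bits) pairs.
y_batch = torch.randn(32, 2 * N_SUBCARRIERS)
bits = torch.randint(0, 2, (32, N_BITS)).float()

optimizer.zero_grad()
loss = criterion(model(y_batch), bits)
loss.backward()
optimizer.step()
```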


Network ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 50-74
Author(s):  
Divyanshu Pandey ◽  
Adithya Venugopal ◽  
Harry Leib

Most modern communication systems, such as those intended for deployment in IoT applications or 5G and beyond networks, utilize multiple domains for transmission and reception at the physical layer. Depending on the application, these domains can include space, time, frequency, users, code sequences, and transmission media, to name a few. As such, the design criteria of future communication systems must be cognizant of the opportunities and the challenges that exist in exploiting the multi-domain nature of the signals and systems involved for information transmission. Focusing on the physical layer, this paper presents a novel mathematical framework using tensors to represent, design, and analyze multi-domain systems. Various domains can be integrated into the transceiver design scheme using tensors, and tools from multi-linear algebra can be used to develop simultaneous signal processing techniques across all the domains. In particular, we present tensor partial response signaling (TPRS), which allows the introduction of controlled interference within elements of a domain and also across domains. We develop the TPRS system using the tensor contracted convolution to generate a multi-domain signal with desired spectral and cross-spectral properties across domains. In addition, by studying the information theoretic properties of the multi-domain tensor channel, we present the trade-off between different domains that can be harnessed using this framework. Numerical examples for capacity and mean square error are presented to highlight the domain trade-off revealed by the tensor formulation. Furthermore, an application of the tensor framework to MIMO Generalized Frequency Division Multiplexing (GFDM) is also presented.
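The following small NumPy sketch illustrates the kind of multi-domain tensor operation such a framework builds on: a channel that jointly couples the space and frequency domains of a transmitted signal tensor via tensor contraction. The dimensions and contraction pattern are illustrative assumptions, not the paper's TPRS construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative multi-domain signal: a (space, frequency, time) tensor of
# transmitted symbols. All dimensions are arbitrary assumptions.
n_tx, n_rx, n_freq, n_time = 4, 4, 16, 8
X = (rng.standard_normal((n_tx, n_freq, n_time))
     + 1j * rng.standard_normal((n_tx, n_freq, n_time)))

# A "tensor channel" coupling space and frequency:
# H maps (tx antenna t, input subcarrier f) -> (rx antenna r, output subcarrier k).
H = (rng.standard_normal((n_rx, n_freq, n_tx, n_freq))
     + 1j * rng.standard_normal((n_rx, n_freq, n_tx, n_freq))) / np.sqrt(n_tx * n_freq)

# Tensor contraction over the shared (tx, frequency) domains; the time
# domain is carried through untouched.
Y = np.einsum('rktf,tfn->rkn', H, X)

noise = 0.01 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))
Y = Y + noise
print(Y.shape)   # (n_rx, n_freq, n_time): received multi-domain tensor
```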


2020 ◽  
Vol 32 (18) ◽  
pp. 15249-15262
Author(s):  
Sid Ghoshal ◽  
Stephen Roberts

Much of modern practice in financial forecasting relies on technicals, an umbrella term for several heuristics applying visual pattern recognition to price charts. Despite their ubiquity in financial media, the reliability of their signals remains a contentious and highly subjective form of ‘domain knowledge’. We investigate the predictive value of patterns in financial time series, applying machine learning and signal processing techniques to 22 years of US equity data. By reframing technical analysis as a poorly specified, arbitrarily preset feature-extractive layer in a deep neural network, we show that better convolutional filters can be learned directly from the data, and provide visual representations of the features being identified. We find that an ensemble of shallow, thresholded convolutional neural networks optimised over different resolutions achieves state-of-the-art performance on this domain, outperforming technical methods while retaining some of their interpretability.
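A minimal PyTorch sketch of the core idea, learning shallow convolutional filters directly from normalized price windows instead of hand-coded chart patterns, might look as follows (window length, filter count, and read-out head are assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class ShallowPriceCNN(nn.Module):
    """Single convolutional layer over a 1-D price window, a thresholded
    (ReLU) activation, max-pooling of filter responses, and a linear read-out
    predicting the sign of the next return. Sizes are illustrative assumptions."""
    def __init__(self, window=30, n_filters=8, kernel=5):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size=kernel)
        self.act = nn.ReLU()                       # thresholding of filter responses
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Linear(n_filters, 1)

    def forward(self, prices):                     # prices: (batch, window)
        z = self.conv(prices.unsqueeze(1))         # (batch, n_filters, window - kernel + 1)
        z = self.pool(self.act(z)).squeeze(-1)     # (batch, n_filters)
        return self.head(z)                        # logit for "next return is positive"

model = ShallowPriceCNN()
logits = model(torch.randn(16, 30))                # dummy batch of 16 normalized price windows
print(logits.shape)                                # torch.Size([16, 1])
```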


2017 ◽  
Vol 3 (1) ◽  
pp. 10
Author(s):  
Debby E. Sondakh

Classification has been considered an important tool for extracting useful information from healthcare datasets. It may be applied to recognize diseases from symptoms. This paper aims to compare and evaluate different neural network classification algorithms for healthcare datasets. The algorithms considered here are the Multilayer Perceptron, Radial Basis Function, and Voted Perceptron, which are evaluated based on the resulting classifiers' accuracy, precision, mean absolute error and root mean squared error rates, and classifier training time. All the algorithms are applied to five multivariate healthcare datasets: the Echocardiogram, SPECT Heart, Chronic Kidney Disease, Mammographic Mass, and EEG Eye State datasets. Among the three algorithms, this study concludes that the best algorithm for the chosen datasets is the Multilayer Perceptron. It achieves the highest scores on all performance parameters tested. It can produce a high-accuracy classifier model with a low error rate, but suffers from long training times, especially on large datasets. Voted Perceptron's performance is the lowest on all parameters tested. For further research, an investigation may be conducted to analyze whether the number of hidden layers in the Multilayer Perceptron's architecture has a significant impact on the training time.
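A short scikit-learn sketch of this kind of comparison, here with a synthetic stand-in dataset and an assumed Multilayer Perceptron configuration rather than the study's setup, reporting accuracy, error rates, and training time:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, mean_absolute_error, mean_squared_error

# Synthetic stand-in for a multivariate healthcare dataset (assumption).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=2, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=42)

start = time.perf_counter()
clf.fit(X_tr, y_tr)                                # training time is part of the comparison
train_time = time.perf_counter() - start

y_pred = clf.predict(X_te)
print(f"accuracy      : {accuracy_score(y_te, y_pred):.3f}")
print(f"MAE           : {mean_absolute_error(y_te, y_pred):.3f}")
print(f"RMSE          : {np.sqrt(mean_squared_error(y_te, y_pred)):.3f}")
print(f"training time : {train_time:.2f} s")
```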

