Design of Extended Hamming Code Technique Encryption for Audio Signals by Double Code Error Prediction

Author(s):  
R. Asokan ◽  
T. Vijayakumar

Noise can corrupt a transmitted message. This is true for both voice and digital communications sent to and from computer systems; errors tend to occur during transmission. Computer memory is the most common place to use Hamming code error correction. With extra parity/redundancy bits added, a Hamming code can detect and correct single-bit errors. Hamming coding is often used for short-distance data transmission. When scaling it to longer data lengths, the redundancy bits are interspersed and later removed. The new Hamming code approach can be quickly and easily adapted to any situation. As a result, it is well suited to sending large data bitstreams, since the ratio of overhead bits to data bits is much lower. This article investigates extended Hamming codes for product codes. The proposal particularly emphasises how well they perform at low error rates, which is critical for multimedia wireless applications. It provides a foundation and a comprehensive set of methods for quantitatively evaluating this performance without the need for time-consuming simulations. It offers fresh theoretical findings on the well-known approximation in which the bit error rate is roughly equal to the frame error rate times the ratio of the minimum distance to the codeword length. Moreover, the analytical method is applied to practical design considerations such as shortened and punctured codes, along with the calculation of payload and redundancy bits. Using the extended identity equation on the dual codes, decoding can be done at the first instance. A redundancy of 43.48% is achieved during testing, a substantial reduction obtained in this research work.
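The single-error-correction, double-error-detection behaviour described above can be sketched for an extended Hamming (8,4) code. The parity layout below is one common convention, not necessarily the construction used in the paper:

```python
# Sketch of an extended Hamming (8,4) code: a (7,4) Hamming code plus an
# overall parity bit, which corrects single-bit errors and detects
# double-bit errors. Bit positions follow the classic 1-based layout
# where parity bits sit at positions 1, 2 and 4.

def encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into an 8-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4              # covers positions 4, 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]
    return word + [sum(word) % 2]  # overall parity extends (7,4) to (8,4)

def decode(r):
    """Return (data_bits, status); status is 'ok', 'corrected' or 'double_error'."""
    r = list(r)
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based position of a single error
    overall = sum(r) % 2               # even-parity check over all 8 bits
    if syndrome and overall:           # single error inside the (7,4) part
        r[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome:                     # syndrome set but overall parity clean
        status = "double_error"
    elif overall:                      # error was in the overall parity bit
        status = "corrected"
    else:
        status = "ok"
    return [r[2], r[4], r[5], r[6]], status
```

For these parameters the overhead is 4 redundancy bits per 4 data bits; the paper's 43.48% figure refers to its own longer construction.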

2018 ◽  
Author(s):  
Matthias Meyer ◽  
Samuel Weber ◽  
Jan Beutel ◽  
Lothar Thiele

Abstract. Natural hazards, e.g. due to slope instabilities, are a significant risk for the population of mountainous regions. Monitoring of micro-seismic signals can be used for process analysis and risk assessment. However, these signals are subject to external influences, e.g. anthropogenic or natural noise. Successful analysis depends strongly on the capability to cope with such external influences. For correct slope characterization it is thus important to be able to identify, quantify and take these influences into account. In long-term monitoring scenarios, manual identification is infeasible due to the large data quantities, demanding accurate automated analysis methods. In this work we present a systematic strategy to identify multiple external influences, characterize their impact on micro-seismic analysis and develop methods for automated identification. We apply the developed strategy to a real-world, multi-sensor, multi-year micro-seismic monitoring experiment on the Matterhorn Hörnliridge (CH). We present a convolutional neural network for micro-seismic data that detects external influences originating from mountaineers, a major unwanted influence, with an error rate of less than 1 %, which is 3× lower than that of comparable algorithms. Moreover, we present an ensemble classifier for the same task that obtains an error rate of 0.79 % and an F1 score of 0.9383 by using both images and micro-seismic data. Applying the classifiers to the experiment data reveals that approximately 1/4 of the events detected by an event detector are due not to seismic activity but to anthropogenic mountaineering influences, and that time periods with mountaineer activity have a 9× higher event rate. In light of these findings we argue that a systematic identification of external influences, as presented in this paper, is a prerequisite for a qualitative analysis.
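The reported error rate and F1 score follow the standard confusion-matrix definitions, which can be sketched briefly:

```python
# Standard classification metrics of the kind used to evaluate the
# mountaineer detector: error rate and F1 score computed from the four
# confusion-matrix counts (true/false positives and negatives).

def metrics(tp, fp, fn, tn):
    """Return (error_rate, f1) for the given confusion-matrix counts."""
    total = tp + fp + fn + tn
    error_rate = (fp + fn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return error_rate, f1
```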


Author(s):  
GOUSIA NABI DAR ◽  
RAJAT JOSHI

Orthogonal Frequency Division Multiplexing (OFDM) is a highly effective digital transmission scheme that carries data on multiple carriers transmitted in parallel over time. Traditional systems need guard bands between the symbols of adjacent carriers; in OFDM systems this is not required. Although the sidebands of the carriers overlap, the signals do not interfere with one another because they are mutually orthogonal. This research work reduces the error rate over a wireless channel using space-time trellis codes. In this project, the bit error rate is minimised over wireless channels using space-time codes and poly-phase filters. The proposed modular simulation is performed in MATLAB, and the results show that the minimum bit error rate in the network decreases.
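The orthogonality claim above can be illustrated directly: discrete subcarriers at integer frequencies have zero inner product over one symbol of N samples, even though their spectra overlap. A minimal sketch (N = 8 is an illustrative choice):

```python
# Illustration of OFDM subcarrier orthogonality: complex exponentials at
# integer frequencies have zero inner product over one symbol of N samples,
# so their overlapping sidebands cause no mutual interference.

import cmath

N = 8  # samples per OFDM symbol (illustrative choice)

def subcarrier(k):
    """Samples of the k-th subcarrier over one symbol period."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(a, b):
    """Discrete inner product <a, b> = sum_n a[n] * conj(b[n])."""
    return sum(x * y.conjugate() for x, y in zip(a, b))
```

Distinct subcarriers give an inner product of 0, while each subcarrier has squared norm N, which is exactly why no guard bands are needed between them.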


2014 ◽  
Vol 7 (2) ◽  
Author(s):  
Janwar Maulana ◽  
Arini ◽  
Feri Fahrianto

As times change, computers are increasingly needed in everyday life, in both academic and non-academic fields. Communication also determines whether information can be conveyed correctly and accurately. Communication is carried out by sending data from one computer to another within a network. At present, data transmission systems are still not optimal, so errors often occur during transmission. Most current data transmission systems cannot yet reduce transmission errors, and knowledge of how data is transmitted and how errors are corrected is still lacking. This research designs a simulation application that illustrates how errors are corrected during the transmission of data in the form of integers, which are converted to binary to simplify error checking. The method used in designing this simulation is the Hamming Code, and the simulation is implemented in Java. Observation of the application shows that transmitted data is checked on arrival; if an error is detected, the application corrects it. The observations show that errors occur during data transmission because of faults in the transmitted bits. It is also hoped that this simulation will help in understanding the data transmission process and how such errors are corrected.


2021 ◽  
Vol 10 (02) ◽  
Author(s):  
SYED RIZWAN-UL-HASAN ◽  
Muhammad Tahir ◽  
SHAKIL AHMED

In this research work, we have developed a communication system (transmitter/receiver) that controls the peak-to-average power ratio (PAPR) while maintaining a small bit error rate (BER) for a 4G system called multicode code division multiple access (MCCDMA). The proposed communication system works on a modified Reed-Muller encoded data (MRMED) string. In MRMED, the data is first encoded with a Reed-Muller (RM) code; the encoded RM message is then XORed with an optimal binary string, which results in a lower PAPR. It is well known that the bit error rate is the standard performance measure for a communication system. To check the integrity of our communication system, we ran simulations monitoring the BER of the MRMED sequence. The simulation work was conducted with multipath Rayleigh fading, Minimum Shift Keying (MSK) modulation, and several orders of RM codes. Our results show that implementing MRMED sequences in the suggested MCCDMA communication structure yields a noticeably lower BER. For instance, RM(1,4), which has an error-correction capability of 3 (three) errors, returns BER = 8.2x10−5 with MSK at SNR = 12 dB. Similarly, RM(2,3), which has an error-correction capability of 0 errors, shows a distinct BER of 4.9x10−4 at 12 dB SNR. In addition to using simulation to check the BER performance of our communication system, our results also show that as the error-correction capability of the RM codes increases, the BER decreases correspondingly.
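The PAPR metric being optimised, and the XOR-masking step of the MRMED construction, can be sketched as follows. The optimal binary string from the paper is not reproduced here, so any mask passed to `xor_mask` is a hypothetical stand-in:

```python
# Sketch of the peak-to-average power ratio (PAPR) metric and of the
# XOR-masking step used in MRMED. The mask supplied by the caller is a
# placeholder for the paper's optimal binary string, which is not given here.

import math

def papr_db(signal):
    """PAPR in dB: peak instantaneous power over mean power."""
    powers = [abs(x) ** 2 for x in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def xor_mask(bits, mask):
    """XOR an encoded bit string with a masking string (the MRMED step)."""
    return [b ^ m for b, m in zip(bits, mask)]
```

A constant-envelope signal has a PAPR of 0 dB, the ideal the masking step tries to approach.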


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Zhonghua Zhang ◽  
Xifei Song ◽  
Lei Liu ◽  
Jie Yin ◽  
Yu Wang ◽  
...  

Blockchain constructs a distributed point-to-point system, a secure and verifiable mechanism for decentralized transaction validation, and is widely used in the financial economy, the Internet of Things, big data, cloud computing, and edge computing. On the other hand, artificial intelligence technology is gradually promoting the intelligent development of various industries. As two of today's most promising technologies, blockchain and artificial intelligence have a natural advantage in converging: blockchain makes artificial intelligence more autonomous and credible, while artificial intelligence can push blockchain toward intelligence. In this paper, we analyze the combination of blockchain and artificial intelligence from a more comprehensive and three-dimensional point of view. We first introduce the background of artificial intelligence and the concept, characteristics, and key technologies of blockchain, and subsequently analyze the feasibility of combining blockchain with artificial intelligence. Next, we summarize the research work in this category on the convergence of blockchain and artificial intelligence, both domestically and abroad. After that, we list some related application scenarios for the convergence of the two technologies and also point out existing problems and challenges. Finally, we discuss future work.


Author(s):  
Akshay Daydar

As machine learning algorithms evolve, there is a growing need to train them effectively on large data sets with the available resources in as little time as possible. This paper presents the idea of developing an effective model focused on the implementation of sequential sensitivity analysis and a randomized training approach, which can be one solution to this growing need. Many researchers have focused on the implementation of sensitivity analysis to eliminate insignificant features and reduce the complexity of data selection. These sensitivity analysis methods take a relatively long time to validate through modeling and have hence been found impractical for large data. On the other hand, the randomized training approach is the most popular approach for training on data, but research articles offer only brief explanations of why this training method yields higher accuracy. The current work focuses on the use of sequential sensitivity analysis and randomized training in an artificial neural network (ANN) for high-dimensional thermal power plant data. The sequential sensitivity analysis (SSA) technique uses correlation analysis (CA), analysis of variance (ANOVA) and the Akaike information criterion (AIC) in a sequential manner to reduce the validation time for all possible feature combinations. Only selected combinations are then tested against different training methods in the ANN, such as downward extrapolation, upward extrapolation, interpolation and randomized training. The paper also suggests the significance of randomized training through comparison-based qualitative reasoning. The statistical parameters root mean square error (RMSE), mean absolute relative difference (MARD) and R-squared (R^2) were assessed for validation purposes. This research work is mainly useful in the fields of e-commerce, finance and industry, and in facilities where large data is generated.
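The validation metrics named above can be sketched with their standard definitions (the paper may use slight variants):

```python
# The validation metrics from the abstract in plain Python: root mean
# square error (RMSE), mean absolute relative difference (MARD) and the
# coefficient of determination (R^2), in their standard definitions.

import math

def rmse(actual, predicted):
    """sqrt of the mean squared prediction error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mard(actual, predicted):
    """Mean of |error| relative to the actual value (actuals must be nonzero)."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """1 minus residual sum of squares over total sum of squares."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```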


2021 ◽  
Vol 107 ◽  
pp. 194-200
Author(s):  
Theman Ibrahim Jirnadu ◽  
Adeyemi Abel Ajibesin ◽  
Ahmed T. Ishaq

Most researchers focus on some of the key indicators of good digital wireless communication, namely the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) of modulation schemes. However, energy consumption optimization is also necessary for enhancing the performance of a wireless communication system, as it offers numerous advantages to the system and its users. This research therefore analyzes the efficiency of various QAM modulation schemes (4QAM, 16QAM, 32QAM and 64QAM) as they travel over noisy/fading channels, with the aim of identifying an energy-efficient scheme that enhances system performance in terms of system runtime and quality of service. The efficiency of any given process, operation, or device is rated by the energy it consumes per unit output. Hence, the objective of this research work is to comparatively study and analyze the efficiency of these modulation schemes and to conclude with the most efficient scheme over the various channels. The evaluation of the Bit Error Rate (BER) versus energy per bit to noise spectral density (EbNo) for each communication scenario was carried out in MATLAB.
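As a rough reference point for such curves, the textbook approximation for the BER of Gray-coded square M-QAM over an AWGN channel can be sketched as follows. Note that 32QAM is not square and the paper's fading-channel results come from MATLAB simulation, so this is only an idealised baseline:

```python
# Sketch of the common textbook approximation for the BER of Gray-coded
# square M-QAM (M = 4, 16, 64, ...) over an AWGN channel, useful as a
# sanity check against simulated BER-vs-Eb/No curves.

import math

def qam_ber_awgn(M, ebno_db):
    """Approximate BER of square M-QAM at the given Eb/No (dB) over AWGN."""
    k = math.log2(M)                  # bits per symbol
    ebno = 10 ** (ebno_db / 10)       # dB -> linear
    arg = math.sqrt(3 * k * ebno / (2 * (M - 1)))
    return (2 / k) * (1 - 1 / math.sqrt(M)) * math.erfc(arg)
```

For M = 4 this reduces to the well-known QPSK bit error rate, and for a fixed Eb/No the BER grows with the modulation order, matching the qualitative trend the paper examines.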


This proposed research work uses data mining, an automated procedure for discovering interesting patterns in large data sets by grouping them under comprehensible predictive models. Predicting a student's academic performance is very crucial, especially for universities. Educational Data Mining (EDM) is an approach for extracting useful information that could affect an institution. Nowadays a student's performance is influenced by many factors, which may include the student's prior academic record. This study evaluates numerous factors suspected of affecting a student's empirical performance in academics, and derives a predictive model that classifies and forecasts the student's learning outcomes. The intention of this research is to conduct a case study on the factors influencing students' academic achievements and to determine the factors with the greatest impact. In this paper we focus on evaluating academic achievement on the basis of correctly and incorrectly classified instances using the Naive Bayes and Random Forest algorithms. This paper makes a comparative assessment of the Naive Bayes and Random Forest classifiers on student data and determines the better algorithm.
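A minimal categorical Naive Bayes classifier of the kind compared in the paper can be sketched as follows; the Laplace-smoothing choice and any example records are illustrative, not the study's data:

```python
# Sketch of a categorical Naive Bayes classifier with Laplace smoothing,
# the simpler of the two algorithms compared in the paper. Any records
# used with it are hypothetical stand-ins for the student data set.

import math
from collections import Counter, defaultdict

def train(rows, labels):
    """rows: list of feature tuples; labels: parallel list of class labels."""
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # (feature index, class) -> value counts
    feat_values = defaultdict(set)       # feature index -> values seen anywhere
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, y)][v] += 1
            feat_values[i].add(v)
    return class_counts, feat_counts, feat_values

def predict(model, row):
    """Pick the class maximising log P(class) + sum log P(value | class)."""
    class_counts, feat_counts, feat_values = model
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for y, cy in class_counts.items():
        score = math.log(cy / total)     # log prior
        for i, v in enumerate(row):
            # Laplace-smoothed conditional probability of this feature value
            num = feat_counts[(i, y)][v] + 1
            den = cy + len(feat_values[i])
            score += math.log(num / den)
        if score > best_score:
            best, best_score = y, score
    return best
```

The "correct instances / incorrect instances" comparison in the paper then amounts to counting how many held-out rows `predict` labels correctly for each classifier.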

