Signal processing techniques for multimedia information security

Author(s):  
Arunan Ramalingam

The digital representation of multimedia and the Internet allow unauthorized duplication, transmission, and wide distribution of copyrighted multimedia content with little effort. Content providers face the challenge of how to protect their electronic content. Fingerprinting and watermarking are two techniques that help identify content that is copied and distributed illegally. This thesis presents a novel algorithm for each of these two content protection techniques. In fingerprinting, a novel algorithm that models fingerprints using Gaussian mixtures is developed for both audio and video signals. Simulation studies are used to evaluate the effectiveness of the algorithm in generating fingerprints that show high discrimination among different content and, at the same time, are invariant to different distortions of the same fingerprint. In the proposed watermarking scheme, linear chirps are used as watermark messages. The watermark is embedded and detected by spread-spectrum watermarking. At the receiver, a post-processing tool represents the retrieved watermark in a time-frequency distribution and uses a line detection algorithm to detect the watermark. The robustness of the watermark is demonstrated by extracting it after different image processing operations performed with a third-party evaluation tool called Checkmark.
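The embedding and detection steps can be sketched as follows. This is a minimal illustration of chirp-message spread-spectrum watermarking, not the thesis's exact scheme: the chirp rate, embedding strength, and the use of a generic random coefficient vector in place of image transform coefficients are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: a linear chirp sweeping 0 -> 0.25 cycles/sample
N = 4096
t = np.arange(N)
chirp = np.cos(2 * np.pi * (0.25 / (2 * N)) * t ** 2)  # watermark message

# Spread-spectrum embedding: modulate the chirp with a secret PN sequence
pn = rng.choice([-1.0, 1.0], size=N)      # shared pseudo-noise sequence
host = rng.normal(0.0, 1.0, size=N)       # stand-in for host coefficients
alpha = 0.1                               # embedding strength
watermarked = host + alpha * pn * chirp

# Detection: despread with the same PN sequence, correlate with the chirp
score_wm = (watermarked * pn) @ chirp / N
score_host = (host * pn) @ chirp / N      # same test on unmarked host
print(score_wm - score_host)              # ~ alpha/2, the chirp's mean power
```

In the thesis the retrieved watermark is further mapped to a time-frequency distribution and a line detector looks for the chirp's straight-line signature; the correlation score above is only the basic spread-spectrum detection statistic.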

2021 ◽  


2021 ◽  
Vol 11 (22) ◽  
pp. 10812
Author(s):  
Jusung Kang ◽  
Younghak Shin ◽  
Hyunku Lee ◽  
Jintae Park ◽  
Heungno Lee

In a frequency hopping spread spectrum (FHSS) network, the hopping pattern plays an important role in user authentication at the physical layer. Recently, however, it has become possible to trace the hopping pattern through a blind estimation method for frequency hopping (FH) signals. If the hopping pattern can be reproduced, an attacker can imitate the FH signal and send fake data to the FHSS system. To prevent this situation, a non-replicable authentication system that targets the physical layer of an FHSS network is required. In this study, a radio frequency fingerprinting-based emitter identification method targeting FH signals was proposed. A signal fingerprint (SF) was extracted and transformed into a spectrogram representing the time–frequency behavior of the SF. These spectrograms were used to train a deep inception network-based classifier, and an ensemble approach utilizing the multimodality of the SFs was applied. A detection algorithm was applied to the output vectors of the ensemble classifier for attacker detection. The results showed that the SF spectrogram can be effectively utilized to identify the emitter with 97% accuracy, and the output vectors of the classifier can be effectively utilized to detect an attacker with an area under the receiver operating characteristic curve of 0.99.
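A minimal sketch of turning a signal fingerprint into the time–frequency spectrogram fed to such a classifier. The window length, hop size, and the synthetic SF (a short tone transient in noise) are assumptions; the deep inception network itself is omitted.

```python
import numpy as np

def sf_spectrogram(x, win=64, hop=32):
    """Magnitude spectrogram (time-frequency behavior) of a signal fingerprint."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

rng = np.random.default_rng(1)
# Hypothetical SF: a short tone transient (as captured around an FH hop) in noise
sf = np.sin(2 * np.pi * 0.1 * np.arange(512)) + 0.1 * rng.normal(size=512)
S = sf_spectrogram(sf)
print(S.shape)  # (33, 15): 33 frequency bins x 15 time frames
```

The resulting 2-D array is what would be stacked into image-like training inputs for the classifier; an ensemble over several SF modalities would simply train one such classifier per modality and combine their output vectors.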


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3524
Author(s):  
Rongru Wan ◽  
Yanqi Huang ◽  
Xiaomei Wu

Ventricular fibrillation (VF) is a type of fatal arrhythmia that can cause sudden death within minutes, so the study of VF detection algorithms has important clinical significance. This study aimed to develop an algorithm for the automatic detection of VF based on cardiac mechanical activity-related signals, namely ballistocardiography (BCG), acquired by non-contact sensors. BCG signals, including VF, sinus rhythm, and motion artifacts, were collected through electric defibrillation experiments in pigs. Through autocorrelation and the S transform, a time-frequency representation clearly showing cardiac rhythmic activity was obtained, and a feature set of 13 elements was constructed for each 7 s segment after statistical analysis and hierarchical clustering. A random forest classifier was then used to distinguish VF from non-VF, and the intra-patient and inter-patient paradigms were both used to evaluate performance. The results showed a sensitivity of 0.965 and a specificity of 0.958 under 10-fold cross-validation, and 0.947 and 0.946 under leave-one-subject-out cross-validation. In conclusion, the proposed algorithm combining feature extraction and machine learning can effectively detect VF in BCG, laying a foundation for long-term self-cardiac monitoring at home and for a real-time VF detection and alarm system.
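The classification stage can be sketched as follows. The 13-dimensional features here are synthetic stand-ins (the paper derives its features from autocorrelation and the S transform of BCG segments), so only the random-forest-with-10-fold-cross-validation pattern carries over.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical stand-in for the 13-element feature set of 7 s BCG segments:
# VF and non-VF segments drawn from classes with shifted means
X_vf = rng.normal(1.0, 1.0, size=(100, 13))
X_non = rng.normal(-1.0, 1.0, size=(100, 13))
X = np.vstack([X_vf, X_non])
y = np.array([1] * 100 + [0] * 100)       # 1 = VF, 0 = non-VF

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=10)   # 10-fold (intra-patient style) CV
print(acc.mean())
```

The inter-patient paradigm would replace `cv=10` with a leave-one-subject-out splitter (e.g. grouping folds by animal), which is what produces the lower 0.947/0.946 figures reported above.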


Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1437
Author(s):  
Mahfoud Drouaz ◽  
Bruno Colicchio ◽  
Ali Moukadem ◽  
Alain Dieterlen ◽  
Djafar Ould-Abdeslam

A crucial step in nonintrusive load monitoring (NILM) is feature extraction, which consists of signal processing techniques to extract features from voltage and current signals. This paper presents a new time-frequency feature based on the Stockwell transform. The extracted features aim to describe the shape of the current transient signal by applying an energy measure to the fundamental and harmonic frequency voices. In order to validate the proposed methodology, classical machine learning tools (k-NN and decision tree classifiers) are applied to two existing datasets, the Controlled On/Off Loads Library (COOLL) and the Home Equipment Laboratory Dataset (HELD1). The classification rates achieved are clearly higher than those reported in related studies in the literature, at 99.52% and 96.92% for the COOLL and HELD1 datasets, respectively.
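A hedged sketch of the energy measure on Stockwell-transform frequency voices: the voice computation below omits the usual Gaussian-window normalization, and the decaying transient and voice indices are invented for illustration rather than taken from the paper.

```python
import numpy as np

def s_transform_voice(x, n):
    """One frequency voice of the discrete Stockwell transform (voice n > 0)."""
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                      # frequency offsets
    gauss = np.exp(-2 * np.pi ** 2 * m ** 2 / n ** 2)
    return np.fft.ifft(np.roll(X, -n) * gauss)     # time-localized voice n

# Hypothetical current transient: decaying fundamental (bin 8) + 3rd harmonic
N = 256
t = np.arange(N)
x = np.exp(-t / 64) * (np.sin(2 * np.pi * 8 * t / N)
                       + 0.5 * np.sin(2 * np.pi * 24 * t / N))

# Energy measure on selected frequency voices, as a per-appliance feature
energy = {n: float(np.sum(np.abs(s_transform_voice(x, n)) ** 2))
          for n in (8, 24, 40)}
print(energy)  # voices 8 and 24 dominate the empty voice 40
```

Collecting such per-voice energies over the fundamental and its harmonics gives a fixed-length vector that a k-NN or decision tree classifier can consume directly.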


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3515
Author(s):  
Sung-Ho Sim ◽  
Yoon-Su Jeong

As IoT technologies have developed rapidly in recent years, most IoT data processing has focused on monitoring and control, but the cost of collecting and linking diverse IoT data keeps rising, so cloud servers (data centers) need the ability to proactively integrate and analyze the collected IoT data in order to process it intelligently. In this paper, we propose a blockchain-based IoT big data integrity verification technique to ensure the safety of the Third Party Auditor (TPA), which audits the integrity of AIoT data. The proposed technique aims to minimize IoT information loss by grouping information and signature keys from IoT devices into multiple blockchains. It effectively guarantees the integrity of AIoT data by linking the hash values of arbitrary, constant-size blocks with previous blocks in hierarchical chains. To keep the cost of maintaining IoT information integrity low, the technique synchronizes the central server and IoT devices using location information. Finally, to easily control the many locations of IoT devices, cross-distribution and blockchain linkage are processed under constant rules to improve the load and throughput generated by IoT devices.
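The hash-chain linkage that the integrity guarantee rests on can be sketched as follows. This is a plain hash chain over constant-size records, not the paper's full multi-blockchain grouping or TPA protocol, and the payload fields are hypothetical.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose hash links the payload to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(json.dumps(
        {"prev": prev_hash, "payload": payload},
        sort_keys=True).encode()).hexdigest()
    chain.append({"prev": prev_hash, "payload": payload, "hash": digest})

def verify(chain):
    """Recompute every link; any tampered payload breaks the chain."""
    prev = "0" * 64
    for block in chain:
        expect = hashlib.sha256(json.dumps(
            {"prev": prev, "payload": block["payload"]},
            sort_keys=True).encode()).hexdigest()
        if block["hash"] != expect or block["prev"] != prev:
            return False
        prev = block["hash"]
    return True

chain = []
for reading in ({"device": "sensor-1", "value": 21.5},
                {"device": "sensor-2", "value": 19.8}):
    add_block(chain, reading)
print(verify(chain))                  # True: chain is intact
chain[0]["payload"]["value"] = 99.9   # tamper with stored IoT data
print(verify(chain))                  # False: every later hash now mismatches
```

Because each hash covers the previous block's hash, altering any stored record invalidates all subsequent links, which is what lets an auditor detect tampering without trusting the storage server.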


Network ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 50-74
Author(s):  
Divyanshu Pandey ◽  
Adithya Venugopal ◽  
Harry Leib

Most modern communication systems, such as those intended for deployment in IoT applications or 5G and beyond networks, utilize multiple domains for transmission and reception at the physical layer. Depending on the application, these domains can include space, time, frequency, users, code sequences, and transmission media, to name a few. As such, the design criteria of future communication systems must be cognizant of the opportunities and the challenges that exist in exploiting the multi-domain nature of the signals and systems involved for information transmission. Focussing on the Physical Layer, this paper presents a novel mathematical framework using tensors, to represent, design, and analyze multi-domain systems. Various domains can be integrated into the transceiver design scheme using tensors. Tools from multi-linear algebra can be used to develop simultaneous signal processing techniques across all the domains. In particular, we present tensor partial response signaling (TPRS) which allows the introduction of controlled interference within elements of a domain and also across domains. We develop the TPRS system using the tensor contracted convolution to generate a multi-domain signal with desired spectral and cross-spectral properties across domains. In addition, by studying the information theoretic properties of the multi-domain tensor channel, we present the trade-off between different domains that can be harnessed using this framework. Numerical examples for capacity and mean square error are presented to highlight the domain trade-off revealed by the tensor formulation. Furthermore, an application of the tensor framework to MIMO Generalized Frequency Division Multiplexing (GFDM) is also presented.
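The core mode-product idea, one tensor contraction processing several domains of a signal simultaneously, can be sketched with `einsum`. The domain sizes and random operators below are placeholders, and this is not the TPRS construction itself, only the multi-linear machinery it builds on.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical multi-domain signal tensor: (space, frequency, time)
X = rng.normal(size=(4, 8, 16))

# Per-domain linear operators (e.g., antenna mixing, subcarrier shaping)
H_space = rng.normal(size=(4, 4))
H_freq = rng.normal(size=(8, 8))

# One tensor contraction processes both domains simultaneously
Y = np.einsum('ab,cd,bdt->act', H_space, H_freq, X)

# Equivalent sequential (mode-by-mode) processing, for comparison
Y_seq = np.einsum('cd,adt->act', H_freq,
                  np.einsum('ab,bdt->adt', H_space, X))
print(np.allclose(Y, Y_seq))  # True
```

The equivalence shown at the end is the point: per-domain operators commute into a single multi-linear map, so design criteria (spectral shaping, controlled cross-domain interference) can be stated on the joint tensor rather than domain by domain.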


Author(s):  
Dang-Khoa Tran ◽  
Thanh-Hai Nguyen ◽  
Thanh-Nghia Nguyen

In electroencephalography (EEG) studies, eye blinks are a commonly known type of ocular artifact that appears in almost any EEG measurement. The artifact can be seen as spiking electrical potentials whose time-frequency properties vary across individuals. Their presence can negatively impact medical or scientific research, or be helpful in brain-computer interface applications. Hence, this paper focuses on detecting eye-blink signals to determine the correlation between the human brain and eye movement. It presents a simple, fast, and automated eye-blink detection algorithm that does not require user training before execution. EEG signals were smoothed and filtered before eye-blink detection. We conducted experiments with ten volunteers and collected three different eye-blink datasets over three trials using the Emotiv EPOC+ headset. The proposed method performed consistently and successfully detected the spiking activity of eye blinks with a mean accuracy of over 96%.
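A minimal sketch of the smooth-then-threshold pipeline described above; the sampling rate, smoothing window, z-score threshold, and synthetic blink waveform are all assumptions rather than the paper's exact settings.

```python
import numpy as np

def detect_blinks(eeg, fs=128, z_thresh=3.0):
    """Return onset indices of blink spikes after moving-average smoothing."""
    win = max(1, fs // 16)
    smooth = np.convolve(eeg, np.ones(win) / win, mode='same')  # low-pass
    z = (smooth - smooth.mean()) / smooth.std()                 # normalize
    above = z > z_thresh
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1          # rising edges

rng = np.random.default_rng(4)
fs = 128                                   # assumed headset sampling rate
eeg = rng.normal(0.0, 1.0, size=10 * fs)   # 10 s of background activity
for onset in (2 * fs, 6 * fs):             # two synthetic blink potentials
    eeg[onset:onset + fs // 4] += 40 * np.hanning(fs // 4)
print(detect_blinks(eeg, fs))              # onsets near samples 256 and 768
```

Because detection is a fixed z-score threshold on the smoothed trace rather than a trained model, it needs no per-user calibration, which matches the "no user training" property claimed above.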


Author(s):  
Aaron Duke ◽  
Dave Murk ◽  
Bill Byrd ◽  
Stuart Saulters

Since the publication of API Recommended Practice (RP) 1173: Pipeline Safety Management Systems, in July 2015, the energy pipeline trade groups in North America (API, AOPL, AGA, INGAA, APGA and CEPA) have worked collaboratively to develop tools and programs to assist energy pipeline operators with the development and implementation of appropriate programs and processes. These resources include a Planning Tool, an Implementation Tool and an Evaluation Tool, as well as a Maturity Model that describes a continuum of implementation levels. The Planning Tool is used to compare an operator’s existing management system to the RP requirements, develop action plans, and assign responsibilities to close gaps. It is intended to help operators achieve Level 1 maturity (develop a plan and begin work). The Implementation Tool is used to evaluate and summarize implementation status by question, element and overall, and helps track development of program implementation to Level 3 maturity. The Evaluation Tool plays two key roles, addressing the conformity and effectiveness of the system. It is used to assess and report the level of conformity to the requirements, the “shall” statements, of the RP and possible Level 4 maturity. The Evaluation Tool also provides the means to appraise the effectiveness of an operator’s programs in achieving the objectives of the RP, asking the key question, “Is the system helping and driving improvement?” These resources can be supplemented by the voluntary third-party audit program developed by API and by the Peer-to-Peer sharing process.

