A Review on Audio Encryption Algorithms Using Chaos Maps-Based Techniques

Author(s):  
Ekhlas Abbas Albahrani ◽  
Tayseer Karam Alshekly ◽  
Sadeq H. Lafta

Due to the rapid development of digital communications and multimedia applications in recent years, protecting digital data such as images, audio, and video has become a significant challenge. The security of audio data transferred over different networks has been a preferred research field in the preceding years. This review covers recent contributions to audio encryption and summarizes the principal evaluations applied to audio encryption algorithms, covering security analysis, computational complexity, and quality analysis, together with their requirements. The paper fundamentally concentrates on presenting the different types of audio encryption and decryption techniques based on chaotic maps. Digital and analog audio algorithms are presented, discussed, and compared, with their important features and drawbacks illustrated. Various proposed schemes for digital and analog audio encryption using chaotic maps are covered; these schemes exhibit extreme sensitivity to initial conditions, unpredictability, and quasi-random behavior. A comparison among the proposed algorithms in terms of key space, chaotic map sensitivity, and statistical analysis is provided.
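
A minimal sketch of the core mechanism these algorithms share, assuming a single logistic map (the map parameters, the quantization step, and the `encrypt_audio` helper below are illustrative choices of ours, not any specific reviewed algorithm): iterate the chaotic map from a secret initial condition, quantize its orbit into a byte keystream, and XOR it with the audio samples.

```python
import numpy as np

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    """Iterate the logistic map x -> r*x*(1-x) and quantize each state to a byte."""
    x = x0
    stream = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256  # crude quantization of the orbit
    return stream

def encrypt_audio(samples: np.ndarray, x0: float, r: float = 3.99) -> np.ndarray:
    """XOR 8-bit audio samples with the chaotic keystream (decryption is identical)."""
    return samples ^ logistic_keystream(x0, r, samples.size)

# Sensitivity demo: a 1e-10 change in the key (x0) soon yields an unrelated keystream.
audio = np.random.randint(0, 256, 1024, dtype=np.uint8)
c1 = encrypt_audio(audio, x0=0.123456789)
c2 = encrypt_audio(audio, x0=0.123456789 + 1e-10)
print((c1 != c2).mean())  # close to 1: almost every ciphertext byte differs
```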

2020 ◽  
Vol 47 (1) ◽  
pp. 89-95 ◽  
Author(s):  
Garry D. Carnegie

ABSTRACT This response to the recent contribution by Matthews (2019), entitled “The Past, Present, and Future of Accounting History,” deals specifically with the issues associated with concentrating on counting publication numbers when examining the state of a scholarly research field at the start of the 2020s. It outlines several pitfalls of the narrowly focused publication-count analysis, in selected English-language journals only, provided by Matthews. The commentary is based on three key arguments: (1) accounting history research and publication is far more than a “numbers game”; (2) trends in the quality of the research undertaken and published are paramount; and (3) international publication and accumulated knowledge in accounting history are indeed more than a collection of English-language publications. The author seeks to contribute to discussion and debate between accounting historians and other researchers for the benefit and development of the international accounting history community and global society.


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5204
Author(s):  
Anastasija Nikiforova

Nowadays, governments launch open government data (OGD) portals that provide data that can be accessed and used by everyone for their own needs. Although the potential economic value of open (government) data is assessed in the millions and billions, not all open data are reused. Moreover, the open (government) data initiative, as well as users’ intent for open (government) data, is changing continuously, and today, in line with IoT and smart city trends, real-time and sensor-generated data are of higher interest to users. These “smarter” open (government) data are also considered one of the crucial drivers of a sustainable economy; they might have an impact on information and communication technology (ICT) innovation and become a creativity bridge in developing a new ecosystem in Industry 4.0 and Society 5.0. The paper inspects the OGD portals of 60 countries in order to understand how well their content corresponds to Society 5.0 expectations. The paper reports on the extent to which countries provide these data, focusing on open (government) data success-facilitating factors for both the portal in general and the data sets of interest in particular. The presence of “smarter” data; their level of accessibility, availability, currency, and timeliness; and the support offered to users are analyzed. A list of the most competitive countries by data category is provided. This makes it possible to understand which OGD portals react to users’ needs and to Industry 4.0 and Society 5.0 requests by opening and updating data for further potential reuse, which is essential in the digital data-driven world.


1984 ◽  
Vol 74 (5) ◽  
pp. 1623-1643
Author(s):  
Falguni Roy

Abstract A depth estimation procedure is described which essentially attempts to identify depth phases by analyzing multi-station waveform data (hereafter called level II data) in various ways, including deconvolution, prediction error filtering, and spectral analysis of the signals. In the absence of such observable phases, other methods based on S-P, ScS-P, and SKS-P travel times are tried to obtain an estimate of the source depth. The procedure was applied to waveform data collected from 31 globally distributed stations for the period between 1 and 15 October 1980. The digital data were analyzed at the temporary data center facilities of the National Defense Research Institute, Stockholm, Sweden. During this period, a total of 162 events in the magnitude range 3.5 to 6.2 were defined by analyzing first-arrival time data (hereafter called level I data) alone. For 120 of these events, it was possible to estimate depths using the present procedure. The applicability of the procedure was found to be 100 per cent for events with mb > 4.8 and 88 per cent for events with mb > 4. A comparison of level I depths and level II depths (the depths obtained from level I and level II data, respectively) with the United States Geological Survey estimates indicated that at least one local station (Δ < 10°) is needed among the level I data to obtain reasonable depth estimates from such data alone. Further, it has been shown that S-wave travel times can be successfully utilized for the estimation of source depth.
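
For intuition on the depth-phase approach, the sketch below implements the simplest textbook depth estimate from a depth-phase delay: near vertical incidence, the pP phase travels the extra two-way leg from the source up to the surface, so h ≈ v_p·Δt/2. This is a simplified illustration of ours under a uniform-velocity assumption, not the multi-station level II procedure described above.

```python
def depth_from_pP_delay(delta_t: float, v_p: float = 6.5) -> float:
    """Source depth (km) from the pP-P delay time (s), assuming near-vertical
    rays and a uniform P-wave velocity v_p (km/s) above the source.
    pP travels the extra source-surface-source leg, so 2h / v_p = delta_t."""
    return v_p * delta_t / 2.0

# Example: a 4.6 s pP-P delay at v_p = 6.5 km/s implies a depth of about 15 km.
print(depth_from_pP_delay(4.6))  # 14.95
```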


2021 ◽  
pp. 240-248
Author(s):  
Mahmood Khalel Ibrahem ◽  
Hussein Ali Kassim

Recently, with the development of multimedia technologies and wireless telecommunication, Voice over Internet Protocol (VoIP) has become widely used for communication between people. VoIP allows people connected to a local network or the Internet to make voice calls over a digital connection instead of the analog traditional telephone network. Internet technologies do not provide any security mechanism, and there is no way to guarantee that voice streams transmitted over the Internet or a network have not been intercepted in between. In this paper, VoIP is developed using a stream cipher algorithm with chaotic cryptography for key generation. It is based on chaotic maps for generating a one-time random key used to encrypt the voice data in each RTP packet. Chaotic maps have been used successfully for encrypting bulky data such as voice, image, and video; chaotic cryptography has good properties such as long periodicity, pseudo-randomness, and sensitivity to initial conditions and changes in system parameters. A VoIP system was successfully implemented based on ITU-T G.729 as the voice codec, a multimedia encoding for the Real-time Transport Protocol payload data. A proposed method was then applied to generate three mixed logistic chaotic maps [1], and the encryption/decryption quality measures for the speech signal were analyzed based on this method. The experimental work demonstrates that the proposed scheme can provide confidentiality to voice data with VoIP-level performance quality: minimal loss of transmitted packets, minimal average delay, and minimal jitter.
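
The general idea of a per-packet one-time key can be sketched as follows; this is an illustrative scheme of ours, not the authors' three-mixed-map generator of [1]. The seed-mixing rule and the names `packet_keystream` and `encrypt_payload` are assumptions: a fresh logistic-map keystream is derived for each RTP packet from a shared secret and the packet sequence number, then XORed with the G.729 payload.

```python
import hashlib

def packet_keystream(secret: bytes, seq: int, n: int) -> bytes:
    """Derive a one-time chaotic keystream for a single RTP packet.
    The initial condition is mixed from the shared secret and the RTP
    sequence number, so every packet is encrypted with a distinct keystream."""
    digest = hashlib.sha256(secret + seq.to_bytes(4, "big")).digest()
    x = (int.from_bytes(digest[:8], "big") % 10**9 + 1) / (10**9 + 2.0)  # x0 in (0, 1)
    out = bytearray(n)
    for i in range(n):
        x = 3.99 * x * (1.0 - x)  # logistic map iteration
        out[i] = int(x * 256) % 256
    return bytes(out)

def encrypt_payload(secret: bytes, seq: int, payload: bytes) -> bytes:
    """XOR the RTP payload with the per-packet keystream (decryption is identical)."""
    ks = packet_keystream(secret, seq, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))
```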


2021 ◽  
Author(s):  
Ricardo Ribeiro ◽  
Alina Trifan ◽  
António J. R. Neves

BACKGROUND The wide availability and small size of different types of sensors, together with the decrease in their pricing, have made it possible, over the last decade, to acquire a huge amount of data about a person's life in real time. These sensors can be incorporated into personal electronic devices available at reasonable cost, such as smartphones and small wearable devices. They allow the acquisition of images, audio, location, physical activity, and physiological signals, among other data. With these data, usually denoted as lifelog data, we can then analyze and understand personal experiences and behaviors. This process is called lifelogging. OBJECTIVE The goal of this article is to review the literature in the research area of lifelogging over the past decade and provide a historical overview of this research topic. To this purpose, we analyze lifelogging applications that monitor and assist people with memory problems. METHODS We follow a narrative review methodology to conduct a comprehensive search of relevant publications in the Google Scholar and Scopus databases. To find these relevant publications, topic-related keywords were identified and combined based on the different lifelogging data types and applications. RESULTS A total of 124 publications were selected and included in this narrative review. 411 publications were retrieved and screened from the two scholarly databases; of these, 114 publications were fully reviewed. In addition, 32 more publications were manually included based on our bibliographical knowledge of this research field. CONCLUSIONS The use of personal lifelogs can be beneficial in improving the quality of life of people suffering from memory problems, such as dementia. Through the acquisition and analysis of lifelog data, lifelogging systems can create digital memories to be used as a surrogate memory. Through this narrative review we understand that contextual information can be extracted from lifelogs, and it provides significant information for understanding the daily life of people suffering from memory issues in terms of events, experiences, and behaviors.


Author(s):  
Fleur Johns

Law and social science scholars have long elucidated ways of governing built around state governance of populations and subjects. Yet many are now grappling with the growing prevalence of practices of governance that depart, to varying degrees, from received models. The profusion of digital data, and the deployment of machine learning in its analysis, are redirecting states’ and international organizations’ attention away from the governance of populations as such and toward the amassing, analysis, and mobilization of hybrid data repositories and real-time data flows for governance. Much of this work does not depend on state data sources or on conventional statistical models. The subjectivities nurtured by these techniques of governance are frequently not those of choosing individuals. Digital objects and mediators are increasingly prevalent at all scales. This article surveys how scholars are beginning to understand the nascent political technologies associated with this shift toward governance by data.


The present chapter deals with the issue of information-manipulation detection from an algorithmic point of view, examining a variety of authentication methods that aim to assist average users and media professionals in securing themselves against forged content. This specific domain forms a very interesting, highly interdisciplinary research field, where remarkable progress has been made in recent years. The chapter outlines the current state of the art, providing an overview of the different modalities aimed at evaluating the various types of digital data (text, image, audio, video), in conjunction with the associated falsification attacks and the available forensic investigation tools. In the coming years, the problem of fake news is expected to become even more complicated, as journalism is heading towards an era of heightened automation. Overall, it is anticipated that machine-driven verification assistance can speed up the required validation processes, reducing the spread of unverified reports.


Author(s):  
Mona F. M. Mursi ◽  
Hossam Eldin H. Ahmed ◽  
Fathi E. Abd El-Samie ◽  
Ayman H. Abd El-Aziem

In this paper, the authors propose an image encryption scheme based on a developed Hénon chaotic map combined with the fractional Fourier transform (FRFT), introduced to satisfy the need for highly secure images. The proposed algorithm combines the main advantages of confusion and diffusion with the FRFT: it uses the Arnold cat map for confusion and the Hénon chaotic map, or one of the proposed modified Hénon maps, for diffusion. The proposed algorithm is compared with image encryption algorithms based on the Arnold cat map, the Baker chaotic map, the Hénon chaotic map, and RC6. The authors perform a comparison between them across several experimental tests, such as statistical analyses, processing time, and security analysis. From these comparison tests, the authors find that the proposed algorithm demonstrates good results, even better than RC6 and the other chaotic maps in some cases.
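
For reference, the Arnold cat map used for the confusion stage permutes the pixel coordinates of an N x N image as (i, j) -> (i + j, i + 2j) mod N. Below is a minimal sketch of that stage alone (our illustration; the authors' full pipeline adds Hénon-map diffusion and the FRFT):

```python
import numpy as np

def arnold_cat(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Confusion step: permute the pixels of a square N x N image with the
    Arnold cat map (i, j) -> (i + j, i + 2j) mod N. The map matrix has
    determinant 1, so the permutation is exactly invertible for decryption."""
    assert img.shape[0] == img.shape[1], "the cat map is defined on square images"
    n = img.shape[0]
    i, j = np.indices((n, n))
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(i + j) % n, (i + 2 * j) % n] = out  # scatter each pixel
        out = scrambled
    return out
```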


Entropy ◽  
2018 ◽  
Vol 20 (11) ◽  
pp. 843 ◽  
Author(s):  
Congxu Zhu ◽  
Guojun Wang ◽  
Kehui Sun

This paper presents an improved cryptanalysis of a chaos-based image encryption scheme that integrated permutation, diffusion, and linear transformation processes. It was found that the equivalent key streams and all the unknown parameters of the cryptosystem can be recovered by our chosen-plaintext attack algorithm. Both a theoretical analysis and an experimental validation are given in detail. Based on the analysis of the defects in the original cryptosystem, an improved color image encryption scheme was further developed. By using an image-content-related approach in generating the diffusion arrays and by interweaving diffusion and confusion, the security of the cryptosystem was enhanced. The experimental results and security analysis demonstrate the security superiority of the improved cryptosystem.
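
The plaintext-dependent defense can be sketched in a few lines: if the diffusion keystream is fixed by the key alone, a chosen-plaintext attacker recovers it once and decrypts everything, whereas seeding it from the image content as well breaks that reuse. The following is an illustrative construction of ours, not the paper's exact scheme, and it assumes the plaintext digest is conveyed to the receiver alongside the ciphertext so decryption can regenerate the keystream.

```python
import hashlib
import numpy as np

def content_related_diffusion(img: np.ndarray, key: bytes) -> np.ndarray:
    """Diffuse a uint8 image with a keystream seeded from BOTH the secret key
    and a hash of the plaintext, so the equivalent keystream differs for
    every image and cannot be reused by a chosen-plaintext attacker."""
    flat = img.ravel()
    seed = hashlib.sha256(key + flat.tobytes()).digest()
    x = (int.from_bytes(seed[:8], "big") % 10**9 + 1) / (10**9 + 2.0)  # x0 in (0, 1)
    cipher = np.empty_like(flat)
    prev = 0
    for idx, p in enumerate(flat):
        x = 3.99 * x * (1.0 - x)          # logistic-map keystream byte
        k = int(x * 256) % 256
        prev = (int(p) + k + prev) % 256  # chain in the previous cipher byte
        cipher[idx] = prev
    return cipher.reshape(img.shape)
```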

