Computation of Psycho-Acoustic Annoyance Using Deep Neural Networks

2019 ◽  
Vol 9 (15) ◽  
pp. 3136 ◽  
Author(s):  
Jesus Lopez-Ballester ◽  
Adolfo Pastor-Aparicio ◽  
Jaume Segura-Garcia ◽  
Santiago Felici-Castell ◽  
Maximo Cobos

Psycho-acoustic parameters have been extensively used to evaluate the discomfort or pleasure produced by the sounds in our environment. In this context, wireless acoustic sensor networks (WASNs) can be an interesting solution for monitoring subjective annoyance in certain soundscapes, since they can be used to register the evolution of such parameters in time and space. Unfortunately, the calculation of the psycho-acoustic parameters involved in common annoyance models entails a significant computational cost and makes the acquisition and transmission of these parameters at the nodes difficult. As a result, monitoring psycho-acoustic annoyance becomes an expensive and inefficient task. This paper proposes the use of a deep convolutional neural network (CNN), trained on a large urban sound dataset, capable of efficiently and continuously predicting psycho-acoustic annoyance from raw audio signals. We evaluate the proposed regression model and compare the resulting computation times with those obtained by the conventional direct-calculation approach. The results confirm that the proposed CNN-based model achieves high precision, predicting annoyance values with an average quadratic error of around 3%. It also achieves a very significant reduction in processing time, up to 300 times faster than direct calculation, making the designed CNN well suited for deployment on IoT devices.
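
As a rough illustration of this kind of end-to-end regressor, the sketch below maps a raw audio frame to a single annoyance value with a small 1-D CNN; the layer sizes, frame length, and sampling rate are illustrative assumptions, not the architecture reported by the authors.

```python
# Minimal 1-D CNN regressor sketch (PyTorch). Layer sizes are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class AnnoyanceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 64), nn.ReLU(),
            nn.Linear(64, 1),            # scalar annoyance estimate
        )

    def forward(self, x):                # x: (batch, 1, samples)
        return self.regressor(self.features(x))

model = AnnoyanceCNN()
frame = torch.randn(8, 1, 48000)         # e.g. 1 s of audio at 48 kHz
print(model(frame).shape)                # torch.Size([8, 1])
```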

Author(s):  
Chen Qi ◽  
Shibo Shen ◽  
Rongpeng Li ◽  
Zhifeng Zhao ◽  
Qing Liu ◽  
...  

Nowadays, deep neural networks (DNNs) have been rapidly deployed to realize a number of functionalities such as sensing, imaging, classification, and recognition. However, the computationally intensive nature of DNNs makes them difficult to apply on resource-limited Internet of Things (IoT) devices. In this paper, we propose a novel pruning-based paradigm that aims to reduce the computational cost of DNNs by uncovering a more compact structure and learning the effective weights therein, without compromising the expressive capability of the DNN. In particular, our algorithm achieves efficient end-to-end training that directly transforms a redundant neural network into a compact one with a specified target compression rate. We comprehensively evaluate our approach on various representative benchmark datasets and compare it with typical advanced convolutional neural network (CNN) architectures. The experimental results verify the superior performance and robust effectiveness of our scheme. For example, when pruning VGG on CIFAR-10, our proposed scheme is able to reduce its FLOPs (floating-point operations) and number of parameters by 76.2% and 94.1%, respectively, while still maintaining a satisfactory accuracy. To sum up, our scheme could facilitate the integration of DNNs into the common machine-learning-based IoT framework and enable distributed training of neural networks across both cloud and edge.
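
The abstract does not spell out the pruning algorithm itself, so the snippet below only illustrates the general idea with PyTorch's built-in global magnitude pruning; the 90% sparsity target and the toy two-layer model are assumptions made for the example, not the authors' end-to-end scheme.

```python
# Generic global magnitude pruning to a target sparsity with PyTorch's
# pruning utilities; a simplified stand-in for the paper's end-to-end scheme.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]

# Zero out the 90% smallest weights (by absolute value) across all conv layers.
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

for module, _ in to_prune:
    prune.remove(module, "weight")                     # make the masks permanent
    sparsity = (module.weight == 0).float().mean().item()
    print(f"{module}: {sparsity:.1%} of weights pruned")
```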


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 287
Author(s):  
Ioannis E. Livieris ◽  
Niki Kiriakidou ◽  
Stavros Stavroyiannis ◽  
Panagiotis Pintelas

Nowadays, cryptocurrencies are established and widely recognized as an alternative exchange currency method. They have infiltrated most financial transactions, and as a result cryptocurrency trading is generally considered one of the most popular and promising types of profitable investment. Nevertheless, this constantly growing financial market is characterized by significant volatility and strong price fluctuations over short time periods; therefore, the development of an accurate and reliable forecasting model is considered essential for portfolio management and optimization. In this research, we propose a multiple-input deep neural network model for the prediction of cryptocurrency price and movement. The proposed forecasting model utilizes different cryptocurrency data as inputs and handles them independently in order to exploit useful information from each cryptocurrency separately. An extensive empirical study was performed using three consecutive years of data from the three cryptocurrencies with the highest market capitalization, i.e., Bitcoin (BTC), Ethereum (ETH), and Ripple (XRP). The detailed experimental analysis revealed that the proposed model efficiently exploits mixed cryptocurrency data, reduces overfitting, and decreases the computational cost in comparison with traditional fully connected deep neural networks.
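
A minimal sketch of such a multiple-input architecture is shown below, with one branch per currency feeding a shared prediction head; the window length, branch sizes, and use of simple dense branches are assumptions, not the authors' exact design.

```python
# Sketch of a multiple-input network: each cryptocurrency series gets its own
# branch before a shared head (branch and window sizes are illustrative).
import torch
import torch.nn as nn

class MultiInputNet(nn.Module):
    def __init__(self, window=30, hidden=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(window, hidden), nn.ReLU()) for _ in range(3)
        )
        self.head = nn.Sequential(nn.Linear(3 * hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, btc, eth, xrp):          # each: (batch, window) of returns
        z = torch.cat([b(x) for b, x in zip(self.branches, (btc, eth, xrp))], dim=1)
        return self.head(z)                    # next-step price/movement estimate

model = MultiInputNet()
out = model(torch.randn(4, 30), torch.randn(4, 30), torch.randn(4, 30))
print(out.shape)                               # torch.Size([4, 1])
```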


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 676
Author(s):  
Andrej Zgank

Acoustic monitoring of animal activity is becoming one of the necessary tools in agriculture, including beekeeping, and can assist in the control of beehives in remote locations. With such approaches it is possible to classify bee swarm activity from audio signals. An IoT-based acoustic swarm classification using deep neural networks (DNNs) is proposed in this paper. Audio recordings were obtained from the Open Source Beehive project. Mel-frequency cepstral coefficient (MFCC) features were extracted from the audio signal. The lossless WAV and lossy MP3 audio formats were compared for IoT-based solutions. An analysis was made of the impact of the deep neural network parameters on the classification results. The best overall classification accuracy with uncompressed audio was 94.09%, but MP3 compression degraded the DNN accuracy by over 10%. The evaluation of the proposed IoT-based bee activity acoustic classification showed improved results compared to the previous hidden Markov model system.
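
The sketch below illustrates the described front end and classifier: MFCCs computed with librosa feeding a small feed-forward network. The time-averaging of MFCC frames, the network size, and the synthetic test signal are assumptions made to keep the example self-contained.

```python
# MFCC front end plus a small feed-forward classifier, sketching the described
# pipeline; the network size and the synthetic test signal are assumptions.
import librosa
import numpy as np
import torch
import torch.nn as nn

def mfcc_features(y, sr, n_mfcc=13):
    """One MFCC vector per recording, averaged over time frames."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

classifier = nn.Sequential(                # e.g. swarm vs. non-swarm activity
    nn.Linear(13, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# A real recording would be loaded with librosa.load (WAV or MP3); a
# one-second synthetic signal keeps the sketch runnable on its own.
sr = 22050
y = np.random.randn(sr).astype(np.float32)
x = torch.tensor(mfcc_features(y, sr), dtype=torch.float32)
print(classifier(x.unsqueeze(0)).softmax(dim=1))
```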


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1511
Author(s):  
Taylor Simons ◽  
Dah-Jye Lee

There has been a recent surge in publications related to binarized neural networks (BNNs), which use binary values to represent both the weights and activations in deep neural networks (DNNs). Due to the bitwise nature of BNNs, there have been many efforts to implement BNNs on ASICs and FPGAs. While BNNs are excellent candidates for these kinds of resource-limited systems, most implementations still require very large FPGAs or CPU-FPGA co-processing systems. Our work focuses on reducing the computational cost of BNNs even further, making them more efficient to implement on FPGAs. We target embedded visual inspection tasks, such as quality inspection of manufactured parts and agricultural produce sorting. We propose a new binarized convolutional layer, called the neural jet features layer, that learns well-known classic computer vision kernels that are efficient to compute as a group. We show that on visual inspection tasks, neural jet features perform comparably to standard BNN convolutional layers while using fewer computational resources. We also show that neural jet features tend to be more stable than BNN convolutional layers when training small models.
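
For context, the classic "jet" of an image is its stack of Gaussian-derivative responses; the snippet below computes such a filter bank with SciPy as an illustration of the kernel family involved, and is not the paper's binarized FPGA implementation.

```python
# Classic image "jet": Gaussian-derivative responses up to second order,
# shown here as an illustration of the kernel family the neural jet features
# layer learns to combine (not the binarized implementation from the paper).
import numpy as np
from scipy.ndimage import gaussian_filter

def image_jet(img, sigma=1.5, max_order=2):
    """Stack of Gaussian-derivative responses d^(i+j)/dx^i dy^j with i + j <= max_order."""
    responses = []
    for i in range(max_order + 1):
        for j in range(max_order + 1 - i):
            responses.append(gaussian_filter(img, sigma=sigma, order=(i, j)))
    return np.stack(responses)                 # (n_filters, H, W)

img = np.random.rand(64, 64)
print(image_jet(img).shape)                    # (6, 64, 64) for max_order=2
```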


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1589
Author(s):  
Yongkeun Hwang ◽  
Yanghoon Kim ◽  
Kyomin Jung

Neural machine translation (NMT) is a text generation task that has achieved significant improvement with the rise of deep neural networks. However, language-specific problems such as handling the translation of honorifics have received little attention. In this paper, we propose a context-aware NMT approach to improve the translation of Korean honorifics. By exploiting information such as the relationship between speakers from the surrounding sentences, our proposed model effectively manages the use of honorific expressions. Specifically, we utilize a novel encoder architecture that can represent the contextual information of the given input sentences. Furthermore, a context-aware post-editing (CAPE) technique is adopted to refine a set of inconsistent sentence-level honorific translations. To demonstrate the efficacy of the proposed method, honorific-labeled test data are required; thus, we also design a heuristic that labels Korean sentences to distinguish between honorific and non-honorific styles. Experimental results show that our proposed method outperforms sentence-level NMT baselines both in overall translation quality and in honorific translation.
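
The paper's own labeling heuristic is not reproduced in the abstract; purely as an illustration, a very simple ending-based rule of the kind that could separate polite from plain Korean sentence styles might look like the sketch below (the list of endings is an assumption, not the authors' rule set).

```python
# Illustrative ending-based heuristic for honorific vs. non-honorific Korean
# sentences; the paper defines its own labeling rules, this is only a sketch.
import re

HONORIFIC_ENDINGS = ("습니다", "ㅂ니다", "세요", "어요", "아요", "예요", "십시오")

def is_honorific(sentence: str) -> bool:
    s = re.sub(r"[.!?\s]+$", "", sentence)     # strip trailing punctuation
    return s.endswith(HONORIFIC_ENDINGS)

print(is_honorific("안녕하세요?"))   # True  (polite style)
print(is_honorific("뭐 해?"))        # False (plain style)
```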


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6546
Author(s):  
Kazi Masum Sadique ◽  
Rahim Rahmani ◽  
Paul Johannesson

The Internet of Things (IoT) will connect several billion devices to the Internet to enhance human society as well as to improve the quality of living. A huge number of sensors, actuators, gateways, servers, and related end-user applications will be connected to the Internet, and all these entities require identities to communicate with each other. The communicating devices may be mobile, and currently the main identity solution is IP-based identity management, which is not suitable for the authentication and authorization of heterogeneous IoT devices. Sometimes devices and applications need to communicate in real time to make decisions within very short time frames. Most of the recently proposed solutions for identity management are cloud-based, and such cloud-based identity management solutions are not feasible for heterogeneous IoT devices. In this paper, we propose an edge-fog based decentralized identity management and authentication solution for IoT devices (IoTD) and edge IoT gateways (EIoTG). We also present a secure communication protocol for communication between edge IoT devices and edge IoT gateways. The proposed security protocols are verified using the Scyther formal verification tool, a popular tool for automated verification of security protocols. The proposed model is specified using the PROMELA language, and the SPIN model checker is used to confirm the specification of the proposed model. The results show that the different message flows execute without any error.
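
The actual protocol is specified formally in PROMELA and verified with Scyther and SPIN; purely as an illustration of one device-to-gateway authentication step that such a scheme might include, a generic HMAC challenge-response can be sketched as follows. The key handling and message framing here are assumptions, not the paper's protocol.

```python
# Generic HMAC challenge-response between an IoT device (IoTD) and an edge IoT
# gateway (EIoTG); only an illustrative sketch, not the verified protocol.
import hashlib
import hmac
import os

shared_key = os.urandom(32)                       # pre-provisioned device key

# Gateway issues a fresh challenge (nonce).
nonce = os.urandom(16)

# Device proves possession of the key without revealing it.
device_response = hmac.new(shared_key, nonce, hashlib.sha256).digest()

# Gateway recomputes the tag and compares in constant time.
expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
print("device authenticated:", hmac.compare_digest(device_response, expected))
```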


2019 ◽  
Vol 9 (15) ◽  
pp. 3097 ◽  
Author(s):  
Diego Renza ◽  
Jaime Andres Arango ◽  
Dora Maria Ballesteros

This paper addresses a problem in the field of audio forensics. With the aim of supporting Chain of Custody (CoC) processes, we propose an integrity verification system that includes capture (mobile based), hash code calculation, and cloud storage. When the audio is recorded, a hash code is generated in situ by the capture module (an application) and is sent immediately to the cloud. Later, the integrity of the audio recording given as evidence can be verified against the information stored in the cloud. To validate the properties of the proposed scheme, we conducted several tests to evaluate whether two different inputs could generate the same hash code (collision resistance), and to evaluate how much the hash code changes when small changes occur in the input (sensitivity analysis). According to the results, all selected audio signals produce different hash codes, and these values are very sensitive to small changes in the recorded audio. In terms of computational cost, less than 2 s per minute of recording is required to calculate the hash code. With the above results, our system is useful for verifying the integrity of audio recordings that may be relied on as digital evidence.
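
The paper defines its own hash construction; the sketch below only illustrates the capture-and-verify flow with a standard SHA-256 file hash, and the file name is a placeholder.

```python
# Capture-side hash generation and later verification, sketched with SHA-256;
# the paper's own hash construction is not reproduced here.
import hashlib

def audio_hash(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 over the raw bytes of the recording, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# At capture time, the code is computed and pushed to cloud storage.
stored_code = audio_hash("evidence_recording.wav")    # placeholder file name

# At verification time, the code is recomputed over the submitted evidence.
print("integrity intact:", audio_hash("evidence_recording.wav") == stored_code)
```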


2018 ◽  
Vol 7 (2.7) ◽  
pp. 203 ◽  
Author(s):  
Kalathiripi Rambabu ◽  
N Venkatram

The phenomenal and continuous growth of diverse IoT (Internet of Things)-dependent networks has opened up security and connectivity challenges. This is due to the nature of IoT devices, the loosely coupled behavior of internetworking, and the heterogeneous structure of the networks. These factors make such networks highly vulnerable to traffic-flow-based DDoS (distributed denial of service) attacks. Botnets such as Mirai, observed in the recent past, exploit IoT devices and tune them to flood traffic so that the target network becomes too exhausted to respond to benevolent requests. Hence, this manuscript proposes a novel learning-based model that learns from defined traffic-flow features to distinguish DDoS-attack-prone traffic flows from benevolent traffic flows. The performance analysis was carried out empirically using synthesized traffic flows that are high in volume and in attack sources. The values obtained for the statistical metrics evince the significance and robustness of the proposed model.
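
The abstract does not name the learner or the exact flow features, so the sketch below is only a generic example of a learning-based flow classifier; the feature columns, the synthetic labels, and the random-forest choice are all assumptions.

```python
# Generic learning-based classifier over hand-crafted traffic-flow features;
# the features, labels, and model choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: packets/s, bytes/s, mean inter-arrival time, distinct source count.
X = rng.random((2000, 4))
y = (X[:, 0] + X[:, 3] > 1.2).astype(int)          # synthetic "attack" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```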


Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), human activity (HA) recognition has contributed to more applications in health care in terms of diagnosis and the clinical process. These devices must be aware of human movements to provide better aid in clinical applications as well as in the user's daily activity. In addition, with machine and deep learning algorithms, HA recognition systems have significantly improved in terms of recognition accuracy. However, most of the existing models need improvement in terms of accuracy and computational overhead. In this research paper, we propose a BAT-optimized Long Short-Term Memory network (BAT-LSTM) for effective recognition of human activities using real-time IoT systems. The data are collected by invasively implanting Internet of Things (IoT) devices. Then, the proposed BAT-LSTM is deployed to extract the temporal features, which are then used for HA classification. Nearly 10,0000 data samples were collected and used to evaluate the proposed model. For the validation of the proposed framework, accuracy, precision, recall, specificity, and F1-score are chosen as metrics, and a comparison is made with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models, demonstrating its suitability for HA recognition.
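
As a sketch of the classification backbone only, the snippet below defines a plain LSTM activity classifier; the hidden size, number of sensor channels, and number of classes are fixed to illustrative values, and the BAT optimization step described in the paper is not shown.

```python
# Plain LSTM activity classifier sketch (PyTorch); the BAT optimization from
# the paper is omitted, and the sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])            # logits over activity classes

model = ActivityLSTM()
window = torch.randn(16, 128, 6)           # 16 windows of 128 sensor samples
print(model(window).shape)                 # torch.Size([16, 5])
```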


2020 ◽  
Vol 33 (18) ◽  
pp. 7777-7786
Author(s):  
Kaiyue Shan ◽  
Xiping Yu

The establishment of a tropical cyclone (TC) trajectory model that can represent the basic physics and is practically advantageous in terms of both accuracy and computational cost is essential to climatological studies of various global TC activities. In this study, a simple deterministic model is proposed based on a newly developed semiempirical formula for the beta drift under known conditions of the environmental steering flow. To verify the proposed model, all historical TC tracks in the western North Pacific and North Atlantic Ocean basins during the period 1979–2018 are simulated and statistically compared with the relevant results derived from observed data. The proposed model is shown to capture well the spatial distribution patterns of TC occurrence frequency in the two ocean basins. Prevailing TC tracks as well as the latitudinal distribution of the number of landfalling TCs in the western North Pacific Ocean basin are also shown to agree better with the results derived from observed data, compared to existing models that take different strategies to include the effect of the beta drift. It is then concluded that the present model is advantageous in terms of not only accuracy but also the capacity to accommodate a varying climate. It is thus believed that the proposed TC trajectory model has the potential to be used for assessing possible impacts of climate change on tropical cyclone activities.
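
The semiempirical beta-drift formula itself is not given in the abstract; the sketch below only illustrates the overall trajectory scheme, advecting the TC centre by an environmental steering flow plus a beta-drift term, with a fixed northwestward placeholder drift and a crude flat-earth position update standing in for the paper's formulation.

```python
# Minimal trajectory integration: TC translation = environmental steering flow
# plus a beta-drift term. The beta drift here is a fixed placeholder, not the
# semiempirical formula developed in the paper.
import numpy as np

def integrate_track(lon0, lat0, steering, hours=72, dt=6.0):
    """steering(lon, lat, t) -> (u, v) in m/s; returns the track as (lon, lat) pairs."""
    deg_per_m = 1.0 / 111_000.0                    # rough metres-to-degrees factor
    beta_drift = np.array([-1.0, 1.5])             # placeholder (u, v) drift, m/s
    track, pos = [(lon0, lat0)], np.array([lon0, lat0], dtype=float)
    for step in range(int(hours / dt)):
        u, v = steering(pos[0], pos[1], step * dt)
        vel = np.array([u, v]) + beta_drift
        pos = pos + vel * dt * 3600.0 * deg_per_m  # crude flat-earth update
        track.append(tuple(pos))
    return track

# Example: uniform 5 m/s westward steering flow.
print(integrate_track(140.0, 15.0, lambda lon, lat, t: (-5.0, 0.0))[:3])
```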

