fully connected
Recently Published Documents


TOTAL DOCUMENTS: 1303 (FIVE YEARS: 835)
H-INDEX: 38 (FIVE YEARS: 12)

Author(s):  
A. Pramod Reddy ◽  
Vijayarajan V.

Automatic emotion recognition from speech (AERS) systems based on acoustic analysis reveal that some emotional classes remain ambiguous. This study employed an alternative method aimed at providing deeper insight into the amplitude-frequency characteristics of different emotions, in order to support the development of more effective near-term AER classification approaches. The study was undertaken by converting narrow 20 ms frames of speech into RGB or grey-scale spectrogram images, and these features were used to fine-tune a feature selection system that had previously been trained to recognise emotions. Two spectral scales, linear and Mel, are used to render the spectrograms, giving an inductive view of the amplitude and frequency characteristics of the various emotional classes. We propose a two-channel deep fusion network model for the efficient categorization of these images. Linear and Mel spectrograms are acquired from the speech signal, processed in the frequency domain, and fed to a deep neural network. The proposed AlexNet model, with five convolutional layers and two fully connected layers, extracts the most salient features from spectrogram images plotted on the amplitude-frequency scale. The approach is compared with the state of the art on a benchmark dataset (EMO-DB). RGB and saliency images fed to the pre-trained AlexNet and tested on both the EMO-DB and a Telugu dataset reach an accuracy of 72.18%, while fused image features require fewer computations and reach an accuracy of 75.12%. The results show that transfer learning predicts more efficiently than fine-tuning the network. When tested on the EMO-DB dataset, the proposed system adequately learns discriminant features from speech spectrograms and outperforms many state-of-the-art techniques.
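As a rough illustration of the pipeline this abstract describes, the sketch below converts a speech signal into a Mel-spectrogram image and passes it through a pre-trained AlexNet with a replaced classifier head; the audio file name, the 20 ms analysis window, and the seven emotion classes are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: Mel spectrograms of speech fed to a pre-trained AlexNet.
# The audio path, 20 ms framing, and 7 emotion classes are assumptions for illustration.
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

y, sr = librosa.load("utterance.wav", sr=16000)            # assumed input file
mel = librosa.feature.melspectrogram(y=y, sr=sr,
                                     n_fft=int(0.020 * sr),  # ~20 ms analysis window
                                     hop_length=int(0.010 * sr),
                                     n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Normalise to [0, 1] and replicate to 3 channels so it can be treated as an RGB image.
img = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
img = torch.tensor(np.stack([img] * 3), dtype=torch.float32).unsqueeze(0)
img = nn.functional.interpolate(img, size=(224, 224))       # AlexNet input size

# Transfer learning: freeze the convolutional backbone, replace the classifier head.
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in net.features.parameters():
    p.requires_grad = False
net.classifier[6] = nn.Linear(4096, 7)                       # 7 emotion classes (assumed)

logits = net(img)
print(logits.shape)                                          # torch.Size([1, 7])
```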


2022 ◽  
Vol 15 (3) ◽  
pp. 1-25
Author(s):  
Stefan Brennsteiner ◽  
Tughrul Arslan ◽  
John Thompson ◽  
Andrew McCormick

Machine learning in the physical layer of communication systems holds the potential to improve performance and simplify design methodology. Many algorithms have been proposed; however, their model complexity is often infeasible for real-time deployment, and the real-time processing capability of these systems has not yet been demonstrated. In this work, we propose a novel, less complex, fully connected neural network to perform channel estimation and signal detection in an orthogonal frequency division multiplexing (OFDM) system. The memory requirement, which is often the bottleneck for fully connected neural networks, is reduced by a factor of ≈ 27 by applying known compression techniques in a three-step training process. Extensive experiments were performed on pruning and quantizing the weights of the neural network detector, and Huffman encoding was applied to the weights to further reduce memory requirements. Based on this approach, we propose the first field-programmable gate array based, real-time capable neural network accelerator specifically designed to accelerate the OFDM detector workload. The accelerator is synthesized for a Xilinx RFSoC field-programmable gate array, uses small-batch processing to increase throughput, efficiently supports branching neural networks, and implements superscalar Huffman decoders.
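The three compression steps mentioned here (pruning, quantization, and Huffman coding of the weights) can be sketched roughly as follows; the layer shape, 50% sparsity, and 4-bit quantization are assumed values for illustration, not figures reported in the paper.

```python
# Hedged sketch: magnitude pruning, uniform quantization, and Huffman coding of
# fully connected weights. Layer shape, sparsity, and bit width are assumptions.
import heapq
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))                       # one fully connected layer (assumed shape)

# 1) Magnitude pruning: zero out the 50% smallest-magnitude weights (assumed sparsity).
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Uniform quantization to 16 levels (4 bits, assumed).
levels = 16
w_min, w_max = W_pruned.min(), W_pruned.max()
step = (w_max - w_min) / (levels - 1)
symbols = np.round((W_pruned - w_min) / step).astype(int).ravel().tolist()

# 3) Huffman coding of the quantized symbols (classic greedy construction).
freq = Counter(symbols)
heap = [[count, [sym, ""]] for sym, count in freq.items()]
heapq.heapify(heap)
while len(heap) > 1:
    lo, hi = heapq.heappop(heap), heapq.heappop(heap)
    for pair in lo[1:]:
        pair[1] = "0" + pair[1]
    for pair in hi[1:]:
        pair[1] = "1" + pair[1]
    heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
codebook = dict(heap[0][1:])

coded_bits = sum(len(codebook[s]) for s in symbols)
print(f"dense float32 storage : {W.size * 32} bits")
print(f"pruned+quant+Huffman  : {coded_bits} bits (codebook and sparsity index not counted)")
```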


Author(s):  
Yaesr Khamayseh ◽  
Rabiah Al-qudah

Wireless networks are designed to provide the enabling infrastructure for emerging technological advancements. Their main characteristics are mobility, power constraints, high packet loss, and lower bandwidth. Node mobility is a crucial consideration, as nodes move constantly and this may result in loss of connectivity in the network. The goal of this work is to explore the effect of replacing the generally held assumption of symmetric radii in wireless networks with asymmetric radii. This replacement has a direct impact on the connectivity, throughput, and collision avoidance mechanism of mobile networks, and may also affect the functionality of other mobile protocols. In this work, we are mainly concerned with building and maintaining a fully connected wireless network under the asymmetric assumption. To this end, we study the effect of the asymmetric-link assumption on network performance through extensive simulation experiments. Finally, a resource allocation scheme for wireless networks is proposed for the dual-rate scenario, and the performance of the proposed framework is evaluated using simulation.
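With asymmetric radii, a link from node i to node j exists only if j lies within i's transmit radius, so the communication graph becomes directed and "fully connected" naturally maps to strong connectivity. The sketch below checks that property; the node count, deployment area, and radius range are chosen purely for illustration.

```python
# Hedged sketch: with per-node (asymmetric) radii the communication graph is directed;
# a "fully connected" network corresponds to a strongly connected graph.
# Node count, deployment area, and radius range are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_nodes = 50
area = 100.0                                    # nodes deployed in a 100 x 100 square
pos = rng.uniform(0, area, size=(n_nodes, 2))
radius = rng.uniform(15.0, 35.0, size=n_nodes)  # each node gets its own transmit radius

G = nx.DiGraph()
G.add_nodes_from(range(n_nodes))
for i in range(n_nodes):
    for j in range(n_nodes):
        if i != j and np.linalg.norm(pos[i] - pos[j]) <= radius[i]:
            G.add_edge(i, j)                    # i can reach j, not necessarily vice versa

asymmetric = sum(1 for u, v in G.edges if not G.has_edge(v, u))
print(f"directed links: {G.number_of_edges()}, one-way (asymmetric) links: {asymmetric}")
print("fully connected (strongly connected):", nx.is_strongly_connected(G))
```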


2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-30
Author(s):  
Jacob Laurel ◽  
Rem Yang ◽  
Gagandeep Singh ◽  
Sasa Misailovic

We present a novel abstraction for bounding the Clarke Jacobian of a Lipschitz continuous, but not necessarily differentiable function over a local input region. To do so, we leverage a novel abstract domain built upon dual numbers, adapted to soundly over-approximate all first derivatives needed to compute the Clarke Jacobian. We formally prove that our novel forward-mode dual interval evaluation produces a sound, interval domain-based over-approximation of the true Clarke Jacobian for a given input region. Due to the generality of our formalism, we can compute and analyze interval Clarke Jacobians for a broader class of functions than previous works supported – specifically, arbitrary compositions of neural networks with Lipschitz, but non-differentiable perturbations. We implement our technique in a tool called DeepJ and evaluate it on multiple deep neural networks and non-differentiable input perturbations to showcase both the generality and scalability of our analysis. Concretely, we can obtain interval Clarke Jacobians to analyze Lipschitz robustness and local optimization landscapes of both fully-connected and convolutional neural networks for rotational, contrast variation, and haze perturbations, as well as their compositions.
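The core idea of the abstraction, pairing an interval for the value with an interval for the derivative and propagating both in forward mode, can be sketched as follows; the class name and the handful of operations shown are illustrative assumptions, not the DeepJ implementation.

```python
# Hedged sketch: forward-mode "dual intervals" -- an interval for the value plus an
# interval for the (Clarke) derivative, propagated through a few operations.
# This illustrates the idea only; it is not the DeepJ tool itself.

class DualInterval:
    def __init__(self, lo, hi, dlo, dhi):
        self.lo, self.hi = lo, hi        # bounds on the value over the input region
        self.dlo, self.dhi = dlo, dhi    # bounds on the derivative w.r.t. the input

    def __add__(self, other):
        return DualInterval(self.lo + other.lo, self.hi + other.hi,
                            self.dlo + other.dlo, self.dhi + other.dhi)

    def scale(self, c):
        lo, hi = sorted((c * self.lo, c * self.hi))
        dlo, dhi = sorted((c * self.dlo, c * self.dhi))
        return DualInterval(lo, hi, dlo, dhi)

    def relu(self):
        # ReLU is Lipschitz but not differentiable at 0; its Clarke derivative there
        # is the whole interval [0, 1], so the straddling case keeps the bound sound.
        lo, hi = max(self.lo, 0.0), max(self.hi, 0.0)
        if self.lo >= 0:
            return DualInterval(lo, hi, self.dlo, self.dhi)
        if self.hi <= 0:
            return DualInterval(lo, hi, 0.0, 0.0)
        dcands = [0.0, self.dlo, self.dhi]
        return DualInterval(lo, hi, min(dcands), max(dcands))

# Input region x in [-0.5, 0.5], seeded with derivative dx/dx = 1.
x = DualInterval(-0.5, 0.5, 1.0, 1.0)
y = x.scale(2.0).relu() + x.scale(-0.5)          # a tiny piecewise-linear "network"
print(f"value in [{y.lo}, {y.hi}], Clarke derivative in [{y.dlo}, {y.dhi}]")
```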


Author(s):  
S Sumedha ◽  
Mustansir Barma

We use large deviation theory to obtain the free energy of the XY model on a fully connected graph, with a randomly oriented field of magnitude $h$ on each site. The phase diagram is obtained for two symmetric distributions of the random orientations: (a) a uniform distribution and (b) a distribution with cubic symmetry. In both cases, the ordered state reflects the symmetry of the underlying disorder distribution. The phase boundary has a multicritical point which separates a locus of continuous transitions (for small values of $h$) from a locus of first order transitions (for large $h$). The free energy is a function of a single variable in case (a) and a function of two variables in case (b), leading to different characters of the multicritical points in the two cases.
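For orientation, a standard form of the random-field XY Hamiltonian on a fully connected graph of $N$ sites is sketched below; the coupling normalisation and the field term are assumptions about the conventional writing of this model, not quoted from the paper.

```latex
% Assumed conventional form (not quoted from the paper): ferromagnetic coupling
% scaled by 1/N, plus a field of fixed magnitude h with random on-site orientation phi_i.
H = -\frac{J}{N}\sum_{i<j}\cos(\theta_i-\theta_j)
    \;-\; h\sum_{i=1}^{N}\cos(\theta_i-\phi_i)
```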


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 154
Author(s):  
Yuan Bao ◽  
Zhaobin Liu ◽  
Zhongxuan Luo ◽  
Sibo Yang

In this paper, a novel smooth group $L_{1/2}$ (SGL1/2) regularization method is proposed for pruning hidden nodes of the fully connected layer in convolutional neural networks. Usually, the selection of nodes and weights is based on experience, and the convolution filters are symmetric in a convolutional neural network. The main contribution of SGL1/2 is to drive the weights towards 0 at the group level, so that a hidden node can be pruned when all of its corresponding weights are close to 0. Furthermore, because the regularizer is smooth, a feasibility analysis of the new method is carried out under some reasonable assumptions. The numerical results demonstrate the superiority of the SGL1/2 method with respect to sparsity, without damaging classification performance.
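A rough sketch of a group-level $L_{1/2}$ penalty over each hidden node's incoming weights in a fully connected layer is given below; the smoothing constant, layer sizes, and penalty weight are illustrative assumptions rather than the paper's exact construction.

```python
# Hedged sketch: a smoothed group L1/2 penalty over each hidden node's incoming weights
# in a fully connected layer, so whole nodes can be pruned when their group shrinks to 0.
# The smoothing constant, layer sizes, and penalty weight are assumptions.
import torch
import torch.nn as nn

def smooth_group_l12(weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Approximate sum over groups of ||w_g||_2^(1/2); eps keeps the gradient finite at 0."""
    group_sq = weight.pow(2).sum(dim=1)        # squared L2 norm per hidden node (row of W)
    return (group_sq + eps).sqrt().sqrt().sum()

fc = nn.Linear(256, 128)                       # fully connected layer to be pruned (assumed size)
head = nn.Linear(128, 10)
lam = 1e-4                                     # assumed regularization strength

x = torch.randn(32, 256)
target = torch.randint(0, 10, (32,))
logits = head(torch.relu(fc(x)))
loss = nn.functional.cross_entropy(logits, target) + lam * smooth_group_l12(fc.weight)
loss.backward()                                # gradients now include the group-sparsity term
```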


Author(s):  
Miguel G. Folgado ◽  
Veronica Sanz

In this paper we illustrate the use of Data Science techniques to analyse complex human communication. In particular, we consider tweets from leaders of political parties as a dynamical proxy to political programmes and ideas. We also study the temporal evolution of their contents as a reaction to specific events. We analyse levels of positive and negative sentiment in the tweets using new tools adapted to social media. We also train a Fully-Connected Neural Network (FCNN) to recognise the political affiliation of a tweet. The FCNN is able to predict the origin of the tweet with a precision in the range of 71–75%, and the political leaning (left or right) with a precision of around 90%. This study is meant to be viewed as an example of how to use Twitter data and different types of Data Science tools for a political analysis.
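A minimal sketch of the kind of classifier described, text features fed into a small fully connected network, is shown below; the toy tweets, feature extraction, layer widths, and party labels are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: TF-IDF features from tweets fed to a small fully connected classifier.
# The toy tweets, feature size, layer widths, and party labels are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["example tweet from the leader of party A",
          "another statement posted by party A",
          "example tweet from the leader of party B",
          "another statement posted by party B"]
labels = torch.tensor([0, 0, 1, 1])                  # party index per tweet (assumed)

vec = TfidfVectorizer()
X = torch.tensor(vec.fit_transform(tweets).toarray(), dtype=torch.float32)

fcnn = nn.Sequential(nn.Linear(X.shape[1], 32), nn.ReLU(),
                     nn.Linear(32, 2))               # two-way affiliation head
opt = torch.optim.Adam(fcnn.parameters(), lr=1e-2)
for _ in range(200):                                 # tiny training loop, purely illustrative
    opt.zero_grad()
    loss = nn.functional.cross_entropy(fcnn(X), labels)
    loss.backward()
    opt.step()
print(fcnn(X).argmax(dim=1))                         # predicted party per toy tweet
```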


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Silvio Franz ◽  
Flavio Nicoletti ◽  
Giorgio Parisi ◽  
Federico Ricci-Tersenghi

We study the energy minima of the fully-connected $m$-component vector spin glass model at zero temperature in an external magnetic field for $m \ge 3$. The model has a zero temperature transition from a paramagnetic phase at high field to a spin glass phase at low field. We study the eigenvalues and eigenvectors of the Hessian in the minima of the Hamiltonian. The spectrum is gapless both in the paramagnetic and in the spin glass phase, with a pseudo-gap behaving as $\lambda^{m-1}$ in the paramagnetic phase and as $\sqrt{\lambda}$ at criticality and in the spin glass phase. Despite the long-range nature of the model, the eigenstates close to the edge of the spectrum display quasi-localization properties. We show that the paramagnetic to spin glass transition corresponds to delocalization of the edge eigenvectors. We solve the model by the cavity method in the thermodynamic limit. We also perform numerical minimization of the Hamiltonian for $N \le 2048$ and compute the spectral properties, which show very strong corrections to the asymptotic scaling approaching the critical point.
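For reference, a standard writing of the fully connected $m$-component vector spin glass Hamiltonian in a field, whose minima are analysed above, is sketched below; the Gaussian coupling normalisation and the field term are assumptions about the conventional form, not quoted from the paper.

```latex
% Assumed conventional form (not quoted from the paper): unit-length m-component spins,
% Gaussian couplings J_ij with variance of order 1/N, and external fields b_i.
H[\{\vec{S}_i\}] = -\sum_{i<j} J_{ij}\,\vec{S}_i\cdot\vec{S}_j
                   \;-\; \sum_{i=1}^{N} \vec{b}_i\cdot\vec{S}_i,
\qquad |\vec{S}_i| = 1,\quad \vec{S}_i\in\mathbb{R}^{m}
```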


2022 ◽  
Author(s):  
Amogh Palasamudram

This research introduces and evaluates the Neural Layer Bypassing Network (NLBN), a new neural network architecture to improve the speed and effectiveness of forward propagation in deep learning. This architecture adds 1 extra (fully connected) layer after every layer in the main network; this new layer determines whether finishing the rest of the forward propagation is required to predict the output for the given input. To test the effectiveness of the NLBN, I programmed coding examples for this architecture with 3 different image classification models trained on 3 different datasets: the MNIST Handwritten Digits Dataset, the Horses or Humans Dataset, and the Colorectal Histology Dataset. After training 1 standard convolutional neural network (CNN) and 1 NLBN per dataset (both of equivalent architectures), I performed 5 trials per dataset to analyze the performance of these two architectures. For the NLBN, I also collected data on the accuracy, time, and speed of the network with respect to the percentage of the model the inputs are passed through. This architecture increased the speed of forward propagation by 6%-25%, while accuracy tended to decrease by 0%-4%; the results vary with the dataset and the structure of the model, but the increase in speed was normally at least twice the decrease in accuracy. Beyond prediction performance, the NLBN takes roughly 40% longer to train and requires more memory due to its complexity. However, the architecture could be made more efficient if integrated into TensorFlow libraries. Overall, by being able to autonomously skip neural network layers, this architecture can potentially be a foundation for neural networks to teach themselves to become more efficient for applications that require fast, accurate, and less computationally intensive predictions.
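A rough sketch of the bypassing idea, a small fully connected gate after each block that decides whether to stop early at inference time, is given below; the block sizes, gate design, and 0.9 confidence threshold are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: an early-exit / layer-bypassing forward pass in the spirit of the NLBN.
# After each block, a small fully connected gate decides whether to predict now and skip
# the remaining layers. Sizes, gate design, and the threshold are illustrative assumptions.
import torch
import torch.nn as nn

class BypassNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
        ])
        # One small fully connected "exit head" after every block.
        self.exits = nn.ModuleList([nn.Linear(hidden, n_classes) for _ in self.blocks])
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):                               # single-sample inference, batch of 1
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits), start=1):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            if probs.max() >= self.threshold:           # confident enough: skip remaining layers
                return probs, depth
        return probs, depth                             # fell through: full-depth prediction

net = BypassNet()
probs, used_depth = net(torch.randn(1, 784))
print(f"prediction from depth {used_depth}/3 with confidence {probs.max():.2f}")
```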

