Deconvoluting acoustic beamforming maps with a deep neural network

2021 ◽  
Vol 263 (1) ◽  
pp. 5397-5408
Author(s):  
Wagner Gonçalves Pinto ◽  
Michaël Bauerheim ◽  
Hélène Parisot-Dupuis

Localization and quantification of noise sources is an important scientific and industrial problem, and phased microphone arrays are the standard technique in many applications. Non-physical artifacts appear in the output due to the nature of the method, so a supplementary step known as deconvolution is often performed. Data-driven machine learning is a candidate for solving this problem. Neural networks can be extremely advantageous since, unlike classical deconvolution techniques, no hypotheses concerning the environment or the characteristics of the sources are necessary: information on the acoustic propagation is implicitly extracted from pairs of source-output maps. In this work, a convolutional neural network is trained to deconvolute the beamforming map obtained from synthetic data simulating the response of an array of microphones. The quality of the estimation and the computational cost are compared with those of classical deconvolution methods (DAMAS, CLEAN-SC). Constraints associated with the size of the dataset used for training the neural network are also investigated and presented.
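A minimal PyTorch sketch of the general idea: a convolutional encoder-decoder maps a beamforming map to an estimated source map and is trained on synthetic pairs. The map resolution (64x64), layer sizes, and MSE loss are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class DeconvolutionCNN(nn.Module):
    """Encoder-decoder mapping a beamforming map to a source map (sketch)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 64, non-negative map
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training step on synthetic (beamforming map, source map) pairs
model = DeconvolutionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

beamforming_maps = torch.rand(8, 1, 64, 64)   # placeholder synthetic inputs
source_maps = torch.rand(8, 1, 64, 64)        # placeholder ground-truth source maps

loss = loss_fn(model(beamforming_maps), source_maps)
loss.backward()
optimizer.step()
```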

Mining ◽  
2021 ◽  
Vol 1 (3) ◽  
pp. 279-296
Author(s):  
Marc Elmouttie ◽  
Jane Hodgkinson ◽  
Peter Dean

Geotechnical complexity in mining often leads to geotechnical uncertainty which impacts both safety and productivity. However, as mining progresses, particularly for strip mining operations, a body of knowledge is acquired which reduces this uncertainty and can potentially be used by mining engineers to improve the prediction of future mining conditions. In this paper, we describe a new method to support this approach based on modelling and neural networks. A high-level causal model of the mining operations was constructed from historical data for a number of parameters, accounting for parameter interactions, including hydrogeological conditions, weather, and prior operations. An artificial neural network was then trained on this historical data, including production data. As mining proceeds, the network is used to predict future production from presently observed mining conditions, and these predictions are compared with those of the causal model. Agreement between the two indicates confidence that the neural network predictions are properly supported by the newly available data. The efficacy of this approach is demonstrated using semi-synthetic data based on an actual mine.
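An illustrative sketch of the data-driven half of the approach: a small MLP is trained on historical operating conditions to predict production, then queried on newly observed conditions so its output can be compared against the causal-model forecast. The feature names and data below are hypothetical placeholders, not the mine's actual parameters.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical columns: groundwater level, rainfall, prior strip advance rate
X_hist = rng.normal(size=(500, 3))
y_hist = 2.0 * X_hist[:, 2] - 1.5 * X_hist[:, 0] + rng.normal(scale=0.1, size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_hist, y_hist)

# As mining proceeds, feed newly observed conditions and compare the prediction
# with the causal-model forecast; agreement builds confidence in the network.
X_new = rng.normal(size=(1, 3))
print(model.predict(X_new))
```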


2018 ◽  
Vol 15 (2) ◽  
pp. 294-301
Author(s):  
Reddy Sreenivasulu ◽  
Chalamalasetti SrinivasaRao

Drilling is a hole-making process performed on machine components at assembly time and is found everywhere. In precision applications, quality and accuracy play a major role. Industries today suffer from the cost incurred during deburring, especially in precise assemblies such as aerospace/aircraft body structures, marine works and the automobile industry. Burrs produced during drilling cause dimensional errors, jamming of parts and misalignment, so a deburring operation after drilling is often required, and reducing burr size has become an important topic. In this study, experiments were conducted with various input parameters selected from previous research. The effect of altering the drill geometry on thrust force and burr size of the drilled hole was investigated through the Taguchi design of experiments, and an optimum combination of the most significant input parameters, identified by ANOVA, was obtained with design expert software to minimize burr size. Drill thrust has the strongest influence on burr size, the clearance angle of the drill bit causes variation in thrust, and burr height is the response observed in this study. These results are compared with predictions from the neural network software @easy NN plus. It is concluded that increasing the number of nodes increases the computational cost while decreasing the neural network error. Good agreement was shown between the predictive model results and the experimental responses.
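A hedged sketch of the neural-network comparison step: fit burr height as a function of drilling parameters and observe how the test error changes as the number of hidden nodes grows. The parameter names and data are hypothetical, not the paper's DOE table.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Hypothetical columns: point angle, clearance angle, spindle speed, feed rate
X = rng.uniform(size=(200, 4))
burr_height = 0.3 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.02, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, burr_height, random_state=0)
for nodes in (4, 8, 16, 32):
    net = MLPRegressor(hidden_layer_sizes=(nodes,), max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    # more nodes -> higher training cost, usually lower prediction error
    print(nodes, mean_squared_error(y_te, net.predict(X_te)))
```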


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Xiaochao Fan ◽  
Hongfei Lin ◽  
Liang Yang ◽  
Yufeng Diao ◽  
Chen Shen ◽  
...  

Humor refers to the quality of being amusing. With the development of artificial intelligence, humor recognition is attracting considerable research attention. Although phonetics and ambiguity have been introduced in previous studies, existing recognition methods still lack suitable feature design for neural networks. In this paper, we show that phonetic structure and the ambiguity associated with confusing words need to be learned as their own representations within the neural network. We then propose the Phonetics and Ambiguity Comprehension Gated Attention network (PACGA) to learn phonetic structures and semantic representations for humor recognition. The PACGA model can represent both phonetic information and the semantic information of ambiguous words, which is of great benefit to humor recognition. Experimental results on two public datasets demonstrate the effectiveness of our model.
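A minimal PyTorch sketch of the gated-fusion idea: phonetic and semantic sequences are encoded separately and combined through a learned gate before classification. The vocabulary sizes, dimensions, and GRU encoders are assumptions for illustration, not the exact PACGA architecture.

```python
import torch
import torch.nn as nn

class GatedFusionClassifier(nn.Module):
    def __init__(self, word_vocab=10000, phone_vocab=100, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, dim)
        self.phone_emb = nn.Embedding(phone_vocab, dim)
        self.word_enc = nn.GRU(dim, dim, batch_first=True)
        self.phone_enc = nn.GRU(dim, dim, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)
        self.out = nn.Linear(dim, 2)   # humorous / not humorous

    def forward(self, words, phones):
        _, hw = self.word_enc(self.word_emb(words))     # semantic summary
        _, hp = self.phone_enc(self.phone_emb(phones))  # phonetic summary
        hw, hp = hw.squeeze(0), hp.squeeze(0)
        g = torch.sigmoid(self.gate(torch.cat([hw, hp], dim=-1)))  # fusion gate
        fused = g * hw + (1 - g) * hp
        return self.out(fused)

model = GatedFusionClassifier()
logits = model(torch.randint(0, 10000, (4, 20)), torch.randint(0, 100, (4, 30)))
print(logits.shape)  # torch.Size([4, 2])
```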


2015 ◽  
Vol 764-765 ◽  
pp. 863-867
Author(s):  
Yih Chuan Lin ◽  
Pu Jian Hsu

In this paper, an error concealment scheme for neural-network-based compression of depth images in 3D videos is proposed. In neural-network-based compression, each depth image is represented by one or more neural networks. The advantage of this approach lies in the parallel processing ability of multiple neurons, which can handle the massive data volume of 3D video. The similarity of the neuron weights of neighboring nodes is exploited to recover lost neuron weights when transmitting over an error-prone communication channel. With a simulated noisy channel, the quality of the compressed 3D video reconstructed after transmission can be recovered well by the proposed error concealment scheme.
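A simple NumPy sketch of the concealment idea: the weights of a lost neuron are estimated from the weights of its spatial neighbors. The grid layout and the plain averaging rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(8, 8, 16))        # 8x8 grid of neurons, 16 weights each
received = weights.copy()
lost = np.zeros((8, 8), dtype=bool)
lost[3, 4] = True                            # simulate a loss on the channel
received[lost] = 0.0

def conceal(received, lost):
    """Replace lost neuron weights by the mean of their intact neighbors."""
    out = received.copy()
    for i, j in zip(*np.nonzero(lost)):
        neigh = []
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < lost.shape[0] and 0 <= nj < lost.shape[1] and not lost[ni, nj]:
                neigh.append(received[ni, nj])
        if neigh:
            out[i, j] = np.mean(neigh, axis=0)   # exploit weight similarity of neighbors
    return out

recovered = conceal(received, lost)
print(np.abs(recovered[3, 4] - weights[3, 4]).mean())  # recovery error for the lost neuron
```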


2021 ◽  
Vol 8 (3) ◽  
pp. 15-27
Author(s):  
Mohamed N. Sweilam ◽  
Nikolay Tolstokulakov

Depth estimation has made great progress in the last few years due to its applications in robotics and computer vision. Various methods have been implemented and enhanced to estimate depth without flickers and missing holes. Despite this progress, it remains one of the main challenges for researchers, especially for video applications, where the greater complexity of the neural network affects the run time. Using monocular video as the input for depth estimation is an attractive idea, particularly for hand-held devices such as mobile phones, which are very popular for capturing pictures and videos yet have a limited amount of RAM. In this work, we focus on enhancing the existing consistent depth estimation for monocular videos approach so that it uses less RAM and fewer parameters without a significant reduction in the quality of the depth estimation.
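A hedged sketch of one standard way to shrink a depth-estimation backbone: replacing a regular 3x3 convolution with a depthwise-separable one. This is a generic parameter-reduction technique shown only for illustration, not necessarily the modification made by the authors.

```python
import torch.nn as nn

def count_params(module):
    return sum(p.numel() for p in module.parameters())

regular = nn.Conv2d(64, 128, kernel_size=3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),  # depthwise 3x3
    nn.Conv2d(64, 128, kernel_size=1),                       # pointwise 1x1
)
# Roughly 74k parameters for the regular block vs. about 9k for the separable one.
print(count_params(regular), count_params(separable))
```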


Author(s):  
S O Stepanenko ◽  
P Y Yakimov

Object classification with neural networks is highly relevant today. YOLO is one of the most frequently used frameworks for object classification. It produces high accuracy, but the processing speed is not high enough, especially under limited computing performance. This article investigates the use of the NVIDIA TensorRT framework to optimize YOLO with the aim of increasing the image processing speed. While preserving the efficiency and output quality of the neural network, TensorRT allows us to increase the processing speed through optimization of the architecture and of the calculations on a GPU.
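A hedged sketch of the usual TensorRT workflow: export the network to ONNX from PyTorch, then build an optimized engine with the trtexec tool. `TinyBackbone` below is only a stand-in module; the real YOLO weights and input size would replace it, and the file names are hypothetical.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):            # placeholder standing in for the YOLO network
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
    def forward(self, x):
        return self.conv(x)

model = TinyBackbone().eval()
dummy = torch.randn(1, 3, 416, 416)       # assumed input resolution
torch.onnx.export(model, dummy, "model.onnx", opset_version=13)

# Engine build (run in a shell where TensorRT is installed), for example:
#   trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
# Reduced-precision GPU calculations are one of the optimizations that raise
# throughput while keeping the network's output quality close to the original.
```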


Author(s):  
Saad Mohamed Darwish

Cheminformatics plays a vital role in managing large amounts of chemical data. A reliable prediction of the toxic effects of chemicals in living systems is highly desirable in domains such as cosmetics, drug design, food safety, and the manufacturing of chemical compounds. Toxicity prediction requires new approaches for knowledge discovery from data that can model composite associations between the modules of a chemical compound; such techniques incur more computational cost as the number of chemical compounds increases. State-of-the-art prediction methods such as neural networks and multi-layer regression, which require either parameter tuning or complex transformations of the predictor or outcome variables, do not achieve highly accurate results. This paper proposes a Quantum Inspired Genetic Programming (QIGP) model to improve the prediction accuracy. Genetic Programming is utilized to produce a linear equation for calculating the toxicity degree more accurately. Quantum computing is employed to improve the selection of the best-of-run individuals and to handle parsimony pressure, reducing the complexity of the solutions. The results of the internal validation analysis indicate that the QIGP model has better goodness-of-fit statistics and significantly outperforms the Neural Network model.
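A toy sketch of evolving a linear toxicity equation. For brevity it evolves only the coefficients of a fixed linear form with mutation-only reproduction; the paper's QIGP additionally evolves the expression structure and uses quantum-inspired selection and parsimony pressure, which are omitted here. The descriptor data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))                 # hypothetical molecular descriptors
y = X @ np.array([0.8, -0.2, 0.5, 0.0]) + rng.normal(scale=0.05, size=300)

def fitness(coefs):
    return -np.mean((X @ coefs - y) ** 2)     # higher is better (negative MSE)

pop = rng.normal(size=(50, 4))                # population of candidate linear equations
for _ in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]   # keep the best-of-run individuals
    children = parents[rng.integers(0, 10, 40)] + rng.normal(scale=0.1, size=(40, 4))
    pop = np.vstack([parents, children])      # mutate parents to form the next generation

best = pop[np.argmax([fitness(c) for c in pop])]
print(best)                                   # coefficients of the evolved toxicity equation
```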


2021 ◽  
Author(s):  
Usman Ghani ◽  
Israel Desta ◽  
Akhil Jindal ◽  
Omeir Khan ◽  
George Jones ◽  
...  

It has been demonstrated earlier that the neural-network-based program AlphaFold2 can be used to dock proteins, given the two sequences separated by a gap as the input. The protocol presented here combines AlphaFold2 with the physics-based docking program ClusPro. The monomers of the model generated by AlphaFold2 are separated, re-docked using ClusPro, and the resulting 10 models are refined by AlphaFold2. Finally, the five original AlphaFold2 models are added to the 10 AlphaFold2-refined ClusPro models, and the 15 models are ranked by their predicted aligned error (PAE) values obtained from AlphaFold2. The protocol is applied to two benchmark sets of complexes, the first based on the established protein-protein docking benchmark, and the second consisting only of structures released after May 2018, the cut-off date for training AlphaFold2. It is shown that the quality of the initial AlphaFold2 models improves with each additional step of the protocol. In particular, adding the AlphaFold2-refined ClusPro models to the AlphaFold2 models increases the success rate by 23% in the top 5 predictions, whereas considering the 10 models obtained by the combined protocol increases the success rate to close to 40%. The improvement is similar for the second benchmark, which includes only complexes distinct from the proteins used for training the neural network.
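An illustrative sketch of the final ranking step: pool the five original AlphaFold2 models with the ten AlphaFold2-refined ClusPro models and sort them by predicted aligned error. The model records, matrix sizes, and the use of a mean inter-chain PAE as the ranking score are assumptions for illustration, not the paper's exact scoring details.

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_interface_pae(pae_matrix, n_chain_a):
    """Average PAE over the inter-chain blocks of a residue-by-residue PAE matrix."""
    ab = pae_matrix[:n_chain_a, n_chain_a:]
    ba = pae_matrix[n_chain_a:, :n_chain_a]
    return float((ab.mean() + ba.mean()) / 2)

# Placeholder PAE matrices standing in for AlphaFold2 output (200 residues total).
models = (
    [{"name": f"af2_{i}", "pae": rng.uniform(2, 30, size=(200, 200))} for i in range(5)]
    + [{"name": f"cluspro_refined_{i}", "pae": rng.uniform(2, 30, size=(200, 200))} for i in range(10)]
)

ranked = sorted(models, key=lambda m: mean_interface_pae(m["pae"], n_chain_a=120))
for m in ranked[:5]:   # top-5 predictions by PAE-based score
    print(m["name"], round(mean_interface_pae(m["pae"], 120), 2))
```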


Author(s):  
Wahyu Srimulyani ◽  
Aina Musdholifah

Indonesia has many food varieties, one of which is rice. Each rice variety has physical characteristics that can be recognized through color, texture, and shape; based on these characteristics, rice can be identified using a neural network. Previous research using 12 features has not yielded optimal results. This study proposes the addition of geometry features, with the Learning Vector Quantization and Backpropagation algorithms used separately. The trial uses data from 9 rice varieties taken from several regions in Yogyakarta. Image acquisition was carried out using a Canon D700 camera with a kit lens at maximum magnification, 55 mm. The data were split into training and testing sets, with the training data distributed according to rice quality. Data preprocessing was carried out before feature extraction, with trial-and-error thresholding for segmentation. Evaluation was done by comparing the results obtained after adding 6 geometry features with those obtained before adding them. The test results show that adding the 6 geometry features increases accuracy: the Backpropagation algorithm reaches 100% accuracy, and the LVQ algorithm improves by 5.2%.
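A hedged sketch of extracting additional geometry features from a segmented grain mask with scikit-image. The six shape descriptors chosen here (area, perimeter, eccentricity, major and minor axis lengths, solidity) are common choices and may not match the paper's exact feature set; the ellipse mask is only a stand-in for a segmented rice grain.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.draw import ellipse

mask = np.zeros((120, 200), dtype=np.uint8)
rr, cc = ellipse(60, 100, 25, 70)            # stand-in for a thresholded rice grain
mask[rr, cc] = 1

props = regionprops(label(mask))[0]
geometry_features = [
    props.area,
    props.perimeter,
    props.eccentricity,
    props.major_axis_length,
    props.minor_axis_length,
    props.solidity,
]
# These would be appended to the 12 color/texture features before training
# the LVQ or Backpropagation classifier.
print(geometry_features)
```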

