A New Method of Curve Fitting for Calculation of Reverberation Time From Impulse Responses With Insufficient Length

Author(s):  
Çağlar Fırat Özgenel ◽  
Arzu Gönenç Sorguç

Most room acoustics evaluation parameters are calculated from the energy decay curve, which is obtained from the room impulse response; Schroeder's backward integration is one of the most commonly used methods for deriving this curve. Although the method has remained valid since 1964 and is used extensively, obtaining a room impulse response long enough to capture the total energy decay carries a high computational cost, especially in highly reverberant rooms. In such cases, present acoustical analysis and simulation tools either resort to data extrapolation and linear fitting or fail to provide any reliable output, so obtaining reliable results from such an impulse response demands considerable computation and effort. In this context, a modification to impulse-response-based acoustical analysis is proposed, comprising a linear fitting algorithm and extrapolation together with data culling. The proposed method rests on Schroeder's linear energy decay assumption and on the ideal energy decay implied by global reverberation time estimates, and it applies under diffuse field conditions regardless of the length of the room impulse response. Its validity is checked with a purpose-built room acoustics tool, RAT, and case studies conducted with that tool.
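
For readers unfamiliar with the underlying computation, the following is a minimal numpy sketch of Schroeder backward integration and the T20 line fit it feeds. The function names are ours, and the paper's culling and extrapolation steps are not reproduced here; note that the fit fails outright when the decay curve never reaches −25 dB, which is precisely the insufficient-length situation the paper addresses.

```python
import numpy as np

def schroeder_edc_db(h):
    """Schroeder backward integration: EDC(t) = integral from t to inf of h^2, in dB."""
    e = np.cumsum(h[::-1] ** 2)[::-1]        # backward cumulative energy
    return 10.0 * np.log10(e / e[0])         # normalise to 0 dB at t = 0

def t20(edc_db, fs):
    """T20: linear fit over the -5 to -25 dB span, extrapolated to -60 dB."""
    i0 = np.argmax(edc_db <= -5.0)
    i1 = np.argmax(edc_db <= -25.0)
    if i1 <= i0:
        # decay range too short -- the case the paper targets
        raise ValueError("EDC never reaches -25 dB")
    t = np.arange(i0, i1) / fs
    slope, _ = np.polyfit(t, edc_db[i0:i1], 1)   # decay rate in dB per second
    return -60.0 / slope
```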

Author(s):  
Heather L. Lai ◽  
Brian Hamilton

Abstract This paper investigates two room acoustics metrics designed to evaluate the degree to which the linearity assumptions of the energy decay curve are valid. The study focuses on measured and computer-modeled energy decay curves derived from the room impulse response of a space exhibiting a highly non-diffuse sound field due to flutter echo. In conjunction with acoustical remediation, room impulse response measurements were taken before and after the installation of acoustical panels, whose addition produced a dramatic decrease in reverberation time. The two non-linearity metrics used in this study are the non-linearity parameter and the curvature, calculated per octave band from the energy decay curves according to the definitions in ISO 3382-2. The non-linearity parameter quantifies the deviation of the EDC from the straight-line fits used to generate T20 and T30 reverberation times, where the reverberation times are calculated from a linear regression of the data between −5 and −25 dB for T20 or between −5 and −35 dB for T30. This deviation is quantified using the correlation coefficient between the energy decay curve and the linear regression over the specified range. To demonstrate these non-linearity metrics graphically, the energy decay curves are plotted together with the T20 and T30 linear regression lines for the measured data and for two room acoustics computer-modeling techniques, geometric acoustics modeling and finite-difference wave-based modeling. The intent of plotting these curves together is to show the relationship between the metrics and the energy decay curve, and to evaluate their use for quantifying the degree of non-linearity in non-diffuse sound fields. Observations of these graphical representations are used to assess the accuracy of reverberation time estimates in non-diffuse environments, and to evaluate the use of these non-linearity parameters for comparing computer-modeling techniques or room configurations. Using these techniques, the non-linearity parameter based on both the T20 and T30 linear regressions and the curvature parameter were calculated over the 250–4000 Hz octave bands for the measured and computer-modeled room impulse responses at two locations and in two room configurations. The calculated results are used to evaluate the consistency of these metrics and their application to quantifying the degree of non-linearity of energy decay curves derived from a non-diffuse sound field, as well as differences in the degree of diffusivity between the measured and computer-modeled room impulse responses. Acoustical computer modeling is often based on geometrical acoustics using ray-tracing and image-source algorithms; in non-diffuse sound fields, however, wave-based methods are often better able to model the characteristic sound wave patterns that develop. It is therefore of interest whether these improvements in wave-based computer modeling are also reflected in the non-linearity parameter calculations.
The results showed that these metrics provide an effective criterion for identifying non-linearity in the energy decay curve; for highly non-diffuse sound fields, however, the resulting values were found to be very sensitive to fluctuations in the energy decay curves and therefore to contain inconsistencies.
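
As a concrete reference, the two metrics can be computed from an octave-band EDC roughly as below. This is our reading of the ISO 3382-2 definitions (non-linearity ξ in permille from the squared correlation coefficient, curvature C in percent from the T30/T20 ratio), so the constants should be checked against the standard.

```python
import numpy as np

def decay_fit(edc_db, fs, lo, hi):
    """Fit a line to the EDC between lo and hi dB; return (RT, r)."""
    i0 = np.argmax(edc_db <= lo)
    i1 = np.argmax(edc_db <= hi)
    t = np.arange(i0, i1) / fs
    slope, _ = np.polyfit(t, edc_db[i0:i1], 1)
    r = np.corrcoef(t, edc_db[i0:i1])[0, 1]   # correlation with the line fit
    return -60.0 / slope, r

def nonlinearity_metrics(edc_db, fs):
    t20, r20 = decay_fit(edc_db, fs, -5.0, -25.0)
    t30, r30 = decay_fit(edc_db, fs, -5.0, -35.0)
    xi20 = 1000.0 * (1.0 - r20 ** 2)   # non-linearity parameter, permille
    xi30 = 1000.0 * (1.0 - r30 ** 2)
    c = 100.0 * (t30 / t20 - 1.0)      # curvature, percent
    return xi20, xi30, c
```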


2012 ◽  
Vol 2 (1) ◽  
pp. 7-9 ◽  
Author(s):  
Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main drawback of the median filter is its high computational cost: sorting N pixels takes O(N·log N) time even with the most efficient sorting algorithms. When the median filter must run in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3x3 filter kernel that needs only about 9 comparisons per pixel, exploiting spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence, which calculates four medians per step, is also examined.
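
The comparison counts suggest the classic shared-column scheme; a simplified single-row variant is sketched below (roughly 13 comparisons per pixel: 3 to sort the one new column, 7 to reduce the three sorted columns, 3 for the final median of three; pairing two output rows per step, as the paper does, brings this closer to 9). The max/median/min reduction of three sorted triples is the standard exact median-of-9 network; the function names are ours.

```python
import numpy as np

def sort3(a, b, c):
    """Sort three values with three comparisons."""
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return a, b, c

def med3(a, b, c):
    return sort3(a, b, c)[1]

def median3x3_row(img, y):
    """One output row; each new column of 3 is sorted once and reused."""
    h, w = img.shape
    cols = [sort3(img[y-1, x], img[y, x], img[y+1, x]) for x in (0, 1)]
    out = img[y].copy()                       # border pixels pass through
    for x in range(1, w - 1):
        cols.append(sort3(img[y-1, x+1], img[y, x+1], img[y+1, x+1]))
        a, b, c = cols[-3], cols[-2], cols[-1]
        # exact median of 9 from three sorted triples (7 more comparisons)
        out[x] = med3(max(a[0], b[0], c[0]),
                      med3(a[1], b[1], c[1]),
                      min(a[2], b[2], c[2]))
    return out
```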


1995 ◽  
Vol 32 (2) ◽  
pp. 95-103
Author(s):  
José A. Revilla ◽  
Kalin N. Koev ◽  
Rafael Díaz ◽  
César Álvarez ◽  
Antonio Roldán

One factor in determining the transport capacity of coastal interceptors in Combined Sewer Systems (CSS) is the reduction of Dissolved Oxygen (DO) in coastal waters caused by the overflows. The study of the evolution of DO in coastal zones is complex, and the high computational cost of mathematical models makes the required probabilistic analysis impractical. Alternative methods are therefore needed that build on such mathematical modelling while running it for only a limited number of cases. In this paper, two alternative methods are presented for studying the oxygen deficit resulting from CSS overflows. In the first, statistical analysis focuses on the causes of the deficit (the volume discharged); the second concentrates on the effects (the oxygen concentrations in the sea). Both methods have been applied in a study of the coastal interceptor at Pasajes Estuary (Guipúzcoa, Spain), with similar results.


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 891
Author(s):  
Aurea Grané ◽  
Alpha A. Sow-Barry

This work provides a procedure for constructing and visualizing profiles, i.e., groups of individuals with similar characteristics, for weighted and mixed data by combining two classical multivariate techniques: multidimensional scaling (MDS) and the k-prototypes clustering algorithm. The well-known drawback of classical MDS on large datasets is circumvented by selecting a small random sample of the dataset, whose individuals are clustered by means of an adapted version of the k-prototypes algorithm and mapped via classical MDS. Gower's interpolation formula is then used to project the remaining individuals onto the resulting configuration. Throughout the process, Gower's distance is used to measure proximity between individuals. The methodology is illustrated on a real dataset obtained from the Survey of Health, Ageing and Retirement in Europe (SHARE), which was carried out in 19 countries and represents over 124 million aged individuals in Europe. The performance of the method was evaluated through a simulation study, whose results indicate that the new proposal overcomes the high computational cost of classical MDS with low error.
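
A compact numpy sketch of the two geometric steps (classical MDS on the sample, Gower's add-a-point interpolation for the rest) is given below; computing the Gower distance matrix for mixed data and the k-prototypes clustering are assumed to come from elsewhere (e.g. the `gower` and `kmodes` Python packages, named here only as plausible choices, not the authors' implementation).

```python
import numpy as np

def classical_mds(D2, k):
    """Classical MDS from an n x n matrix of squared distances D2."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                      # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # top-k eigenvalues (assumed > 0)
    return vecs[:, idx] * np.sqrt(vals[idx])   # n x k sample configuration

def gower_interpolate(X, d2_new):
    """Gower's formula: map a new point from its squared distances
    d2_new (length n) to the n already-mapped sample points."""
    q = np.sum(X ** 2, axis=1)                 # squared norms of sample rows
    return 0.5 * np.linalg.solve(X.T @ X, X.T @ (q - d2_new))
```

Here `D2` would be the element-wise square of the Gower distance matrix on the sampled individuals, and `gower_interpolate` is applied row by row to the remaining individuals.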


Author(s):  
Seyede Vahide Hashemi ◽  
Mahmoud Miri ◽  
Mohsen Rashki ◽  
Sadegh Etedali

This paper carries out sensitivity analyses to study the effect of each design variable on the performance of a self-centering buckling restrained brace (SC-BRB) and the corresponding buckling restrained brace (BRB) without shape memory alloy (SMA) rods. Furthermore, reliability analyses of the BRB and SC-BRB are performed. Given the high computational cost of simulation methods, three meta-models, namely Kriging, radial basis function (RBF), and polynomial response surface method (PRSM), are used to construct surrogate models. To this end, nonlinear dynamic analyses are conducted on both the BRB and the SC-BRB using OpenSees software. The results showed that the SMA area, the SMA length ratio, and the BRB core area have the greatest effect on the failure probability of the SC-BRB. It is concluded that Kriging-based Monte Carlo simulation (MCS) gives the best performance in estimating the limit state function (LSF) of the BRB and SC-BRB in the reliability analysis procedures. Considering the effect of varying the maximum cyclic loading on the computed failure probability, and comparing failure probabilities across different LSFs, the reliability indices of the SC-BRB were always higher than the corresponding indices determined for the BRB, confirming the performance superiority of the SC-BRB over the BRB.
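
The Kriging-plus-MCS loop that the paper found most accurate has a simple generic shape, sketched here with scikit-learn on a placeholder limit state function; the real LSF would wrap the nonlinear OpenSees analyses, and the kernel choice, input distribution, and design sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):
    """Placeholder limit state function: g(x) < 0 denotes failure."""
    return 2.5 - x.sum(axis=1)

# 1. Fit the Kriging surrogate on a small design of experiments.
X_doe = rng.normal(size=(60, 2))              # 60 "expensive" model runs
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_doe, g(X_doe))

# 2. Run the cheap Monte Carlo simulation on the surrogate.
X_mc = rng.normal(size=(200_000, 2))
pf = (gp.predict(X_mc) < 0.0).mean()          # failure probability estimate
beta = -norm.ppf(pf)                          # corresponding reliability index
```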


Author(s):  
Yuki Takashima ◽  
Toru Nakashika ◽  
Tetsuya Takiguchi ◽  
Yasuo Ariki

Abstract Voice conversion (VC) is a technique for converting only the speaker-specific information in the source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because of the natural-sounding voice it achieves compared with conventional Gaussian mixture model-based VC. In conventional NMF-VC, models are trained on parallel data, so the speech data require elaborate pre-processing to generate that parallel data. NMF-VC also tends to yield a large model, since the dictionary matrix holds several parallel exemplars, leading to a high computational cost. In this study, an innovative parallel dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The proposed method uses tensor decomposition to decompose an input observation into a set of mode matrices and one core tensor, and the NTD-based dictionary-learning method estimates the dictionary matrix for NMF-VC without using parallel data. The experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
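
For orientation, non-negative Tucker decomposition itself is a few lines with the tensorly library (assuming its `non_negative_tucker` API); the tensor shape and ranks below are toy values, not the paper's configuration.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

# Toy spectrogram-like tensor (frames, frequency bins, context frames) --
# a stand-in for the exemplar tensor; shapes and ranks are illustrative.
X = tl.tensor(np.abs(np.random.randn(200, 257, 5)))

core, factors = non_negative_tucker(X, rank=[40, 60, 5], n_iter_max=200)
# factors[1] (the frequency mode) can serve as a speaker-dependent
# dictionary for NMF-style decomposition of new utterances.
rel_err = tl.norm(X - tl.tucker_to_tensor((core, factors))) / tl.norm(X)
```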


2006 ◽  
Vol 04 (03) ◽  
pp. 639-647 ◽  
Author(s):  
ELEAZAR ESKIN ◽  
RODED SHARAN ◽  
ERAN HALPERIN

The common approaches to haplotype inference from genotype data are targeted toward phasing short genomic regions. Longer regions are often tackled in a heuristic manner, due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions that combines information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum-likelihood criterion, which, among other things, takes into account the physical distance between neighboring single nucleotide polymorphisms (SNPs). The approach is very efficient; it has been applied to several large-scale datasets and shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver.
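
The abstract leaves the combination step abstract. Purely as an illustration of stitching overlapping local predictions (not the authors' likelihood criterion, which also weighs physical SNP distances), one can chain windows by choosing each window's haplotype orientation to best match the previous overlap:

```python
def stitch_windows(windows, overlap):
    """Greedy chain stitching of locally phased windows.
    Each window is a pair of haplotype strings over its SNPs; adjacent
    windows share `overlap` sites. Illustrative sketch only."""
    h1, h2 = windows[0]
    for a, b in windows[1:]:
        tail1, tail2 = h1[-overlap:], h2[-overlap:]
        # Score both orientations of the new window against the overlap.
        keep = sum(x == y for x, y in zip(tail1, a[:overlap])) + \
               sum(x == y for x, y in zip(tail2, b[:overlap]))
        flip = sum(x == y for x, y in zip(tail1, b[:overlap])) + \
               sum(x == y for x, y in zip(tail2, a[:overlap]))
        if flip > keep:
            a, b = b, a
        h1 += a[overlap:]
        h2 += b[overlap:]
    return h1, h2
```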


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 511
Author(s):  
Syed Mohammad Minhaz Hossain ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Proper plant leaf disease (PLD) detection is challenging in complex backgrounds and under different capture conditions. For this reason, modified adaptive centroid-based segmentation (ACS) is first used to trace the proper region of interest (ROI). Automatic initialization of the number of clusters (K) using modified ACS before recognition increases the scalability of ROI tracing, even for symmetrical features across various plants. Convolutional neural network (CNN)-based PLD recognition models achieve adequate accuracy to some extent; however, their memory requirements (large-scale parameter counts) and high computational cost are pressing issues for memory-restricted mobile and IoT devices. Therefore, after tracing the ROIs, three proposed depth-wise separable convolutional PLD (DSCPLD) models, segmented modified DSCPLD (S-modified MobileNet), segmented reduced DSCPLD (S-reduced MobileNet), and segmented extended DSCPLD (S-extended MobileNet), are used to strike a constructive trade-off among accuracy, model size, and computational latency. Moreover, we compare our proposed DSCPLD recognition models with state-of-the-art models such as MobileNet, VGG16, VGG19, and AlexNet. Among the segmentation-based DSCPLD models, S-modified MobileNet achieves the best accuracy of 99.55% and F1-score of 97.07%. We also evaluate our DSCPLD models on both full and segmented plant leaf images and conclude that, after applying modified ACS, all models improve in accuracy and F1-score. Furthermore, a new plant leaf dataset containing 6580 images of eight plants was used for the experiments with several depth-wise separable convolution models.
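
The parameter savings that motivate the DSCPLD models come from the depth-wise separable factorization itself. A generic PyTorch block (our sketch, not the paper's exact architecture) makes the arithmetic concrete.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: per-channel spatial filtering (depthwise)
    followed by a 1x1 pointwise channel mix."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2,
                                   groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv mapping 64 -> 128 channels has 3*3*64*128 = 73,728
# weights; the separable version has 3*3*64 + 64*128 = 8,768, roughly an
# 8x reduction, which is where the model-size savings come from.
```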


2021 ◽  
Vol 12 (4) ◽  
pp. 118-131
Author(s):  
Jaya Krishna Raguru ◽  
Devi Prasad Sharma

The problem of identifying a seed set of K nodes that maximizes influence spread over a social network is known as influence maximization (IM). Past work showed this problem to be NP-hard; the greedy algorithm guarantees only about 63% (1 − 1/e) of the optimal spread, and it is expensive, suffering from high computational cost. Furthermore, in a network with communities, IM spread is not always certain. In this paper, a heterogeneous influence maximization through community detection (HIMCD) algorithm is proposed. This approach selects initial seed nodes within communities using various centrality measures, and these seed nodes act as sources for influence spread. Influence maximization is then applied in parallel with the aid of the seed node set contained in each community: the graph is partitioned and IM computations are performed in a distributed manner. Extensive experiments with two real-world datasets reveal that HIMCD achieves substantial performance improvement over state-of-the-art techniques.
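
For context, the expensive greedy baseline that HIMCD improves on pairs Monte Carlo spread estimation with (1 − 1/e)-greedy selection. A bare-bones sketch under the independent cascade model follows; the propagation probability and run counts are our illustrative choices.

```python
import random

def simulate_ic(adj, seeds, p=0.1, runs=200):
    """Mean spread of `seeds` under the independent cascade model."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in adj.get(u, ()):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def greedy_im(adj, k):
    """Greedy seed selection with a (1 - 1/e) approximation guarantee."""
    seeds = []
    for _ in range(k):
        best = max((v for v in adj if v not in seeds),
                   key=lambda v: simulate_ic(adj, seeds + [v]))
        seeds.append(best)
    return seeds
```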


2021 ◽  
Author(s):  
Jie Wang ◽  
Peng Wang ◽  
Nahiène Hamila ◽  
Philippe Boisse

During the forming stage of the RTM process, the deformations and orientations of yarns at the mesoscopic scale are essential for evaluating the mechanical behavior of the final composite product and for calculating the permeability of the reinforcement. However, owing to the high computational cost, it is very difficult to carry out a mesoscopic draping simulation for the entire reinforcement. In this paper, a macro-meso scale simulation of composite reinforcements is presented in order to predict mesoscopic deformations of the fabric in a reasonable calculation time. The proposed multi-scale method links the macroscopic simulation of the reinforcement with the mesoscopic modelling of the RVE through a macro-meso embedded analysis. Starting from macroscopic draping simulations that use a hyperelastic constitutive law for the reinforcement, an embedded mesoscopic geometry is first deduced. To overcome the drawback of the macro-meso embedded solution, which leads to unrealistically large yarn extensions, local mesoscopic simulations based on the embedded analysis are carried out on a single RVE with specific boundary conditions. Finally, the multi-scale forming simulations are compared with experimental results, illustrating the efficiency of the proposed approach in terms of accuracy and CPU time.

