acoustic classification
Recently Published Documents


TOTAL DOCUMENTS: 124 (five years: 30)

H-INDEX: 14 (five years: 4)

Phonetica · 2021 · Vol 0 (0)
Author(s): Qandeel Hussain, Alexei Kochetov

Abstract: Punjabi is an Indo-Aryan language which contrasts a rich set of coronal stops at dental and retroflex places of articulation across three laryngeal configurations. Moreover, all these stops occur contrastively in various positions (word-initially, -medially, and -finally). The goal of this study is to investigate how various coronal place and laryngeal contrasts are distinguished acoustically both within and across word positions. A number of temporal and spectral correlates were examined in data from 13 speakers of Eastern Punjabi: Voice Onset Time, release and closure durations, fundamental frequency, F1-F3 formants, spectral center of gravity and standard deviation, H1*-H2*, and cepstral peak prominence. The findings indicated that higher formants and spectral measures were most important for the classification of place contrasts across word positions, whereas laryngeal contrasts were reliably distinguished by durational and voice quality measures. Word-medially and -finally, F2 and F3 of the preceding vowels played a key role in distinguishing the dental and retroflex stops, while spectral noise measures were more important word-initially. The findings of this study contribute to a better understanding of factors involved in the maintenance of typologically rare and phonetically complex sets of place and laryngeal contrasts in the coronal stops of Indo-Aryan languages.
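The analysis described here follows a common pattern: assemble per-token acoustic measurements, then train and cross-validate a classifier over them. The sketch below is not the authors' code; the feature names, the synthetic placeholder values, and the choice of linear discriminant analysis are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the study's pipeline): classifying dental vs.
# retroflex stops from the kinds of acoustic measures listed in the abstract.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-token measurements (one row per stop token); values are
# random placeholders that only show the expected table layout.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "vot_ms": rng.normal(20, 10, n),       # Voice Onset Time
    "closure_ms": rng.normal(80, 15, n),   # closure duration
    "f0_hz": rng.normal(180, 25, n),       # fundamental frequency
    "F2_hz": rng.normal(1600, 200, n),     # F2 of the adjacent vowel
    "F3_hz": rng.normal(2600, 250, n),     # F3 of the adjacent vowel
    "cog_hz": rng.normal(3500, 600, n),    # spectral centre of gravity of the burst
    "h1_h2_db": rng.normal(2, 3, n),       # H1*-H2* voice-quality measure
    "cpp_db": rng.normal(15, 4, n),        # cepstral peak prominence
    "place": rng.choice(["dental", "retroflex"], n),
})

X, y = df.drop(columns="place"), df["place"]
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

With real measurements in place of the random values, the same pipeline can be refit per word position to ask which measures carry the place or laryngeal contrast in each context.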


2021 · Vol 13 (23) · pp. 4771
Author(s): Karolina Trzcinska, Jaroslaw Tegowski, Pawel Pocwiardowski, Lukasz Janowski, Jakub Zdroik, ...

Acoustic seafloor measurements with multibeam echosounders (MBESs) are now widely used for submarine habitat mapping, but MBESs are usually not acoustically calibrated for backscattering strength (BBS) and therefore cannot be used to infer the absolute angular dependence of the seafloor response. We present a study outlining the calibration and showing absolute backscattering strength values measured at a frequency of 150 kHz at around 10–20 m water depth. After recording bathymetry, the co-registered backscattering strength was corrected for true incidence and footprint reverberation area on a rough and tilted seafloor. Finally, absolute backscattering strength angular response curves (ARCs) for several seafloor types were constructed after applying the sonar backscattering strength calibration and a water-column absorption correction specific to 150 kHz. Thus, we inferred specific 150 kHz angular backscattering responses that can discriminate among very fine sand, sandy gravel, and gravelly sand, as well as between bare boulders and boulders partially overgrown by red algae, which was validated by video ground-truthing. In addition, we provide backscatter mosaics using our algorithm (BBS-Coder) to correct the angle-varying gain (AVG). The results of the work are compared and discussed with the published results of BBS measurements in the 100–400 kHz frequency range. The presented results are valuable in extending the very sparse angular response curves gathered so far and could contribute to a better understanding of the dependence of backscattering on the type of bottom habitat and improve its acoustic classification.
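As a rough illustration of the processing described above, the sketch below applies the standard sonar-equation form BS = EL - SL + 2*TL - 10*log10(A) and bins the result into an angular response curve. It is not the BBS-Coder algorithm: the pulse-limited footprint model, the nominal 150 kHz absorption coefficient, and all names are assumptions made for illustration only.

```python
# Minimal sketch (assumptions, not the published method): per-beam absolute
# backscattering strength from a calibrated echo level, then angular binning.
import numpy as np

SOUND_SPEED = 1480.0    # m/s, nominal value
ALPHA_DB_PER_M = 0.045  # placeholder water-column absorption near 150 kHz

def backscattering_strength(echo_level_db, source_level_db, slant_range_m,
                            incidence_angle_rad, pulse_len_s, beamwidth_rad):
    """BS = EL - SL + 2*TL - 10*log10(A), with a pulse-limited footprint A."""
    transmission_loss = 20.0 * np.log10(slant_range_m) + ALPHA_DB_PER_M * slant_range_m
    footprint = ((SOUND_SPEED * pulse_len_s / 2.0) * (beamwidth_rad * slant_range_m)
                 / np.sin(incidence_angle_rad))
    return (echo_level_db - source_level_db + 2.0 * transmission_loss
            - 10.0 * np.log10(footprint))

def angular_response_curve(bs_db, incidence_deg, bin_width_deg=1.0):
    """Average BS in fixed incidence-angle bins: one simple way to build an ARC."""
    bs_db = np.asarray(bs_db)
    bins = np.arange(0.0, 70.0 + bin_width_deg, bin_width_deg)
    idx = np.digitize(incidence_deg, bins)
    centres = bins[:-1] + bin_width_deg / 2.0
    means = np.array([bs_db[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(1, len(bins))])
    return centres, means
```

In practice the incidence angle would come from the beam geometry and the locally estimated seafloor slope, and the resulting ARCs for different survey areas could then be compared against ground-truthed sediment classes.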


2021
Author(s): Maksim Kukushkin, Stavros Ntalampiras

2021 · Vol 263 (4) · pp. 2793-2800
Author(s): Birgit Rasmussen, Teresa Carrascal Garcia, Simone Secchi

Regulatory acoustic requirements for hospitals exist in several European countries, but many countries have either insufficient regulatory limits or only recommendations. The main purpose of limit values is to ensure optimal acoustic conditions for patients under treatment and for personnel performing the various tasks that take place in many different rooms, e.g. bedrooms, examination and treatment rooms, corridors, stairwells, waiting and reception areas, canteens, and offices, all with different acoustic needs. In addition, some rooms, such as psychiatric rooms and noisy MR-scanning rooms, require special consideration. The extent of limit values varies considerably between countries: some specify only a few criteria, others several. The findings from a comparative study carried out in selected countries across different geographical parts of Europe show a diversity of acoustic descriptors and limit values. The paper includes examples of criteria for reverberation time, airborne and impact sound insulation, and noise from traffic and from service equipment. The discrepancies between countries are discussed, with the aim of supporting mutual learning and the implementation of optimized limits for more room types. In addition to regulations or guidelines, some countries include hospitals in national acoustic classification schemes with different acoustic quality levels. Indications of such classification criteria are included in the paper.


2021
Author(s): Stavros Ntalampiras, Danylo Kosmin, Javier Sanchez

Author(s): Roza G. Kamiloğlu, George Boateng, Alisa Balabanova, Chuting Cao, Disa A. Sauter

Abstract: The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining the recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and has included few categories of positive emotions. In two preregistered experiments, we compare human listeners’ (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations than from prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns in nonverbal vocalizations than in speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations than as speech prosody.
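A minimal sketch of the kind of acoustic classification experiment mentioned above is given below. It is not the authors' pipeline: the feature set (MFCC statistics, pitch, energy), the SVM classifier, and the file and label variables are assumptions used only to show how classification accuracy for the two stimulus types could be compared.

```python
# Minimal sketch (assumptions, not the study's method): compare how well emotion
# categories can be classified from acoustic features of nonverbal vocalizations
# versus speech prosody clips.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(path):
    """Summarise one audio clip as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0 = librosa.yin(y, fmin=75, fmax=500, sr=sr)      # frame-wise pitch estimate
    rms = librosa.feature.rms(y=y)                      # frame-wise energy
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0), rms.mean()]])

def emotion_classification_accuracy(paths, emotion_labels):
    """Cross-validated accuracy of an SVM over the extracted features."""
    X = np.vstack([extract_features(p) for p in paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, emotion_labels, cv=5).mean()

# Hypothetical usage (file lists and labels are placeholders): higher accuracy
# for vocalizations would mirror the reported finding that they carry more
# distinctive acoustic patterns than speech prosody.
# acc_vocal = emotion_classification_accuracy(vocalization_paths, vocalization_labels)
# acc_prosody = emotion_classification_accuracy(prosody_paths, prosody_labels)
```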


2021 · Vol 1 (7) · pp. 071201
Author(s): Jennifer L. K. McCullough, Anne E. Simonis, Taiki Sakai, Erin M. Oleson
