sound features
Recently Published Documents


TOTAL DOCUMENTS

113
(FIVE YEARS 46)

H-INDEX

16
(FIVE YEARS 2)

2021 ◽  
pp. 180-188
Author(s):  
Liubov Ostash ◽  
Roman Ostash

The purpose of the article is to suggest new approaches to the lexicographical processing of the modern spoken lexis of the residents of a particular village. The dialect of the village of Stryhantsi in Tysmenytsia district, Ivano-Frankivsk region, was chosen as the object of the research; it currently belongs to the South-Western Naddnistrianshchyna dialect group. The village is situated 30 km from the regional centre, the city of Ivano-Frankivsk (driving through Roshniv, Klubivtsi, and Tysmenytsia). The village is believed to have been founded in 1624 and is marked on the 1650 map of the French engineer and cartographer Le Vasseur de Beauplan. The source base of the research is the authors' long-term records of the dialect speech of the villagers. The article contains the first part of the material, beginning with the letter З (Z). Each glossary entry gives all the relations known to the authors that the preposition З (Z) expresses in combination with different grammatical forms of nouns, together with the maximum number of examples quoted from colloquial dialect speech, especially for meanings that differ from those of the same lexeme in the standard language. Meanings are separated by Arabic numerals. Words in the quotations carry accent marks and other indications of the sound features of the lexeme. Common phrases are presented alongside idioms, and the meaning of each in colloquial dialect speech is given. The article illustrates the lexical and phraseological richness of the speech of the inhabitants of the village of Stryhantsi, as well as interesting grammatical forms with the preposition in question. The collected authentic factual material is a valuable source for the analysis of the Ukrainian dialect language, and some of the lexemes are of considerable interest to researchers of the historical grammar of the Ukrainian language.


2021 ◽  
Vol 11 (2) ◽  
pp. 137-142
Author(s):  
Simona Stanca

Abstract One of the most significant aspects to be analysed in a building is whether the sound level perceived by listeners is appropriate (Daniela-Roxana Tămaş-Gavrea et al., 2012). Inconsistent sound distribution can create audibility problems that can be solved only by implementing a number of acoustic rehabilitation measures. Evaluating the acoustic quality of a building is a delicate issue because of the complex structure of the sound field in enclosed spaces and the sound features of the bounding surfaces. This paper presents research on improving the acoustic conditions of a building that initially served a technical-administrative purpose and was then converted into an office building (Stanca S.E., 2021). Acoustic protection measures were recommended with a view to reducing the noise level below admissible limits in the functional unit under consideration.
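The abstract does not give its evaluation formulas, but a standard first-pass metric in this kind of acoustic rehabilitation work is the Sabine reverberation time, which relates room volume to the total absorption of the bounding surfaces. A minimal sketch (function name and inputs are illustrative, not taken from the paper):

```python
def rt60_sabine(volume_m3, surface_areas_m2, absorption_coeffs):
    """Sabine reverberation time: RT60 = 0.161 * V / A,
    where A is the total absorption, i.e. the sum over all
    bounding surfaces of (surface area * absorption coefficient)."""
    total_absorption = sum(s * a for s, a in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / total_absorption
```

Adding absorptive treatment raises the coefficients in `absorption_coeffs`, which lowers RT60; comparing the value before and after treatment against the admissible limit for the room's function is the usual acceptance check.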


IRBM ◽  
2021 ◽  
Author(s):  
Junyi Fu ◽  
Wei-Nung Teng ◽  
Wenyu Li ◽  
Yu-Wei Chiou ◽  
Desheng Huang ◽  
...  

2021 ◽  
Author(s):  
Stephen Michael Town ◽  
Katherine C Wood ◽  
Katarina C Poole ◽  
Jennifer Kim Bizley

A central question in auditory neuroscience is how far brain regions are functionally specialized for processing specific sound features such as sound location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the causal contribution of auditory cortex to hearing in multiple contexts. Here we tested the role of auditory cortex in both spatial and non-spatial hearing. We reversibly inactivated the border between the middle and posterior ectosylvian gyrus using cooling (n = 2) or optogenetics (n = 1) as ferrets discriminated vowel sounds in clean and noisy conditions. Animals with cooling loops were then retrained to localize noise bursts from multiple locations and retested with cooling. In both ferrets, cooling impaired sound localization and vowel discrimination in noise, but not discrimination in clean conditions. We also tested the effects of cooling on vowel discrimination in noise when vowel and noise were colocated or spatially separated. Here, cooling exaggerated deficits in discriminating vowels with colocated noise, resulting in larger performance benefits from the spatial separation of sounds and thus stronger spatial release from masking during cortical inactivation. Together, our results show that auditory cortex contributes to both spatial and non-spatial hearing, consistent with single-unit recordings in the same brain region. The deficits we observed did not reflect general impairments in hearing but were specific to more realistic behaviors that require the use of information about both sound location and identity.


2021 ◽  
Vol 8 ◽  
Author(s):  
Rizwana Zulfiqar ◽  
Fiaz Majeed ◽  
Rizwana Irfan ◽  
Hafiz Tayyab Rauf ◽  
Elhadj Benkhelifa ◽  
...  

Respiratory sound (RS) attributes and their analysis form a fundamental part of pulmonary pathology, providing diagnostic information about a patient's lungs. For decades, doctors have relied on their hearing to identify diagnostic signs in lung sounds using the ordinary stethoscope, which is usually considered a cheap and safe method of examining patients. Lung disease is the third most common cause of death worldwide, so it is essential to classify RS abnormalities accurately in order to reduce the death rate. In this research, we applied Fourier analysis for the visual inspection of abnormal respiratory sounds. Spectrum analysis was performed through Artificial Noise Addition (ANA) in conjunction with different deep convolutional neural networks (CNNs) to classify seven abnormal respiratory sounds, both continuous (CAS) and discontinuous (DAS). The proposed framework contains an adaptive mechanism that adds a similar type of noise to unhealthy respiratory sounds. ANA makes the sound features rich enough to be identified more accurately than respiratory sounds without ANA. The results obtained with the proposed framework are superior to those of previous techniques, since we simultaneously considered the seven different abnormal respiratory sound classes.
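The abstract does not spell out how the noise is mixed in. A common way to implement this kind of controlled noise addition is to scale the noise so the mixture reaches a target signal-to-noise ratio before adding it; the sketch below illustrates that idea only (the function name and SNR parameterization are assumptions, not the paper's actual ANA mechanism):

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr_db):
    """Scale `noise` so that mixing it with `signal` yields the
    requested signal-to-noise ratio in dB, then return the mixture."""
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    # power the noise must have for the target SNR
    target_noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return signal + scaled_noise
```

Because the noise level is set relative to each recording's own power, quiet and loud recordings receive proportionally scaled noise, which keeps the augmentation consistent across a dataset.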


2021 ◽  
Vol 11 (1) ◽  
pp. 8
Author(s):  
Stelios A. Mitilineos ◽  
Nicolas-Alexander Tatlas ◽  
Georgia Korompili ◽  
Lampros Kokkalas ◽  
Stelios M. Potirakis

Obstructive sleep apnea hypopnea syndrome (OSAHS) is a widespread chronic disease that mostly remains undetected, mainly because it is diagnosed via polysomnography, a time- and resource-intensive procedure. Screening for the disease's symptoms at home could be used as an alternative approach to alert individuals who potentially suffer from OSAHS without compromising their everyday routine. Since snoring is usually linked to OSAHS, developing a snore detector is appealing as an enabling technology for screening for OSAHS at home using ubiquitous equipment like commodity microphones (included in, e.g., smartphones). In this context, we developed a snore detection tool and herein present our approach, our selection of specific sound features that discriminate snoring from environmental sounds, and the performance of the proposed tool. Furthermore, a real-time snore detector (RTSD) is built upon the snore detection tool and applied to whole-night sleep sound recordings, resulting in a large dataset of snoring sound excerpts that is made freely available to the public. The RTSD may be used either as a stand-alone tool that offers insight into an individual's sleep quality or as an independent component of OSAHS screening applications in future developments.
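The abstract does not list the specific sound features it selected. Two of the simplest frame-level features used in this kind of snore-vs-environment discrimination are short-time energy and zero-crossing rate; the sketch below is a generic illustration of such feature extraction, not the paper's actual feature set:

```python
import numpy as np

def frame_features(x, frame_len=1024, hop=512):
    """Compute per-frame short-time energy and zero-crossing rate
    for a mono audio signal `x` (frame and hop sizes are illustrative)."""
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        # sign changes between consecutive samples, as a fraction of the frame
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
        feats.append((energy, zcr))
    return np.array(feats)
```

Snoring tends to be low-frequency and quasi-periodic, so its frames typically show a lower zero-crossing rate than broadband environmental noise at comparable energy, which is what makes even these cheap features usable as a first screening stage.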


2021 ◽  
Vol 11 (19) ◽  
pp. 9226
Author(s):  
Burooj Ghani ◽  
Sarah Hallerberg

The automatic classification of bird sounds is an ongoing research topic, and several results have been reported for the classification of selected bird species. In this contribution, we use an artificial neural network fed with pre-computed sound features to study the robustness of bird sound classification. We investigate, in detail, if and how the classification results depend on the number of species and the selection of species in the subsets presented to the classifier. In more detail, a bag-of-birds approach is employed to randomly create balanced subsets of sounds from different species for repeated classification runs. The number of species present in each subset is varied between 10 and 300 by randomly drawing sounds of species from a dataset of 659 bird species taken from the Xeno-Canto database. We observed that the shallow artificial neural network trained on pre-computed sound features was able to classify the bird sounds. The quality of the classifications was at least comparable to some previously reported results where the number of species allowed for a direct comparison. The classification performance is evaluated using several common measures, such as precision, recall, accuracy, mean average precision, and the area under the receiver operating characteristic curve. All of these measures indicate a decrease in classification success as the number of species present in the subsets increases. We analyze this dependence in detail and compare the computed results to an analytic explanation assuming dependencies for an idealized perfect classifier. Moreover, we observe that the classification performance depends on the individual composition of the subset and varies across 20 randomly drawn subsets.
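The bag-of-birds protocol described above amounts to repeatedly drawing balanced random subsets of species from the full pool. A minimal sketch of that sampling step (the function name and dict-based data layout are assumptions for illustration, not the authors' code):

```python
import random

def draw_balanced_subset(sounds_by_species, n_species, n_per_species, seed=None):
    """Draw a balanced subset: `n_species` species chosen at random from the
    pool, each contributing exactly `n_per_species` randomly chosen sounds."""
    rng = random.Random(seed)
    # only species with enough recordings can appear in a balanced subset
    eligible = [s for s, clips in sounds_by_species.items()
                if len(clips) >= n_per_species]
    chosen = rng.sample(eligible, n_species)
    return {s: rng.sample(sounds_by_species[s], n_per_species) for s in chosen}
```

Repeating this draw (here, 20 times per subset size) and retraining the classifier on each subset is what lets the study separate the effect of subset size from the effect of which particular species happen to be included.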




2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Francisco J. Bravo Sanchez ◽  
Md Rahat Hossain ◽  
Nathan B. English ◽  
Steven T. Moore

Abstract The use of autonomous recordings of animal sounds to detect species is a popular conservation tool that constantly improves in fidelity as audio hardware and software evolve. Current classification algorithms utilise sound features extracted from the recording rather than the sound itself, with varying degrees of success. Neural networks that learn directly from the raw sound waveform have been implemented in human speech recognition, but their requirement for detailed labelled data has limited their use in bioacoustics. Here we test SincNet, an efficient neural network architecture that learns from the raw waveform using sinc-based filters. Results using an off-the-shelf implementation of SincNet on a publicly available bird sound dataset (NIPS4Bplus) show that the neural network converged rapidly, reaching accuracies of over 65% with limited data. Its performance is comparable with that of traditional methods after hyperparameter tuning, but it is more efficient. Learning directly from the raw waveform allows the algorithm to automatically select those elements of the sound that are best suited for the task, bypassing the onerous process of choosing feature extraction techniques and reducing possible biases. We use publicly released code and datasets to encourage others to replicate our results and to apply SincNet to their own datasets, and we review possible enhancements in the hope that algorithms that learn from the raw waveform will become useful bioacoustic tools.
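The sinc-based filters mentioned above are band-pass FIR kernels built as the difference of two windowed sinc low-pass filters, with the two cutoff frequencies acting as the learnable parameters. The NumPy sketch below shows the filter shape only, outside any training loop (the function name and default window choice are illustrative):

```python
import numpy as np

def sinc_bandpass(f_low, f_high, kernel_len, sr):
    """Band-pass FIR kernel as the difference of two windowed sinc
    low-pass filters; `f_low` and `f_high` (in Hz) play the role of
    SincNet's two trainable cutoff frequencies."""
    n = np.arange(-(kernel_len // 2), kernel_len // 2 + 1)  # symmetric taps

    def lowpass(fc):
        # ideal low-pass impulse response sampled at the tap positions
        return 2.0 * fc / sr * np.sinc(2.0 * fc * n / sr)

    # subtracting the low-cutoff response leaves only the band in between;
    # a Hamming window tames the truncation ripples
    return (lowpass(f_high) - lowpass(f_low)) * np.hamming(len(n))
```

Because each filter is fully described by just two cutoffs instead of `kernel_len` free weights, the first layer has far fewer parameters than a standard 1-D convolution, which is what makes the architecture data-efficient on small bioacoustic datasets.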

