harmonic frequencies
Recently Published Documents


TOTAL DOCUMENTS: 170 (FIVE YEARS: 25)
H-INDEX: 28 (FIVE YEARS: 1)

2021 ◽  
Vol 2015 (1) ◽  
pp. 012156
Author(s):  
O Tsilipakos ◽  
A Theodosi ◽  
C M Soukoulis ◽  
E N Economou ◽  
M Kafesaki

Abstract We theoretically study graphene-based metasurfaces for efficient third-harmonic generation in the THz regime. The graphene sheet is judiciously patterned into patch and cross geometries in order to support sharp metasurface resonances at the fundamental and third harmonic frequencies. The reported conversion efficiencies reach −19 dB (1.2%).
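
For reference, the two figures quoted for the conversion efficiency are consistent: −19 dB corresponds to 10^(−19/10) ≈ 0.0126, i.e. roughly 1.2% of the incident power converted to the third harmonic.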


2021 ◽  
pp. 546-555
Author(s):  
Carlos Puerto-Santana ◽  
Pedro Larrañaga ◽  
Javier Diaz-Rozo ◽  
Concha Bielza

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Arunit Maity ◽  
P. Prakasam ◽  
Sarthak Bhargava

Purpose: Due to the continuous and rapid evolution of telecommunication equipment, the demand for more efficient and noise-robust detection of dual-tone multi-frequency (DTMF) signals is substantial.
Design/methodology/approach: A novel machine learning-based approach to detect DTMF tones affected by noise, frequency and time variations, employing the k-nearest neighbour (KNN) algorithm, is proposed. The features required for training the proposed KNN classifier are extracted using Goertzel's algorithm, which estimates the absolute discrete Fourier transform (DFT) coefficient values for the fundamental DTMF frequencies, with or without their second harmonic frequencies. The proposed KNN classifier is configured in four different ways, which differ in whether the model is trained with or without augmented data and in whether the DFT coefficient values at the second harmonic frequencies are included as features.
Findings: The model trained on the augmented data set that additionally includes, as features, the absolute DFT values at the second harmonic frequencies of the eight fundamental DTMF frequencies achieved the best performance, with a macro classification F1 score of 0.980835, a five-fold stratified cross-validation accuracy of 98.47% and a test data set detection accuracy of 98.1053%.
Originality/value: The generated DTMF signals have been classified and detected using the proposed KNN classifier, which utilizes the DFT coefficients of the fundamental and second harmonic frequencies for better classification. Additionally, the proposed KNN classifier has been compared with existing models to ascertain its superiority and demonstrate its state-of-the-art performance.
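
As a rough illustration of the feature-extraction step described above, the Python sketch below computes Goertzel magnitudes at the eight standard DTMF frequencies (697, 770, 852, 941, 1209, 1336, 1477 and 1633 Hz) and, optionally, at their second harmonics, and hands the resulting feature vectors to a scikit-learn KNN classifier. The sampling rate, frame length, neighbour count and training/augmentation pipeline are illustrative assumptions, not the configuration reported in the article.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

DTMF_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]  # Hz

def goertzel_magnitude(frame, target_freq, fs):
    # Estimate the absolute DFT coefficient of `frame` at `target_freq` via the Goertzel recursion.
    n = len(frame)
    k = int(round(n * target_freq / fs))
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
    return np.sqrt(max(power, 0.0))

def dtmf_features(frame, fs=8000, include_second_harmonics=True):
    # Feature vector: |DFT| at the 8 DTMF fundamentals and (optionally) their second harmonics.
    freqs = list(DTMF_FREQS)
    if include_second_harmonics:
        freqs += [2 * f for f in DTMF_FREQS]
    return np.array([goertzel_magnitude(frame, f, fs) for f in freqs])

# Hypothetical usage with a labelled (and possibly noise-augmented) set of DTMF frames:
# clf = KNeighborsClassifier(n_neighbors=5)
# clf.fit([dtmf_features(x) for x in train_frames], train_labels)
# predictions = clf.predict([dtmf_features(x) for x in test_frames])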


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Ellen Jannereth ◽  
Lisa Esch

A sound’s unique timbre is based on the various harmonic frequencies present within its waveform. Through Fast Fourier Transform software, waveforms can be easily decomposed into their component frequencies, and a spectral analysis of frequency can be conducted as a method of quantitatively describing the timbral characteristics of a sound. In this investigation, the range of frequencies present in a spectrum, as well as the average intensity of the first 10 overtones in a sound, will be used to classify the timbres of various instruments relative to one another. This will be done by generating a Range-Intensity graph of the harmonic frequencies present in sound samples of each instrument. The results of this investigation reveal that it is not only possible to quantitatively analyze instrumental timbre by generating and mapping out the harmonic frequency data of a specific sound, but that such a quantitative analysis is also highly useful. Unlike the traditional, qualitative method of describing timbre, a quantitative analysis would allow timbral qualities to be transformed into information that can be understood by computers. Today, timbral classification and the decomposition of waveforms have many applications in science and sound engineering. By refining methods for quantitative timbral analysis, it becomes possible to further enhance timbre recognition software and to apply such methods to a wider range of technological developments.
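
As one possible concrete form of the measurement described above (the article's exact windowing, peak-picking and definition of "range" are not given here, so the choices below are assumptions), the Python sketch estimates the fundamental from the strongest FFT peak and reads off the magnitudes of the first 10 overtones, yielding one (range, average intensity) point per sound for a Range-Intensity graph.

import numpy as np

def overtone_profile(signal, fs, n_overtones=10):
    # Magnitude spectrum of the windowed signal.
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Assume the fundamental is the strongest non-DC component.
    f0 = freqs[np.argmax(spectrum[1:]) + 1]
    # Overtones: 2*f0 .. (n_overtones+1)*f0, read at the nearest FFT bin.
    overtone_freqs = f0 * np.arange(2, n_overtones + 2)
    intensities = [spectrum[int(np.argmin(np.abs(freqs - f)))] for f in overtone_freqs]
    freq_range = overtone_freqs[-1] - f0             # spread of the analysed harmonic frequencies
    return freq_range, float(np.mean(intensities))   # one (range, intensity) pair per sound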


2021 ◽  
pp. 095745652110004
Author(s):  
Lun Xuejian ◽  
Li Can ◽  
Zhai Zhiping ◽  
Lan Yuezheng ◽  
Gan Shiming

In order to predict the radiation noise of the vibrating shell of a straw crusher at the design phase, the computational fluid dynamics–discrete element coupling method is first used to simulate the coupled airflow–straw flow field in the straw crusher, and the pulsating pressure generated by the coupled flow field is applied to the inner surface of the crusher shell. Then, the harmonic response of the shell is analyzed, and its results are used as the acoustic boundary condition. Finally, the combined finite element and acoustic boundary element methods are used to predict the vibration noise of the straw crusher shell. The results indicate that the vibration noise produced by the straw crusher shell changes with the excitation frequency of the rotor rotation. The maximum vibration noise occurs at the excitation fundamental frequency, and the radiated noise at the harmonic frequencies decreases as the frequency increases. The simulated sound pressure levels at each measuring point at the excitation fundamental frequency and the harmonic frequencies are essentially consistent with the experimental values. Moreover, the maximum difference between the simulated and experimental values at the measuring points is 1.69 dB(A). Therefore, it is concluded that the numerical model of the vibration radiation noise is accurate. The vibration noise of the shell is largest at the inlet, and the main source of the vibration radiation noise is the dipole sound source of the rotating hammer rotor. The corresponding design method provides a reference for the low-noise design of straw crushers.
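
As a small numerical aside (the rotor speed below is hypothetical, not a value from the article), the excitation fundamental follows directly from the rotor rotation, and the radiated noise is then evaluated at its integer multiples:

# Hypothetical rotor speed, for illustration only.
rpm = 2800                                   # assumed rotor speed, rev/min
f_fundamental = rpm / 60.0                   # excitation fundamental from the rotor rotation, Hz
harmonic_frequencies = [k * f_fundamental for k in range(1, 6)]  # harmonics at which the radiated noise is compared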


2021 ◽  
pp. 1-16
Author(s):  
Sean A. Gilmore ◽  
Frank A. Russo

The ability to synchronize movements to a rhythmic stimulus, referred to as sensorimotor synchronization (SMS), is a behavioral measure of beat perception. Although SMS is generally superior when rhythms are presented in the auditory modality, recent research has demonstrated near-equivalent SMS for vibrotactile presentations of isochronous rhythms [Ammirante, P., Patel, A. D., & Russo, F. A. Synchronizing to auditory and tactile metronomes: A test of the auditory–motor enhancement hypothesis. Psychonomic Bulletin & Review, 23, 1882–1890, 2016]. The current study aimed to replicate and extend this study by incorporating a neural measure of beat perception. Nonmusicians were asked to tap to rhythms or to listen passively while EEG data were collected. Rhythmic complexity (isochronous, nonisochronous) and presentation modality (auditory, vibrotactile, bimodal) were fully crossed. Tapping data were consistent with those observed by Ammirante et al. (2016), revealing near-equivalent SMS for isochronous rhythms across modality conditions and a drop-off in SMS for nonisochronous rhythms, especially in the vibrotactile condition. EEG data revealed a greater degree of neural entrainment for isochronous compared to nonisochronous trials as well as for auditory and bimodal compared to vibrotactile trials. These findings led us to three main conclusions. First, isochronous rhythms lead to higher levels of beat perception than nonisochronous rhythms across modalities. Second, beat perception is generally enhanced for auditory presentations of rhythm but still possible under vibrotactile presentation conditions. Finally, exploratory analysis of neural entrainment at harmonic frequencies suggests that beat perception may be enhanced for bimodal presentations of rhythm.
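
The exploratory analysis of entrainment at harmonic frequencies suggests a frequency-tagging style measure. The Python sketch below is one plausible form of such a measure, not the authors' pipeline: it reads the EEG amplitude spectrum at the beat frequency and its harmonics and subtracts a local noise estimate taken from neighbouring bins; all names and parameters are hypothetical.

import numpy as np

def entrainment_amplitudes(eeg, fs, beat_hz, n_harmonics=4, noise_bins=5):
    # Amplitude spectrum of a single-channel EEG segment.
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    out = {}
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * beat_hz)))
        lo = max(idx - noise_bins, 1)
        # Noise estimate: mean of nearby bins, excluding the target bin and its immediate neighbours.
        neighbours = np.concatenate([spectrum[lo:idx - 1], spectrum[idx + 2:idx + noise_bins + 1]])
        out[k * beat_hz] = spectrum[idx] - neighbours.mean()
    return out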


Doctor Ru ◽  
2021 ◽  
Vol 20 (5) ◽  
pp. 49-54
Author(s):  
E.A. Grigorieva ◽  
A.L. Dyakonov

Study Objective: To attempt to destabilize a stable pathological condition (depression with depersonalization) with harmonic sounds in order to potentially reduce or eliminate both depression and depersonalization.
Study Design: Descriptive, clinical and physiological study.
Materials and Methods: We examined 31 patients aged 18 to 40 years (mean age: 29.3 ± 1.2 years) with a depressive episode in recurrent depressive disorder. A background electroencephalogram (EEG) (16 channels) was recorded for all patients. The EEGs were then subjected to spectral analysis using Brainlog, which identified maximum extremes (with an amplitude exceeding the adjacent harmonic frequencies) and minimum extremes (with an amplitude lower than the adjacent harmonic frequencies). The clinical condition of the patients was assessed after each piece of sound (based on subjective feelings). The Hamilton depression scale was completed before the first sound exposure and after the session (4–6 sound pieces). Each patient had 5 to 15 sound exposure sessions.
Study Results: A stable result after 15 sessions of harmonic sounds was absent in 9 (29.03%) cases. Five (16.13%) subjects had complete remission with reduction of depersonalization after harmonic sound exposure, which did not recur during the 6-month follow-up. Partial remission was recorded in 17 (54.84%) individuals. During the 6-month follow-up, only one patient, with asthenic remission, did not receive any anti-relapse treatment; the other 16 subjects received anti-relapse treatment. Complete remission with elimination of depersonalization was recorded in 4 cases; the other 10 observations demonstrated fluctuating depressive symptoms. Low mood could cause depersonalization, but it was less marked. Stable depersonalization disorders, independent of depressed mood, persisted in only 2 individuals.
Conclusion: Exposure to harmonic sounds selected in accordance with the minimum and maximum extremes (repetition factor 2n) results in reduction or complete disappearance of depression with depersonalization in 70.97% of cases. The recorded reorganization of the amplitude-frequency fluctuations and of the intensity of all EEG rhythms facilitates the disruption of stable pathological associations in the brain.
Keywords: depression, depersonalization, harmonic sound.
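
A minimal sketch of the extrema-detection step described in the methods (the Brainlog implementation is not specified, so this is only an assumed reading): a spectral component counts as a maximum extreme if its amplitude exceeds both adjacent harmonic components, and as a minimum extreme if it is lower than both.

import numpy as np

def spectral_extrema(amplitudes):
    # amplitudes: spectral amplitudes at successive harmonic frequencies (hypothetical input format).
    a = np.asarray(amplitudes, dtype=float)
    maxima = [i for i in range(1, len(a) - 1) if a[i] > a[i - 1] and a[i] > a[i + 1]]
    minima = [i for i in range(1, len(a) - 1) if a[i] < a[i - 1] and a[i] < a[i + 1]]
    return maxima, minima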

