Time Alignment
Recently Published Documents


TOTAL DOCUMENTS: 180 (last five years: 29)
H-INDEX: 22 (last five years: 2)

Author(s): Chaoqian Luo, Christopher Chung, Christopher M. Yakacki, Kevin Long, Kai Yu

2021
Author(s): Ilya Silvestrov, Emad Hemyari, Andrey Bakulin, Yi Luo, Ali Aldawood, ...

Abstract: We present processing details of seismic-while-drilling data recently acquired on an onshore well by a prototype DrillCAM system with wireless geophones, top-drive, and downhole vibration sensors. The general flow follows established practice and consists of correlation with a drill-bit pilot signal, vertical stacking, and pilot deconvolution. The novelty of this work is the use of a memory-based near-bit sensor whose clock exhibits a significant time drift, reaching 30-40 minutes by the end of each drilling run. A data-driven automatic time alignment procedure is developed that accurately eliminates the time-drift error by using the top-drive acceleration sensor as a reference. After alignment, the processing flow can use either the top-drive or the near-bit pilot in the same way. We show the effect of each processing step on the final data quality and discuss some implementation details.
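As a rough illustration of how such a data-driven alignment could work, the sketch below estimates a slowly varying clock drift by cross-correlating windows of the near-bit trace against the top-drive reference and then resamples the drifting trace onto the reference clock. The function names, window length, and maximum-lag setting are hypothetical choices for this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.signal import correlate


def estimate_drift(reference, drifting, fs, win_s=60.0, max_lag_s=5.0):
    """Estimate a slowly varying delay of `drifting` relative to `reference`
    by cross-correlating successive windows (illustrative parameters)."""
    reference = np.asarray(reference, dtype=float)
    drifting = np.asarray(drifting, dtype=float)
    win, max_lag = int(win_s * fs), int(max_lag_s * fs)
    centers, lags = [], []
    n = min(len(reference), len(drifting))
    for start in range(0, n - win + 1, win):
        r = reference[start:start + win] - reference[start:start + win].mean()
        d = drifting[start:start + win] - drifting[start:start + win].mean()
        xc = correlate(d, r, mode="full")
        zero = win - 1                               # index of zero lag
        search = xc[zero - max_lag:zero + max_lag + 1]
        lag = (np.argmax(search) - max_lag) / fs     # seconds; > 0 means late
        centers.append((start + win / 2) / fs)
        lags.append(lag)
    return np.asarray(centers), np.asarray(lags)


def remove_drift(drifting, fs, centers, lags):
    """Resample the drifting trace onto the reference clock by removing the
    interpolated drift curve."""
    drifting = np.asarray(drifting, dtype=float)
    t = np.arange(len(drifting)) / fs
    drift = np.interp(t, centers, lags)
    return np.interp(t + drift, t, drifting)
```

In practice the estimated lag curve would be smoothed and validated before resampling; the paper's actual procedure is described in the full text.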


2021, Vol 3
Author(s): Florian Henkel, Gerhard Widmer

The task of real-time alignment between a music performance and the corresponding score (sheet music), also known as score following, poses a challenging multi-modal machine learning problem. Training a system that solves this task robustly with live audio and real sheet music (i.e., scans or score images) requires precise ground-truth alignments between audio and note-coordinate positions in the score images. However, such annotations are difficult and costly to obtain, which is why research in this area has mainly relied on synthetic audio and sheet images to train and evaluate score-following systems. In this work, we propose a method that does not rely solely on note alignments but can additionally leverage data with coarser annotations, such as bar or score-system alignments. This allows us to use a large collection of real-world piano performance recordings coarsely aligned to scanned score images and, as a consequence, to improve over current state-of-the-art approaches.
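To make the idea of coarser supervision concrete, the sketch below shows one simple way bar-level annotations could be densified: approximate note-level audio-to-sheet targets are interpolated linearly inside each annotated bar. This is only an illustration under a uniform-spacing assumption; it is not the training scheme used by the authors, and all names in it are hypothetical.

```python
import numpy as np


def densify_bar_alignments(bar_times, bar_x, note_fracs_per_bar):
    """Turn bar-level alignments (audio time and sheet x-coordinate of each
    barline) into approximate note-level (time, x) target pairs by linear
    interpolation inside each bar. Purely illustrative."""
    pairs = []
    for i, fracs in enumerate(note_fracs_per_bar):
        t0, t1 = bar_times[i], bar_times[i + 1]
        x0, x1 = bar_x[i], bar_x[i + 1]
        for f in fracs:                     # note onset as a fraction of the bar
            pairs.append((t0 + f * (t1 - t0), x0 + f * (x1 - x0)))
    return np.asarray(pairs)


# Example: two annotated bars, each with notes at the start and halfway point.
bar_times = [0.0, 2.1, 4.3]      # seconds of each barline in the audio
bar_x = [120.0, 340.0, 560.0]    # horizontal pixels of each barline in the scan
print(densify_bar_alignments(bar_times, bar_x, [[0.0, 0.5], [0.0, 0.5]]))
```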


2021, Vol 11 (16), pp. 7489
Author(s): Mohammed Salah Al-Radhi, Tamás Gábor Csapó, Géza Németh

Voice conversion (VC) transforms the speaking style of a source speaker into the speaking style of a target speaker while keeping the linguistic information unchanged. Traditional VC techniques rely on parallel recordings of multiple speakers uttering the same sentences; earlier approaches mainly learn a mapping between given source–target speaker pairs from corpora containing the same utterances spoken by both speakers. However, parallel data are computationally expensive and difficult to collect, and non-parallel VC remains an interesting but challenging speech-processing task. To address this limitation, we propose a method that performs non-parallel many-to-many voice conversion using a generative adversarial network. To the best of the authors' knowledge, this is the first study to employ a sinusoidal model with continuous parameters to generate the converted speech signals. Our method requires only a few minutes of training examples, needs no parallel utterances or time-alignment procedures, and handles source–target speakers that are entirely unseen in the training data. An empirical study is carried out on the publicly available CSTR VCTK corpus. Our results indicate that the proposed method reaches state-of-the-art speaker similarity to utterances produced by the target speaker, while suggesting important structural aspects for further analysis by experts.
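A minimal sketch of the kind of adversarial objective used for many-to-many, non-parallel conversion is shown below, in the style of a StarGAN-like setup with an adversarial term plus a cycle-consistency term on speaker-conditioned feature frames. It is not the authors' architecture: the paper's sinusoidal-model parameters, network design, and loss weights are not reproduced, and all sizes and names here are illustrative.

```python
import torch
import torch.nn as nn

FEAT_DIM, N_SPEAKERS = 64, 4   # illustrative sizes


class Generator(nn.Module):
    """Maps acoustic feature frames plus a target-speaker code to converted frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_SPEAKERS, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM),
        )

    def forward(self, feats, spk):
        return self.net(torch.cat([feats, spk], dim=-1))


class Discriminator(nn.Module):
    """Judges whether frames are real for the given speaker code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_SPEAKERS, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, feats, spk):
        return self.net(torch.cat([feats, spk], dim=-1))


G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()


def one_hot(idx, batch):
    code = torch.zeros(batch, N_SPEAKERS)
    code[:, idx] = 1.0
    return code


def generator_loss(x_src, src_id, tgt_id, lambda_cyc=10.0):
    """Adversarial loss towards the target speaker plus a cycle term mapping
    the converted frames back to the source speaker (no parallel data needed)."""
    src, tgt = one_hot(src_id, x_src.size(0)), one_hot(tgt_id, x_src.size(0))
    x_fake = G(x_src, tgt)
    adv = bce(D(x_fake, tgt), torch.ones(x_src.size(0), 1))
    cyc = torch.mean(torch.abs(G(x_fake, src) - x_src))
    return adv + lambda_cyc * cyc


# One generator step on a random batch of frames, converting speaker 0 -> 2.
loss = generator_loss(torch.randn(8, FEAT_DIM), src_id=0, tgt_id=2)
loss.backward()
```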


2021, Vol 15
Author(s): Kristin K. Sellers, Ro'ee Gilron, Juan Anso, Kenneth H. Louie, Prasad R. Shirvalkar, ...

Closed-loop neurostimulation is a promising therapy being tested and clinically implemented in a growing number of neurological and psychiatric indications. This therapy is enabled by chronically implanted, bidirectional devices, including the Medtronic Summit RC+S system. To optimize therapy for patients implanted with these devices, the recorded neural data must be analyzed offline to inform optimal sensing and stimulation parameters. The file format, volume, and complexity of the raw data from these devices necessitate conversion, parsing, and time reconstruction before the time-frequency analyses and modeling common to standard neuroscientific workflows. Here, we provide an open-source toolbox, written in Matlab, that takes raw files from the Summit RC+S and transforms these data into a standardized format amenable to conventional analyses. Furthermore, we provide a plotting tool that aids in the visualization of multiple data streams alongside sense, stimulation, and therapy settings. Finally, we describe an analysis module that replicates the RC+S on-board power computations, a functionality that can accelerate biomarker discovery. This toolbox aims to accelerate the research and clinical advances made possible by longitudinal neural recordings and adaptive neurostimulation in people with neurological and psychiatric illnesses.
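As a conceptual illustration of an FFT-based power channel, the sketch below computes band power from short windowed FFTs of a time-domain recording. The exact RC+S on-board computation (window handling, scaling, and firmware-specific details) is what the toolbox replicates and is not reproduced here; all parameters below are illustrative.

```python
import numpy as np


def fft_band_power(x, fs, band, n_fft=256, hop=128):
    """Band power from short windowed FFTs: for each window, sum the squared
    FFT magnitudes of the bins inside `band` (Hz). Conceptually similar to a
    device power channel; RC+S scaling and interval handling are not reproduced."""
    x = np.asarray(x, dtype=float)
    window = np.hanning(n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    times, powers = [], []
    for start in range(0, len(x) - n_fft + 1, hop):
        seg = x[start:start + n_fft] * window
        spec = np.abs(np.fft.rfft(seg)) ** 2
        times.append((start + n_fft / 2) / fs)
        powers.append(spec[in_band].sum())
    return np.asarray(times), np.asarray(powers)


# Example: beta-band (13-30 Hz) power of a synthetic 500 Hz recording.
fs = 500
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)
times, beta = fft_band_power(sig, fs, band=(13, 30))
```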


2021, Vol 196, pp. 107258
Author(s): Felipe V. Lopes, Tiago R. Honorato, Gustavo A. Cunha, Nilo S.S. Ribeiro, Paulo Lima, ...


2021
Author(s): Peter Horvatovich, Alejandro Sánchez Brotons, Jonatan Eriksson, Marcel Kwiatkowski, Justina Wolters, ...

Abstract: The accurate processing of complex LC-MS/MS data from biological samples is a major challenge for metabolomics, proteomics, and related approaches. Here we present the Pipelines and Systems for Threshold Avoiding Quantification (PASTAQ) LC-MS/MS pre-processing toolset, which allows highly accurate quantification of data-dependent acquisition (DDA) LC-MS/MS datasets. PASTAQ performs compound quantification using single-stage (MS1) data and implements novel algorithms for high-performance, accurate quantification, retention-time alignment, feature detection, and the linking of annotations from multiple identification engines. PASTAQ offers straightforward parametrization and automatically generates quality-control plots for assessing the data and the pre-processing. This design results in smaller variance when analyzing replicates of proteomes mixed in known ratios, and allows the detection of peptides over a larger dynamic concentration range than widely used proteomics pre-processing tools. The performance of the pipeline is also demonstrated on a human serum dataset for the identification of gender-related proteins.
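To illustrate what retention-time alignment does, the sketch below warps the retention times of one run onto a reference run using a piecewise-linear interpolation through matched anchor features. This is a generic stand-in, not PASTAQ's own alignment algorithm, and the anchor values are made up for the example.

```python
import numpy as np


def align_retention_times(rt_query, anchors_query, anchors_ref):
    """Warp retention times of a query run onto the reference run's time scale
    with a piecewise-linear map through matched anchor features. A generic
    stand-in, not PASTAQ's algorithm."""
    order = np.argsort(anchors_query)
    aq = np.asarray(anchors_query, dtype=float)[order]
    ar = np.asarray(anchors_ref, dtype=float)[order]
    return np.interp(rt_query, aq, ar)


# Example: three anchors drift by 0.2-0.4 min between runs (made-up values).
anchors_query = [5.2, 12.9, 20.4]   # minutes in the query run
anchors_ref = [5.0, 12.6, 20.0]     # minutes in the reference run
print(align_retention_times([6.0, 15.0], anchors_query, anchors_ref))
```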


Metabolites, 2021, Vol 11 (4), pp. 214
Author(s): Aneta Sawikowska, Anna Piasecka, Piotr Kachlicki, Paweł Krajewski

Peak overlapping is a common problem in chromatography, especially for complex biological mixtures such as mixtures of metabolites. Because different compounds with similar chromatographic properties co-elute, peak separation becomes challenging. In this paper, two computational methods for separating peaks, applied for the first time to large chromatographic datasets, are described, compared, and experimentally validated. The methods lead from raw observations to data that can serve as inputs for statistical analysis. In both methods, the data are first normalized by sample mass, the baseline is removed, retention times are aligned, and peaks are detected. Then, in the first method, clustering is used to separate overlapping peaks, whereas in the second method, functional principal component analysis (FPCA) is applied for the same purpose. Simulated data and experimental results are used to present and compare both methods. The real data were obtained in a study of metabolomic changes in barley (Hordeum vulgare) leaves under drought stress. The results suggest that both methods are suitable for separating overlapping peaks, with the additional advantage of FPCA being the possibility to assess the variability of individual compounds present within the same peaks across different chromatograms.
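As a discretised stand-in for the FPCA step, the sketch below applies ordinary PCA to aligned intensity curves sampled on a common retention-time grid within one peak region; on such a grid this approximates functional PCA, although the paper's functional treatment is richer. The synthetic example and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA


def peak_region_fpca(intensity_matrix, n_components=2):
    """Rows are chromatograms (samples), columns are points of the common,
    aligned retention-time grid inside one peak region. On such a grid, PCA of
    the centred intensity curves is a discretised stand-in for FPCA."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(intensity_matrix)   # per-sample component scores
    return scores, pca.components_                 # rows ~ discretised eigenfunctions


# Example: 20 chromatograms made of two overlapping Gaussian peaks whose
# relative contributions vary between samples.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 100)
peak_a = np.exp(-((grid - 0.45) / 0.08) ** 2)
peak_b = np.exp(-((grid - 0.60) / 0.08) ** 2)
weights = rng.uniform(0.2, 1.0, size=(20, 2))
curves = weights @ np.vstack([peak_a, peak_b]) + 0.01 * rng.standard_normal((20, 100))
scores, components = peak_region_fpca(curves)
print(scores.shape, components.shape)   # (20, 2) (2, 100)
```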

