From beat tracking to beat expectation: Cognitive-based beat tracking for capturing pulse clarity through time

PLoS ONE
2020
Vol 15 (11)
pp. e0242207
Author(s):
Martin Alejandro Miguel
Mariano Sigman
Diego Fernandez Slezak

Pulse is the base timing to which Western music is commonly notated, typically expressed by a listener as periodic taps with the hand or foot. This cognitive construction helps organize the perception of timed events in music and is the most basic expectation in rhythms. The analysis of expectations, and more specifically of the strength with which the beat is felt (the pulse clarity), has been used to analyze affect in music. Most computational models of pulse clarity, and of rhythmic expectation in general, analyze the input as a whole, without exhibiting changes through a rhythmic passage. We present the Tactus Hypothesis Tracker (THT), a model of pulse clarity over time intended for symbolic rhythmic stimuli. The model was developed from ideas in beat tracking models that extract beat times from musical stimuli. Our model also produces possible beat interpretations for the rhythm, a fitness score for each interpretation, and how these evolve in time. We evaluated the model’s pulse clarity by contrasting it with the tapping variability of human annotators, achieving results comparable to a state-of-the-art pulse clarity model. We also analyzed the dynamics of the clarity metric on synthetic data that introduced changes in the beat, showing that our model expressed doubt in the pulse estimation process and adapted accordingly to beat changes. Finally, we assessed whether the beat tracking generated by the model was correct with respect to listeners’ tapping data, comparing our beat tracking results with previous beat tracking models. The THT model’s beat tracking output showed generally correct estimations in phase but exhibited a bias towards a musically correct subdivision of the beat.
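The hypothesis-tracking idea in this abstract can be illustrated with a toy sketch (not the actual THT implementation; the hypothesis grids, tolerance, and scoring rule are hypothetical simplifications): each (period, phase) beat hypothesis is scored against the onsets seen so far, and the running best score gives a crude clarity curve over time.

```python
import numpy as np

def hypothesis_fitness(onsets, period, phase, tol=0.07):
    """Fraction of predicted beat times that land within `tol` seconds
    of some onset, for a beat grid defined by (period, phase)."""
    onsets = np.asarray(onsets, dtype=float)
    beats = np.arange(phase, onsets.max() + period, period)
    dists = np.abs(beats[:, None] - onsets[None, :]).min(axis=1)
    return float(np.mean(dists < tol))

def track_pulse_clarity(onsets, periods, phases):
    """Re-score every (period, phase) hypothesis after each new onset;
    the running best fitness is a crude pulse-clarity curve."""
    clarity = []
    for i in range(2, len(onsets) + 1):
        seen = onsets[:i]
        best = max(hypothesis_fitness(seen, T, p)
                   for T in periods for p in phases)
        clarity.append(best)
    return clarity

# Isochronous rhythm at 0.5 s: the correct hypothesis explains every onset,
# so the clarity curve stays at 1.0 throughout.
onsets = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
curve = track_pulse_clarity(onsets, periods=[0.4, 0.5, 0.6], phases=[0.0, 0.1])
```

A beat change (e.g. switching to a 0.6 s period mid-sequence) would pull the best fitness down until a competing hypothesis accumulates enough support, which is the kind of doubt-and-adaptation dynamic the abstract describes.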

2021
Vol 22 (1)
Author(s):
João Lobo
Rui Henriques
Sara C. Madeira

Abstract Background Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions over time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art algorithms is paramount. These comparisons are usually performed on real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers’ needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric’s potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
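As a minimal illustration of planting a tricluster with a known ground truth (not G-Tric's actual API; the sizes, indices, and constant pattern below are hypothetical), a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Background: 50 observations x 30 features x 10 contexts, N(0, 1) noise.
data = rng.normal(0.0, 1.0, size=(50, 30, 10))

# Plant one constant-pattern tricluster (indices chosen for illustration).
obs = np.arange(5, 15)    # 10 observations
feat = np.arange(3, 9)    # 6 features
ctx = np.arange(2, 6)     # 4 contexts
data[np.ix_(obs, feat, ctx)] = 5.0

# The generator's key advantage: the ground-truth solution is known,
# so extrinsic evaluation of a triclustering algorithm becomes possible.
solution = {"observations": obs, "features": feat, "contexts": ctx}
```

A real generator would additionally support symbolic data, other pattern types (additive, multiplicative), overlapping triclusters, and controlled amounts of missing values and noise, as the abstract describes.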


2021
Vol 22 (1)
Author(s):
Ermanno Cordelli
Paolo Soda
Giulio Iannello

Abstract Background Biological phenomena usually evolve over time, and recent advances in high-throughput microscopy have made it possible to collect multiple 3D images over time, generating 3D+t (or 4D) datasets. Extracting useful information requires spatial and temporal data on the particles in the images, but particle tracking and feature extraction need some kind of assistance. Results This manuscript introduces our new, freely downloadable toolbox, Visual4DTracker. It is a MATLAB package implementing several useful functionalities to navigate, analyse and proof-read the track of each particle detected in any 3D+t stack. Furthermore, it allows users to proof-read and evaluate the traces with respect to a given gold standard. The Visual4DTracker toolbox permits users to visualize and save all the generated results through a user-friendly graphical user interface. The tool has been successfully used in three applicative examples. The first processes synthetic data to show all the software functionalities. The second shows how to process a 4D image stack capturing the time-lapse growth of Drosophila cells in an embryo. The third presents the quantitative analysis of insulin granules in living beta-cells, showing that such particles have two main dynamics that coexist inside the cells. Conclusions Visual4DTracker is a software package for MATLAB to visualize, handle and manually track 3D+t stacks of microscopy images containing objects such as cells and granules. With its unique set of functions, it permits the user to analyze and proof-read 4D data in a friendly 3D fashion. The tool is freely available at https://drive.google.com/drive/folders/19AEn0TqP-2B8Z10kOavEAopTUxsKUV73?usp=sharing
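Visual4DTracker itself is a MATLAB GUI; as a language-agnostic illustration of the frame-to-frame particle linking that such tools assist with, here is a hedged Python sketch of greedy nearest-neighbour linking between two time points (the displacement threshold and the greedy strategy are simplifying assumptions, not the toolbox's method):

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_disp=5.0):
    """Greedy nearest-neighbour linking of 3D particle centroids between
    two consecutive time points; returns matched (i, j) index pairs."""
    prev_pts, next_pts = np.asarray(prev_pts, float), np.asarray(next_pts, float)
    # Pairwise Euclidean distances: d[i, j] = |prev_i - next_j|.
    d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    links, used = [], set()
    for i in np.argsort(d.min(axis=1)):        # most confident particle first
        for j in np.argsort(d[i]):             # its closest free successor
            if j not in used and d[i, j] <= max_disp:
                links.append((int(i), int(j)))
                used.add(j)
                break
    return links

t0 = [(0, 0, 0), (10, 10, 10)]
t1 = [(1, 0, 0), (10, 11, 10)]
links = link_frames(t0, t1)
```

Interactive proof-reading, as the abstract emphasizes, exists precisely because such automatic linking fails on crossing or disappearing particles and must be corrected by a human.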


2021
Vol 11 (13)
pp. 6078
Author(s):
Tiffany T. Ly
Jie Wang
Kanchan Bisht
Ukpong Eyo
Scott T. Acton

Automatic glia reconstruction is essential for the dynamic analysis of microglia motility and morphology, notably so in research on neurodegenerative diseases. In this paper, we propose an automatic 3D tracing algorithm called C3VFC that uses vector field convolution to find the critical points along the centerline of an object and trace paths that traverse back to the soma of every cell in an image. The solution provides detection and labeling of multiple cells in an image over time, leading to multi-object reconstruction. The reconstruction results can be used to extract bioinformatics from temporal data in different settings. The C3VFC reconstruction results showed up to a 53% improvement over the next best-performing state-of-the-art tracing method. C3VFC achieved the highest accuracy scores, relative to the baseline results, in four of the five measures: the entire-structure average, the average bi-directional entire-structure average, the different-structure average, and the percentage of different structures.


Sensors
2021
Vol 21 (9)
pp. 3099
Author(s):
V. Javier Traver
Judith Zorío
Luis A. Leiva

Temporal salience considers how visual attention varies over time. Although visual salience has been widely studied from a spatial perspective, its temporal dimension has been mostly ignored, despite arguably being of utmost importance for understanding the temporal evolution of attention on dynamic content. To address this gap, we proposed Glimpse, a novel measure to compute temporal salience based on the observer-spatio-temporal consistency of raw gaze data. The measure is conceptually simple, training-free, and provides a semantically meaningful quantification of visual attention over time. As an extension, we explored scoring algorithms to estimate temporal salience from spatial salience maps predicted with existing computational models. However, these approaches generally fall short when compared with our proposed gaze-based measure. Glimpse could serve as the basis for several downstream tasks such as segmentation or summarization of videos. Glimpse’s software and data are publicly available.
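A minimal sketch of the gaze-consistency idea (not the published Glimpse measure; the inverse-dispersion scoring below is a hypothetical simplification): frames in which observers' gaze points cluster tightly receive high temporal salience, and frames where gaze scatters receive low salience.

```python
import numpy as np

def temporal_salience(gaze, eps=1e-6):
    """gaze: array of shape (observers, frames, 2) with (x, y) positions.
    Per-frame salience = inverse of the mean distance of gaze points to
    their centroid: tightly clustered gaze -> high temporal salience."""
    centroid = gaze.mean(axis=0)                          # (frames, 2)
    spread = np.linalg.norm(gaze - centroid, axis=2).mean(axis=0)
    return 1.0 / (spread + eps)

rng = np.random.default_rng(1)
# One frame where 8 observers all look near (0.5, 0.5)...
consistent = rng.normal([0.5, 0.5], 0.01, size=(8, 1, 2))
# ...versus one frame where gaze is spread uniformly over the screen.
scattered = rng.uniform(0.0, 1.0, size=(8, 1, 2))
```

The actual measure also accounts for temporal consistency of gaze across neighbouring frames, which this per-frame toy ignores.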


Entropy
2021
Vol 23 (6)
pp. 674
Author(s):
Kushani De Silva
Carlo Cafaro
Adom Giffin

Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system, often by fitting or smoothing the data, and then compute the gradient profile by taking the analytic derivative of the estimate. Taking the analytic derivative of an estimated function can, however, be ill-posed, and this worsens as the noise in the system increases, inflating the uncertainty of the gradient estimate. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise, and the accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods.
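The practice the abstract describes, estimating the system first and then differentiating the estimate, can be contrasted with naive finite differences on a toy example (a polynomial fit stands in for the paper's Bayesian method; the degree and noise level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.05, x.size)   # noisy samples of sin(x)
true_grad = np.cos(x)                          # the gradient we want

# Naive finite differences amplify the noise by ~1/dx.
naive = np.gradient(y, x)

# Estimate the underlying system first (here: a degree-7 polynomial fit),
# then take the analytic derivative of the estimate.
coeffs = np.polyfit(x, y, 7)
smooth = np.polyval(np.polyder(coeffs), x)

err_naive = np.mean((naive - true_grad) ** 2)
err_smooth = np.mean((smooth - true_grad) ** 2)
```

The smoothed estimate is far more accurate here, but as the abstract notes, the differentiate-the-fit approach is itself ill-posed when noise grows, which motivates treating the gradient within a Bayesian framework instead.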


2021
Vol 376 (1821)
pp. 20190765
Author(s):
Giovanni Pezzulo
Joshua LaPalme
Fallon Durant
Michael Levin

Nervous systems’ computational abilities are an evolutionary innovation, specializing and speed-optimizing ancient biophysical dynamics. Bioelectric signalling originated in cells’ communication with the outside world and with each other, enabling cooperation towards adaptive construction and repair of multicellular bodies. Here, we review the emerging field of developmental bioelectricity, which links the field of basal cognition to state-of-the-art questions in regenerative medicine, synthetic bioengineering and even artificial intelligence. One of the predictions of this view is that regeneration and regulative development can restore correct large-scale anatomies from diverse starting states because, like the brain, they exploit bioelectric encoding of distributed goal states—in this case, pattern memories. We propose a new interpretation of recent stochastic regenerative phenotypes in planaria, by appealing to computational models of memory representation and processing in the brain. Moreover, we discuss novel findings showing that bioelectric changes induced in planaria can be stored in tissue for over a week, thus revealing that somatic bioelectric circuits in vivo can implement a long-term, re-writable memory medium. A consideration of the mechanisms, evolution and functionality of basal cognition makes novel predictions and provides an integrative perspective on the evolution, physiology and biomedicine of information processing in vivo. This article is part of the theme issue ‘Basal cognition: multicellularity, neurons and the cognitive lens’.


Symmetry
2019
Vol 11 (2)
pp. 227
Author(s):
Eckart Michaelsen
Stéphane Vujasinovic

Representative input data are a necessary requirement for the assessment of machine-vision systems. For symmetry-seeing machines in particular, such imagery should provide symmetries as well as asymmetric clutter. Moreover, there must be reliable ground truth with the data. It should be possible to estimate the recognition performance and the computational efforts by providing different grades of difficulty and complexity. Recent competitions used real imagery labeled by human subjects with appropriate ground truth. The paper at hand proposes to use synthetic data instead. Such data contain symmetry, clutter, and nothing else. This is preferable because interference with other perceptive capabilities, such as object recognition, or prior knowledge, can be avoided. The data are given sparsely, i.e., as sets of primitive objects. However, images can be generated from them, so that the same data can also be fed into machines requiring dense input, such as multilayered perceptrons. Sparse representations are preferred, because the author’s own system requires such data, and in this way, any influence of the primitive extraction method is excluded. The presented format allows hierarchies of symmetries. This is important because hierarchy constitutes a natural and dominant part of symmetry-seeing. The paper reports some experiments using the author’s Gestalt algebra system as symmetry-seeing machine. Additionally included is a comparative test run with the state-of-the-art symmetry-seeing deep learning convolutional perceptron of the PSU. The computational efforts and recognition performance are assessed.
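A sparse synthetic scene of the kind proposed can be sketched as mirror-symmetric point pairs plus uniform clutter, with ground-truth labels emitted alongside (a minimal illustration, not the paper's actual data format; all sizes and the vertical-axis choice are hypothetical):

```python
import numpy as np

def make_symmetry_scene(n_pairs=10, n_clutter=20, axis_x=0.5, seed=0):
    """Sparse synthetic scene: primitive points mirror-symmetric about the
    vertical line x = axis_x, plus asymmetric clutter. Returns the point
    set and ground-truth labels (True = belongs to the symmetry)."""
    rng = np.random.default_rng(seed)
    # One half of each symmetric pair, drawn left of the axis...
    half = rng.uniform([0.0, 0.0], [axis_x, 1.0], size=(n_pairs, 2))
    # ...and its mirror image across x = axis_x.
    mirrored = np.column_stack([2 * axis_x - half[:, 0], half[:, 1]])
    clutter = rng.uniform(0.0, 1.0, size=(n_clutter, 2))
    points = np.vstack([half, mirrored, clutter])
    labels = np.array([True] * (2 * n_pairs) + [False] * n_clutter)
    return points, labels

pts, gt = make_symmetry_scene()
```

Because the data are generated, the ground truth is exact by construction, and difficulty can be graded by varying the clutter ratio or adding jitter to the mirrored points.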


2021
Vol 2021
pp. 1-15
Author(s):
Ranjan Kumar Mishra
G. Y. Sandesh Reddy
Himanshu Pathak

Deep learning is a computer-based modeling approach made up of many processing layers that learn representations of data at several levels of abstraction. This review paper presents the state of the art in deep learning to highlight the major challenges and contributions in computer vision. The work mainly gives an overview of the current understanding of deep learning and its approaches to solving traditional artificial intelligence problems. These computational models have advanced applications in object detection, visual object recognition, speech recognition, face recognition, vision for driverless cars, virtual assistants, and many other fields such as genomics and drug discovery. Finally, the paper also showcases the current developments and challenges in training deep neural networks.


2020
Author(s):
Alceu Bissoto
Sandra Avila

Melanoma is the most lethal type of skin cancer. Early diagnosis is crucial to increase the survival rate of patients, given the possibility of metastasis. Automated skin lesion analysis can play an essential role by reaching people who do not have access to a specialist. However, since deep learning became the state of the art for skin lesion analysis, data became a decisive factor in pushing solutions further. The core objective of this M.Sc. dissertation is to tackle the problems that arise from having limited datasets. In the first part, we use generative adversarial networks to generate synthetic data to augment our classification model’s training datasets and boost performance. Our method generates high-resolution, clinically meaningful skin lesion images that, when added to our classification model’s training dataset, consistently improved performance in different scenarios and for distinct datasets. We also investigate how our classification models perceived the synthetic samples and how these samples can aid the model’s generalization. Finally, we investigate a problem that usually arises from having few, relatively small datasets that are thoroughly re-used in the literature: bias. For this, we designed experiments to study how our models use data, verifying how they exploit correct (based on medical algorithms) and spurious (based on artifacts introduced during image acquisition) correlations. Disturbingly, even in the absence of any clinical information regarding the lesion being diagnosed, our classification models performed much better than chance (even competing with specialist benchmarks), strongly suggesting inflated performances.


Data mining is the process of extracting useful information from various repositories such as relational databases, transaction databases, spatial databases, temporal and time-series databases, data warehouses, and the World Wide Web. Its functionalities include characterization and discrimination, classification and prediction, association rule mining, cluster analysis, and evolutionary analysis. Association rule mining is one of the most important data mining techniques, aiming at extracting interesting relationships within the data. In this paper we study various association rule mining algorithms, compare them using synthetic data sets, and provide the results obtained from the experimental analysis.
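As a concrete illustration of association rule mining (a brute-force toy, not one of the optimized algorithms such papers compare), the sketch below mines frequent itemsets by exhaustive support counting and derives one rule's confidence; the transactions are invented for the example.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent-itemset mining: count the support of every
    candidate itemset, stopping at the first size with no frequent set
    (by the Apriori property, no larger itemset can be frequent)."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    freq = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, k):
            support = sum(set(cand) <= t for t in transactions) / n
            if support >= min_support:
                freq[cand] = support
                found = True
        if not found:
            break
    return freq

txns = [{"bread", "milk"}, {"bread", "butter"},
        {"bread", "milk", "butter"}, {"milk"}]
freq = frequent_itemsets(txns, min_support=0.5)

# Rule {bread} -> {milk}: confidence = support(bread, milk) / support(bread).
conf = freq[("bread", "milk")] / freq[("bread",)]
```

Real algorithms (Apriori, FP-Growth, Eclat) avoid the exhaustive candidate enumeration above by pruning with the same downward-closure property level by level.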

