Statistically significant contrasts between EMG waveforms revealed using wavelet-based functional ANOVA

2013 ◽  
Vol 109 (2) ◽  
pp. 591-602 ◽  
Author(s):  
J. Lucas McKay ◽  
Torrence D. J. Welch ◽  
Brani Vidakovic ◽  
Lena H. Ting

We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (∼¼×; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r² = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P < 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power.
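As a rough illustration of the core idea, the sketch below F-tests each wavelet coefficient across condition levels, keeps only the significant coefficients, and inverts the transform to visualize the contrast in the time domain. This is a minimal Python sketch using PyWavelets and SciPy, not the authors' implementation: the wavelet family, the one-way f_oneway test, the uncorrected threshold, and the highest-minus-lowest contrast are all simplifying assumptions (the paper uses multi-factor ANOVA with corrected post hoc tests).

```python
# Minimal sketch of the wfANOVA idea: F-test each wavelet coefficient across
# conditions, keep only significant coefficients, and invert the transform.
import numpy as np
import pywt
from scipy import stats

def wfanova_contrast(signals, groups, wavelet="db4", alpha=0.05):
    """signals: (n_trials, n_samples) array; groups: (n_trials,) condition labels."""
    groups = np.asarray(groups)
    # Decompose each trial and flatten the coefficient list into one vector.
    decomps = [pywt.wavedec(s, wavelet) for s in signals]
    arrays, slices = zip(*[pywt.coeffs_to_array(c) for c in decomps])
    coeffs = np.vstack(arrays)                      # (n_trials, n_coeffs)
    levels = np.unique(groups)
    contrast = np.zeros(coeffs.shape[1])
    for j in range(coeffs.shape[1]):                # one F test per coefficient
        samples = [coeffs[groups == g, j] for g in levels]
        _, p = stats.f_oneway(*samples)
        if p < alpha:                               # keep significant coefficients
            contrast[j] = samples[-1].mean() - samples[0].mean()
    # Transform the significant contrast back to the time domain for plotting.
    rec = pywt.array_to_coeffs(contrast, slices[0], output_format="wavedec")
    return pywt.waverec(rec, wavelet)
```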

2020 ◽  
Vol 106 (9-10) ◽  
pp. 3849-3857
Author(s):  
S. Saliba ◽  
J. C. Kirkman-Brown ◽  
L. E. J. Thomas-Seale

Additive manufacturing (AM) is expected to generate substantial economic revenue by 2025; however, this will only be realised by overcoming the barriers that are preventing its increased adoption for end-use parts. Design for AM (DfAM) is recognised as a multi-faceted problem, exacerbated by constraints on creativity, poor knowledge propagation, insufficiencies in education, and a fragmented software pipeline. This study proposes a novel approach to increasing creativity in DfAM. Through a comparison between DfAM and in utero human development, the unutilised potential of design through the time domain was identified. The aim of the research is therefore to develop a computer-aided manufacturing (CAM) programme to demonstrate design through the time domain, known as Temporal DfAM (TDfAM). This was achieved through bespoke MATLAB code which applies a linear function to a process parameter, discretised across the additive build. TDfAM was demonstrated through the variation of extrusion speed combined with the infill angle, through the axial and in-plane directions. It is widely accepted in the literature that AM processing parameters change the properties of AM materials. Thus, the TDfAM approach offers the engineer increased creative scope and control, whilst inherently upskilling knowledge, in the design of AM materials.
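The paper's MATLAB code is not reproduced here, but the core TDfAM idea, a linear function applied to a process parameter and discretised across the additive build, is simple to sketch in Python. The layer count and speed range below are hypothetical:

```python
# Illustrative TDfAM sketch (assumed values, not the authors' MATLAB code):
# ramp a process parameter linearly across the build, layer by layer.
def tdfam_parameter(layer, n_layers, start=20.0, end=60.0):
    """Linear ramp of a process parameter (e.g. extrusion speed, mm/s),
    discretised across the additive build."""
    t = layer / max(n_layers - 1, 1)          # normalised build "time" in [0, 1]
    return start + t * (end - start)

# Example: the speed assigned to each of 5 layers.
for layer in range(5):
    print(layer, tdfam_parameter(layer, 5))
```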


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jianqin Hang ◽  
Xu Zhang

This study proposes a novel approach that incorporates rolling-window estimation and a quantile causality test. Using this approach, Google Trends and Bitcoin price data are used to empirically investigate the time-varying quantile causality between investor attention and Bitcoin returns. The results show that the parameters of the causality tests are unstable during the sample period, and they provide strong evidence of quantile- and time-varying causality between investor attention and Bitcoin returns. Specifically, in the time domain, causality appears only in high-volatility periods, while in the quantile domain it exhibits different patterns across quantiles.
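A quantile causality test is not available off the shelf in common Python libraries, so the sketch below substitutes a standard mean-based Granger test from statsmodels inside a rolling window to convey only the time-varying part of the approach; the window length and lag order are assumptions:

```python
# Rolling-window causality sketch. A mean-based Granger test stands in for
# the paper's quantile causality test (window and lag order are assumed).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def rolling_granger(x, y, window=250, maxlag=4):
    """p-value of 'x Granger-causes y' in each rolling window."""
    data = pd.DataFrame({"y": y, "x": x})
    pvals = []
    for start in range(len(data) - window + 1):
        chunk = data.iloc[start:start + window]
        res = grangercausalitytests(chunk[["y", "x"]], maxlag=maxlag, verbose=False)
        pvals.append(res[maxlag][0]["ssr_ftest"][1])   # F-test p-value at maxlag
    return np.array(pvals)
```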


2016 ◽  
Author(s):  
Katie E. Lotterhos ◽  
Olivier François ◽  
Michael G.B. Blum

Genome scan approaches promise to map genomic regions involved in the adaptation of individuals to their environment. Outcomes of genome scans have been shown to depend on several factors, including the underlying demography, the adaptive scenario, and the software or method used. We took advantage of a pedagogical experiment carried out during a summer school to explore a previously unexplored source of variability: the degree of user expertise. Participants were asked to analyze three simulated data challenges with methods presented during the summer school. In addition to submitting lists of candidate loci, participants evaluated their level of expertise a priori. We measured the quality of each genome scan analysis by computing a score that depends on the false discovery rate and statistical power. In an easy and a difficult challenge, less advanced participants obtained scores similar to those of advanced ones, demonstrating that participants with little background in genome scan methods were able to learn how to use complex software after short introductory tutorials. However, in a challenge of intermediate difficulty, advanced participants obtained better scores. To explain the difference, we introduce a probabilistic model which shows that a larger variation in scores is expected for SNPs of intermediate difficulty of detection. We conclude that practitioners should develop their statistical and computational expertise to follow the development of complex methods. To encourage training, we release the website of the summer school, where users can submit lists of candidate loci that will be scored and compared to the scores obtained by previous users.
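The paper's exact scoring formula is not reproduced here; the sketch below shows one hypothetical way to combine statistical power and false discovery rate into a single score for a submitted list of candidate loci:

```python
# Hypothetical scoring rule in the spirit of the challenge: reward power,
# penalise false discoveries. The combination used below is an assumption.
def score(candidates, true_loci):
    candidates, true_loci = set(candidates), set(true_loci)
    tp = len(candidates & true_loci)
    power = tp / len(true_loci)                    # fraction of true loci found
    fdr = (len(candidates) - tp) / max(len(candidates), 1)
    return power * (1 - fdr)

# Example: 3 of 4 true loci found, 1 false positive in a list of 4.
print(score([1, 2, 3, 9], [1, 2, 3, 4]))           # 0.75 * 0.75 = 0.5625
```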


2018 ◽  
Author(s):  
Raffaella Franciotti ◽  
Nicola Walter Falasca

Background. Brain function requires a coordinated flow of information among functionally specialized areas. Quantitative methods provide a multitude of metrics to quantify the oscillatory interactions measured by invasive or non-invasive recording techniques. Granger causality (G-causality) has emerged as a useful tool for investigating the directions of information flow, but challenges remain regarding its reliability when applied to biological data. In addition, it is not clear whether G-causality can distinguish between direct and indirect influences, or whether its reliability depends on the strength of the neural interactions.

Methods. In this study, time-domain G-causality connectivity analysis was performed on simulated electrophysiological signals. A network of 19 nodes was constructed with a designed structure of direct and indirect information flows among nodes, which we refer to as the ground-truth structure. G-causality reliability was evaluated on two sets of simulated data while varying one of the following variables: the number of time points in the temporal window, the lags between causally interacting nodes, the connection strength between the links, and the noise.

Results. The number of time points in the temporal window affects G-causality reliability substantially: a large number of time points can decrease the reliability of the results, increasing the number of false positives (type I errors). For stationary signals, G-causality results are reliable, recovering all true positive links (no type II errors), when the underlying structure has delays between interacting nodes below 100 ms, connection strengths above 0.1 times the amplitude of the driver signal, and a good signal-to-noise ratio. Finally, indirect links were revealed by G-causality analysis only when their connection strength was higher, and their lag lower, than those of the direct link.

Discussion. A conditional multivariate vector autoregressive model was applied to 19 virtual time series to estimate the reliability of G-causality analysis in identifying true positive links, detecting spurious links, and handling the effects of indirect links. The simulations revealed that weak direct, but not weak indirect, causal effects could be identified by G-causality analysis. These results demonstrate good sensitivity and specificity of conditional G-causality analysis in the time domain when applied to covariance-stationary, non-correlated electrophysiological signals.
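To reproduce the flavor of this analysis, the sketch below fits a multivariate VAR to simulated signals and runs a Wald test for G-causality between two nodes, conditioning on the others (statsmodels). The network size, lag, and coupling strength are illustrative, not the paper's 19-node ground-truth structure:

```python
# Conditional G-causality sketch on simulated signals via a VAR model.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n_nodes, n_obs = 5, 1000                       # the paper uses 19 nodes
data = rng.standard_normal((n_obs, n_nodes))
data[2:, 1] += 0.5 * data[:-2, 0]              # node 0 drives node 1 at lag 2

results = VAR(data).fit(maxlags=5, ic="aic")
# Wald test: do lags of node 0 help predict node 1, given all other nodes?
test = results.test_causality(caused=1, causing=0, kind="wald")
print(test.summary())
```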


2021 ◽  
Vol 17 (9) ◽  
pp. e1009456
Author(s):  
Bruce C. Hansen ◽  
Michelle R. Greene ◽  
David J. Field

A number of neuroimaging techniques have been employed to understand how visual information is transformed along the visual pathway. Although each technique has spatial and temporal limitations, they can each provide important insights into the visual code. While the BOLD signal of fMRI can be quite informative, the visual code is not static and this can be obscured by fMRI’s poor temporal resolution. In this study, we leveraged the high temporal resolution of EEG to develop an encoding technique based on the distribution of responses generated by a population of real-world scenes. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. Our analyses of the mapping results revealed that scenes undergo a series of nonuniform transformations that prioritize different spatial frequencies at different regions of scenes over time. This mapping technique offers a potential avenue for future studies to explore how dynamic feedforward and recurrent processes inform and refine high-level representations of our visual world.
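As a toy illustration of pixel-wise encoding (not the authors' pipeline), the sketch below regresses a simulated single-electrode, single-time-point EEG response onto flattened image pixels with ridge regression, yielding a per-pixel weight map; the image size, regularization strength, and simulated response are assumptions:

```python
# Toy pixel-wise encoding sketch: map a neural response back onto image
# pixels with ridge regression (simulated data, assumed parameters).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_scenes, h, w = 200, 16, 16
images = rng.random((n_scenes, h * w))         # flattened pixel intensities
eeg = images @ rng.standard_normal(h * w)      # simulated single-time response

model = Ridge(alpha=10.0).fit(images, eeg)
pixel_map = model.coef_.reshape(h, w)          # per-pixel weight "signature"
```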


Nanomaterials ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 1023 ◽  
Author(s):  
Dandan Ju ◽  
Feng Song ◽  
Adnan Khan ◽  
Feifei Song ◽  
Aihua Zhou ◽  
...  

The dual-mode emission and multicolor outputs in the time domain from core-shell microcrystals are presented. The core-shell microcrystals, with NaYF4:Yb/Er as the core and NaYF4:Ce/Tb/Eu as the shell, were successfully fabricated using the hydrothermal method, which confines the activator ions to separate regions and minimizes the effect of surface quenching. The material is capable of both upconversion and downshifting emission, and its multicolor outputs in response to a 980 nm near-infrared (NIR) excitation laser and to 252 nm and 395 nm ultraviolet (UV) excitation light have been investigated. Furthermore, the tunable color emissions obtained by controlling the Tb3+/Eu3+ ratio in the shells, and the Ce3+→Tb3+→Eu3+ energy transfer, are discussed in detail. In addition, color tuning of the core-shell-structured microrods from the green to the red region in the time domain could be obtained by setting a suitable delay time. Owing to the downshifting multicolor outputs (time-resolved and pump-wavelength-induced downshifting) coupled with the upconversion mode, the core-shell microrods can potentially be applied to displays and high-level security.
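The time-domain color tuning can be understood with a simple gated-decay model: if the green and red emitters decay with different lifetimes, the detected green-to-red intensity ratio shifts with the gate delay. The lifetimes below are hypothetical placeholders, not measured values from the paper:

```python
# Illustrative gated-decay model (hypothetical lifetimes): the green:red
# ratio falls with delay time, giving time-domain color tuning.
import numpy as np

def gated_intensity(tau_ms, delay_ms):
    return np.exp(-delay_ms / tau_ms)          # relative intensity after gate

tau_green, tau_red = 1.0, 3.0                  # assumed Tb3+/Eu3+ lifetimes (ms)
for delay in (0.0, 1.0, 3.0, 6.0):
    g, r = gated_intensity(tau_green, delay), gated_intensity(tau_red, delay)
    print(f"delay {delay} ms -> green:red = {g / r:.2f}")
```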


2012 ◽  
Vol 60 (6) ◽  
pp. 381 ◽  
Author(s):  
Evan Watkins ◽  
Julian Di Stefano

Hypotheses relating to the annual frequency distribution of mammalian births are commonly tested using a goodness-of-fit procedure. Several interacting factors influence the statistical power of these tests, but no power studies have been conducted using scenarios derived from biological hypotheses. Corresponding to theories relating reproductive output to seasonal resource fluctuation, we simulated data reflecting a winter reduction in birth frequency to test the effect of four factors (sample size, maximum effect size, the temporal pattern of response, and the number of categories used for analysis) on the power of three goodness-of-fit procedures: the G-test, the chi-square test, and Watson's U² test. Analyses resulting in high power all had a large maximum effect size (60%) and were associated with a sample size of 200 on most occasions. The G-test was the most powerful when data were analysed using two temporal categories (winter and other), while Watson's U² test achieved the highest power when 12 monthly categories were used. Overall, the power of most modelled scenarios was low. Consequently, we recommend using power analysis as a research planning tool, and have provided a spreadsheet enabling a priori power calculations for the three tests considered.
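A power analysis of this kind is straightforward to run by Monte Carlo simulation. The sketch below estimates the power of the chi-square test for a winter reduction in birth frequency using the paper's largest effect size (60%) and a sample size of 200; the choice of winter months and the number of simulations are assumptions:

```python
# Monte Carlo power sketch for a chi-square goodness-of-fit test, mirroring
# the paper's winter-reduction scenario (winter months assumed below).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power(n=200, effect=0.6, n_sims=2000, alpha=0.05):
    # 12 monthly categories; three winter months (June-August here) reduced.
    p = np.ones(12)
    p[[5, 6, 7]] *= 1 - effect
    p /= p.sum()
    hits = 0
    for _ in range(n_sims):
        counts = rng.multinomial(n, p)
        _, pval = stats.chisquare(counts)      # H0: uniform across months
        hits += pval < alpha
    return hits / n_sims

print(power())
```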


Author(s):  
GARY WILLIAM GREWAL ◽  
THOMAS CHARLES WILSON

This paper presents a novel approach to the concurrent solution of three High-Level Synthesis (HLS) problems that are modeled as a Constraint-Satisfaction Problem (CSP) and solved using an Enhanced Genetic Algorithm (EGA). We focus on the core problems of high-level synthesis: Scheduling, Allocation, and Binding. Scheduling consists of assigning operations in a Data-Flow Graph (DFG) to control steps or clock cycles. Allocation selects specific numbers and types of functional units from a hardware library to perform the operations specified in the DFG. Binding assigns constituent operations of the DFG to specific unit instances. A very general version of this problem is considered, in which functional units may perform different operations in different numbers of control steps. The EGA is designed to solve CSPs quickly and does not require the user to specify appropriate mutation and crossover rates a priori; these are determined automatically during the genetic search. The enhancements include a directed mutation operator and a new type of elitism that avoids premature convergence. The HLS problems are solved by applying two EGAs in a hierarchical manner: the first performs allocation, while the second performs scheduling and binding and serves as the fitness function for the first. Compared with other well-known techniques, our results show a reduction in the time needed to obtain optimal solutions for standard benchmarks.
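As a sketch of the general approach (not the paper's EGA, which adds directed mutation and self-tuned crossover and mutation rates), the toy genetic algorithm below searches for a control-step assignment of DFG operations that respects precedence constraints; the DFG, penalty weights, and GA settings are all illustrative:

```python
# Toy GA for DFG scheduling: penalise precedence violations, prefer short
# schedules, keep the best individuals via elitism (all settings assumed).
import random

random.seed(0)
N_OPS, N_STEPS, POP, GENS = 8, 6, 30, 50
precedence = [(0, 2), (1, 2), (2, 5), (3, 4), (4, 5)]   # op a before op b

def fitness(sched):
    violations = sum(sched[a] >= sched[b] for a, b in precedence)
    return -(10 * violations + max(sched))               # fewer steps is better

pop = [[random.randrange(N_STEPS) for _ in range(N_OPS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                                      # elitism keeps the best
    children = []
    while len(children) < POP - len(elite):
        a, b = random.sample(pop[:15], 2)                # parents from top half
        cut = random.randrange(1, N_OPS)
        child = a[:cut] + b[cut:]                        # one-point crossover
        if random.random() < 0.2:                        # fixed-rate mutation
            child[random.randrange(N_OPS)] = random.randrange(N_STEPS)
        children.append(child)
    pop = elite + children
pop.sort(key=fitness, reverse=True)
print("best schedule:", pop[0], "fitness:", fitness(pop[0]))
```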

