millisecond accuracy
Recently Published Documents

TOTAL DOCUMENTS: 18 (five years: 3)
H-INDEX: 7 (five years: 0)

2022 ◽ Vol 12
Author(s): Antenor Rodrigues, Luc Janssens, Daniel Langer, Umi Matsumura, Dmitry Rozenberg, ...

Background: Respiratory muscle electromyography (EMG) can identify whether a muscle is activated, its activation amplitude, and its timing. Most studies have focused on activation amplitude, while differences in the timing and duration of activity have been less investigated. Detection of the timing of respiratory muscle activity is typically based on visual inspection of the EMG signal, a method that is time-consuming and prone to subjective interpretation.
Aims: Our main objective was to develop and validate a method to assess the timing of activity of different respiratory muscles in an objective and semi-automated manner.
Method: Seven healthy adults performed an inspiratory threshold loading (ITL) test at 50% of their maximum inspiratory pressure until task failure. Surface EMG recordings of the costal diaphragm/intercostals, scalene, parasternal intercostals, and sternocleidomastoid were obtained during ITL. We developed a semi-automated algorithm to detect the onset (EMG onset) and offset (EMG offset) of each muscle's EMG activity breath by breath with millisecond accuracy and compared its performance with manual evaluations from two independent assessors. For each muscle, the intraclass correlation coefficient (ICC) of EMG onset detection was determined between the two assessors and between the algorithm and each assessor. Additionally, we explored between-muscle differences in EMG onset and EMG offset timing and in the duration of activity throughout the ITL.
Results: More than 2,000 EMG onsets were analyzed for algorithm validation. ICCs ranged from 0.75–0.90 between assessors 1 and 2, 0.68–0.96 between assessor 1 and the algorithm, and 0.75–0.91 between assessor 2 and the algorithm (p < 0.01 for all). The lowest ICC was observed for the diaphragm/intercostals and the highest for the parasternal intercostals (0.68 and 0.96, respectively). During ITL, diaphragm/intercostal EMG onset occurred later in the inspiratory cycle, and its activity duration was shorter, than those of the scalene, parasternal intercostals, and sternocleidomastoid (p < 0.01). EMG offset occurred synchronously across all muscles (p ≥ 0.98). EMG onset and offset timing and activity duration were consistent throughout the ITL for all muscles (p > 0.63).
Conclusion: We developed a time-efficient algorithm, validated against manual measures, that detects the EMG onset of several respiratory muscles with millisecond accuracy. Compared with the inherent bias of manual measures, the algorithm enhances objectivity and provides a strong standard for determining respiratory muscle EMG onset.
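The abstract does not publish the algorithm itself, but semi-automated onset/offset detection is commonly done by thresholding a smoothed, rectified EMG envelope against baseline statistics. The sketch below illustrates that generic envelope-threshold approach on synthetic data; it is not the authors' validated method, and every parameter (window length, threshold multiplier, baseline duration) is a hypothetical choice.

```python
import numpy as np

def detect_onset_offset(emg, fs, baseline_s=0.5, k=5, smooth_ms=50):
    """Return (onset_s, offset_s): first/last time the smoothed, rectified
    signal exceeds baseline mean + k*SD. A generic envelope-threshold
    sketch, not the published algorithm."""
    rect = np.abs(emg - np.mean(emg))                 # full-wave rectify
    win = max(1, int(fs * smooth_ms / 1000))
    env = np.convolve(rect, np.ones(win) / win, mode="same")  # moving average
    n_base = int(fs * baseline_s)                     # quiet-baseline segment
    thr = env[:n_base].mean() + k * env[:n_base].std()
    above = np.flatnonzero(env > thr)
    if above.size == 0:
        return None, None
    return above[0] / fs, above[-1] / fs              # sample-accurate times

# synthetic test signal: quiet baseline, then a burst from 1.0 s to 2.0 s
fs = 2000
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(0)
emg = 0.01 * rng.standard_normal(t.size)
burst = (t >= 1.0) & (t < 2.0)
emg[burst] += 0.5 * rng.standard_normal(int(burst.sum()))
onset, offset = detect_onset_offset(emg, fs)
```

At 2 kHz sampling, each sample is 0.5 ms, so a per-sample threshold crossing is consistent with the millisecond-level timing resolution the abstract describes.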


Author(s): Ryo Tachibana, Kazumichi Matsumiya

Abstract: Virtual reality (VR) is a new methodology for behavioral studies. In such studies, millisecond accuracy and precision of stimulus presentation are critical for data replicability. Recently, Python, a widely used programming language for scientific research, has contributed to reliable accuracy and precision in experimental control. However, little is known about whether modern VR environments achieve millisecond accuracy and precision for stimulus presentation, since most standard methods used in laboratory studies are not optimized for VR environments. The purpose of this study was to systematically evaluate the accuracy and precision of visual and auditory stimuli generated in modern VR head-mounted displays (HMDs) from HTC and Oculus using Python 2 and 3. We used the newest Python tools for VR and the Black Box Toolkit to measure the actual time lag and jitter. The results showed an 18-ms time lag for the visual stimulus in both HMDs. For the auditory stimulus, the time lag varied between 40 and 60 ms depending on the HMD. The jitter of these time lags was 1 ms for the visual stimulus and 4 ms for the auditory stimulus, which is sufficiently low for general experiments. These time lags remained robustly equal even when auditory and visual stimuli were presented simultaneously. Interestingly, all results were perfectly consistent in both Python 2 and 3 environments. Thus, the present study will help establish more reliable stimulus control for psychological and neuroscientific research run in Python environments.
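The lag and jitter figures in the abstract reduce to simple statistics over paired timestamps: the mean of (measured onset minus commanded onset) is the constant pipeline delay, and its standard deviation is the jitter. A minimal sketch, with entirely hypothetical measurement values standing in for photodiode/microphone readings:

```python
import statistics

# hypothetical data: commanded vs. externally measured onset times, in ms
commanded = [0, 100, 200, 300, 400]
measured  = [18.2, 118.9, 217.5, 318.4, 417.9]

lags = [m - c for c, m in zip(commanded, measured)]
mean_lag = statistics.mean(lags)   # constant delay of the display/audio pipeline
jitter = statistics.stdev(lags)    # trial-to-trial variability of that delay
```

A constant mean lag can be compensated for in analysis (or by pre-triggering), whereas jitter cannot, which is why the abstract reports the two separately.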


2020
Author(s): Thomas Hartmann, Nathan Weisz

The Psychophysics Toolbox (PTB) is one of the most popular toolboxes for developing experimental paradigms. It is a very powerful library, providing low-level, platform-independent access to the devices used in an experiment, such as the graphics and sound cards. While this low-level design yields a high degree of flexibility and power, writing paradigms that interface with the PTB directly can lead to code that is hard to read, maintain, reuse, and debug. Running an experiment in different facilities or organizations further requires it to work with various setups that differ in the availability of specialized hardware for response collection, triggering, and auditory stimulus presentation. The Objective Psychophysics Toolbox (o_ptb) provides an intuitive, unified, and clear interface, built on top of the PTB, that enables researchers to write readable, clean, and concise code. In addition to presenting the architecture of the o_ptb, we report the results of a timing accuracy test. Exactly the same Matlab code was run on two different systems, one of them using the VPixx system. Both systems showed sub-millisecond accuracy.
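o_ptb itself is a Matlab library, so the following is only a language-neutral illustration of the design idea the abstract describes: paradigm code talks to one unified facade, while interchangeable backends absorb per-lab hardware differences. All class and method names in this Python sketch are hypothetical, not o_ptb's API.

```python
class TriggerBackend:
    """Abstract trigger device; concrete backends differ between labs."""
    def send(self, code):
        raise NotImplementedError

class DummyTrigger(TriggerBackend):
    """No-op backend for development machines without trigger hardware;
    it records what would have been sent."""
    def __init__(self):
        self.log = []
    def send(self, code):
        self.log.append(code)

class Experiment:
    """Paradigm code talks only to this facade, never to the hardware,
    so the same script runs on every setup."""
    def __init__(self, trigger):
        self.trigger = trigger
    def present_stimulus(self, stim_id):
        # ...drawing and screen flip would happen here...
        self.trigger.send(stim_id)   # identical call on every setup

exp = Experiment(DummyTrigger())     # swap in a real backend in the lab
exp.present_stimulus(7)
```

Because only the backend object changes between setups, the paradigm script itself stays identical, which is how the same code could be run unmodified on a plain system and on a VPixx-equipped one.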


2017
Author(s): Hadrien Caron, Pierre Pouget

Abstract: Eliciting complex and rich graphical displays, recording neuronal phenomena of interest, and simultaneously interacting in a closed loop with external devices is a challenging task for any neurophysiologist. To facilitate this process, we have developed an open-source software system using a single computer running a well-established Linux distribution (Ubuntu) paired with a co-kernel providing hard real-time support (Xenomai). We show that a single computer using our API can achieve millisecond-accurate programmed events for any task that requires OpenGL rendering. In this report, we describe the design of our system, benchmark its performance in a real-world setting, and describe some key features.
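The case for a hard real-time co-kernel can be motivated by measuring how late an ordinary, non-real-time scheduler wakes a periodic loop. The sketch below (a hypothetical helper; Python plays no role in the authors' C/Linux system) records the wake-up error of a sleep-based 10-ms loop, which is exactly the kind of jitter Xenomai is designed to bound:

```python
import time

def measure_wakeup_errors(period_ms=10, n=100):
    """Request a fixed period with time.sleep and record how late each
    wake-up is (ms). On a stock kernel the positive errors illustrate why
    hard real-time support is needed for guaranteed millisecond accuracy."""
    period = period_ms / 1000
    deadline = time.perf_counter()
    errors = []
    for _ in range(n):
        deadline += period
        time.sleep(max(0.0, deadline - time.perf_counter()))
        errors.append((time.perf_counter() - deadline) * 1000)  # ms late
    return errors

errs = measure_wakeup_errors()
```

On a general-purpose OS these errors vary with system load; a hard real-time co-kernel bounds them by scheduling the timing-critical task ahead of the regular kernel.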

