The timing mega-study: comparing a range of experiment generators, both lab-based and online


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9414 ◽  
Author(s):  
David Bridges ◽  
Alain Pitiot ◽  
Michael R. MacAskill ◽  
Jonathan W. Peirce

Many researchers in the behavioral sciences depend on research software that presents stimuli, and records response times, with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based experiments, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, but most notably in audio stimuli and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online studies did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, were achieving very close to millisecond precision on several browser/operating system combinations. For response times (measured using a high-performance button box), most of the packages achieved precision at least under 10 ms in all browsers, with PsychoPy achieving a precision under 3.5 ms in all. There was considerable variability between OS/browser combinations, especially in audio-visual synchrony which is the least precise aspect of the browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, with due thought about the sources of variability that result. The results, from over 110,000 trials, highlight the wide range of timing qualities that can occur even in these dedicated software packages for the task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.
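
The closing recommendation can be sketched in plain Python: collect many repeated interval measurements on your own machine, then report accuracy as the mean deviation from the intended duration and precision as its standard deviation. This is only a software stand-in for the hardware loop the authors ran with a Black Box Toolkit, and the 60 Hz frame duration used as the target below is an assumption, not a value from the study.

    import statistics
    import time

    def measure_intervals(target_s=1 / 60, n=300):
        """Busy-wait for a nominal frame duration n times and record the overshoot."""
        lags_ms = []
        for _ in range(n):
            t0 = time.perf_counter()
            while time.perf_counter() - t0 < target_s:  # stand-in for waiting on a screen flip
                pass
            lags_ms.append((time.perf_counter() - t0 - target_s) * 1000)  # ms over target
        return statistics.mean(lags_ms), statistics.stdev(lags_ms)

    accuracy_ms, precision_ms = measure_intervals()
    print(f"accuracy (mean lag): {accuracy_ms:.3f} ms, precision (SD): {precision_ms:.3f} ms")

A real validation would drive an actual stimulus and measure it with external hardware, exactly as the study stresses; this sketch only illustrates how the two summary statistics are formed.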


Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2270
Author(s):  
Sina Zangbari Koohi ◽  
Nor Asilah Wati Abdul Hamid ◽  
Mohamed Othman ◽  
Gafurjan Ibragimov

High-performance computing (HPC) systems combine thousands of processing units to deliver far more computational power than a typical desktop computer or workstation, in order to solve large problems in science, engineering, or business. The scheduling of jobs on these machines has an important impact on their performance: HPC job scheduling aims to develop an operational strategy that utilises resources efficiently and avoids delays, and an optimised schedule results in greater efficiency of the parallel machine. Processor and network heterogeneity are a further difficulty for the scheduling algorithm, as is user fairness. One of the open issues in this field of study is therefore providing a balanced schedule that enhances both efficiency and user fairness. This paper proposes ROA-CONS, a new job scheduling method that combines an updated conservative backfilling approach with further optimisation by the raccoon optimisation algorithm. It also proposes a selection technique that combines the optimisation of job waiting and response times with user fairness, contributing to a symmetrical schedule that increases user satisfaction and performance. A simulation assesses the effectiveness of the proposed method in comparison with other well-known job scheduling algorithms. The results demonstrate that the proposed strategy offers improved schedules that reduce the overall system's job waiting and response times.
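
The conservative backfilling baseline that ROA-CONS extends can be stated compactly: every queued job receives a reservation, in queue order, at the earliest time that does not delay any reservation made before it. The minimal Python sketch below illustrates only that rule; the job data and node counts are invented for illustration, and the raccoon-optimisation and fairness layers of ROA-CONS are deliberately omitted.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        nodes: int      # nodes requested
        walltime: int   # requested runtime in minutes

    def conservative_backfill(jobs, total_nodes):
        """Reserve each job, in queue order, at the earliest start that
        leaves every earlier reservation untouched."""
        reservations = []   # (start, end, nodes) already booked
        events = {0}        # candidate start times: 0 and reservation end times
        schedule = {}

        def free_at(t):
            used = sum(n for s, e, n in reservations if s <= t < e)
            return total_nodes - used

        for job in jobs:
            assert job.nodes <= total_nodes
            for start in sorted(events):
                end = start + job.walltime
                # the job must fit for its whole requested duration
                if all(free_at(t) >= job.nodes for t in range(start, end)):
                    reservations.append((start, end, job.nodes))
                    events.add(end)
                    schedule[job.name] = start
                    break
        return schedule

    # Example: job "c" is backfilled ahead of the larger job "b" because it
    # finishes before b's reservation begins, so b is not delayed.
    print(conservative_backfill(
        [Job("a", 64, 120), Job("b", 96, 60), Job("c", 16, 100)], total_nodes=96))
    # -> {'a': 0, 'b': 120, 'c': 0}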


Author(s):  
Chad L. Jacoby ◽  
Young Suk Jo ◽  
Jake Jurewicz ◽  
Guillermo Pamanes ◽  
Joshua E. Siegel ◽  
...  

There exists the potential for major simplifications to current hybrid transmission architectures, which can lead to advances in powertrain performance. This paper assesses the technical merits of various hybrid powertrains in the context of high-performance vehicles and introduces a new transmission concept targeted at high-performance hybrid applications. While many hybrid transmission configurations have been developed and implemented in mainstream and even luxury vehicles, ultra-high-performance sports cars have only recently begun to hybridize. The unique performance requirements of such vehicles place novel constraints on their transmission designs. The goals become less about improved efficiency and smoothness and more centered on weight reduction, complexity reduction, and performance improvement. To identify the most critical aspects of a high-performance transmission, a wide range of existing technologies is studied in concert with basic physical performance analysis of electric motors and an internal combustion engine. The new transmission concepts presented here emphasize a reduction in inertial, frictional, and mechanical losses. A series of conceptual powertrain designs are evaluated against the goals of reducing mechanical complexity and maintaining functionality. The major innovation in these concepts is the elimination of a friction clutch to engage and disengage gears. Instead, the design proposes that the inclusion of a large electric motor enables the gears to be speed-matched and torque-zeroed without the inherent losses associated with a friction clutch. Additionally, these transmission concepts explore the merits of multiple electric motors and their placement, as well as the reduction in synchronization interfaces. Ultimately, two strategies for speed-matched gear sets are considered, and a speed-matching prototype of the chosen methodology is presented to validate the feasibility of the proposed concept. The power flow and operational modes of both transmission architectures are studied to ensure required functionality and identify further areas of optimization. While there are still many unanswered questions about this concept, this paper introduces the base analysis and proof of concept for a technology that has great potential to advance hybrid vehicles at all levels.
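
To make the clutchless speed-matching idea concrete, the back-of-the-envelope sketch below computes the input-shaft speed an electric motor would have to hold before a new gear could be engaged without a friction clutch. All numbers (road speed, wheel radius, ratios) are invented for illustration and are not taken from the paper.

    import math

    def input_shaft_rpm(speed_kmh, wheel_radius_m, gear_ratio, final_drive):
        """Input-shaft speed implied by road speed for a given target ratio."""
        wheel_rpm = (speed_kmh / 3.6) / (2 * math.pi * wheel_radius_m) * 60
        return wheel_rpm * final_drive * gear_ratio

    # e.g. 120 km/h on 0.33 m wheels, shifting into a 1.3:1 gear behind a 3.7:1 final drive
    print(f"motor speed-match target: {input_shaft_rpm(120, 0.33, 1.3, 3.7):.0f} rpm")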


2019 ◽  
Vol 18 (4) ◽  
pp. 31-42 ◽  
Author(s):  
Carlos Arango ◽  
Rémy Dernat ◽  
John Sanabria

Virtualization technologies have evolved along with the development of computational environments. Virtualization offered features needed at the time, such as isolation, accountability, resource allocation and fair resource sharing. Modern processor technologies give commodity computers the ability to emulate diverse environments in which a wide range of computational scenarios can be run. Along with this processor evolution, developers have implemented different virtualization mechanisms that exhibit enhanced performance over previous virtualized environments. Recently, operating-system-level virtualization technologies have captured widespread attention because of their significant performance improvements. In this paper, the features of three container-based operating-system virtualization tools (LXC, Docker and Singularity) are presented. LXC, Docker, Singularity and bare metal are put under test through a customized single-node HPL benchmark and an MPI-based application on a multi-node testbed. Disk I/O performance, memory (RAM) performance, network bandwidth and GPU performance are also tested for the container-based technologies versus bare metal. Preliminary results and the conclusions drawn from them are presented and discussed.
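
A minimal sketch of the kind of wall-clock comparison described here: run the same benchmark command on bare metal and inside each container runtime, then compare elapsed times. The image names and the ./xhpl benchmark path are placeholders rather than the authors' actual configuration, and the real study measured disk, memory, network and GPU performance far more carefully than this.

    import subprocess
    import time

    def timed(cmd):
        """Return the wall-clock time of a command, in seconds."""
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - t0

    benchmark = ["./xhpl"]   # placeholder HPL benchmark binary
    runs = {
        "bare metal":  benchmark,
        "docker":      ["docker", "run", "--rm", "hpl-image"] + benchmark,
        "singularity": ["singularity", "exec", "hpl.sif"] + benchmark,
    }

    for name, cmd in runs.items():
        print(f"{name}: {timed(cmd):.1f} s")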


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 1192
Author(s):  
Caroline Jay ◽  
Robert Haines ◽  
Daniel S. Katz ◽  
Jeffrey C. Carver ◽  
Sandra Gesing ◽  
...  

Background: Software is now ubiquitous within research. In addition to the general challenges common to all software development projects, research software must also represent, manipulate, and provide data for complex theoretical constructs. Ensuring this process of theory-software translation is robust is essential to maintaining the integrity of the science resulting from it, and yet there has been little formal recognition or exploration of the challenges associated with it. Methods: We thematically analyse the outputs of the discussion sessions at the Theory-Software Translation Workshop 2019, where academic researchers and research software engineers from a variety of domains, and with particular expertise in high performance computing, explored the process of translating between scientific theory and software. Results: We identify a wide range of challenges to implementing scientific theory in research software and using the resulting data and models for the advancement of knowledge. We categorise these within the emergent themes of design, infrastructure, and culture, and map them to associated research questions. Conclusions: Systematically investigating how software is constructed and its outputs used within science has the potential to improve the robustness of research software and accelerate progress in its development. We propose that this issue be examined within a new research area of theory-software translation, which would aim to significantly advance both knowledge and scientific practice.



Algorithms ◽  
2019 ◽  
Vol 12 (3) ◽  
pp. 60 ◽  
Author(s):  
David Pfander ◽  
Gregor Daiß ◽  
Dirk Pflüger

Clustering is an important task in data mining that has become more challenging due to the ever-increasing size of available datasets. To cope with these big data scenarios, a high-performance clustering approach is required. Sparse grid clustering is a density-based clustering method that uses a sparse grid density estimation as its central building block. The underlying density estimation approach enables the detection of clusters with non-convex shapes and without a predetermined number of clusters. In this work, we introduce a new distributed and performance-portable variant of the sparse grid clustering algorithm that is suited for big data settings. Our compute kernels were implemented in OpenCL to enable portability across a wide range of architectures. For distributed environments, we added a manager-worker scheme that was implemented using MPI. In experiments on two supercomputers, Piz Daint and Hazel Hen, with up to 100 million data points in a ten-dimensional dataset, we show the performance and scalability of our approach. The dataset with 100 million data points was clustered in 1198 s using 128 nodes of Piz Daint. This translates to an overall performance of 352 TFLOPS. At the node level, we provide results for two GPUs, Nvidia's Tesla P100 and the AMD FirePro W8100, and one processor-based platform that uses Intel Xeon E5-2680v3 processors. In these experiments, we achieved between 43% and 66% of the peak performance across all compute kernels and devices, demonstrating the performance portability of our approach.
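
The manager-worker layer mentioned above is a standard MPI pattern; a minimal mpi4py sketch of it is given below. The chunk contents and process_chunk function stand in for the paper's OpenCL density-estimation kernels, so this illustrates only the distribution scheme, not the clustering itself.

    from mpi4py import MPI

    def process_chunk(chunk):
        """Stand-in for the per-chunk OpenCL kernel work."""
        return sum(chunk)

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:                                   # manager
        chunks = [list(range(i, i + 10)) for i in range(0, 100, 10)]
        results, active = [], 0
        for w in range(1, size):                    # seed every worker once
            if chunks:
                comm.send(chunks.pop(), dest=w)
                active += 1
            else:
                comm.send(None, dest=w)             # nothing to do: shut it down
        while active:                               # refill workers as they finish
            status = MPI.Status()
            results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
            if chunks:
                comm.send(chunks.pop(), dest=status.Get_source())
            else:
                comm.send(None, dest=status.Get_source())
                active -= 1
        print("partial sums:", results)
    else:                                           # worker
        while True:
            chunk = comm.recv(source=0)
            if chunk is None:
                break
            comm.send(process_chunk(chunk), dest=0)

Run with, for example, mpiexec -n 4 python manager_worker.py (the script name is arbitrary); the manager keeps every worker busy until the chunk queue is empty.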


Electronics ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 528 ◽  
Author(s):  
Julian Viejo ◽  
Jorge Juan-Chico ◽  
Manuel J. Bellido ◽  
Paulino Ruiz-de-Clavijo ◽  
David Guerrero ◽  
...  

This paper presents the complete design and implementation of a low-cost, low-footprint network time protocol server core for field-programmable gate arrays. The core uses a carefully designed modular architecture, which is fully implemented in hardware using digital circuits and systems. The most remarkable novelties introduced are a hardware-optimized implementation of the timekeeping algorithm, a full-hardware protocol stack and automatic network configuration. As a result, the core is able to achieve accuracy and performance similar to typical high-performance network time protocol server equipment. The core uses a standard global positioning system receiver as its time reference, has a small footprint and can easily fit in a low-end field-programmable chip, greatly scaling down from previous system-on-chip time synchronization systems. Accuracy and performance results show that the core can serve hundreds of thousands of network time clients with negligible accuracy degradation, in contrast to state-of-the-art high-performance time server equipment. Therefore, this core provides a valuable time server solution for a wide range of emerging embedded and distributed network applications, such as the Internet of Things and the smart grid, at a fraction of the cost and footprint of current discrete and embedded solutions.
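
For readers unfamiliar with what such a server must emit, an NTP timestamp is a 64-bit value: 32 bits of seconds since 1900-01-01 plus a 32-bit binary fraction. The Python sketch below shows that conversion from a host clock; the FPGA core described in the paper performs the equivalent in hardware, disciplined by its GPS reference rather than an operating-system clock.

    import time

    NTP_UNIX_OFFSET = 2_208_988_800   # seconds between the 1900 (NTP) and 1970 (Unix) epochs

    def ntp_timestamp(unix_time):
        """Convert a Unix time (float seconds) to an NTP (seconds, fraction) pair."""
        ntp = unix_time + NTP_UNIX_OFFSET
        seconds = int(ntp)
        fraction = int((ntp - seconds) * (1 << 32))   # resolution of 2**-32 s (~233 ps)
        return seconds & 0xFFFFFFFF, fraction & 0xFFFFFFFF

    print(ntp_timestamp(time.time()))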


Sports ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 128
Author(s):  
Pedro E. Alcaraz ◽  
Robert Csapo ◽  
Tomás T. Freitas ◽  
Elena Marín-Cascales ◽  
Anthony J. Blazevich ◽  
...  

On behalf of the Strength & Conditioning Society (SCS) and the European Sport Nutrition Society (ESNS), we are pleased to present the abstracts of the 2019 International Sport Forum on Strength & Conditioning & Nutrition, which took place in Madrid, Spain, from November 15th to 16th, 2019. The meeting provided evidence-based education to advance the science and practice in the fields of sport nutrition, training, rehabilitation and performance. It also disseminated cutting-edge sport nutrition and strength and conditioning research, promoted the translation of basic science into the field, and fostered the future of the field by providing young practitioners and researchers with the opportunity to present their findings through oral and poster communications, the abstracts of which can be found in this Special Issue of Sports. Renowned international and national speakers provided comprehensive updates, workshops and insights into novel scientific topics covering various areas of sport nutrition and strength and conditioning science. We were fortunate to have a wide range of speakers and presenters from all areas: strength training, conditioning to prevent injuries and improve performance, and nutrition and supplementation for fitness and high-performance sports. A data-flash and poster session allowed for the presentation of the latest results of current research. Most importantly, the meeting provided ample opportunities to bring people together to discuss practical questions related to training and nutrition and to plan scientific projects. With cutting-edge research and best practice in mind, this joint conference was an important means to pursue the missions of the SCS and ESNS. Rather than being a single event, the forum in Madrid was the starting point for a series of regular meetings on Strength & Conditioning & Nutrition to be held worldwide, so make sure to visit the websites of the SCS and ESNS and follow us on social media to receive updates and connect with our members. We proudly look back on an exciting, inspiring and informative meeting in Madrid!


2021 ◽  
Vol 1 (192) ◽  
pp. 129-132
Author(s):  
Natalia Sveshchynska

The article analyzes the peculiarities of the concertmaster (accompanist) work of future teachers of music art, both during study in the accompaniment and improvisation class and in subsequent practical activity. The author identifies the structural elements of the training of a pianist-accompanist in higher educational institutions of art, which manifest themselves in various aspects of the future professional's work. Concertmaster work is a core field of activity for a performing musician; by its nature it requires of the specialist not only traditional performing skills but also a whole set of personal qualities and the ability to work creatively with musical material. It should be noted that the concertmaster class, by its specificity, concentrates and systematizes the knowledge and skills that students acquire in piano, accordion, conducting, solo singing and music theory (N. Borisov, D. Gostev, T. Glyadkovskaya, M. Moiseeva, F. Khalilova and others). This is especially evident in singing to one's own accompaniment and in selection by ear, where the narrow professional skills acquired in the various specialties converge on a single goal: the embodiment of the artistic image of a musical work. Such a variety of tasks requires comprehensive improvement of the theory and methods of concertmaster activity. The concertmaster class is one of the performing disciplines that is important for the general professional training of music teachers. In it, students' musical and performance activities include studying a wide range of vocal, instrumental and children's music literature, accompanying a soloist (vocalist or instrumentalist), playing in an ensemble, singing to their own accompaniment, sight-reading, transposing and playing by ear. Within the structure of the professional training of a musician-pianist, the concertmaster class is an important subject of the special cycle which, along with other disciplines, is responsible for the holistic training of a specialist and provides a basis for work in ensemble settings. The tasks of the concertmaster class in music education are to form and improve the performance skills and abilities required for the professional work of a pianist with soloists. Today's practice demands competent concertmasters with high performance potential, developed artistic and organizational qualities, a command of the means of artistic treatment of musical material, and the habit of working on stage.

