Video Capture: Recently Published Documents


TOTAL DOCUMENTS: 283 (five years: 52)

H-INDEX: 20 (five years: 4)

Author(s):  
Wendy Nielsen ◽  
Annette Turney ◽  
Helen Georgiou ◽  
Pauline Jones

Abstract: The construction of dynamic multimedia products requires the selection and integration of a range of semiotic resources. As an assessment task for preservice teachers, this construction process is complex but has significant potential for learning. To investigate how weaving together multiple representations in such tasks enables learners to develop conceptual understanding, the paper presents an indicative case study of a 2nd-year preservice primary (K-6) teacher who created a digital explanation on the topic of ‘transparency’ for Stage 3 children (ages 11–12). We focus on data gathered during the 3-h construction process, including artefacts such as images, online searches, websites accessed and paper records used for planning; the digital explanation as product; audio and video capture of the construction process; and pre- and post-construction interviews. Using multimodal analysis, we examine these data to understand how meanings are negotiated as the maker moves iteratively among multiple representations, and through semiotic choices within these representations, to explain the science concept. The analyses illustrate the complexity of the construction process while providing insight into the creator’s decision-making and into her developing semiotic and conceptual understandings. These findings allow us to build on the concept of cumulative semiotic progression (Hoban & Nielsen, Research in Science Education, 35, 1101-1119, 2013) by explicating the role of iterative reasoning in the production of pedagogic multimedia.


2021 ◽  
Author(s):  
Yuchen Yue ◽  
Hua Li ◽  
Jianhua Luo

Establishing structured reconstruction models and efficient reconstruction algorithms that meet practical engineering needs is a central concern in applied research on Compressed Sensing (CS) theory. Targeting the problems of high-speed video capture, the paper proposes a video CS scheme based on intra-frame and inter-frame constraints and a Genetic Algorithm (GA). First, it employs the intra-frame and inter-frame correlation of the video signals as prior information, creating a video CS reconstruction model based on temporal and spatial similarity constraints. It then uses an overcomplete Ridgelet dictionary to divide the video frames into three structures: smooth, single-oriented, or multi-jointed. The video frames are clustered by structure using the Affinity Propagation (AP) algorithm, and finally the clusters are reconstructed using an evolutionary algorithm. Experiments show the scheme performs well in terms of reconstruction quality.
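The paper's GA/AP pipeline is not reproduced here, but the underlying CS recovery problem it solves can be illustrated with a much simpler baseline: iterative soft-thresholding (ISTA) for the sparsity-regularized least-squares objective. A minimal sketch, with all sizes and parameters purely illustrative:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy demo: recover a sparse signal from random linear measurements
rng = np.random.default_rng(0)
n, m, k = 100, 40, 3                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true                             # compressed measurements
x_hat = ista(A, y)                         # sparse reconstruction
```

The paper replaces this generic sparsity prior with structure-aware priors (temporal/spatial similarity, per-cluster dictionaries) and an evolutionary search instead of gradient descent.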


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8115
Author(s):  
Alessandro Schaer ◽  
Oskar Helander ◽  
Francesco Buffa ◽  
Alexis Müller ◽  
Kevin Schneider ◽  
...  

We present a system capable of providing visual feedback for ergometer training, allowing detailed analysis and gamification. The presented solution can easily upgrade any existing ergometer device. The system consists of a set of pedals with embedded sensors, readout electronics, wireless communication modules, and a tablet device for interaction with the users; it can be mounted on any ergometer, transforming it into a full analytical assessment tool with interactive training capabilities. The methods to capture the forces and moments applied to the pedal, as well as the pedal’s angular position, were validated using reference sensors and high-speed video capture systems. The mean absolute error (MAE) for load is 18.82 N, 25.35 N, and 0.153 Nm for Fx, Fz, and Mx, respectively, and the MAE for the pedal angle is 13.2°. A fully gamified ergometer-training experience has been demonstrated with the presented system, enhancing the rehabilitation experience with audiovisual feedback based on measured cycling parameters.
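The MAE figures reported above are a standard validation metric: the mean of the absolute differences between the reference-sensor trace and the device's own measurement. A quick sketch (the traces below are hypothetical, not the paper's data):

```python
import numpy as np

def mean_absolute_error(reference, measured):
    """MAE between a reference-sensor trace and the pedal's own measurement."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs(reference - measured)))

# Hypothetical validation samples for one load channel (newtons)
fx_reference = [100.0, 105.0, 98.0, 110.0]
fx_pedal     = [ 92.0, 118.0, 101.0, 95.0]
print(mean_absolute_error(fx_reference, fx_pedal))  # → 9.75
```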


2021 ◽  
pp. 194173812110509
Author(s):  
Lindsay Lafferty ◽  
John Wawrzyniak ◽  
Morgan Chambers ◽  
Todd Pagliarulo ◽  
Arthur Berg ◽  
...  

Background: Traditional running gait analysis is limited to artificial environments, but whether treadmill running approximates overground running is debated. This study aimed to compare treadmill gait analysis using fixed video with outdoor gait analysis using drone video capture. Hypothesis: Measured kinematics would be similar between natural outdoor running and traditional treadmill gait analysis. Study Design: Crossover study. Level of Evidence: Level 2. Methods: The study population included cross-country, track and field, and recreational athletes with a current running mileage of at least 15 km per week. Participants completed segments in indoor and outdoor environments. Indoor running was completed on a treadmill with static video capture, and outdoor segments were obtained via drone on an outdoor track. Three reviewers independently performed clinical gait analysis on footage for 32 runners using kinematic measurements with published acceptable intra- and interrater reliability. Results: Of the 8 kinematic variables measured, 2 were found to have moderate agreement indoor versus outdoor, while 6 had fair to poor agreement. Foot strike at initial contact and rearfoot position at midstance had moderate agreement indoor versus outdoor, with kappas of 0.54 and 0.49, respectively. The remaining variables (tibial inclination at initial contact, knee flexion angle at initial contact, forward trunk lean over the full gait cycle, knee center position at midstance, knee separation at midstance, and lateral pelvic drop at midstance) had fair to poor agreement, with kappas ranging from 0.21 to 0.36. Conclusion: This study suggests that kinematics may differ between natural outdoor running and traditional treadmill gait analysis. Clinical Relevance: Providing recommendations for altering gait based on treadmill gait analysis may prove harmful if treadmill analysis does not approximate natural running environments. Drone technology could advance clinical running recommendations by capturing runners in natural environments.
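The agreement values quoted above (0.21–0.54) are kappa statistics, which correct raw percent agreement for the agreement expected by chance. A minimal Cohen's-kappa sketch; the foot-strike ratings below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two sets of categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical foot-strike labels for 8 runners, indoor vs outdoor footage
indoor  = ["rear", "rear", "fore", "mid", "rear", "fore", "mid", "rear"]
outdoor = ["rear", "mid",  "fore", "mid", "rear", "rear", "mid", "rear"]
print(cohens_kappa(indoor, outdoor))  # → 0.6
```

A kappa near 0 means agreement no better than chance; values around 0.2–0.4 are conventionally read as "fair", and 0.4–0.6 as "moderate", matching the study's interpretation.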


2021 ◽  
Vol 1 ◽  
pp. 27-32
Author(s):  
Joy Oti

The ongoing COVID-19 pandemic has disrupted teaching and learning in higher education institutions, presenting novel challenges for staff and students alike. These challenges have had an immense impact on the way postgraduate research (PGR) teachers perform their dual responsibilities as both students and teachers. Achieving a seamless transition from in-person to virtual learning was an arduous task. To this end, pedagogies evolved to accommodate remote conferencing, video capture and other real-time communication tools that facilitate virtual collaboration between staff and students. In this paper, I highlight the challenges of integrating online learning with problem-based learning (PBL), a signature pedagogy employed by law and business schools. I draw on my personal experiences as a student and PGR teacher during the pandemic and suggest proactive mitigation responses.


2021 ◽  
pp. 3-12
Author(s):  
Alexander S. Pechurin ◽  
Sergey F. Jatsun ◽  
Andrey V. Fedorov ◽  
A. S. Jatsun

2021 ◽  
Vol 1 ◽  
pp. 761-770
Author(s):  
Nicolas Gio ◽  
Ross Brisco ◽  
Tijana Vuletic

Abstract: Drones are becoming more popular in military applications and in civil aviation among hobbyists and businesses. Achieving natural Human-Drone Interaction (HDI) would enable unskilled drone pilots to take part in flying these devices and, more generally, ease the use of drones. The research within this paper focuses on the design and development of a Natural User Interface (NUI) allowing a user to pilot a drone with body gestures. A Microsoft Kinect was used to capture the user’s body information, which was processed by a motion-recognition algorithm and converted into commands for the drone. A Graphical User Interface (GUI) gives feedback to the user: visual feedback from the drone’s onboard camera is provided on a screen, and an interactive menu, itself controlled by body gestures, offers functionalities such as photo and video capture or take-off and landing. This research resulted in an efficient and functional system that is more instinctive, natural, immersive and fun than piloting with a physical controller, and includes innovative aspects such as additional piloting functionalities and control of the flight speed.
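The skeleton-to-command step of such an NUI can be sketched as simple rules over tracked joint positions. The joint names, thresholds, and gesture set below are illustrative assumptions, not the authors' actual recognition algorithm:

```python
# Map Kinect-style tracked joints (x, y, z in metres, y up) to a drone command.
# All joint names, thresholds, and gestures here are hypothetical.

def classify_gesture(joints):
    """Return a drone command for the current skeleton frame."""
    left = joints["left_hand"]
    right = joints["right_hand"]
    head = joints["head"]
    if left[1] > head[1] and right[1] > head[1]:
        return "take_off"                 # both hands raised above the head
    if left[1] < head[1] - 0.8 and right[1] < head[1] - 0.8:
        return "land"                     # both hands dropped low
    if right[0] - left[0] > 1.2:
        return "photo"                    # arms spread wide
    return "hover"                        # default: hold position

frame = {"head": (0.0, 1.7, 2.0),
         "left_hand": (-0.3, 1.9, 2.0),
         "right_hand": (0.3, 1.9, 2.0)}
print(classify_gesture(frame))            # → take_off
```

A real implementation would smooth joint positions over several frames and debounce commands so that sensor jitter does not trigger spurious take-offs or landings.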


Author(s):  
A. Frolov ◽  
G. Rendle ◽  
A. Kreskowski ◽  
M. Kaisheva ◽  
B. Froehlich ◽  
...  

Abstract. The ability to capture and explore complex real-world dynamic scenes is crucial for their detailed analysis. Tools which allow retrospective exploration of such scenes may support training of new employees or be used to evaluate industrial processes. In our work, we share insights and practical details for end-to-end acquisition of Free-Viewpoint Videos (FVV) in challenging environments and their potential for exploration in collaborative immersive virtual environments. Our lightweight capturing approach makes use of commodity DSLR cameras and focuses on improving both density and accuracy of Structure-from-Motion (SfM) reconstructions from small sets of images under difficult conditions. The integration of captured 3D models over time into a compact representation allows for efficient visualization of detailed FVVs in an immersive multi-user virtual reality system. We demonstrate our workflow on a representative acquisition of a suction excavation process and outline a use-case for exploration and interaction between collocated users and the FVV in a collaborative virtual environment.


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-25
Author(s):  
Maximilian Speicher ◽  
Katy Lewis ◽  
Michael Nebeling

While augmented and virtual reality technologies are becoming mainstream, it is still technically challenging and time-consuming to create new applications. Many designers draw from traditional low-fidelity prototyping methods that do not lend themselves well to designing in 3D. Developers use high-end programming frameworks such as Unity and Unreal, which require significant hardware/software setups and coding skills. We see a gap in the medium-fidelity range, where there is an opportunity for new tools to leverage the advantages of 360° content for AR/VR prototyping. Existing tools, however, have only limited support for 3D geometry, spatial and proxemic interactions, puppeteering, and storytelling. We present 360theater, a new method and tool for rapid prototyping of AR/VR experiences, which takes dioramas into the virtual realm by enhancing 360° video capture with 3D geometry and simulating spatial interactions via Wizard of Oz. Our comparative evaluation with novice and experienced AR/VR designers shows that 360theater can close this gap, achieving higher-fidelity and more realistic AR/VR prototypes than comparable methods.

