Paramedics received training in point-of-care ultrasound (POCUS) to assess for cardiac contractility during management of medical out-of-hospital cardiac arrest (OHCA). The primary outcome was the percentage of adequate POCUS video acquisition and accurate video interpretation during OHCA resuscitations. Secondary outcomes included POCUS impact on patient management and resuscitation protocol adherence.
A prospective, observational cohort study of paramedics was performed following a four-hour training session, which included a didactic lecture and hands-on POCUS instruction. The Prehospital Echocardiogram in Cardiac Arrest (PECA) protocol was developed and integrated into the resuscitation algorithm for medical non-shockable OHCA. The ultrasound (US) images were reviewed by a single POCUS expert investigator to determine the adequacy of the POCUS video acquisition and the accuracy of the video interpretation. Changes in patient management and resuscitation protocol adherence data, including end-tidal carbon dioxide (EtCO2) monitoring following advanced airway placement, adrenaline administration, and compression pauses under ten seconds, were queried from the prehospital electronic health record (EHR).
Captured images were deemed adequate in 42/49 (85.7%) scans, and paramedic interpretation of the sonography was accurate in 43/49 (87.8%) scans. The POCUS results altered patient management in 14/49 (28.6%) cases. Paramedics adhered to EtCO2 monitoring in 36/36 (100.0%) patients with an advanced airway, adrenaline administration in 38/38 (100.0%) patients, and compression pauses under ten seconds in 36/38 (94.7%) patients.
Paramedics were able to accurately obtain and interpret cardiac POCUS videos during medical OHCA while adhering to a resuscitation protocol. These findings suggest that POCUS can be effectively integrated into paramedic protocols for medical OHCA.
Power system facility calibration is a compulsory task that normally requires on-site operations. In this work, we propose a remote calibration device that incorporates edge intelligence so that the required calibration can be accomplished with little human intervention. Our device comprises a wireless serial port module, a Bluetooth module, a video acquisition module, a text recognition module, and a message transmission module. First, the wireless serial port communicates with the edge node, the Bluetooth module searches for nearby Bluetooth devices to obtain their state information, and the video acquisition module monitors the calibration process in the calibration lab. Second, to improve intelligence, we propose a smart meter reading method based on artificial intelligence to obtain information from the calibration meters: a mini camera captures images of the meters, the Efficient and Accurate Scene Text Detector (EAST) performs text detection, and a Convolutional Recurrent Neural Network (CRNN) recognizes the meter data. Finally, the message transmission module transmits the recognized data to the database through the Extensible Messaging and Presence Protocol (XMPP). Our device solves the problem that some calibration meters cannot return information, thereby improving remote calibration intelligence.
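The meter-reading pipeline above (camera capture, EAST detection, CRNN recognition, XMPP transmission) can be sketched as a chain of stages. This is a minimal illustration only: `detect_text_regions` and `recognize_text` are hypothetical placeholders standing in for the EAST and CRNN models, and the transport is injected as a callable rather than a real XMPP client.

```python
# Minimal sketch of the meter-reading pipeline: detect -> recognize -> transmit.
# detect_text_regions stands in for EAST and recognize_text for CRNN;
# both are placeholders, not the authors' trained models.
from dataclasses import dataclass

@dataclass
class TextRegion:
    x: int
    y: int
    w: int
    h: int

def detect_text_regions(frame):
    """Placeholder for EAST: return candidate text boxes in the frame."""
    return [TextRegion(10, 20, 80, 16)]  # illustrative fixed box

def recognize_text(frame, region):
    """Placeholder for CRNN: decode the characters inside one box."""
    return "230.4"  # e.g. a voltage reading

def read_meter(frame):
    """Run detection, then recognition, and collect the readings."""
    return [recognize_text(frame, r) for r in detect_text_regions(frame)]

def transmit_readings(readings, send):
    """Forward each reading to the database via the given transport
    (XMPP in the paper); `send` is injected so the sketch stays
    self-contained."""
    for value in readings:
        send({"type": "meter_reading", "value": value})
```

In a real deployment the `send` callable would wrap an XMPP session, so the recognition stages stay decoupled from the messaging layer.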
This paper introduces Deep4D, a compact generative representation of shape and appearance learned from captured 4D volumetric video sequences of people. 4D volumetric video achieves highly realistic reproduction, replay, and free-viewpoint rendering of actor performance from multi-view video acquisition systems. A deep generative network is trained on 4D video sequences of an actor performing multiple motions to learn a generative model of the dynamic shape and appearance. We demonstrate that the proposed generative model provides a compact encoded representation capable of high-quality synthesis of 4D volumetric video with two orders of magnitude compression. A variational encoder-decoder network is employed to learn an encoded latent space that maps from 3D skeletal pose to 4D shape and appearance. This enables high-quality 4D volumetric video synthesis to be driven by skeletal motion, including skeletal motion capture data. The encoded latent space supports the representation of multiple sequences, with dynamic interpolation to transition between motions. We therefore introduce Deep4D motion graphs, a direct application of the proposed generative representation. Deep4D motion graphs allow real-time interactive character animation whilst preserving the plausible realism of movement and appearance from the captured volumetric video. They implicitly combine multiple captured motions in a unified representation for character animation from volumetric video, allowing novel character movements to be generated with dynamic shape and appearance detail.
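The "dynamic interpolation to transition between motions" idea can be sketched in isolation: two motions encoded as latent vectors are blended in latent space before decoding. The vectors and the linear blend schedule below are illustrative stand-ins, not the paper's learned representation.

```python
# Sketch of latent-space motion transitions: blend two latent codes
# (e.g. "walk" and "run") over several steps. Purely illustrative.

def lerp(z_a, z_b, t):
    """Linear interpolation between two latent codes, t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(z_a, z_b)]

def transition(z_from, z_to, steps):
    """Produce a sequence of latent codes blending z_from -> z_to.
    Requires steps >= 2 (endpoints included)."""
    return [lerp(z_from, z_to, i / (steps - 1)) for i in range(steps)]
```

Each interpolated code would then be passed through the decoder to synthesize the in-between 4D shape and appearance.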
Diagnosis and treatment for Parkinson's disease rely on the evaluation of motor functions, which is expensive and time consuming when performed in clinics. It is also difficult for patients to record correct movements at home without guidance from experienced physicians. To help patients with Parkinson's disease receive better evaluations from in-home recorded movement videos, we developed an interactive video acquisition and learning system for clinical motor assessments. The system provides patients with real-time guidance through multi-level body keypoint tracking and analysis, which ensures correct understanding and performance of clinical tasks. We tested its effectiveness on healthy subjects, and its efficiency and usability on patient groups. Experiments showed that our system enabled high-quality video recordings following clinical standards, benefiting both patients and physicians. Our system provides a novel learning-based telemedicine approach for the care of patients with Parkinson's disease.
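A hedged sketch of the kind of real-time check such a keypoint-guidance system might run: verify that the keypoints required for a clinical task are detected and inside the frame, and emit guidance text otherwise. The keypoint names, confidence threshold, and messages are all illustrative assumptions, not the authors' system.

```python
# Illustrative real-time framing check on tracked body keypoints.
# Keypoint names and the 0.5 confidence threshold are assumptions.

REQUIRED = {"nose", "left_wrist", "right_wrist"}

def guidance(keypoints, width, height):
    """keypoints: dict name -> (x, y, confidence). Return a list of
    guidance messages; an empty list means the subject is well framed."""
    messages = []
    for name in sorted(REQUIRED):
        point = keypoints.get(name)
        if point is None or point[2] < 0.5:
            messages.append(f"{name} not detected - adjust position")
        elif not (0 <= point[0] < width and 0 <= point[1] < height):
            messages.append(f"{name} is out of frame - step back")
    return messages
```

In an interactive recorder, these messages would be overlaid on the live camera view so the patient can correct their pose before the clinical task is recorded.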
Human gender recognition is one of the most challenging tasks in computer vision, especially for pedestrians, owing to wide variation in human poses, video acquisition conditions, illumination, occlusion, and clothing. In this article, we consider gender recognition, which is very important in video surveillance. To automate gender recognition, we provide a novel technique based on the extraction of features through different methodologies. Our technique consists of four steps: (a) preprocessing, (b) feature extraction, (c) feature fusion, and (d) classification. In the first step, the region of interest, the full body, is separated from the images. The images are then divided into two parts at a 2:3 ratio to acquire sets of upper-body and lower-body images. In the second step, three handcrafted feature extractors, HOG, Gabor, and granulometry, extract feature vectors using different score values. Experiments are performed on full-body datasets to determine the best configuration of features. The extracted feature vectors are fused to create one strong feature vector, which is then utilized for classification with SVM and KNN classifiers. Results are evaluated on five performance measures: accuracy, precision, sensitivity, specificity, and area under the curve (AUC). The best results were acquired on the upper body: 88.7% accuracy and 0.96 AUC. Compared with existing methodologies, the proposed method achieves significantly higher results.
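The four-step pipeline above can be sketched on toy data: split the body image at a 2:3 upper/lower ratio, extract features per region, fuse them by concatenation, and classify with a nearest-neighbour rule. The per-region mean/variance features below are simple stand-ins for HOG, Gabor, and granulometry, and the 1-NN rule stands in for the SVM/KNN classifiers; everything here is illustrative.

```python
# Toy sketch of the split -> extract -> fuse -> classify pipeline.
# Mean/variance features replace HOG/Gabor/granulometry for brevity.

def split_upper_lower(image):
    """image: list of pixel rows. Upper body is the top 2/5 of rows
    (the 2:3 upper:lower ratio from the paper)."""
    cut = (2 * len(image)) // 5
    return image[:cut], image[cut:]

def region_features(region):
    """Stand-in feature extractor: per-region mean and variance."""
    pixels = [p for row in region for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, var]

def fused_vector(image):
    """Feature fusion: concatenate upper- and lower-body vectors."""
    upper, lower = split_upper_lower(image)
    return region_features(upper) + region_features(lower)

def knn_predict(train, query):
    """train: list of (vector, label); 1-NN by squared Euclidean distance."""
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, query))
    return min(train, key=lambda item: dist(item[0]))[1]
```

With real extractors, only `region_features` changes; the fusion and classification steps stay the same.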
BACKGROUND: Work-related musculoskeletal disorders are prevalent in dental hygienists. Although engineering controls and ergonomic training are available, it is unclear why this intransigent problem continues. One possible barrier is that a comprehensive, standardized protocol for evaluating dental hygiene work does not exist. OBJECTIVE: This study aimed to generate a valid and reliable observational protocol for the assessment of dental hygiene work. METHODS: An iterative process was used to establish and refine an ecologically valid video acquisition and observation protocol to assess key activities, tasks, and performance components of dental hygiene work. RESULTS: Good inter-rater reliability was achieved across all variables when the final coding scheme was completed by three independent raters. CONCLUSIONS: This work provides an exemplar of the process required to generate a comprehensive protocol for evaluating the work components of a particular job, and provides standardized nomenclature for use by scientists and practitioners interested in understanding and addressing the pervasive issue of work-related disorders in dental hygienists.
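The RESULTS section reports inter-rater reliability across three raters without naming the statistic. As one common choice, the sketch below computes Cohen's kappa for a pair of raters over a list of coded categories; for three raters it could be averaged over the three pairs. This is an assumed illustration, not the study's actual analysis.

```python
# Illustrative Cohen's kappa for two raters' categorical codes.
# The study's actual reliability statistic is not specified.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length code lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a = Counter(rater_a)
    count_b = Counter(rater_b)
    # Expected agreement under independent marginal code frequencies.
    expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (observed - expected) / (1.0 - expected)
```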
In every cycle of harvesting operations, the farmer has no information on how many bunches will be harvested, or from which oil palm trees. By introducing a 360° camera imaging system, the number of Fresh Fruit Bunches (FFB) can be determined for every tree in a plantation area. Black bunch census was previously done manually to estimate yield. This was improved by video acquisition using a high-resolution 360° camera integrated with image processing software to calculate the number of FFB from video. Based on the standard planting pattern, circling each tree to acquire its 360° view is a time-consuming process. Current technology for approaching bunches is destructive and conventional, since the process involves physical contact between workers and FFB. Thus, a new method was established in which an All-Terrain Vehicle (ATV) is driven between rows of the plantation area for video acquisition. Images were extracted and thresholded using MATLAB. The L*, a*, b* color space was used for bunch identification across 90 sample images to identify the mean intensity value. Verification of the threshold model on another 48 sample images yielded a coefficient of determination (R²) of 0.8029 for bunch identification. As a result, a new method for video acquisition was established, together with a processing method for bunch identification in large-scale plantation areas.
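The bunch-identification step (threshold pixels against calibrated intensity bands in L*a*b* space, then count the resulting regions) can be sketched as below. The a* band values are illustrative placeholders, not the calibrated values from the study, and the region counter is a plain 4-connected flood fill rather than the MATLAB processing used by the authors.

```python
# Sketch of bunch identification: L*a*b* threshold + region counting.
# The threshold band is an illustrative assumption.

def threshold_lab(lab_image, a_min, a_max):
    """lab_image: rows of (L, a, b) tuples. Keep pixels whose a* channel
    falls inside the calibrated band; return a binary mask."""
    return [[1 if a_min <= a <= a_max else 0 for (_, a, _) in row]
            for row in lab_image]

def count_regions(mask):
    """Count 4-connected foreground regions (candidate bunches)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                regions += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and mask[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return regions
```

On full frames, each counted region would correspond to one candidate FFB, with the band limits taken from the mean intensity values calibrated on the 90 training images.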