Machine learning for video event recognition

2021 ◽  
pp. 1-24
Author(s):  
Danilo Avola ◽  
Marco Cascio ◽  
Luigi Cinque ◽  
Gian Luca Foresti ◽  
Daniele Pannone

In recent years, the spread of video sensor networks in both public and private areas has grown considerably. Smart algorithms for video semantic content understanding are increasingly developed to support human operators in monitoring different activities, by recognizing events that occur in the observed scene. With the term event, we refer to one or more actions performed by one or more subjects (e.g., people or vehicles) acting within the same observed area. When these actions are performed by subjects that do not interact with each other, the events are usually classified as simple. Instead, when any kind of interaction occurs among subjects, the involved events are typically classified as complex. This survey starts by providing formal definitions of both scene and event, and a logical architecture for a generic event recognition system. Subsequently, it presents two taxonomies, based on features and on machine learning algorithms, respectively, which are used to describe the different approaches for the recognition of events within a video sequence. This paper also discusses key works in the current state of the art of event recognition, providing a list of the datasets used to evaluate the performance of the reported methods for video content understanding.

Author(s):  
Yu Shao ◽  
Xinyue Wang ◽  
Wenjie Song ◽  
Sobia Ilyas ◽  
Haibo Guo ◽  
...  

With the increasing aging population in modern society, falls and fall-induced injuries in elderly people have become one of the major public health problems. This study proposes a classification framework that uses floor vibrations to detect fall events and distinguish different fall postures. A scaled 3D-printed model with twelve fully adjustable joints, able to simulate human body movement, was built to generate human fall data. The mass distribution of the human body was carefully studied and reflected in the model. Object-drop and human-fall tests were carried out, and the vibration signatures generated in the floor were recorded for analysis. Machine learning algorithms, including the K-means and K-nearest-neighbor algorithms, were used in the classification process. Three classifiers (human walking versus human fall, human fall versus object drop, and human falls from different postures) were developed in this study. Results showed that the three proposed classifiers achieve accuracies of 100%, 85%, and 91%, respectively. This paper thus develops a framework that uses floor vibration to build a machine-learning-based pattern recognition system for detecting human falls.
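As a minimal illustration of the K-nearest-neighbor classification step described above, the Python sketch below classifies hypothetical floor-vibration features (peak amplitude and signal duration). All feature values, labels, and the choice of k are invented for illustration; they are not taken from the study.

```python
import math

# Hypothetical training data: (peak_amplitude, signal_duration_s) features
# extracted from floor-vibration recordings, with event labels.
# The numbers are illustrative only.
TRAIN = [
    ((0.90, 0.30), "fall"),
    ((0.80, 0.25), "fall"),
    ((0.20, 0.10), "walk"),
    ((0.15, 0.12), "walk"),
]

def knn_predict(x, train=TRAIN, k=3):
    """Classify a feature vector by majority vote of its k nearest neighbors."""
    # Sort training samples by Euclidean distance to the query point.
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

A high-amplitude, longer-duration signature such as `knn_predict((0.85, 0.28))` would be voted "fall" by this toy model, while a low-amplitude one lands on "walk".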


Author(s):  
Tumisho Billson Mokgonyane ◽  
Tshephisho Joseph Sefara ◽  
Thipe Isaiah Modipa ◽  
Madimetja Jonas Manamela

2016 ◽  
Vol 24 (2) ◽  
pp. 125-135 ◽  
Author(s):  
Diego Gachet Páez ◽  
Manuel de Buenaga Rodríguez ◽  
Enrique Puertas Sánz ◽  
María Teresa Villalba ◽  
Rafael Muñoz Gil

The aging population and the economic crisis, especially in developed countries, have led to a reduction in the funds dedicated to health care. It is therefore desirable to optimize the costs of public and private healthcare systems by reducing the affluence of chronic and dependent people to care centers; promoting healthy lifestyles and activities can help people avoid chronic diseases such as hypertension. In this article, we describe a system for promoting an active and healthy lifestyle and for providing people with guidelines and valuable information about their habits. The proposed system is being developed around the Big Data paradigm, using bio-signal sensors and machine-learning algorithms for recommendations.


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1415
Author(s):  
Hirokazu Madokoro ◽  
Kazuhisa Nakasho ◽  
Nobuhiro Shimoi ◽  
Hanwool Woo ◽  
Kazuhito Sato

This paper presents a novel bed-leaving sensor system for real-time recognition of bed-leaving behavior patterns. The proposed system comprises five pad sensors installed on a bed, a rail sensor inserted in a safety rail, and a behavior pattern recognizer based on machine learning. A linear relation between loads and output was obtained from a load test conducted to evaluate the sensor output characteristics. Moreover, for an equivalent load, the output values change linearly with the speed at which the load is applied. We obtained benchmark datasets of continuous and discontinuous behavior patterns from ten subjects. The recognition targets of our sensor prototype and its monitoring system comprise five behavior patterns: sleeping, longitudinal sitting, lateral sitting, terminal sitting, and leaving the bed. We compared five types of machine learning algorithms for recognizing the five behavior patterns. The experimental results revealed that the proposed sensor system improved recognition accuracy for both datasets. Moreover, we achieved a further improvement in recognition accuracy after integrating the learning datasets into a general discriminator.
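As a sketch of how a discriminator over the five pad-sensor loads might map readings to the five behavior patterns, the nearest-centroid classifier below uses invented load values; the paper's actual recognizer and its learned parameters are not given in the abstract.

```python
import math

# Illustrative per-pattern centroids of normalized load values from the five
# pad sensors (head, upper-left, upper-right, lower-left, lower-right).
# All numbers are invented for this sketch, not the paper's learned model.
CENTROIDS = {
    "sleeping":             (0.8, 0.9, 0.9, 0.7, 0.7),
    "longitudinal_sitting": (0.1, 0.9, 0.9, 0.2, 0.2),
    "lateral_sitting":      (0.0, 0.9, 0.1, 0.8, 0.0),
    "terminal_sitting":     (0.0, 0.1, 0.1, 0.9, 0.9),
    "leaving":              (0.0, 0.0, 0.0, 0.0, 0.0),
}

def classify(pads):
    """Return the behavior pattern whose centroid is closest to the reading."""
    return min(CENTROIDS, key=lambda c: math.dist(pads, CENTROIDS[c]))
```

A reading with load spread across all five pads would land on "sleeping", while a near-zero reading would be classified as "leaving"; the paper's machine learning recognizer plays the role of this `classify` function, with parameters learned from the benchmark datasets.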


Life ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 44
Author(s):  
Max Riekeles ◽  
Janosch Schirmack ◽  
Dirk Schulze-Makuch

(1) Background: Future missions to potentially habitable places in the Solar System require biochemistry-independent methods for detecting potential alien life forms. In past missions, the technology was not advanced enough to perform onboard machine analysis of microscopic observations, but recent increases in computational power make automated in-situ analyses feasible. (2) Methods: Here, we present a semi-automated experimental setup capable of distinguishing the movement of abiotic particles due to Brownian motion from the motility behavior of the bacteria Pseudoalteromonas haloplanktis, Planococcus halocryophilus, Bacillus subtilis, and Escherichia coli. Supervised machine learning algorithms were also used to identify these species specifically, based on their characteristic motility behavior. (3) Results: While we were able to distinguish microbial motility from abiotic movement due to Brownian motion with an accuracy exceeding 99%, the accuracy of the automated identification of the selected species did not exceed 82%. (4) Conclusions: Motility is an excellent biosignature that can serve as a tool for upcoming life-detection missions. This study serves as the basis for the further development of a microscopic life recognition system for upcoming missions to Mars or the ocean worlds of the outer Solar System.
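One simple, commonly used track statistic for separating directed swimming from Brownian motion is the straightness index (net displacement divided by path length). The sketch below, with an assumed decision threshold, illustrates the idea only; it is not the classifier used in the study.

```python
import math

def straightness(track):
    """Net displacement over total path length for a list of (x, y) points.
    Close to 1 for directed swimming; close to 0 for a Brownian random walk."""
    net = math.dist(track[0], track[-1])
    path = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    return net / path if path else 0.0

def is_motile(track, threshold=0.5):
    """Classify a track as motile; the threshold is an assumed value."""
    return straightness(track) > threshold
```

A track moving steadily in one direction scores a straightness near 1 and is flagged as motile, while a track that repeatedly reverses direction, as a diffusing particle would, scores low. Real pipelines combine several such track features (speed, turning angle, mean squared displacement) before feeding a supervised classifier.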


2021 ◽  
Vol 1 (3) ◽  
pp. 138-165
Author(s):  
Thomas Krause ◽  
Jyotsna Talreja Wassan ◽  
Paul Mc Kevitt ◽  
Haiying Wang ◽  
Huiru Zheng ◽  
...  

Metagenomics promises to provide valuable new insights into the role of microbiomes in eukaryotic hosts such as humans. Owing to the decreasing cost of sequencing, public and private repositories of human metagenomic datasets are growing fast. Metagenomic datasets can contain terabytes of raw data, which is a challenge for data processing but also an opportunity for advanced machine learning methods, such as deep learning, that require large datasets. However, in contrast to classical machine learning algorithms, the use of deep learning in metagenomics is still the exception. Regardless of the algorithms used, they are usually not applied to raw data but require several preprocessing steps. Performing this preprocessing and the actual analysis in an automated, reproducible, and scalable way is another challenge. These and other challenges can be addressed by adapting known big data methods and architectures to the needs of microbiome analysis and DNA sequence processing. A conceptual architecture for applying machine learning and big data to metagenomic datasets was recently presented and initially validated on the rumen microbiome. The same architecture can be used for clinical purposes, as discussed in this paper.
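As an example of one such preprocessing step, the sketch below converts a raw DNA read into a fixed-length k-mer frequency vector, a common feature representation for classical machine learning on sequence data. The choice of k and the normalization are illustrative, not prescribed by the architecture discussed above.

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3):
    """Count k-mer frequencies in a DNA read, producing a fixed-length
    feature vector (4**k entries) regardless of the read's length."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    alphabet = ["".join(p) for p in product("ACGT", repeat=k)]
    total = max(sum(counts.values()), 1)  # avoid division by zero on short reads
    return [counts[m] / total for m in alphabet]
```

Each read, whatever its length, becomes a vector of 4^k = 64 relative frequencies (for k = 3), which can then be fed to a classical classifier or, aggregated over millions of reads, to a deep model; at terabyte scale this step is typically distributed across a big data framework.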

