Small Human Group Detection and Validation using Pyramidal Histogram of Oriented Gradients and Gray Level Run Length Method

Over the past decades, human detection in security and surveillance systems has become a dynamic research area in computer vision. Interest is driven by a wide range of applications, including smart surveillance, multi-person interfaces, human pose characterization, person counting, and person identification. A video surveillance system mainly deals with the recognition and classification of moving objects with respect to actions such as walking, talking, and hand shaking. The processing stages of small human group detection and validation include frame generation and segmentation using hierarchical clustering. To achieve accurate classification, the feature descriptors Multi-Scale Completed Local Binary Pattern (MS-CLBP) and Pyramidal Histogram of Oriented Gradients (PHOG) are employed to extract features efficiently, and a Recurrent Neural Network (RNN) classifier separates the features into individual humans and groups within a crowd. To extract statistical features for group validation, the Gray Level Run Length Method (GLRLM) is incorporated.
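
As an illustration of the pyramidal descriptor, the sketch below assembles a PHOG-style feature vector with scikit-image by concatenating HOG histograms over an increasingly fine spatial grid. The tile size, pyramid depth, and HOG parameters are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def phog_descriptor(gray, levels=3, orientations=9):
    """Concatenate HOG descriptors computed over a spatial pyramid.

    gray   : 2D array (grayscale frame or segmented region)
    levels : pyramid depth; level l splits the image into a 2^l x 2^l grid
    """
    h, w = gray.shape
    features = []
    for level in range(levels):
        cells = 2 ** level
        for i in range(cells):
            for j in range(cells):
                tile = gray[i * h // cells:(i + 1) * h // cells,
                            j * w // cells:(j + 1) * w // cells]
                # Resize so every tile yields a fixed-length HOG vector.
                tile = resize(tile, (64, 64), anti_aliasing=True)
                features.append(hog(tile, orientations=orientations,
                                    pixels_per_cell=(16, 16),
                                    cells_per_block=(2, 2)))
    return np.concatenate(features)
```

The concatenated vector can then be fed to a classifier such as the RNN mentioned above, alongside the MS-CLBP features.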

Author(s): Mona E. Elbashier, Suhaib Alameen, Caroline Edward Ayad, Mohamed E. M. Gar-Elnabi

This study aims to characterize the pancreas area into head, body, and tail using the Gray Level Run Length Matrix (GLRLM) and to extract classification features from CT images. The GLRLM technique yields eleven features. To find the gray level distribution in the CT images, the GLRLM features are computed from runs of gray levels in pixels, estimating the size distribution of the sub-patterns. The images were analyzed with Interactive Data Language (IDL) software to measure their grey level distribution. The results show that the GLRLM features give a classification accuracy of 89.2% for the pancreas head, 93.6% for the body, and 93.5% for the tail, with an overall classification accuracy of 92.0% for the pancreas area. These relationships are stored in a texture dictionary that can later be used to automatically annotate new CT images with the appropriate pancreas area names.
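
For readers unfamiliar with run-length statistics, the minimal sketch below builds a horizontal GLRLM for a quantized slice and computes one classic statistic (short run emphasis). The number of gray-level bins is an assumption for illustration; the paper's full set of eleven features and the IDL workflow are not reproduced here.

```python
import numpy as np

def glrlm_horizontal(img, levels=16):
    """Horizontal Gray Level Run Length Matrix: rows are gray levels,
    columns are run lengths."""
    # Quantize intensities to a small number of gray levels.
    q = np.floor(levels * (img - img.min()) / (np.ptp(img) + 1e-9)).astype(int)
    q = np.clip(q, 0, levels - 1)
    max_run = q.shape[1]
    glrlm = np.zeros((levels, max_run), dtype=np.int64)
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                glrlm[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        glrlm[run_val, run_len - 1] += 1
    return glrlm

def short_run_emphasis(glrlm):
    # One of the classic GLRLM statistics: short runs weighted by 1/length^2.
    j = np.arange(1, glrlm.shape[1] + 1)
    return (glrlm / j**2).sum() / glrlm.sum()
```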


2017, Vol 11 (5)
Author(s): Ahmad E. Eladawi, Saad A. A. Sayed, Hammad T. Elmetwally, Tamer O. Diab

Author(s): Sang-Ho Cho, Taewan Kim, Daijin Kim

This paper proposes a pose robust human detection and identification method for sequences of stereo images using multiply-oriented 2D elliptical filters (MO2DEFs), which can detect and identify humans regardless of scale and pose. Four 2D elliptical filters with specific orientations are applied to a 2D spatial-depth histogram, and threshold values are used to detect humans. The human pose is then determined by finding the filter whose convolution result was maximal. Candidates are verified by either detecting the face or matching head-shoulder shapes. Human identification employs the human detection method for a sequence of input stereo images and identifies them as a registered human or a new human using the Bhattacharyya distance of the color histogram. Experimental results show that (1) the accuracy of pose angle estimation is about 88%, (2) human detection using the proposed method outperforms that of using the existing Object Oriented Scale Adaptive Filter (OOSAF) by 15–20%, especially in the case of posed humans, and (3) the human identification method has a nearly perfect accuracy.
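
The identification step compares color histograms with the Bhattacharyya measure; a minimal NumPy sketch of that comparison is given below. The histogram binning and the acceptance threshold are hypothetical values chosen for illustration, not those used by the authors.

```python
import numpy as np

def color_histogram(bgr, bins=16):
    """Joint 3D color histogram of a detected human region, L1-normalised."""
    hist, _ = np.histogramdd(bgr.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / (hist.sum() + 1e-12)

def bhattacharyya_distance(p, q):
    """Distance between two normalised histograms; 0 means identical."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))   # common distance form used for matching

def is_same_person(hist_new, hist_registered, threshold=0.3):
    # A detection is declared "registered" when the distance to a stored
    # template falls below a chosen threshold (hypothetical value).
    return bhattacharyya_distance(hist_new, hist_registered) < threshold
```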


2014, Vol 541-542, pp. 1297-1303
Author(s): Kheddar Boudjemaa, Ping Song

This paper presents the design and implementation of an efficient, low-cost system for indoor monitoring of human intrusion. The design is based on readily available passive pyroelectric infrared (PIR) sensors, which detect the thermal perturbation caused by moving objects within their field of view (FOV). Our design uses the PIR sensors in a geometric context as binary detectors with adaptive threshold estimation. The combined field of view of three PIR detectors is modulated by a custom-designed lens mask to estimate the bearing angle of a single human intrusion. The prototype consists of a sensing module that transmits wirelessly to a host module, where the processed data are visualized.
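
A minimal sketch of the binary-detector idea follows, assuming a sampled PIR amplitude stream: the detection threshold adapts to the recent noise floor. The window length and the multiplier k are illustrative values, not those used in the prototype.

```python
import numpy as np

def adaptive_binary_detector(samples, window=50, k=3.0):
    """Turn a raw PIR amplitude stream into binary detections.

    The threshold adapts to the recent noise floor: mean + k * std over a
    sliding window of past samples (window and k are illustrative values).
    """
    detections = np.zeros(len(samples), dtype=bool)
    for t in range(window, len(samples)):
        baseline = samples[t - window:t]
        threshold = baseline.mean() + k * baseline.std()
        detections[t] = samples[t] > threshold
    return detections
```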


2018, Vol 6 (4), pp. 146-151
Author(s): Endina Putri Purwandari, Rachmi Ulizah Hasibuan, Desi Andreswari

Bamboo species can be identified from images of bamboo leaves. This study identified bamboo species based on leaf texture, using the Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRLM) for texture feature extraction and the Euclidean distance to measure image similarity. The study used images of bamboo species found in Bengkulu province, namely Bambusa vulgaris var. vulgaris, Bambusa multiplex, Bambusa vulgaris var. striata, Gigantochloa robusta, Gigantochloa schortrchinii, Gigantochloa serik, Schizostachyum brachycladum, and Dendrocalamus asper. The bamboo application was built using MATLAB. The accuracy of the application was 100% for bamboo leaf test images captured with a smartphone camera and 81.25% for test images downloaded from the Internet.
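
The sketch below illustrates the GLCM half of such a pipeline with scikit-image, together with nearest-neighbour matching by Euclidean distance. The chosen distances, angles, and the four texture statistics are assumptions for illustration; the study also uses GLRLM features and was implemented in MATLAB rather than Python.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_vector(gray_uint8):
    """GLCM-based texture features for one leaf image (grayscale, uint8)."""
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return np.array([graycoprops(glcm, p).mean() for p in props])

def nearest_species(query_vec, reference_vecs, labels):
    """Label of the reference image with the smallest Euclidean distance."""
    dists = np.linalg.norm(reference_vecs - query_vec, axis=1)
    return labels[int(np.argmin(dists))]
```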


Sensors, 2020, Vol 20 (23), pp. 6985
Author(s): Fakhreddine Ababsa, Hicham Hadj-Abdelkader, Marouane Boui

The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature for RGB images acquired from conventional perspective cameras, whereas omnidirectional images have seldom been used and published research in this field remains limited. In this study, Riemannian manifolds were considered in order to compute the gradient on spherical images and generate a robust descriptor used along with an SVM classifier for human detection. Original likelihood functions for the particle filter are proposed, using both geodesic distances and the overlap between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared to machine-learning-based techniques in terms of 3D pose accuracy. The Root Mean Square Error (RMSE) was measured by comparing the estimated 3D poses with ground-truth data, giving a mean error of 0.065 m for the walking action.
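
As an illustration of how a silhouette-overlap likelihood can drive the particle-filter update, the sketch below scores each particle by the agreement between the detected silhouette and the projected model silhouette and renormalises the weights. The Gaussian form and the sigma value are assumptions, and the geodesic-distance term used by the authors is omitted.

```python
import numpy as np

def overlap_likelihood(detected_mask, projected_mask, sigma=0.2):
    """Likelihood from the overlap between the detected silhouette and the
    projected 3D-model silhouette (both binary masks of the same size)."""
    inter = np.logical_and(detected_mask, projected_mask).sum()
    union = np.logical_or(detected_mask, projected_mask).sum() + 1e-9
    error = 1.0 - inter / union              # 0 when the silhouettes coincide
    return np.exp(-error**2 / (2 * sigma**2))

def reweight_particles(weights, likelihoods):
    """Standard particle-filter update: multiply and renormalise the weights."""
    w = weights * likelihoods
    return w / (w.sum() + 1e-12)
```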

