illumination changes
Recently Published Documents

TOTAL DOCUMENTS: 284 (FIVE YEARS: 71)
H-INDEX: 20 (FIVE YEARS: 6)

2021 ◽  
Vol 12 (1) ◽  
pp. 127
Author(s):  
Weibo Cai ◽  
Jintao Cheng ◽  
Juncan Deng ◽  
Yubin Zhou ◽  
Hua Xiao ◽  
...  

Line segment matching is essential for industrial applications such as scene reconstruction, pattern recognition, and VSLAM. To achieve good performance in scenes with illumination changes, we propose a line segment matching method that fuses local gradient order and non-local structure information. The method first applies multiple averaging of the intensity histogram for adaptive partitioning. The line support region is then divided into several sub-regions, and the whole image into a few intervals. The sub-regions are encoded by local gradient order, and the intervals are encoded by non-local structure information describing the relationship between the sampled points and the anchor points. Finally, the two histograms of encoded vectors are separately normalized and cascaded. The proposed method was tested on public datasets and compared with previous methods: the line-junction-line (LJL), the mean-standard deviation line descriptor (MSLD), and the line-point invariant (LPI). Experiments show that our approach outperforms these representative methods in various scenes, suggesting that it is robust and well suited to scenes with varying illumination.
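The gradient-order encoding and the normalize-and-cascade step can be sketched in miniature. This is a hypothetical simplification, not the authors' descriptor: it bins each sampled gradient magnitude by its rank among the samples (its "order", which is invariant to monotonic illumination changes), then L2-normalizes and concatenates two such histograms.

```python
def gradient_order_code(gradients, n_bins=4):
    """Bin each sample by the fraction of samples with a strictly smaller
    gradient magnitude (its rank order) and accumulate a histogram."""
    n = len(gradients)
    hist = [0.0] * n_bins
    for g in gradients:
        frac = sum(1 for h in gradients if h < g) / n
        hist[min(int(frac * n_bins), n_bins - 1)] += 1.0
    return hist

def normalize_and_cascade(h1, h2):
    """L2-normalize each histogram separately, then concatenate them."""
    def l2(h):
        norm = sum(v * v for v in h) ** 0.5 or 1.0
        return [v / norm for v in h]
    return l2(h1) + l2(h2)
```

Because the code depends only on the ranking of gradient values, scaling all gradients by a constant (a global illumination change) leaves the histogram unchanged.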


2021 ◽  
Vol 12 (1) ◽  
pp. 4
Author(s):  
Chengming Liu ◽  
Ronghua Fu ◽  
Yinghao Li ◽  
Yufei Gao ◽  
Lei Shi ◽  
...  

In this paper, we propose a new method for detecting abnormal human behavior based on skeleton features, using self-attention-augmented graph convolution. Skeleton data have proved robust to complex backgrounds, illumination changes, and dynamic camera scenes, and are naturally structured as a graph in non-Euclidean space. In particular, spatial temporal graph convolutional networks (ST-GCN) can effectively learn the spatio-temporal relationships of such non-Euclidean data. However, ST-GCN operates only on local neighborhood nodes and thereby lacks global information. We propose a novel spatial temporal self-attention augmented graph convolutional network (SAA-Graph) that combines an improved spatial graph convolution operator with a modified transformer self-attention operator to capture both local and global information about the joints. The spatial self-attention augmented module captures the intra-frame relationships between human body parts. To the best of our knowledge, we are the first to apply self-attention to video anomaly detection by enhancing spatial temporal graph convolution. To validate the proposed model, we performed extensive experiments on two large-scale public benchmark datasets (ShanghaiTech Campus and CUHK Avenue), which show state-of-the-art performance for our approach compared to existing skeleton-based and graph convolution methods.
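The global-context idea behind the self-attention augmentation can be illustrated with a toy single-head attention over per-joint feature vectors. This is a bare sketch of scaled-dot-product attention (without the paper's learned projections or the graph-convolution branch): every joint attends to every other joint, so the output mixes information across the whole skeleton rather than only local neighbors.

```python
import math

def self_attention(feats):
    """Toy single-head self-attention over per-joint feature vectors.
    Each joint's output is a softmax-weighted mixture of all joints,
    giving global (whole-skeleton) context."""
    n = len(feats)
    dim = len(feats[0])
    out = []
    for i in range(n):
        # Dot-product similarity between joint i and every joint j.
        scores = [sum(a * b for a, b in zip(feats[i], feats[j])) for j in range(n)]
        m = max(scores)                        # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]
        out.append([sum(w[j] * feats[j][d] for j in range(n)) for d in range(dim)])
    return out
```

With identical inputs the attention weights are uniform and the output reproduces the input, which is an easy sanity check for an implementation.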


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8090
Author(s):  
Joel Vidal ◽  
Chyi-Yeu Lin ◽  
Robert Martí

Recently, 6D pose estimation methods have shown robust performance on highly cluttered scenes and under different illumination conditions. However, occlusions remain challenging, with recognition rates dropping below 10% for half-visible objects on some datasets. In this paper, we propose using top-down visual attention and color cues to boost the performance of a state-of-the-art method in occluded scenarios. More specifically, color information is employed to detect potential points in the scene, improve feature matching, and compute more precise fitting scores. The proposed method is evaluated on the Linemod occluded (LM-O), TUD light (TUD-L), Tejani (IC-MI) and Doumanoglou (IC-BIN) datasets, part of the SiSo BOP benchmark, which includes challenging highly occluded cases, changing illumination, and multiple instances. The method is analyzed and discussed for different parameters, color spaces, and metrics. The presented results show the validity of the proposed approach and its robustness against illumination changes and multiple-instance scenarios, especially boosting performance on highly occluded cases. The proposed solution provides an absolute improvement of up to 30% for occlusion levels between 40% and 50%, outperforming other approaches with a best overall recall of 71% on LM-O, 92% on TUD-L, 99.3% on IC-MI and 97.5% on IC-BIN.
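One way color cues can prune candidate points, sketched here under our own assumptions (the paper does not specify this exact rule): compare scene-point hues against the model's dominant hue using circular distance, keeping only plausibly matching points before the more expensive geometric fitting.

```python
def hue_distance(h1, h2):
    """Circular distance between two hue angles in degrees [0, 360)."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

def color_candidates(scene_hues, model_hue, tol=20):
    """Hypothetical color cue: keep indices of scene points whose hue lies
    within tol degrees of the model's dominant hue, pruning candidates
    before geometric matching."""
    return [i for i, h in enumerate(scene_hues)
            if hue_distance(h, model_hue) <= tol]
```

The circular distance matters because hue wraps around: 350 degrees and 10 degrees are only 20 degrees apart, not 340.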


Computation ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 127
Author(s):  
Daniel Queirós da Silva ◽  
Filipe Neves dos Santos ◽  
Armando Jorge Sousa ◽  
Vítor Filipe ◽  
José Boaventura-Cunha

Robotic navigation and perception for forest management are challenging due to the many obstacles to detect and avoid and the sharp illumination changes. Advanced perception systems are needed because they enable robotic and machinery solutions for smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works identifies strengths and research trends in this domain.


Author(s):  
Sohee Son ◽  
Jeongin Kwon ◽  
Hui-Yong Kim ◽  
Haechul Choi

Unmanned aerial vehicles such as drones are a key development technology with many beneficial applications. As they have made great progress, security and privacy issues are also growing, and drone tracking with a moving camera is one important way to address them. Drone tracking poses several challenges. First, drones move quickly and are usually tiny. Second, images captured by a moving camera exhibit illumination changes. Moreover, for surveillance applications the tracking must run in real time. For fast and accurate drone tracking, this paper proposes a tracking framework comprising two trackers, a predictor, and a refinement process. One tracker finds a moving target based on motion flow, and the other locates the region of interest (ROI) using histogram features. The predictor estimates the trajectory of the target with a Kalman filter, which helps keep track of the target even when the trackers fail. Lastly, the refinement process decides the location of the target using the ROIs from the trackers and the predictor. In experiments on our dataset of tiny flying drones, the proposed method achieved an average success rate 1.134 times higher than conventional tracking methods and ran at an average of 21.08 frames per second.
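The predictor's role can be sketched with a minimal one-dimensional constant-velocity Kalman filter. This is a deliberately simplified stand-in (scalar covariance, one axis) for whatever state model the paper uses: `predict` coasts the target forward when the trackers fail, and `update` blends in a measurement when one is available.

```python
class ConstantVelocityKalman1D:
    """Minimal 1-D constant-velocity Kalman filter: a sketch of a
    trajectory predictor that coasts when no measurement arrives."""

    def __init__(self, pos=0.0, vel=0.0, p=1.0, q=1e-3, r=1e-1):
        self.pos, self.vel = pos, vel
        self.p = p            # scalar covariance proxy, for brevity
        self.q, self.r = q, r # process and measurement noise

    def predict(self, dt=1.0):
        """Propagate the state; usable on its own when trackers fail."""
        self.pos += self.vel * dt
        self.p += self.q
        return self.pos

    def update(self, z):
        """Blend a measurement z into the state via the Kalman gain."""
        k = self.p / (self.p + self.r)
        self.pos += k * (z - self.pos)
        self.p *= (1.0 - k)
        return self.pos
```

After a prediction of 1.0 and a measurement of 2.0, the corrected estimate lands between the two, weighted by the gain, which is exactly the smoothing behavior the refinement step can exploit.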


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Khalid M. Hosny ◽  
Taher Magdy ◽  
Nabil A. Lashin ◽  
Kyriakos Apostolidis ◽  
George A. Papakostas

Representation and classification of color texture generate considerable interest within the field of computer vision. Texture classification is the difficult task of assigning unlabeled images or textures to the correct labeled class, made challenging by factors such as scaling, viewpoint variations, and illumination changes. In this paper, we present a new feature extraction technique for color texture classification and recognition. The approach aggregates features extracted from local binary patterns (LBP) and a convolutional neural network (CNN) to provide discriminatory information, leading to better texture classification results. CNN models generally classify images based on global features that describe the image as a whole, whereas LBP classifies images based on local features that describe the image's key points (image patches). Our analysis shows that adding LBP improves the classification task compared to using a CNN alone. We test the proposed approach experimentally on three challenging color image datasets (ALOT, CBT, and Outex). The results demonstrate that our approach improves classification accuracy by up to 25% over traditional CNN models. We identify optimal combinations for each dataset and obtain high classification rates. The proposed approach is robust, stable, and discriminative across the three datasets and shows improved classification and recognition compared to state-of-the-art methods.
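The classic 8-neighbor LBP code, and a naive version of the feature aggregation, can be sketched as follows. The concatenation rule is an assumption for illustration; the paper may combine the LBP histogram and CNN vector differently.

```python
def lbp_code(center, neighbors):
    """Classic 8-neighbour local binary pattern: threshold each neighbour
    against the centre pixel and pack the resulting bits into one byte."""
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << bit
    return code

def aggregate_features(lbp_hist, cnn_feats):
    """Hypothetical aggregation: concatenate the local LBP histogram with
    the global CNN feature vector into one descriptor."""
    return list(lbp_hist) + list(cnn_feats)
```

The LBP code depends only on sign comparisons against the center pixel, which is what makes it robust to monotonic illumination changes, the same nuisance factor the abstract highlights.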


Author(s):  
Bruno Sauvalle ◽  
Arnaud de La Fortelle

The goal of background reconstruction is to recover the background image of a scene from a sequence of frames showing the scene cluttered by various moving objects. This task is fundamental in image analysis and is generally the first step before more advanced processing, but it is difficult because there is no formal definition of what should be considered background or foreground, and the results may be severely affected by challenges such as illumination changes, intermittent object motion, and highly cluttered scenes. We propose in this paper a new iterative algorithm for background reconstruction, in which the current estimate of the background is used to identify which image pixels are background pixels, and a new background estimate is then computed from those pixels only. We show that the proposed algorithm, which uses stochastic gradient descent for improved regularization, is more accurate than the state of the art on the challenging SBMnet dataset, especially for short videos with low frame rates, and is also fast, reaching an average of 52 fps on this dataset when parameterized for maximal accuracy using GPU acceleration and a Python implementation.
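The iterative estimate-then-reselect loop can be shown per pixel with a toy scalar version. This is a simplification of the idea only (temporal median initialization, inlier re-averaging), not the paper's SGD-based algorithm: pixels that disagree with the current background estimate are treated as foreground and excluded from the next estimate.

```python
def reconstruct_background(frames, n_iters=3, tol=10):
    """Toy iterative background reconstruction on flat per-pixel lists.
    Start from the temporal median, then repeatedly re-estimate each
    pixel from only the frames that agree with the current background
    within tol (the presumed background pixels)."""
    def median(vals):
        s = sorted(vals)
        return s[len(s) // 2]

    n_pix = len(frames[0])
    bg = [median([f[i] for f in frames]) for i in range(n_pix)]
    for _ in range(n_iters):
        for i in range(n_pix):
            inliers = [f[i] for f in frames if abs(f[i] - bg[i]) <= tol]
            if inliers:  # keep the old estimate if nothing agrees
                bg[i] = sum(inliers) / len(inliers)
    return bg
```

In the example below, the outlier value 200 (a passing object) is excluded by the tolerance test, so the recovered background stays at 100.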


Eos ◽  
2021 ◽  
Vol 102 ◽  
Author(s):  
Elise Cutts

InSight data hint that shifting carbon dioxide ice loads, illumination changes, or solar tides could drive an uptick in marsquakes during northern summer—a “marsquake season.”


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Leilei Rong ◽  
Yan Xu ◽  
Xiaolei Zhou ◽  
Lisu Han ◽  
Linghui Li ◽  
...  

Vehicle re-identification (re-id) aims to match and identify the same vehicle across scenes captured by multiple surveillance cameras. For public security and intelligent transportation systems (ITS), it is extremely important to locate a target vehicle quickly and accurately in a massive vehicle database. However, re-id of the target vehicle is very challenging due to many factors, such as orientation variations, illumination changes, occlusion, low resolution, rapid vehicle movement, and large numbers of similar vehicle models. To address these difficulties and enhance re-id accuracy, we propose an improved multi-branch network that combines global-local feature fusion, a channel attention mechanism, and weighted local features. Firstly, the fusion of global and local features is adopted to obtain more information about the vehicle and enhance the learning ability of the model. Secondly, a channel attention module is embedded in the feature extraction branch to extract personalized features of the target vehicle. Finally, background and noise information in feature extraction is suppressed by the weighted local features. Comprehensive experiments on the mainstream evaluation datasets, including VeRi-776, VRIC, and VehicleID, indicate that our method effectively improves vehicle re-identification accuracy and is superior to state-of-the-art methods.
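The weighted global-local fusion can be sketched as a simple concatenation in which each local feature is scaled by a weight before joining the global feature. The weights and the concatenation rule are assumptions for illustration; in the paper they would be learned and embedded in the multi-branch network.

```python
def fuse_global_local(global_feat, local_feats, weights):
    """Hypothetical weighted fusion: concatenate the global feature with
    each local feature scaled by its weight, so noisy local regions
    (e.g. background) can be down-weighted."""
    fused = list(global_feat)
    for w, f in zip(weights, local_feats):
        fused.extend(w * v for v in f)
    return fused
```

Setting a local weight near zero effectively removes that region's contribution, which is how weighted local features can suppress background and noise.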

