Object tracking algorithm by moving video camera

Author(s):  
B. A. Zalesky

The algorithm ACT (Adaptive Color Tracker) for tracking objects with a moving video camera is presented. One feature of the algorithm is the adaptation of the tracked object's feature set to the background of the current frame. At each step, the algorithm selects from the object features those that are most specific to the object and at the same time least specific to the current frame background, since the remaining object features not only do not contribute to separating the tracked object from the background but also impede its correct detection. The features of the object and the background are formed from the color representations of the scenes and can be computed in two ways. The first way uses the 3D color vectors of the object and background images clustered by a fast version of the well-known k-means algorithm. The second, simpler and faster way partitions the RGB color space into 3D parallelepipeds and then replaces the color of each pixel with the average of all colors belonging to the same parallelepiped as the pixel's color. Another feature of the algorithm is its simplicity, which allows it to run on small mobile computers such as the Jetson TX1 or TX2. The algorithm was tested on video sequences captured by various camcorders, as well as on the well-known TV77 data set, which contains 77 different tagged video sequences. The tests have shown the efficiency of the algorithm. On the test images, its accuracy and speed surpass those of the trackers implemented in the computer vision library OpenCV 4.1.
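
The second feature-construction route is easy to make concrete: partition the RGB cube into axis-aligned parallelepipeds and replace every pixel's color with the mean color of its cell. A minimal sketch (the number of bins per axis is an assumption; the abstract does not specify it):

```python
import numpy as np

def quantize_rgb(image, bins=16):
    """Partition RGB space into bins**3 axis-aligned parallelepipeds and
    replace each pixel's color with the mean color of its parallelepiped.
    The bins-per-axis value is an assumed parameter, not from the paper."""
    img = image.reshape(-1, 3).astype(np.float64)
    # Index of the parallelepiped each pixel falls into (0..bins-1 per axis).
    idx = np.minimum((img / 256.0 * bins).astype(int), bins - 1)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    # Mean color of every occupied parallelepiped.
    counts = np.bincount(flat, minlength=bins ** 3).astype(np.float64)
    sums = np.zeros((bins ** 3, 3))
    for c in range(3):
        sums[:, c] = np.bincount(flat, weights=img[:, c], minlength=bins ** 3)
    means = sums / np.maximum(counts, 1)[:, None]
    return means[flat].reshape(image.shape).astype(np.uint8)
```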

2019 ◽  
Vol 12 (9) ◽  
pp. 4713-4724
Author(s):  
Chaojun Shi ◽  
Yatong Zhou ◽  
Bo Qiu ◽  
Jingfei He ◽  
Mu Ding ◽  
...  

Abstract. Cloud segmentation plays a very important role in astronomical observatory site selection. At present, few researchers have segmented clouds in nocturnal all-sky imager (ASI) images. This paper proposes a new automatic cloud segmentation algorithm that utilizes the advantages of deep-learning fully convolutional networks (FCNs) to segment cloud pixels from diurnal and nocturnal ASI images; it is called the enhancement fully convolutional network (EFCN). Firstly, all the ASI images in the data set from the Key Laboratory of Optical Astronomy at the National Astronomical Observatories of the Chinese Academy of Sciences (CAS) are converted from the red–green–blue (RGB) color space to the hue saturation intensity (HSI) color space. Secondly, the I channel of the HSI color space is enhanced by histogram equalization. Thirdly, all the ASI images are converted from the HSI color space back to the RGB color space. Then, after 100 000 training iterations on the ASI images in the training set, the optimum associated parameters of the EFCN-8s model are obtained. Finally, we use the trained EFCN-8s to segment the cloud pixels of the ASI images in the test set. In the experiments, our proposed EFCN-8s was compared with four other algorithms (OTSU, FCN-8s, EFCN-32s, and EFCN-16s) using four evaluation metrics. Experiments show that the EFCN-8s is much more accurate in cloud segmentation for diurnal and nocturnal ASI images than the other four algorithms.
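
The RGB-to-HSI-to-RGB enhancement chain can be approximated without an explicit HSI conversion: scaling a pixel's three RGB channels by a common factor leaves HSI hue and saturation unchanged, so equalizing the intensity I = (R+G+B)/3 and rescaling accordingly reproduces the enhancement step. A minimal sketch (clipping at the gamut boundary can slightly perturb hue):

```python
import cv2
import numpy as np

def enhance_intensity(image_bgr):
    """Equalize the HSI intensity channel I = (R+G+B)/3 while preserving
    hue and saturation, by rescaling each pixel's channels by the ratio
    of equalized to original intensity."""
    img = image_bgr.astype(np.float64)
    intensity = img.mean(axis=2)
    eq = cv2.equalizeHist(intensity.astype(np.uint8)).astype(np.float64)
    scale = eq / np.maximum(intensity, 1e-6)
    out = img * scale[:, :, None]
    return np.clip(out, 0, 255).astype(np.uint8)
```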


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Zhuhan Jiang

We propose to model a tracked object in a video sequence by locating a list of object features that are ranked according to their ability to differentiate the object from the image background. Bayesian inference is utilised to derive the probabilistic location of the object in the current frame, with the prior approximated from the previous frame and the posterior obtained via the current pixel distribution of the object. Consideration has also been given to a number of relevant aspects of object tracking, including multidimensional features and the mixture of colours, textures, and object motion. Experiments with the proposed method on video sequences have shown its effectiveness in capturing the target against a moving background and under nonrigid object motion.
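
The recursion behind this tracker is the standard Bayesian filtering update. The sketch below makes it concrete on a one-dimensional grid of candidate locations, with a Gaussian motion prior and a Gaussian pixel-match likelihood standing in (as assumptions) for the paper's frame-derived distributions:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One step of the recursion: posterior(x) is proportional to
    likelihood(z|x) * prior(x), with the prior approximated from the
    previous frame's posterior."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Toy usage over a 1-D grid of candidate locations.
grid = np.arange(100)
prior = np.exp(-0.5 * ((grid - 50) / 5.0) ** 2)       # motion prior around last location
prior /= prior.sum()
likelihood = np.exp(-0.5 * ((grid - 53) / 3.0) ** 2)  # pixel-distribution match score
posterior = bayes_update(prior, likelihood)
print(grid[np.argmax(posterior)])  # MAP location estimate for the current frame
```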


Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different sweet cherry cultivars using image analysis. The textures from images converted to color channels and the geometric parameters of the endocarps (pits) of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’ were calculated. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as of ‘Kordia’ and ‘Büttner’s Red’, were also discriminated with 100% correctness by models built separately for the RGB, Lab, and XYZ color spaces, for the G, L, and Y color channels, and by models combining selected textural and geometric features. For discriminating the ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were obtained: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
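
The abstract does not list the individual texture parameters. As one plausible reading, the sketch below computes gray-level co-occurrence (GLCM) textures per RGB channel with scikit-image; the chosen properties, offsets, and toolchain are assumptions, not the study's exact feature set:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_textures(channel):
    """GLCM texture descriptors for one 8-bit color channel of a pit image."""
    glcm = graycomatrix(channel, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return {p: graycoprops(glcm, p).mean() for p in props}

def pit_features(image_rgb):
    """Concatenate texture features from the R, G, and B channels."""
    feats = {}
    for i, name in enumerate("RGB"):
        for p, v in channel_textures(image_rgb[:, :, i]).items():
            feats[f"{name}_{p}"] = v
    return feats
```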


2021 ◽  
Vol 13 (6) ◽  
pp. 1211
Author(s):  
Pan Fan ◽  
Guodong Lang ◽  
Bin Yan ◽  
Xiaoyan Lei ◽  
Pengju Guo ◽  
...  

In recent years, many agriculture-related problems have been addressed by integrating artificial intelligence techniques with remote sensing systems. The rapid and accurate identification of apple targets in an unevenly illuminated, unstructured natural orchard is still a key challenge for the picking robot's vision system. In this paper, by combining local image features and color information, we propose a pixel patch segmentation method based on a gray-centered red–green–blue (RGB) color space to address this issue. Unlike existing methods, this method presents a novel color feature selection scheme that accounts for the influence of illumination and shadow in apple images. By exploring both color features and local variation in apple images, the proposed method can effectively distinguish apple fruit pixels from other pixels. Compared with classical segmentation methods and conventional clustering algorithms, as well as popular deep-learning segmentation algorithms, the proposed method segments apple images more accurately and effectively. The proposed method was tested on 180 apple images. It achieved an average accuracy rate of 99.26%, a recall rate of 98.69%, a false positive rate of 0.06%, and a false negative rate of 1.44%. Experimental results demonstrate the outstanding performance of the proposed method.
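
One way to read "gray-centered RGB" is as a translation of the color space so that the gray axis passes through the origin, making a pixel's vector measure its deviation from neutral gray; this damps illumination and shadow effects, because shading moves pixels along the gray axis. The sketch below follows that reading, which is an assumption rather than the paper's exact construction:

```python
import numpy as np

def gray_centered(image_rgb):
    """Map RGB pixels into a gray-centered frame: subtract each pixel's own
    intensity so the gray axis collapses to the origin. Pixels near gray
    (shadow or highlight on any surface) get short vectors; strongly
    colored apple pixels get long ones."""
    img = image_rgb.astype(np.float64)
    gray = img.mean(axis=2, keepdims=True)   # projection onto the gray axis
    return img - gray                        # deviation from neutral gray

def red_score(image_rgb):
    """Toy apple-pixel score (hypothetical helper): how strongly the
    deviation from gray points toward the red channel."""
    return gray_centered(image_rgb)[:, :, 0]
```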


2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm is proposed herein, combining pre- and post-segmentation improvements. Image contrast enhancement was used as the pre-improvement, while the color distance in a Commission Internationale de l'Éclairage (CIE) color space, namely Lab or Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the red–green–blue (RGB) color space was selected as the baseline for comparison in extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image covering 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation quality and time efficiency with the improved algorithm; the time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000 and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Relative to the RGB color space, the extraction accuracy in terms of δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83%, respectively, in the Lab color space and by 55.79%, 49.67%, and 13.42% in the Luv color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction performance of the proposed algorithm was clearly better than that of the RGB-based variant, and the established accuracy evaluation indicators were shown to be consistent with the visual evaluation. (5) The proposed method transferred satisfactorily to a wider test area covering 1 km2. In summary, the proposed method enhances image contrast and then performs region merging in a CIE color space on the simulated immersion watershed segmentation results; it is a useful extension of the watershed segmentation algorithm for extracting cultivated land boundaries and provides a reference for further enhancing the watershed algorithm.
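
The region-merging criterion can be sketched as follows, interpreting the study's color-difference parameter D as the Euclidean distance between region means in CIE Lab (that interpretation, and the helper names, are assumptions):

```python
import numpy as np
from skimage import color

def region_mean_lab(lab, labels, region_id):
    """Mean CIE Lab color of one watershed region (lab = color.rgb2lab(rgb))."""
    return lab[labels == region_id].mean(axis=0)

def should_merge(lab, labels, a, b, d_max=40.0):
    """Merge two adjacent regions when the Euclidean distance between their
    mean Lab colors is below d_max; 40 is the Lab-space merging scale the
    study reports as optimal, but treating D as a Euclidean Lab distance
    is our assumption."""
    d = np.linalg.norm(region_mean_lab(lab, labels, a) -
                       region_mean_lab(lab, labels, b))
    return d < d_max

# Usage sketch: lab = color.rgb2lab(image_rgb); labels from a watershed run.
```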


Author(s):  
HUA YANG ◽  
MASAAKI KASHIMURA ◽  
NORIKADU ONDA ◽  
SHINJI OZAWA

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book, geometrically corrects the input image, and segments it into the front cover, spine, and back cover. As in all color image processing research, the segmentation of the color space is an essential and important step here. Instead of the RGB color space, the HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions; both regions are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories: author, title, publisher, and other information, without applying OCR.
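
The achromatic/chromatic split can be sketched with the standard HSI saturation formula; the threshold below is an assumed value, not the paper's decision rule:

```python
import numpy as np

def split_achromatic(image_rgb, s_thresh=0.1):
    """Split pixels into achromatic and chromatic sets using HSI saturation
    S = 1 - min(R,G,B)/I, where I = (R+G+B)/3. Pixels with saturation
    below the (assumed) threshold are treated as achromatic."""
    img = image_rgb.astype(np.float64) + 1e-6   # avoid division by zero
    intensity = img.mean(axis=2)
    saturation = 1.0 - img.min(axis=2) / intensity
    achromatic = saturation < s_thresh
    return achromatic, ~achromatic
```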


2021 ◽  
Vol 13 (5) ◽  
pp. 919
Author(s):  
Marco Gabella

A previous study used the stable and peculiar echoes backscattered by a single “bright scatterer” (BS) during five winter days to characterize the hardware of the C-band dual-polarization radar located at Monte Lema (1625 m altitude) in southern Switzerland. The BS is the 90 m tall metallic tower on Cimetta (1633 m altitude, 18 km range). In this note, the statistics of the echoes from the BS were derived from another ten dry days with normal propagation conditions in winter 2015 and January 2019. The study confirms that spectral signatures, such as spectrum width, wideband noise, and Doppler velocity, were persistently stable. Regarding the polarimetric signatures, the large values (with small dispersion) of the copolar correlation coefficient between horizontal and vertical polarization were also confirmed: the average value was 0.9961 (0.9982) in winter 2015 (January 2019), and the daily standard deviations were very small, ranging from 0.0007 to 0.0030. The dispersion of the differential phase shift was also confirmed to be quite small: the daily standard deviation ranged from a minimum of 2.5° to a maximum of 5.3°. Radar reflectivities in both polarizations were typically around 80 dBZ and were confirmed to be among the largest values observed in the surveillance volume of the Monte Lema radar. Finally, another recent 5-day data set, from January 2020, was analyzed after the replacement of the radar calibration unit, which includes the low-noise amplifiers: these five days show poorer characteristics of the polarimetric signatures and a few outliers affecting the spectral signatures. It was shown that the “historical” polarimetric and spectral signatures of a bright scatterer can serve as a benchmark for in-depth comparison after hardware replacements.


2021 ◽  
Vol 7 (8) ◽  
pp. 150
Author(s):  
Kohei Inoue ◽  
Minyao Jiang ◽  
Kenji Hara

This paper proposes a method for improving saturation in the context of hue-preserving color image enhancement. The proposed method handles colors in an RGB color space, which has the form of a cube, and enhances the contrast of a given image by histogram manipulation of the intensity image, such as histogram equalization or histogram specification. Then, the color corresponding to a target intensity is determined in a hue-preserving manner, where the gamut problem must be taken into account. We first project each color onto a surface that bisects the RGB color cube, to increase the saturation without a gamut problem. Then, we adjust the intensity of the saturation-enhanced color to the target intensity given by the histogram manipulation. The experimental results demonstrate that the proposed method achieves higher saturation than related methods for hue-preserving color image enhancement.
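
A simplified stand-in for the bisecting-surface projection: push a pixel away from the gray axis as far as the RGB cube allows (maximizing saturation at constant hue), then re-anchor it at the target intensity, shrinking the chroma only if the cube forces it. The construction below is a sketch of that idea, not the paper's exact mapping:

```python
import numpy as np

def enhance_pixel(rgb, target_intensity):
    """Hue-preserving enhancement sketch. The chroma vector (deviation from
    gray) has zero mean, so scaling it changes saturation but neither hue
    nor intensity; the final intensity equals target_intensity exactly."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = rgb.mean()
    chroma = rgb - gray                      # direction of constant hue
    if not chroma.any():
        return np.full(3, target_intensity)  # gray stays gray
    with np.errstate(divide="ignore", invalid="ignore"):
        # Largest factor t with gray + t*chroma still inside [0, 255]^3.
        t_sat = np.where(chroma > 0, (255 - gray) / chroma,
                         np.where(chroma < 0, -gray / chroma, np.inf)).min()
        # Gamut limit after re-anchoring at the target intensity.
        t_gamut = np.where(chroma > 0, (255 - target_intensity) / chroma,
                           np.where(chroma < 0, -target_intensity / chroma,
                                    np.inf)).min()
    t = min(t_sat, t_gamut)
    return np.clip(target_intensity + t * chroma, 0, 255)
```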

