Advances in Environmental Engineering and Green Technologies - Computer Vision and Pattern Recognition in Environmental Informatics
Latest Publications

TOTAL DOCUMENTS: 16 (Five Years: 0)
H-INDEX: 2 (Five Years: 0)
Published By: IGI Global
ISBN: 9781466694354, 9781466694361

Author(s): Haoyi Zhou, Jun Zhou, Haichuan Yang, Cheng Yan, Xiao Bai, ...

Imaging devices are increasingly used in environmental research, creating an urgent need to address issues such as matching image features across different dimensions. Among these tasks, matching hyperspectral images with other types of images is challenging due to the high dimensional nature of hyperspectral data. This chapter addresses the problem by investigating structured support vector machines to construct and learn a graph-based model for each type of image. The graph model incorporates both low-level features and stable correspondences within images. The inherent characteristics are captured by running a graph matching algorithm on the extracted weighted graph models. The effectiveness of the method is demonstrated through experiments on matching hyperspectral images to RGB images, and on matching hyperspectral images of different dimensions, using images of natural objects.
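The structured SVM learning itself is beyond the scope of an abstract, but the underlying graph-based representation can be sketched. Below is a minimal, hypothetical Python illustration: nodes carry low-level feature vectors, each node keeps a profile of its edge weights, and a greedy node-affinity assignment stands in for the chapter's full graph matching algorithm (the function names and Gaussian affinity kernel are assumptions, not the authors' implementation).

```python
import math

def build_graph(points, feats):
    """Weighted graph: each node pairs a low-level feature vector with a
    sorted profile of its edge weights (pairwise distances), a crude
    summary of local structure within the image."""
    n = len(points)
    profiles = [sorted(math.dist(points[i], points[j])
                       for j in range(n) if j != i)
                for i in range(n)]
    return list(zip(feats, profiles))

def affinity(node1, node2):
    """Combine feature similarity and edge-profile similarity
    (hypothetical Gaussian kernels, not a learned model)."""
    f1, e1 = node1
    f2, e2 = node2
    df = sum((a - b) ** 2 for a, b in zip(f1, f2))
    de = sum((a - b) ** 2 for a, b in zip(e1, e2))
    return math.exp(-df) * math.exp(-de)

def greedy_match(g1, g2):
    """Greedy stand-in for graph matching: assign node pairs in order of
    descending affinity, never reusing a node."""
    pairs = sorted(((affinity(n1, n2), i, j)
                    for i, n1 in enumerate(g1)
                    for j, n2 in enumerate(g2)), reverse=True)
    used_i, used_j, match = set(), set(), {}
    for _, i, j in pairs:
        if i not in used_i and j not in used_j:
            match[i] = j
            used_i.add(i)
            used_j.add(j)
    return match
```

In practice the affinity would be learned (the chapter's structured SVM) and the assignment solved with a proper graph matching algorithm rather than greedily.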



Author(s): Ali Zia, Jie Liang

Plant phenomics research requires different types of sensors to measure the physical traits of plant surfaces and to estimate biomass. Of particular interest is the hyperspectral imaging device, which captures wavelength-indexed band images that characterize the material properties of the objects under study. This chapter introduces proof-of-concept research that builds a 3D plant model directly from hyperspectral images captured in a controlled lab environment. The method presented in this chapter allows fine structural-spectral information of an object to be captured and integrated into the 3D model, which can be used to support further research and applications. Hyperspectral imaging has shown clear advantages in segmenting a plant from its background and is very promising for generating comprehensive 3D plant models.
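The abstract does not state which segmentation rule the authors use, but a common way to exploit spectral bands for plant/background separation is a vegetation index such as NDVI, computed from a red and a near-infrared band. The sketch below is an assumed, standard-index illustration, not the chapter's method:

```python
def ndvi(red, nir):
    """Normalized difference vegetation index from two band reflectances.
    Vegetation reflects strongly in near-infrared and absorbs red."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def segment_plant(red_band, nir_band, threshold=0.3):
    """Binary plant mask: pixels whose NDVI exceeds the threshold.
    Bands are row-major nested lists of reflectances in [0, 1]."""
    return [[ndvi(r, n) > threshold for r, n in zip(rrow, nrow)]
            for rrow, nrow in zip(red_band, nir_band)]
```

The threshold of 0.3 is a typical rule of thumb; in a controlled lab setup it would be tuned to the illumination and sensor.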



Author(s): Xiaozheng Zhang, Yongsheng Gao

3D modeling plays an important role in computer vision and image processing. It provides a convenient tool set for many environmental informatics tasks, such as taxonomy and species identification. This chapter discusses a novel way of building 3D models of objects from their varying 2D views. The appearance of a 3D object depends on both the viewing direction and the illumination conditions. What is the set of images of an object under all viewing directions? In this chapter, a novel image representation is proposed that transforms any n-pixel image of a 3D object into a vector in a 2n-dimensional pose space. In such a pose space, it is proven that the transformed images of a 3D object under all viewing directions form a parametric manifold in a 6-dimensional linear subspace. In particular, for in-depth rotations about a single axis, this manifold is an ellipse. Furthermore, it is shown that this parametric pose manifold of a convex object can be estimated from a few images in different poses and used to predict the object's appearance under unseen viewing directions. These results immediately suggest a number of approaches to object recognition, scene detection, and 3D modeling applicable to environmental informatics. Experiments on both synthetic data and real images are reported, demonstrating the validity of the proposed representation.



Author(s): Hanqing Lu, Xinwen Hou, Cheng-Lin Liu, Xiaolin Chen

Insect recognition is a hard problem because the differences in appearance between insect species are so small that only entomological experts can distinguish them. In addition, insects are often composed of several parts (multiple views), which introduces more degrees of freedom. This chapter proposes several discriminative coding approaches and a decision fusion scheme over heterogeneous class sets for insect recognition. The three discriminative coding methods use class-specific concatenated vectors instead of traditional global coding vectors for insect image patches. The decision fusion scheme uses an allocation matrix for classifier selection and a weight matrix for classifier fusion, which makes it suitable for combining classifiers with heterogeneous class sets in multi-view insect image recognition. Experimental results on a Tephritidae dataset show that the three proposed discriminative coding methods perform well, and that the proposed fusion scheme improves recognition accuracy significantly.
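The fusion scheme can be sketched concretely: an allocation structure records which classes each classifier is trusted on, and a weight vector sets each classifier's influence. The sketch below is a minimal weighted-average interpretation of that idea (the data layout and weighting rule are assumptions; the chapter's matrices may be defined differently):

```python
def fuse(scores, allocation, weights, classes):
    """Fuse classifiers with heterogeneous class sets.

    scores[k]     : dict mapping class -> score from classifier k
    allocation[k] : set of classes classifier k is allocated to
    weights[k]    : fusion weight of classifier k
    Returns the winning class and the fused score per class."""
    fused = {}
    for c in classes:
        num, den = 0.0, 0.0
        for k, s in enumerate(scores):
            if c in allocation[k] and c in s:
                num += weights[k] * s[c]
                den += weights[k]
        # Classes outside every allocated set get a fused score of 0.
        fused[c] = num / den if den else 0.0
    return max(fused, key=fused.get), fused
```

Because the allocation step masks out classes a classifier was never trained on, classifiers covering different (overlapping) class sets can still be combined coherently.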



Author(s): Alice Ahlem Othmani

Due to the increasing use of Terrestrial LiDAR Scanning (TLS, also called T-LiDAR) technology in the forestry domain, many researchers and forest management organizations have developed algorithms for the automatic measurement of forest inventory attributes. However, to the best of our knowledge, little has been done on single tree species recognition based on T-LiDAR data, despite its importance for the assessment of forestry resources. This chapter sheds light on the few works reported in the literature. The various algorithms presented here use bark texture criteria and can be categorized into three families of approaches: those that combine T-LiDAR technology and photogrammetry, those based on depth images generated from T-LiDAR data, and those based on raw 3D point clouds.



Author(s): Donatella Giuliani

This chapter presents a method to compute the skeletal curve of shapes extracted from real-world images. This skeletonization approach has proved effective when applied to recognizing biological forms, regardless of their complexity. Colour and grayscale images are pre-processed and converted into binary images through segmentation. Generally, the resulting binary images contain two-dimensional bounded shapes that are not simply connected. Edge extraction is performed with a parametric active contour procedure using a generalized external force field. The force field is evaluated through an anisotropic diffusion equation, and it is observed that the divergence of the field satisfies an anisotropic diffusion equation as well. Moreover, the curves of positive divergence can be considered as propagating fronts that converge to a steady state: the skeleton of the extracted object. The methodology has also been tested on shapes with boundary perturbations and disconnections.
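The key numerical ingredient, taking the divergence of the external force field and keeping pixels where it is positive, can be sketched with simple finite differences. This is an assumed discretization for illustration; the chapter's anisotropic diffusion solver is not reproduced here:

```python
def divergence(fx, fy):
    """Central-difference divergence of a 2D vector field (fx, fy) given
    as row-major nested lists; borders fall back to one-sided differences."""
    h, w = len(fx), len(fx[0])
    div = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # d(fx)/dx and d(fy)/dy with step 2 in the interior, 1 at borders
            dx = (min(x + 1, w - 1) - max(x - 1, 0)) or 1
            dy = (min(y + 1, h - 1) - max(y - 1, 0)) or 1
            dfx = (fx[y][min(x + 1, w - 1)] - fx[y][max(x - 1, 0)]) / dx
            dfy = (fy[min(y + 1, h - 1)][x] - fy[max(y - 1, 0)][x]) / dy
            div[y][x] = dfx + dfy
    return div

def skeleton_candidates(fx, fy, eps=0.0):
    """Pixels of positive divergence, the propagating fronts that the
    chapter identifies with the emerging skeleton."""
    return [[v > eps for v in row] for row in divergence(fx, fy)]
```

In a full pipeline the thresholded divergence map would still be thinned and linked into curves; eps trades robustness to noise against skeleton completeness.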



Author(s): Magnus Oskarsson, Tobias Kjellberg, Tobias Palmér, Dan-Eric Nilsson, Kalle Åström

In this chapter, a system for tracking the motion of the box jellyfish Tripedalia cystophora in a special test setup is investigated. The goal is to measure the motor response of the animal given certain visual stimuli. The approach is based on tracking the special sensory structures of the box jellyfish, the rhopalia, in high-speed video sequences. The focus has been on a real-time system built from simple components. By combining simple intensity-based detection with model-based tracking, promising tracking results with up to 95% accuracy are achieved.
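The "simple building blocks" pairing of intensity-based detection with a tracking model can be sketched as thresholded centroid detection plus nearest-neighbour gating. This is a deliberately simplified single-target sketch (a real rhopalia tracker handles four structures and a motion model); all names and parameters are illustrative assumptions:

```python
import math

def detect(frame, threshold):
    """Intensity-based detection: centroid of pixels above threshold,
    a crude stand-in for per-frame rhopalium detection."""
    pts = [(x, y) for y, row in enumerate(frame)
           for x, v in enumerate(row) if v > threshold]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def track(frames, threshold=0.5, gate=5.0):
    """Link per-frame detections into one trajectory: accept a detection
    only if it lies within a gating distance of the previous position."""
    trajectory = []
    for frame in frames:
        det = detect(frame, threshold)
        if det is None:
            continue
        if trajectory and math.dist(det, trajectory[-1]) > gate:
            continue  # jump too large: likely a false detection
        trajectory.append(det)
    return trajectory
```

The gating step is what makes the pipeline a tracker rather than independent per-frame detection: it enforces temporal consistency cheaply, which matters for a real-time system.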



Author(s): Varun Santhaseelan, Vijayan K. Asari

In this chapter, solutions to the problem of whale blow detection in infrared video are presented. They are intended as assistive technology to help whale researchers sift through hours or days of video without manual intervention. Video is captured from an elevated position along the shoreline using an infrared camera, and the presence of whales is inferred from the blows detected in the video. Three solutions are proposed: the first algorithm uses a neural network (a multi-layer perceptron) for classification, the second uses fractal features, and the third uses convolutional neural networks. The central idea of all three algorithms is to model the spatio-temporal characteristics of a whale blow accurately using appropriate mathematical models. A detailed description and analysis of the proposed solutions, the challenges, and some possible directions for future research are provided.
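Before any of the three classifiers runs, candidate regions must be found. One plausible (assumed, not the authors') front end exploits the spatio-temporal signature of a blow in infrared: a sudden, spatially coherent rise in intensity between consecutive frames:

```python
def blow_candidates(prev, curr, rise=0.3):
    """Flag pixels whose infrared intensity rises sharply from one frame
    to the next, a crude proxy for the sudden warm plume of a blow."""
    return [[(c - p) > rise for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def has_blow(prev, curr, min_pixels=2, rise=0.3):
    """Declare a candidate blow only when enough pixels rise together,
    suppressing isolated single-pixel noise."""
    mask = blow_candidates(prev, curr, rise)
    return sum(v for row in mask for v in row) >= min_pixels
```

Regions passing this cheap test would then be handed to the MLP, fractal-feature, or CNN classifier for the actual blow/non-blow decision.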



Author(s): Sebastian Haug, Jörn Ostermann

Small agricultural robots capable of sensing and manipulating the field environment are a promising approach toward more ecological, sustainable, and human-friendly agriculture. This chapter proposes a machine vision approach for plant classification in the field and discusses its possible application in robot-based precision agriculture. The challenges of machine vision in the field are discussed using the example of plant classification for weed control. Automatic crop/weed discrimination enables new weed control strategies in which single weed plants are treated individually. System development and evaluation are carried out on a dataset of images captured under field conditions in a commercial organic carrot farm with the autonomous field robot Bonirob. Results indicate plant classification performance of 93% average accuracy.



Author(s): Prasanna Kannappan, Herbert G. Tanner, Arthur C. Trembanis, Justin H. Walker

A large volume of image data, on the order of thousands to millions of images, can be generated by robotic marine surveys aimed at assessing organism populations. Manual processing and annotation of individual images in such large datasets is not an attractive option. It would seem that computer vision and machine learning techniques could automate this process, yet to date, available automated detection and counting tools for scallops do not work well with noisy low-resolution images and are bound to produce very high false positive rates. In this chapter, we hone a recently developed method for automated scallop detection and counting with the aim of drastically reducing its false positive rate. In the process, we compare the performance of two customized false positive filtering alternatives: histogram of oriented gradients and weighted correlation template matching.
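The second filtering alternative, weighted correlation template matching, can be sketched directly: candidate patches are scored against a scallop template by a correlation in which per-pixel weights emphasize the discriminative parts of the template. The layout below (flattened patches, a single weight vector, an acceptance threshold) is an illustrative assumption, not the authors' exact formulation:

```python
import math

def weighted_correlation(patch, template, weights):
    """Weighted normalized correlation between a flattened candidate
    patch and a flattened template; returns a value in [-1, 1]."""
    wsum = sum(weights)
    mp = sum(w * p for w, p in zip(weights, patch)) / wsum
    mt = sum(w * t for w, t in zip(weights, template)) / wsum
    num = sum(w * (p - mp) * (t - mt)
              for w, p, t in zip(weights, patch, template))
    dp = math.sqrt(sum(w * (p - mp) ** 2 for w, p in zip(weights, patch)))
    dt = math.sqrt(sum(w * (t - mt) ** 2 for w, t in zip(weights, template)))
    return num / (dp * dt) if dp and dt else 0.0

def is_false_positive(patch, template, weights, accept=0.5):
    """Reject a detection whose weighted correlation with the scallop
    template falls below the acceptance threshold."""
    return weighted_correlation(patch, template, weights) < accept
```

Running such a filter over the raw detector's output trades a small loss in recall for the large drop in false positives that the chapter targets.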


