Segmentation of Database Images Using Gradient Based Multiscale Morphological Reconstruction

Author(s):  
A. Nithya ◽  
R. Kayalvizhi

The main purpose of this research is to improve the accuracy of object segmentation in database images by constructing an object segmentation algorithm. Image segmentation is a crucial step in image processing and pattern recognition: it identifies structures in an image that can be used for further processing. Both region-based and object-based segmentation are applied to large-scale database images in a robust and principled manner. Gradient based MultiScalE Graylevel mOrphological recoNstructions (G-SEGON) is used to segment an image; SEGON roughly identifies the background and object regions in the image. The proposed method comprises four phases: a pre-processing phase, an object identification phase, an object region segmentation phase, and a majority selection and refinement phase. After the grey-level mesh is developed, the resulting image is converted into a gradient image, and the K-means clustering algorithm is used to segment the object from it. After implementation, the accuracy of the proposed G-SEGON technique is compared with that of an existing method to demonstrate its effectiveness.
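The K-means step of the pipeline can be illustrated with a minimal numpy sketch: cluster the gradient magnitudes of an image into object-edge and background groups. This is a generic 1-D K-means on pixel values, not the authors' G-SEGON implementation; the synthetic image and cluster count are assumptions.

```python
import numpy as np

def kmeans_segment(gradient, k=2, iters=20):
    """Cluster gradient magnitudes into k groups (e.g. object edges vs. background)."""
    values = gradient.ravel().astype(float)
    # Spread the initial centers over the value range via quantiles.
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels.reshape(gradient.shape), centers

# Synthetic image: a bright square (the "object") on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
gy, gx = np.gradient(img)            # finite-difference gradient
labels, centers = kmeans_segment(np.hypot(gx, gy), k=2)
```

Pixels on the square's boundary (high gradient) end up in one cluster, flat regions in the other, which is the separation the gradient image provides for the later refinement phases.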

2021 ◽  
Vol 13 (3) ◽  
pp. 364
Author(s):  
Han Gao ◽  
Jinhui Guo ◽  
Peng Guo ◽  
Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised cases where the pixel-level ground-truth labels are dense. In this work, we propose a new object-oriented deep learning framework that leverages residual networks with different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets involving both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods and an excellent inference time (11.3 s/ha).
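The multibranch idea of combining feature representations from different neighboring scales can be sketched in numpy. Each "branch" here is reduced to a mean-pooled window of a different size around a pixel; the window sizes and pooling are assumptions standing in for the residual networks of different depths described in the abstract.

```python
import numpy as np

def multiscale_features(image, row, col, scales=(3, 7, 15)):
    """Concatenate mean-pooled neighborhoods at several scales around a pixel.

    Each scale plays the role of one branch; a real model would replace the
    mean pooling with a residual CNN of matching depth.
    """
    feats = []
    for s in scales:
        half = s // 2
        # Clip the window at the image borders.
        r0, r1 = max(0, row - half), min(image.shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(image.shape[1], col + half + 1)
        patch = image[r0:r1, c0:c1]
        # One pooled statistic per branch keeps the sketch compact.
        feats.append(patch.mean())
    return np.array(feats)

img = np.arange(100, dtype=float).reshape(10, 10)
f = multiscale_features(img, 5, 5)
```

The small window carries strong local detail, the large window weak but broader semantics; concatenating them is the tradeoff the framework exploits.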


2019 ◽  
Vol 12 (1) ◽  
pp. 96 ◽  
Author(s):  
James Brinkhoff ◽  
Justin Vardanega ◽  
Andrew J. Robson

Land cover mapping of intensive cropping areas facilitates an enhanced regional response to biosecurity threats and to natural disasters such as drought and flooding. Such maps also provide information for natural resource planning and analysis of the temporal and spatial trends in crop distribution and gross production. In this work, 10 meter resolution land cover maps were generated over a 6200 km2 area of the Riverina region in New South Wales (NSW), Australia, with a focus on locating the most important perennial crops in the region. The maps discriminated between 12 classes, including nine perennial crop classes. A satellite image time series (SITS) of freely available Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 multispectral imagery was used. A segmentation technique grouped spectrally similar adjacent pixels together, to enable object-based image analysis (OBIA). K-means unsupervised clustering was used to filter training points and classify some map areas, which improved supervised classification of the remaining areas. The support vector machine (SVM) supervised classifier with radial basis function (RBF) kernel gave the best results among several algorithms trialled. The accuracies of maps generated using several combinations of the multispectral and radar bands were compared to assess the relative value of each combination. An object-based post classification refinement step was developed, enabling optimization of the tradeoff between producers’ accuracy and users’ accuracy. Accuracy was assessed against randomly sampled segments, and the final map achieved an overall count-based accuracy of 84.8% and area-weighted accuracy of 90.9%. Producers’ accuracies for the perennial crop classes ranged from 78 to 100%, and users’ accuracies ranged from 63 to 100%. This work develops methods to generate detailed and large-scale maps that accurately discriminate between many perennial crops and can be updated frequently.
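The producers' and users' accuracies that the post-classification refinement trades off come straight from the confusion matrix. A minimal numpy sketch (the 2-class matrix below is illustrative, not data from the study):

```python
import numpy as np

def accuracy_metrics(confusion):
    """Producer's and user's accuracy per class from a confusion matrix.

    confusion[i, j] counts reference samples of class i that the map
    assigned to class j.
    """
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)
    producers = correct / confusion.sum(axis=1)  # divide by reference totals
    users = correct / confusion.sum(axis=0)      # divide by mapped totals
    overall = correct.sum() / confusion.sum()
    return producers, users, overall

# Hypothetical 2-class confusion matrix (counts are illustrative only).
cm = [[50, 10],
      [5, 35]]
pa, ua, oa = accuracy_metrics(cm)
```

Merging or relabeling uncertain segments moves counts between the off-diagonal cells, which is why a refinement step can raise one accuracy at the cost of the other.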


Author(s):  
Alexander Miropolsky ◽  
Anath Fischer

The inspection of machined objects is one of the most important quality control tasks in the manufacturing industry. Contemporary scanning technologies have provided the impetus for the development of computational inspection methods, where the computer model of the manufactured object is reconstructed from the scan data and then verified against its digital design model. Scan data, however, are typically very large scale (i.e., many points), unorganized, noisy, and incomplete, so reconstruction is problematic. To overcome these problems, reconstruction methods may exploit diverse feature data, that is, diverse information about the properties of the scanned object. Based on this concept, the paper proposes a new method for denoising and reduction of scan data by an extended geometric filter. The proposed method is applied directly to the scanned points and is automatic, fast, and straightforward to implement. The paper demonstrates the integration of the proposed method into the framework of the computational inspection process.
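Neighborhood-based denoising of unorganized scan points can be illustrated with a much simpler stand-in for the paper's extended geometric filter: replace each point by the centroid of its k nearest neighbors. The point cloud, noise level, and k are assumptions for demonstration only.

```python
import numpy as np

def knn_smooth(points, k=8):
    """Replace each point by the centroid of its k nearest neighbors.

    A simple neighborhood filter shown only to illustrate denoising of
    unorganized scan data; not the paper's extended geometric filter.
    """
    pts = np.asarray(points, dtype=float)
    # Pairwise squared distances (fine for small clouds; use a k-d tree at scale).
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]  # each point is its own nearest neighbor
    return pts[idx].mean(axis=1)

# Noisy samples of a straight scan line along the x-axis.
rng = np.random.default_rng(1)
clean = np.column_stack([np.linspace(0, 1, 200), np.zeros(200), np.zeros(200)])
noisy = clean + rng.normal(scale=0.02, size=clean.shape)
smoothed = knn_smooth(noisy, k=10)
```

Averaging over k neighbors shrinks the off-line noise roughly by a factor of sqrt(k), at the cost of some smoothing of sharp features, which is exactly why feature-aware filters such as the one proposed here are needed.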


2021 ◽  
Author(s):  
Béla Kovács ◽  
Márton Pál ◽  
Fanni Vörös

<p>The use of aerial photography in topography started in the first decades of the 20<sup>th</sup> century. Remotely sensed data have become indispensable for cartographers and GIS staff doing large-scale mapping: especially topographic, orienteering and thematic maps. The use of UAVs (unmanned aerial vehicles) for this purpose has also become widespread in recent years. Various drones and sensors (RGB, multispectral and hyperspectral) with many specifications are used to capture and process the physical properties of an examined area. In parallel with the development of the hardware, new software solutions are emerging to visualize and analyse photogrammetric material: a large set of algorithms with different approaches is available for image processing.</p><p>Our study focuses on the large-scale topographic mapping of vegetation and land cover. Most traditional analogue and digital maps use these layers either for background or for highlighted thematic purposes. We propose to use the theory of OBIA – Object-based Image Analysis – to differentiate cover types. This method groups pixels into larger polygon units based on either spectral or other variables (e.g. elevation, aspect, curvature in the case of DEMs). The neighbours of initial seed points are examined to decide whether they should be added to the region according to the similarity of their attributes. Using OBIA, different land cover types (trees, grass, soils, bare rock surfaces) can be distinguished with either supervised or unsupervised classification – depending on the purposes of the analyst. Our base data were high-resolution RGB and multispectral images (with 5 bands).</p><p>Following this methodology, not only elevation data (e.g. shaded relief or vector contour lines) can be derived from UAV imagery, but vector land cover data also become available for cartographers and GIS analysts. As the number of distinct land cover groups is free to choose, even quite complex thematic layers can be produced. These layers can serve as subjects of further analyses or for cartographic visualization.</p><p> </p><p>BK is supported by the “Application Domain Specific Highly Reliable IT Solutions” project, implemented with support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme.</p><p>MP and FV are supported by EFOP-3.6.3-VEKOP-16-2017-00001: Talent Management in Autonomous Vehicle Control Technologies – The project is financed by the Hungarian Government and co-financed by the European Social Fund.</p>
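The seed-based grouping described above can be sketched as a simple region-growing pass: starting from a seed pixel, 4-connected neighbours are added while their attribute value stays close to the running region mean. This is a generic OBIA-style illustration with an assumed tolerance, not the specific segmentation software used in the study.

```python
import numpy as np
from collections import deque

def region_grow(values, seed, tol=0.1):
    """Grow a region from a seed pixel, adding 4-neighbours whose value is
    within `tol` of the running region mean."""
    rows, cols = values.shape
    mask = np.zeros(values.shape, dtype=bool)
    mask[seed] = True
    total, count = float(values[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]:
                if abs(values[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(values[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Two land-cover patches with distinct reflectance values.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
region = region_grow(img, seed=(0, 0), tol=0.2)
```

The same loop works on any per-pixel attribute (spectral band, elevation, aspect, curvature); running it from many seeds yields the polygon units that OBIA then classifies.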


2021 ◽  
Author(s):  
Shinya Ito ◽  
Yufei Si ◽  
Alan M. Litke ◽  
David A. Feldheim

Sensory information from different modalities is processed in parallel and then integrated in associative brain areas to improve object identification and the interpretation of sensory experiences. The Superior Colliculus (SC) is a midbrain structure that plays a critical role in integrating visual, auditory, and somatosensory input to assess saliency and promote action. Although the response properties of individual SC neurons to visuoauditory stimuli have been characterized, little is known about the spatial and temporal dynamics of integration at the population level. Here we recorded the responses of SC neurons to spatially restricted visual and auditory stimuli using large-scale electrophysiology. We then created a general, population-level model that explains the spatial, temporal, and intensity requirements of stimuli needed for sensory integration. We found that the mouse SC contains topographically organized visual and auditory neurons that exhibit nonlinear multisensory integration. We show that nonlinear integration depends on properties of the auditory but not the visual stimuli. We also find that a heuristically derived nonlinear modulation function reveals conditions required for sensory integration that are consistent with previously proposed models of sensory integration, such as spatial matching and the principle of inverse effectiveness.
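Nonlinear (superadditive) integration and the principle of inverse effectiveness can be made concrete with a toy enhancement index: the multisensory response compared against the sum of the unisensory responses. The firing rates below are invented for illustration and are not the paper's fitted modulation function.

```python
def enhancement_index(r_vis, r_aud, r_multi):
    """Multisensory enhancement relative to the additive prediction.

    > 0 means superadditive (nonlinear) integration; 0 means purely additive.
    """
    additive = r_vis + r_aud
    return (r_multi - additive) / additive

# Toy firing rates (spikes/s): weak stimuli integrate superadditively,
# strong stimuli near-additively (principle of inverse effectiveness).
weak = enhancement_index(r_vis=2.0, r_aud=3.0, r_multi=9.0)
strong = enhancement_index(r_vis=20.0, r_aud=30.0, r_multi=52.0)
```

The weak-stimulus pair shows an 80% enhancement over the additive prediction while the strong pair shows only 4%, which is the inverse-effectiveness pattern the abstract refers to.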


Author(s):  
Zhao Sun ◽  
Yifu Wang ◽  
Lei Pan ◽  
Yunhong Xie ◽  
Bo Zhang ◽  
...  

Pine wilt disease (PWD) is currently one of the main causes of large-scale forest destruction. To control the spread of PWD, it is essential to detect affected pine trees quickly. This study investigated the feasibility of using an object-oriented multi-scale segmentation algorithm to identify trees discolored by PWD. We used an unmanned aerial vehicle (UAV) platform equipped with an RGB digital camera to obtain high-spatial-resolution images; multi-scale segmentation was applied to delineate the tree crowns, coupled with object-oriented classification to classify trees discolored by PWD. The optimal segmentation scale was determined using the Estimation of Scale Parameter (ESP2) plug-in. The feature space of the segmentation results was optimized, and appropriate features were selected for classification. The results showed that the optimal scale, shape, and compactness values of the tree crown segmentation algorithm were 56, 0.5, and 0.8, respectively. The producer’s accuracy (PA), user’s accuracy (UA), and F1 score were 0.722, 0.605, and 0.658, respectively. There were no significant classification errors in the final results, and the lower accuracy was attributed to the small number of objects caused by incorrect segmentation. The multi-scale segmentation and object-oriented classification method could accurately identify trees discolored by PWD with straightforward and rapid processing. This study provides a technical method for monitoring the occurrence of PWD and identifying discolored trees using UAV-based high-resolution images.
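The reported F1 score follows directly from the producer's and user's accuracies, since PA plays the role of recall and UA of precision; F1 is their harmonic mean. Checking against the values reported in the abstract:

```python
def f1_from_pa_ua(pa, ua):
    """F1 is the harmonic mean of producer's (recall) and user's (precision) accuracy."""
    return 2 * pa * ua / (pa + ua)

# Values reported in the study: PA = 0.722, UA = 0.605.
f1 = f1_from_pa_ua(0.722, 0.605)  # ~0.658, matching the reported F1
```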

