Extended Geometric Filter for Reconstruction as a Basis for Computational Inspection

Author(s):  
Alexander Miropolsky ◽  
Anath Fischer

The inspection of machined objects is one of the most important quality control tasks in the manufacturing industry. Contemporary scanning technologies have provided the impetus for the development of computational inspection methods, where the computer model of the manufactured object is reconstructed from the scan data, and then verified against its digital design model. Scan data, however, are typically very large scale (i.e., many points), unorganized, noisy, and incomplete. Therefore, reconstruction is problematic. To overcome these problems, reconstruction methods may exploit diverse feature data, that is, diverse information about the properties of the scanned object. Based on this concept, the paper proposes a new method for denoising and reduction of scan data by an extended geometric filter (EGF). The proposed method is applied directly to the scanned points and is automatic, fast, and straightforward to implement. The paper demonstrates the integration of the proposed method into the framework of the computational inspection process.
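
The abstract does not spell out the filter itself, but a bilateral-style weighting over each point's neighborhood illustrates the general idea of feature-preserving denoising applied directly to scanned points. The neighborhood radius, sigma parameters, and plane-fit normal estimation below are illustrative assumptions, not the authors' EGF.

```python
# Minimal sketch of feature-preserving point denoising in the spirit of a
# geometric (bilateral-style) filter applied directly to scanned points.
# The neighborhood radius, sigma parameters, and plane-fit normals are
# illustrative assumptions, NOT the EGF described in the paper.
import numpy as np

def estimate_normal(neighbors):
    """Least-squares plane normal of a local neighborhood (PCA)."""
    centered = neighbors - neighbors.mean(axis=0)
    # Right singular vector with the smallest singular value = plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def denoise_points(points, radius=1.0, sigma_d=0.5, sigma_n=0.3):
    """Move each point along its local normal by a bilateral-weighted offset."""
    denoised = points.copy()
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[dists < radius]
        if len(nbrs) < 3:
            continue
        n = estimate_normal(nbrs)
        d = np.linalg.norm(nbrs - p, axis=1)          # spatial distance
        h = (nbrs - p) @ n                            # offset along the normal
        w = np.exp(-d**2 / (2 * sigma_d**2)) * np.exp(-h**2 / (2 * sigma_n**2))
        denoised[i] = p + n * (np.sum(w * h) / np.sum(w))
    return denoised

# Example: noisy samples of a plane are pulled back toward the surface.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 5, size=(500, 3)); pts[:, 2] = 0.05 * rng.normal(size=500)
print(np.abs(denoise_points(pts)[:, 2]).mean() < np.abs(pts[:, 2]).mean())
```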


2007 ◽  
Vol 7 (3) ◽  
pp. 211-224 ◽  
Author(s):  
A. Miropolsky ◽  
A. Fischer

Inspection of machined objects is one of the most important quality control tasks in the manufacturing industry. Ideally, inspection processes should be able to work directly on scan point data. Scan data, however, are typically very large scale (i.e., many points), unorganized, noisy, and incomplete. Therefore, direct processing of scanned points is problematic. Many of these problems may be reduced if reconstruction methods exploit diverse scan data, that is, information about the properties of the scanned object. This paper describes this concept and proposes new methods for extraction and processing of diverse scan data: (1) extraction (detection of a scanned object’s sharp features by the sharp feature detection method) and (2) processing (scan data reduction by the geometric bilateral filter method). The proposed methods are applied directly on the scanned points and are completely automatic, fast, and straightforward to implement. Finally, this paper demonstrates the integration of the proposed methods into the computational inspection process.
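
The sharp feature detection method itself is not detailed in the abstract; one common way to flag candidate sharp-feature points directly on scan data is to threshold the local surface variation obtained from a PCA of each point's neighborhood. The radius and threshold below are illustrative assumptions, not the authors' detector.

```python
# Sketch: flag candidate sharp-feature points by local surface variation
# (smallest eigenvalue fraction of the neighborhood covariance). The radius
# and threshold are illustrative assumptions, not the authors' detector.
import numpy as np

def surface_variation(points, radius=0.5):
    """sigma = lambda_0 / (lambda_0 + lambda_1 + lambda_2) per point."""
    var = np.zeros(len(points))
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 4:
            continue
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eig = np.sort(np.linalg.eigvalsh(cov))       # ascending eigenvalues
        var[i] = eig[0] / eig.sum()
    return var

def detect_sharp_features(points, radius=0.5, threshold=0.05):
    """Points whose neighborhoods are far from planar are feature candidates."""
    return points[surface_variation(points, radius) > threshold]

# Example: two planes meeting at a right angle; points near the crease have
# high surface variation and are reported as sharp-feature candidates.
rng = np.random.default_rng(1)
a = np.c_[rng.uniform(0, 2, 400), rng.uniform(0, 2, 400), np.zeros(400)]
b = np.c_[rng.uniform(0, 2, 400), np.zeros(400), rng.uniform(0, 2, 400)]
edges = detect_sharp_features(np.vstack([a, b]))
print(len(edges), "candidate edge points near the crease")
```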


1966 ◽  
Vol 05 (02) ◽  
pp. 67-74 ◽  
Author(s):  
W. I. Lourie ◽  
W. Haenszel

Quality control of data collected in the United States by the Cancer End Results Program utilizing punchcards prepared by participating registries in accordance with a Uniform Punchcard Code is discussed. Existing arrangements decentralize responsibility for editing and related data processing to the local registries with centralization of tabulating and statistical services in the End Results Section, National Cancer Institute. The most recent deck of punchcards represented over 600,000 cancer patients; approximately 50,000 newly diagnosed cases are added annually. Mechanical editing and inspection of punchcards and field audits are the principal tools for quality control. Mechanical editing of the punchcards includes testing for blank entries and detection of inadmissible or inconsistent codes. Highly improbable codes are subjected to special scrutiny. Field audits include the drawing of a 1-10 percent random sample of punchcards submitted by a registry; the charts are then reabstracted and recoded by an NCI staff member, and differences between the punchcard and the results of independent review are noted.
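
The two quality-control tools described above translate directly into checks over coded records. The field names, admissible code sets, consistency rule, and 5% audit rate in the sketch below are purely illustrative; they do not reproduce the Uniform Punchcard Code.

```python
# Illustrative sketch of the two quality-control tools described above:
# (1) mechanical editing -- flag blank, inadmissible, or inconsistent codes;
# (2) field audit -- draw a 1-10 percent random sample of records for
#     independent re-review. Field names and code sets are invented here.
import random

ADMISSIBLE = {"site": {"11", "12", "13"}, "stage": {"1", "2", "3", "4", "9"}}

def edit_record(rec):
    """Return a list of edit failures for one coded record."""
    errors = []
    for field, codes in ADMISSIBLE.items():
        value = rec.get(field, "")
        if value == "":
            errors.append(f"blank entry: {field}")
        elif value not in codes:
            errors.append(f"inadmissible code: {field}={value}")
    # Example consistency check (hypothetical rule for illustration only).
    if rec.get("stage") == "4" and rec.get("site") == "13":
        errors.append("inconsistent codes: stage=4 with site=13")
    return errors

def audit_sample(records, rate=0.05, seed=0):
    """Random sample of records (here 5%) set aside for reabstracting."""
    random.seed(seed)
    k = max(1, int(len(records) * rate))
    return random.sample(records, k)

records = [{"site": "11", "stage": "2"}, {"site": "99", "stage": ""}]
print([edit_record(r) for r in records])
print(len(audit_sample(records * 50)))   # 5 of 100 records drawn for audit
```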


Impact ◽  
2019 ◽  
Vol 2019 (10) ◽  
pp. 90-92
Author(s):  
Kae Doki ◽  
Yuki Funabora ◽  
Shinji Doki

Every day we see an increasing number of robots employed in our day-to-day lives. They are working in factories, cleaning our houses and may soon be chauffeuring us around in vehicles. The cost of drones has also come down, and it is now conceivable for almost anyone to own a sophisticated unmanned aerial vehicle (UAV). While fun to fly, these devices also represent powerful new tools for several industries. Anytime an aerial view is needed for planning, surveillance or surveying, for example, a UAV can be deployed. Further still, equipping these vehicles with an array of sensors, for climate research or mapping, increases their capability even more. This gives companies, governments or researchers a cheap and safe way to collect vast amounts of data and complete tasks in remote or dangerous areas that were once impossible to reach. One area in which UAVs are proving particularly useful is infrastructure inspection. In countries all over the world, large-scale infrastructure projects like dams and bridges are ageing and in need of upkeep. Identifying which ones, and exactly where they need patching, is a huge undertaking. Not only can this work be dangerous, requiring trained inspectors to climb these megaprojects, it is also incredibly time-consuming and costly. Enter the UAVs. With a fleet of specially equipped UAVs, and a small team piloting them and interpreting the data they bring back, the speed and safety of this work increase dramatically. The promise of UAVs to overturn the infrastructure inspection process is enticing, but several obstacles remain. One is achieving the fine level of control and positioning required to navigate the robots around 3D structures for inspection. One can imagine that piloting a small UAV underneath a huge highway bridge without missing a single small crack is quite difficult, especially when the operators are safely on the ground hundreds of meters away. To do this, knowing exactly where the vehicle is in space becomes critical. The job can be made even easier if a flight plan based on set waypoints can be pre-programmed and followed autonomously by the UAV. It is exactly this problem that Dr Kae Doki from the Department of Electrical Engineering at Aichi Institute of Technology and collaborators are focused on solving.
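
The control problem sketched at the end of this piece, flying a pre-programmed list of waypoints and advancing to the next one once the current target is reached, can be illustrated with a simple proportional position controller. The gain, speed limit, and arrival tolerance below are illustrative assumptions, not Dr Doki's method.

```python
# Toy sketch of waypoint following: a proportional controller steers a point
# vehicle toward each pre-programmed waypoint and advances when it arrives.
# Gain, speed limit, and arrival tolerance are illustrative assumptions only.
import numpy as np

def follow_waypoints(start, waypoints, gain=1.5, v_max=2.0, tol=0.2, dt=0.05):
    pos = np.asarray(start, dtype=float)
    path, target = [pos.copy()], 0
    for _ in range(5000):
        if target == len(waypoints):
            break                                  # all waypoints reached
        error = np.asarray(waypoints[target]) - pos
        if np.linalg.norm(error) < tol:
            target += 1                            # arrived: next waypoint
            continue
        v = gain * error                           # proportional command
        speed = np.linalg.norm(v)
        if speed > v_max:
            v *= v_max / speed                     # saturate the velocity
        pos = pos + v * dt
        path.append(pos.copy())
    return np.array(path), target

path, reached = follow_waypoints([0, 0, 0], [[5, 0, 2], [5, 5, 2], [0, 5, 2]])
print(f"reached {reached} of 3 waypoints in {len(path)} steps")
```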


1979 ◽  
Vol 6 (2) ◽  
pp. 70-72
Author(s):  
T. A. Coffelt ◽  
F. S. Wright ◽  
J. L. Steele

Abstract A new method of harvesting and curing breeder's seed peanuts in Virginia was initiated that would 1) reduce labor requirements, 2) maintain a high level of germination, 3) maintain varietal purity at 100%, and 4) reduce the risk of frost damage. Three possible harvesting and curing methods were studied. The traditional stack-pole method satisfied the last three objectives, but not the first. The windrow-combine method satisfied the first two objectives, but not the last two. The direct harvesting method satisfied all four objectives. The experimental equipment and curing procedures for direct harvesting had been developed but not tested on a large scale for seed harvesting. This method has been used in Virginia to produce breeder's seed of three peanut varieties (Florigiant, VA 72R and VA 61R) over five years. Compared to the stack-pole method, labor requirements have been reduced, satisfactory levels of germination and varietal purity have been obtained, and the risk of frost damage has been minimized.


2021 ◽  
Vol 502 (3) ◽  
pp. 3976-3992
Author(s):  
Mónica Hernández-Sánchez ◽  
Francisco-Shu Kitaura ◽  
Metin Ata ◽  
Claudio Dalla Vecchia

ABSTRACT We investigate higher order symplectic integration strategies within Bayesian cosmic density field reconstruction methods. In particular, we study the fourth-order discretization of Hamiltonian equations of motion (EoM). This is achieved by recursively applying the basic second-order leap-frog scheme (considering the single evaluation of the EoM) in a combination of even numbers of forward time integration steps with a single intermediate backward step. This largely reduces the number of evaluations and random gradient computations, as required in the usual second-order case for high-dimensional cases. We restrict this study to the lognormal-Poisson model, applied to a full volume halo catalogue in real space on a cubical mesh of 1250 h⁻¹ Mpc side and 256³ cells. Hence, we neglect selection effects, redshift space distortions, and displacements. We note that those observational and cosmic evolution effects can be accounted for in subsequent Gibbs-sampling steps within the COSMIC BIRTH algorithm. We find that going from the usual second to fourth order in the leap-frog scheme shortens the burn-in phase by a factor of at least ∼30. This implies that 75–90 independent samples are obtained while the fastest second-order method converges. After convergence, the correlation lengths indicate an improvement factor of about 3.0 fewer gradient computations for meshes of 256³ cells. In the considered cosmological scenario, the traditional leap-frog scheme turns out to outperform higher order integration schemes only when considering lower dimensional problems, e.g. meshes with 64³ cells. This gain in computational efficiency can help to go towards a full Bayesian analysis of the cosmological large-scale structure for upcoming galaxy surveys.
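
The composition described above, an even number of forward second-order leap-frog sub-steps wrapped around one backward sub-step with lengths chosen so the third-order errors cancel, can be sketched for a toy Hamiltonian. The harmonic-oscillator potential and the choice n = 2 below are illustrative; this is not the lognormal-Poisson posterior or the COSMIC BIRTH implementation.

```python
# Sketch of a fourth-order scheme built from second-order leap-frog steps:
# n forward sub-steps and one backward sub-step, with lengths chosen so that
# n*a + b = 1 and n*a**3 + b**3 = 0 (third-order errors cancel). Shown for a
# 1D harmonic oscillator; with n = 2 this is the classic Yoshida composition.
import numpy as np

def leapfrog2(q, p, dt, grad):
    """Basic second-order kick-drift-kick leap-frog step."""
    p = p - 0.5 * dt * grad(q)
    q = q + dt * p
    p = p - 0.5 * dt * grad(q)
    return q, p

def leapfrog4(q, p, dt, grad, n=2):
    """Fourth-order step: n forward leap-frog sub-steps around one backward."""
    a = 1.0 / (n - n ** (1.0 / 3.0))                    # forward sub-step
    b = -(n ** (1.0 / 3.0)) / (n - n ** (1.0 / 3.0))    # backward sub-step
    for _ in range(n // 2):
        q, p = leapfrog2(q, p, a * dt, grad)
    q, p = leapfrog2(q, p, b * dt, grad)
    for _ in range(n // 2):
        q, p = leapfrog2(q, p, a * dt, grad)
    return q, p

# Harmonic oscillator H = p^2/2 + q^2/2: energy drift reflects integrator order.
grad = lambda q: q
q2, p2, q4, p4, err2, err4 = 1.0, 0.0, 1.0, 0.0, 0.0, 0.0
for _ in range(1000):
    q2, p2 = leapfrog2(q2, p2, 0.1, grad)
    q4, p4 = leapfrog4(q4, p4, 0.1, grad)
    err2 = max(err2, abs(0.5 * (q2**2 + p2**2) - 0.5))
    err4 = max(err4, abs(0.5 * (q4**2 + p4**2) - 0.5))
print("max energy error, 2nd order:", err2)   # ~1e-3 at dt = 0.1
print("max energy error, 4th order:", err4)   # orders of magnitude smaller
```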


2021 ◽  
Vol 13 (3) ◽  
pp. 364
Author(s):  
Han Gao ◽  
Jinhui Guo ◽  
Peng Guo ◽  
Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised cases where dense pixel-level ground-truth labels are required. In this work, we propose a new object-oriented deep learning framework that leverages residual networks with different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets involving both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods and an excellent inference time (11.3 s/ha).
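
The core architectural idea, residual branches of different depths looking at the same input and having their features fused for classification, can be outlined in a few lines of PyTorch. Layer widths, branch depths, patch size, and class count below are illustrative assumptions, not the authors' network.

```python
# Minimal sketch of a multibranch network: residual branches of different
# depths process the same patch and their features are fused for per-patch
# land cover classification. Widths, depths, and class count are assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class MultiBranchNet(nn.Module):
    """Branches with different numbers of residual blocks share one stem."""
    def __init__(self, in_ch=4, ch=32, depths=(2, 4, 8), n_classes=6):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.branches = nn.ModuleList(
            [nn.Sequential(*[ResBlock(ch) for _ in range(d)]) for d in depths])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(ch * len(depths), n_classes)
    def forward(self, x):
        x = self.stem(x)
        feats = [self.pool(b(x)).flatten(1) for b in self.branches]
        return self.head(torch.cat(feats, dim=1))   # fused multi-depth features

# Example: a batch of 4-band 64x64 image patches -> class scores.
model = MultiBranchNet()
print(model(torch.randn(8, 4, 64, 64)).shape)   # torch.Size([8, 6])
```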


Forests ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 1006
Author(s):  
Zhenhuan Chen ◽  
Hongge Zhu ◽  
Wencheng Zhao ◽  
Menghan Zhao ◽  
Yutong Zhang

China’s forest products manufacturing industry is experiencing the dual pressure of forest protection policies and wood scarcity and, therefore, it is of great significance to reveal the spatial agglomeration characteristics and evolution drivers of this industry to enhance its sustainable development. Based on the perspective of large-scale agglomeration in a continuous space, in this study, we used the spatial Gini coefficient and standard deviation ellipse method to investigate the spatial agglomeration degree and location distribution characteristics of China’s forest products manufacturing industry, and we used exploratory spatial data analysis to investigate its spatial agglomeration pattern. The results show that: (1) From 1988 to 2018, the degree of spatial agglomeration of China’s forest products manufacturing industry was relatively low, and the industry was characterized by a very pronounced imbalance in its spatial distribution. (2) The industry has a very clear core–periphery structure, the spatial distribution exhibits a “northeast-southwest” pattern, and the barycenter of the industrial distribution has tended to move south. (3) The industry mainly has a high–high and low–low spatial agglomeration pattern. The provinces with high–high agglomeration are few and concentrated in the southeast coastal area. (4) The spatial agglomeration and evolution characteristics of China’s forest products manufacturing industry may be simultaneously affected by forest protection policies, sources of raw materials, international trade and the degree of marketization. In the future, China’s forest products manufacturing industry should further increase the level of spatial agglomeration to fully realize the economies of scale.
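
The two descriptive measures named above have standard closed forms; a small sketch on toy regional data is given below. The province shares and coordinates are invented for illustration, and the Gini variant shown is the plain mean-absolute-difference form rather than any industry-specific adjustment the authors may apply.

```python
# Sketch of the two descriptive measures named above, on toy regional data:
# (1) a Gini coefficient over regional output shares (mean absolute
#     difference form) and (2) a weighted standard deviational ellipse
#     (mean center and orientation). The shares and coordinates are invented.
import numpy as np

def gini(x):
    """Gini = mean absolute difference / (2 * mean)."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return mad / (2 * x.mean())

def std_dev_ellipse(xy, w):
    """Weighted mean center and orientation angle (radians) of the ellipse."""
    xy, w = np.asarray(xy, float), np.asarray(w, float)
    center = (w[:, None] * xy).sum(0) / w.sum()
    dx, dy = (xy - center).T
    num = (w * (dx**2 - dy**2)).sum()
    cross = (w * dx * dy).sum()
    theta = 0.5 * np.arctan2(2 * cross, num)       # rotation of the major axis
    return center, theta

# Toy example: output shares and centroids of five hypothetical provinces.
shares = [0.35, 0.25, 0.20, 0.15, 0.05]
coords = [(121, 31), (113, 23), (126, 45), (104, 30), (116, 40)]
center, theta = std_dev_ellipse(coords, shares)
print(f"Gini = {gini(shares):.3f}, mean center = {center.round(1)}, "
      f"orientation = {np.degrees(theta):.1f} deg")
```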


2019 ◽  
Vol 35 (14) ◽  
pp. i417-i426 ◽  
Author(s):  
Erin K Molloy ◽  
Tandy Warnow

Abstract Motivation At RECOMB-CG 2018, we presented NJMerge and showed that it could be used within a divide-and-conquer framework to scale computationally intensive methods for species tree estimation to larger datasets. However, NJMerge has two significant limitations: it can fail to return a tree and, when used within the proposed divide-and-conquer framework, has O(n⁵) running time for datasets with n species. Results Here we present a new method called ‘TreeMerge’ that improves on NJMerge in two ways: it is guaranteed to return a tree and it has dramatically faster running time within the same divide-and-conquer framework, only O(n²) time. We use a simulation study to evaluate TreeMerge in the context of multi-locus species tree estimation with two leading methods, ASTRAL-III and RAxML. We find that the divide-and-conquer framework using TreeMerge has a minor impact on species tree accuracy, dramatically reduces running time, and enables both ASTRAL-III and RAxML to complete on datasets (that they would otherwise fail on), when given 64 GB of memory and 48 h maximum running time. Thus, TreeMerge is a step toward a larger vision of enabling researchers with limited computational resources to perform large-scale species tree estimation, which we call Phylogenomics for All. Availability and implementation TreeMerge is publicly available on GitHub (http://github.com/ekmolloy/treemerge). Supplementary information Supplementary data are available at Bioinformatics online.


2019 ◽  
Vol 12 (1) ◽  
pp. 96 ◽  
Author(s):  
James Brinkhoff ◽  
Justin Vardanega ◽  
Andrew J. Robson

Land cover mapping of intensive cropping areas facilitates an enhanced regional response to biosecurity threats and to natural disasters such as drought and flooding. Such maps also provide information for natural resource planning and analysis of the temporal and spatial trends in crop distribution and gross production. In this work, 10 meter resolution land cover maps were generated over a 6200 km² area of the Riverina region in New South Wales (NSW), Australia, with a focus on locating the most important perennial crops in the region. The maps discriminated between 12 classes, including nine perennial crop classes. A satellite image time series (SITS) of freely available Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 multispectral imagery was used. A segmentation technique grouped spectrally similar adjacent pixels together, to enable object-based image analysis (OBIA). K-means unsupervised clustering was used to filter training points and classify some map areas, which improved supervised classification of the remaining areas. The support vector machine (SVM) supervised classifier with radial basis function (RBF) kernel gave the best results among several algorithms trialled. The accuracies of maps generated using several combinations of the multispectral and radar bands were compared to assess the relative value of each combination. An object-based post-classification refinement step was developed, enabling optimization of the tradeoff between producers’ accuracy and users’ accuracy. Accuracy was assessed against randomly sampled segments, and the final map achieved an overall count-based accuracy of 84.8% and area-weighted accuracy of 90.9%. Producers’ accuracies for the perennial crop classes ranged from 78 to 100%, and users’ accuracies ranged from 63 to 100%. This work develops methods to generate detailed and large-scale maps that accurately discriminate between many perennial crops and can be updated frequently.
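
The classification chain described above (per-object features, k-means filtering of training points, then an RBF-kernel SVM) can be outlined with scikit-learn. The synthetic band features, cluster count, and cluster-purity rule below are illustrative assumptions, not the paper's exact workflow.

```python
# Outline of the classification chain described above, on synthetic data:
# per-object band features -> k-means filtering of training points
# (drop points that disagree with their cluster's majority label) ->
# RBF-kernel SVM. Feature dimensions, cluster count, and the purity rule
# are illustrative assumptions, not the paper's exact workflow.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_classes, n_per_class, n_features = 4, 200, 10   # e.g. mean band/VV/VH values
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# K-means filtering: keep training points whose label matches the majority
# label of the cluster they fall into (drops likely mislabeled samples).
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_train)
keep = np.zeros(len(y_train), dtype=bool)
for c in np.unique(clusters):
    members = clusters == c
    majority = np.bincount(y_train[members]).argmax()
    keep |= members & (y_train == majority)

svm = SVC(kernel="rbf", C=10.0, gamma="scale")
svm.fit(X_train[keep], y_train[keep])
print(f"kept {keep.sum()}/{len(keep)} training points, "
      f"test accuracy = {accuracy_score(y_test, svm.predict(X_test)):.3f}")
```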

