Toward the recognition of spacecraft feature components: A new benchmark and a new model

Astrodynamics ◽  
2021 ◽  
Author(s):  
Linwei Qiu ◽  
Liang Tang ◽  
Rui Zhong

Abstract Countries are increasingly interested in spacecraft surveillance and recognition, which play an important role in on-orbit maintenance, space docking, and other applications. Traditional detection methods, including radar, have many restrictions, such as excessive costs and energy supply problems. For many on-orbit servicing spacecraft, image recognition is a simple but relatively accurate method for obtaining sufficient position and direction information to offer services. However, to the best of our knowledge, few practical machine-learning models focusing on the recognition of spacecraft feature components have been reported. In addition, it is difficult to find substantial on-orbit images with which to train or evaluate such a model. In this study, we first created a new dataset containing numerous artificial images of on-orbit spacecraft with labeled components. Our base images were derived from 3D Max and STK software. These images include many types of satellites and satellite postures. Considering real-world illumination conditions and imperfect camera observations, we developed a degradation algorithm that enabled us to produce thousands of artificial images of spacecraft. The feature components of the spacecraft in all images were labeled manually. We discovered that direct utilization of the DeepLab V3+ model leads to poor edge recognition. Poorly defined edges provide imprecise position or direction information and degrade the performance of on-orbit services. Thus, the edge information of the target was taken as a supervisory guide, and was used to develop the proposed Edge Auxiliary Supervision DeepLab Network (EASDN). The main idea of EASDN is to provide a new edge auxiliary loss by calculating the L2 loss between the predicted edge masks and ground-truth edge masks during training. Our extensive experiments demonstrate that our network can perform well both on our benchmark and on real on-orbit spacecraft images from the Internet. Furthermore, the device usage and processing time meet the demands of engineering applications.
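The edge auxiliary loss described above can be sketched compactly. Below is a minimal, hypothetical PyTorch rendering of the idea, a standard segmentation loss plus an L2 penalty between predicted and ground-truth edge masks; the names (`easdn_style_loss`, `lambda_edge`) and the simple summed weighting are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def easdn_style_loss(seg_logits, seg_target, edge_pred, edge_target,
                     lambda_edge=1.0):
    """seg_logits: (N, C, H, W) raw class scores from a DeepLab V3+ head.
    seg_target:  (N, H, W) integer component labels.
    edge_pred:   (N, 1, H, W) predicted edge mask in [0, 1].
    edge_target: (N, 1, H, W) binary ground-truth edge mask."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)   # main objective
    edge_loss = F.mse_loss(edge_pred, edge_target)       # L2 edge supervision
    return seg_loss + lambda_edge * edge_loss
```

In this sketch the edge branch receives gradients only through `edge_loss`, so the edge masks act purely as a supervisory guide during training rather than as an extra inference output.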

2021 ◽  
Vol 13 (12) ◽  
pp. 2328
Author(s):  
Yameng Hong ◽  
Chengcai Leng ◽  
Xinyue Zhang ◽  
Zhao Pei ◽  
Irene Cheng ◽  
...  

Image registration has always been an important research topic. This paper proposes a novel method of constructing descriptors, the histogram of oriented local binary pattern (HOLBP) descriptor, for fast and robust matching. There are three new components in our algorithm. First, we redefined the gradient and angle calculation template to make it more sensitive to edge information. Second, we proposed a new construction method for the HOLBP descriptor and improved the traditional local binary pattern (LBP) computation template. Third, the principle of uniform rotation-invariant LBP was applied to add 10-dimensional gradient direction information, forming a 138-dimensional HOLBP descriptor vector. The experimental results showed that our method is very stable in terms of accuracy and computational time for different test images.
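To make the construction concrete, here is a hedged sketch of assembling a 138-dimensional descriptor of this flavor: 128 gradient-orientation dimensions (a SIFT-like 4 x 4 grid of cells with 8 bins each) concatenated with a 10-bin uniform rotation-invariant LBP histogram (P = 8 yields 10 pattern classes). The patch size and cell layout are assumptions, not the paper's exact templates.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def holbp_like_descriptor(patch):
    """patch: 2D float array, e.g. a 16x16 neighborhood around a keypoint."""
    # 128-dim gradient part: 4x4 cells, 8 orientation bins per cell.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    h, w = patch.shape
    grad_hist = []
    for i in range(4):
        for j in range(4):
            cy = slice(i * h // 4, (i + 1) * h // 4)
            cx = slice(j * w // 4, (j + 1) * w // 4)
            hist, _ = np.histogram(ang[cy, cx], bins=8, range=(0, 2 * np.pi),
                                   weights=mag[cy, cx])
            grad_hist.extend(hist)
    # 10-dim part: uniform rotation-invariant LBP (P=8 -> patterns 0..9).
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))
    desc = np.concatenate([grad_hist, lbp_hist]).astype(float)
    return desc / (np.linalg.norm(desc) + 1e-12)   # 138-dim, L2-normalized
```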


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Xiang Li ◽  
Jianzheng Liu ◽  
Jessica Baron ◽  
Khoa Luu ◽  
Eric Patterson

Abstract Recent attention to facial alignment and landmark detection methods, particularly with the application of deep convolutional neural networks, has yielded notable improvements. Neither these neural-network methods nor more traditional ones, though, have been tested systematically for performance differences due to camera-lens focal length or camera viewing angle of subjects across the viewing hemisphere. This work uses photo-realistic, synthesized facial images with varying parameters and corresponding ground-truth landmarks to enable comparison of alignment and landmark detection techniques with respect to general performance, performance across focal length, and performance across viewing angle. Recently published high-performing methods, along with traditional techniques, are compared with regard to these aspects.
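Benchmarks of this kind reduce to sweeping a rendering parameter and aggregating a landmark error metric per parameter value. The sketch below groups the commonly used normalized mean error (NME) by focal length; the dictionary keys (`pred`, `gt`, `iod`, `focal_length`) are placeholders for whatever the dataset actually provides, not the paper's protocol.

```python
import numpy as np
from collections import defaultdict

def nme(pred, gt, normalizer):
    """pred, gt: (K, 2) landmark arrays; normalizer: e.g. inter-ocular distance."""
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / normalizer

def nme_by_focal_length(samples):
    """samples: iterable of dicts with 'pred', 'gt', 'iod', 'focal_length'."""
    groups = defaultdict(list)
    for s in samples:
        groups[s["focal_length"]].append(nme(s["pred"], s["gt"], s["iod"]))
    return {fl: float(np.mean(errs)) for fl, errs in sorted(groups.items())}
```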


2021 ◽  
Vol 368 (6) ◽  
Author(s):  
Liwen Zhang ◽  
Qingyu Lv ◽  
Yuling Zheng ◽  
Xuan Chen ◽  
Decong Kong ◽  
...  

ABSTRACT T-2 is a common mycotoxin contaminating cereal crops. Chronic consumption of food contaminated with T-2 toxin can lead to death, so simple and accurate methods for detecting it in food and feed are necessary. In this paper, we establish a highly sensitive and accurate method for detecting T-2 toxin using AlphaLISA. The system consists of acceptor beads labeled with T-2-bovine serum albumin (BSA), streptavidin-labeled donor beads, and biotinylated T-2 antibodies. T-2 in the sample matrix competes with T-2-BSA for antibodies. Adding biotinylated antibodies to the test well, followed by T-2 and T-2-BSA acceptor beads, yielded a detection range of 0.03–500 ng/mL. The half-maximal inhibitory concentration (IC50) was 2.28 ng/mL and the coefficient of variation was <10%. In addition, the method showed no cross-reaction with other related mycotoxins. The optimized method for extracting T-2 from food and feed samples achieved a recovery rate of approximately 90% at T-2 concentrations as low as 1 ng/mL, outperforming a commercial ELISA kit. This competitive AlphaLISA method offers high sensitivity, good specificity, good repeatability, and simple operation for detecting T-2 toxin in food and feed.
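An IC50 such as the 2.28 ng/mL reported above is conventionally obtained by fitting a four-parameter logistic (4PL) curve to the competitive standard curve, where signal decreases as analyte concentration rises. The sketch below shows such a fit with SciPy; the data points are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """4PL model for a competitive assay: signal falls as analyte increases."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Placeholder standard-curve data (concentration in ng/mL, signal in counts).
conc = np.array([0.03, 0.1, 0.5, 2.0, 10.0, 50.0, 500.0])
signal = np.array([9800, 9200, 7600, 5100, 2600, 1200, 500])

popt, _ = curve_fit(four_pl, conc, signal,
                    p0=[500.0, 10000.0, 2.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {popt[2]:.2f} ng/mL")
```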


2019 ◽  
Vol 20 (5) ◽  
pp. 821-832 ◽  
Author(s):  
Satya Prakash ◽  
Ashwin Seshadri ◽  
J. Srinivasan ◽  
D. S. Pai

Abstract Rain gauges are considered the most accurate method to estimate rainfall and are used as the "ground truth" for a wide variety of applications. The spatial density of rain gauges varies substantially and hence influences the accuracy of gridded gauge-based rainfall products. Temporal changes in rain gauge density over a region introduce considerable biases in the historical trends in mean rainfall and its extremes. An estimate of the uncertainty in gauge-based rainfall estimates associated with the nonuniform layout and placement pattern of the rain gauge network is vital for national decisions and policy planning in India, which applies a rather tight threshold to rainfall anomalies. This study examines uncertainty in the estimation of monthly mean monsoon rainfall due to variations in gauge density across India. Since not all rain gauges report measurements at all times, we consider the ensemble uncertainty in spatial average estimation owing to randomly leaving out rain gauges from the estimate. A recently developed theoretical model shows that the uncertainty in the spatially averaged rainfall is directly proportional to the spatial standard deviation and inversely proportional to the square root of the total number of available gauges. On this basis, a new parameter called the "averaging error factor" has been proposed that identifies the regions with large ensemble uncertainties. Comparison of the theoretical model with Monte Carlo simulations at a monthly time scale using rain gauge observations shows good agreement at all-India and subregional scales. The uncertainty in monthly mean rainfall estimates due to omission of rain gauges is largest for northeast India (~4% uncertainty for omission of 10% of gauges) and smallest for central India. Estimates of spatial average rainfall should always be accompanied by a measure of uncertainty, and this paper provides such a measure for gauge-based monthly rainfall estimates. This study can be further extended to determine the minimum number of rain gauges necessary for any given region to estimate rainfall at a certain level of uncertainty.
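The leave-out experiment is straightforward to reproduce in miniature. The sketch below draws synthetic gauge totals, repeatedly omits 10% of gauges, and compares the ensemble spread of the spatial average with the sigma/sqrt(n) scaling; including the finite-population correction for sampling without replacement is our assumption about how the comparison is framed, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
gauges = rng.gamma(shape=2.0, scale=50.0, size=500)  # synthetic monthly totals

def ensemble_spread(values, keep_frac=0.9, trials=5000):
    """Std. dev. of the spatial mean when a random 10% of gauges is omitted."""
    n_keep = int(len(values) * keep_frac)
    means = [rng.choice(values, n_keep, replace=False).mean()
             for _ in range(trials)]
    return np.std(means), n_keep

mc_spread, n = ensemble_spread(gauges)
N = len(gauges)
theory = gauges.std(ddof=1) / np.sqrt(n) * np.sqrt((N - n) / (N - 1))
print(f"Monte Carlo spread: {mc_spread:.2f}, theoretical: {theory:.2f}")
```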


2013 ◽  
Vol 321-324 ◽  
pp. 1046-1050
Author(s):  
Ai Ping Cai

The support vector machine (SVM) has been shown to be an efficient approach for a variety of classification problems and has been widely used in target identification and tracking, motion analysis, and image segmentation. Traditional edge detection methods often suffer from pseudo-edges and poor noise robustness, so developing a more efficient method is necessary. In this paper, we propose a new edge detection algorithm based on the fuzzy support vector machine (FSVM). The main idea is to assign every training sample a degree of membership, increasing the penalty on wrongly classified samples, and then to train and test the FSVM classification model. Finally, image edges are extracted using the trained FSVM classifier. Experimental results show that the new algorithm detects clear image edges and has good noise robustness.
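A common way to realize the fuzzy weighting without a dedicated FSVM solver is to pass per-sample memberships to an ordinary SVM as sample weights, so that samples far from their class center (likely noise) influence the margin less. The sketch below takes that route with scikit-learn; the distance-based membership function and the synthetic pixel features are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, eps=1e-6):
    """Membership in (0, 1]: closer to the class mean means higher weight."""
    m = np.empty(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        m[idx] = 1.0 - d / (d.max() + eps)
    return m

# X: per-pixel features (e.g. local gradients); y: edge / non-edge labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
edge_labels = clf.predict(X)   # apply to the pixel features of a test image
```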


2020 ◽  
Author(s):  
Jie Zhao ◽  
Marco Chini ◽  
Ramona Pelich ◽  
Patrick Matgen ◽  
Renaud Hostache ◽  
...  

Change detection has been widely used in many flood-mapping algorithms using pairs of Synthetic Aperture Radar (SAR) intensity images. The rationale is that, when the right conditions are met, the appearance of floodwater results in a significant decrease of backscatter. However, limitations remain in areas where the SAR backscatter is not sufficiently affected by surface changes due to floodwater. For example, in shadow areas the backscatter is stable over time because the SAR signal does not reach the ground owing to prominent topography or obstacles on the ground (e.g., buildings). Densely vegetated forest is another insensitive region because of the low capability of SAR C-band wavelengths to penetrate the canopy. Moreover, although in principle SAR can sense water over different land cover classes such as arid regions, streets, and buildings, the backscatter changes over time may not be detectable because in such areas the scattering variation caused by the presence of water can be negligible with respect to the normal "unflooded" state. Identifying the abovementioned areas where SAR-based change detection cannot reveal water, hereafter called the exclusion map, is crucial for providing reliable SAR-based flood maps.

In this study, insensitive areas are identified using long time series of Sentinel-1 data, and the final exclusion map is classified into four distinctive classes: shadow, layover, urban areas, and dense forest. In the proposed method, the identification of insensitive areas is based on pixel-based time-series backscatter statistics (minimum, maximum, median, and standard deviation) coupled with a local spatial autocorrelation analysis (i.e., Moran's I, Getis-Ord Gi, and Geary's C). To evaluate the extracted exclusion map, we employ a comprehensive ground-truth dataset obtained by combining different products: 1) a shadow/layover map generated using a 25 m resolution DEM and the geometric acquisition parameters of the SAR data; 2) the 20 m resolution imperviousness map provided by Copernicus, as well as the high-resolution global urban footprint (GUF) data provided by DLR; and 3) the 20 m tree cover density (TCD) map provided by Copernicus. Finally, the exclusion map is used to mask out unclassified areas in the flood maps derived by an automatic change detection method, which is expected to enhance flood maps by removing areas where the presence or absence of floodwater cannot be evidenced. In addition, we argue that our insensitive-area map provides valuable information for improving the calibration, validation, and regular updating of hydraulic models using SAR-derived flood extent maps.
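The first screening step, flagging pixels whose backscatter time series is too stable to reveal floodwater, can be sketched with plain array statistics. The thresholds below are illustrative placeholders; the study additionally uses minimum/maximum statistics and local spatial autocorrelation (Moran's I, Getis-Ord Gi, Geary's C) to separate the four classes.

```python
import numpy as np

def insensitive_masks(stack_db, std_max=1.0, dark_db=-22.0, bright_db=-5.0):
    """stack_db: (T, H, W) co-registered Sentinel-1 backscatter stack in dB."""
    med = np.median(stack_db, axis=0)     # per-pixel temporal median
    std = np.std(stack_db, axis=0)        # per-pixel temporal variability
    stable = std < std_max                # change detection cannot trigger here
    shadow_like = stable & (med < dark_db)    # persistently dark: shadow-like
    bright_like = stable & (med > bright_db)  # persistently bright: urban/layover
    return stable, shadow_like, bright_like
```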


2013 ◽  
Vol 56 (5) ◽  
pp. 1416-1428 ◽  
Author(s):  
Brian Reggiannini ◽  
Stephen J. Sheinkopf ◽  
Harvey F. Silverman ◽  
Xiaoxue Li ◽  
Barry M. Lester

Purpose In this article, the authors describe and validate the performance of a modern acoustic analyzer specifically designed for infant cry analysis. Method Utilizing known algorithms, the authors developed a method to extract acoustic parameters describing infant cries from standard digital audio files. They used a frame rate of 25 ms with a frame advance of 12.5 ms. Cepstral-based acoustic analysis proceeded in 2 phases, computing frame-level data and then organizing and summarizing this information within cry utterances. Using signal detection methods, the authors evaluated the accuracy of the automated system to determine voicing and to detect fundamental frequency (F0) as compared to voiced segments and pitch periods manually coded from spectrogram displays. Results The system detected F0 with 88% to 95% accuracy, depending on tolerances set at 10 to 20 Hz. Receiver operating characteristic analyses demonstrated very high accuracy at detecting voicing characteristics in the cry samples. Conclusions This article describes an automated infant cry analyzer with high accuracy to detect important acoustic features of cry. A unique and important aspect of this work is the rigorous testing of the system's accuracy as compared to ground-truth manual coding. The resulting system has implications for basic and applied research on infant cry development.
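For readers unfamiliar with the approach, a cepstral F0 tracker with the framing reported above (25 ms frames, 12.5 ms advance) fits in a few lines. The F0 search band below (200 to 1000 Hz, roughly covering infant cry fundamentals) is our assumption, not a parameter taken from the article.

```python
import numpy as np

def cepstral_f0(x, fs, frame_ms=25.0, hop_ms=12.5, fmin=200.0, fmax=1000.0):
    """x: mono audio samples; fs: sampling rate in Hz. Returns F0 per frame."""
    n = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    qmin, qmax = int(fs / fmax), int(fs / fmin)   # quefrency search range
    f0 = []
    for start in range(0, len(x) - n, hop):
        frame = x[start:start + n] * np.hanning(n)
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
        cepstrum = np.fft.irfft(np.log(spectrum))
        peak = qmin + np.argmax(cepstrum[qmin:qmax])  # dominant quefrency
        f0.append(fs / peak)
    return np.array(f0)
```

A voicing decision would normally gate these per-frame estimates (e.g., by thresholding the cepstral peak height), which is the part the authors validate with receiver operating characteristic analysis.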


2021 ◽  
Vol 11 (7) ◽  
pp. 656
Author(s):  
Si-Wa Chan ◽  
Wei-Hsuan Hu ◽  
Yen-Chieh Ouyang ◽  
Hsien-Chi Su ◽  
Chin-Yao Lin ◽  
...  

Breast magnetic resonance imaging (MRI) is currently a widely used clinical examination tool. Recently, MR diffusion-related technologies, such as intravoxel incoherent motion diffusion weighted imaging (IVIM-DWI), have been extensively studied by breast cancer researchers and gradually adopted in clinical practice. In this study, we explored automatic tumor detection by IVIM-DWI. We considered the acquired IVIM-DWI data as a hyperspectral image cube and used a well-known hyperspectral subpixel target detection technique: constrained energy minimization (CEM). Two extended CEM methods—kernel CEM (K-CEM) and iterative CEM (I-CEM)—were employed to detect breast tumors. The K-means and fuzzy C-means clustering algorithms were also evaluated. The quantitative measurement results were compared to dynamic contrast-enhanced T1-MR imaging as ground truth. All four methods were successful in detecting tumors for all the patients studied. The clustering methods were found to be faster, but the CEM methods demonstrated better performance according to both the Dice and Jaccard metrics. These unsupervised tumor detection methods have the advantage of potentially eliminating operator variability. The quantitative results can be measured by using ADC, signal attenuation slope, D*, D, and PF parameters to classify tumors of mass, non-mass, cyst, and fibroadenoma types.
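The core CEM filter is compact enough to sketch. Given a target signature d and the sample correlation matrix R of the data, CEM solves for weights w = R^{-1} d / (d^T R^{-1} d), which pass the target with unit gain while minimizing the average output energy. Below is a hedged NumPy rendering for a cube reshaped to pixels x bands; the regularization constant is an implementation convenience, not a detail from the paper.

```python
import numpy as np

def cem_scores(pixels, d, eps=1e-6):
    """pixels: (N, B) spectra; d: (B,) target signature. Returns (N,) scores."""
    R = pixels.T @ pixels / pixels.shape[0]   # sample correlation matrix
    R += eps * np.eye(R.shape[0])             # regularize the inversion
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)                 # unit response to the target
    return pixels @ w                         # per-pixel detection score
```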


A crucial pre-processing phase in image processing, computer vision, and machine learning applications is edge detection, which detects the boundaries between foreground and background objects in an image. Discriminating between significant edges and unimportant spurious edges strongly affects the accuracy of the edge detection process. This paper introduces an approach based on cellular automata for extracting the significant edges present in images. A cellular automaton is a grid of cells, each a finite-state machine with its own state. Existing edge detection methods are complex to implement and therefore have long processing times, and they tend to produce unsatisfactory results for noisy images with cluttered backgrounds. Some methods are so simple that they miss part of the true edges, while others are so complex that they produce spurious edges that are not wanted. The advantage of the cellular computing approach is that it enhances the edge detection process by reducing complexity and processing time, and its inherent parallelism makes the method fast and computationally simple. MATLAB results of the proposed method on images from the Mendeley dataset are compared with the results of existing edge detection techniques by evaluating MSE and PSNR values. The results indicate promising performance of the proposed algorithm. Compared visually, the proposed method identifies edges more clearly and is able to discard spurious edges even in cluttered and complex images.
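As a concrete illustration, one simple cellular-automaton edge rule marks a cell as an edge when its binarized state differs from any of its eight neighbors; all cells update in parallel from local information only, which is the source of the speed advantage. This rule and the PSNR scorer below are hedged stand-ins, since the paper's exact transition rule is not reproduced here.

```python
import numpy as np

def ca_edge_step(binary):
    """binary: 2D 0/1 array of cell states; returns a 0/1 edge map."""
    padded = np.pad(binary, 1, mode="edge")
    edge = np.zeros(binary.shape, dtype=bool)
    h, w = binary.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            edge |= neighbor != binary        # any differing neighbor -> edge
    return edge.astype(np.uint8)

def psnr(result, reference, peak=255.0):
    """PSNR in dB between two images of equal shape (8-bit peak assumed)."""
    mse = np.mean((result.astype(float) - reference.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```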


2019 ◽  
Vol 36 (5) ◽  
pp. 1599-1606 ◽  
Author(s):  
Yizhi Wang ◽  
Congchao Wang ◽  
Petter Ranefall ◽  
Gerard Joey Broussard ◽  
Yinxue Wang ◽  
...  

Abstract Motivation Synapses are essential to neural signal transmission. Therefore, quantification of synapses and related neurites from images is vital to gain insights into the underlying pathways of brain functionality and diseases. Despite the wide availability of synaptic punctum imaging data, several issues impede satisfactory quantification of these structures by current tools. First, the antibodies used for labeling synapses are not perfectly specific to synapses; these antibodies may also appear in neurites or other cell compartments. Second, the brightness of different neurites and synaptic puncta is heterogeneous due to variation in antibody concentration and synapse-intrinsic differences. Third, images often have a low signal-to-noise ratio due to constraints of experimental facilities and the availability of sensitive antibodies. These issues make the detection of synapses challenging and necessitate a new tool to quantify synapses easily and accurately. Results We present an automatic probability-principled synapse detection algorithm and integrate it into our synapse quantification tool SynQuant. Derived from the theory of order statistics, our method controls the false discovery rate and improves the power of detecting synapses. SynQuant is unsupervised, works for both 2D and 3D data, and can handle multiple staining channels. Through extensive experiments on one synthetic and three real datasets with ground-truth annotation or manual labeling, SynQuant was demonstrated to outperform peer specialized unsupervised synapse detection tools as well as generic spot detection methods. Availability and implementation Java source code, Fiji plug-in, and test data are available at https://github.com/yu-lab-vt/SynQuant. Supplementary information Supplementary data are available at Bioinformatics online.
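SynQuant's statistics are derived from order-statistics theory; as a generic stand-in for the false-discovery-rate control step (not the tool's actual procedure), the sketch below applies the Benjamini-Hochberg rule to per-candidate punctum p-values.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of candidate puncta accepted at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order]
    thresholds = alpha * np.arange(1, len(p) + 1) / len(p)
    below = ranked <= thresholds
    accepted = np.zeros(len(p), dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank passing the rule
        accepted[order[:k + 1]] = True
    return accepted
```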

