Autonomous feature type selection based on environment using expectation maximization in self-localization

2018 ◽  
Vol 15 (6) ◽  
pp. 172988141881470
Author(s):  
Nezih Ergin Özkucur ◽  
H Levent Akın

Self-localization in autonomous robots is one of the fundamental issues in the development of intelligent robots, and processing raw sensory information into useful features is an integral part of this problem. In a typical scenario, there are several choices for the feature extraction algorithm, each with its weaknesses and strengths depending on the characteristics of the environment. In this work, we introduce a localization algorithm that captures the quality of a feature type based on the local environment and performs a soft selection of feature types across different regions. A batch expectation–maximization algorithm is developed for both discrete and Monte Carlo localization models, exploiting the probabilistic pose estimates of the robot without requiring ground-truth poses and treating the different observation types as black-box algorithms. We tested our method in simulations, on data collected in an indoor environment with a custom robot platform, and on a public data set. The results are compared with those of the individual feature types as well as with a naive fusion strategy.
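As a rough illustration of the soft selection idea, the sketch below runs a batch EM update over the mixing weights of K feature types, treating each type's observation likelihoods as a black box. The function name and the (N, K) likelihood layout are assumptions for illustration, not the paper's interface.

```python
import numpy as np

def em_feature_weights(likelihoods, n_iter=50, tol=1e-6):
    """Estimate soft selection weights over feature types via batch EM.

    likelihoods: (N, K) array; likelihoods[i, k] is the likelihood of
    observation i under feature type k (each type is a black box).
    Returns a length-K vector of mixing weights.
    """
    n, k = likelihoods.shape
    w = np.full(k, 1.0 / k)              # start from uniform weights
    for _ in range(n_iter):
        # E-step: responsibility of each feature type for each observation
        joint = likelihoods * w          # (N, K)
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate the mixing weights
        w_new = resp.mean(axis=0)
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w
```

In the paper's setting such weights would be estimated per region, so that a feature type unreliable in one part of the environment can still dominate elsewhere.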

2017 ◽  
Vol 10 (1) ◽  
pp. 87-113
Author(s):  
Lance Hannon

The City of Philadelphia has faced significant litigation related to racial and ethnic disparities in stop-and-frisk practices. The Philadelphia Police Department has made much of its stop-and-frisk data publicly available in the name of transparency and to facilitate independent investigation (the data describe over 350,000 pedestrian stops with over 45,000 pedestrian frisks for 2014–2015). The current analysis made use of this public data set to explore whether the individual-level relationship between Black racial classification and being subjected to a frisk can be explained by associated neighborhood-level factors such as the violent crime rate. Additionally, the present analysis examined whether variation in the violent crime rate is similarly related to the likelihood of being frisked in predominantly Black versus non-Black areas, and whether area racial composition affects the likelihood that an officer’s decision to frisk will be supported by uncovered contraband. The results were consistent with theories of neighborhood racial stigma. In particular, the violent crime rate was a significantly weaker predictor of being frisked in Black areas, and, net of a variety of factors at the individual and neighborhood levels, Black citizens and Black places experienced a disproportionate number of frisks in which no contraband was found and no arrest was made.
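A minimal sketch of the kind of model such an analysis implies, assuming hypothetical column names (frisked, black, black_area, violent_crime_rate); the interaction term tests whether violent crime predicts frisks differently in predominantly Black areas. This is not the author's specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the public Philadelphia stop data
# uses different field names and requires merging in neighborhood covariates.
stops = pd.read_csv("philadelphia_stops_2014_2015.csv")

# Logistic regression of being frisked on individual and neighborhood
# factors, with a crime-rate x area-composition interaction.
model = smf.logit(
    "frisked ~ black + age + male + violent_crime_rate * black_area",
    data=stops,
).fit()
print(model.summary())
```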


2020 ◽  
Vol 2020 (6) ◽  
pp. 71-1-71-7
Author(s):  
Christian Kapeller ◽  
Doris Antensteiner ◽  
Svorad Štolc

Industrial machine vision applications frequently employ Photometric Stereo (PS) methods to detect fine surface defects on objects with challenging surface properties. To achieve highly precise results, acquisition setups with a vast number of strobed illumination angles are required. The time-consuming nature of such an undertaking renders it impractical for most industrial applications. We overcome these limitations by carefully tailoring the required light setup to the specific application. Our novel approach facilitates the design of optimized acquisition setups for inline PS inspection systems. The optimal positions of the light sources are derived from only a few representative material samples, without the need for extensive amounts of training data. We formulate an energy function that constructs the illumination setup yielding the highest PS accuracy. The setup can be tailored for fast acquisition speed or for cost efficiency. A thorough evaluation of the performance of our approach is given on a public data set, measured by the mean angular error (MAE) of the surface normals and the root mean square (RMS) error of the albedos. Our results show that the optimized PS setups can deliver reconstruction performance close to the ground truth while requiring only a few acquisitions.
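For reference, the two evaluation metrics named above can be computed as follows. This is a generic sketch assuming unit-length normal maps of shape (H, W, 3), not the authors' evaluation code.

```python
import numpy as np

def mean_angular_error(n_est, n_gt):
    """Mean angular error (degrees) between estimated and ground-truth
    unit surface-normal maps of shape (H, W, 3)."""
    dot = np.clip(np.sum(n_est * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dot)).mean()

def albedo_rms(a_est, a_gt):
    """Root mean square error between estimated and ground-truth albedos."""
    return np.sqrt(np.mean((a_est - a_gt) ** 2))
```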


2014 ◽  
Vol 19 (4) ◽  
pp. 37-55 ◽  
Author(s):  
Sayan Mandal ◽  
Samit Biswas ◽  
Amit Kumar Das ◽  
Bhabatosh Chanda

Research on document image analysis has been actively pursued over the last few decades, and services like OCR, vectorization of drawings/graphics, and various types of form processing are very common. Handwritten documents, old historical documents, and documents captured with cameras are now the subjects of active research. However, research on another very important type of paper document, the map document, suffers from the inherent complexity of maps and from the unavailability of benchmark public data sets. This paper presents a new data set, the Land Map Image Database (LMIDb), which consists of a variety of land map images (446 images at present and growing; scanned at 200/300 dpi in TIF format) and the corresponding ground truth. Using semi-automatic tools, the non-text parts of the images are removed, and the text-only ground truth is also stored in the database. This paper also presents a classification strategy for map images, with which the maps in the database are automatically classified into Political (Po), Physical (Ph), Resource (R), and Topographic (T) maps. The automatic classification of maps helps index the images in LMIDb for archival purposes and for easy retrieval of the right map to obtain the appropriate geographical information. Classification accuracy is also tested on the proposed data set, and the result is encouraging.
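The paper's feature set is not reproduced here; as a minimal sketch of how the four-way map classification could be wired up, the following uses a global color histogram descriptor and an SVM. All names and choices are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_histogram(image, bins=8):
    """Global RGB histogram as a simple map-image descriptor.
    image: (H, W, 3) uint8 array; returns a normalized bins**3 vector."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),
        bins=(bins,) * 3,
        range=((0, 256),) * 3,
    )
    return hist.ravel() / hist.sum()

# X: stacked descriptors, one row per map image;
# y: labels in {"Po", "Ph", "R", "T"}.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); clf.predict(X_test)
```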


2010 ◽  
Vol 2010 ◽  
pp. 1-16 ◽  
Author(s):  
Qihong Duan ◽  
Zhiping Chen ◽  
Dengfu Zhao

In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain that models the failure time data as the first time the chain reaches the absorbing state. Assume that a system is described by the method of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and n failure time data. The devised algorithm only needs to compute the exponential of m × m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
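The building block the algorithm evaluates repeatedly is the phase-type density of the absorption time, which is exactly where the matrix exponentials above appear. A minimal sketch, assuming alpha is the initial distribution over the m transient states and S is the (upper triangular) sub-generator; the EM updates themselves are omitted.

```python
import numpy as np
from scipy.linalg import expm

def failure_time_density(t, alpha, S):
    """Density of the absorption (failure) time of a CTMC at time t.

    alpha: (m,) initial distribution over the m transient states.
    S:     (m, m) sub-generator over the transient states; with an
           upper triangular state ordering this is the matrix whose
           exponential the algorithm computes in each iteration.
    """
    s0 = -S.sum(axis=1)            # exit rates into the absorbing state
    return float(alpha @ expm(S * t) @ s0)
```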


Filomat ◽  
2018 ◽  
Vol 32 (19) ◽  
pp. 6575-6598
Author(s):  
Mouna Zitouni ◽  
Mourad Zribi ◽  
Afif Masmoudi

This paper is concerned with a class of exponential dispersion distributions. We focus particularly on mixture models, which extend the Gaussian distribution. Parameter estimation for mixture distributions is an important task in statistical processing. To estimate the parameter vector, we propose a formulation of the Expectation-Maximization (EM) algorithm for exponential dispersion mixture distributions. We also develop a hybrid algorithm, the Expectation-Maximization and Method of Moments (EMM) algorithm. Under mild regularity conditions, several convergence results for the EMM algorithm are obtained. Simulation studies demonstrate the robustness of the EMM and the strong consistency of the EMM sequence as the data set size and the number of iterations tend to infinity.
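A sketch of one EMM iteration under illustrative assumptions: a standard E-step computes responsibilities, while the M-step substitutes weighted method-of-moments estimates. The variance function V(μ) = μ² used here is purely for illustration; each exponential dispersion family implies its own variance function.

```python
import numpy as np

def emm_step(x, pi, mean, disp, pdf):
    """One EMM iteration for a K-component exponential dispersion mixture.

    x:    (N,) data; pi, mean, disp: (K,) current parameters.
    pdf(x, mean_k, disp_k) -> (N,) component densities (a black box).
    """
    k = len(pi)
    dens = np.stack([pi[j] * pdf(x, mean[j], disp[j]) for j in range(k)],
                    axis=1)                              # (N, K)
    resp = dens / dens.sum(axis=1, keepdims=True)        # E-step
    nk = resp.sum(axis=0)
    pi_new = nk / len(x)
    # M-step via weighted method of moments instead of maximization.
    mean_new = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mean_new) ** 2).sum(axis=0) / nk
    disp_new = var / mean_new**2     # assumes V(mu) = mu**2 (illustration)
    return pi_new, mean_new, disp_new
```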


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yao Shen ◽  
Zhipeng Yan

To study the drug resistance problem caused by transporters, we leveraged multiple large-scale public data sets of drug sensitivity, cell line genetic and transcriptional profiles, and gene silencing experiments. Through systematic integration of these data sets, we built various machine learning models to predict the difference between cell viability upon drug treatment and upon the silencing of its target across the same cell lines. More than 50% of the models built with the same data set or with independent data sets successfully predicted the testing set with significant correlation to the ground truth data. Features selected by our models were also significantly enriched in known drug transporters annotated in DrugBank for more than 60% of the models. Novel drug-transporter interactions were discovered, such as lapatinib and gefitinib with ABCA1, olaparib and NVP-ADW742 with ABCC3, and gefitinib and AZ628 with SLC4A4. Furthermore, we identified ABCC3, SLC12A7, SLCO4A1, SERPINA1, and SLC22A3 as potential transporters for erlotinib, three of which are also significantly more highly expressed in patients who were resistant to therapy in a clinical trial.
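A hedged sketch of the core construction, not the authors' pipeline: the target is the per-cell-line gap between drug-treatment viability and target-silencing viability, regressed on gene expression, with feature importances nominating candidate transporters. All argument names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def rank_candidate_transporters(drug_viab, silenced_viab, expression,
                                n_top=10):
    """Rank genes by importance for predicting the viability gap.

    drug_viab, silenced_viab: Series indexed by cell line.
    expression: DataFrame, cell lines (rows) x genes (columns).
    """
    # Difference between drug response and target-knockdown response.
    target = (drug_viab - silenced_viab).dropna()
    X = expression.loc[target.index]
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, target)
    # Highly ranked genes are candidate transporters for the drug.
    imp = pd.Series(model.feature_importances_, index=X.columns)
    return imp.sort_values(ascending=False).head(n_top)
```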


2020 ◽  
Vol 16 (2) ◽  
pp. 121-132
Author(s):  
Dani Chu ◽  
Matthew Reyers ◽  
James Thomson ◽  
Lucas Yifan Wu

Tracking data in the National Football League (NFL) is a sequence of spatial-temporal measurements that varies in length depending on the duration of the play. In this paper, we demonstrate how model-based curve clustering of observed player trajectories can be used to identify the routes run by eligible receivers on offensive passing plays. We use a Bernstein polynomial basis function to represent cluster centers, and the Expectation Maximization algorithm to learn the route labels for each of the 33,967 routes run on the 6,963 passing plays in the data set. With few assumptions and no pre-existing labels, we are able to closely recreate the standard route tree with our algorithm. We go on to suggest ideas for new potential receiver metrics that account for receiver deployment and movement common throughout the league. The resulting route labels can also be paired with film to enable streamlined queries of game film.
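For concreteness: a Bernstein basis of fixed degree evaluated at normalized play time t ∈ [0, 1] gives a common design matrix, so each variable-length route reduces to a small set of least-squares coefficients that a clustering algorithm can operate on. The sketch below assumes this representation and is not the authors' code.

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(t, degree):
    """Bernstein basis matrix: rows index time points in [0, 1],
    columns index the degree + 1 basis polynomials."""
    i = np.arange(degree + 1)
    return comb(degree, i) * t[:, None] ** i * (1 - t[:, None]) ** (degree - i)

def fit_route(xy, degree=5):
    """Least-squares Bernstein coefficients for one (T, 2) trajectory,
    after rescaling play time to [0, 1]."""
    t = np.linspace(0.0, 1.0, len(xy))
    B = bernstein_basis(t, degree)
    coef, *_ = np.linalg.lstsq(B, xy, rcond=None)
    return coef    # (degree + 1, 2) control points
```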


1998 ◽  
Vol 1643 (1) ◽  
pp. 95-109 ◽  
Author(s):  
A. Raja Shekharan ◽  
Gonzalo R. Rada ◽  
Gary E. Elkins ◽  
William Y. Bellinger

In the Long-Term Pavement Performance (LTPP) program, 35-mm, black and white, continuous-strip photographs are used as a permanent record of pavement distress development for archival purposes and to quantify the distress severity and extent for pavement performance analysis. The traditional method of interpreting distress from LTPP film utilizes a relatively small image projected onto a digitizing tablet. Quality control checks performed on the interpreted data found that some low-severity types of distress, identified from larger magnified images projected onto a wall or projection screen, could not be seen in the smaller image used for distress interpretation. The variability in distresses interpreted directly off the large-format wall projection was assessed through analysis of interpretations performed on six asphalt concrete and six portland cement concrete pavement sections used in the LTPP distress rater accreditation workshops. The data set included distress ratings from eight individuals, four two-person rater teams, and an experienced rater team. Also available were distress ratings performed in the field by the experienced rater team, which serve as reference values representing the best estimate of ground truth. Statistical tests show that the film-interpreted distresses from individual raters exhibit much larger variability than those from the rating teams. The most significant contributor to this finding is outlier observations in which one of the individual raters had significantly different ratings than the rest of the group. The spread among the rating teams was much lower. The film-interpreted distresses from the experienced group correlated very well with the field-derived reference values.
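A small sketch of how the reported spread comparison could be quantified, assuming a long-format table of ratings with hypothetical column names; the paper's actual statistical tests are not reproduced here.

```python
import pandas as pd

def rating_spread(ratings):
    """Standard deviation of ratings per section and distress type,
    split by rater kind, for comparing individuals against teams.

    ratings: DataFrame with hypothetical columns
      ["section", "distress_type", "rater_kind", "value"],
    where rater_kind is "individual", "team", or "field_reference".
    """
    grouped = ratings.groupby(["section", "distress_type", "rater_kind"])
    return grouped["value"].std().unstack("rater_kind")
```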


Geophysics ◽  
2021 ◽  
pp. 1-56
Author(s):  
Nico Skibbe ◽  
Thomas Günther ◽  
Mike Müller-Petke

As one main objective in hydrogeophysics, we aim to describe hydraulic properties of the subsurface in at least two dimensions. However, due to the limited resolution and ambiguity of the individual methods, the resulting images often remain blurry. We present a methodology to combine two measuring methods, magnetic resonance tomography (MRT) and electrical resistivity tomography (ERT). To this end, we extend a structurally coupled cooperative inversion (SCCI) scheme to three parameters. This results in clearer images of the three parameters, water content, relaxation time, and electrical resistivity, and thus a less ambiguous hydrogeophysical interpretation. Synthetic models with circular and bar-like structures demonstrate the method's effectiveness and show how the parameters of the coupling equation affect the images and how they can be chosen. Furthermore, we demonstrate the influence of resistivity structures on the MRT kernel function. We apply the method to a roll-along MRT data set and a detailed ERT profile. As a final result, a hydraulic conductivity image is produced. Known ground-penetrating radar reflectors act as ground truth and prove that the obtained images are improved by the structural coupling.
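The paper's exact coupling equation is not reproduced here; as an assumed illustration of the structural coupling idea, a weight of the form a / (|g| + a) + b relaxes the smoothness constraint in one model wherever the other model's roughness g indicates structure, so that boundaries can coincide.

```python
import numpy as np

def coupling_weights(roughness_other, a=0.1, b=0.1):
    """Structural coupling weights (assumed functional form).

    Down-weights the smoothness constraint of one model where the other
    model's roughness is large, encouraging coincident boundaries.
    roughness_other: array of model-gradient magnitudes along mesh edges.
    """
    g = np.abs(roughness_other)
    return a / (g + a) + b
```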


Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of the higher-level tissue architecture. The computational approach overcomes mechanical limitations, such as hysteresis and backlash, of microscope stages. It also automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope.

The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks. Even when no such “natural” landmarks are available, image processing can often provide useful landmarks.
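The article's landmark matching is not reproduced here; as a compact, swapped-in illustration of recovering the pairwise offset between two overlapping partial views, the sketch below uses phase correlation on equally sized image tiles.

```python
import numpy as np

def phase_correlation_offset(a, b):
    """Estimate the translation between two same-shape overlapping views
    by phase correlation (an alternative to explicit landmark matching)."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```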

