Model-Free Segmentation and Grasp Selection of Unknown Stacked Objects

Author(s):  
Umar Asif ◽  
Mohammed Bennamoun ◽  
Ferdous Sohel
Author(s):  
Xinshui Yu ◽  
Zhaohui Yang ◽  
Kunling Song ◽  
Tianxiang Yu ◽  
Bozhi Guo

The distribution and parameters of the random variables are an essential input to conventional reliability analysis methods such as the Monte Carlo method: they must be known before these methods can be used, but they are often hard or impossible to obtain. The model-free sampling technique offers a way to recover the distribution of the random variables, but the accuracy of the extended sample it generates is insufficient. This paper presents an improved model-free sampling technique, based on Bootstrap methods, which increases the accuracy of the extended sample and reduces the number of iterations. In this improved technique, both the selection of the initial sample points and the generation of the iterative sample are improved. In addition, a center distance criterion, which accounts for the local characteristics of the extended sample, is added to the generating criterion of the dissimilarity measure. The effectiveness of the improved method is illustrated through several numerical examples.
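The bootstrap idea behind such a technique can be illustrated with a minimal sketch. This is not the authors' improved method (it has no iterative generation, dissimilarity measure, or center distance criterion), just a plain smoothed-bootstrap extension of a small initial sample; the function name and the Silverman-type bandwidth are our assumptions.

```python
import numpy as np

def smoothed_bootstrap_extend(initial_sample, n_extended, rng=None):
    """Extend a small sample by smoothed bootstrap resampling: draw with
    replacement from the observed points, then add Gaussian noise scaled
    by a rule-of-thumb bandwidth so the extended sample is not restricted
    to the original support points."""
    rng = np.random.default_rng(rng)
    x = np.asarray(initial_sample, dtype=float)
    n = x.size
    # Silverman's rule-of-thumb bandwidth for the smoothing kernel
    h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
    draws = rng.choice(x, size=n_extended, replace=True)
    return draws + rng.normal(0.0, h, size=n_extended)

# Usage: extend 30 observed points to 1000 pseudo-samples
rng = np.random.default_rng(0)
observed = rng.normal(5.0, 2.0, size=30)
extended = smoothed_bootstrap_extend(observed, 1000, rng=rng)
```

The smoothing step is what makes this usable as a stand-in for an unknown distribution: a plain (unsmoothed) bootstrap can only ever reproduce the observed values.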


2014 ◽  
Vol 369 (1655) ◽  
pp. 20130474 ◽  
Author(s):  
Etienne Koechlin

The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans give rise to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour.


Author(s):  
FAYIN LI ◽  
HARRY WECHSLER

The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability here refers to the ability to deploy full-fledged and operational biometric engines, while robustness refers to handling adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The architecture proposed is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
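The transductive core (proximity via strangeness, typicality via rank-based p-values) can be sketched in a few lines. This is our minimal illustration under the standard k-NN strangeness definition, not the paper's implementation; the function names and the synthetic two-class data are ours.

```python
import numpy as np

def strangeness(x, same_class, other_class, k=3):
    """k-NN strangeness: summed distances to the k nearest same-class
    examples over summed distances to the k nearest other-class examples.
    Larger values mean x is 'stranger' for its putative class."""
    d_same = np.sort(np.linalg.norm(same_class - x, axis=1))[:k]
    d_other = np.sort(np.linalg.norm(other_class - x, axis=1))[:k]
    return d_same.sum() / d_other.sum()

def p_value(test_strangeness, baseline_strangeness):
    """Transductive p-value: rank of the test strangeness among baseline
    strangeness values (higher strangeness means lower typicality)."""
    b = np.asarray(baseline_strangeness)
    return (np.sum(b >= test_strangeness) + 1) / (b.size + 1)

# Usage: two synthetic classes; a point near class A is more typical of A
# than a point halfway between the classes.
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.5, size=(20, 2))
B = rng.normal(5.0, 0.5, size=(20, 2))
baseline = [strangeness(A[i], np.delete(A, i, axis=0), B) for i in range(len(A))]
typical = p_value(strangeness(np.zeros(2), A, B), baseline)
atypical = p_value(strangeness(np.full(2, 2.5), A, B), baseline)
```

In the paper's architecture such p-values drive acceptance/rejection decisions; here they only rank two query points by typicality.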


2016 ◽  
Vol 9 (3) ◽  
Author(s):  
Haider Zaman ◽  
Anjum Jalal ◽  
Zulfiqar Haider

The use of inferior vena caval (IVC) filters is an accepted method for preventing pulmonary embolism, especially in cases where anticoagulation is contraindicated. However, these filters have numerous potential complications, some of which, such as thrombosis and migration of the filter, can be life threatening. The "Antheor" filter was designed to prevent such complications. Unfortunately, we encountered one patient in whom it was unsuccessful in this regard. We therefore believe that there is no ideal filter, and that the selection of a filter requires thorough knowledge of the limitations of each individual brand.


2004 ◽  
Vol 851 ◽  
Author(s):  
M. Moser ◽  
S. Heltzel ◽  
C. O. A. Semprimoschnig ◽  
G. Garcia Martin

ABSTRACT
Future science missions of the European Space Agency (ESA) to the inner part of the solar system will require the use of materials in an extreme radiation and temperature environment. A major concern in the selection of these materials is their thermal behaviour and thermal stability. This paper shows ways to assess the thermal endurance of polymers by kinetic modelling. Two commonly used kinetic models, one following the ASTM E 1641 and ASTM E 1877 standards and the other following the Model Free Kinetics (MFK) approach, are presented and compared with each other using the example of two competing polyimide films, Kapton HN® of DuPont and Upilex S® of Ube Industries, which were tested within ESA's critical materials technology program.
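Both approaches compared above belong to the isoconversional ("model-free") family of kinetic methods. As a hedged illustration of the underlying idea, not ESA's procedure or the standards' exact protocol, the sketch below implements the Ozawa-Flynn-Wall approximation on which ASTM E 1641 is based: at a fixed conversion level, ln(beta) is linear in 1/T with slope proportional to the activation energy. All numbers are synthetic and the function name is ours.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def ofw_activation_energy(heating_rates, temps_at_alpha):
    """Ozawa-Flynn-Wall isoconversional estimate: at a fixed conversion,
    ln(beta) ~ const - 1.052 * Ea / (R * T), so a straight-line fit of
    ln(beta) against 1/T across heating rates yields Ea directly."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_at_alpha),
                          np.log(np.asarray(heating_rates)), 1)
    return -slope * R / 1.052  # Ea in J/mol

# Synthetic round trip: generate the temperatures at one conversion level
# from an assumed Ea, then recover it from the ln(beta) vs 1/T line.
Ea_true = 250e3                           # J/mol (assumed, illustrative)
betas = np.array([2.0, 5.0, 10.0, 20.0])  # heating rates, K/min
C = 60.0                                  # arbitrary intercept of ln(beta)
temps = 1.052 * Ea_true / (R * (C - np.log(betas)))
Ea_est = ofw_activation_energy(betas, temps)
```

Repeating the fit over a grid of conversion levels gives the Ea(alpha) curve from which thermal endurance is extrapolated.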


Author(s):  
Nina M. van Mastrigt ◽  
Katinka van der Kooij ◽  
Jeroen B. J. Smeets

Abstract
When learning a movement based on binary success information, one is more variable following failure than following success. Theoretically, the additional variability post-failure might reflect exploration of possibilities to obtain success. When average behavior is changing (as in learning), variability can be estimated from differences between subsequent movements. Can one estimate exploration reliably from such trial-to-trial changes when studying reward-based motor learning? To answer this question, we tried to reconstruct the exploration underlying learning as described by four existing reward-based motor learning models. We simulated learning for various learner and task characteristics. If we simply determined the additional change post-failure, estimates of exploration were sensitive to learner and task characteristics. We identified two pitfalls in quantifying exploration based on trial-to-trial changes. Firstly, performance-dependent feedback can cause correlated samples of motor noise and exploration on successful trials, which biases exploration estimates. Secondly, the trial relative to which trial-to-trial change is calculated may also contain exploration, which causes underestimation. As a solution, we developed the additional trial-to-trial change (ATTC) method. By moving the reference trial one trial back and subtracting trial-to-trial changes following specific sequences of trial outcomes, exploration can be estimated reliably for the three models that explore based on the outcome of only the previous trial. Since ATTC estimates are based on a selection of trial sequences, this method requires many trials. In conclusion, if exploration is a binary function of previous trial outcome, the ATTC method allows for a model-free quantification of exploration.
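The naive estimator and its bias can be reproduced in a small simulation. This is our toy illustration of the problem the abstract describes, not the ATTC method itself; the learner, parameter values, and function names are all our assumptions.

```python
import numpy as np

def simulate(n_trials=20000, motor_sd=1.0, explore_sd=2.0, tol=1.5, rng=None):
    """Toy reward-based learner: aim at target 0 with Gaussian motor noise;
    after a failure, choose the next aim point by zero-mean exploration."""
    rng = np.random.default_rng(rng)
    reaches = np.empty(n_trials)
    success = np.empty(n_trials, dtype=bool)
    aim = 0.0
    for t in range(n_trials):
        reaches[t] = aim + rng.normal(0.0, motor_sd)
        success[t] = abs(reaches[t]) < tol
        aim = 0.0 if success[t] else rng.normal(0.0, explore_sd)
    return reaches, success

def naive_exploration_sd(reaches, success):
    """Naive estimate: the extra variance of trial-to-trial changes after
    failure, relative to changes after success, as a standard deviation."""
    d = np.diff(reaches)
    extra = d[~success[:-1]].var() - d[success[:-1]].var()
    return np.sqrt(max(extra, 0.0))

# With exploration (sd = 2) versus without: the naive estimate is clearly
# positive even when explore_sd = 0, because conditioning on success or
# failure also selects biased samples of motor noise.
est = naive_exploration_sd(*simulate(explore_sd=2.0, rng=0))
est0 = naive_exploration_sd(*simulate(explore_sd=0.0, rng=0))
```

The nonzero `est0` is exactly the first pitfall named in the abstract: outcome-dependent feedback makes the post-success and post-failure samples of motor noise differ even in the absence of exploration.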


2020 ◽  
Vol 35 (4) ◽  
pp. 1879-1894
Author(s):  
Jonas M. B. Haslbeck ◽  
Dirk U. Wulff

Abstract
We improve instability-based methods for the selection of the number of clusters k in cluster analysis by developing a corrected clustering distance that corrects for the unwanted influence of the distribution of cluster sizes on cluster instability. We show that our corrected instability measure outperforms current instability-based measures across the whole sequence of possible k, overcoming the limitations of current instability-based methods for large k. We also compare, for the first time, model-based and model-free approaches to determining cluster instability and find their performance to be comparable. We make our method available in the R-package .
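A minimal sketch of the uncorrected, model-free instability baseline that such a corrected measure improves on, assuming k-means as the clusterer and a pair-counting co-membership distance between labelings; this is our illustration, not the authors' R implementation, and all names are ours.

```python
import numpy as np

def kmeans_centers(X, k, iters=30, n_init=20, rng=None):
    """Minimal Lloyd's k-means with random restarts; returns the centres
    of the lowest-SSE run."""
    rng = np.random.default_rng(rng)
    best_centers, best_sse = None, np.inf
    for _ in range(n_init):
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(iters):
            labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        sse = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(1).sum()
        if sse < best_sse:
            best_centers, best_sse = centers, sse
    return best_centers

def pair_disagreement(l1, l2):
    """Permutation-invariant clustering distance: fraction of point pairs
    on which the two labelings disagree about co-membership."""
    same1 = l1[:, None] == l1[None, :]
    same2 = l2[:, None] == l2[None, :]
    mask = np.triu(np.ones(same1.shape, dtype=bool), 1)
    return np.mean(same1[mask] != same2[mask])

def instability(X, k, n_pairs=8, rng=None):
    """Uncorrected model-free instability: fit k-means on two bootstrap
    resamples, label all of X with each solution's nearest centre, and
    average the co-membership disagreement over bootstrap pairs."""
    rng = np.random.default_rng(rng)
    n, scores = len(X), []
    for _ in range(n_pairs):
        labelings = []
        for _ in range(2):
            boot = X[rng.choice(n, size=n, replace=True)]
            centers = kmeans_centers(boot, k, rng=rng.integers(1 << 31))
            labelings.append(((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1))
        scores.append(pair_disagreement(*labelings))
    return float(np.mean(scores))

# Usage: three well-separated blobs; instability is lowest near the true k
rng = np.random.default_rng(0)
blobs = np.concatenate([rng.normal(c, 0.5, size=(30, 2))
                        for c in ([0, 0], [10, 0], [0, 10])])
inst3 = instability(blobs, 3, rng=1)
inst4 = instability(blobs, 4, rng=2)
```

The size-distribution bias that the corrected distance removes does not show up in this balanced toy example; it matters when cluster sizes are unequal and k is large.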


2021 ◽  
Vol 50 (2) ◽  
pp. 187-209 ◽  
Author(s):  
Florian Pein ◽  
Benjamin Eltzner ◽  
Axel Munk

Abstract
Analysis of patch-clamp recordings is often a challenging task. We give practical guidance on how such recordings can be analyzed using the model-free multiscale idealization methodologies JSMURF, JULES, and HILDE. We provide an operational manual on how to use the accompanying software, available as an R-package and as a graphical user interface. This includes selection of the right approach and tuning of parameters. We also discuss the advantages and disadvantages of model-free approaches in comparison to hidden Markov model approaches and explain how they complement each other.


2019 ◽  
Author(s):  
Adam Morris ◽  
Fiery Andrews Cushman

The alignment of habits with model-free reinforcement learning (MF RL) is a success story for computational models of decision making, and MF RL has been applied to explain phasic dopamine responses, working memory gating, drug addiction, moral intuitions, and more. Yet, the role of MF RL has recently been challenged by an alternate model (model-based selection of chained action sequences) that produces similar behavioral and neural patterns. Here, we present two experiments that dissociate MF RL from this prominent alternative, and present unconfounded empirical support for the role of MF RL in human decision making. Our results also demonstrate that people are simultaneously using model-based selection of action sequences, thus demonstrating two distinct mechanisms of habitual control in a common experimental paradigm. These findings clarify the nature of habits and help solidify MF RL's central position in models of human behavior.
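The MF RL mechanism at issue can be illustrated with a textbook delta-rule learner on a two-armed bandit. This is a generic sketch of model-free value caching, not the authors' experimental paradigm; parameters and names are ours.

```python
import numpy as np

def mf_rl_choices(rewards, alpha=0.3, beta=5.0, rng=None):
    """Model-free (delta-rule) learner on a two-armed bandit: cached
    action values are updated from received rewards alone, with no
    model of task structure."""
    rng = np.random.default_rng(rng)
    q = np.zeros(2)
    choices = np.empty(len(rewards), dtype=int)
    for t, r in enumerate(rewards):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # softmax, 2 arms
        c = int(rng.random() < p1)
        choices[t] = c
        q[c] += alpha * (r[c] - q[c])  # reward prediction-error update
    return choices

# Usage: arm 1 pays off with probability 0.8, arm 0 with probability 0.2;
# the cached values come to favour arm 1 without any model of the task.
rng = np.random.default_rng(0)
rewards = (rng.random((500, 2)) < np.array([0.2, 0.8])).astype(float)
choices = mf_rl_choices(rewards, rng=1)
```

The model-based alternative discussed in the abstract would instead plan over a learned task model; behaviorally the two can look alike, which is why the dissociation requires dedicated experiments.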

