Study on dual peg-in-hole insertion using constraints formed in the environment

Author(s):  
Jianhua Su ◽  
Rui Li ◽  
Hong Qiao ◽  
Jing Xu ◽  
Qinglin Ai ◽  
...  

Purpose The purpose of this paper is to develop a dual peg-in-hole insertion strategy. Dual peg-in-hole insertion is among the most common tasks in manufacturing. Most previous work develops the insertion strategy in a two- or three-dimensional space, assuming the initial yaw angle is zero and considering only the roll and pitch angles. However, in some cases the yaw angle cannot be ignored because of the pose uncertainty of the peg in the gripper. Therefore, the insertion strategy needs to be designed in a higher-dimensional configuration space. Design/methodology/approach In this paper, the authors handle the insertion problem by converting it into several sub-problems based on the attractive region formed by the constraints. The existence of the attractive region in the high-dimensional configuration space is first discussed. Then, the construction of the high-dimensional attractive region and its sub-attractive regions in low-dimensional spaces is proposed. The robotic insertion strategy can therefore be designed in the subspaces to eliminate some of the uncertainties between the dual pegs and dual holes. Findings Dual peg-in-hole insertion is realized without force sensors. The proposed strategy is also used to demonstrate precision dual peg-in-hole insertion, where the clearance between the pegs and the holes is about 0.02 mm. Practical implications The sensor-less insertion strategy does not increase the cost of the assembly system and can also be applied to dual peg-in-hole insertion. Originality/value Theoretical and experimental analyses of dual peg-in-hole insertion are presented without the use of force sensors.
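As a rough intuition for the attractive-region idea (a toy illustration, not the authors' algorithm), the sketch below treats the environmental constraints as a bowl-shaped "height" function over a two-dimensional peg configuration: compliant motion under a steady pushing force simply descends this function, so the peg converges to the hole center and positional uncertainty is removed without force sensing. The height function and step rule are hypothetical.

```python
# Toy illustration of the attractive-region intuition (not the authors' method):
# a bowl-shaped constraint "height" over the peg's 2D configuration; compliant
# motion descends it, so the state converges to the hole center sensor-lessly.
import numpy as np

def constraint_height(q, hole_center=np.array([0.0, 0.0])):
    # Hypothetical bowl-shaped height of the peg tip over the chamfered hole region.
    return 0.5 * np.sum((q - hole_center) ** 2)

def compliant_step(q, step=0.1, eps=1e-4):
    # Numerical gradient of the height function; compliance moves the peg downhill.
    grad = np.array([(constraint_height(q + eps * e) - constraint_height(q - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    return q - step * grad

q = np.array([0.8, -0.5])   # initial peg position with uncertainty
for _ in range(100):
    q = compliant_step(q)
print(q)                    # converges toward the hole center (0, 0)
```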

Author(s):  
Alyssa Ney

This chapter proposes a solution to the macro-object problem for wave function realism. This is the problem of how a wave function in a high-dimensional space may come to constitute the low-dimensional, macroscopic objects of our experience. The solution takes place in several stages. First, it is argued that the wave function's invariance under certain transformations may give us reason to regard the three-dimensional configurations corresponding to those symmetries with ontological seriousness. Second, it is shown how the wave function may decompose into low-dimensional microscopic parts. Interestingly, this reveals mereological relationships in which parts and wholes inhabit distinct spatial frameworks. Third, it is shown how these parts may come to compose macroscopic objects.


2019 ◽  
Vol 85 (18) ◽  
Author(s):  
Yutaka Yawata ◽  
Tatsunori Kiyokawa ◽  
Yuhki Kawamura ◽  
Tomohiro Hirayama ◽  
Kyosuke Takabe ◽  
...  

ABSTRACT Here we analyzed the innate fluorescence signatures of single microbial cells, within both clonal and mixed populations of microorganisms. We found that even very similarly shaped cells differ noticeably in their autofluorescence features and that the innate fluorescence signatures change dynamically with growth phases. We demonstrated that machine learning models can be trained with a data set of single-cell innate fluorescence signatures to annotate cells according to their phenotypes and physiological status, for example, distinguishing a wild-type Aspergillus nidulans cell from its nitrogen metabolism mutant counterpart and log-phase cells from stationary-phase cells of Pseudomonas putida. We developed a minimally invasive method (confocal reflection microscopy-assisted single-cell innate fluorescence [CRIF] analysis) to optically extract and catalog the innate cellular fluorescence signatures of individual live microbial cells in a three-dimensional space. This technique represents a step forward from traditional techniques, which analyze innate fluorescence signatures at the population level and necessitate a clonal culture. Since the fluorescence signature is an innate property of a cell, our technique allows the prediction of the type or physiological status of intact and tag-free single cells within a cell population distributed in a three-dimensional space. Our study presents a blueprint for a streamlined cell analysis in which one can directly assess the potential phenotype of each single cell in a heterogeneous population by its autofluorescence signature under a microscope, without cell tagging. IMPORTANCE A cell’s innate fluorescence signature is an assemblage of fluorescence signals emitted by diverse biomolecules within a cell. It is known that the innate fluorescence signature reflects various cellular properties and physiological statuses; thus, it can serve as a rich source of information for cell characterization as well as cell identification. However, conventional techniques focus on the analysis of innate fluorescence signatures at the population level, not at the single-cell level, and thus necessitate a clonal culture. In the present study, we developed a technique to analyze the innate fluorescence signature of a single microbial cell. Using this novel method, we found that even very similarly shaped cells differ noticeably in their autofluorescence features, and the innate fluorescence signature changes dynamically with growth phases. We also demonstrated that different cell types can be classified accurately within a mixed population under a microscope at the resolution of a single cell, relying solely on innate fluorescence signature information. We suggest that single-cell autofluorescence signature analysis is a promising tool to directly assess the taxonomic or physiological heterogeneity within a microbial population, without cell tagging.
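As a rough sketch of the classification step described above (not the authors' CRIF pipeline; the data, feature bins and labels below are synthetic stand-ins), a per-cell spectral signature can be fed to an off-the-shelf classifier to annotate phenotype or growth phase:

```python
# Hedged sketch: train a classifier on per-cell autofluorescence signatures
# to predict phenotype or growth phase. Feature bins and labels are synthetic
# stand-ins; the paper's actual CRIF pipeline may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Toy stand-in for CRIF output: one spectral signature (e.g. 32 emission bins)
# per segmented cell, with a label such as wild type vs. mutant.
n_cells, n_bins = 400, 32
X = rng.gamma(shape=2.0, scale=1.0, size=(n_cells, n_bins))
y = rng.integers(0, 2, size=n_cells)          # 0 = wild type, 1 = mutant (synthetic)
X[y == 1, :8] *= 1.5                          # pretend mutants shift low-wavelength bins

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```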


2018 ◽  
Vol 29 (5) ◽  
pp. 776-808 ◽  
Author(s):  
Ruth N. Bolton ◽  
Janet R. McColl-Kennedy ◽  
Lilliemay Cheung ◽  
Andrew Gallan ◽  
Chiara Orsingher ◽  
...  

Purpose The purpose of this paper is to explore innovations in customer experience at the intersection of the digital, physical and social realms. It explicitly considers experiences involving new technology-enabled services, such as digital twins and automated social presence (i.e. virtual assistants and service robots). Design/methodology/approach Future customer experiences are conceptualized within a three-dimensional space – low to high digital density, low to high physical complexity and low to high social presence – yielding eight octants. Findings The conceptual framework identifies eight “dualities,” or specific challenges connected with integrating digital, physical and social realms that challenge organizations to create superior customer experiences in both business-to-business and business-to-consumer markets. The eight dualities are opposing strategic options that organizations must reconcile when co-creating customer experiences under different conditions. Research limitations/implications A review of theory demonstrates that little research has been conducted at the intersection of the digital, physical and social realms. Most studies focus on one realm, with occasional reference to another. This paper suggests an agenda for future research and gives examples of fruitful ways to study connections among the three realms rather than in a single realm. Practical implications This paper provides guidance for managers in designing and managing customer experiences that the authors believe will need to be addressed by the year 2050. Social implications This paper discusses important societal issues, such as individual and societal needs for privacy, security and transparency. It sets out potential avenues for service innovation in these areas. Originality/value The conceptual framework integrates knowledge about customer experiences in digital, physical and social realms in a new way, with insights for future service research, managers and public policy makers.


Author(s):  
Samuel Melton ◽  
Sharad Ramanathan

Abstract Motivation Recent technological advances produce a wealth of high-dimensional descriptions of biological processes, yet extracting meaningful insight and mechanistic understanding from these data remains challenging. For example, in developmental biology, the dynamics of differentiation can now be mapped quantitatively using single-cell RNA sequencing, yet it is difficult to infer molecular regulators of developmental transitions. Here, we show that discovering informative features in the data is crucial for statistical analysis as well as making experimental predictions. Results We identify features based on their ability to discriminate between clusters of the data points. We define a class of problems in which linear separability of clusters is hidden in a low-dimensional space. We propose an unsupervised method to identify the subset of features that define a low-dimensional subspace in which clustering can be conducted. This is achieved by averaging over discriminators trained on an ensemble of proposed cluster configurations. We then apply our method to single-cell RNA-seq data from mouse gastrulation, and identify 27 key transcription factors (out of 409 total), 18 of which are known to define cell states through their expression levels. In this inferred subspace, we find clear signatures of known cell types that eluded classification prior to discovery of the correct low-dimensional subspace. Availability and implementation https://github.com/smelton/SMD. Supplementary information Supplementary data are available at Bioinformatics online.
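The core idea, as a hedged sketch (the authors' implementation in the linked repository may differ in its proposal and scoring details), is to propose many candidate cluster configurations, train a linear discriminator for each, and score features by their average contribution to those discriminators:

```python
# Hedged sketch: score features by how strongly they contribute to linear
# discriminators trained on an ensemble of proposed cluster configurations.
# Details (proposal mechanism, discriminator, scoring) are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def feature_scores(X, n_configs=50, n_clusters=2, random_state=0):
    rng = np.random.default_rng(random_state)
    scores = np.zeros(X.shape[1])
    for _ in range(n_configs):
        # Propose a cluster configuration (here: k-means on a random feature subset).
        subset = rng.choice(X.shape[1], size=max(2, X.shape[1] // 4), replace=False)
        labels = KMeans(n_clusters=n_clusters, n_init=5,
                        random_state=int(rng.integers(10**6))).fit_predict(X[:, subset])
        # Train a linear discriminator on all features and accumulate |weights|.
        clf = LinearSVC(C=0.1, dual=False).fit(X, labels)
        scores += np.abs(clf.coef_).sum(axis=0)
    return scores / n_configs

# Usage: keep the top-k features and re-cluster in that low-dimensional subspace.
# X = ...  (cells x genes matrix); top = np.argsort(feature_scores(X))[::-1][:27]
```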


2019 ◽  
Vol 15 (3) ◽  
pp. 346-358
Author(s):  
Luciano Barbosa

Purpose Matching instances of the same entity, a task known as entity resolution, is a key step in the process of data integration. This paper aims to propose a deep learning network that learns different representations of Web entities for entity resolution. Design/methodology/approach To match Web entities, the proposed network learns the following representations of entities: embeddings, which are vector representations of the words in the entities in a low-dimensional space; convolutional vectors from a convolutional layer, which capture short-distance patterns in word sequences in the entities; and bag-of-words vectors, created by a BoW layer that learns weights for words in the vocabulary based on the task at hand. Given a pair of entities, the similarity between their learned representations is used as a feature for a binary classifier that identifies a possible match. In addition to those features, the classifier also uses a modification of inverse document frequency for pairs, which identifies discriminative words in pairs of entities. Findings The proposed approach was evaluated on two commercial and two academic entity resolution benchmarking data sets. The results show that the proposed strategy outperforms previous approaches on the commercial data sets, which are more challenging, and achieves results similar to its competitors on the academic data sets. Originality/value No previous work has used a single deep learning framework to learn different representations of Web entities for entity resolution.
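A minimal sketch of such a pair matcher follows, assuming PyTorch and illustrative layer sizes (not the paper's exact architecture): it combines an averaged word embedding, max-pooled convolutional features and a task-weighted bag of words, and feeds their pairwise cosine similarities plus a hand-crafted pair-IDF feature to a small binary classifier.

```python
# Hedged sketch of a pair-of-entities matcher in the spirit of the paper:
# three learned representations per entity, pairwise similarities, and a
# binary classifier. Layer names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, conv_channels=64, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel, padding=kernel // 2)
        self.bow_weights = nn.Parameter(torch.zeros(vocab_size))  # learned per-word weights

    def forward(self, tokens):                       # tokens: (batch, seq_len) word ids
        e = self.emb(tokens)                         # (batch, seq, emb_dim)
        emb_vec = e.mean(dim=1)                      # averaged embedding representation
        conv_vec = F.relu(self.conv(e.transpose(1, 2))).max(dim=2).values  # short-range patterns
        bow = torch.zeros(tokens.size(0), self.bow_weights.numel(), device=tokens.device)
        bow.scatter_add_(1, tokens, torch.ones_like(tokens, dtype=torch.float))
        bow_vec = bow * torch.sigmoid(self.bow_weights)   # task-weighted bag of words
        return emb_vec, conv_vec, bow_vec

class PairMatcher(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.enc = EntityEncoder(vocab_size)
        self.clf = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, a, b, pair_idf):               # pair_idf: (batch,) hand-crafted feature
        sims = [F.cosine_similarity(x, y, dim=1)
                for x, y in zip(self.enc(a), self.enc(b))]
        feats = torch.stack(sims + [pair_idf], dim=1)
        return torch.sigmoid(self.clf(feats)).squeeze(1)   # match probability
```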


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hui Liu ◽  
Tinglong Tang ◽  
Jake Luo ◽  
Meng Zhao ◽  
Baole Zheng ◽  
...  

Purpose This study aims to address the challenge of training a detection model for a robot to detect abnormal samples in an industrial environment, where abnormal patterns are very rare. Design/methodology/approach The authors propose a new model with double encoder–decoder (DED) generative adversarial networks to detect anomalies when the model is trained without any abnormal patterns. The DED approach is used to map high-dimensional input images to a low-dimensional space, through which the latent variables are obtained. Minimizing the change in the latent variables during the training process helps the model learn the data distribution. Anomaly detection is achieved by calculating the distance between the two low-dimensional vectors obtained from the two encoders. Findings The proposed method achieves better accuracy and F1 score than traditional anomaly detection models. Originality/value A new architecture with a DED pipeline is designed to capture the distribution of images in the training process so that anomalous samples are accurately identified. A new weight function is introduced to control the proportion of losses in the encoding reconstruction and adversarial phases to achieve better results. An anomaly detection model is proposed to achieve superior performance against prior state-of-the-art approaches.
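A minimal sketch of the double encoder–decoder idea is given below, assuming PyTorch, 28 × 28 single-channel inputs and illustrative layer sizes; the adversarial training loop and the paper's loss-weighting function are omitted. Anomalies are scored by the distance between the latent vectors produced by the two encoders.

```python
# Hedged sketch: encode the input, decode a reconstruction, re-encode the
# reconstruction, and score anomalies by the distance between the two latent
# vectors. Dimensions and layers are illustrative assumptions.
import torch
import torch.nn as nn

def make_encoder(latent_dim=32):
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
        nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        nn.Flatten(), nn.Linear(32 * 7 * 7, latent_dim))

class DEDAnomalyDetector(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc1 = make_encoder(latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7), nn.ReLU(), nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())
        self.enc2 = make_encoder(latent_dim)

    def forward(self, x):                       # x: (batch, 1, 28, 28)
        z1 = self.enc1(x)
        x_hat = self.dec(z1)
        z2 = self.enc2(x_hat)
        return x_hat, z1, z2

    def anomaly_score(self, x):
        _, z1, z2 = self.forward(x)
        return (z1 - z2).pow(2).mean(dim=1)     # larger distance => more anomalous
```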


2020 ◽  
Author(s):  
Alexander Feigin ◽  
Aleksei Seleznev ◽  
Dmitry Mukhin ◽  
Andrey Gavrilov ◽  
Evgeny Loskutov

We suggest a new method for the construction of data-driven dynamical models from observed multidimensional time series. The method is based on a recurrent neural network (RNN) with a specific structure, which allows for the joint reconstruction of both a low-dimensional embedding for the dynamical components in the data and an operator describing their low-dimensional evolution. The key element of the method is a Bayesian optimization of both the model structure and the hypothesis about the data-generating law, which is needed for constructing the cost function for model learning. The form of the model we propose allows us to construct a stochastic dynamical system of moderate dimension that reproduces the dynamical properties of the original high-dimensional system. An advantage of the proposed method is the data-adaptive nature of the RNN model: it is based on adjustable nonlinear elements and has an easily scalable structure. The combination of the RNN with the Bayesian optimization procedure efficiently provides the model with statistically significant nonlinearity and dimension.

The model optimization procedure is designed to detect long-term connections between the system’s states – the memory of the system – and the cost function used for model learning is constructed to take this factor into account. In particular, in the absence of interaction between the dynamical components and noise, the method provides an unbiased reconstruction of the hidden deterministic system. In the opposite case, when the noise has a strong impact on the dynamics, the method yields a model in the form of a nonlinear stochastic map determining a Markovian process with memory. The Bayesian approach used for selecting both the optimal model structure and the appropriate cost function allows statistically significant inferences to be drawn about the dynamical signal in the data as well as its interaction with the noise components.

A data-driven model derived from a relatively short time series of the QG3 model – a high-dimensional nonlinear system producing chaotic behavior – is shown to serve as a good simulator for the QG3 low-frequency variability (LFV) components. The statistically significant recurrent states of the QG3 model, i.e. the well-known teleconnections in the Northern Hemisphere, are all reproduced by the obtained model. Moreover, the statistics of the model’s residence times near these states are very close to the corresponding statistics of the original QG3 model. These results demonstrate that the method can be useful in modeling the variability of the real atmosphere.

The work was supported by the Russian Science Foundation (Grant No. 19-42-04121).
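The general model form described above might be sketched as follows (a hedged illustration assuming PyTorch; the actual architecture, cost function and Bayesian optimization of structure are not reproduced): a learned projection to a low-dimensional latent space plus a recurrent nonlinear operator that evolves the latent state under additive stochastic forcing.

```python
# Hedged sketch of the model form: a low-dimensional embedding of the
# observations and a recurrent stochastic map that evolves the latent state.
# Architecture choices and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LatentStochasticRNN(nn.Module):
    def __init__(self, obs_dim, latent_dim=5, hidden_dim=32, noise_std=0.1):
        super().__init__()
        self.encode = nn.Linear(obs_dim, latent_dim)     # low-dimensional embedding
        self.cell = nn.GRUCell(latent_dim, hidden_dim)   # memory of past states
        self.step = nn.Linear(hidden_dim, latent_dim)    # next latent state
        self.decode = nn.Linear(latent_dim, obs_dim)
        self.noise_std = noise_std

    def forward(self, x_seq):                            # x_seq: (batch, T, obs_dim)
        batch, T, _ = x_seq.shape
        h = x_seq.new_zeros(batch, self.cell.hidden_size)
        preds = []
        z = self.encode(x_seq[:, 0])
        for t in range(T - 1):
            h = self.cell(z, h)
            z = self.step(h) + self.noise_std * torch.randn_like(z)  # stochastic map
            preds.append(self.decode(z))
        return torch.stack(preds, dim=1)                 # predictions for x_1..x_{T-1}
```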


1996 ◽  
Vol 8 (6) ◽  
pp. 1321-1340 ◽  
Author(s):  
Joseph J. Atick ◽  
Paul A. Griffin ◽  
A. Norman Redlich

The human visual system is proficient in perceiving three-dimensional shape from the shading patterns in a two-dimensional image. How it does this is not well understood and continues to be a question of fundamental and practical interest. In this paper we present a new quantitative approach to shape-from-shading that may provide some answers. We suggest that the brain, through evolution or prior experience, has discovered that objects can be classified into lower-dimensional object-classes as to their shape. Extraction of shape from shading is then equivalent to the much simpler problem of parameter estimation in a low-dimensional space. We carry out this proposal for an important class of three-dimensional (3D) objects: human heads. From an ensemble of several hundred laser-scanned 3D heads, we use principal component analysis to derive a low-dimensional parameterization of head shape space. An algorithm for solving shape-from-shading using this representation is presented. It works well even on real images where it is able to recover the 3D surface for a given person, maintaining facial detail and identity, from a single 2D image of his face. This algorithm has applications in face recognition and animation.
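As a toy illustration of the two ingredients (not the authors' data or rendering model), the sketch below builds a low-dimensional PCA parameterization from an ensemble of depth maps and then recovers the few shape coefficients of a target surface from its Lambertian shading by direct optimization:

```python
# Hedged illustration: (1) PCA parameterization of an ensemble of surfaces
# (here toy depth maps), and (2) shape-from-shading cast as estimating the
# few PCA coefficients that best reproduce an observed shaded image under a
# simple Lambertian model. Data and optimizer are stand-ins, not the paper's.
import numpy as np
from scipy.optimize import minimize

def lambertian(depth, light=np.array([0.3, 0.3, 0.9])):
    gy, gx = np.gradient(depth)                       # surface gradients
    normals = np.dstack([-gx, -gy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ light, 0, None)          # shading image

# (1) Low-dimensional shape space from an ensemble of depth maps (toy data).
rng = np.random.default_rng(0)
heads = rng.normal(size=(300, 40 * 40))               # 300 "heads", 40x40 depth maps
mean = heads.mean(axis=0)
_, _, Vt = np.linalg.svd(heads - mean, full_matrices=False)
basis = Vt[:10]                                       # first 10 principal components

# (2) Estimate the 10 coefficients of a target surface from its shading alone.
target_depth = (mean + rng.normal(size=10) @ basis).reshape(40, 40)
observed = lambertian(target_depth)

def cost(coeffs):
    depth = (mean + coeffs @ basis).reshape(40, 40)
    return np.sum((lambertian(depth) - observed) ** 2)

fit = minimize(cost, np.zeros(10), method="Nelder-Mead")
```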


2015 ◽  
Vol 26 (09) ◽  
pp. 1550103
Author(s):  
Yifang Ma ◽  
Zhiming Zheng

The evolution of networks or dynamical systems is controlled by many parameters in a high-dimensional space, and it is crucial to extract the reduced, dominant ones in a low-dimensional space. Here we consider the network ensemble, introduce a matrix resolvent scale function and apply it within a spectral approach to obtain the similarity relations between each pair of networks. The concept of diffusion maps is used to extract the principal parameters, and we point out that the reduced-dimensional principal parameters are captured by the low-order eigenvectors of the diffusion matrix of the network ensemble. We validate our results using two classical network ensembles and one dynamical network sequence generated by a cooperative Achlioptas growth process, in which an abrupt structural transition is captured by our method. Our method provides a potential route to the pursuit of invisible control parameters of complex systems.
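A hedged sketch of the diffusion-map step follows; a simple spectral distance between adjacency spectra stands in for the resolvent-based similarity used in the paper, and the reduced parameters are read off the low-order eigenvectors of the row-normalized diffusion matrix:

```python
# Hedged sketch: pairwise similarities between networks in an ensemble
# (spectral distance as a stand-in for the resolvent-based measure), a
# row-normalized diffusion matrix, and reduced coordinates from its
# low-order eigenvectors.
import numpy as np
import networkx as nx

def spectral_signature(G, k=20):
    eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
    return eig[:k]

def diffusion_coordinates(graphs, eps=1.0, n_coords=2):
    sig = np.array([spectral_signature(G) for G in graphs])
    d2 = np.sum((sig[:, None, :] - sig[None, :, :]) ** 2, axis=2)   # pairwise distances
    K = np.exp(-d2 / eps)                                           # similarity kernel
    P = K / K.sum(axis=1, keepdims=True)                            # diffusion matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vecs[:, order[1:n_coords + 1]].real                      # skip trivial eigenvector

# Usage: an ensemble of Erdos-Renyi graphs with varying density; the first
# diffusion coordinate should track the hidden control parameter p.
graphs = [nx.gnp_random_graph(100, p, seed=i) for i, p in enumerate(np.linspace(0.05, 0.3, 40))]
coords = diffusion_coordinates(graphs)
```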

