A neural network based monitoring system for safety in shared work-space human-robot collaboration

Author(s):  
Hemant Rajnathsing ◽  
Chenggang Li

Purpose – Human–robot collaboration (HRC) is on the rise in a bid for improved flexibility in production cells. In the context of an overlapping workspace between a human operator and an industrial robot, the major cause for concern is the safety of the former.
Design/methodology/approach – In light of recent advances and trends, this paper proposes a monitoring system for shared-workspace HRC which supplements the robot, locates the human operator and ensures that the robot respects a minimum safe distance from its human partner at all times. The monitoring system consists of four neural networks: an object detector, two neural networks responsible for assessing the detections and a simple, custom speech recognizer.
Findings – It was observed that, with due consideration of the production cell, it is possible to create excellent data sets which result in promising performances of the neural networks. Each neural network can be further improved by feeding its mistakes back into the data set as new training examples. Thus, the whole monitoring system can achieve reliable performance.
Practical implications – Success of the proposed framework may make any industrial robot suitable for use in HRC.
Originality/value – This paper proposes a system comprising neural networks for the most part, and it looks at a digital representation of the workspace from a different angle. The exclusive use of neural networks is an attempt to propose a system that can be deployed relatively easily in industrial settings, as neural networks can be fine-tuned for adjustments.
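The core safety rule — halt the robot whenever the located operator comes within the minimum safe distance — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the coordinate convention and the 1.0 m threshold are assumptions:

```python
import math

def safety_check(robot_pos, human_pos, min_safe_dist=1.0):
    """Return 'stop' when the detected human is inside the minimum safe
    distance of the robot, 'run' otherwise. Positions are (x, y, z) in
    metres; the 1.0 m default threshold is an assumed placeholder."""
    return "stop" if math.dist(robot_pos, human_pos) < min_safe_dist else "run"
```

In the proposed system, `human_pos` would come from the object detector's assessed detections rather than a direct sensor reading.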

2019 ◽  
Vol 52 (4) ◽  
pp. 854-863 ◽  
Author(s):  
Brendan Sullivan ◽  
Rick Archibald ◽  
Jahaun Azadmanesh ◽  
Venu Gopal Vandavasi ◽  
Patricia S. Langan ◽  
...  

Neutron crystallography offers enormous potential to complement structures from X-ray crystallography by clarifying the positions of low-Z elements, namely hydrogen. Macromolecular neutron crystallography, however, remains limited, in part owing to the challenge of integrating peak shapes from pulsed-source experiments. To advance existing software, this article demonstrates the use of machine learning to refine peak locations, predict peak shapes and yield more accurate integrated intensities when applied to whole data sets from a protein crystal. The artificial neural network, based on the U-Net architecture commonly used for image segmentation, is trained using about 100 000 simulated training peaks derived from strong peaks. After 100 training epochs (a round of training over the whole data set broken into smaller batches), training converges and achieves a Dice coefficient of around 65%, in contrast to just 15% for negative control data sets. Integrating whole peak sets using the neural network yields improved intensity statistics compared with other integration methods, including k-nearest neighbours. These results demonstrate, for the first time, that neural networks can learn peak shapes and be used to integrate Bragg peaks. It is expected that integration using neural networks can be further developed to increase the quality of neutron, electron and X-ray crystallography data.
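The Dice coefficient quoted above measures overlap between a predicted peak mask and the ground-truth mask; a minimal stdlib-only version over flat binary masks (an illustrative sketch, not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists.
    Returns 1.0 for perfect overlap, 0.0 for none."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0
```

A score of ~65% thus means roughly two-thirds of predicted and true peak voxels coincide, against ~15% for the negative controls.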


mSphere ◽  
2020 ◽  
Vol 5 (5) ◽  
Author(s):  
Artur Yakimovich ◽  
Moona Huttunen ◽  
Jerzy Samolej ◽  
Barbara Clough ◽  
Nagisa Yoshida ◽  
...  

ABSTRACT The use of deep neural networks (DNNs) for analysis of complex biomedical images shows great promise but is hampered by a lack of large verified data sets for rapid network evolution. Here, we present a novel strategy, termed “mimicry embedding,” for rapid application of neural network architecture-based analysis of pathogen imaging data sets. Embedding of a novel host-pathogen data set, such that it mimics a verified data set, enables efficient deep learning using high expressive capacity architectures and seamless architecture switching. We applied this strategy across various microbiological phenotypes, from superresolved viruses to in vitro and in vivo parasitic infections. We demonstrate that mimicry embedding enables efficient and accurate analysis of two- and three-dimensional microscopy data sets. The results suggest that transfer learning from pretrained network data may be a powerful general strategy for analysis of heterogeneous pathogen fluorescence imaging data sets. IMPORTANCE In biology, the use of deep neural networks (DNNs) for analysis of pathogen infection is hampered by a lack of large verified data sets needed for rapid network evolution. Artificial neural networks detect handwritten digits with high precision thanks to large data sets, such as MNIST, that allow nearly unlimited training. Here, we developed a novel strategy we call mimicry embedding, which allows artificial intelligence (AI)-based analysis of variable pathogen-host data sets. We show that deep learning can be used to detect and classify single pathogens based on small differences.


2018 ◽  
Vol 6 (3) ◽  
pp. 134-146
Author(s):  
Daniil Igorevich Mikhalchenko ◽  
Arseniy Ivin ◽  
Dmitrii Malov

Purpose – Single-image depth prediction extracts depth information from an ordinary 2D image without special sensors such as laser sensors or stereo cameras. The purpose of this paper is to solve the problem of obtaining depth information from a 2D image by applying deep neural networks (DNNs).
Design/methodology/approach – Several experiments and topologies are presented: a DNN that uses three inputs (a sequence of 2D images from a video stream) and a DNN that uses only one input. However, no data set exists that contains a video stream with a corresponding depth map for every frame, so a technique for creating data sets with the Blender software is presented in this work.
Findings – Beyond the insufficient amount of available data sets, the problem of overfitting was encountered. Although the created models work on the data sets, they remain overfitted and cannot predict a correct depth map for arbitrary images not included in the data sets.
Originality/value – Existing techniques of depth-image creation are tested using DNNs.


Author(s):  
Paolo Massimo Buscema ◽  
William J Tastle

Data sets collected independently using the same variables can be compared using a new artificial neural network called the Artificial neural network What If Theory (AWIT). Given a data set that is deemed the standard reference for some object (e.g. a flower, industry, disease or galaxy), other data sets can be compared against it to identify their proximity to the standard. Thus, data that do not lend themselves well to traditional methods of analysis could reveal new perspectives or views of the data and, potentially, new perceptions of novel and innovative solutions. This method comes out of the field of artificial intelligence, particularly artificial neural networks, and utilizes both machine learning and pattern recognition to deliver an innovative analysis.


Kybernetes ◽  
2014 ◽  
Vol 43 (7) ◽  
pp. 1114-1123 ◽  
Author(s):  
Chih-Fong Tsai ◽  
Chihli Hung

Purpose – Credit scoring is important for financial institutions in order to accurately predict the likelihood of business failure. Related studies have shown that machine learning techniques, such as neural networks, outperform many statistical approaches to solving this type of problem, and advanced machine learning techniques, such as classifier ensembles and hybrid classifiers, provide better prediction performance than single machine-learning-based classification techniques. However, it is not known which type of advanced classification technique performs better in terms of financial distress prediction. The paper aims to discuss these issues.
Design/methodology/approach – This paper compares neural network ensembles and hybrid neural networks over three benchmark credit scoring data sets: the Australian, German and Japanese data sets.
Findings – The experimental results show that hybrid neural networks and neural network ensembles outperform the single neural network. Although hybrid neural networks perform slightly better than neural network ensembles in terms of prediction accuracy and errors on two of the data sets, there is no significant difference between the two types of prediction models.
Originality/value – The originality of this paper lies in comparing two types of advanced classification techniques, i.e. hybrid and ensemble learning techniques, in terms of financial distress prediction.
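A classifier ensemble of the kind compared here aggregates the outputs of several base models; one common combination rule is majority voting. The sketch below is a hypothetical illustration — the paper does not specify its combination rule:

```python
from collections import Counter

def ensemble_predict(classifiers, x):
    """Majority vote over base classifiers; each classifier is a callable
    mapping a feature vector to a class label (e.g. 0 = distressed,
    1 = healthy)."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

A hybrid classifier, by contrast, chains stages (e.g. clustering to filter the training data, then a single network), rather than voting among peers.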



Author(s):  
Jungeui Hong ◽  
Elizabeth A. Cudney ◽  
Genichi Taguchi ◽  
Rajesh Jugulum ◽  
Kioumars Paryani ◽  
...  

The Mahalanobis-Taguchi System is a diagnostic and predictive method for analyzing patterns in multivariate cases. The goal of this study is to compare the ability of the Mahalanobis-Taguchi System and a neural network to discriminate using small data sets. We examine the discriminant ability as a function of data set size using an application area where reliable data are publicly available. The study uses the Wisconsin Breast Cancer data set with nine attributes and one class variable.
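The Mahalanobis distance at the heart of the MTS scales each deviation from the reference-group mean by the inverse covariance of that group. A two-variable sketch with an explicit 2×2 matrix inverse is shown below; this is illustrative only — the study itself uses nine attributes:

```python
def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance sqrt(dx^T C^-1 dx) for two variables,
    inverting the 2x2 covariance matrix C directly."""
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    # Quadratic form dx^T * inv(C) * dx
    d2 = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
          + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return d2 ** 0.5
```

With the identity covariance this reduces to the ordinary Euclidean distance, which is the degenerate case of uncorrelated, unit-variance attributes.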


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract – This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features on a given building type. In the experiments that are described in this paper, more than 150,000 input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.
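Interpolated locations in a learnt latent distribution are typically obtained by blending two encoded latent vectors. A minimal sketch of linear interpolation follows; the paper does not state which interpolation scheme it uses, so this is an assumed, generic formulation:

```python
def lerp(z0, z1, t):
    """Linearly interpolate between latent vectors z0 and z1.
    t=0 returns z0, t=1 returns z1; intermediate t yields the hybrid
    codes that a VAE decoder would reconstruct into blended geometry."""
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]
```

Decoding `lerp(encode(a), encode(b), 0.5)` is what produces the hybrid wireframes between two building types.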


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Abhijat Arun Abhyankar ◽  
Harish Kumar Singla

Purpose – The purpose of this study is to compare the predictive performance of the hedonic multivariate regression model with the probabilistic neural network (PNN)-based general regression neural network (GRNN) model of housing prices in Pune, India.
Design/methodology/approach – Data on 211 properties across Pune city, India are collected. The price per square foot is the dependent variable, whereas distances from important landmarks such as the railway station, fort, university, airport, hospital, temple, parks, solid waste site and stadium are the independent variables, along with a dummy for amenities. The data are analyzed using a hedonic-type multivariate regression model and a GRNN. The GRNN divides the entire data set into two sets, a training set and a testing set, and establishes a functional relationship between the dependent and target variables based on the probability density function of the training data (Alomair and Garrouch, 2016).
Findings – While comparing the performance of the hedonic multivariate regression model and the PNN-based GRNN, the study finds that the output variable (i.e. price) is accurately predicted by the GRNN model. All 42 observations of the testing set are correctly classified, giving an accuracy rate of 100%. According to Cortez (2015), a value close to 100% indicates that the model can correctly classify the test data set. Further, the root mean square error (RMSE) for the final testing of the GRNN model is 0.089, compared with 0.146 for the hedonic multivariate regression model. A smaller RMSE indicates that the model contains smaller errors and is a better fit. Therefore, it is concluded that the GRNN is the better model for predicting housing price functions. The distance from the solid waste site has the highest variable sensitivity impact on housing prices (22.59%), followed by distance from the university (17.78%) and the fort (17.73%).
Research limitations/implications – The study, being a "case," is restricted to a particular geographic location; hence, its findings cannot be generalized. Further, as the objective is restricted to comparing the predictive performance of two models, the scope of work focuses only on "location-specific hedonic factors" as determinants of housing prices.
Practical implications – The study opens up a new dimension for scholars working in the field of housing prices/valuation. The authors do not rule out traditional statistical techniques such as ordinary least squares regression but strongly recommend that scholars use advanced statistical methods to develop the domain. The application of the GRNN, artificial intelligence or other techniques such as autoregressive integrated moving average and vector autoregression modeling helps analyze the data in a much more sophisticated manner and yields more robust and conclusive evidence.
Originality/value – To the best of the authors' knowledge, this is the first case study that compares the predictive performance of the hedonic multivariate regression model with the PNN-based GRNN model for housing prices in India.
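A GRNN of the kind used here predicts by kernel-weighted averaging of the training targets (Specht's formulation). The sketch below is minimal and generic; the bandwidth `sigma` and the toy data in the usage are assumptions, not values from the study:

```python
import math

def grnn_predict(x, train_X, train_y, sigma=1.0):
    """General regression neural network output: a Gaussian-kernel-weighted
    average of the training targets, where nearer training points get
    exponentially larger weights."""
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
        for xi in train_X
    ]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

Here each training row would hold the landmark distances and the amenities dummy, with price per square foot as the target; `sigma` is the single smoothing parameter tuned on the training set.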


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Tressy Thomas ◽  
Enayat Rajabi

Purpose – The primary aim of this study is to review studies of the novel approaches proposed for data imputation, particularly in the machine learning (ML) area, along dimensions including the type of method, experimentation setup and evaluation metrics used. This ultimately provides an understanding of how well the proposed frameworks are evaluated and what type and ratio of missingness the proposals address. The review questions in this study are: (1) What ML-based imputation methods were studied and proposed during 2010–2020? (2) How are the experimentation setup, characteristics of data sets and missingness employed in these studies? (3) What metrics were used for the evaluation of imputation methods?
Design/methodology/approach – The review went through the standard identification, screening and selection process. The initial search of electronic databases for missing value imputation (MVI) based on ML algorithms returned 2,883 papers. Most of the papers at this stage did not describe an MVI technique relevant to this study. Titles were first scanned for relevance, and 306 papers were identified as appropriate. Upon reviewing the abstracts, 151 papers not eligible for this study were dropped, leaving 155 research papers for full-text review. Of these, 117 papers are used to assess the review questions.
Findings – This study shows that clustering- and instance-based algorithms are the most frequently proposed MVI methods. Percentage of correct prediction (PCP) and root mean square error (RMSE) are the most used evaluation metrics in these studies. For experimentation, the majority of studies sourced their data sets from publicly available repositories. A common approach is to take the complete data set as the baseline and evaluate the effectiveness of imputation on test data sets with artificially induced missingness. Data set size and missingness ratio varied across the experiments, while missing data type and mechanism pertain to the capability of imputation. Computational expense is a concern, and experimentation using large data sets appears to be a challenge.
Originality/value – It is understood from the review that there is no single universal solution to the missing data problem. Variants of ML approaches work well with missingness depending on the characteristics of the data set. Most of the methods reviewed lack generalization with regard to applicability. Another concern related to applicability is the complexity of an algorithm's formulation and implementation. Imputations based on k-nearest neighbours (kNN) and clustering algorithms, which are simple and easy to implement, are popular across various domains.
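A kNN imputation of the kind the review finds popular fills a missing value with the mean of that column among the k complete rows nearest in the remaining columns. The stdlib-only sketch below is a generic illustration; the row layout, distance metric and k are assumptions:

```python
def knn_impute(rows, target_idx, k=2):
    """Replace None in column target_idx with the mean of that column
    over the k rows nearest (squared Euclidean distance) in the other,
    fully observed columns. Mutates and returns rows."""
    complete = [r for r in rows if r[target_idx] is not None]
    for r in rows:
        if r[target_idx] is None:
            nearest = sorted(
                complete,
                key=lambda c: sum(
                    (a - b) ** 2
                    for i, (a, b) in enumerate(zip(r, c))
                    if i != target_idx
                ),
            )[:k]
            r[target_idx] = sum(c[target_idx] for c in nearest) / k
    return rows
```

Its simplicity is exactly the point the review makes: there is nothing to train, only a distance and a neighbourhood size to choose.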

