A cloud integrated strategy for reconfigurable manufacturing systems

2020 ◽  
Vol 28 (4) ◽  
pp. 305-318
Author(s):  
Bo Guo ◽  
Fu-Shin Lee ◽  
Chen-I Lin ◽  
Yun-Qing Lu

Manufacturing industries nowadays need to reconfigure their production lines promptly to adapt to rapidly changing markets. At the same time, carrying out system reconfigurations requires managing the numerous types of manufacturing apparatus involved. Traditional, mutually incompatible manufacturing systems delivered by proprietary vendors usually increase manufacturing costs and prolong development time. This paper presents a novel RMS framework that implements a Redis master/slave server mechanism to integrate various CNC manufacturing apparatus, hardware control means, and data exchange protocols through developed configuration code. In the RMS framework, each manufacturing apparatus or accessory is represented as an object, and information on recognized CNC control panel image features, tuned apparatus parameters, communication formats, operation procedures, and control APIs is stored in the Redis master cloud server database. By applying machine vision techniques to acquired CNC controller panel images, the system effectively identifies instantaneous CNC machining states and response messages once the embedded image features are recognized. When a system reconfiguration of the manufacturing resources is required, the system issues commands from Redis local client servers to retrieve the stored information from the Redis master cloud servers, in which the resources for registered CNC machines, robots, and built-in accessories are maintained securely. The system then uses the collected information locally to reconfigure the involved manufacturing resources and start manufacturing immediately, and is thus capable of responding promptly to rapidly revised orders in a competitive market. In a prototyped RMS architecture, the proposed approach takes advantage of recognized visual feedback, obtained using an invariant image feature extraction algorithm, and effectively commands an industrial robot to carry out the demanded actions on a CNC control panel, as a regular operator does daily in front of the CNC machine.
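The abstract gives no code; as a rough illustration of the described master/client registry pattern only, the following Python sketch uses the redis-py client with hypothetical key names, fields, and hosts to register a machine profile on the master and retrieve it from a local client for reconfiguration.

```python
# Minimal sketch (not the authors' implementation) of a Redis-backed resource
# registry for an RMS; host names, key names, and fields are assumptions.
import json
import redis

# Master cloud server holds the authoritative machine registry.
master = redis.Redis(host="master.cloud.example", port=6379, db=0)

def register_machine(machine_id: str, profile: dict) -> None:
    """Store a CNC machine/robot profile as a Redis hash on the master."""
    master.hset(f"rms:machine:{machine_id}", mapping={
        "panel_features": json.dumps(profile["panel_features"]),      # image feature descriptors
        "tuned_parameters": json.dumps(profile["tuned_parameters"]),  # apparatus tuning
        "comm_format": profile["comm_format"],                        # e.g. protocol name
        "control_api": profile["control_api"],                        # e.g. endpoint or driver id
    })

def fetch_machine(client: redis.Redis, machine_id: str) -> dict:
    """A local client server (replica of the master) retrieves the profile to reconfigure locally."""
    raw = client.hgetall(f"rms:machine:{machine_id}")
    return {k.decode(): v.decode() for k, v in raw.items()}

local = redis.Redis(host="localhost", port=6379, db=0)
profile = fetch_machine(local, "cnc-01")   # then used to reconfigure the local resources
```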

Author(s):  
Peng Cheng Wei ◽  
Yang Zou

As an important branch of artificial intelligence, computer vision plays a major role in the rapid development of artificial intelligence. From a biological point of view, vision is far more important than hearing, touch, and the other senses for acquiring and processing information, since roughly 70% of the human cerebral cortex is involved in processing visual information. Advances in computer vision are therefore critical to developing artificial intelligence that is meant to let machines think and handle things as humans do. The acquisition and processing of visual information has always been the focus of computer vision research, and also its main difficulty. The main problems of traditional computer vision techniques in processing visual information are that the extracted image features are weakly discriminative, their generalization ability in complex background scenes is insufficient, and their object recognition performance is poor. In response to these problems, and based on the visual neural mechanism, this paper builds an appropriate computer model of the neuronal cells in the human primary visual cortex, models the recognition response mechanism of the ventral visual stream, and performs image feature extraction and object recognition on the training samples. The results show that, compared with traditional methods, the proposed method effectively improves the discriminability of the image features, and the features extracted in complex background scenes generalize well; on this basis, the training samples can be recognized effectively. The results further show that, with the model based on the visual neural mechanism, recognition of the edges, orientations, and contours of the training samples demonstrates the advantages of the biological vision mechanism in object recognition.
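As a generic illustration of how V1 simple cells are commonly modeled (not the authors' specific model), the sketch below builds a Gabor filter bank with OpenCV and stacks rectified responses as orientation-selective feature maps; the file name and filter parameters are placeholders.

```python
# Illustrative sketch only: V1 simple cells are often modeled with Gabor filters
# tuned to orientation and spatial frequency; parameters below are placeholders.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
    """Return Gabor kernels at n_orient evenly spaced orientations."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False)]

def v1_responses(gray: np.ndarray) -> np.ndarray:
    """Stack half-rectified filter responses as a simple orientation feature map."""
    responses = [cv2.filter2D(gray.astype(np.float32), -1, k) for k in gabor_bank()]
    return np.maximum(np.stack(responses, axis=-1), 0.0)  # rectification ~ firing rate

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path
features = v1_responses(img)                            # H x W x 8 orientation-selective responses
```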


2014 ◽  
Vol 621 ◽  
pp. 499-504
Author(s):  
Tian Biao Yu ◽  
Xu Zhang ◽  
Xiu Ling Xu ◽  
Lei Geng ◽  
Wan Shan Wang

A flexible manufacturing cell (FMC) is an extension of the CNC machining center; it consists of three parts, a CNC machining center, an industrial robot, and a background computer that unifies them through programmable control. In order to guarantee the rationality and feasibility of automatically loading and unloading workpieces, the simulation in this paper is based on the KUKA robot of a flexible manufacturing cell. The simulation system of the flexible manufacturing cell was developed with MATLAB/GUI, combined with the VRML language and a method for exchanging information between MATLAB and the simulation system. The model is driven according to the specified parameters and joint angles. Through the control panel and real-time interactive simulation of the robot, a 3D visualization window displays the robot simulation in real time and the path planning is validated. The simulation results of the model can serve as a reference for trajectory planning of the actual KUKA robot motion, provide a sounder feasibility analysis of fully automating the flexible manufacturing cell, and show that the operation of the flexible manufacturing cell can be improved scientifically and reasonably.
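The original simulation was built with MATLAB/GUI and VRML; purely as an illustration of the joint-space playback such a simulation performs, here is a minimal Python sketch that interpolates a six-axis robot between two hypothetical taught poses.

```python
# Minimal sketch, not the MATLAB/VRML implementation described above: joint-space
# interpolation between taught poses of a six-axis robot, the kind of motion a
# loading/unloading simulation would play back and visualize. Poses are made up.
import numpy as np

def interpolate_joints(q_start, q_end, steps=100):
    """Linearly interpolate six joint angles (degrees) between two poses."""
    q_start, q_end = np.asarray(q_start, float), np.asarray(q_end, float)
    return [q_start + (q_end - q_start) * t for t in np.linspace(0.0, 1.0, steps)]

# Hypothetical taught poses for picking a workpiece and placing it in the chuck.
q_pick  = [0, -90, 90, 0, 45, 0]
q_place = [60, -60, 70, 10, 30, -20]

for q in interpolate_joints(q_pick, q_place, steps=5):
    print(np.round(q, 1))   # each row would drive the 3D robot model in the simulation
```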


Author(s):  
W. Krakow ◽  
D. A. Smith

The successful determination of the atomic structure of [110] tilt boundaries in Au stems from the investigation of microscope performance at intermediate accelerating voltages (200 and 400 kV), as well as a detailed understanding of how grain boundary image features depend on the variation of dynamical diffraction processes with specimen and beam orientations. This success is also facilitated by applying digital image processing techniques to improve image quality to the point where a structure image is obtained and each atom position is represented by a resolved image feature. Figure 1 shows an example of a low angle (∼10°) Σ = 129/[110] tilt boundary in a ∼250Å Au film, taken under tilted beam brightfield imaging conditions, to illustrate the steps necessary to obtain the atomic structure configuration from the image. The original image of Fig. 1a shows the regular arrangement of strain-field images associated with the cores of ½[110] primary dislocations, which are separated by ∼15Å.
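As a generic example of the kind of digital image processing used to enhance periodic lattice detail (not the authors' actual processing chain), the following sketch applies a Fourier-space band-pass filter to a high-resolution image.

```python
# Generic illustration only: band-pass filtering in Fourier space, one common
# step when enhancing periodic detail in high-resolution lattice images so that
# atomic-column positions become easier to resolve. Radii are placeholders.
import numpy as np

def bandpass_fft(image: np.ndarray, r_low: float, r_high: float) -> np.ndarray:
    """Keep spatial frequencies whose radius on the FFT grid lies in [r_low, r_high]."""
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    ny, nx = image.shape
    y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    r = np.hypot(x, y)
    mask = (r >= r_low) & (r <= r_high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```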


2016 ◽  
Vol 20 (2) ◽  
pp. 191-201 ◽  
Author(s):  
Wei Lu ◽  
Yan Cui ◽  
Jun Teng

To decrease the instrumentation cost of strain and displacement monitoring methods that use sensors, and to address the structural health monitoring challenges of sensor installation, it is necessary to develop a machine vision-based monitoring method. For this method, the most important step is accurate extraction of the image features. In this article, an edge detection operator based on multi-scale structuring elements and a compound mathematical morphological operator is proposed to improve image feature extraction. The proposed method not only achieves an improved filtering effect and anti-noise ability but also detects edges more accurately. Furthermore, the required image features (the vertex of a square calibration board and the centroid of a circular target) can be accurately extracted using the extracted image edge information. For validation, monitoring tests for the structural local mean strain and in-plane displacement were designed accordingly. Through analysis of the error between the measured and calculated values of the structural strain and displacement, the feasibility and effectiveness of the proposed edge detection operator are verified.
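A minimal sketch in the spirit of the operator described above (the exact compound morphological operator is the authors'): weighted morphological gradients computed with structuring elements at several scales, using OpenCV; the file name, scales, and weights are placeholders.

```python
# Illustrative multi-scale morphological edge detection, not the published operator.
import cv2
import numpy as np

def multiscale_morph_edges(gray: np.ndarray, sizes=(3, 5, 7), weights=(0.5, 0.3, 0.2)):
    """Weighted sum of morphological gradients (dilation - erosion) over several scales."""
    gray = gray.astype(np.float32)
    edges = np.zeros_like(gray)
    for size, w in zip(sizes, weights):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
        grad = cv2.dilate(gray, se) - cv2.erode(gray, se)   # morphological gradient at this scale
        edges += w * grad
    return cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

img = cv2.imread("calibration_board.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path
edge_map = multiscale_morph_edges(img)
```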


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably imposes a high computational burden on the network. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve these problems. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image features are reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then designed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between distillation modules and feature channels is implemented through an attention mechanism. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
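As a rough illustration of a "distill then re-weight channels" block (layer sizes and structure are assumptions, not the published LFDN), consider the following PyTorch sketch.

```python
# Hedged sketch of a distillation + channel-attention block; not the LFDN itself.
import torch
import torch.nn as nn

class DistillAttentionBlock(nn.Module):
    def __init__(self, channels: int, distill_ratio: float = 0.5):
        super().__init__()
        kept = int(channels * distill_ratio)
        self.distill = nn.Conv2d(channels, kept, kernel_size=1)              # keep a compact subset
        self.refine = nn.Conv2d(channels, channels - kept, kernel_size=3, padding=1)
        # Squeeze-and-excitation style channel attention over the fused features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        fused = torch.cat([self.distill(x), self.refine(x)], dim=1)
        return fused * self.attn(fused)                                       # channel re-weighting

block = DistillAttentionBlock(64)
y = block(torch.randn(1, 64, 32, 32))   # -> torch.Size([1, 64, 32, 32])
```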


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 226
Author(s):  
Xuyang Zhao ◽  
Cisheng Wu ◽  
Duanyong Liu

Within the context of the large-scale application of industrial robots, methods for analyzing the life-cycle cost (LCC) of industrial robot production have developed considerably, but there remains a lack of methods for examining robot substitution. Taking inspiration from the symmetry philosophy in manufacturing systems engineering, this article establishes a comparative LCC analysis model that compares the LCC of industrial robot production with that of traditional production. The model introduces intangible costs (covering idle loss, efficiency loss, and defect loss) to supplement the actual costs, and comprehensively uses various methods for cost allocation and variable estimation to conduct total-cost and cost-efficiency analysis, together with hierarchical decomposition and dynamic comparison. To demonstrate the model, an investigation of a Chinese automobile manufacturer is provided to compare the LCC of welding robot production with that of manual welding production; case analysis and simulation are combined, and a thorough comparison with related existing works shows the validity of the framework. Based on this study, a simple template is developed to support decision-making on the application and cost management of industrial robots. In addition, the case analysis and simulations can serve as references for enterprises in emerging markets considering robot substitution.
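A minimal sketch of the comparison idea, not the paper's full model: a discounted life-cycle cost that adds intangible yearly losses to operating costs, evaluated for a hypothetical robot line and a manual line (all figures are placeholders).

```python
# Toy comparative LCC calculation; cost figures and discount rate are made up.
def life_cycle_cost(acquisition, operating_per_year, intangible_per_year, years, discount_rate=0.05):
    """Discounted LCC = acquisition + sum of yearly (operating + intangible) costs."""
    yearly = operating_per_year + intangible_per_year
    return acquisition + sum(yearly / (1 + discount_rate) ** t for t in range(1, years + 1))

robot = life_cycle_cost(acquisition=500_000, operating_per_year=60_000,
                        intangible_per_year=15_000, years=10)
manual = life_cycle_cost(acquisition=0, operating_per_year=120_000,
                         intangible_per_year=40_000, years=10)
print(f"robot LCC = {robot:,.0f}, manual LCC = {manual:,.0f}")
```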


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291 ◽  
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, the current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint invariant representation, enabling the detection and description of more viewpoint invariant features. Our embedding can be utilized with different combinations of descriptor/detector pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference in just two combinations, as compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even that of low-cost commodity depth sensors, and well beyond.
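A simplified sketch of the underlying idea, not the authors' embedding: fit a dominant plane to the depth map, rotate the view so that plane becomes fronto-parallel via the homography H = K R K^-1, and only then run a standard detector; the intrinsics, file names, detector choice, and single-plane assumption are all simplifications.

```python
# Simplified fronto-parallel rectification before feature detection (illustrative only).
import cv2
import numpy as np

def frontoparallel_warp(rgb, depth, K):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    # Back-project valid pixels to 3D camera coordinates.
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts = np.column_stack([x, y, z])
    # Dominant plane normal from least squares (z = a*x + b*y + c).
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    n = np.array([a, b, -1.0]); n /= np.linalg.norm(n)
    if n[2] > 0:
        n = -n                                   # make the normal face the camera
    # Rotation that aligns the plane normal with the viewing direction.
    target = np.array([0.0, 0.0, -1.0])
    axis = np.cross(n, target)
    angle = np.arccos(np.clip(np.dot(n, target), -1.0, 1.0))
    rvec = (axis / (np.linalg.norm(axis) + 1e-9) * angle).reshape(3, 1)
    R, _ = cv2.Rodrigues(rvec)
    H = K @ R @ np.linalg.inv(K)                 # homography induced by the pure rotation
    return cv2.warpPerspective(rgb, H, (w, h))

# Placeholder intrinsics and files (e.g. a commodity RGB-D sensor at 640x480).
K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1.0]])
rgb = cv2.imread("scene_rgb.png")
depth = cv2.imread("scene_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0
kp, desc = cv2.ORB_create().detectAndCompute(frontoparallel_warp(rgb, depth, K), None)
```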


2012 ◽  
Vol 516 ◽  
pp. 234-239 ◽  
Author(s):  
Wei Wu ◽  
Toshiki Hirogaki ◽  
Eiichi Aoyama

Recently, new needs have emerged to control not only linear motion but also rotational motion in high-accuracy manufacturing fields, and many five-axis-controlled machining centres are therefore in use. However, one problem has been the difficulty of creating flexible manufacturing systems with methods based on these machine tools. On the other hand, the industrial dual-arm robot has gained attention as a new way to achieve accurate linear and rotational motion, in an attempt to control a working plate like a machine-tool table. In the present report, cooperative dual-arm motion is demonstrated to be feasible for stable operation control, such as controlling the working plate so as to keep a ball rolling along a circular path on it. On this basis, we investigated the influence of the motion error of each axis on the ball-rolling path.
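As a toy illustration of the ball-on-plate task (not the authors' dual-arm controller), the sketch below generates a circular reference path and computes plate tilt commands with a simple PD law; gains and geometry are made-up values.

```python
# Toy ball-on-plate reference tracking sketch; not the cooperative dual-arm control scheme.
import numpy as np

def circular_reference(t, radius=0.05, omega=0.5):
    """Desired ball position (m) and velocity (m/s) on the plate at time t (s)."""
    pos = radius * np.array([np.cos(omega * t), np.sin(omega * t)])
    vel = radius * omega * np.array([-np.sin(omega * t), np.cos(omega * t)])
    return pos, vel

def pd_tilt(ball_pos, ball_vel, t, kp=4.0, kd=1.5, max_tilt=np.radians(5)):
    """Plate tilt angles (rad) about x and y that push the ball toward the reference."""
    ref_pos, ref_vel = circular_reference(t)
    cmd = kp * (ref_pos - ball_pos) + kd * (ref_vel - ball_vel)
    return np.clip(cmd, -max_tilt, max_tilt)

tilt = pd_tilt(ball_pos=np.array([0.04, 0.0]), ball_vel=np.zeros(2), t=0.0)
```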


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 828
Author(s):  
Wai Lun Lo ◽  
Henry Shu Hung Chung ◽  
Hong Fu

Estimation of meteorological visibility from image characteristics is a challenging problem in meteorological parameter estimation research. Meteorological visibility indicates atmospheric transparency, and this indicator is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO)-based transfer learning method for meteorological visibility estimation. It proposes a modified transfer learning approach for visibility estimation that uses PSO feature selection. Image data were collected at a fixed location with a fixed viewing angle. The database images underwent a gray-averaging pre-processing step to provide information on static landmark objects for automatic extraction of effective regions from the images. Effective regions are extracted from the image database, and image features are then extracted by a neural network. A subset of image features is selected by PSO to obtain the image feature vector for each effective sub-region. The image feature vectors are then used to estimate the visibility of the images with multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method achieves an accuracy of more than 90% for visibility estimation and that it is effective and robust.
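A minimal sketch of PSO-driven feature selection wrapped around an SVR regressor, in the general spirit of the pipeline described above (the published details differ); it uses scikit-learn, a simple thresholded-position PSO variant, and placeholder data.

```python
# Illustrative PSO feature selection + SVR; not the published pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def fitness(mask, X, y):
    """Cross-validated R^2 of an SVR trained on the selected feature subset."""
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(SVR(), X[:, mask], y, cv=3, scoring="r2").mean()

def pso_select(X, y, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.random((n_particles, dim))            # positions in [0,1]; > 0.5 means "feature kept"
    vel = np.zeros((n_particles, dim))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p > 0.5, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        fit = np.array([fitness(p > 0.5, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest > 0.5                              # boolean mask of selected features

# X: per-region image feature vectors, y: measured visibility (placeholder random data).
X, y = np.random.rand(120, 40), np.random.rand(120) * 10
mask = pso_select(X, y)
model = SVR().fit(X[:, mask], y)
```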


2011 ◽  
Vol 2011 ◽  
pp. 1-14 ◽  
Author(s):  
Jinjun Li ◽  
Hong Zhao ◽  
Chengying Shi ◽  
Xiang Zhou

A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and the local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which extends the analytic signal to gray-level images using the Dirac operator and the Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of the 2D image signal. The color monogenic signal extends the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.
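For illustration, here is a compact sketch of the gray-level monogenic signal (local amplitude, orientation, and phase) computed with a frequency-domain Riesz transform, the building block the descriptor rests on; in practice the image is band-pass filtered (e.g. with a log-Gabor filter) before the transform, and this is not the full descriptor or matcher.

```python
# Sketch of the gray monogenic signal via a frequency-domain Riesz transform.
import numpy as np

def monogenic(gray: np.ndarray):
    """Return local amplitude, local orientation, and instantaneous phase of a grayscale image."""
    f = np.fft.fft2(gray.astype(float))
    ny, nx = gray.shape
    u = np.fft.fftfreq(nx)[None, :]
    v = np.fft.fftfreq(ny)[:, None]
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                      # avoid division by zero at the DC component
    # Riesz transform kernels applied in the frequency domain.
    r1 = np.real(np.fft.ifft2(f * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(f * (-1j * v / radius)))
    even = gray.astype(float)               # ideally band-pass filtered beforehand
    odd = np.hypot(r1, r2)
    amplitude = np.hypot(even, odd)
    orientation = np.arctan2(r2, r1)        # local orientation
    phase = np.arctan2(odd, even)           # instantaneous phase
    return amplitude, orientation, phase
```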

