Journal of Intelligent Manufacturing
Latest Publications


TOTAL DOCUMENTS

2287
(FIVE YEARS 417)

H-INDEX

64
(FIVE YEARS 9)

Published By Springer-Verlag

ISSN: 1572-8145, 0956-5515

Author(s): Dionísio H. C. S. S. Martins, Amaro A. de Lima, Milena F. Pinto, Douglas de O. Hemerly, Thiago de M. Prego, ...

Author(s): Weinan Liu, Guojun Zhang, Yu Huang, Wenyuan Li, Youmin Rong, ...

Author(s): Xiaoqian Huang, Mohamad Halwani, Rajkumar Muthusamy, Abdulla Ayyad, Dewald Swart, ...

Abstract

Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision suffers from motion blur and a low sampling rate, and may not meet the automation needs of evolving industrial requirements. This paper proposes, for the first time, an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the microsecond-level sampling rate and blur-free output of an event camera, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing then clusters and registers them. The model-free approach, on the other hand, combines event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. The proposed approaches are experimentally validated with objects of different sizes using a UR10 robot equipped with an eye-in-hand neuromorphic camera and a Barrett hand gripper. The framework is robust and shows a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
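The model-free loop described in the abstract (event-based segmentation, then visual servoing toward the target) can be sketched in a few lines. This is a minimal illustrative example under assumed conventions, not the authors' implementation: the ROI-based segmentation, the proportional gain, the sensor resolution, and all function names are invented for the demo.

```python
import numpy as np

def segment_events(events, roi):
    """Keep only events whose (x, y) coordinates fall inside the target ROI."""
    x0, y0, x1, y1 = roi
    mask = ((events[:, 0] >= x0) & (events[:, 0] < x1) &
            (events[:, 1] >= y0) & (events[:, 1] < y1))
    return events[mask]

def servo_step(events, image_center, gain=0.05):
    """Proportional image-based visual servoing: command a camera-frame
    velocity that drives the event centroid toward the image center."""
    if len(events) == 0:
        return np.zeros(2)
    centroid = events[:, :2].mean(axis=0)
    error = centroid - np.asarray(image_center, dtype=float)
    return -gain * error  # velocity command; pixel-to-metric scaling assumed

# Toy usage: a burst of events clustered around (80, 60) on a 128x96 sensor
rng = np.random.default_rng(0)
events = np.column_stack([rng.normal(80, 2, 200), rng.normal(60, 2, 200)])
v = servo_step(segment_events(events, (0, 0, 128, 96)), (64, 48))
```

Because an event camera reports changes asynchronously, a loop like this can run at a much higher rate than a frame-based pipeline, which is the advantage the paper exploits.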


Author(s): Moncef Soualhi, Khanh T. P. Nguyen, Kamal Medjaher, Denis Lebel, David Cazaban

Author(s): Marco Wurster, Marius Michel, Marvin Carl May, Andreas Kuhnle, Nicole Stricker, ...

Abstract

Remanufacturing includes the disassembly and reassembly of used products to save natural resources and reduce emissions. While assembly is well understood in operations management, disassembly is a comparatively new problem in production planning and control. It faces high uncertainty in the type, quantity, and quality of returned products, leading to high volatility in remanufacturing production systems. Traditionally a labor-intensive manual production step, disassembly is starting to be automated with autonomous workstations thanks to advances in robotics and artificial intelligence. Because of the diverging material flow, production systems with loosely linked stations are particularly suitable, and, owing to the risk of condition-induced operational failures, the rise of hybrid disassembly systems that combine manual and autonomous workstations can be expected. In contrast to traditional workstations, autonomous workstations can expand their capabilities but suffer from unknown failure rates. For such adverse conditions, this work presents a condition-based control for hybrid disassembly systems based on reinforcement learning (RL), alongside a comprehensive modeling approach. The method is applied to a real-world production system. In comparison with a heuristic control approach, the potential of the RL approach is demonstrated in simulation on two test cases.
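The condition-based control idea can be illustrated with a toy tabular RL sketch: a learned policy routes returned products, labeled by their observed condition, either to a fast but failure-prone autonomous station or to a slower but reliable manual one. All states, actions, rewards, and failure rates below are invented for the demonstration and are not the paper's model, which uses a full simulation of the production system.

```python
import random

STATES = ["good", "worn"]                     # observed product condition
ACTIONS = ["autonomous", "manual"]            # which station to route to
FAIL_RATE = {"good": 0.05, "worn": 0.6}       # autonomous-station failure probability
REWARD = {"autonomous_ok": 10, "autonomous_fail": -20, "manual": 4}

def step(state, action, rng):
    """Simulate routing one product and return the reward."""
    if action == "manual":
        return REWARD["manual"]
    ok = rng.random() > FAIL_RATE[state]
    return REWARD["autonomous_ok"] if ok else REWARD["autonomous_fail"]

def train(episodes=5000, alpha=0.05, eps=0.1, seed=1):
    """Epsilon-greedy tabular learning of per-condition routing values."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)           # explore
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])  # exploit
        r = step(s, a, rng)
        q[(s, a)] += alpha * (r - q[(s, a)])  # one-step value update
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

With these invented numbers, the expected autonomous reward is 8.5 for a good product but -8 for a worn one, so a sensible policy sends good products to the autonomous station and worn ones to manual labor; the agent discovers this from rewards alone, which is the appeal of RL when failure rates are unknown in advance.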

