Digital Twin Enhanced Assembly Based on Deep Reinforcement Learning

Author(s):
Junzheng Li
Dong Pang
Yu Zheng
Xinyi Le

2021

Author(s):
Flavia Pires
Bilal Ahmad
Antonio Paulo Moreira
Paulo Leitao

2021
Vol 11 (7)
pp. 2977

Author(s):
Kyu Tae Park
Yoo Ho Son
Sang Wook Ko
Sang Do Noh

To achieve efficient personalized production at an affordable cost, a modular manufacturing system (MMS) can be utilized. An MMS enables restructuring of its configuration to accommodate product changes and is thus an efficient way to reduce the costs involved in personalized production. A micro smart factory (MSF) is an MMS with heterogeneous production processes that enable personalized production. Like an MMS, an MSF enables restructuring of the production configuration; additionally, it comprises cyber-physical production systems (CPPSs) that help achieve resilience. However, MSFs must overcome performance hurdles in production control. Therefore, this paper proposes a digital twin (DT) and reinforcement learning (RL)-based production control method that replaces the existing dispatching rule in the type and instance phases of the MSF. In this method, the RL policy network is trained and evaluated through coordination between the DT and the RL agent: the DT provides virtual event logs, comprising states, actions, and rewards, to support learning, and these logs are generated through vertical integration with the MSF. As a result, the proposed method provides a resilient solution within the CPPS architectural framework and selects appropriate actions for the dynamic situation of the MSF. Additionally, applying a DT with RL helps decide what comes next, and where, in the production cycle. Moreover, the proposed concept can be extended to various manufacturing domains, because priority (dispatching) rules are widely applied.
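To illustrate the DT–RL coordination described above, here is a minimal sketch, not the authors' implementation, of how a dispatching policy could be trained from virtual event-log transitions (state, action, reward, next state) emitted by a digital twin of the MSF. All names (DigitalTwinSim), dimensions, and the toy reward are hypothetical assumptions.

```python
# Sketch: tabular Q-learning over virtual event logs from a hypothetical
# digital-twin simulator. The real method uses a policy network and a
# vertically integrated DT; this toy version only shows the data flow.
import random
from collections import defaultdict

N_ACTIONS = 4          # e.g., candidate dispatching rules: FIFO, SPT, EDD, LPT
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

class DigitalTwinSim:
    """Stand-in for the virtual MSF: returns synthetic event-log transitions."""
    def reset(self):
        self.t = 0
        return (0, 0)                      # toy state: (queue level, machine load)
    def step(self, action):
        self.t += 1
        next_state = (random.randint(0, 4), random.randint(0, 4))
        reward = -abs(next_state[0] - action)   # toy reward: match rule to queue
        return next_state, reward, self.t >= 50

q = defaultdict(float)                     # Q-table over (state, action)

def policy(state):
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[(state, a)])

env = DigitalTwinSim()
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        action = policy(state)
        next_state, reward, done = env.step(action)    # one virtual event-log row
        best_next = max(q[(next_state, a)] for a in range(N_ACTIONS))
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state
```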


Sensors
2019
Vol 19 (20)
pp. 4410
Author(s):
Seunghwan Jeong
Gwangpyo Yoo
Minjong Yoo
Ikjun Yeom
Honguk Woo

Hyperconnectivity via modern Internet of Things (IoT) technologies has recently driven us to envision the “digital twin”, in which physical attributes are all embedded and their latest updates are synchronized to digital spaces in a timely fashion. From the point of view of cyber-physical system (CPS) architectures, the goals of the digital twin include providing a common programming abstraction at the same level as databases, thereby facilitating seamless integration of real-world physical objects and digital assets at several different system layers. However, the inherent limitations of sampling and observing physical attributes often pose issues of data uncertainty in practice. In this paper, we propose a learning-based data management scheme whose implementation is layered between the sensors attached to physical attributes and domain-specific applications, thereby mitigating the data uncertainty between them. To do so, we present a sensor data management framework, namely D2WIN, which adopts reinforcement learning (RL) techniques to manage data quality for CPS applications and autonomous systems. To deal with the scale issue incurred by many physical attributes and sensor streams when adopting RL, we propose an action embedding strategy that exploits their distance-based similarity in physical space coordinates. We introduce two embedding methods, i.e., a user-defined function and a generative model, for different conditions. Through experiments, we demonstrate that the D2WIN framework with action embedding outperforms several known heuristics in terms of achievable data quality under given resource restrictions. We also test the framework with an autonomous driving simulator, clearly showing its benefit: for example, with only 30% of updates selectively applied by the learned policy, the driving agent maintains about 96.2% of its performance compared to the ideal condition with full updates.
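The following is a minimal sketch, under stated assumptions rather than the D2WIN code, of the distance-based action embedding idea in its "user-defined function" variant: a continuous proto-action is mapped back to discrete actions (which sensor streams to refresh) by nearest-neighbour lookup over the sensors' physical coordinates. The sensor layout, embedding function, and update budget are all hypothetical.

```python
# Sketch: map a continuous proto-action to a small set of discrete
# sensor-update actions via distance-based similarity in physical space.
import numpy as np

rng = np.random.default_rng(0)
sensor_coords = rng.uniform(0, 10, size=(500, 2))   # 500 sensors on a 2-D floor

def embed(coords):
    """User-defined embedding: here, simply standardized physical coordinates."""
    return (coords - coords.mean(axis=0)) / coords.std(axis=0)

sensor_embeddings = embed(sensor_coords)

def nearest_actions(proto_action, k=5):
    """Return the k discrete actions most similar to the proto-action."""
    d = np.linalg.norm(sensor_embeddings - proto_action, axis=1)
    return np.argsort(d)[:k]

# A continuous policy (actor network) would emit the proto-action; here we
# fake one and refresh only the k nearest sensor streams under the budget.
proto = rng.standard_normal(2)
to_update = nearest_actions(proto, k=5)
print("sensors selected for update:", to_update)
```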


2021 ◽  
Author(s):  
Raghu Sesha Iyengar
Kapardi Mallampalli
Mohan Raghavan

The mechanisms behind neural control of movement have been an active area of research, and goal-directed movement is a common experimental setup used to understand these mechanisms and their neural pathways. On the one hand, optimal feedback control theory is used to model and make quantitative predictions of the coordinated activations of effectors such as muscles, joints, or limbs. On the other hand, evidence shows that higher centres such as the basal ganglia and the cerebellum are involved in activities such as reinforcement learning and error correction. In this paper, we provide a framework to build a digital twin of the relevant sections of the human spinal cord using our NEUROiD platform. The digital twin is an anatomically and physiologically realistic model of the spinal cord at the cellular, spinal-network, and system levels. We then build a framework to learn the supraspinal activations necessary to perform a simple goal-directed movement of the upper limb. The NEUROiD model is interfaced with an OpenSim model for all musculoskeletal simulations, and deep reinforcement learning is used to obtain the necessary supraspinal activations. To the best of our knowledge, this is the first attempt to learn the stimulation pattern at the spinal cord level, in particular by limiting the observation space to only the afferent feedback received on the Ia, II, and Ib fibers. Such a setup results in a biologically realistic, constrained environment for learning. Our results show that (1) the reinforcement learning algorithm converges naturally to the triphasic response observed during goal-directed movement; (2) gradually increasing the complexity of the goal was important for accelerating learning; and (3) modulation of the afferent inputs was sufficient to execute tasks that were not explicitly learned but were closely related to the learnt task.
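As a rough sketch of the constrained-observation setup described above: the agent observes only Ia, II, and Ib afferent signals and emits supraspinal drive. Every name and dimension below is a hypothetical assumption; the actual study couples NEUROiD spinal-circuit models with OpenSim musculoskeletal simulation rather than the toy plant used here.

```python
# Sketch: a gym-style environment whose observation space is restricted
# to afferent feedback (Ia, II, Ib per muscle), driven by a random policy
# as a stand-in for the deep RL agent.
import numpy as np

N_MUSCLES = 6   # hypothetical upper-limb muscle count

class AfferentOnlyEnv:
    """Observation = [Ia, II, Ib] per muscle; action = supraspinal activation."""
    def reset(self):
        self.pos, self.goal, self.t = 0.0, 1.0, 0
        return self._afferents()
    def _afferents(self):
        # Placeholder afferent model: Ia ~ stretch velocity, II ~ length,
        # Ib ~ force. Real values would come from the spinal-cord twin.
        return np.zeros(3 * N_MUSCLES)
    def step(self, supraspinal_drive):
        self.t += 1
        self.pos += 0.05 * (supraspinal_drive.mean() - 0.5)  # toy plant
        reward = -abs(self.goal - self.pos)                  # goal-directed cost
        done = self.t >= 100 or abs(self.goal - self.pos) < 0.01
        return self._afferents(), reward, done

env = AfferentOnlyEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = np.random.rand(N_MUSCLES)        # stand-in for a learned DRL policy
    obs, reward, done = env.step(action)
    total += reward
print("episode return:", round(total, 3))
```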


Author(s):  
Gaoqing Shen
Lei Lei
Zhilin Li
Shengsuo Cai
Lijuan Zhang
...  
