Milling Simulation-Based Method to Evaluate Manufacturability of Machine Parts

Author(s):  
Masatomo Inui ◽  
Tong Zhang ◽  
Nobuyuki Umezu

Abstract The designers of mechanical products are generally not experts in machining, and therefore often design parts with inherent machining difficulties. Although various design-for-manufacturability tools have been developed to avoid such problems, their use in practice remains limited because they lack versatility. We have developed novel software that automatically detects difficult-to-machine shapes in a part. Using this software, designers can determine by themselves which shapes are difficult to produce by conventional cutting, and can modify the shape on the spot. In the Internet-based part-manufacturing business, the same software can be used to check whether a given part can be produced with the standard milling operations predetermined by a company. Our system is based on "milling simulation": it executes milling simulations with the prepared cutting tools and then visualizes the shapes that remain unmachined after all simulations, thereby detecting any shapes that cannot be produced with those tools. In this study, the processing is accelerated using graphics processing unit (GPU) technology, making it possible to extract difficult-to-machine shapes in several minutes on a standard PC.
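The core idea can be prototyped on a simplified height-field (2.5D) part model: machining a height field with a flat end mill approaching along one axis leaves the morphological closing of the design surface, so any cell where the closing sits above the design surface is unreachable by that tool. This is only a stand-in for the paper's full GPU-based milling simulation; the grid, tool radius, and use of `scipy.ndimage` are assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import grey_closing

def disk(radius):
    # Boolean footprint of a flat end mill of the given radius (in cells).
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def unmachined(design, tool_radius):
    """Height-field milling sketch: a flat end mill approaching along -Z
    leaves the morphological closing of the design height field. Cells
    where the closing lies above the design cannot be reached by the tool."""
    machined = grey_closing(design, footprint=disk(tool_radius))
    return machined - design  # > 0 where material remains unmachined

# Part with a wide pocket (machinable) and a one-cell-wide slot
# that is narrower than the tool and therefore cannot be milled.
design = np.full((20, 20), 10.0)
design[5:15, 2:10] = 0.0   # wide pocket
design[5:15, 14] = 0.0     # narrow slot, width 1
residue = unmachined(design, tool_radius=2)
```

Running several tools means repeating the simulation with each footprint and taking the cell-wise minimum of the residues; whatever is still positive afterwards is a difficult-to-machine shape.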

The sustainability of a company is driven by its operational efficiency, which plays a significant role in the company's growth and profitability and thus forms the foundation for the performance metrics known as Key Performance Indicators (KPIs). KPIs establish a connection between the concept of performance and the means to gauge it. In this work, we use a neural network with two fully connected layers to analyze and predict the factors used to calculate the KPIs. The implementation runs the complex calculations on a graphics processing unit. The KPIs are obtained for the projected factors, and inference was performed for five different non-life insurers in India, based on the public disclosure data available with insurance supervisors in India.
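The abstract does not disclose the network's layer sizes or training data, so the following sketch only illustrates the stated architecture: a network with two fully connected layers fitted by plain gradient descent to synthetic "factor" data. All dimensions, the learning rate, and the target function are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: predict next-period KPI factors from the current
# period's disclosed factors. Sizes are illustrative only.
X = rng.normal(size=(200, 6))              # 6 input factors per record
true_W = rng.normal(size=(6, 3))
y = np.tanh(X @ true_W)                    # 3 synthetic target factors

# Two fully connected layers, as described in the abstract.
W1 = rng.normal(scale=0.1, size=(6, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)               # hidden layer
    return h @ W2 + b2, h                  # linear output layer

mse_before = float(np.mean((forward(X)[0] - y) ** 2))

lr = 0.05
for _ in range(500):                       # plain batch gradient descent
    pred, h = forward(X)
    grad = 2 * (pred - y) / len(X)         # d(MSE)/d(pred)
    gh = (grad @ W2.T) * (1 - h * h)       # backprop through tanh
    W2 -= lr * (h.T @ grad); b2 -= lr * grad.sum(0)
    W1 -= lr * (X.T @ gh);   b1 -= lr * gh.sum(0)

mse_after = float(np.mean((forward(X)[0] - y) ** 2))
```

In practice the matrix products above are exactly the operations a GPU accelerates; moving this sketch to a GPU framework changes the array backend, not the logic.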


Author(s):  
Masatomo Inui ◽  
Kouhei Nishimiya ◽  
Nobuyuki Umezu

Abstract Clearance is a basic parameter in the design of mechanical products, generally specified as the distance between two shape elements, for example, the width of a slot. This definition is unsuitable for evaluating clearance during assembly or manufacturing tasks, where depth information is also critical. In this paper, we propose a novel definition of clearance for the surface of three-dimensional objects. Unlike typical methods of defining clearance, the proposed method can simultaneously handle the relationship between width and depth, and thus provides an intuitive understanding of the assembly and manufacturing capability of a product. Our definition is based on the accessibility cone of a point on the object's surface; the peak angle of the accessibility cone corresponds to the clearance at that point. A computation method for the clearance is presented and the results of its application are demonstrated. Our method uses the rendering function of a graphics processing unit to compute the clearance. The large computation time required for the analysis remains a problem for the practical use of this clearance definition.
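The cone-peak-angle definition can be illustrated analytically in two dimensions. For a point at the bottom centre of a slot, a direction escapes if and only if it clears the slot rim, so the accessibility cone's half-angle is atan(width / (2 × depth)). The sketch below recovers this by sampling directions; the paper instead computes accessibility with GPU rendering, and the sampling scheme here is only an assumption for illustration.

```python
import math

def clearance_angle(width, depth, n=2000):
    """Peak angle of the 2-D accessibility cone at the bottom centre of a
    slot (width x depth), found by sweeping escape directions outward
    from the slot axis until one is blocked by a wall."""
    half = 0.0
    for i in range(n):
        theta = (i + 0.5) * (math.pi / 2) / n   # angle from the slot axis
        # A ray at angle theta meets the wall plane x = width/2 at
        # height (width/2)/tan(theta); it escapes if that is above the rim.
        y_hit = (width / 2) / math.tan(theta)
        if y_hit > depth:
            half = theta
        else:
            break
    return 2 * half

# Wide shallow slot vs. narrow deep slot: same width-only "clearance"
# definitions would miss that the deep slot is far harder to access.
wide = clearance_angle(width=10.0, depth=5.0)   # analytic: 2*atan(1) = 90 deg
deep = clearance_angle(width=2.0, depth=10.0)   # analytic: 2*atan(0.1)
```

This is exactly the width-versus-depth coupling the proposed definition captures: the peak angle shrinks as the slot deepens even when the width is unchanged.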


2009 ◽  
Vol 79-82 ◽  
pp. 1309-1312
Author(s):  
Kuan Yu ◽  
Bo Zhu

Molecular simulation can provide mechanistic insights into how material behaviour relates to molecular properties and to the microscopic details of the arrangement of many molecules. With the development of the Graphics Processing Unit (GPU), scientists have realized general-purpose molecular simulations on GPUs under the Compute Unified Device Architecture (CUDA) environment. In this paper, we provide a brief overview of molecular simulation and CUDA, and introduce recent achievements in GPU-based molecular simulation in materials science, focusing on the Monte Carlo method and molecular dynamics. Recent research has shown that GPUs can provide unprecedented computational power for scientific applications: with optimized algorithms and program code, a single GPU can deliver performance equivalent to that of a distributed computer cluster. The study of GPU-based molecular simulations will therefore accelerate the development of materials science in the future.


2007 ◽  
Author(s):  
Fredrick H. Rothganger ◽  
Kurt W. Larson ◽  
Antonio Ignacio Gonzales ◽  
Daniel S. Myers

2021 ◽  
Vol 22 (10) ◽  
pp. 5212
Author(s):  
Andrzej Bak

A key question confronting computational chemists concerns the preferred ligand geometry that fits complementarily into the receptor pocket. Typically, the postulated 'bioactive' 3D ligand conformation is constructed as a 'sophisticated guess' (not necessarily geometry-optimized) mirroring the pharmacophore hypothesis, which is sometimes based on an erroneous prerequisite. Hence, the 4D-QSAR scheme and its 'dialects' have been implemented in practice as a higher level of model abstraction that allows the examination of multiple molecular conformations, orientations, and protonation states. Nearly a quarter of a century has passed since the eminent work of Hopfinger appeared on the stage; the natural question therefore arises of whether the 4D-QSAR approach is still appealing to the scientific community. With no intention to be comprehensive, a review of the current state of the art in receptor-independent (RI) and receptor-dependent (RD) 4D-QSAR methodology is provided, with a brief examination of the 'mainstream' algorithms. In fact, a myriad of 4D-QSAR methods have been implemented and applied in practice to a diverse range of molecules. It seems that the 4D-QSAR approach has been experiencing a promising renaissance of interest, which might be fuelled by the rising power of graphics processing unit (GPU) clusters applied to full-atom MD-based simulations of protein-ligand complexes.


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage the improving technology of depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications on low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality applications.
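A representative filter-based depth-upsampling method (a common choice, though not necessarily the authors' exact algorithm) is joint bilateral upsampling: each high-resolution depth value is a weighted average of nearby low-resolution depth samples, with weights combining spatial proximity and similarity in a high-resolution guide image, so depth edges snap to guide edges. All parameters below are illustrative, and the reference loop here is what the FPGA and GPU versions would parallelize per pixel.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide, scale,
                             sigma_s=1.0, sigma_r=0.1, radius=2):
    """Minimal joint bilateral upsampling sketch. Each high-res pixel
    averages nearby low-res depth samples, weighted by spatial distance
    and by intensity similarity in the high-res guide image."""
    H, W = guide.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale      # position in low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = int(round(yl)) + dy
                    xx = int(round(xl)) + dx
                    if 0 <= yy < depth_lo.shape[0] and 0 <= xx < depth_lo.shape[1]:
                        ds = (yy - yl) ** 2 + (xx - xl) ** 2
                        gy = min(int(yy * scale), H - 1)
                        gx = min(int(xx * scale), W - 1)
                        dr = (guide[y, x] - guide[gy, gx]) ** 2
                        w = np.exp(-ds / (2 * sigma_s ** 2)
                                   - dr / (2 * sigma_r ** 2))
                        num += w * depth_lo[yy, xx]
                        den += w
            out[y, x] = num / den
    return out

# Step edge in the low-res depth aligned with an edge in the guide image:
# the range term keeps the upsampled edge sharp instead of blurring it.
depth_lo = np.ones((4, 4)); depth_lo[:, 2:] = 5.0   # low-res depth
guide = np.zeros((8, 8)); guide[:, 4:] = 1.0        # high-res guide
out = joint_bilateral_upsample(depth_lo, guide, scale=2)
```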

