Towards the Rapid Design of Engineered Systems Through Deep Neural Networks

2018 ◽  
Author(s):  
Christopher McComb

The design of a system commits a significant portion of the final cost of that system. Many computational approaches have been developed to assist designers in the analysis (e.g., computational fluid dynamics) and synthesis (e.g., topology optimization) of engineered systems. However, many of these approaches are computationally intensive, taking significant time to complete an analysis and even longer to iteratively synthesize a solution. The current work proposes a methodology for rapidly evaluating and synthesizing engineered systems through the use of deep neural networks. The proposed methodology is applied to the analysis and synthesis of offshore structures such as oil platforms. These structures are constructed in a marine environment and are typically designed to achieve specific dynamics in response to a known spectrum of ocean waves. Results show that deep learning can be used to accurately and rapidly synthesize and analyze offshore structures.
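The core idea of such a surrogate is that a trained network replaces an expensive solver, so thousands of candidate designs can be evaluated in a single batched forward pass. A minimal sketch of that evaluation loop is shown below; the network shape, the design variables, and the random (untrained) weights are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Create weight/bias pairs for a fully connected network (He init)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def predict(params, x):
    """Forward pass: ReLU hidden layers, linear output."""
    for w, b in params[:-1]:
        x = np.maximum(x @ w + b, 0.0)
    w, b = params[-1]
    return x @ w + b

# Hypothetical setup: four design parameters of an offshore structure
# (e.g., column spacing, pontoon dimensions) mapped to one predicted
# dynamic-response metric. Weights stand in for a trained model.
model = init_mlp([4, 32, 32, 1])
designs = rng.uniform(0.0, 1.0, size=(1000, 4))
responses = predict(model, designs)  # 1000 designs evaluated in one call
```

Once trained against solver outputs, a surrogate like this turns a synthesis loop that required one expensive simulation per candidate into a cheap batched matrix computation.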

Author(s):  
Chi Qiao ◽  
Andrew T. Myers

Abstract Surrogate modeling of the variability of metocean conditions in space and in time during hurricanes is a crucial task for risk analysis on offshore structures such as offshore wind turbines, which are deployed over a large area. This task is challenging because of the complex nature of the meteorology-metocean interaction, in addition to the time-dependence and high dimensionality of the output. In this paper, spatio-temporal characteristics of surrogate models, such as Deep Neural Networks, are analyzed based on an offshore multi-hazard database created by the authors. The focus of this paper is two-fold: first, the effectiveness of dimension reduction techniques for representing high-dimensional output distributed in space is investigated and, second, an overall approach to estimate spatio-temporal characteristics of hurricane hazards using Deep Neural Networks is presented. The popular dimension reduction technique, Principal Component Analysis, is shown to perform similarly to a simpler dimension reduction approach and not as well as a surrogate model implemented without dimension reduction. Discussions are provided to explain why the performance of Principal Component Analysis is only mediocre in this implementation and why dimension reduction might not be necessary.
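The PCA step in such a pipeline compresses each high-dimensional spatial field into a few principal-component coefficients, which the surrogate then predicts instead of the full field. A minimal sketch of that compress/restore round trip, using SVD-based PCA on synthetic low-rank fields (the data and dimensions are stand-ins, not the authors' hurricane database):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for metocean output: 200 storm snapshots, each a
# 500-point spatial field, built to be approximately rank 10 plus noise.
basis = rng.standard_normal((10, 500))
coeffs = rng.standard_normal((200, 10))
fields = coeffs @ basis + 0.001 * rng.standard_normal((200, 500))

# PCA via SVD of the mean-centered data matrix.
mean = fields.mean(axis=0)
centered = fields - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10
reduced = centered @ Vt[:k].T        # (200, 10): low-dim surrogate targets
restored = reduced @ Vt[:k] + mean   # back to the full (200, 500) field

rel_err = np.linalg.norm(restored - fields) / np.linalg.norm(fields)
```

A surrogate that predicts the ten `reduced` coefficients is far cheaper to train than one predicting 500 outputs, but, as the paper's results caution, the reconstruction error PCA introduces can offset that saving relative to a direct high-dimensional surrogate.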


2021 ◽  
Vol 2021 ◽  
pp. 1-12 ◽  
Author(s):  
Zhongmin Chen ◽  
Zhiwei Xu ◽  
Jianxiong Wan ◽  
Jie Tian ◽  
Limin Liu ◽  
...  

Novel smart environments, such as the smart home, smart city, and intelligent transportation, are driving increasing interest in deploying deep neural networks (DNNs) in edge devices. Unfortunately, deploying DNNs at resource-constrained edge devices poses a huge challenge: these workloads are computationally intensive. Moreover, the edge server-based approach may be affected by incidental factors, such as network jitter and conflicts, when multiple tasks are offloaded to the same device. A rational workload scheduling for smart environments is therefore highly desired. In this work, we propose Conflict-resilient Incremental Offloading of Deep Neural Networks at Edge (CIODE) for improving the efficiency of DNN inference in the edge smart environment. CIODE divides the DNN model into several partitions by layer and incrementally uploads them to local edge nodes. We design a waiting lock-based scheduling paradigm to choose edge devices for the DNN layers to be offloaded; in particular, an advanced lock mechanism is proposed to handle concurrency conflicts. Real-world testbed experiments demonstrate that CIODE improves DNN inference performance over state-of-the-art baselines by 20% to 70% and significantly improves robustness through collaboration among neighboring nodes.
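The two ingredients named in the abstract, layer-wise partitioning and lock-guarded assignment of partitions to edge nodes, can be sketched as follows. This is an illustrative toy, not CIODE's actual algorithm: the names (`EdgeNode`, `partition_by_layer`, `schedule_partitions`), the cost model, and the greedy least-loaded policy are all assumptions.

```python
import threading

class EdgeNode:
    """A hypothetical edge device with a load counter guarded by a lock."""
    def __init__(self, name):
        self.name = name
        self.load = 0.0
        self.lock = threading.Lock()  # serialize concurrent offload requests

def partition_by_layer(layer_costs, max_cost):
    """Group consecutive layers until a partition's cost reaches max_cost."""
    parts, current, total = [], [], 0.0
    for i, cost in enumerate(layer_costs):
        current.append(i)
        total += cost
        if total >= max_cost:
            parts.append((current, total))
            current, total = [], 0.0
    if current:
        parts.append((current, total))
    return parts

def schedule_partitions(parts, nodes):
    """Greedy: send each partition to the currently least-loaded node."""
    placement = []
    for layers, cost in parts:
        node = min(nodes, key=lambda n: n.load)
        with node.lock:               # conflict-resilient load update
            node.load += cost
        placement.append((layers, node.name))
    return placement

nodes = [EdgeNode("edge-A"), EdgeNode("edge-B")]
parts = partition_by_layer([1.0, 2.0, 1.5, 0.5, 3.0], max_cost=3.0)
plan = schedule_partitions(parts, nodes)
```

The lock is what makes concurrent offloading safe: if two inference tasks try to claim the same node at once, its load counter is updated atomically rather than being clobbered, which is the kind of conflict the paper's lock mechanism addresses in a much more sophisticated way.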


2021 ◽  
Vol 20 (6) ◽  
pp. 1-24
Author(s):  
Jason Servais ◽  
Ehsan Atoofian

In recent years, Deep Neural Networks (DNNs) have been deployed in a diverse set of applications, from voice recognition to scene generation, largely due to their high accuracy. DNNs are known to be computationally intensive applications, requiring a significant power budget. There have been a large number of investigations into the energy efficiency of DNNs; however, most of them focused on inference, while training has received little attention. This work proposes an adaptive technique to identify and avoid redundant computations during the training of DNNs. Elements of activations exhibit a high degree of similarity, causing inputs and outputs of neural network layers to perform redundant computations. Based on this observation, we propose Adaptive Computation Reuse for Tensor Cores (ACRTC), where the results of previous arithmetic operations are used to avoid redundant computations. ACRTC is an architectural technique that enables accelerators to take advantage of similarity in input operands and speed up the training process while also increasing energy efficiency. ACRTC dynamically adjusts the strength of computation reuse based on the tolerance for precision relaxation in different training phases. Over a wide range of neural network topologies, ACRTC accelerates training by 33% and saves energy by 32% with negligible impact on accuracy.
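The reuse idea can be illustrated in software even though ACRTC itself lives in accelerator hardware: operands are reduced to a relaxed-precision key, and when a key repeats, the cached result is returned instead of recomputing. The quantization step size below plays the role of ACRTC's adjustable precision relaxation; all names and data here are illustrative assumptions.

```python
import numpy as np

def reusing_matvec(weights, inputs, step=0.05):
    """Compute inputs @ weights row by row, reusing cached results for
    operand rows whose quantized form has been seen before."""
    cache, hits = {}, 0
    out = np.empty((inputs.shape[0], weights.shape[1]))
    for i, x in enumerate(inputs):
        key = tuple(np.round(x / step).astype(int))  # relaxed-precision key
        if key in cache:
            out[i] = cache[key]       # reuse: skip the multiply
            hits += 1
        else:
            out[i] = x @ weights      # full compute on a cache miss
            cache[key] = out[i]
    return out, hits

rng = np.random.default_rng(2)
W = rng.standard_normal((8, 4))
# Activations drawn from a few distinct patterns -> high operand similarity,
# mimicking the redundancy the paper observes during training.
patterns = rng.standard_normal((3, 8))
X = patterns[rng.integers(0, 3, size=100)]
Y, hits = reusing_matvec(W, X)
```

With only three distinct activation patterns, almost every row is a cache hit, so nearly all multiplies are skipped; a coarser `step` trades more reuse for more approximation error, mirroring ACRTC's phase-dependent tuning.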


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong
