implementation effort
Recently Published Documents

TOTAL DOCUMENTS: 81 (five years: 25)
H-INDEX: 10 (five years: 2)

Author(s):  
Johannes Blühdorn ◽  
Nicolas R. Gauger ◽  
Matthias Kabel

We propose a universal method for the evaluation of generalized standard materials that greatly simplifies the material law implementation process. By means of automatic differentiation and a numerical integration scheme, AutoMat reduces the implementation effort to two potential functions. By moving AutoMat to the GPU, we close the performance gap to conventional evaluation routines and demonstrate in detail that the expression level reverse mode of automatic differentiation as well as its extension to second order derivatives can be applied inside CUDA kernels. We underline the effectiveness and the applicability of AutoMat by integrating it into the FFT-based homogenization scheme of Moulinec and Suquet and discuss the benefits of using AutoMat with respect to runtime and solution accuracy for an elasto-viscoplastic example.
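The abstract's central idea, that automatic differentiation lets a material law be specified through its potentials alone, can be illustrated with a minimal forward-mode sketch. The `Dual` class, `grad` helper, and the 1D elastic energy below are illustrative assumptions, not AutoMat's actual CUDA-based implementation:

```python
# Minimal forward-mode AD sketch (hypothetical, not AutoMat's API).
# A generalized standard material is defined by potentials; here a 1D
# elastic free energy psi(eps) = 0.5 * E * eps**2, whose derivative
# d(psi)/d(eps) recovers the stress sigma = E * eps.

class Dual:
    """Dual number a + b*eps with eps**2 = 0; `dot` carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    __rmul__ = __mul__

def grad(f, x):
    """Derivative of scalar f at x via one forward-mode sweep."""
    return f(Dual(x, 1.0)).dot

E = 210e9                          # Young's modulus in Pa (illustrative)
psi = lambda eps: 0.5 * E * eps * eps
strain = 1e-3
stress = grad(psi, strain)         # recovers E * strain, about 2.1e8 Pa
```

AutoMat itself uses the expression-level reverse mode and second-order derivatives inside CUDA kernels; the sketch only shows why one potential function suffices to obtain the stress without hand-coding its derivative.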


2021 ◽  
Vol 2021 (3) ◽  
pp. 4692-4697
Author(s):  
C. Gißke ◽  
T. Albrecht ◽  
H. Wiemer ◽  
W. Esswein ◽  
...  

In most sectors of today's industry, work pieces must be manufactured with accuracy in the micron range. However, maintaining this accuracy can be considerably impeded by thermally induced displacements that arise in the production process. Thermally induced errors account for a large share of the residual machining errors on modern machine tools. Climate control systems for whole workshops can counteract these errors, but this approach is extremely cost- and energy-intensive. To increase machine accuracy and meet industrial demands more efficiently, research offers various methods to minimize this error. These methods differ greatly in their approaches and requirements: some intervene in the machine structure, while others are based on thermomechanical models and must be integrated into the control system software as correction algorithms. Since machine tools also vary in their kinematic structure and complexity, it is difficult for potential users to select suitable solutions and estimate the effort required to implement them with the available resources. This paper presents a systematization and taxonomy of such methods, elaborated on the basis of solutions developed in the project CRC/TR 96. Through semi-structured expert interviews, the functional principle, prerequisites and resources required for the application of each solution were recorded, categorized and evaluated in terms of their effort. The presented systematization makes it possible to compare these different methods and evaluate them with regard to implementation effort and flexibility. This is a first step towards a user-specific evaluation of these methods and towards facilitating the transfer of this fundamental research into industrial application.


Author(s):  
Alexander Stolar ◽  
Anton Friedl

Process safety techniques have been used in industry for decades to make processes and systems safer and to optimize them, and thus to improve sustainability. Their main aim is to prevent damage to people, equipment and the environment. This overview presents process safety and risk management techniques that can be applied in the different life cycle phases of an application without much implementation effort. Broad and universal applicability across a wide range of business sectors is the main focus. In addition to system improvement techniques, further considerations, such as maintenance and abnormal operating conditions, are included in order to comprehensively improve a system or application.


2021 ◽  
Vol 11 (10) ◽  
pp. 49
Author(s):  
Hanna-Leena Melender ◽  
Salla Pirkola ◽  
Kaisa Imppola ◽  
Helinä Ahonen ◽  
Saija Seppelin

The purpose of this clinical practice guideline implementation effort was to put a Finnish nursing guideline on emotional support for preschool-aged children in day-surgery nursing into practice among nurses at a day-surgery unit. The strategy was to use a 10-step framework in the implementation process. In this brief article, the strategy and the outcomes of the guideline implementation effort are described.


Author(s):  
Jan Strohschein ◽  
Andreas Fischbach ◽  
Andreas Bunte ◽  
Heide Faeskorn-Woyke ◽  
Natalia Moriz ◽  
...  

This paper presents the cognitive module of the Cognitive Architecture for Artificial Intelligence (CAAI) in cyber-physical production systems (CPPS). The goal of this architecture is to reduce the implementation effort of artificial intelligence (AI) algorithms in CPPS. Declarative user goals and the provided algorithm-knowledge base allow dynamic pipeline orchestration and configuration. A big data platform (BDP) instantiates the pipelines and monitors the CPPS performance for further evaluation through the cognitive module. The cognitive module is thus able to select feasible and robust configurations for process pipelines in varying use cases. Furthermore, it automatically adapts the models and algorithms based on model quality and resource consumption, and it instantiates additional pipelines to evaluate algorithms from different classes on test functions. CAAI relies on well-defined interfaces to enable the integration of additional modules and reduce implementation effort. Finally, an implementation based on Docker and Kubernetes for virtualization and orchestration of the individual modules, and on Kafka as the messaging technology for module communication, is used to evaluate a real-world use case.
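The step from a declarative user goal to an orchestrated pipeline can be sketched in a few lines. The knowledge-base entries, goal names, and cost field below are hypothetical placeholders, not CAAI's actual interfaces:

```python
# Hypothetical sketch of declarative pipeline selection as described for
# CAAI: a user goal plus an algorithm-knowledge base yields candidate
# algorithms; entries and the resource model are illustrative only.

ALGORITHM_KB = [
    {"name": "kriging",            "goal": "optimization",         "cpu_cost": 3},
    {"name": "random_forest",      "goal": "condition_monitoring", "cpu_cost": 2},
    {"name": "evolution_strategy", "goal": "optimization",         "cpu_cost": 1},
]

def orchestrate(goal, cpu_budget):
    """Select all algorithms matching the declarative goal that fit the
    resource budget, cheapest first (stand-in for the cognitive module)."""
    candidates = [a for a in ALGORITHM_KB
                  if a["goal"] == goal and a["cpu_cost"] <= cpu_budget]
    return sorted(candidates, key=lambda a: a["cpu_cost"])

pipeline = orchestrate("optimization", cpu_budget=2)
# only "evolution_strategy" both matches the goal and fits the budget
```

In CAAI the chosen configuration would then be instantiated on the big data platform and re-evaluated against monitored model quality and resource consumption.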


2021 ◽  
Author(s):  
Victor Dumitriu

Many modern digital processing systems implement complex multi-mode applications with high performance requirements and strict operating constraints; examples include video processing and telecommunication applications. Many of these systems use increasingly large FPGAs as the implementation medium, due to reduced development costs. The combination of increases in FPGA capacity and system complexity has led to a non-linear increase in system implementation effort. If left unchecked, implementation effort for such systems will reach the point where it becomes a design and development bottleneck. At the same time, the reduction in transistor size used to manufacture these devices can lead to increased device fault rates. To address these two problems, the Multi-mode Adaptive Collaborative Reconfigurable self-Organized System (MACROS) Framework and design methodology is proposed and described in this work. The MACROS Framework offers the ability for run-time architecture adaptation by integrating FPGA configuration into regular operation. It allows the run-time generation of Application-Specific Processors (ASPs) through the deployment, assembly and integration of pre-built functional units, and it further allows the relocation of functional units without affecting system functionality. The use of functional units as building blocks allows the system to be implemented piece by piece, which reduces the complexity of mapping, placement and routing tasks; the ability to relocate functional units allows fault mitigation by avoiding faulty regions in a device. The proposed framework has been used to implement multiple video processing systems, which served as verification and testing instruments. The MACROS Framework was found to successfully support run-time architecture adaptation in the form of functional unit deployment and relocation in high-performance systems.
For large systems (more than 100 functional units), the MACROS Framework implementation effort, measured as time cost, was found to be one third that of a traditional (monolithic) system; more importantly, in MACROS systems this time cost was found to increase linearly with system complexity (the number of functional units). With regard to fault mitigation capabilities, the resource overhead associated with the MACROS Framework was found to be up to 85% smaller than that of a traditional Triple Module Redundancy (TMR) solution.
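The fault-mitigation idea of relocating functional units away from faulty device regions can be sketched abstractly. The unit names, region labels, and greedy strategy below are illustrative assumptions, not the MACROS placement algorithm:

```python
# Illustrative sketch (not the MACROS implementation) of assigning
# pre-built functional units to device regions while skipping regions
# marked faulty, mirroring the abstract's fault-mitigation idea.

def place_units(units, regions, faulty):
    """Greedy placement: each unit takes the next healthy free region."""
    free = [r for r in regions if r not in faulty]
    if len(free) < len(units):
        raise ValueError("not enough healthy regions for all units")
    return dict(zip(units, free))

placement = place_units(
    units=["fir_filter", "fft", "dma"],
    regions=["R0", "R1", "R2", "R3"],
    faulty={"R1"},
)
# fir_filter -> R0, fft -> R2, dma -> R3; faulty R1 is avoided
```

Because each functional unit is mapped, placed, and routed independently, a faulty region costs only a relocation rather than a rebuild of the whole monolithic bitstream, which is the source of the overhead advantage over TMR reported above.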


Author(s):  
Angela Merzweiler ◽  
Sebastian Stäubert ◽  
Alexander Strübing ◽  
Armel Tonmbiak ◽  
Knut Kaulke ◽  
...  

IHE has defined more than 200 integration profiles to improve the interoperability of application systems in healthcare. These profiles describe how standards should be used in particular use cases. They are very helpful, but their correct use is challenging if the user is not familiar with the specifications. Inexperienced modelers of information systems therefore quickly lose track of the existing IHE profiles. In addition, users of these profiles are often unaware of rules defined within the profiles and of dependencies that exist between them. Some modelers also fail to notice differences between the implemented actors because they do not know the optional capabilities of some actors. The aim of this paper is therefore to describe a concept for supporting modelers of information systems in the selection and use of IHE profiles, and to show how this concept was prototypically implemented in the “Three-layer Graph-based meta model” modeling tool (3LGM2 Tool). The described modeling process consists of the following steps, which can be looped: defining the use case, choosing suitable integration profiles, choosing actors and their options and assigning them to application systems, checking for required actor groupings, and modeling transactions. Most of these steps were implemented in the 3LGM2 Tool. Further implementation effort and an evaluation of our approach by inexperienced users are still needed; after that, the tool should be valuable for modelers planning healthcare information system architectures, in particular those based on IHE.
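The "checking for required actor groupings" step lends itself to a small sketch. The grouping rule below is a simplified illustration (loosely modeled on IHE's ATNA-style grouping requirements), not the rule set of the 3LGM2 Tool:

```python
# Hypothetical sketch of the actor-grouping check: given the actors a
# modeler has assigned, report required grouping partners still missing.
# The rule table is an invented placeholder, not real IHE profile text.

REQUIRED_GROUPINGS = {
    # Illustrative rule: a Document Source must be grouped with a Secure Node.
    "Document Source": {"Secure Node"},
}

def missing_groupings(chosen_actors):
    """Per chosen actor, return required grouping partners not yet chosen."""
    chosen = set(chosen_actors)
    missing = {}
    for actor in chosen:
        need = REQUIRED_GROUPINGS.get(actor, set()) - chosen
        if need:
            missing[actor] = need
    return missing

gaps = missing_groupings(["Document Source"])
# the modeler is warned that a Secure Node actor is still required
```

A tool-supported check of this kind is exactly what shields inexperienced modelers from grouping rules they are unaware of.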



2021 ◽  
pp. 009385482110084
Author(s):  
Gina M. Vincent ◽  
Rachael T. Perrault ◽  
Dara C. Drawbridge ◽  
Gretchen O. Landry ◽  
Thomas Grisso

This study examined the feasibility of and fidelity to risk/needs assessment, mental health screening, and risk-need-responsivity (RNR)-based case planning within juvenile probation in two states. The researcher-guided implementation effort included the Massachusetts Youth Screening Instrument-2 (MAYSI-2), the Structured Assessment of Violence Risk in Youth (SAVRY), and policies to prioritize criminogenic needs while using mental health services only when warranted. Data from 53 probation officers (POs) and 553 youths indicated that three of five offices had high fidelity to administration and case planning policies. The interrater reliability (n = 85; intraclass correlation coefficient ICC[A,1] = .92 [Northern state] and .80 [Southern state]) and predictive validity (n = 455; Exp[B] = 1.83) of SAVRY risk ratings were significant. There was an overreliance on mental health services: 48% of youths received these referrals when only 20% screened as having mental health needs. Barriers to fidelity to RNR practices in some offices included assessments not being conducted before disposition, lack of service availability, and limited buy-in from a few stakeholders.
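For readers less familiar with the statistic, the reported predictive validity Exp(B) = 1.83 is an odds ratio obtained by exponentiating a logistic-regression coefficient B. A short sketch (all values illustrative except the reported 1.83):

```python
# Exp(B) in logistic regression: the multiplicative change in the odds
# of the outcome per one-unit increase in the predictor. Here we work
# backwards from the abstract's reported value for illustration.

import math

def odds_ratio(b):
    """Odds ratio for a one-unit increase in the predictor."""
    return math.exp(b)

b = math.log(1.83)      # the coefficient implying the reported Exp(B)
ratio = odds_ratio(b)   # recovers 1.83
```

That is, each one-step increase in SAVRY risk rating multiplies the odds of the outcome by roughly 1.83, which is why Exp(B) rather than B itself is the quantity reported.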

