development costs
Recently Published Documents


TOTAL DOCUMENTS

615
(FIVE YEARS 172)

H-INDEX

26
(FIVE YEARS 6)

2022 ◽  
pp. 39-46
Author(s):  
N. A. Gerasimenko

The dynamics of innovation-activity indicators in Russia's federal districts over 2005–2019 have been studied. During this period, the number of organisations carrying out research activities increased, as did total domestic expenditure on research and development. The number of new production technologies and the volume of innovative products produced grew, and the payback rate on research and development costs rose in all regions. At the same time, declining innovation activity among organisations and a significant reduction in the number of personnel employed in research and development were detected. The share of new products in the total volume of products shipped remains low. This situation requires national and regional authorities to take appropriate regulatory action.


Materials ◽  
2022 ◽  
Vol 15 (1) ◽  
pp. 323
Author(s):  
Wan-Chun Chuang ◽  
Wei-Long Chen

This study successfully established a strip warpage simulation model of the flip-chip process and investigated the effects of structural design and process (molding, post-mold curing, pretreatment, and ball mounting) on strip warpage. The errors between simulated and experimental values were found to be less than 8%. Taguchi analysis was employed to identify the key factors affecting strip warpage, which were discovered to be die thickness and substrate thickness, followed by mold compound thickness and molding temperature. Although a greater die thickness and mold compound thickness reduce the strip warpage, they also substantially increase the overall strip thickness. To overcome this problem, design criteria are proposed, with the neutral axis of the strip structure located on the bump. The results obtained using the criteria revealed that the strip warpage and overall strip thickness are effectively reduced. In summary, the proposed model can be used to evaluate the effect of structural design and process parameters on strip warpage and can provide strip design guidelines for reducing the amount of strip warpage and meeting the requirements for light, thin, and short chips on the production line. In addition, the proposed guidelines can accelerate the product development cycle and improve product quality with reduced development costs.
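The proposed design criterion, placing the neutral axis of the strip structure on the bump, can be sketched with the standard modulus-weighted (transformed-section) centroid of a layered stack: bending-induced warpage is minimized when the feature of interest sits at this height. The layer moduli and thicknesses below are illustrative placeholders, not the actual die, substrate, or mold-compound properties from the study.

```python
def neutral_axis(layers):
    """Height of the bending neutral axis above the bottom surface of a
    layered strip, computed as the modulus-weighted centroid.

    layers: list of (elastic_modulus, thickness) tuples, ordered bottom to top.
    Assumes unit width and perfect bonding between layers.
    """
    z = 0.0      # running height of the bottom of the current layer
    moment = 0.0 # sum of E * t * layer-centroid height
    stiff = 0.0  # sum of E * t
    for modulus, thickness in layers:
        centroid = z + thickness / 2.0
        moment += modulus * thickness * centroid
        stiff += modulus * thickness
        z += thickness
    return moment / stiff

# Hypothetical three-layer strip: substrate, die, mold compound
# (moduli in GPa, thicknesses in mm; values are made up for illustration).
stack = [(26.0, 0.2), (130.0, 0.1), (20.0, 0.3)]
print(neutral_axis(stack))
```

A designer following the criterion would adjust the die and mold-compound thicknesses until this height coincides with the bump plane.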


2021 ◽  
Vol VI (IV) ◽  
pp. 9-14
Author(s):  
Rao Raza Hashim ◽  
Bushra Arfeen

The modern world operates on the rule of survival of the fittest. Hence, there is cutthroat competition among states, and every state strives for greater economic development. Development rests on the efficient use of resources, which in turn depends on technological innovation. Such innovations incur huge research and development costs, yet they can easily be copied by rivals and used as the basis for further developments. Thus, the idea of Intellectual Property Rights (IPR) was introduced. While these rights offer many advantages, they can also prove deleterious, as they play a role in restricting innovation in favour of the global North, which further widens the gap between the two worlds. This paper traces the history of IPR and argues that IPR has been a cause of inequality and has restricted innovation.


2021 ◽  
Author(s):  
Sabeer Saeed ◽  
Asaf Varol

As automation is changing everything in today's world, there is an urgent need for artificial intelligence, the basic component of today's automation and innovation, to have software engineering standards for analysis and design before systems are synthesized, in order to avoid disaster. Artificial intelligence software can reduce development costs and time for programmers. There is a probability that society may reject artificial intelligence unless a trustworthy standard in software engineering is created to make such systems safe. For society to have more confidence in artificial intelligence applications and systems, researchers and practitioners in the computing industry need to work not only on the intersection of artificial intelligence and software engineering, but also on software theory that can serve as a universal framework for software development, especially for artificial intelligence systems. This paper seeks to (a) encourage the development of standards in artificial intelligence that will contribute immensely to the software engineering industry, considering that artificial intelligence is one of the leading technologies driving innovation worldwide, and (b) propose that professional bodies from philosophy, law, medicine, engineering, government, the international community (such as NATO and the UN), and science and technology organisations develop a standardized framework for how AI should work in the future, one that guarantees safety to the public, among others. Such standards will boost public confidence and promote acceptance of artificial intelligence applications and systems by both end-users and the general public.


2021 ◽  
Author(s):  
Syakira Saadon ◽  
Norhazrin Azmi ◽  
Prabagar Murukesavan ◽  
Norsham Nordin ◽  
Salman Saad

Abstract Petroliam Nasional Berhad (PETRONAS) is embarking on the implementation of the Design One Build Many (D1BM) concept, an integrated approach to design standardization, replication, and volume consolidation for lightweight, fit-for-purpose wellhead platforms, also known as Lightweight Structures (LWS). The objective of the standardization is to enable monetization of marginal and small fields by improving the economics of projects challenged by high development costs and conventional execution schedules. Traditionally, projects are developed through a "bespoke" design, which requires a specific engineering study during the Front End Loading (FEL) phase to cater for field-specific requirements. In addition, once a project has been sanctioned, it must undergo tendering and bidding activities, which can increase the field monetization duration by four to five months. The current "bespoke" approach has resulted in non-standardization, lost opportunities for volume consolidation, and ultimately a longer time to field monetization. Although the Design One Build Many principles have been known for a long time, they were applied in a rather project-oriented way. This emerging solution is thus the result of synthesizing multiple challenges, with the goal of establishing an end-to-end systematic approach to monetizing marginal and small fields by lowering development cost and monetization duration. There will be standardized sets of Base Designs and a flexible Catalogue of standardized add-on items. Incorporating lessons learned into the repeated design and a standardized execution strategy covering Engineering, Procurement, Construction, Installation and Commissioning could also help improve delivery efficiency for the lightweight structures. Greater collaboration across fields and blocks will give a significant added advantage through economies of scale and eventually increase overall project value.


2021 ◽  
Author(s):  
Abdullah Al Anboori ◽  
Stephen Dee ◽  
Khalil Al Rashdi ◽  
Herbert Volk

Abstract The degree of fluid compartmentalization has direct implications for the development costs of oil and gas reservoirs, since it may negatively impact gas-water contacts (GWCs) and condensate gas ratios (CGRs). In this case study on the Barik Formation in the giant Khazzan gas field in Block 61 in Oman, we demonstrate how independent approaches can be integrated to assess reservoir compartmentalization risk. The three disciplines integrated are structural geology (fault seal analysis, movement and stress stages of faults and fractures, trap geometry over geological time), petroleum systems (fluid chemistry and pressure, charge history), and sedimentology-stratigraphy including diagenesis (sedimentological and diagenetic controls on vertical and lateral facies and reservoir quality variation). Dynamic data from production tests were also analyzed and integrated with the observations above. Based on this work, Combined Common Risk Segment (CCRS) maps with most likely and alternative scenarios for reservoir compartmentalization were constructed. While the pressure data carry significant uncertainty due to the tight nature of the deeply buried rocks, it is clear that pressures in gas-bearing sections fall onto a single pressure gradient across Block 61, while water pressures indicate variable GWCs. Overall, the GWCs appear to shallow across the field towards the NW, while water pressure appears to increase in that direction. The "apparent" gas communication with separate aquifers is difficult to explain conventionally. A range of scenarios for fluid distribution and reservoir connectivity are discussed. Fault seal compartmentalization and different trap spill points were found to be the most likely mechanisms explaining the fluid distribution and likely reservoir compartmentalization. Perched water may be another factor explaining the variable GWCs. Hydrodynamic tilting due to the flow of formation water was deemed an unlikely scenario, and the risk of reservoir compartmentalization due to sedimentological and diagenetic flow barriers was deemed to be low.
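The variable gas-water contacts inferred from pressure data follow a standard construction: a contact (strictly, a free-water level) is estimated at the depth where the gas and water pressure-gradient lines intersect, since water pressure increases faster with depth than gas pressure. A minimal sketch, with illustrative pressures and gradients rather than actual Khazzan field values:

```python
def gwc_depth(p_gas_ref, grad_gas, p_water_ref, grad_water):
    """Depth below a common reference datum at which the gas and water
    pressure-gradient lines intersect, used as a free-water-level proxy
    for the gas-water contact.

    p_gas_ref, p_water_ref: pressures at the datum (e.g. MPa)
    grad_gas, grad_water:   pressure increase per unit depth (e.g. MPa/m)
    """
    if grad_water <= grad_gas:
        raise ValueError("water gradient must be steeper than gas gradient")
    # Solve p_gas_ref + grad_gas * z == p_water_ref + grad_water * z for z.
    return (p_gas_ref - p_water_ref) / (grad_water - grad_gas)

# Illustrative numbers: overpressured gas leg vs. a normal water gradient.
print(gwc_depth(30.0, 0.002, 20.0, 0.010))  # depth in metres below datum
```

With a single gas gradient across the block, as observed here, laterally varying water pressures shift this intersection and produce the different apparent GWCs described in the abstract.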


Molecules ◽  
2021 ◽  
Vol 26 (23) ◽  
pp. 7369
Author(s):  
Jocelyn Sunseri ◽  
David Ryan Koes

Virtual screening—predicting which compounds within a specified compound library bind to a target molecule, typically a protein—is a fundamental task in the field of drug discovery. Doing virtual screening well provides tangible practical benefits, including reduced drug development costs, faster time to therapeutic viability, and fewer unforeseen side effects. As with most applied computational tasks, the algorithms currently used to perform virtual screening feature inherent tradeoffs between speed and accuracy. Furthermore, even theoretically rigorous, computationally intensive methods may fail to account for important effects relevant to whether a given compound will ultimately be usable as a drug. Here we investigate the virtual screening performance of the recently released Gnina molecular docking software, which uses deep convolutional networks to score protein-ligand structures. We find, on average, that Gnina outperforms conventional empirical scoring. The default scoring in Gnina outperforms the empirical AutoDock Vina scoring function on 89 of the 117 targets of the DUD-E and LIT-PCBA virtual screening benchmarks with a median 1% early enrichment factor that is more than twice that of Vina. However, we also find that issues of bias linger in these sets, even when not used directly to train models, and this bias obfuscates to what extent machine learning models are achieving their performance through a sophisticated interpretation of molecular interactions versus fitting to non-informative simplistic property distributions.
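The 1% early enrichment factor reported above is a standard screening metric: the fraction of actives found in the top 1% of the score-ranked library, divided by the fraction of actives in the whole library. A minimal sketch, using synthetic scores and labels rather than DUD-E or LIT-PCBA data:

```python
def enrichment_factor(scores, labels, fraction=0.01):
    """Early enrichment factor at the given fraction of the ranked list.

    scores: predicted score per compound (higher = more likely active)
    labels: 1 for an active compound, 0 for a decoy/inactive
    """
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    n_top = max(1, round(len(ranked) * fraction))
    actives_top = sum(label for _, label in ranked[:n_top])
    hit_rate_top = actives_top / n_top
    hit_rate_overall = sum(labels) / len(labels)  # assumes at least one active
    return hit_rate_top / hit_rate_overall

# Synthetic library of 100 compounds, 10 actives ranked highest.
scores = list(range(100, 0, -1))
labels = [1] * 10 + [0] * 90
print(enrichment_factor(scores, labels))  # -> 10.0
```

An EF of 10 at 1% means actives are concentrated tenfold in the top of the ranking relative to random selection, so "more than twice that of Vina" is a comparison of these median per-target values.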


2021 ◽  
Vol 108 ◽  
pp. 102834
Author(s):  
Kenneth Moreland ◽  
Robert Maynard ◽  
David Pugmire ◽  
Abhishek Yenpure ◽  
Allison Vacanti ◽  
...  

Systems ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 84
Author(s):  
Sebastian Kirmse ◽  
Robert J. Cloutier ◽  
Kuang-Ting Hsiao

Nanocomposites provide outstanding benefits and possibilities compared to traditional composites, but they struggle to reach the market due to the complexity of, large number of challenges involved in, and lack of standards for nanocomposite commercialization. This article proposes a commercialization framework that utilizes market analysis and systems engineering to support the commercialization of such high technologies. The article demonstrates the importance and usefulness of applying Model-Based Systems Engineering throughout the commercialization process of nanocomposite technologies when combining it with the Lean LaunchPad approach and an engineering analysis. The framework was validated using a qualitative research method with a case study approach. Applying this framework to a nanocomposite, the ZT-CFRP technology, showed tremendous impacts on the commercialization process, such as reduced market and technological uncertainties, which limits commercialization risk and increases the chance of capital funding. Furthermore, utilizing the framework helped decrease commercialization time and cost through the use of a lean engineering analysis. This framework is intended to assist advanced-material companies, material scientists, researchers, and entrepreneurs in academia and industry during the commercialization process by minimizing uncertainties and risks while focusing resources to reduce time-to-market and development costs.



