TWO-LEVEL BURN-IN FOR RELIABILITY AND ECONOMY IN REPAIRABLE SERIES SYSTEMS HAVING INCOMPATIBILITY

Author(s):  
KYUNGMEE O. KIM ◽  
WAY KUO

When a system is assembled from components, incompatibility often occurs as a result of the assembly process. The ability to quantify incompatibility is very important for making burn-in decisions because the goal of system burn-in is to minimize the incompatibility factor. In the past, incompatibility has been only partially represented in system prediction models because assembly was assumed to have no effect on the components. This paper presents a more accurate model for system prediction by allowing for the possibility that, in some cases, assembly adversely affects the components. After applying a superposition of delayed renewal processes and a nonhomogeneous Poisson process to model the times between system failures, we derive and analyze the effects of component and system burn-in on system cost and performance. Examples are included to demonstrate how to determine optimal component and system burn-in times simultaneously based on an equivalent problem formulation and nonlinear programming.
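To make the optimization step concrete, here is a minimal sketch, in Python, of determining component and system burn-in times jointly by nonlinear programming. The cost coefficients and failure-intensity terms are hypothetical stand-ins, not the renewal/NHPP model derived in the paper.

```python
# A minimal sketch of jointly optimizing component and system burn-in times
# via nonlinear programming. The cost model below is hypothetical: burn-in
# cost grows linearly with burn-in time, while the expected field-failure
# cost decays as infant-mortality failures are screened out.
import numpy as np
from scipy.optimize import minimize

C_COMP = 2.0      # cost per unit of component burn-in time (assumed)
C_SYS = 5.0       # cost per unit of system burn-in time (assumed)
C_FAIL = 500.0    # cost per expected field failure (assumed)

def expected_field_failures(t_c, t_s):
    # Hypothetical decreasing-failure-rate model: residual infant-mortality
    # intensity after component burn-in of t_c and system burn-in of t_s.
    return 0.4 * np.exp(-0.8 * t_c) + 0.3 * np.exp(-0.5 * t_s)

def total_cost(x):
    t_c, t_s = x
    return C_COMP * t_c + C_SYS * t_s + C_FAIL * expected_field_failures(t_c, t_s)

res = minimize(total_cost, x0=[1.0, 1.0], bounds=[(0, 20), (0, 20)])
t_c_opt, t_s_opt = res.x
print(f"optimal component burn-in: {t_c_opt:.2f}, system burn-in: {t_s_opt:.2f}")
```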

Author(s):  
Ramesh Varma ◽  
Richard Brooks ◽  
Ronald Twist ◽  
James Arnold ◽  
Cleston Messick

In a prequalification effort to evaluate the assembly process of industrial-grade, high-pin-count devices for use in a high-reliability application, one device exhibited characteristics that, without corrective actions and/or extensive screening, may lead to intermittent system failures and unacceptable reliability. Five methodologies confirmed this conclusion: (1) post-decapsulation wire pull yielded low results; (2) bond shape analysis showed process variation; (3) failure analysis (FA) using state-of-the-art equipment determined the root causes and verified the low wire pull results; (4) temperature cycling of parts under monitoring revealed intermittent failures; and (5) parts from other vendors tested using the same techniques passed all limits.


2020 ◽  
Vol 16 (5) ◽  
pp. 685-707 ◽  
Author(s):  
Amna Batool ◽  
Farid Menaa ◽  
Bushra Uzair ◽  
Barkat Ali Khan ◽  
Bouzid Menaa

The pace at which nanotheranostic technology for human disease is evolving has accelerated exponentially over the past five years. Nanotechnology is committed to exploiting the intrinsic properties of materials and structures at submicroscopic scales. Indeed, reducing the physical dimensions of particulates and devices generally has a profound influence on their physico-chemical characteristics, biological properties, and performance. The exploration of nature’s components to work effectively as nanoscaffolds or nanodevices is attracting tremendous and growing interest in medicine for various applications (e.g., biosensing, tunable control and targeted drug release, tissue engineering). Several nanotheranostic approaches (i.e., combining diagnostics and therapeutics at the nanoscale) conferring unique features are constantly progressing and overcoming the limitations of conventional medicines in specificity, efficacy, solubility, sensitivity, biodegradability, biocompatibility, stability, and interactions at subcellular levels. This review introduces two major aspects of nanotechnology as an innovative and challenging theranostic strategy: (i) the most intriguing (bare and functionalized) nanomaterials, with their respective advantages and drawbacks; and (ii) current and promising multifunctional “smart” nanodevices.


2017 ◽  
Vol 7 (2) ◽  
pp. 7-25
Author(s):  
Karolina Diallo

Pupil with Obsessive-Compulsive Disorder. Over the past twenty years, childhood OCD has received more attention than any other anxiety disorder that occurs in childhood. The increasing interest and research in this area have led to a growing number of OCD diagnoses in children and adolescents, which affects both specialists and teachers. Depending on the severity of symptoms, OCD can have a detrimental effect on a child's school performance, making it nearly impossible for the child to concentrate on school and associated duties. This article is devoted to obsessive-compulsive disorder and its specifics in children, focusing on the impact of the disorder on the behaviour, experience, and performance of the child in the school environment. It stresses the importance of the teacher whose class includes a pupil with this diagnosis, and it points out the need to increase teachers' competence to identify children with OCD symptoms, to take the disorder into account, to adapt the course of teaching, and to introduce measures that help children reduce anxiety and maintain (or improve) school performance within and in accordance with school regulations and the curriculum.


Author(s):  
Djordje Romanic

Tornadoes and downbursts cause extreme wind speeds that often present a threat to human safety, structures, and the environment. While the accuracy of weather forecasts has improved manyfold over the past several decades, current numerical weather prediction models are still not capable of explicitly resolving tornadoes and small-scale downbursts in operational applications. This chapter describes some of the physical (e.g., tornadogenesis and downburst formation), mathematical (e.g., chaos theory), and computational (e.g., grid resolution) challenges that meteorologists currently face in tornado and downburst forecasting.


Author(s):  
Lucio Salles de Salles ◽  
Lev Khazanovich

The Pavement ME transverse joint faulting model incorporates mechanistic theories that predict the development of joint faulting in jointed plain concrete pavements (JPCP). The model is calibrated using the Long-Term Pavement Performance database. However, the Mechanistic-Empirical Pavement Design Guide (MEPDG) encourages transportation agencies, such as state departments of transportation, to perform local calibrations of the faulting model included in Pavement ME. Model calibration is a complicated and effort-intensive process that requires high-quality pavement design and performance data. Pavement management data, which is collected regularly and in large amounts, may present higher variability than is desired for faulting performance model calibration. The MEPDG performance prediction models predict pavement distresses with 50% reliability, whereas JPCP are usually designed for high levels of faulting reliability to reduce the likelihood of excessive faulting. For design, improving the faulting reliability model is therefore as important as improving the faulting prediction model. This paper proposes a calibration of the Pavement ME reliability model using pavement management system (PMS) data, and illustrates the proposed approach using PMS data from the Pennsylvania Department of Transportation. Results show increased accuracy of faulting predictions using the new reliability model across various design characteristics. Moreover, the new reliability model allows JPCP to be designed for higher levels of traffic because of its less conservative predictions.
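As an illustration of the kind of calibration described here, the following sketch follows an MEPDG-style reliability formulation, in which faulting at a design reliability level is the 50%-reliability prediction plus a standard normal deviate times a standard error modeled as a function of the prediction. The data and the power-function form of the standard error are assumptions for illustration, not the calibrated Pennsylvania model.

```python
# A minimal sketch of an MEPDG-style reliability calibration from PMS data.
# Faulting at design reliability R is taken as the 50%-reliability Pavement ME
# prediction plus a standard normal deviate times the standard error of the
# prediction, with the standard error modeled as a power function of the
# predicted faulting. All data and the functional form are illustrative.
import numpy as np
from scipy.stats import norm

# Hypothetical paired observations (inches): Pavement ME predictions at 50%
# reliability versus faulting measured by the pavement management system.
predicted = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.15])
measured = np.array([0.03, 0.03, 0.08, 0.07, 0.14, 0.10, 0.19])

# Fit log|error| against log(prediction): se(pred) ~ a * pred**b.
b, log_a = np.polyfit(np.log(predicted),
                      np.log(np.abs(measured - predicted) + 1e-6), 1)

def std_error(pred):
    return np.exp(log_a) * pred ** b

def faulting_at_reliability(pred, reliability):
    # Shift the mean prediction by z(R) standard errors.
    return pred + norm.ppf(reliability) * std_error(pred)

# Example: design check of a 0.10 in. mean faulting prediction at 90% reliability.
print(f"{faulting_at_reliability(0.10, 0.90):.3f} in.")
```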


Author(s):  
Chaochao Lin ◽  
Matteo Pozzi

Optimal exploration of engineering systems can be guided by the principle of Value of Information (VoI), which accounts for the topological importance of components, their reliability, and the management costs. For series systems, in most cases higher inspection priority should be given to unreliable components. For redundant systems such as parallel systems, analysis of one-shot decision problems shows that higher inspection priority should be given to more reliable components. This paper investigates the optimal exploration of redundant systems in long-term decision making with sequential inspection and repair. When the expected cumulative discounted cost is considered, it may become more efficient to give higher inspection priority to less reliable components, in order to preserve system redundancy. To investigate this problem, we develop a Partially Observable Markov Decision Process (POMDP) framework for sequential inspection and maintenance of redundant systems, in which VoI analysis is embedded in the optimal selection of exploratory actions. We investigate the use of alternative approximate POMDP solvers for parallel and more general systems, compare their computational complexity and performance, and show how the inspection priorities depend on the economic discount factor, the degradation rate, the inspection precision, and the repair cost.
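The one-shot effect mentioned above can be reproduced in a few lines. The sketch below computes the VoI of perfectly inspecting each component of a two-component parallel system under hypothetical costs and prior failure probabilities; it is the single-decision special case, not the paper's full POMDP framework.

```python
# A minimal sketch of one-shot Value of Information (VoI) analysis for a
# two-component parallel system, illustrating the claim that inspecting the
# MORE reliable component can be more valuable. All numbers are hypothetical.
# The system fails only if both components have failed; a repair restores a
# component with certainty.
P_FAIL = [0.1, 0.5]   # prior failure probabilities (component 1 is more reliable)
C_FAIL = 100.0        # cost incurred if the system fails
C_REP = 10.0          # cost of repairing one component

def best_prior_cost(p):
    # Without inspection: do nothing and risk failure, or repair one
    # component (which makes the parallel system safe in this one-shot case).
    return min(C_FAIL * p[0] * p[1], C_REP)

def cost_after_inspecting(i, p):
    # Perfectly inspect component i, then act optimally on the outcome.
    j = 1 - i
    # Found working (prob 1 - p[i]): the parallel system cannot fail, cost 0.
    # Found failed (prob p[i]): repair something (C_REP) or risk C_FAIL * p[j].
    return p[i] * min(C_FAIL * p[j], C_REP)

prior = best_prior_cost(P_FAIL)
for i in (0, 1):
    voi = prior - cost_after_inspecting(i, P_FAIL)
    print(f"VoI of inspecting component {i + 1}: {voi:.2f}")
# With these numbers, inspecting the more reliable component 1 has the higher
# VoI: finding it working rules out system failure entirely.
```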


2021 ◽  
Vol 11 (1) ◽  
pp. 81
Author(s):  
Kristina C. Backer ◽  
Heather Bortfeld

A debate over the past decade has focused on the so-called bilingual advantage—the idea that bilingual and multilingual individuals have enhanced domain-general executive functions, relative to monolinguals, due to competition-induced monitoring of both processing and representation from the task-irrelevant language(s). In this commentary, we consider a recent study by Pot, Keijzer, and de Bot (2018), which focused on the relationship between individual differences in language usage and performance on an executive function task among multilingual older adults. We discuss their approach and findings in light of a more general movement towards embracing complexity in this domain of research, including individuals’ sociocultural context and position in the lifespan. The field increasingly considers interactions between bilingualism/multilingualism and cognition, employing measures of language use well beyond the early dichotomous perspectives on language background. Moreover, new measures of bilingualism and analytical approaches are helping researchers interrogate the complexities of specific processing issues. Indeed, our review of the bilingualism/multilingualism literature confirms the increased appreciation researchers have for the range of factors—beyond whether someone speaks one, two, or more languages—that impact specific cognitive processes. Here, we highlight some of the most salient of these, and incorporate suggestions for a way forward that likewise encompasses neural perspectives on the topic.


2021 ◽  
pp. 875529302199636
Author(s):  
Mertcan Geyin ◽  
Brett W Maurer ◽  
Brendon A Bradley ◽  
Russell A Green ◽  
Sjoerd van Ballegooy

Earthquakes occurring over the past decade in the Canterbury region of New Zealand have resulted in liquefaction case-history data of unprecedented quantity. This provides the profession with a unique opportunity to advance the prediction of liquefaction occurrence and consequences. Toward that end, this article presents a curated dataset containing ∼15,000 cone-penetration-test-based liquefaction case histories compiled from three earthquakes in Canterbury. The compiled, post-processed data are presented in a dense array structure, allowing researchers to easily access and analyze a wealth of information pertinent to free-field liquefaction response (i.e. triggering and surface manifestation). Research opportunities using these data include, but are not limited to, the training or testing of new and existing liquefaction-prediction models. The many methods used to obtain and process the case-history data are detailed herein, as is the structure of the compiled digital file. Finally, recommendations for analyzing the data are outlined, including nuances and limitations that users should carefully consider.


Author(s):  
Ruofan Liao ◽  
Paravee Maneejuk ◽  
Songsak Sriboonchitta

In the past, in many areas, the best prediction models were linear and nonlinear parametric models. In the last decade, in many application areas, deep learning has been shown to produce more accurate predictions than parametric models. Deep learning-based predictions are reasonably accurate, but not perfect. How can we achieve better accuracy? To achieve this objective, we propose combining neural networks with a parametric model: namely, training neural networks not on the original data, but on the differences between the actual data and the predictions of the parametric model. Using the example of predicting currency exchange rates, we show that this idea indeed leads to more accurate predictions.
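A minimal sketch of this hybrid scheme, on synthetic data: a linear model (standing in for the parametric model, which in the paper may differ) captures the broad trend, a small neural network is trained on its residuals, and the final prediction is the sum of the two.

```python
# A minimal sketch of the residual-learning hybrid described above: fit a
# parametric model first, then train a neural network on its residuals and
# add the two predictions. The linear model, network size, and synthetic
# data are illustrative assumptions, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = 1.5 * X[:, 0] + np.sin(2 * X[:, 0]) + rng.normal(0, 0.1, 500)  # trend + nonlinearity

# Step 1: the parametric (here linear) model captures the broad trend.
param_model = LinearRegression().fit(X, y)
residuals = y - param_model.predict(X)

# Step 2: the neural network learns only what the parametric model missed.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, residuals)

# Combined prediction: parametric trend plus learned residual correction.
X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
y_hybrid = param_model.predict(X_test) + net.predict(X_test)
print(y_hybrid)
```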


1974 ◽  
Vol 11 (1) ◽  
pp. 72-85 ◽  
Author(s):  
S. M. Samuels

Theorem: A necessary and sufficient condition for the superposition of two ordinary renewal processes to again be a renewal process is that they be Poisson processes. A complete proof of this theorem is given; it is also shown how the theorem follows from the corresponding result for the superposition of two stationary renewal processes.
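For reference, the theorem admits the following formal statement (a restatement in standard notation; independence of the two processes is the standard assumption for this result):

```latex
% Formal restatement; independence of N_1 and N_2 is assumed.
Let $N_1$ and $N_2$ be independent ordinary renewal processes, and let
$N(t) = N_1(t) + N_2(t)$ denote their superposition. Then
\[
  N \text{ is a renewal process} \;\Longleftrightarrow\;
  N_1 \text{ and } N_2 \text{ are Poisson processes},
\]
in which case $N$ is itself a Poisson process with rate $\lambda_1 + \lambda_2$.
```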

