random failures
Recently Published Documents

TOTAL DOCUMENTS: 116 (FIVE YEARS: 30)
H-INDEX: 16 (FIVE YEARS: 1)

Author(s): Robert Peruzzi

This case involved industrial equipment whose repeated, seemingly random failures resulted in the buyer of that equipment suing the seller. The failures had been isolated to a group of several transistors within electro-mechanical modules inside the equipment, but the root cause of those transistor failures had not been determined. The equipment seller had more than 1,000 units in the field with no similar failures, and the electro-mechanical module manufacturer had more than 20,000 units in the field with no similar failures. Electrical contractors hired by the buyer had measured power quality and reported no faults in the three-phase power at the equipment terminals. This paper presents circuit analyses of the failing electro-mechanical module, the basics of electrostatic discharge damage and protection, and the root cause of these failures: an electrical code-violating extraneous neutral-to-ground bond in a secondary power cabinet.


Author(s):  
Sebastian Wandelt ◽  
Wei Lin ◽  
Xiaoqian Sun ◽  
Massimiliano Zanin

Author(s):  
V.I. Chernoivanov ◽  
◽  
V.A. Denisov ◽  
Yu.V. Kataev ◽  
A.A. Solomashkin ◽  
...  

The existing maintenance and repair strategies are discussed, and a new strategy based on controlling residual life is presented. It is assumed that, in the future, the main maintenance and repair operations will focus on minimizing the loss of residual life, i.e., on increasing the service life. It is shown that this maintenance organization strategy for maintenance and repair optimization minimizes the flow of random failures of components during operation and maximizes their service life (for a group of identical components). Work on selecting and justifying the life parameters controlled under this strategy has been performed.
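The residual-life control idea can be illustrated with a toy simulation; all numbers (initial life, wear rates, threshold) are invented for illustration and are not taken from the paper. A component is renewed whenever its estimated residual life drops below a threshold; with the threshold at zero, it simply runs to failure.

```python
import random

def simulate(threshold, horizon=1000, seed=42):
    """Toy residual-life-controlled maintenance: each period the component's
    residual life decays by a random amount; it is renewed when the estimate
    falls below `threshold`, and counted as a failure if it reaches zero first."""
    rng = random.Random(seed)
    life, failures, repairs = 100.0, 0, 0
    for _ in range(horizon):
        life -= rng.uniform(0.5, 3.0)   # stochastic wear per period
        if life <= 0:                   # ran to failure
            failures += 1
            life = 100.0
        elif life < threshold:          # condition-based renewal
            repairs += 1
            life = 100.0
    return failures, repairs

print(simulate(0.0))    # run-to-failure strategy: (failures, repairs)
print(simulate(10.0))   # residual-life-controlled strategy
```

With a threshold comfortably above the worst single-period wear, the failure count drops to zero at the cost of planned renewals, which is the trade-off the strategy exploits.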


Entropy, 2021, Vol. 23 (9), pp. 1139
Author(s): Irene López-Rodríguez, Cesár F. Reyes-Manzano, Ariel Guzmán-Vargas, Lev Guzmán-Vargas

The complexity of drug–disease interactions is a process that has been explained in terms of the need for new drugs and the increasing cost of drug development, among other factors. Over the last years, diverse approaches have been explored to understand drug–disease relationships. Here, we construct a bipartite graph in terms of active ingredients and diseases based on thoroughly classified data from a recognized pharmacological website. We find that the connectivities between drugs (outgoing links) and diseases (incoming links) approximately follow a stretched-exponential function with different fitting parameters; for drugs, the behavior lies between exponential and power-law functions, while for diseases it is purely exponential. The network projections, onto either drugs or diseases, reveal that the co-occurrence of drugs (diseases) in common target diseases (drugs) leads to the appearance of connected components, which vary as the threshold number of common target diseases (drugs) is increased. The corresponding projections built from randomized versions of the original bipartite networks are considered to evaluate the differences. The heterogeneity of association at the group level between active ingredients and diseases is evaluated in terms of Shannon entropy and algorithmic complexity, revealing that higher levels of diversity are present for diseases compared to drugs. Finally, the robustness of the original bipartite network is evaluated in terms of most-connected node removal (direct attack) and random removal (random failures).
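The projection-and-threshold construction can be sketched with networkx on a toy bipartite graph; the drug and disease names below are invented for illustration and are not from the paper's dataset.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy bipartite drug-disease graph; names are hypothetical examples.
edges = [("aspirin", "pain"), ("aspirin", "fever"),
         ("ibuprofen", "pain"), ("ibuprofen", "fever"), ("ibuprofen", "inflammation"),
         ("paracetamol", "pain"), ("paracetamol", "fever"),
         ("metformin", "diabetes")]
drugs = {d for d, _ in edges}

B = nx.Graph()
B.add_nodes_from(drugs, bipartite=0)
B.add_nodes_from({s for _, s in edges}, bipartite=1)
B.add_edges_from(edges)

# Project onto drugs: edge weight = number of target diseases two drugs share.
proj = bipartite.weighted_projected_graph(B, drugs)

def thresholded(proj, t):
    """Keep only projection edges with at least t shared target diseases."""
    g = nx.Graph()
    g.add_nodes_from(proj.nodes)
    g.add_edges_from((u, v) for u, v, w in proj.edges(data="weight") if w >= t)
    return g

# The connected components of the projection change as the threshold grows.
for t in (1, 2, 3):
    print(t, nx.number_connected_components(thresholded(proj, t)))
```

Raising the threshold fragments the projection, which is the component-versus-threshold behavior the abstract describes.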


2021
Author(s): Devesh Bhasin, David Staack, Daniel A. McAdams

This work analyzes the role of bioinspired product architecture in facilitating the development of robust engineering systems. Existing studies on bioinspired product architecture largely focus on inspiring biology-like function-sharing in engineering design. This work shows that the guidelines for bioinspired product architecture, originally developed for bioinspiration of function-sharing, may induce robustness to random failures in engineered systems. To quantify such an improvement, this study utilizes Functional Modeling to derive modular equivalents of biological systems. The application of the bioinspired product architecture guidelines is then modeled as a transition from the modular product architecture of the modular equivalents to the actual product architecture of the biological systems. The robustness of the systems to random failures is analyzed after the application of each guideline by modeling the systems as directed networks. A single robustness metric is then introduced to quantify the degradation in the expected functionality of systems under random disruptions of increasing severity. Our results show that a system with bioinspired product architecture exhibits a gradual degradation in expected functionality as the number of failed modules increases, compared to an equivalent system with a one-to-one mapping of functions to modules. The findings are validated by designing and analyzing a COVID-19 breathalyzer as a case study.
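An expected-functionality degradation metric of this kind could be sketched as an exact expectation over random module-failure subsets; the function-to-module mappings below are hypothetical examples, not the paper's models or its exact metric.

```python
from itertools import combinations

def expected_functionality(requires, k):
    """Exact expected fraction of working functions when k random modules fail.

    `requires` maps each function to the set of modules it needs; a function
    works only if none of its required modules has failed."""
    modules = sorted(set().union(*requires.values()))
    subsets = list(combinations(modules, k))
    working = sum(
        1
        for failed in subsets
        for need in requires.values()
        if not (need & set(failed))
    )
    return working / (len(subsets) * len(requires))

# Hypothetical architectures: 4 functions on 4 dedicated modules (one-to-one)
# versus the same 4 functions packed onto 2 multi-function modules (shared).
one_to_one = {"F0": {"M0"}, "F1": {"M1"}, "F2": {"M2"}, "F3": {"M3"}}
shared = {"F0": {"M0"}, "F1": {"M0"}, "F2": {"M1"}, "F3": {"M1"}}

for k in (1, 2):
    print(k, expected_functionality(one_to_one, k), expected_functionality(shared, k))
```

Plotting this expectation against k for two architectures gives the kind of degradation curve the abstract compares.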


2021, Vol. 2021, pp. 1-12
Author(s): Jianming Zhao, Peng Zeng, Yingjun Liu, Tianyu Wang

Mobile video communication and the Internet of Things play an increasingly important role in daily life. Mobile Edge Computing (MEC), an essential network architecture for the Internet, can significantly improve the quality of video streaming applications. Mobile devices transferring video flows are often exposed to hostile environments, where they can be damaged by different attackers. Accordingly, MEC networks are often vulnerable to disruptions, whether from natural disasters or intentional human attacks. Research on secure hub location in MEC, which can markedly enhance the robustness of the network, is therefore highly valuable. At present, most of the attacks encountered by MEC edge nodes in the IoT are random attacks or random failures. According to network science, scale-free networks are more robust than other network types under random failures. In this paper, an optimization algorithm is proposed to reorganize the structure of the network according to the amount of information transmitted between edge nodes. BA (scale-free) networks are more robust under random failures, while WS (small-world) networks behave better under intentional human attacks; the structure of the network is therefore adapted to the attack type. In addition, in MEC networks for mobile video communication, the capacity of each device and the size of the video data significantly influence the structure. The algorithm takes the capability of edge nodes and the amount of information exchanged between them into account. In the robustness test, the number of network nodes is set to 200 and 500, and the attack scale is increased from 0% to 100% to observe the size of the giant component and the robustness computed for each attack method. Evaluation results show that the proposed algorithm significantly improves the robustness of MEC networks and has good potential for application in real-world MEC systems.
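A giant-component robustness test of the kind described above can be sketched with networkx; the graph sizes and generator parameters here are illustrative, not the paper's exact setup.

```python
import random
import networkx as nx

def giant_component_fraction(g):
    """Fraction of remaining nodes contained in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()

def attack(g, fraction, targeted=False, seed=0):
    """Remove a fraction of nodes: highest-degree first (targeted) or at random."""
    g = g.copy()
    n_remove = int(fraction * g.number_of_nodes())
    if targeted:
        by_degree = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
        victims = [node for node, _ in by_degree[:n_remove]]
    else:
        victims = random.Random(seed).sample(list(g.nodes), n_remove)
    g.remove_nodes_from(victims)
    return g

n = 200
ba = nx.barabasi_albert_graph(n, m=2, seed=1)        # scale-free (BA)
ws = nx.watts_strogatz_graph(n, k=4, p=0.1, seed=1)  # small-world (WS)

for frac in (0.1, 0.3, 0.5):
    print(frac,
          round(giant_component_fraction(attack(ba, frac)), 2),                 # BA, random failures
          round(giant_component_fraction(attack(ba, frac, targeted=True)), 2),  # BA, targeted
          round(giant_component_fraction(attack(ws, frac, targeted=True)), 2))  # WS, targeted
```

Sweeping the attack scale from 0 to 1 and plotting the giant-component fraction for each attack method reproduces the kind of robustness curves the evaluation reports.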


2021, Vol. 4 (1)
Author(s): Arsham Ghavasieh, Massimo Stella, Jacob Biamonte, Manlio De Domenico

Complex systems are large collections of entities that organize themselves into non-trivial structures, represented as networks. One of their key emergent properties is robustness against random failures or targeted attacks, i.e., the networks maintain their integrity under removal of nodes or links. Here, we introduce network entanglement to study network robustness through a multiscale lens, encoded by the time required for information diffusion through the system. Our measure is founded on a recently developed statistical field theory for information dynamics within interconnected systems. We show that at the smallest temporal scales, node-network entanglement reduces to degree, whereas at extremely large scales it measures the direct role played by each node in keeping the network connected. At the meso-scale, entanglement plays a more important role, measuring the importance of nodes for the transport properties of the system. We use entanglement as a centrality measure capturing the role played by nodes in maintaining the overall diversity of information flow. As an application, we study the disintegration of empirical social, biological and transportation systems, showing that the nodes central to information dynamics are also responsible for keeping the network integrated.
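One common formalization of diffusion-based network states, used here only as a simplified stand-in for the paper's field-theoretic measure, assigns Gibbs weights exp(-tau * lambda_i) to the Laplacian eigenvalues, with tau playing the role of diffusion time. The node score below (entropy drop upon node removal) is a crude proxy for node-network entanglement, not the paper's exact definition.

```python
import numpy as np
import networkx as nx

def spectral_entropy(g, tau):
    """Shannon entropy of the Gibbs weights of the graph Laplacian spectrum,
    i.e. of rho = exp(-tau * L) / Tr exp(-tau * L); tau sets the diffusion
    time scale probed by the measure."""
    L = nx.laplacian_matrix(g).toarray().astype(float)
    w = np.exp(-tau * np.linalg.eigvalsh(L))
    p = w / w.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def node_score(g, node, tau):
    """Crude proxy: drop in diffusion entropy when `node` is removed."""
    h = g.copy()
    h.remove_node(node)
    return spectral_entropy(g, tau) - spectral_entropy(h, tau)

g = nx.star_graph(5)   # node 0 is the hub, nodes 1..5 are leaves
for tau in (0.1, 1.0, 10.0):
    print(tau, round(node_score(g, 0, tau), 3), round(node_score(g, 1, tau), 3))
```

At tau = 0 the entropy equals log N, and it shrinks as tau grows and the weight concentrates on the slow (small-eigenvalue) diffusion modes, giving the multiscale lens the abstract describes.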


2021, Vol. ahead-of-print
Author(s): Jan-jaap Moerman, Jan Braaksma, Leo van Dongen

Purpose: Asset-intensive organizations rely heavily on physical assets that are often expensive, complex and have a significant impact on organizational performance. Past introductions of critical assets in various industries showed that, despite extensive preparation in maintenance and operations, shortcomings were identified after deployment, resulting in unreliable performance. The main purpose of this qualitative study is to explore the factors that determine how asset-intensive organizations can achieve reliable outcomes in critical asset introductions despite random failures arising from increasing complexity and infant mortality.
Design/methodology/approach: To gain a detailed understanding of the issues and challenges of critical asset introductions, a case study in railways (rolling stock introductions) was conducted and analyzed qualitatively.
Findings: The case showed that organizational factors were perceived as decisive for reliable performance of the introduction, while the main focus of the introduction was on the asset and its technical systems. This suggests that more consideration of organizational factors is needed. A critical asset introduction framework was therefore proposed based on 15 identified factors.
Originality/value: Reliable performance is often associated with technical systems only. This empirical study emphasizes the need for a more holistic perspective that includes organizational factors when introducing critical assets seeking reliable performance. The study also demonstrates the application of the affinity diagramming technique for collectively analyzing the data with a multidisciplinary orientation.

