New Computing Technology in Reliability Engineering

2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Radim Briš ◽  
Simona Domesová

Reliability engineering is a relatively new scientific discipline that has developed in close connection with computers. The rapid development of computer technology requires correspondingly updated source codes and appropriate software. This paper demonstrates a new parallel computing technology based on HPC (high-performance computing) for availability calculation. The technology is particularly effective in the context of simulation methods; nevertheless, analytical methods are taken into account as well. In general, basic algorithms for reliability calculations must be appropriately modified and improved to achieve better computational efficiency. Parallel processing is carried out in two ways: first with the MATLAB parfor function, and second with CUDA technology. The computational efficiency was significantly improved, which is clearly demonstrated in numerical experiments performed on selected test examples as well as on an industrial example. Scalability graphs are used to demonstrate the reduction in computation time achieved by parallel computing.
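As a rough illustration of the simulation side of this approach, the following Python sketch distributes independent Monte Carlo availability histories over CPU cores with multiprocessing (standing in for MATLAB's parfor); the single-component failure/repair rates, mission time, and batch sizes are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch: Monte Carlo estimate of availability for a single repairable
# component, with simulation batches distributed over CPU cores (a rough
# analogue of MATLAB's parfor).  Rates, mission time, and batch sizes are
# illustrative assumptions, not values from the paper.
import random
from multiprocessing import Pool

FAILURE_RATE = 1e-3      # assumed failures per hour
REPAIR_RATE = 1e-1       # assumed repairs per hour
MISSION_TIME = 10_000.0  # hours simulated per history

def simulate_batch(args):
    seed, n_histories = args
    rng = random.Random(seed)
    uptime = 0.0
    for _ in range(n_histories):
        t, up = 0.0, True
        while t < MISSION_TIME:
            rate = FAILURE_RATE if up else REPAIR_RATE
            dt = min(rng.expovariate(rate), MISSION_TIME - t)
            if up:
                uptime += dt
            t += dt
            up = not up
    return uptime

if __name__ == "__main__":
    batches = [(seed, 2_000) for seed in range(8)]
    with Pool() as pool:                     # batches run in parallel
        uptimes = pool.map(simulate_batch, batches)
    total_histories = sum(n for _, n in batches)
    availability = sum(uptimes) / (total_histories * MISSION_TIME)
    print(f"Estimated availability: {availability:.4f}")
```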

2018 ◽  
Vol 35 (3) ◽  
pp. 380-388 ◽  
Author(s):  
Wei Zheng ◽  
Qi Mao ◽  
Robert J Genco ◽  
Jean Wactawski-Wende ◽  
Michael Buck ◽  
...  

Motivation: The rapid development of sequencing technology has led to an explosive accumulation of genomic data. Clustering is often the first step to be performed in sequence analysis. However, existing methods scale poorly with respect to the unprecedented growth of input data size. As high-performance computing systems become widely accessible, it is highly desirable that a clustering method can easily scale to handle large-scale sequence datasets by leveraging the power of parallel computing.
Results: In this paper, we introduce SLAD (Separation via Landmark-based Active Divisive clustering), a generic computational framework that can be used to parallelize various de novo operational taxonomic unit (OTU) picking methods and comes with theoretical guarantees on both accuracy and efficiency. The proposed framework was implemented on Apache Spark, which allows for easy and efficient utilization of parallel computing resources. Experiments performed on various datasets demonstrated that SLAD can significantly speed up a number of popular de novo OTU picking methods while maintaining the same level of accuracy. In particular, the experiment on the Earth Microbiome Project dataset (∼2.2B reads, 437 GB) demonstrated the excellent scalability of the proposed method.
Availability and implementation: Open-source software for the proposed method is freely available at https://www.acsu.buffalo.edu/~yijunsun/lab/SLAD.html.
Supplementary information: Supplementary data are available at Bioinformatics online.
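As a rough single-machine illustration of the landmark idea behind SLAD, the following Python sketch samples landmark sequences, splits the data around the two most dissimilar landmarks, and recurses; the k-mer distance, landmark count, and stopping size are assumptions for illustration, and the actual framework performs these steps in parallel on Apache Spark.

```python
# Minimal single-machine sketch of landmark-based divisive clustering:
# sample a few "landmark" sequences, seed two groups from the two most
# dissimilar landmarks, assign every sequence to the nearer seed, and recurse.
# The k-mer distance, landmark count, and stopping size are assumptions;
# SLAD itself runs this kind of scheme in parallel on Apache Spark.
import random
from collections import Counter

def kmer_profile(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def distance(p, q):
    # Simple (assumed) dissimilarity between two k-mer count profiles.
    return sum(abs(p[x] - q[x]) for x in set(p) | set(q))

def divisive_cluster(seqs, min_size=25, n_landmarks=8, rng=random.Random(0)):
    if len(seqs) <= min_size:
        return [seqs]
    landmarks = rng.sample(seqs, min(n_landmarks, len(seqs)))
    profiles = [kmer_profile(s) for s in landmarks]
    # Seed the split with the two most dissimilar landmarks.
    a, b = max(((i, j) for i in range(len(landmarks)) for j in range(i)),
               key=lambda ij: distance(profiles[ij[0]], profiles[ij[1]]))
    left, right = [], []
    for s in seqs:
        p = kmer_profile(s)
        (left if distance(p, profiles[a]) <= distance(p, profiles[b])
         else right).append(s)
    if not left or not right:   # degenerate split: stop recursing
        return [seqs]
    return (divisive_cluster(left, min_size, n_landmarks, rng) +
            divisive_cluster(right, min_size, n_landmarks, rng))
```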


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Jicheng Yang ◽  
Ning Du ◽  
Wei Jiang ◽  
Chenzhe Liu

With the rapid development of the Internet of Things, 5G, and communication technologies, the growth of various types of data has shown an exponential trend. Edge computing technology provides users with almost unlimited computing power through a large number of high-performance servers in data centers, and it is one of the important solutions for big data analysis and processing. Volleyball has been hugely popular in China since as early as the 1960s, yet little attention is paid to the physical condition of volleyball players. At the same time, in the medical field it is difficult to give a clear value for an athlete's protein requirement. Therefore, this article aims to observe specific values of protein metabolism during volleyball exercise at different levels of protein nutrition. Controlled experiments were designed in which rats fed at three protein nutrition levels were observed and their protein metabolism was analyzed after volleyball exercise. The results show that volleyball exercise can reduce the nitrogen balance and the gastrocnemius nitrogen content: the nitrogen balance of the 17% group decreased from 388 mg/day before exercise to 336 mg/day, and the gastrocnemius nitrogen content decreased by up to 5.2%, while the serum urea nitrogen concentration and liver nitrogen content increased, indicating enhanced protein catabolism. Different protein nutrition levels have different effects on protein metabolism during volleyball exercise, and a protein intake level of 17% is more conducive to resisting the protein breakdown caused by exercise. It can be seen that, based on edge computing technology, the influence of protein nutrition level on protein metabolism during volleyball can be explored effectively, and the research results are valuable.


2020 ◽  
Vol 13 (4) ◽  
pp. 1132-1153 ◽  
Author(s):  
Tianpei Zhou ◽  
Nan Zhang ◽  
Changzheng Wu ◽  
Yi Xie

Surface/interface nanoengineering of electrocatalysts and air electrodes will promote the rapid development of high-performance rechargeable Zn–air batteries.


2012 ◽  
Vol 17 (4) ◽  
pp. 207-216 ◽  
Author(s):  
Magdalena Szymczyk ◽  
Piotr Szymczyk

MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization, and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", specialized libraries for a variety of scientific domains, as well as a simplified interface to high-performance libraries (LAPACK, BLAS, and FFTW, among others). MATLAB has now been enriched with parallel computing capabilities through the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™. In this article we present some of the key features of MATLAB parallel applications, focusing on the use of GPU processors for image processing.
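As a language-neutral analogue of the parfor pattern described here, the following Python sketch splits an image into strips and filters them in parallel worker processes; the mean filter, strip count, and random test image are illustrative choices, not part of the MATLAB toolboxes discussed in the article.

```python
# Minimal Python analogue of parfor-style image processing: split an image
# into horizontal strips and filter the strips in parallel worker processes.
# The 3x3 mean filter and strip count are illustrative choices; strips are
# padded independently, so seams are only approximated in this sketch.
import numpy as np
from multiprocessing import Pool

def mean_filter(strip):
    # 3x3 mean filter on one strip (edges handled by edge replication).
    padded = np.pad(strip, 1, mode="edge")
    out = np.zeros_like(strip, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + strip.shape[0],
                          1 + dx:1 + dx + strip.shape[1]]
    return out / 9.0

if __name__ == "__main__":
    image = np.random.rand(1024, 1024)
    strips = np.array_split(image, 8, axis=0)   # analogous to parfor slicing
    with Pool() as pool:
        filtered = np.vstack(pool.map(mean_filter, strips))
    print(filtered.shape)
```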


2020 ◽  
Vol 16 ◽  
Author(s):  
Alper Gökbulut

Background: Chromatographic techniques, basically TLC as well as HPLC, GC, and HPTLC equipped with various detectors, are the most frequently used methods for the qualitative and quantitative examination of herbals. Method: The recent literature concerning the use of HPTLC for the analysis of medicinal plants is reviewed. Results: Over the last decade, driven by the rapid development of chromatographic technology, HPTLC, a modern, sophisticated, and automated TLC technique with improved separation efficiency, detection limits, and data acquisition and processing, has been used for the analysis of herbal materials and preparations. HPTLC with various detectors is a powerful analytical tool, especially for phytochemical applications such as herbal drug quantification and fingerprint analysis. Conclusion: This review provides an up-to-date perspective and summarizes previous studies on the use of HPTLC in the analysis of herbal remedies, dietary supplements, and nutraceuticals.


Author(s):  
Gerald Bloom ◽  
Hayley MacGregor

Rapid development has brought significant economic and health benefits, but it has also exposed populations to new health risks. Public health as a scientific discipline and major government responsibility developed during the nineteenth century to help mitigate these risks. Public health actions need to take into account large inequalities in the benefits and harms associated with development between countries, between social groups, and between generations. This is especially important in the present context of very rapid change. It is important to acknowledge the global nature of the challenges people face and the need to involve countries with different cultures and historical legacies in arriving at consensus on an ethical basis for global cooperation in addressing these challenges. This chapter provides an analysis of these issues, using examples on the management of health risks associated with global development and rapid urbanization and on the emergence of organisms that are resistant to antibiotics.


Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 169
Author(s):  
Mengcheng Wang ◽  
Shenglin Ma ◽  
Yufeng Jin ◽  
Wei Wang ◽  
Jing Chen ◽  
...  

Through-Silicon Via (TSV) technology is capable of meeting requirements for effectiveness, compactness, high density, high integration, and high performance. In high-frequency applications, with the rapid development of 5G and millimeter-wave radar, the TSV interposer will become a competitive choice for radio frequency system-in-package (RF SIP) substrates. This paper presents a redundant TSV interconnect design for high-resistivity Si interposers for millimeter-wave applications. To verify its feasibility, a set of test structures capable of working at millimeter-wave frequencies is designed, composed of three pieces of CPW (coplanar waveguide) line connected by single-TSV, dual redundant TSV, and quad redundant TSV interconnects. First, HFSS software is used for modeling and simulation; then a modified equivalent circuit model is established to analyze the effect of the redundant TSVs on the high-frequency transmission performance and to corroborate the HFSS-based simulation. A failure simulation was also carried out, and the results show that the redundant TSVs still work normally at 44 GHz when a failure occurs. Samples are then fabricated with the developed TSV process and tested, and the L-2L de-embedding method is used to extract the S-parameters of the TSV interconnects. The insertion losses of the dual and quad redundant TSVs are 0.19 dB and 0.46 dB at 40 GHz, respectively.
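For orientation, the sketch below shows the scalar (magnitude-only) arithmetic behind reporting insertion loss and removing feed-line loss with L and 2L reference lines; all |S21| values are made up for illustration, and the paper's actual de-embedding operates on full S-parameter data rather than this simplified dB bookkeeping.

```python
# Scalar (magnitude-only) sketch of insertion-loss reporting and feed-line
# removal with L and 2L reference lines, assuming identical, well-matched
# feeds.  All |S21| values below are made up; the paper's L-2L method works
# on full S-parameter data, not this simplified dB bookkeeping.
import math

def insertion_loss_db(s21_mag):
    """Insertion loss in dB from the linear magnitude of S21."""
    return -20.0 * math.log10(s21_mag)

il_L = insertion_loss_db(0.97)    # assumed |S21| of the L-length CPW line
il_2L = insertion_loss_db(0.94)   # assumed |S21| of the 2L-length CPW line
il_per_L = il_2L - il_L           # loss of one L-length of line

il_total = insertion_loss_db(0.90)   # assumed full test structure (CPW + TSVs)
il_tsv = il_total - 2 * il_per_L     # remove two feed sections (assumed geometry)
print(f"Estimated TSV interconnect loss: {il_tsv:.2f} dB")
```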


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1074
Author(s):  
Raul Rotar ◽  
Sorin Liviu Jurj ◽  
Flavius Opritoiu ◽  
Mircea Vladutiu

This paper presents a mathematical approach for determining the reliability of solar tracking systems based on three fault coverage-aware metrics, which use system error data from hardware, software, and in-circuit testing (ICT) techniques to calculate a solar test factor (STF). Using Euler's number, the solar reliability factor (SRF) is then computed to define the robustness and availability of modern, high-performance solar tracking systems. Experimental cases run in the Mathcad software suite and the Python programming environment show that the fault coverage-aware metrics greatly change the test and reliability factor curves of solar tracking systems, while significantly reducing the number of calculation steps and the computation time.
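The paper's exact STF and SRF definitions are not reproduced in this abstract; as a generic illustration of how Euler's number typically enters such a reliability factor, the sketch below evaluates the standard exponential reliability model R(t) = e^(-λt) in Python, with a purely assumed failure rate and time horizon.

```python
# Generic illustration only: the standard exponential reliability model
# R(t) = e^(-lambda * t), the usual way Euler's number enters a reliability
# factor.  The failure rate and time horizon are assumed values; this is not
# the paper's STF/SRF formulation.
import math

def reliability(failure_rate_per_hour, hours):
    """Exponential reliability model R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hour * hours)

# Assumed constant failure rate of 1e-5 failures/hour over one year (8760 h).
print(f"R(8760 h) = {reliability(1e-5, 8760):.3f}")
```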


2013 ◽  
Vol 411-414 ◽  
pp. 585-588
Author(s):  
Liu Yang ◽  
Tie Ying Liu

This paper introduces the parallel features of the GPU and uses GPU parallel computation methods to parallelize the path-search process of PSO (Particle Swarm Optimization), reducing PSO's increasingly high time and space complexity. The experimental results show that, compared with the CPU mode, computation on the GPU platform improves the search rate and shortens the calculation time.
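As a minimal illustration of why PSO maps well onto the GPU, the following numpy sketch performs the per-particle velocity/position update and fitness evaluation as vectorized (data-parallel) operations; the sphere objective, swarm size, and coefficients are illustrative assumptions rather than the paper's configuration.

```python
# Minimal vectorized PSO step: every particle's velocity/position update and
# fitness evaluation is data-parallel, which is what a GPU version exploits.
# The sphere objective, swarm size, and coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 1024, 16
w, c1, c2 = 0.72, 1.49, 1.49          # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
fitness = lambda x: np.sum(x * x, axis=1)      # stand-in objective

pbest, pbest_val = pos.copy(), fitness(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = fitness(pos)                         # embarrassingly parallel step
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(f"Best value after 100 iterations: {pbest_val.min():.6f}")
```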

