computational requirement
Recently Published Documents

Total documents: 51 (last five years: 22)
H-index: 6 (last five years: 2)

2021 ◽  
Author(s):  
Yonatan Aguirre ◽  
Flabio Gutierrez ◽  
Richard Abramonte ◽  
Nilthon Arce ◽  
Antenor Aliaga

The accurate measurement of the RMS values of voltage and current is crucial for the monitoring and protection of power systems and, in general, for electrical power distribution systems. In this paper, the authors develop an algorithm to calculate the RMS value of sinusoidal signals of varying frequency by applying phasors obtained from the SAL and CAL digital filters. When the grid frequency varies, a phase shift appears in the grid-voltage phasor; this phasor is used to design a regulator that yields accurate RMS voltage values. The SAL and CAL digital filters were chosen for their low computational requirement, which allows them to be implemented on general-purpose microcontrollers.


2021 ◽  
Vol 2021 (12) ◽  
pp. 024
Author(s):  
Abinash Kumar Shaw ◽  
Somnath Bharadwaj ◽  
Debanjan Sarkar ◽  
Arindam Mazumdar ◽  
Sukhdeep Singh ◽  
...  

Abstract The dependence of the bispectrum on the size and shape of the triangle contains a wealth of cosmological information. Here we consider a triangle parameterization that allows us to separate the size and shape dependence. We have implemented an FFT-based fast estimator for the three-dimensional (3D) bin-averaged bispectrum, and we demonstrate that it allows us to study the variation of the bispectrum across triangles of all possible shapes (and also sizes). The computational requirement is shown to scale as ∼ N_g^3 log N_g^3, where N_g is the number of grid points along each side of the volume. We have validated the estimator using a non-Gaussian field for which the bispectrum can be calculated analytically. The estimated bispectrum values are found to be in good agreement (< 10% deviation) with the analytical predictions across much of the triangle-shape parameter space. We also introduce linear redshift space distortion, a situation where the bispectrum can likewise be calculated analytically. Here the estimated bispectrum is found to be in close agreement with the analytical prediction for the monopole of the redshift space bispectrum.
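The core of such FFT estimators can be sketched in a few lines: inverse-FFT the density field restricted to each k-bin, then sum the product of the three resulting real-space fields, normalised by the same product computed for the mode-count masks (which counts the closed triangles). Bin edges, box size, and the overall volume normalisation below are illustrative conventions, not the authors' code.

```python
import numpy as np

def binned_bispectrum(delta, kbins, boxsize=1.0):
    """FFT estimator of the bin-averaged bispectrum for three k-bins.
    delta: real 3-D density field on an ng^3 grid.
    kbins: three (kmin, kmax) pairs.
    Cost is dominated by a handful of ng^3 FFTs, hence the
    ~ N_g^3 log N_g^3 scaling quoted in the abstract."""
    ng = delta.shape[0]
    dk = np.fft.fftn(delta)
    kfreq = 2.0 * np.pi * np.fft.fftfreq(ng, d=boxsize / ng)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    fields, counts = [], []
    for kmin, kmax in kbins:
        mask = (kmag >= kmin) & (kmag < kmax)
        # shell-filtered field and mode-count field in real space
        fields.append(np.fft.ifftn(np.where(mask, dk, 0)).real)
        counts.append(np.fft.ifftn(mask.astype(float)).real)
    b_sum = np.sum(fields[0] * fields[1] * fields[2])
    n_tri = np.sum(counts[0] * counts[1] * counts[2])
    return b_sum / n_tri  # modulo overall volume normalisation factors
```

Summing products of real-space fields enforces the closed-triangle condition k1 + k2 + k3 = 0 automatically, which is what avoids the brute-force loop over mode triplets.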


2021 ◽  
Author(s):  
Chun Chieh Fan ◽  
Clare E Palmer ◽  
John Iverson ◽  
Diliana Pecheva ◽  
Wesley K Thompson ◽  
...  

Despite the importance and versatility of linear mixed-effects models (LME), they have seldom been used in whole-brain imaging analyses due to their computational requirement. Here, we introduce a fast and efficient mixed-effects algorithm (FEMA) that makes whole-brain voxelwise imaging LME analyses possible. We demonstrate the equivalence of statistical power and control of type I errors between FEMA and classical LME, while showing an order-of-magnitude improvement in speed for FEMA compared to classical LME. By applying FEMA to diffusion images and resting-state functional connectivity matrices from the ABCD Study℠ release 4.0 data, we show voxelwise annualized changes in fractional anisotropy (FA) and functional connectomes in early adolescence, highlighting a critical period in which associations among cortical and subcortical regions are established.
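FEMA's random-effects machinery is beyond a short sketch, but the key speed trick, solving one shared design matrix against every voxel simultaneously instead of looping over voxels, can be illustrated with a plain fixed-effects fit (variable names and shapes are hypothetical, and this is not FEMA itself):

```python
import numpy as np

def mass_univariate_fit(X, Y):
    """Fixed-effects fit of y_v = X @ beta_v for all voxels at once.
    X: (n_observations, n_covariates) design matrix shared by all voxels.
    Y: (n_observations, n_voxels) stacked imaging data.
    One pseudoinverse and one matrix multiply replace n_voxels
    separate regressions, which is where the order-of-magnitude
    speedups of vectorized whole-brain analyses come from."""
    return np.linalg.pinv(X) @ Y  # (n_covariates, n_voxels)
```

The result is identical to fitting each voxel independently, only computed in one pass.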


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2174
Author(s):  
Sunghyun Cho ◽  
Dongwoo Kang ◽  
Joseph Sang-Il Kwon ◽  
Minsu Kim ◽  
Hyungtae Cho ◽  
...  

Explosives, especially those used in military weapons, have a short lifespan, and their performance noticeably deteriorates over time. These old explosives need to be disposed of safely. Fluidized bed incinerators (FBIs) are safe for the disposal of explosive waste (such as TNT) and produce fewer gas emissions than conventional methods such as the rotary kiln. However, previous studies on the FBI process have focused only on minimizing NOx emissions, without considering the operating and utility costs (i.e., the total cost) associated with the process. It is important to note that, in general, a number of different operating conditions can achieve a target NOx emission concentration; comparing the total costs of those candidate operating conditions using computational fluid dynamics simulation therefore carries a significant computational requirement. To this end, a novel framework is proposed to quickly determine the most economically viable FBI operating condition for a target NOx concentration. First, a surrogate model was developed to replace the high-fidelity model of the FBI process and was used to determine a set of possible operating conditions that may achieve a target NOx emission concentration. Second, the candidate operating conditions were fed to the Aspen Plus™ process simulation program to determine the most economically competitive option with respect to total cost. The developed framework can provide operational guidelines for a clean and economical incineration process for explosive waste.
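The two-step screen-then-rank framework can be illustrated with a toy stand-in: the polynomial surrogate, the cost model, the operating variables, their grid, and the tolerance below are all hypothetical placeholders for the CFD-trained surrogate and the Aspen Plus™ costing.

```python
def surrogate_nox(air_ratio, bed_temp):
    """Toy surrogate for NOx (ppm) as a function of excess-air ratio
    and bed temperature (K); stands in for the CFD-trained model."""
    return 300.0 - 100.0 * air_ratio + 0.1 * (bed_temp - 1150.0)

def total_cost(air_ratio, bed_temp):
    """Toy operating + utility cost model (arbitrary units)."""
    return 5.0 * air_ratio + 0.01 * bed_temp

def cheapest_feasible(target_nox, tol=5.0):
    """Step 1: screen a grid of candidate conditions against the cheap
    surrogate. Step 2: rank the feasible candidates by total cost."""
    candidates = [(1.0 + 0.05 * i, 1050.0 + 10.0 * j)
                  for i in range(13) for j in range(21)]
    feasible = [c for c in candidates
                if abs(surrogate_nox(*c) - target_nox) <= tol]
    return min(feasible, key=lambda c: total_cost(*c)) if feasible else None
```

Because the surrogate is cheap to evaluate, the expensive cost simulation only has to be run on the short list of feasible candidates, which is the point of the framework.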


2021 ◽  
pp. 1-16
Author(s):  
Jun Jet Tai ◽  
Swee King Phang ◽  
Felicia Yen Myan Wong

Obstacle avoidance and navigation (OAN) algorithms typically employ offline or online methods. The former are fast but require knowledge of a global map, while the latter are usually more computationally heavy for explicit solution methods, or lack configurability when implemented as artificial intelligence (AI) agents. For OAN algorithms to be brought to mass-produced robots, more specifically multirotor unmanned aerial vehicles (UAVs), their computational requirement must be brought low enough that they can run entirely onboard a companion computer, while remaining flexible enough to function without a prior map, as is the case in most real-life scenarios. In this paper, a highly configurable algorithm, dubbed Closest Obstacle Avoidance and A* (COAA*), that is lightweight enough to run on the companion computer of the UAV is proposed. This algorithm is free of the conventional drawbacks of offline and online OAN algorithms while having guaranteed convergence to a global minimum. The algorithm has been successfully implemented on the Heavy Lift Experimental (HLX) UAV of the Autonomous Robots Research Cluster at Taylor's University, and the simulated results match the real results closely enough to show that the algorithm has potential for widespread implementation.
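The abstract does not spell COAA* out, so the sketch below is only a generic grid A* with a hypothetical closest-obstacle penalty folded into the edge cost: the flavour of "A* plus obstacle avoidance" on a companion-computer budget, not the published algorithm.

```python
import heapq

def astar(grid, start, goal, proximity_penalty=2.0):
    """A* on a 4-connected occupancy grid (1 = obstacle, 0 = free).
    Cells adjacent to an obstacle cost extra, nudging paths away from
    the closest obstacles while A* keeps the search lightweight."""
    rows, cols = len(grid), len(grid[0])

    def inside(r, q):
        return 0 <= r < rows and 0 <= q < cols

    def neighbours(r, q):
        for dr, dq in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if inside(r + dr, q + dq) and grid[r + dr][q + dq] == 0:
                yield r + dr, q + dq

    def step_cost(r, q):
        near = any(grid[nr][nq]
                   for nr, nq in ((r + 1, q), (r - 1, q), (r, q + 1), (r, q - 1))
                   if inside(nr, nq))
        return 1.0 + (proximity_penalty if near else 0.0)

    def h(cell):  # Manhattan distance: admissible, since steps cost >= 1
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0.0, start, None)]
    best_g, came_from = {start: 0.0}, {}
    while open_heap:
        _, g, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:
            continue  # already expanded with a cheaper cost
        came_from[cell] = parent
        if cell == goal:
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbours(*cell):
            ng = g + step_cost(*nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cell))
    return None  # no path
```

Onboard, the occupancy grid would be built incrementally from range sensors rather than from a prior map.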


GigaScience ◽  
2021 ◽  
Vol 10 (8) ◽  
Author(s):  
Haris Zafeiropoulos ◽  
Anastasia Gioti ◽  
Stelios Ninidakis ◽  
Antonis Potirakis ◽  
Savvas Paragkamian ◽  
...  

Abstract High-performance computing (HPC) systems have become indispensable for modern marine research, providing support to an increasing number and diversity of users. Paired with the impetus that high-throughput methods offer to key areas such as non-model organism studies, their operation continuously evolves to meet the corresponding computational challenges. Here, we present a Tier 2 (regional) HPC facility that has operated for over a decade at the Institute of Marine Biology, Biotechnology, and Aquaculture of the Hellenic Centre for Marine Research in Greece. Strategic choices made in design and upgrades aimed to strike a balance between depth (the need for a few high-memory nodes) and breadth (a number of slimmer nodes), as dictated by the idiosyncrasy of the supported research. A qualitative analysis of the computational requirements of that research revealed the diversity of marine fields, methods, and approaches adopted to translate data into knowledge. In addition, hardware and software architectures, usage statistics, policy, and user-management aspects of the facility are presented. Drawing upon the last decade's experience at the different levels of operation of the Institute of Marine Biology, Biotechnology, and Aquaculture HPC facility, a number of lessons are presented; these have contributed to the facility's future directions in light of emerging distribution technologies (e.g., containers) and Research Infrastructure evolution. In combination with detailed knowledge of the facility usage and its upcoming upgrade, future collaborations in marine research and beyond are envisioned.


Author(s):  
Inas Ali Abdulmunem

Cryptography and steganography are important tools for data security, and combining them can provide stronger protection by exploiting the advantages of each technique. This work proposes an improved crypto-stego method that modifies the ciphertext with a proposed dictionary method and then hides the modified ciphertext in text using a proposed modified-space method. For the cryptography stage, the message is encrypted with the Advanced Encryption Standard (AES) using a 128-bit block size and a 256-bit key. The ciphertext characters are then replaced by characters identified in a dictionary list. The dictionary is time-dependent: each equivalent word is shifted according to a time-shift equation. The modified ciphertext is then embedded into a cover text so that an attacker cannot separate the two by cryptanalysis. The modified-space method uses spaces to build a steganography tool that hides the secret message. The experimental results show that the proposed method achieves a high security level by combining cryptography and steganography in such a way that the ciphertext is transformed by a time-dependent dictionary, causing cryptanalysis tests to fail to identify the encryption algorithm used. The steganography tests show that the proposed method achieves good capacity and visibility, making the hidden message hard to notice. The tests also show that the proposed methods run fast with a low computational requirement.
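The abstract gives no implementation details, so the following is only a hypothetical illustration of the space-based hiding idea: each inter-word gap in the cover text carries one bit (single space = 0, double space = 1). The AES and time-shifted-dictionary stages, which would produce the bit string, are omitted.

```python
import re

def embed_bits(cover_text, bits):
    """Hide a bit string in the inter-word spacing of a cover text:
    a single space encodes 0, a double space encodes 1."""
    words = cover_text.split()
    if len(bits) > len(words) - 1:
        raise ValueError("cover text has too few gaps for the payload")
    gaps = ["  " if i < len(bits) and bits[i] == "1" else " "
            for i in range(len(words) - 1)]
    return "".join(w + g for w, g in zip(words, gaps)) + words[-1]

def extract_bits(stego_text, n_bits):
    """Recover the first n_bits from the spacing of a stego text."""
    gaps = re.findall(r" +", stego_text)
    return "".join("1" if len(g) == 2 else "0" for g in gaps[:n_bits])
```

The capacity is one bit per word gap, which matches the abstract's trade-off between capacity and visibility: the stego text renders almost identically to the cover text.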


Author(s):  
Abderrahmane Ouadi ◽  
Abdelkader Zitouni

During transient operating conditions of a smart power grid, the line current may include unwanted components that can cause unnecessary tripping of the protection system. The disturbance mainly appears in the form of harmonics and sub-harmonics. When the signal waveform contains harmonics, a low-pass filter may be used; however, such a filter cannot reject sub-harmonics. This chapter presents a digital filter design based on an optimization approach for removing sub-harmonics and hence improving the measurement. The first aim is a unified, accurate phasor measurement algorithm that is immune to nearly all disturbances (sub-harmonics) in a power grid that includes FACTS devices and renewable energy sources, while maintaining the required speed of convergence. The second aim is to reduce the computational requirement and algorithm complexity by designing a reduced-order recursive digital filter.
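The chapter's optimized reduced-order design is not given in this summary; as a hedged illustration of why a recursive (IIR) structure is cheap, a first-order high-pass recursion with its cutoff placed between the sub-harmonic band and the 50 Hz fundamental needs one state variable and a handful of operations per sample. The cutoff, sampling rate, and test frequencies below are illustrative choices.

```python
import math

def highpass(samples, f_cut, f_samp):
    """First-order recursive high-pass: y[n] = a*(y[n-1] + x[n] - x[n-1]).
    One multiply and two adds per sample, plus one state variable,
    which is the low computational cost that motivates recursive designs."""
    rc = 1.0 / (2.0 * math.pi * f_cut)
    a = rc / (rc + 1.0 / f_samp)
    out, y_prev, x_prev = [], 0.0, 0.0
    for x in samples:
        y_prev = a * (y_prev + x - x_prev)
        x_prev = x
        out.append(y_prev)
    return out

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))
```

With a 30 Hz cutoff at a 1600 Hz sampling rate, a 10 Hz sub-harmonic is attenuated to roughly a third of its amplitude while the 50 Hz fundamental passes at over 80%; a reduced-order optimized design would sharpen this separation.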


Author(s):  
B Shayak ◽  
Mohit M Sharma

Abstract In this work we propose a delay differential equation as a lumped-parameter or compartmental infectious disease model featuring high descriptive and predictive capability, extremely high adaptability, and low computational requirement. Whereas the model has been developed in the context of COVID-19, it is general enough to be applicable mutatis mutandis to other diseases as well. Our fundamental modeling philosophy consists of decoupling public health intervention effects, immune response effects, and intrinsic infection properties into separate terms. All parameters in the model are directly related to the disease and its management; we can measure or calculate their values a priori, based on our knowledge of the phenomena involved, instead of having to extrapolate them from solution curves. Our model can accurately predict the effects of applying or withdrawing interventions, individually or in combination, and can quickly accommodate any newly released information regarding, for example, the infection properties of, and the immune response to, an emerging infectious disease. After demonstrating that the baseline model can successfully explain the COVID-19 case trajectories observed all over the world, we systematically show how the model can be expanded to account for heterogeneous transmissibility, detailed contact tracing drives, mass testing endeavours, and immune responses featuring different combinations of limited-time sterilizing immunity, severity-reducing immunity, and antibody-dependent enhancement.
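As a hedged sketch of the lumped-parameter delay idea (not the authors' exact equation or parameter values), take cumulative cases y(t) obeying y' = m0 (1 - y/N)(y(t) - y(t - tau)): the delayed difference is the currently infectious pool, and the logistic factor separates the intrinsic spreading rate m0, where interventions would act, from population saturation. Forward Euler integration is enough to see the behaviour:

```python
def simulate_delay_model(n_pop=1000.0, m0=0.25, tau=7.0, y0=1.0,
                         dt=0.05, t_end=300.0):
    """Euler integration of y' = m0*(1 - y/N)*(y(t) - y(t - tau)),
    where y(t) - y(t - tau) is the cases reported in the last tau days,
    a proxy for the currently infectious pool. Parameters are
    illustrative, not fitted values."""
    lag = int(round(tau / dt))
    # Seed the history with a linear ramp up to y0 so the delayed
    # difference starts out positive and the outbreak can take off.
    y = [y0 * i / lag for i in range(lag + 1)]
    for _ in range(int(round(t_end / dt))):
        active = y[-1] - y[-1 - lag]
        y.append(y[-1] + dt * m0 * (1.0 - y[-1] / n_pop) * active)
    return y
```

With m0*tau = 1.75 the outbreak grows and then saturates well below N, and an intervention would enter simply as a time-varying m0, which is the decoupling the abstract describes.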


Author(s):  
Jiayu Huang ◽  
Nurretin Sergin ◽  
Akshay Dua ◽  
Erfan Bank Tavakoli ◽  
Hao Yan ◽  
...  

Abstract This paper develops a unified framework for training and deploying deep neural networks on an edge computing platform for image defect detection and classification. In the proposed framework, we combine transfer learning and data augmentation to improve accuracy given the small sample size. We further implement the edge computing framework to satisfy the real-time computational requirement. After implementing the proposed model in a rolling manufacturing system, we conclude that deep learning approaches can perform around 30–40% better than some traditional machine learning algorithms, such as random forest, decision tree, and SVM, in terms of prediction accuracy. Furthermore, by deploying the CNNs in the edge computing framework, we can significantly reduce the computational time and satisfy the real-time computational requirement of the high-speed rolling and inspection system. Finally, the saliency map and embedding-layer visualization techniques are used for a better understanding of the proposed deep learning models.
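The network and training specifics are not in this summary; as one small, hedged illustration of the data-augmentation step for small defect datasets, the eight flip/rotation variants of an image can be generated as follows (label-preserving only under the assumption that defects have no preferred orientation):

```python
import numpy as np

def dihedral_augment(image):
    """Return the 8 rotation/flip variants of a 2-D image array,
    multiplying a small labelled defect dataset eightfold."""
    variants = []
    for k in range(4):
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants
```

Such cheap geometric augmentation pairs naturally with transfer learning when only a few labelled defect images are available.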

