Computer resources: Recently Published Documents

Total documents: 293 (59 in the last five years)
H-index: 13 (3 in the last five years)

Author(s):  
Ahmad Sharadqeh

Software-defined networks (SDN) have replaced the traditional network architecture by separating the control plane from the forwarding plane. SDN technology pools computer resources to deliver a more effective service worldwide than the aggregation of individually used internet resources. Resource breakdown during allocation is a major concern in cloud computing because of the diverse and highly complex architecture of the resources. These breakdowns delay job completion and negatively affect the quality of service (QoS) that can be attained. To promote error-free task scheduling, this study presents a promising fault-tolerant scheduling technique. For optimum QoS, the proposed restricted Boltzmann machine (RBM) approach takes into account the most important characteristics, such as the current consumption of each resource and its failure rate. The approach's efficiency is verified in the MATLAB toolbox using widely adopted measures such as resource consumption, average processing time, throughput and success rate.
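
A minimal sketch of how such an RBM-based score could rank candidate resources is given below, assuming two normalized features per resource (current utilization and failure rate); the tiny numpy RBM, its random weights, and the rank_resources helper are illustrative assumptions, not the paper's MATLAB implementation.

import numpy as np

class SimpleRBM:
    """Bernoulli RBM used only as a scoring function in this sketch."""
    def __init__(self, n_visible, n_hidden, rng=None):
        rng = rng or np.random.default_rng(0)
        # In practice W and the biases would be trained (e.g. by contrastive
        # divergence) on historical records of healthy resources; random
        # weights here only keep the sketch runnable.
        self.W = 0.1 * rng.standard_normal((n_visible, n_hidden))
        self.b_vis = np.zeros(n_visible)
        self.b_hid = np.zeros(n_hidden)

    def free_energy(self, v):
        # Lower free energy = the feature vector looks more like the data
        # the RBM was trained on (here: well-behaved resources).
        wx_b = v @ self.W + self.b_hid
        return -(v @ self.b_vis) - np.sum(np.logaddexp(0.0, wx_b), axis=-1)

def rank_resources(features, rbm):
    """features: one row per resource, [utilization, failure_rate] in [0, 1]."""
    return np.argsort(rbm.free_energy(features))  # best-scoring resources first

if __name__ == "__main__":
    rbm = SimpleRBM(n_visible=2, n_hidden=4)
    resources = np.array([[0.20, 0.01],   # lightly loaded, reliable
                          [0.90, 0.05],   # heavily loaded
                          [0.40, 0.30]])  # failure-prone
    print(rank_resources(resources, rbm))  # indices of preferred resources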


Author(s):  
Pietro Roversi ◽  
Dale E. Tronrud

Macromolecular refinement is an optimization process that aims to produce the most likely macromolecular structural model in the light of experimental data. As such, macromolecular refinement is one of the most complex optimization problems in wide use. Macromolecular refinement programs have to deal with the complex relationship between the parameters of the atomic model and the experimental data, as well as a large number of types of prior knowledge about chemical structure. This paper draws attention to areas of unfinished business in the field of macromolecular refinement. In it, we describe ten refinement topics that we think deserve attention and discuss directions leading to macromolecular refinement software that would make the best use of modern computer resources to meet the needs of structural biologists of the twenty-first century.
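
For readers outside the field, the optimization referred to above typically minimizes a combined target of the general form below; this is the standard textbook formulation, not an equation taken from this paper:

    E_{total}(p) = w_{data} \, E_{data}(p) + E_{restraints}(p)

where p denotes the atomic-model parameters, E_{data} measures the disagreement between the data predicted by the model and the experimental observations (often a maximum-likelihood term), E_{restraints} encodes prior chemical knowledge such as ideal bond lengths and angles, and w_{data} balances the two contributions.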


2021 ◽  
pp. 1-20
Author(s):  
Gang Sha ◽  
Junsheng Wu ◽  
Bin Yu

Purpose: deep learning algorithms are increasingly used to detect and segment lesions in spinal CT (computed tomography) images. These algorithms, however, usually require high-performance computers and occupy large amounts of resources, so they are not suitable for clinical embedded and mobile devices, which have limited computational resources yet still need reasonably good detection and segmentation performance. Methods: in this paper, we present a small model based on YOLOv3-tiny that detects three spinal fracture lesions: cfracture (cervical fracture), tfracture (thoracic fracture), and lfracture (lumbar fracture). We construct this model by replacing the traditional convolutional layers in YOLOv3-tiny with fire modules from SqueezeNet, which reduces the number of parameters and the model size while preserving accurate lesion detection. Guided by comparative experiments, we then remove the batch normalization layers from the fire modules; although a fire module without batch normalization performs only slightly better overall, the change reduces computational complexity and the occupation of computer resources, enabling fast lesion detection. Results: the experiments show that the shrunken model is only 13 MB (almost a third of YOLOv3-tiny), while the mAP (mean average precision) is 91.3% and the IoU (intersection over union) is 90.7%. The detection time is 0.015 seconds per CT image, and the BFLOP/s (billion floating-point operations per second) value is lower than that of YOLOv3-tiny. Conclusion: the presented model can be deployed on clinical embedded and mobile devices while delivering relatively accurate and rapid real-time lesion detection.
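
As an illustration of the replacement described in the Methods, a hedged PyTorch sketch of a SqueezeNet-style fire module without batch normalization is shown below; the channel widths, the LeakyReLU activation, and the FireModule class name are assumptions made for the example, not the authors' exact configuration.

import torch
import torch.nn as nn

class FireModule(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        # 1x1 "squeeze" convolution reduces the channel count ...
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        # ... then parallel 1x1 and 3x3 "expand" convolutions restore it.
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        # No batch normalization, as in the paper's final configuration.
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        x = self.act(self.squeeze(x))
        return torch.cat([self.act(self.expand1x1(x)),
                          self.act(self.expand3x3(x))], dim=1)

# A fire module has far fewer parameters than a plain 3x3 convolution
# of the same input/output width:
block = FireModule(in_ch=128, squeeze_ch=16, expand_ch=64)  # output: 128 channels
y = block(torch.randn(1, 128, 52, 52))
print(y.shape)  # torch.Size([1, 128, 52, 52])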


Author(s):  
Neha Dutta ◽  
◽  
Pardeep Cheema ◽  

Cloud computing is internet-based computing that offers metered services to customers. It provides access to data from a consolidated pool of computer resources that can be requested and consumed on demand, and it delivers those resources in virtualized form over the internet. The data center is the most essential element of cloud computing: it comprises a group of servers on which business records are kept and applications run. A data center, with its servers, cables, air-conditioning units, networks and so on, consumes a great deal of electricity and emits a large quantity of carbon dioxide (CO2) into the atmosphere. One of the most significant problems encountered in cloud technology is therefore the optimization of energy usage. Cloud computing itself is a concept for enabling ubiquitous, on-demand access to a shared pool of customizable computer resources [Wanneng Shu et al., 2014].


Author(s):  
Rimma Padovano

"Cloud computing" refers to large-scale parallel and distributed systems, which are essentially collections of autonomous. As a result, the “cloud organization” is made up on a wide range of ideas and experiences collected since the first digital computer was used to solve algorithmically complicated problems. Due to the complexity of established parallel and distributed computing ontologies, it is necessary for developers to have a high level of expertise to get the most out of the consolidated computer resources. The directions for future research for parallel and distributed computing are critically presented in this research: technology and application and cross-cutting concerns.


Author(s):  
Denis Zolotariov ◽  

An approach for building cloud-ready, fault-tolerant calculations with the approximating functions method, the analytical-numerical part of the Volterra integral equation method for solving 1D+T nonlinear electromagnetic problems, is presented. The solution process of the method's original algorithm is modified: it is broken down into a sequential chain of stages with a fixed number of sequential or parallel steps, each of which is built in a fault-tolerant manner and saves its execution results to fault-tolerant storage for high availability. This economizes RAM and other computer resources, protects the calculated results in the case of a failure, and allows the calculations to be stopped and restarted easily after a manual or accidental shutdown. The proposed algorithm also provides self-healing and data deduplication for cases where saved results are corrupted. The presented approach is universal and does not depend on the type of medium or the initial signal. It also preserves the natural description of non-stationary and nonlinear features, the unified definition of the inner and outer problems, and the inclusion of the initial and boundary conditions in the same equation, just as in the original approximating functions method. The developed approach has been stress-tested on known problems, its stability has been checked, and the resulting errors have been compared.
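
A minimal sketch of the stage/step checkpointing idea is given below, using a local directory as a stand-in for fault-tolerant storage; the run_step helper, the SHA-256 integrity check, and the toy stages are illustrative assumptions, not the author's actual cloud implementation.

import hashlib
import json
import pathlib

STORE = pathlib.Path("checkpoints")   # stand-in for fault-tolerant storage
STORE.mkdir(exist_ok=True)

def _key(stage, step):
    return STORE / f"stage{stage:03d}_step{step:03d}.json"

def run_step(stage, step, compute):
    """Run one step, reusing a previously saved result if it is intact."""
    path = _key(stage, step)
    if path.exists():
        saved = json.loads(path.read_text())
        digest = hashlib.sha256(json.dumps(saved["result"]).encode()).hexdigest()
        if digest == saved["sha256"]:          # deduplication / integrity check
            return saved["result"]             # reuse instead of recomputing
        path.unlink()                          # corrupted: self-heal by recomputing
    result = compute()
    digest = hashlib.sha256(json.dumps(result).encode()).hexdigest()
    path.write_text(json.dumps({"result": result, "sha256": digest}))
    return result

# Example: a chain of two stages, each with a fixed number of steps.
# Rerunning the script after an interruption reuses the saved results.
if __name__ == "__main__":
    stage1 = [run_step(1, s, lambda s=s: s * s) for s in range(4)]
    total = run_step(2, 0, lambda: sum(stage1))
    print(stage1, total)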


2021 ◽  
pp. 106-115
Author(s):  
A.D. Matveev

When calculating the strength of elastic composite structures (bodies) with the finite element method (FEM), it is important to know the error of the approximate solution. Evaluating the approximation error requires constructing a sequence of FEM solutions, which involves a mesh-refinement procedure for the discrete models. Carrying out this refinement for basic models that account for the inhomogeneous, micro-homogeneous structure of bodies within the microapproach requires substantial computer resources. This paper proposes a method of equivalent strength conditions (MESC) for calculating the static strength of elastic bodies with a non-uniform, micro-uniform regular structure. With the MESC, the strength calculation of composite bodies is reduced to the strength calculation of isotropic homogeneous bodies using equivalent strength conditions. Adjusted equivalent strength conditions, which take the error of the approximate solutions into account, are used in the numerical implementation of the MESC. If a set of loads is specified for a composite body, generalized equivalent strength conditions are applied. A FEM-based strength calculation of composite bodies that follows the MESC and uses multigrid finite elements requires 10³ to 10⁵ times less computer memory than a similar calculation using refined basic models of the composite bodies. The provided example of a strength calculation for a beam with an inhomogeneous regular fiber structure demonstrates the high efficiency of the MESC. The main MESC implementation procedures are outlined.


2021 ◽  
Vol 9 (18) ◽  
pp. 1-13
Author(s):  
José Luis Abeleira Ortíz ◽  
Noelio Vázquez Vargas

Video analysis is one of the computer resources used to develop experimental activities in the teaching of physics. Its particular features enable users to examine real physical systems in a virtual environment and to model the changes that take place in those systems. This paper aimed to design and implement experimental investigations assisted by video analysis in order to enhance learner performance through the exploration of real physical systems, supported by the systematization of an experimental research procedure. As an example, the paper presents the modeling of an activity based on the need to determine the water consumption of a real-world physical system. The methodology consisted of assessing learner performance with different assessment activities, such as pedagogical tests and observations. Statistical processing of the results showed significant differences in learner performance between the learning outcomes before and after the designed experimental activities were carried out.

