Toward Interoperable Cyberinfrastructure: Common Descriptions for Computational Resources and Applications

Author(s): Joe Stubbs, Suresh Marru, Daniel Mejia, Daniel S. Katz, Kyle Chard, ...


Author(s): C.L. Woodcock

Despite its potential, electron tomography has yet to be widely used by biologists, in part because of the rather daunting list of equipment and expertise it requires. Thanks to continuing advances in theory and instrumentation, tomography is now more feasible for the non-specialist, and one barrier, the expense of computational resources, has essentially disappeared. In view of this progress, it is time to give more attention to the practical issues that need to be considered when embarking on a tomographic project. The following recommendations and comments are derived from experience gained during two long-term collaborative projects.

Tomographic reconstruction yields a three-dimensional description of an individual EM specimen, most commonly a section, and is therefore applicable to problems in which ultrastructural details within the thickness of the specimen are obscured in single micrographs. Information that can be recovered using tomography includes the 3D shape of particles and the arrangement and disposition of overlapping fibrous and membranous structures.


2019, Vol 20 (3), pp. 203-208
Author(s): Lin Ning, Bifang He, Peng Zhou, Ratmir Derda, Jian Huang

Background: Peptide-Fc fusion drugs, also known as peptibodies, are a category of biological therapeutics in which the Fc region of an antibody is genetically fused to a peptide of interest. However, developing such drugs is laborious and expensive, and rational design is urgently needed.

Methods: We summarized the key steps in peptide-Fc fusion technology and highlighted the main computational resources, tools, and methods that have been used in the rational design of peptide-Fc fusion drugs. We also raised open questions about the computer-aided molecular design of peptide-Fc constructs.

Results: The design of a peptibody consists of four steps. First, identify peptide leads from native ligands, biopanning, or computational design and prediction. Second, select the proper Fc region from the different classes or subclasses of immunoglobulin. Third, fuse the peptide leads and the Fc region together properly. Finally, evaluate the immunogenicity of the constructs. At each step, a number of useful resources and computational tools are available.

Conclusion: Reviewing the molecular design of peptibodies will certainly help make the transition from peptide leads to drugs on the market quicker and cheaper.


2020
Author(s): Dianbo Liu

BACKGROUND: Applications of machine learning (ML) in health care can have a great impact on people's lives. At the same time, medical data are usually big, requiring a significant amount of computational resources. Although this may not be a problem for the wide adoption of ML tools in developed nations, the availability of computational resources can be quite limited in developing nations and on mobile devices, which can prevent many people from benefiting from advances in ML applications for health care.

OBJECTIVE: In this paper we explore three methods to increase the computational efficiency of either a recurrent neural network (RNN) or a feedforward deep neural network (DNN) without compromising accuracy. We use in-patient mortality prediction on an intensive care dataset as our case study.

METHODS: We reduced the size of the RNN and DNN by pruning "unused" neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the total number of recurrent layers, thereby reducing the total number of parameters in the network. Finally, we applied quantization to the DNN, forcing the weights to be 8 bits instead of 32 bits.

RESULTS: We found that all three methods increased implementation efficiency, including training speed, memory size, and inference speed, without reducing the accuracy of mortality prediction.

CONCLUSIONS: These improvements allow sophisticated NN algorithms to be deployed on devices with limited computational resources.
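As a concrete illustration of the quantization method mentioned above, the following is a minimal sketch of post-training 8-bit affine quantization of a weight tensor in NumPy; the scheme and function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def quantize_weights_int8(w: np.ndarray):
    """Affine (asymmetric) post-training quantization of a float32
    weight tensor to int8, returning the quantized weights plus the
    scale/zero-point needed to dequantize them at inference time."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = np.round(-w_min / scale) - 128   # maps w_min -> -128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original float32 weights.
    return (q.astype(np.float32) - zero_point) * scale

# Example: a 32-bit weight matrix shrinks to one quarter of its size,
# at the cost of a small reconstruction error.
w = np.random.randn(256, 128).astype(np.float32)
q, s, z = quantize_weights_int8(w)
print(w.nbytes, q.nbytes)                      # 131072 vs 32768 bytes
print(np.abs(w - dequantize(q, s, z)).max())   # quantization error
```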


Sensors, 2021, Vol 21 (14), pp. 4805
Author(s): Saad Abbasi, Mahmoud Famouri, Mohammad Javad Shafiee, Alexander Wong

Human operators often diagnose industrial machinery by listening for anomalous sounds. Given recent advances in machine learning, automated acoustic anomaly detection can enable reliable maintenance of machinery. However, deep learning-driven anomaly detection methods often require extensive computational resources, prohibiting their deployment in factories. Here we explore a machine-driven design exploration strategy to create OutlierNets, a family of highly compact deep convolutional autoencoder network architectures with as few as 686 parameters, model sizes as small as 2.7 KB, and as few as 2.8 million FLOPs, while matching or exceeding the detection accuracy of published architectures with as many as 4 million parameters. The architectures are deployed on an Intel Core i5 as well as an ARM Cortex-A72 to assess performance on hardware likely to be used in industry. Experimental results show that the OutlierNet architectures can achieve as much as 30x lower latency than published networks.
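For readers unfamiliar with the underlying technique, here is a minimal sketch of a compact convolutional autoencoder for spectrogram patches, scored by reconstruction error; the layer sizes are illustrative assumptions and do not reproduce the OutlierNets architectures.

```python
import torch
import torch.nn as nn

# A tiny convolutional autoencoder for (1, 32, 32) spectrogram patches.
# Layer sizes are illustrative, not the published OutlierNets design.
class TinyAudioAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 4, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(4, 8, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 4, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(4, 1, 4, stride=2, padding=1),             # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAudioAE()
print(sum(p.numel() for p in model.parameters()))  # under 1k parameters

# The anomaly score is the reconstruction error: the autoencoder is
# trained only on normal machine sounds, so anomalies reconstruct poorly.
x = torch.randn(1, 1, 32, 32)          # stand-in for a mel-spectrogram patch
score = torch.mean((model(x) - x) ** 2)
```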


2021, Vol 0 (0)
Author(s): Nick Le Large, Frank Bieder, Martin Lauer

Abstract: For the application of an automated, driverless race car, we aim to ensure high map and localization quality for successful driving on previously unknown, narrow race tracks. To achieve this goal, it is essential to choose an algorithm that fulfills the requirements in terms of accuracy, computational resources, and run time. We propose both a filter-based and a smoothing-based Simultaneous Localization and Mapping (SLAM) algorithm and evaluate them using real-world data collected by a Formula Student Driverless race car. Accuracy is measured by comparing the SLAM-generated map to a ground-truth map acquired using high-precision Differential GPS (DGPS) measurements. The results of the evaluation show that both algorithms meet the required time constraints thanks to a parallelized architecture, with GraphSLAM draining computational resources much faster than Extended Kalman Filter (EKF) SLAM. However, the analysis of the maps generated by the algorithms shows that GraphSLAM outperforms EKF SLAM in terms of accuracy.
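As an illustration of how such a map-accuracy comparison can be computed, here is a minimal sketch that matches SLAM-estimated landmarks to DGPS ground-truth landmarks by nearest neighbor and reports the RMS position error; the matching threshold and data layout are assumptions, not details from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_accuracy(slam_landmarks: np.ndarray,
                 dgps_landmarks: np.ndarray,
                 max_match_dist: float = 0.5):
    """Match each SLAM-estimated landmark to its nearest DGPS ground-truth
    landmark and report the RMS position error over all matches within
    `max_match_dist` meters, plus the fraction of landmarks matched."""
    tree = cKDTree(dgps_landmarks)
    dists, _ = tree.query(slam_landmarks)
    matched = dists <= max_match_dist
    rmse = float(np.sqrt(np.mean(dists[matched] ** 2)))
    return rmse, float(matched.mean())

# Example with synthetic 2D landmark positions (x, y) in meters.
truth = np.random.uniform(0, 100, size=(200, 2))
estimate = truth + np.random.normal(0, 0.1, size=truth.shape)
rmse, match_rate = map_accuracy(estimate, truth)
print(f"RMSE: {rmse:.3f} m, matched: {match_rate:.0%}")
```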


Author(s): Jaber Almutairi, Mohammad Aldossary

Abstract: Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the volume of data these devices produce. This calls for offloading IoT tasks to resource-rich nodes such as edge computing and cloud computing to relieve devices of heavy computation and storage. Although edge computing is a promising enabler for latency-sensitive applications, its deployment introduces new challenges, and different service architectures and offloading strategies have different impacts on the service time of IoT applications. Therefore, this paper presents a novel approach for task offloading in an edge-cloud system that minimizes the overall service time for latency-sensitive applications. The approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand, and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to compare the proposed approach with related approaches; it is found to improve the overall service time for latency-sensitive applications and to utilize edge-cloud resources effectively. The results also show that different offloading decisions within the edge-cloud system can lead to markedly different service times, owing to the available computational resources and communication types.
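To make the fuzzy-logic idea concrete, here is a minimal sketch of a fuzzy offloading decision built from triangular membership functions and weighted-average defuzzification; the membership functions and rule base are illustrative assumptions, not the rules used in the paper.

```python
# A minimal sketch of a fuzzy edge-vs-cloud offloading decision.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_score(cpu_demand, delay_sensitivity, edge_utilization):
    """Inputs normalized to [0, 1]; returns a score in [0, 1] where
    higher means 'offload to the cloud', lower means 'serve at the edge'."""
    high_cpu = tri(cpu_demand, 0.4, 1.0, 1.6)
    delay_tolerant = tri(delay_sensitivity, -0.6, 0.0, 0.6)
    edge_busy = tri(edge_utilization, 0.5, 1.0, 1.5)

    # Illustrative rules: heavy, delay-tolerant tasks and a busy edge
    # push the decision toward the cloud; light tasks on an idle edge
    # stay local.
    rules = [
        (min(high_cpu, delay_tolerant), 1.0),       # -> cloud
        (edge_busy, 0.8),                           # -> cloud-leaning
        (min(1 - high_cpu, 1 - edge_busy), 0.1),    # -> edge
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.5  # weighted-average defuzzification

# A CPU-heavy, delay-tolerant task while the edge is busy -> offload.
print(offload_score(cpu_demand=0.9, delay_sensitivity=0.2, edge_utilization=0.8))
```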


Author(s): Cecilia Viscardi, Michele Boreale, Fabio Corradi

Abstract: We consider the problem of sample degeneracy in Approximate Bayesian Computation. It arises when proposed values of the parameters, once given as input to the generative model, rarely lead to simulations resembling the observed data and are hence discarded. Such "poor" parameter proposals do not contribute at all to the representation of the parameter's posterior distribution. This leads to a very large number of required simulations and/or a waste of computational resources, as well as to distortions in the computed posterior distribution. To mitigate this problem, we propose an algorithm, referred to as the Large Deviations Weighted Approximate Bayesian Computation algorithm, in which, via Sanov's Theorem, strictly positive weights are computed for all proposed parameters, thus avoiding the rejection step altogether. In order to derive a computable asymptotic approximation from Sanov's result, we adopt the information-theoretic "method of types" formulation of the method of Large Deviations, thus restricting our attention to models for i.i.d. discrete random variables. Finally, we experimentally evaluate our method through a proof-of-concept implementation.
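To show the shape of such a weighting scheme, here is a schematic sketch of Sanov-style weights for i.i.d. discrete data: each proposal's weight decays exponentially in the KL divergence between the observed type (empirical distribution) and the type of its simulated data. The function names, the use of the simulated type as the reference distribution, and the zero-count guard are our assumptions, not the paper's exact asymptotic approximation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))

def type_of(sample, n_symbols):
    """Empirical distribution ('type') of a sample coded as ints 0..n_symbols-1."""
    counts = np.bincount(sample, minlength=n_symbols)
    return counts / counts.sum()

def ldw_abc_weights(observed, prior_draws, simulate, n_symbols):
    """Weight each prior draw theta by exp(-n * KL(observed type || simulated
    type)), a Sanov-style surrogate for the probability that theta reproduces
    the observed type, instead of rejecting it outright."""
    n = len(observed)
    p_obs = type_of(observed, n_symbols)
    w = np.array([np.exp(-n * kl_divergence(p_obs, type_of(simulate(t, n), n_symbols)))
                  for t in prior_draws])
    return w / w.sum()  # normalized importance-style weights

# Example: Bernoulli data coded as {0, 1}; weights concentrate near 0.7.
rng = np.random.default_rng(0)
obs = rng.binomial(1, 0.7, size=200)
draws = rng.uniform(0, 1, size=500)
w = ldw_abc_weights(obs, draws, lambda t, n: rng.binomial(1, t, size=n), 2)
print(np.sum(w * draws))   # weighted posterior mean, close to 0.7
```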


2021, Vol 2 (3), pp. 1-24
Author(s): Chih-Kai Huang, Shan-Hsiang Shen

The next-generation 5G cellular networks are designed to support Internet of Things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or in containers. Moreover, edge clouds (which are closer to end users) are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources in edge clouds are limited. To minimize overall service latency, it is crucial to carefully determine which services should be provided in edge clouds so that more mobile and IoT devices are served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize cache hit rates. Our evaluations use real log files from Google to form two datasets for performance evaluation. The proposed cache replacement policy is compared with policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that cache hit rates are improved by 39% on average, and that the average latency under our cache replacement policy decreases by 41% and 38% in the two datasets. This indicates that our approach is superior to existing cache policies and is well suited to multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and to direct network traffic. We also evaluate the cost of cloning a service to an edge cloud: the cloning cost of various real applications is studied experimentally under the presented framework in different environments.
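As background for the cache-policy comparison, here is a compact sketch of the greedy-dual-size-frequency (GDSF) baseline mentioned above; S-Cache's own replacement policy is not reproduced here, and the lazy-deletion heap bookkeeping is an implementation choice of this sketch.

```python
import heapq

class GDSFCache:
    """Greedy-Dual-Size-Frequency baseline: priority H = L + freq * cost / size.
    The 'inflation' clock L rises to the priority of each evicted entry, so
    entries that are not re-accessed gradually age out of the cache."""

    def __init__(self, capacity):
        self.capacity = capacity   # total size budget
        self.used = 0.0
        self.L = 0.0               # inflation clock
        self.entries = {}          # service_id -> [freq, size, cost]
        self.heap = []             # (H, service_id), possibly stale

    def _priority(self, sid):
        freq, size, cost = self.entries[sid]
        return self.L + freq * cost / size

    def access(self, sid, size=1.0, cost=1.0):
        if sid in self.entries:                       # hit: bump frequency
            self.entries[sid][0] += 1
        else:                                         # miss: evict until it fits
            while self.used + size > self.capacity and self.heap:
                h, victim = heapq.heappop(self.heap)
                # Skip stale heap records whose entry was re-accessed or evicted.
                if victim in self.entries and self._priority(victim) <= h:
                    self.L = h                        # inflate the clock
                    self.used -= self.entries.pop(victim)[1]
            self.entries[sid] = [1, size, cost]
            self.used += size
        heapq.heappush(self.heap, (self._priority(sid), sid))

cache = GDSFCache(capacity=3)
for sid in ["a", "b", "a", "c", "d", "a"]:
    cache.access(sid)
print(sorted(cache.entries))   # the frequently used "a" survives eviction
```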


Entropy, 2020, Vol 23 (1), pp. 28
Author(s): Anna V. Kalyuzhnaya, Nikolay O. Nikitin, Alexander Hvatov, Mikhail Maslyaev, Mikhail Yachmenkov, ...

In this paper, we describe the concept of a generative design approach applied to the automated evolutionary learning of mathematical models in a computationally efficient way. To formalize the problems of model design and co-design, a generalized formulation of the modeling workflow is proposed. A parallelized evolutionary learning approach for the identification of model structure is described for equation-based models and composite machine learning models. Moreover, the involvement of performance models in the design process is analyzed. A set of experiments with various models and computational resources is conducted to verify different aspects of the proposed approach.
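As a toy illustration of evolutionary identification of model structure, the following sketch evolves a bit mask selecting which polynomial terms enter a regression model, with a complexity penalty providing parsimony pressure; the representation and fitness function are illustrative and far simpler than the composite models considered in the paper. Note that the fitness evaluations within a generation are independent, which is what makes a parallelized variant natural.

```python
import random
import numpy as np

# Toy evolutionary structure search: a boolean mask decides which terms
# x^0..x^4 enter the model y ~ sum(c_k x^k); coefficients are fitted by
# least squares, and fitness is the complexity-penalized fit error.

def fitness(mask, x, y, penalty=0.01):
    terms = [k for k, on in enumerate(mask) if on]
    if not terms:
        return float("inf")
    A = np.stack([x ** k for k in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    mse = float(np.mean((A @ coef - y) ** 2))
    return mse + penalty * len(terms)        # parsimony pressure

def evolve(x, y, pop_size=20, gens=30, n_terms=5):
    pop = [[random.random() < 0.5 for _ in range(n_terms)] for _ in range(pop_size)]
    for _ in range(gens):
        # Fitness calls are independent, hence trivially parallelizable.
        pop.sort(key=lambda m: fitness(m, x, y))
        survivors = pop[: pop_size // 2]
        children = []
        for p in survivors:
            child = p.copy()
            child[random.randrange(n_terms)] ^= True   # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda m: fitness(m, x, y))

x = np.linspace(-1, 1, 100)
y = 2.0 * x ** 2 - 0.5 * x + np.random.normal(0, 0.01, x.size)
print(evolve(x, y))   # expected to switch on the x^1 and x^2 terms
```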

