Joint QoS-aware and Cost-efficient Task Scheduling for Fog-cloud Resources in a Volunteer Computing System

2021, Vol 21 (4), pp. 1-21
Author(s):
Farooq Hoseiny
Sadoon Azizi
Mohammad Shojafar
Rahim Tafazolli

Volunteer computing is an Internet-based distributed computing paradigm in which volunteers share their spare resources to handle large-scale tasks. However, the computing devices in a Volunteer Computing System (VCS) are highly dynamic and heterogeneous in terms of processing power, monetary cost, and data-transfer latency. To ensure both high Quality of Service (QoS) and low cost for different requests, all available computing resources must be used efficiently. Task scheduling is an NP-hard problem and one of the main challenges in a heterogeneous VCS. For this reason, in this article we design two task scheduling algorithms for VCSs, named Min-CCV and Min-V. The main goal of the proposed algorithms is to jointly minimize the computation, communication, and delay-violation cost of Internet of Things (IoT) requests. Our extensive simulation results show that the proposed algorithms allocate tasks to volunteer fog/cloud resources more efficiently than the state-of-the-art. Specifically, our algorithms raise the task deadline satisfaction rate to around 99.5% and decrease the total cost by 15 to 53% in comparison with a genetic-based algorithm.
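
The abstract does not give Min-CCV's selection rule, so the sketch below is only a hypothetical illustration of the joint objective it describes: greedily assigning each IoT task to the volunteer fog/cloud resource with the lowest combined computation, communication and delay-violation cost. All field names, prices and the linear violation penalty are assumptions made for the example, not the paper's model.

```python
# Illustrative greedy cost-aware scheduler; parameters and cost model are assumed.
from dataclasses import dataclass

@dataclass
class Task:
    size: float      # workload, e.g. million instructions
    data: float      # input data to transfer, MB
    deadline: float  # seconds

@dataclass
class Resource:
    mips: float        # processing speed of the volunteer node
    bandwidth: float   # MB/s between the requester and the node
    cpu_price: float   # monetary cost per second of computation
    comm_price: float  # monetary cost per MB transferred

def joint_cost(task: Task, res: Resource, penalty: float = 1.0) -> float:
    """Computation + communication cost plus an (assumed linear) delay-violation term."""
    exec_time = task.size / res.mips
    comm_time = task.data / res.bandwidth
    cost = exec_time * res.cpu_price + task.data * res.comm_price
    finish = exec_time + comm_time
    if finish > task.deadline:
        cost += penalty * (finish - task.deadline)
    return cost

def schedule(tasks, resources):
    """Greedy sketch: each task goes to the resource with minimum joint cost."""
    return [min(resources, key=lambda r: joint_cost(t, r)) for t in tasks]
```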

2021, Vol 12
Author(s):
Wenyi Xu
Tianda Chen
Yuwei Pei
Hao Guo
Zhuanyu Li
...

Characterization of the bacterial composition and functional repertoire of microbiome samples is the most common application of metagenomics. Although deep whole-metagenome shotgun sequencing (WMS) provides high taxonomic resolution, it is generally cost-prohibitive for large longitudinal investigations. Until now, 16S rRNA gene amplicon sequencing (16S) has been the most widely used approach, often combined with WMS to achieve cost-efficiency. However, the accuracy of 16S results and their consistency with WMS data have not been fully evaluated, especially on complex microbiomes with defined compositions. Here, we constructed two complex artificial microbiomes, each comprising more than 60 human gut bacterial species with even or varied abundance. Using real fecal samples and mock communities, we provide solid evidence that 16S results are poorly consistent with WMS data and that their accuracy is unsatisfactory. In contrast, shallow whole-metagenome shotgun sequencing (shallow WMS, S-WMS) with a sequencing depth of 1 Gb produced outputs that closely resembled WMS data at both the genus and species levels and yielded far more accurate taxonomic assignments and functional predictions than 16S, thereby representing a better, cost-efficient alternative to 16S for large-scale microbiome studies.
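
As a rough, hypothetical illustration of how consistency between sequencing approaches can be quantified (this is not the authors' pipeline), the snippet below compares genus-level relative-abundance profiles against a deep-WMS reference using Bray-Curtis dissimilarity; the taxa and abundances are invented.

```python
# Bray-Curtis dissimilarity between relative-abundance profiles (toy values).
def bray_curtis(a: dict, b: dict) -> float:
    taxa = set(a) | set(b)
    num = sum(abs(a.get(t, 0.0) - b.get(t, 0.0)) for t in taxa)
    den = sum(a.get(t, 0.0) + b.get(t, 0.0) for t in taxa)
    return num / den if den else 0.0

wms  = {"Bacteroides": 0.30, "Faecalibacterium": 0.25, "Prevotella": 0.45}  # deep-WMS reference
s16  = {"Bacteroides": 0.50, "Faecalibacterium": 0.10, "Prevotella": 0.40}  # 16S estimate
swms = {"Bacteroides": 0.32, "Faecalibacterium": 0.24, "Prevotella": 0.44}  # shallow-WMS estimate

print(bray_curtis(wms, s16))   # larger value: poorer consistency with deep WMS
print(bray_curtis(wms, swms))  # smaller value: profile close to the deep-WMS reference
```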


2017, Vol 13 (S337), pp. 21-24
Author(s):
Colin J. Clark
Jason Wu
Holger J. Pletsch
Lucas Guillemot

Since the launch of the Fermi Gamma-ray Space Telescope in 2008, the onboard Large Area Telescope (LAT) has detected gamma-ray pulsations from more than 200 pulsars. A large fraction of these remain undetected in radio observations, and could only be found by directly searching the LAT data for pulsations. However, the sensitivity of such “blind” searches is limited by the sparse photon data and vast computational requirements. In this contribution we present the latest large-scale blind-search survey for gamma-ray pulsars, which ran on the distributed volunteer computing system, Einstein@Home, and discovered 19 new gamma-ray pulsars. We explain how recent improvements to search techniques and LAT data reconstruction have boosted the sensitivity of blind searches, and present highlights from the survey’s discoveries. These include: two glitching pulsars; the youngest known radio-quiet gamma-ray pulsar; and two isolated millisecond pulsars (MSPs), one of which is the only known radio-quiet rotationally powered MSP.
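
The survey's semi-coherent search techniques are far more elaborate than can be shown here, but the basic question they answer, whether sparse photon arrival times cluster in rotational phase at a trial frequency, can be sketched with a simple Z²_n statistic; the toy photon data and frequencies below are invented for illustration.

```python
# Minimal Z^2_n pulsation test on photon arrival times (toy example, not LAT data).
import numpy as np

def z2_statistic(times: np.ndarray, freq: float, n_harmonics: int = 2) -> float:
    """Large values indicate the photon phases are non-uniform (i.e. pulsed)."""
    phases = 2.0 * np.pi * ((times * freq) % 1.0)
    z2 = 0.0
    for k in range(1, n_harmonics + 1):
        z2 += np.cos(k * phases).sum() ** 2 + np.sin(k * phases).sum() ** 2
    return 2.0 / len(times) * z2

rng = np.random.default_rng(0)
f0 = 2.0                                      # assumed true spin frequency, Hz
t_background = rng.uniform(0.0, 1000.0, 700)  # unpulsed photons
t_pulsed = (rng.integers(0, 2000, 300) + rng.normal(0.25, 0.03, 300)) / f0
times = np.concatenate([t_background, t_pulsed])

print(z2_statistic(times, f0))         # large: pulsation recovered at the true frequency
print(z2_statistic(times, f0 * 1.05))  # near chance level at a wrong trial frequency
```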


Author(s):  
Ulrika Linderhed
Ioannis Petsagkourakis
Peter Andersson Ersman
Valerio Beni
Klas Tybrandt

The advent of the Internet of Things and the growing interest in continuous monitoring by wearables have created a need for conformable and stretchable displays. Electrochromic displays (ECDs) are receiving attention as a cost-effective solution for many simple applications. However, stretchable ECDs have yet to be produced in a robust, large-scale and cost-efficient manner. Here we develop a process for making fully screen printed stretchable ECDs. Because the process relies on commercially available inks evaluated for their electromechanical properties, including electrochromic PEDOT:PSS inks, it can be directly applied in the manufacturing of stretchable organic electronic devices. The manufactured ECDs retained colour contrast with useful switching times at static strains up to 50% and under strain cycling up to 30% strain. To further demonstrate the applicability of the technology, double-digit 7-segment ECDs were produced, which could conform to curved surfaces and be mounted onto stretchable fabrics while remaining fully functional. Based on their simplicity, robustness and processability, we believe that low-cost printed stretchable ECDs can be easily scaled up and will find many applications within the rapidly growing markets of wearable electronics and the Internet of Things.


Author(s):  
Yuling Fang
Qingkui Chen
Neal N. Xiong
Deyu Zhao
Jingjuan Wang

This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in Internet of Things (IoT) computing. Considering the characteristics of IoT data processing, which resemble those of mainstream high-performance computing, we use a GPU cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on wireless sensor networks (WSNs). Then, using the CUDA programming model, we propose a Two-level Parallel Optimization Model (TLPOM) that exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration under the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system that accounts for node diversity, algorithm characteristics and other factors. The results show that with the TLPOM the performance of the algorithms increased by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell, respectively, and that the RGCA ensures our IoT computing system provides low-cost and high-reliability services.
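
The abstract does not detail the TLPOM, so the following is only a hypothetical sketch of the trade-off it refers to: choosing a block/thread/ILP configuration that keeps each streaming multiprocessor busy without exceeding its register and shared-memory budgets. The resource limits and the occupancy-times-ILP score are illustrative assumptions, not the paper's method.

```python
# Toy search over launch configurations under assumed per-SM resource limits.
def best_launch_config(n_elements: int,
                       regs_per_thread: int = 32, smem_per_block: int = 4096,
                       sm_regs: int = 65536, sm_smem: int = 49152,
                       sm_max_threads: int = 2048):
    best, best_score = None, -1.0
    for threads in (64, 128, 256, 512, 1024):
        for ilp in (1, 2, 4, 8):                        # elements handled per thread (ILP)
            blocks = -(-n_elements // (threads * ilp))  # ceiling division
            resident = min(sm_regs // (regs_per_thread * threads),  # register limit
                           sm_smem // smem_per_block,               # shared-memory limit
                           sm_max_threads // threads)               # thread limit
            if resident == 0:
                continue
            occupancy = resident * threads / sm_max_threads
            score = occupancy * ilp                     # crude proxy for TLP x ILP throughput
            if score > best_score:
                best, best_score = (blocks, threads, ilp), score
    return best

print(best_launch_config(1 << 20))  # e.g. (blocks, threads per block, ILP factor)
```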


Water, 2021, Vol 13 (18), pp. 2585
Author(s):
Tianrui Li
Jiangjun Hu
Liandong Zhu

The development of clean and renewable biofuels has attracted wide concern in relation to energy and environmental issues. As a form of biomass energy with great application prospects, microalgae have many advantages and are used in environmental protection and biofuel production as well as in food or feed production for humans and animals. However, the high cost of harvesting microalgae is the main bottleneck for large-scale industrial production. Self-flocculation is a cost-efficient and promising method for harvesting microalgal biomass. This article briefly describes the technologies currently used for microalgae harvesting, focusing on research progress in self-flocculation. It explores the mechanisms and influencing factors of self-flocculation and discusses the integration of algae cultivation and harvesting, as well as the co-cultivation of algae and bacteria, in an effort to provide a reference for high-efficiency, low-cost microalgae harvesting.


Electronics, 2021, Vol 10 (7), pp. 862
Author(s):
Rih-Lung Chung
Chen-Wei Chen
Chiung-An Chen
Patricia Angela R. Abu
Shih-Lun Chen

This paper presents a low-cost, high-quality, hardware-oriented, two-dimensional discrete cosine transform (2-D DCT) signal analyzer for image and video encoders. In order to reduce the memory requirement and improve image quality, a novel Loeffler DCT based on the coordinate rotation digital computer (CORDIC) technique is proposed. In addition, the proposed algorithm is realized by a recursive CORDIC architecture instead of an unfolded CORDIC architecture with approximated scale factors. In the proposed design, a fully pipelined architecture is developed to efficiently increase operating frequency and throughput, and the scale factors are implemented by four hardware-sharing machines to reduce complexity. Thus, the computational complexity is decreased significantly with only 0.01 dB deviation from the optimal image quality of the Loeffler DCT. Experimental results show that the proposed 2-D DCT spectral analyzer not only achieves a superior average peak signal-to-noise ratio (PSNR) compared with previous CORDIC-DCT algorithms but also offers a cost-efficient architecture for very-large-scale integration (VLSI) implementation. The proposed design was realized using a UMC 0.18-μm CMOS process with a synthesized gate count of 8.04 k and a core area of 75,100 μm². Its operating frequency was 100 MHz and its power consumption was 4.17 mW. Moreover, this work achieves at least a 64.1% gate-count reduction and at least 22.5% power savings compared with previous designs.
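
The paper's recursive, pipelined VLSI architecture is not reproduced here, but the primitive it builds on, replacing the multipliers of a Loeffler DCT rotation with shift-and-add CORDIC iterations, can be sketched in a few lines; the iteration count and floating-point arithmetic below are for illustration only.

```python
# CORDIC rotation in floating point (hardware uses fixed point and folds the scale
# factor into constants); rotating (1, 0) by an angle yields (cos, sin).
import math

def cordic_rotate(x: float, y: float, angle: float, iterations: int = 16):
    z, scale = angle, 1.0
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i  # shift-and-add step
        z -= d * math.atan(2.0 ** -i)
        scale /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x * scale, y * scale

# The 3*pi/8 rotation used in the Loeffler DCT, accurate here to roughly 1e-4.
print(cordic_rotate(1.0, 0.0, 3 * math.pi / 8))
print(math.cos(3 * math.pi / 8), math.sin(3 * math.pi / 8))
```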

