commodity clusters
Recently Published Documents


TOTAL DOCUMENTS: 48 (FIVE YEARS: 8)
H-INDEX: 10 (FIVE YEARS: 1)

Author(s):  
Anis Solekha ◽  
Andri Widianto ◽  
Anita Karunia

Bank Indonesia (BI) is the central bank of the Republic of Indonesia, responsible for achieving and maintaining the stability of the rupiah, which includes keeping volatile food prices stable. Volatile food prices can be managed through cluster development, one example being the shallot commodity cluster. This study aimed to determine the development of shallot commodity clusters in the context of controlling inflation at the Tegal Bank Indonesia Representative Office. The data collection techniques were observation, interviews, and literature study. The data were analyzed using descriptive qualitative analysis through a fishbone diagram instrument. The fishbone diagram analysis showed that the factors influencing the shallot commodity's effect on inflation control in Brebes Regency are farmers, upstream factors, environmental factors, and downstream factors. Shallots are the main contributor to inflation in Brebes Regency: in 2016 the commodity contributed 0.33% to inflation, while in 2017-2019 it contributed to deflation by -0.26%, -0.22274%, and -0.0883%, respectively. The conclusion is that the development of the shallot commodity cluster for controlling inflation at the Tegal Bank Indonesia Representative Office, assessed using a fishbone diagram instrument, is considered reasonably good.


2020 ◽  
Vol 10 (1) ◽  
pp. 64-84 ◽  
Author(s):  
Shweta Kaushik ◽  
Charu Gandhi

Cloud computing has introduced a paradigm that supports outsourcing data to third parties for processing using commodity clusters. It allows the owner to outsource sensitive data and share it with authorized users while reducing computation and management costs. Since owners store sensitive data in the cloud, the requirements for access control and data security have also been increasing. To meet these requirements, a safe, secure, and sound model is needed. Existing solutions to these problems use purely cryptographic techniques, which increases the computation cost. In this article, the security problems are solved by using a trusted third party and a quorum of key managers. A service provider is responsible for capability-based access control to ensure that only authorized users are able to access the data. Whenever data revocation is required, the data owner simply notifies the master key manager to revoke a specific number of shares. The model for the proposed work is presented, and its analysis shows the security features it introduces.
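A quorum-of-key-managers design of this kind is commonly built on threshold secret sharing, where no single manager holds the full key and revoking enough shares makes the key unrecoverable. A minimal sketch of Shamir's scheme (a generic illustration, not the paper's exact construction; the function names and the 127-bit field are assumptions):

```python
import random

# Illustrative Shamir threshold secret sharing over a prime field: the data
# key is split into n shares held by key managers, and any t of them can
# reconstruct it. Revoking shares until fewer than t remain destroys access.
PRIME = 2**127 - 1  # field modulus (a Mersenne prime), assumed for the sketch

def split_key(key, n, t):
    """Split `key` into n shares, any t of which reconstruct it."""
    coeffs = [key] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct_key(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the key."""
    key = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        key = (key + yi * num * pow(den, -1, PRIME)) % PRIME
    return key

key = 123456789
shares = split_key(key, n=5, t=3)   # e.g., one share per key manager
```

Any 3 of the 5 shares recover the key; 2 or fewer reveal nothing about it.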


2019 ◽  
Vol 21 (3) ◽  
pp. 191
Author(s):  
Sunarso Sunarso ◽  
Taufik Taufik ◽  
Irwan Kurniawan

This paper analyzes the efficiency of grant funds, creativity, assistance, and the implementation of appropriate technology with respect to the economic value of the partners' business results, based on technology dissemination at two farmer groups chosen as program partners. The objects of this research are two farmer groups, namely KUBE Sukamakmur and Saung Cipamingkis Farmers Groups in the village of Sukamulya, Bogor. Data collection methods were field observations and interviews with respondents consisting of partner farmers. The sample consists of 20 decision-making units (DMUs) divided into 5 food commodity clusters from the 2 partner groups. The interviews aimed to obtain a general picture of the food-crop farming of the partner farmers who are the object of this research. Research data were analyzed using Data Envelopment Analysis (DEA). The results show that for fund efficiency, only 2 DMUs reach a technical efficiency value of 100%, namely DMU 19 and DMU 10 in cluster S (cassava commodity). For creativity efficiency, only DMU 19 and DMU 20 (cluster S, cassava commodity) are efficient. For mentoring efficiency, only 1 DMU is efficient, namely DMU 19 in cluster S (cassava commodity). Efficiency in the aspect of appropriate technology (technical efficiency) is likewise achieved by only 1 DMU, namely DMU 19 in cluster S (cassava). This also shows that as many as 19 DMUs are marked as inefficient based on their efficiency values.
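In the single-input, single-output special case, CCR-style DEA efficiency reduces to each DMU's output/input ratio divided by the best ratio in the set. A toy sketch of that special case (the data and names are hypothetical, not taken from the study):

```python
# Toy DEA efficiency for the single-input, single-output special case:
# each DMU's output/input ratio is normalized by the best ratio observed,
# so efficient DMUs score exactly 1.0 and all others score below it.
def dea_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical fund-efficiency data for 5 DMUs (grant spent vs. crop value)
inputs  = [10.0, 12.0, 8.0, 15.0, 9.0]
outputs = [25.0, 24.0, 24.0, 30.0, 18.0]
scores = dea_efficiency(inputs, outputs)
# Only the DMU with the best value-per-fund ratio is marked efficient (1.0)
```

With multiple inputs and outputs, as in the study, each DMU instead solves a small linear program to pick its most favorable weights.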


2019 ◽  
Vol 5 (1) ◽  
pp. 65-79
Author(s):  
Yunhong Ji ◽  
Yunpeng Chai ◽  
Xuan Zhou ◽  
Lipeng Ren ◽  
Yajie Qin

Intra-query fault tolerance has increasingly become a concern for online analytical processing, as more and more enterprises migrate data analytical systems from mainframes to commodity computers. Most massively parallel processing (MPP) databases do not support intra-query fault tolerance and may suffer from prolonged query latency when running on unreliable commodity clusters. While SQL-on-Hadoop systems can utilize the fault-tolerance support of low-level frameworks such as MapReduce and Spark, their cost-effectiveness is not always acceptable. In this paper, we propose a smart intra-query fault tolerance (SIFT) mechanism for MPP databases. SIFT achieves fault tolerance by performing checkpointing, i.e., materializing the intermediate results of selected operators. Unlike existing approaches, SIFT aims to maximize the query success rate within a given time. To achieve this goal, it needs to (1) minimize query rerunning time after encountering failures and (2) introduce as little checkpointing overhead as possible. To evaluate SIFT in a real-world MPP database system, we implemented it in Greenplum. The experimental results indicate that it improves the success rate of query processing effectively, especially when working with unreliable hardware.
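The trade-off behind checkpoint selection can be illustrated with a toy heuristic: materialize an operator's output only when the expected rework saved on failure outweighs the materialization cost. This is a simplified illustration, not SIFT's actual algorithm; the rule and all parameters are assumptions:

```python
# Toy checkpoint-placement heuristic (not the paper's algorithm): walk a
# pipeline of operators and checkpoint after an operator whenever the
# expected rework on failure (failure probability x work at risk) exceeds
# the cost of materializing that operator's intermediate result.
def plan_checkpoints(op_times, ckpt_costs, p_fail):
    plan, at_risk = [], 0.0
    for t, c in zip(op_times, ckpt_costs):
        at_risk += t                    # work lost if a failure strikes now
        if p_fail * at_risk > c:        # expected rework saved > checkpoint cost
            plan.append(True)
            at_risk = 0.0               # a rerun restarts from this checkpoint
        else:
            plan.append(False)
    return plan
```

Cheap checkpoints and long unprotected stretches both push the heuristic toward materializing, mirroring the balance SIFT strikes between rerun time and checkpointing overhead.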


2019 ◽  
Vol 34 ◽  
Author(s):  
Muhammad Hanif ◽  
Choonhwa Lee

Recently, the valuable knowledge that can be retrieved from huge volumes of data (called Big Data) has set in motion the development of frameworks for processing data based on parallel and distributed computing, including Apache Hadoop, Facebook Corona, and Microsoft Dryad. Apache Hadoop is an open-source implementation of Google MapReduce that has attracted strong attention from the research community in both academia and industry. Hadoop MapReduce scheduling algorithms play a critical role in the management of large commodity clusters, controlling QoS requirements by supervising users, jobs, and task execution. Hadoop MapReduce comprises three schedulers: FIFO, Fair, and Capacity. However, the research community has developed new optimizations to account for advances and dynamic changes in hardware and operating environments. Numerous efforts have been made in the literature to address issues of network congestion, straggling, data locality, heterogeneity, resource under-utilization, and skew mitigation in Hadoop scheduling. Recently, the volume of research published in journals and conferences on Hadoop scheduling has consistently increased, which makes it difficult for researchers to grasp an overall view of the field and the areas that require further investigation. This study conducts a scientific literature review to assess preceding research contributions to the Apache Hadoop scheduling mechanism. We classify and quantify the main issues addressed in the literature by the terminology and problem areas they cover. Moreover, we explain and discuss the various challenges and open issues in Hadoop scheduling optimizations.
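The difference between the FIFO and Fair schedulers named above can be sketched by ordering the same job queue both ways (a toy simulation; job names and task counts are made up, and the real schedulers also handle pools, weights, and preemption):

```python
# Toy contrast of FIFO vs. Fair task ordering over one shared cluster slot.
jobs = [("A", 6), ("B", 2), ("C", 2)]   # (job name, task count); A arrived first

def fifo_order(jobs):
    # FIFO: run every task of the earliest-submitted job before the next job
    return [name for name, n in jobs for _ in range(n)]

def fair_order(jobs):
    # Fair: hand out one task per job per round, so small jobs finish early
    queues = [[name] * n for name, n in jobs]
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.pop())
    return order
```

Under FIFO the small jobs B and C wait behind all of A's tasks; under fair sharing they complete within the first two rounds, which is why Fair-style scheduling improves latency for short jobs on busy clusters.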


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Robert K. L. Kennedy ◽  
Taghi M. Khoshgoftaar ◽  
Flavio Villanustre ◽  
Timothy Humphrey

2018 ◽  
Vol 4 (3) ◽  
pp. 396-407 ◽  
Author(s):  
Bo Wang ◽  
Jinlei Jiang ◽  
Yongwei Wu ◽  
Guangwen Yang ◽  
Keqin Li

Author(s):  
Enrico Calore ◽  
Alessandro Gabbana ◽  
Sebastiano Fabio Schifano ◽  
Raffaele Tripiccione

GPUs deliver higher performance than traditional processors, offering remarkable energy efficiency, and are quickly becoming very popular processors for HPC applications. Still, writing efficient and scalable programs for GPUs is not an easy task, as codes must adapt to increasingly parallel architecture features. In this chapter, the authors describe in full detail design and implementation strategies for lattice Boltzmann (LB) codes able to meet these goals. Most of the discussion uses a state-of-the-art thermal lattice Boltzmann method in 2D, but all lessons learned in this particular case can be immediately extended to most LB and other scientific applications. The authors describe the structure of the code, discussing in detail several key design choices that were guided by theoretical models of performance and experimental benchmarks, having in mind both single-GPU codes and massively parallel implementations on commodity clusters of GPUs. The authors then present and analyze performance on several recent GPU architectures, including data on energy optimization.
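The memory-access pattern at the heart of such LB codes is the streaming (propagation) step, in which each population moves one lattice site along its discrete velocity. A minimal D2Q9 sketch with periodic boundaries (illustrative NumPy, not the authors' GPU code; on a GPU this bandwidth-bound step is what data-layout choices like AoS vs. SoA optimize):

```python
import numpy as np

# D2Q9 discrete velocities: rest, 4 axis-aligned, 4 diagonal directions
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])

def stream(f):
    """Streaming step: shift each population along its velocity (periodic)."""
    # f has shape (9, nx, ny): one population array per discrete velocity
    return np.stack([np.roll(f[k], tuple(c[k]), axis=(0, 1))
                     for k in range(9)])

rng = np.random.default_rng(0)
f = rng.random((9, 16, 16))     # toy populations on a 16x16 lattice
f_new = stream(f)
# Streaming only moves populations between sites, so total mass is conserved
```

A full LB solver alternates this step with a local collision step; the collision is compute-bound and purely per-site, which is what makes the method map so well to GPUs.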

