A Data Dictionary Approach To Meeting User Requests For Accounting Information

1999 · Vol 3 (1) · pp. 53-60
Author(s): Kristi Yuthas, Dennis F. Togo

In this era of massive data accumulation, dynamic development of large-scale databases, and interfaces intended to be user-friendly, demand on analysts continues to grow because direct user access to databases is still not common practice. A data dictionary approach, which provides users with a list of relevant data items within the database, can expedite the analysis of information requirements and the development of user-requested information systems. Furthermore, this approach enhances user involvement and reduces the demands placed on analysts in systems development projects.
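
As a concrete illustration of how such a dictionary might be produced, the Python sketch below builds a simple data dictionary from a database's own catalog. It is a hypothetical example, not the authors' method: the SQLite database, table names, and file name are assumptions.

```python
import sqlite3

def build_data_dictionary(db_path):
    """Return a list of (table, column, type) entries describing the database."""
    conn = sqlite3.connect(db_path)
    entries = []
    # Enumerate user tables from SQLite's catalog.
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info lists each column's name and declared type.
        for _, name, col_type, *_ in conn.execute(f"PRAGMA table_info({table})"):
            entries.append((table, name, col_type))
    conn.close()
    return entries

# Users browse this listing to identify relevant data items, e.g. for a
# hypothetical accounting database with 'ledger' and 'invoices' tables.
for table, column, col_type in build_data_dictionary("accounting.db"):
    print(f"{table}.{column}: {col_type}")
```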

1994 · Vol 83 (03) · pp. 135-141
Author(s): P. Fisher, R. Van Haselen

Abstract
Large-scale data collection combined with modern information technology is a powerful tool for evaluating the efficacy and safety of homoeopathy. It also has great potential to improve homoeopathic practice. Yet data collection has not been widely used in homoeopathy, apparently because of the clumsiness of the methodology and the perception that it is of little value to daily practice. Three protocols addressing different aspects of this issue are presented:
- A proposal to establish a common basic data collection methodology for homoeopaths throughout Europe.
- A systematic survey of the results of homoeopathic treatment of patients with rheumatoid arthritis, using quality-of-life and objective assessments.
- Verification of a set of homoeopathic prescribing features for Rhus toxicodendron.
These proposals are designed to be ‘user-friendly’ and to provide practical information relevant to daily homoeopathic practice.


Author(s): Zachary B Abrams, Caitlin E Coombes, Suli Li, Kevin R Coombes

Abstract
Summary: Unsupervised machine learning provides tools for researchers to uncover latent patterns in large-scale data, based on calculated distances between observations. Methods to visualize high-dimensional data based on these distances can elucidate subtypes and interactions within multi-dimensional and high-throughput data. However, researchers can select from a vast number of distance metrics and visualizations, each with their own strengths and weaknesses. The Mercator R package facilitates selection of a biologically meaningful distance from 10 metrics that together cover binary, categorical and continuous data, and visualization with 5 standard and high-dimensional graphics tools. Mercator provides a user-friendly pipeline for informaticians or biologists to perform unsupervised analyses, from exploratory pattern recognition to production of publication-quality graphics.
Availability and implementation: Mercator is freely available at the Comprehensive R Archive Network (https://cran.r-project.org/web/packages/Mercator/index.html).
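
Mercator itself is an R package; the Python sketch below is only a language-neutral illustration of the kind of pipeline it automates, pairing a distance metric suited to the data type with a low-dimensional visualization. The metric choice, synthetic data, and libraries (scipy, scikit-learn, matplotlib) are assumptions, not Mercator's API.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 50))  # binary data: 100 samples x 50 features

# Step 1: choose a distance suited to the data type; Jaccard suits binary data.
D = squareform(pdist(X, metric="jaccard"))

# Step 2: visualize the distance structure in two dimensions.
coords = TSNE(metric="precomputed", init="random", random_state=0).fit_transform(D)

plt.scatter(coords[:, 0], coords[:, 1], s=10)
plt.title("t-SNE embedding of Jaccard distances")
plt.show()
```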


2020 · Vol 10 (5) · pp. 314
Author(s): Jingbin Yuan, Jing Zhang, Lijun Shen, Dandan Zhang, Wenhuan Yu, ...

Recently, with the rapid development of electron microscopy (EM) technology and the increasing demand for neuron circuit reconstruction, the scale of reconstruction data has grown significantly. This brings many challenges, one of which is how to effectively manage large-scale data so that researchers can mine valuable information. For this purpose, we developed a data management module with two parts: a storage and retrieval module on the server side and an image cache module on the client side. On the server side, Hadoop and HBase are introduced to handle massive data storage and retrieval. The pyramid model is adopted to store electron microscope images, representing each image at multiple resolutions. A block storage method is proposed to store volume segmentation results. We design a spatial-location-based retrieval method that fetches images and segments layer by layer in constant time. On the client side, a three-level image cache module is designed to reduce latency when acquiring data. Through theoretical analysis and practical tests, our tool shows excellent real-time performance when handling large-scale data. Additionally, the server side can be used as a backend for other similar software or as a public database to manage shared datasets, showing strong scalability.
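
The constant-time retrieval follows from encoding a tile's spatial location directly into its storage key, so a lookup never requires a scan. The Python sketch below illustrates one such key scheme; the field layout and tile size are assumptions, not the paper's actual HBase schema.

```python
def tile_row_key(layer: int, level: int, x: int, y: int) -> bytes:
    """Encode a tile's position as a fixed-width row key.

    layer: z-index of the EM section; level: pyramid resolution level;
    (x, y): tile grid coordinates at that level. Because the key is
    computed directly from the query coordinates, a single get() suffices:
    no range scan is needed, giving constant time complexity.
    """
    return f"{layer:08d}:{level:02d}:{x:06d}:{y:06d}".encode()

TILE = 512  # tile edge length in pixels (assumed)

def key_for_pixel(layer: int, level: int, px: int, py: int) -> bytes:
    """Derive the grid coordinates covering pixel (px, py) and build its key."""
    return tile_row_key(layer, level, px // TILE, py // TILE)

print(key_for_pixel(layer=1200, level=3, px=40960, py=8192))
```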


2021 · Vol 9 (1) · pp. 39-58
Author(s): Efri Syamsul Bahri, Ade Suhaeti, Nursanita Nasution, ...

This study tests the factors that influence the decision of muzakki in channeling zakat, namely trust, religiosity, income, and the quality of accounting information. It surveys 40 muzakki of the Amil Zakat Institution (known as LAZ) Zakat Sukses in Depok, selected through purposive sampling. Data were analyzed with SPSS 25 using multiple linear regression. The results indicate that trust, religiosity, income, and the quality of accounting information simultaneously influence the decision of muzakki to distribute zakat through LAZ Zakat Sukses in Depok. Partially, trust, religiosity, and income each positively affect that decision, whereas the quality of accounting information has a negative effect. Because the study's scope is limited to muzakki at LAZ Zakat Sukses Depok, the results may not be nationally representative; similar studies collecting larger-scale data across broader areas would be useful. The implication is that LAZ Zakat Sukses needs to demonstrate its zakat management performance to increase muzakki trust.
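
For readers who want to reproduce the methodology, the analysis is a standard multiple linear regression; below is a minimal Python sketch of the equivalent of the SPSS procedure, using synthetic data and hypothetical variable names in place of the authors' survey instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey responses (n = 40, as in the study);
# column names are hypothetical, not the authors' instrument.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "trust": rng.normal(4, 0.5, 40),
    "religiosity": rng.normal(4, 0.5, 40),
    "income": rng.normal(3, 1.0, 40),
    "info_quality": rng.normal(3.5, 0.5, 40),
})
df["zakat_decision"] = (0.5 * df["trust"] + 0.3 * df["religiosity"]
                        + 0.2 * df["income"] - 0.1 * df["info_quality"]
                        + rng.normal(0, 0.3, 40))

X = sm.add_constant(df[["trust", "religiosity", "income", "info_quality"]])
model = sm.OLS(df["zakat_decision"], X).fit()
# t-tests give the partial effects; the F-test is the simultaneous one.
print(model.summary())
```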


2014 · Vol 568-570 · pp. 1539-1546
Author(s): Xin Li Li

Large-scale data stream processing is now fundamental to many data processing applications, and there is growing interest in processing large-scale data streams on GPUs to improve data throughput. Hence, there is a need to investigate task-level parallel scheduling strategies for large-scale data stream processing and to support them efficiently. We propose two different parallel scheduling strategies to handle massive data streams in real time. Additionally, processing massive data streams on GPUs is an energy-intensive computation task, so we treat power efficiency as an important factor in the parallel strategies. We present an approximation method to quantify power efficiency for massive data streams during the computing phase. Finally, we test and compare the two parallel scheduling strategies on a large quantity of synthetic and real stream data. Both the simulation experiments and computation results in practice confirm the accuracy of our analysis of performance and power efficiency.
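
The paper's approximation method is not detailed in this abstract; the sketch below only illustrates the quantity being optimized, treating power efficiency as throughput per watt. The numbers are invented.

```python
def power_efficiency(records_processed: int, elapsed_s: float, avg_power_w: float) -> float:
    """Approximate power efficiency as throughput per watt (records per joule)."""
    throughput = records_processed / elapsed_s   # records per second
    return throughput / avg_power_w              # records per watt-second = records/joule

# Comparing two scheduling strategies on the same workload (invented figures):
print(power_efficiency(10_000_000, 4.2, 180.0))  # strategy A
print(power_efficiency(10_000_000, 3.1, 245.0))  # strategy B
```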


2018 · Vol 26 (4) · pp. 1-17
Author(s): J. K. Verma, C. P. Katti

Increasing demand for high computing power has led to the establishment of large-scale data centers, each a collection of millions of servers. These large-scale data centers consume a huge amount of electrical energy, and managing their servers for automatic, efficient provisioning and de-provisioning of resources is a great challenge. We attempt to minimize power consumption by reducing the number of active servers and maximizing the resource utilization of the servers in use through virtualization of resources. We employ virtual machine consolidation to pack virtual machines onto fewer servers, maximizing resource utilization. In this article, we propose a resource-request-based heuristic for offloading overloaded servers to optimize power consumption efficiency.
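
The article's exact heuristic is not spelled out in this abstract; the Python sketch below shows the general shape of a resource-request-based offloading heuristic: when a server crosses a utilization threshold, its smallest-request VM migrates to the least-utilized server that can take it. The threshold and data structures are assumptions.

```python
OVERLOAD = 0.8  # utilization threshold (assumed)

def offload_overloaded(servers):
    """servers: list of dicts {'capacity': float, 'vms': [resource requests]}."""
    util = lambda s: sum(s["vms"]) / s["capacity"]
    for src in servers:
        while util(src) > OVERLOAD and src["vms"]:
            vm = min(src["vms"])  # offload the smallest resource request first
            # Destinations that can host the VM without themselves overloading.
            candidates = [d for d in servers
                          if d is not src
                          and (sum(d["vms"]) + vm) / d["capacity"] <= OVERLOAD]
            if not candidates:
                break  # nowhere to migrate; leave the server as-is
            dst = min(candidates, key=util)  # least-utilized destination
            src["vms"].remove(vm)
            dst["vms"].append(vm)

servers = [{"capacity": 1.0, "vms": [0.5, 0.4, 0.2]},
           {"capacity": 1.0, "vms": [0.1]},
           {"capacity": 1.0, "vms": []}]
offload_overloaded(servers)
print(servers)
```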


Author(s): HooYoung Ahn, Junsu Kim, YoonJoon Lee

Devices in the IoE (Internet of Everything) environment generate massive data from various sensors. To store and process this rapidly incoming large-scale data, SSDs are used to improve the performance and reliability of storage systems. However, SSDs suffer from a problem called write amplification, caused by their out-of-place update characteristic. As write amplification increases, it degrades I/O performance and shortens SSDs' lifetime. This paper presents a new approach to reducing the write amplification of SSD arrays: a parity update scheme called LPUS. Using parity logs and lazy parity updates, LPUS transforms random parity updates into sequential writes to additional log blocks in SSD arrays. The experimental results show that LPUS reduces write amplification by up to 37% and the number of erases by up to 50% with a reasonable amount of log space.
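
The abstract does not give LPUS's on-device layout; the toy Python sketch below only illustrates the underlying idea of parity logging, in which random in-place parity updates become sequential log appends that are folded into the parity lazily.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ParityLog:
    """Toy parity-logging scheme: random parity updates become sequential appends."""

    def __init__(self, parity: bytes):
        self.parity = parity
        self.log = []  # sequentially written parity deltas

    def update(self, old_data: bytes, new_data: bytes):
        # A data overwrite changes parity by XOR(old, new); append that delta
        # to the log instead of rewriting the parity block in place.
        self.log.append(xor(old_data, new_data))

    def lazy_flush(self):
        # Applied later (e.g., when the log fills), amortizing the parity write.
        for delta in self.log:
            self.parity = xor(self.parity, delta)
        self.log.clear()

p = ParityLog(parity=bytes(4))
p.update(b"\x01\x02\x03\x04", b"\x11\x02\x03\x04")
p.lazy_flush()
print(p.parity.hex())  # parity now reflects the overwrite
```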

