Improving ALMA’s data processing efficiency using a holistic approach

Author(s):  
Theodoros Nakos ◽  
Harold Francke ◽  
Kouichiro Nakanishi ◽  
Dirk Petry ◽  
Thomas Stanke ◽  
...  
2015 ◽  
Vol 13 (03) ◽  
Author(s):  
Debbie Deborah S. Mokolinug ◽  
Novi S. Budiarso

Breakthroughs in the application of information technology to taxation continue to be made in order to facilitate, improve, and optimize services to taxpayers, and many factors influence whether such technology succeeds. A recent line of research analyzes the success of information systems established by the Directorate General of Taxation (DGT), one of which is e-SPT. This study was conducted to verify the success of the e-SPT system as used by taxable entrepreneurs. Its purpose was to determine the effect of the practicality, ease of calculation, ease of reporting, and reliability of the e-SPT tax return on tax data processing efficiency. The analytical method used is multiple regression analysis. The results show that practicality and ease of calculation significantly affect tax data processing efficiency among taxable entrepreneurs in Tomohon, while ease of reporting and reliability of e-SPT do not.
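The multiple regression setup described above can be sketched as follows. This is a minimal illustration with synthetic ratings, not the study's data: the four predictors stand in for practicality, ease of calculation, ease of reporting, and reliability, and all coefficient values are hypothetical.

```python
import numpy as np

# Hypothetical survey data: each row is one taxable entrepreneur's ratings for
# practicality, ease of calculation, ease of reporting, reliability (X),
# and perceived data processing efficiency (y).
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(50, 4))
true_beta = np.array([0.6, 0.5, 0.05, 0.05])   # only the first two matter here
y = 1.0 + X @ true_beta + rng.normal(0, 0.2, 50)

# Fit y = b0 + b1*x1 + ... + b4*x4 by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(beta, 2))  # intercept followed by the four slope estimates
```

In the study itself, significance would then be judged from the t-statistics of each coefficient rather than the point estimates alone.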


2021 ◽  
Author(s):  
Hongjie Zheng ◽  
Hanyu Chang ◽  
Yongqiang Yuan ◽  
Qingyun Wang ◽  
Yuhao Li ◽  
...  

Global navigation satellite systems (GNSS) have been playing an indispensable role in providing positioning, navigation and timing (PNT) services to global users. Over the past few years, GNSS have developed rapidly, with denser networks, modernized constellations, and multi-frequency observations. To take full advantage of multi-constellation and multi-frequency GNSS, several new mathematical models have been developed, such as multi-frequency ambiguity resolution (AR) and uncombined data processing with raw observations. In addition, new GNSS products, including the uncalibrated phase delay (UPD), the observable-specific signal bias (OSB), and the integer recovery clock (IRC), have been generated and provided by analysis centers to support advanced GNSS applications.

However, the increasing number of GNSS observations poses a great challenge to the fast generation of multi-constellation and multi-frequency products. In this study, we propose an efficient solution that realizes fast updating of multi-GNSS real-time products by making full use of advanced computing techniques. Firstly, instead of the traditional vector operations, the "level-3 operations" (matrix by matrix) of the Basic Linear Algebra Subprograms (BLAS) are used as much as possible in the least squares (LSQ) processing, which improves efficiency through central processing unit (CPU) optimization and faster memory data transmission. Furthermore, most steps of multi-GNSS data processing are transformed from serial to parallel mode to take advantage of multi-core CPU architectures and graphics processing unit (GPU) computing resources. Moreover, we choose the OpenBLAS library for matrix computation, as it performs well in parallel environments.

The proposed method is then validated on a 3.30 GHz AMD CPU with 6 cores. The results demonstrate that the proposed method can substantially improve the processing efficiency of multi-GNSS product generation. For the precise orbit determination (POD) solution with 150 ground stations and 128 satellites (GPS/BDS/Galileo/GLONASS/QZSS) in ionosphere-free (IF) mode, the processing time can be shortened from 50 to 10 minutes, which guarantees the hourly updating of multi-GNSS ultra-rapid orbit products. The processing time of uncombined POD can also be reduced by about 80%. Meanwhile, multi-GNSS real-time clock products can easily be generated at 5-second or even higher sampling rates. In addition, the processing efficiency of UPD and OSB product generation can be increased by a factor of 4 to 6.
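The level-1/level-3 distinction at the heart of this speedup can be seen in a few lines of NumPy, which delegates matrix products to an optimized BLAS (such as OpenBLAS). This is only a sketch of the principle, not the authors' LSQ code: the same product is computed once as many matrix-vector (level-2) calls and once as a single matrix-matrix (level-3) GEMM call, which BLAS implementations block for cache reuse and run across cores.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((400, 300))
B = rng.standard_normal((300, 200))

# Level-2 style: build the product column by column (many matrix-vector calls).
C_cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

# Level-3 style: one matrix-matrix (GEMM) call over the whole block.
C_gemm = A @ B

print(np.allclose(C_cols, C_gemm))  # same result, very different cost at scale
```

In a large LSQ adjustment, the payoff comes from batching normal-equation updates so that they hit the GEMM path instead of repeated vector operations.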


2019 ◽  
Vol 8 (S1) ◽  
pp. 87-88
Author(s):  
S. Annapoorani ◽  
B. Srinivasan

This paper is concerned with the study and implementation of an effective data emplacement algorithm for very large databases (Big Data) and proposes a model for improving data processing efficiency and storage utilization under dynamic load imbalance among nodes in a heterogeneous cloud environment. In the era of explosive information growth, more and more fields need to deal with massive, large-scale data. The proposed method, an improved data placement scheme called the Effective Data Emplacement Algorithm, takes the computing capacity of each node as the predominant factor and thereby improves the efficiency of processing large data sets within a short time. The adaptability of the proposed model is obtained by minimizing processing time through the computing capacity of each node in the cluster. Experimental results with word count applications show that the proposed solution improves the performance of a heterogeneous cluster environment by effectively distributing data based on performance-oriented sampling.
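The core idea of capacity-driven placement can be sketched simply: give each node a share of the data blocks proportional to its measured computing capacity, so faster nodes finish their partitions at roughly the same time as slower ones. The node names, capacity scores, and rounding rule below are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of capacity-weighted data placement in a heterogeneous cluster.
def place_blocks(num_blocks, capacities):
    total = sum(capacities.values())
    # Ideal (fractional) share per node, then round down and hand the
    # remainder to the nodes with the largest fractional parts.
    shares = {n: num_blocks * c / total for n, c in capacities.items()}
    alloc = {n: int(s) for n, s in shares.items()}
    leftover = num_blocks - sum(alloc.values())
    for n in sorted(shares, key=lambda n: shares[n] - alloc[n], reverse=True)[:leftover]:
        alloc[n] += 1
    return alloc

nodes = {"node-a": 8.0, "node-b": 4.0, "node-c": 2.0}   # relative capacity (hypothetical)
print(place_blocks(700, nodes))  # → {'node-a': 400, 'node-b': 200, 'node-c': 100}
```

A real system would refresh the capacity scores from sampled task runtimes, which is where the paper's performance-oriented sampling comes in.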


Author(s):  
Q. Song ◽  
Y. G. Hu ◽  
M. L. Hou

Abstract. The ancient city wall carries rich cultural value. Owing to environmental and human factors, ancient city walls suffer from many forms of deterioration, such as bulging and cracking, which can lead to their collapse or even total loss, so their monitoring and protection is urgent. This paper proposes a new scheme for monitoring wall bulging. A reference plane is fitted to the actual scan data; the degree, trend, and extent of the bulging are then determined, and the bulging deformation of the city wall is displayed in the form of an image. The scheme simplifies the workflow, improves data processing efficiency, and presents the results more intuitively.
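The plane-fitting step can be sketched as follows: fit a reference plane to the wall's scan points by least squares, then report each point's out-of-plane deviation, with positive values indicating outward bulging. The synthetic point cloud and the simple z = ax + by + c plane model are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def plane_deviation(points):
    # Fit z = a*x + b*y + c to the (x, y, z) point cloud by least squares.
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, c = coef
    # Signed distance of each point from the fitted plane.
    return (points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)) / np.sqrt(a**2 + b**2 + 1)

# Synthetic wall scan: a gently tilted surface with a bulge near its centre.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.05 * xy[:, 0] + np.exp(-((xy[:, 0] - 5)**2 + (xy[:, 1] - 5)**2))
dev = plane_deviation(np.column_stack([xy, z]))
print(round(float(dev.max()), 2))  # largest outward deviation, near the bulge
```

Rendering `dev` as a colour map over the wall face gives exactly the kind of bulging image the scheme describes.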


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yanchao Rao ◽  
Ken Huijin Guo

Purpose The US Securities and Exchange Commission (SEC) requires public companies to file structured data in eXtensible Business Reporting Language (XBRL). One of the key arguments behind the XBRL mandate is that the technical standard can help data aggregators improve their processing efficiency. This paper aims to empirically test the data processing efficiency hypothesis. Design/methodology/approach To test the hypothesis, the authors adopt a two-sample research design using data from Compustat: a pooled sample (N = 61,898) and a quasi-experimental sample (N = 564). The authors measure data processing efficiency as the time lag between the dates of 10-K filings on the SEC’s EDGAR system and the dates on which the related data are finalized in the Compustat database. Findings The statistical results show that, after controlling for potential effects of firm size, age, fiscal year and industry, XBRL has a non-significant impact on data processing efficiency, suggesting that this benefit may have been overestimated. Originality/value This study provides timely empirical evidence for the debate as to whether XBRL can improve data processing efficiency. The non-significant results suggest that it may be necessary to revisit the mandate of XBRL reporting in the USA and many other countries.
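The study's efficiency measure reduces to a simple date difference per firm-year. A minimal sketch, with entirely hypothetical firms and dates standing in for EDGAR filing dates and Compustat finalization dates:

```python
from datetime import date

# Hypothetical example of the paper's dependent variable: the lag in days
# between a firm's 10-K filing on EDGAR and its data being finalized in Compustat.
filings = {"FirmA": date(2019, 2, 20), "FirmB": date(2019, 3, 1)}
finalized = {"FirmA": date(2019, 3, 4), "FirmB": date(2019, 3, 9)}

lags = {firm: (finalized[firm] - filings[firm]).days for firm in filings}
print(lags)  # → {'FirmA': 12, 'FirmB': 8}
```

The regression then asks whether this lag shrinks for XBRL filings once firm size, age, fiscal year and industry are controlled for; the paper finds it does not.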


2019 ◽  
Vol 31 (1) ◽  
pp. 21-28
Author(s):  
Isabella Toschi

Abstract. While the use of oblique aerial camera systems for 3D capture of urban areas is steadily growing, their combination with a LiDAR unit has the potential to move the airborne mapping sector a step forward. To fully exploit the complementary behaviour of the sensors, a new perspective should be adopted that looks beyond the traditional data processing chains and extends them towards a hybrid data processing concept. Assisted tie-point matching, integrated sensor orientation and augmented 3D reconstruction are the keystones of a rigorous workflow for hybrid sensors. They should all rely on a deep understanding of the different properties of active and passive 3D imaging, and of the uncertainty components in their measurements. The paper focuses on the most recent answers to these issues, which open new opportunities for boosting the quality of geospatial products with respect to completeness, geometric quality, object detection and processing efficiency.


2018 ◽  
Vol 18 (3) ◽  
pp. 715-724 ◽  
Author(s):  
Xiao Li ◽  
Xin Liu ◽  
Clyde Zhengdao Li ◽  
Zhumin Hu ◽  
Geoffrey Qiping Shen ◽  
...  

Foundation pit displacement is a critical safety risk for both building structures and human lives. Accurate displacement monitoring and prediction of a deep foundation pit are essential to prevent potential risks at an early construction stage, and machine learning methods are extensively applied for this purpose. However, these approaches, such as support vector machines, have limitations in terms of data processing efficiency and prediction accuracy. As an emerging variant of the support vector machine, the least squares support vector machine improves data processing efficiency by using equality constraints and a least squares loss function. However, the accuracy of this approach relies heavily on a large volume of influencing factors measured at adjacent critical points, which is not normally available during the construction process. To address this issue, this study proposes an improved least squares support vector machine algorithm based on multi-point measuring techniques, namely the multi-point least squares support vector machine. To evaluate its effectiveness, a real project was selected as a case study, and the results illustrate that the multi-point least squares support vector machine on average outperformed its single-point counterpart in prediction accuracy during foundation pit monitoring and prediction.
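The efficiency advantage of the least squares SVM is that its equality constraints turn training into a single linear system instead of a quadratic program. A minimal regression sketch with a linear kernel follows; the synthetic data and the regularization value `gamma` are assumptions for illustration, and the multi-point extension in the paper is not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0):
    # Solve the standard LSSVM dual system:
    # [ 0   1^T        ] [b    ]   [0]
    # [ 1   K + I/gamma] [alpha] = [y]
    n = len(y)
    K = X @ X.T                                     # linear kernel matrix
    top = np.concatenate([[0.0], np.ones(n)])
    body = np.column_stack([np.ones(n), K + np.eye(n) / gamma])
    sol = np.linalg.solve(np.vstack([top, body]), np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                          # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new):
    return X_new @ X_train.T @ alpha + b

# Synthetic monitoring-style data: two influencing factors, one displacement target.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.3 + rng.normal(0, 0.05, 40)
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, alpha, b, X)
print(round(float(np.mean((pred - y) ** 2)), 4))    # small training error
```

The multi-point variant would stack measurements from several adjacent monitoring points into `X`, which is exactly the extra information the single-point model lacks.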


Author(s):  
Wei Wang ◽  
Hui Lin ◽  
Junshu Wang

Abstract. At present, the number of vehicle owners is increasing, and cars with autonomous driving functions are attracting more and more attention. Lane detection combined with cloud computing can overcome the drawbacks of traditional lane detection, which relies on hand-crafted feature extraction and high-definition imagery, but it also faces the problem of excessive computation. At the same time, combining cloud data processing with edge computing can effectively reduce the computing load of the central nodes. This work improves on the traditional lane detection method, using a popular convolutional neural network (CNN) to build a dual model based on instance segmentation. In the image acquisition and processing stages, the distributed computing architecture provided by edge-cloud computing is used to improve data processing efficiency. The lane fitting process generates a variable matrix to achieve effective detection under slope changes, which improves the real-time performance of lane detection. The proposed method achieves good recognition results for lanes in different scenarios, and its lane recognition efficiency is much better than that of other lane recognition models.
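After instance segmentation labels the lane pixels, each lane is commonly recovered by fitting a low-order polynomial to the pixel coordinates; a minimal sketch of that fitting step is shown below with synthetic "lane pixel" points. The image dimensions, curve coefficients, and noise level are hypothetical, and the paper's variable-matrix adjustment for slope changes is not modelled here.

```python
import numpy as np

# Synthetic lane pixels: x position as a gentle quadratic in the image row y,
# plus segmentation noise.
rng = np.random.default_rng(4)
y_pix = np.linspace(0, 480, 60)                      # image rows, top to bottom
x_pix = 300 + 0.2 * y_pix + 4e-4 * y_pix**2 + rng.normal(0, 2.0, 60)

coeffs = np.polyfit(y_pix, x_pix, deg=2)             # fit x = a*y^2 + b*y + c
lane = np.poly1d(coeffs)

# Evaluate the fitted lane at the bottom of the image (closest to the car).
print(round(float(lane(480)), 1))
```

Fitting x as a function of y (rather than the reverse) is the usual choice because lanes are near-vertical in the image, so the mapping stays single-valued.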

