Building an organic block storage service at CERN with Ceph

2014 ◽  
Vol 513 (4) ◽  
pp. 042047 ◽  
Author(s):  
Daniel van der Ster ◽  
Arne Wiebalck


2019 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Yuli Anwar

Revenue and cost recognition is one of the most important tasks for an entity; the timing and method of recognition must follow the Financial Accounting Standards. PT. EMKL Jelutung Subur, located in Pangkalpinang, Bangka Belitung province, recognizes revenue and costs on an accrual basis, and its influence on company profits can be seen every year. This research was useful for gathering data and information for preparing this thesis, for improving the author's knowledge, and for comparing accepted theories against the practices applied in the field. The results show that PT. EMKL Jelutung Subur has applied the accrual-basis method of revenue and cost recognition consistently, so that its reported profit is accurate and accountable enough to support developing this expedition business into a better company. The accuracy can be evaluated because all revenues received and costs incurred have clear supporting evidence and are recorded in the proper period. The evaluation also reveals one item missing from the company's revenue and cost recognition: charges to customers who use the storage service temporarily. Some customers keep their goods in the warehouse for a long time, which increases the costs of loading, warehouse maintenance, damaged goods, and shrinkage. If the storage service were charged to these customers, PT. EMKL Jelutung Subur would earn additional revenue to cover those expenses.


Author(s):  
Neha Thakur ◽  
Aman Kumar Sharma

Cloud computing has been envisioned as the definitive solution to the rising storage costs of IT enterprises. There are many cloud computing initiatives from IT giants such as Google, Amazon, Microsoft, and IBM. Integrity monitoring is essential in cloud storage for the same reasons that data integrity is critical for any data centre. Data integrity is defined as the accuracy and consistency of stored data, in the absence of any alteration to the data between two updates of a file or record. In order to ensure the integrity and availability of data in the cloud and enforce the quality of the cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed. To overcome the data integrity problem, many techniques have been proposed under different system and security models. This paper focuses on some of these integrity proving techniques in detail, along with their advantages and disadvantages.
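The simplest instance of the verification idea described above is a checksum comparison: the client records a cryptographic digest before upload and later challenges the stored copy against it. This is only a minimal sketch of the general principle (the surveyed techniques are more sophisticated, e.g. probabilistic and proof-based schemes); the function names are illustrative.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 digest of a stored object."""
    return hashlib.sha256(data).hexdigest()

def verify(stored: bytes, expected_digest: str) -> bool:
    """Re-compute the digest and compare it with the one recorded at upload time."""
    return checksum(stored) == expected_digest

# The client keeps the digest locally, then audits the provider's copy on demand.
original = b"invoice-2023.pdf contents"
digest = checksum(original)

assert verify(original, digest)             # unmodified data passes the audit
assert not verify(original + b"x", digest)  # any alteration is detected
```

A scheme like this detects corruption but requires the verifier to hold (or re-download) the full object, which is exactly the cost that the on-demand verification methods surveyed in the paper aim to avoid.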


2016 ◽  
Vol 24 (2) ◽  
pp. 57-74 ◽  
Author(s):  
Chen-Shu Wang ◽  
Cheng-Yu Lai ◽  
Shiang-Lin Lin

In recent years, mobile devices have become an indispensable part of daily life, and an extensive number of mobile applications (apps) have been developed for them. In terms of the apps' future development and popularization, understanding why people are willing to pay to use certain apps has become an important issue. However, many apps are homogeneous, and people can easily find free substitutes. Consequently, it is an interesting question why individuals intend to pay to use an app. In this study, the authors conducted a survey in Taiwan on individuals' willingness to pay for a Cloud Storage Service (CSS), since CSS is one of the most frequently adopted apps among mobile device users. The results show that both perceived service quality and conformity positively affect perceived value, which in turn indirectly increases users' willingness to pay. The findings also support that users' product knowledge about CSS negatively moderates the effect of perceived value on willingness to pay.


Author(s):  
Amit Warke ◽  
Mohamed Mohamed ◽  
Robert Engel ◽  
Heiko Ludwig ◽  
Wayne Sawdon ◽  
...  

2014 ◽  
Vol 687-691 ◽  
pp. 4906-4909
Author(s):  
Yan Li Wang ◽  
Ji Meng Du ◽  
Sai Sai Xu

Logistics and storage management suffers from a vast number of business processes, difficult tracking, low turnover efficiency, and untimely processing of logistics management information with antiquated means. Based on an analysis of these shortcomings, the application of advanced RFID technology to logistics and storage management is proposed to solve the above problems. In this paper, a logistics warehouse management information system based on RFID is developed. The workflow and process structure of logistics and storage are given on the basis of an analysis of the traditional logistics and storage service flow, and an integral framework of the RFID-based logistics and storage service system is established. The thesis mainly researches the pattern of RFID application in the warehouse, the system framework, and the information flow; on this basis, it designs the information system and realizes its function modules.


2016 ◽  
Vol 27 (3) ◽  
pp. e1932 ◽  
Author(s):  
Konrad Karolewicz ◽  
Andrzej Beben ◽  
Jordi Mongay Batalla ◽  
George Mastorakis ◽  
Constandinos X. Mavromoustakis

2017 ◽  
pp. 197-219 ◽  
Author(s):  
Surya Nepal ◽  
Shiping Chen ◽  
Jinhui Yao

2020 ◽  
Vol 245 ◽  
pp. 04017
Author(s):  
Dario Barberis ◽  
Igor Aleksandrov ◽  
Evgeny Alexandrov ◽  
Zbigniew Baranowski ◽  
Gancho Dimitrov ◽  
...  

The ATLAS EventIndex was designed in 2012-2013 to provide a global event catalogue and limited event-level metadata for ATLAS analysis groups and users during LHC Run 2 (2015-2018). It provides a good and reliable service for the initial use cases (mainly event picking) and several additional ones, such as production consistency checks, duplicate event detection, and measurements of the overlaps of trigger chains and derivation datasets. LHC Run 3, starting in 2021, will see increased data-taking and simulation production rates; the current infrastructure would still cope with them but may be stretched to its limits by the end of Run 3. This proceeding describes the implementation of a new core storage service that will provide at least the same functionality as the current one at increased data ingestion and search rates, and with increasing volumes of stored data. It is based on a set of HBase tables, with schemas derived from the current Oracle implementation, coupled with Apache Phoenix for data access; in this way the advantages of a BigData-based storage system are combined with the possibility of both SQL and NoSQL data access, allowing most of the existing code for metadata integration to be re-used.

