Matching Pragmatic Lithic Analysis and Proper Data Architecture

2021 ◽  
pp. 1-13
Author(s):  
Daniel A. Contreras ◽  
Zachary Batist ◽  
Ciara Zogheib ◽  
Tristan Carter

Abstract The documentation and analysis of archaeological lithics must navigate a basic tension between examining and recording data on individual artifacts or on aggregates of artifacts. This poses a challenge both for artifact processing and for database construction. We present here an R Shiny solution that enables lithic analysts to enter data for both individual artifacts and aggregates of artifacts while maintaining a robust yet flexible data structure. This takes the form of a browser-based database interface that uses R to query existing data and transform new data as necessary so that users entering data of varying resolutions still produce data structured around individual artifacts. We demonstrate the function and efficacy of this tool (termed the Queryable Artifact Recording Interface [QuARI]) using the example of the Stelida Naxos Archaeological Project (SNAP), which, focused on a Paleolithic and Mesolithic chert quarry, has necessarily confronted challenges of processing and analyzing large quantities of lithic material.
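The core transformation the abstract describes — accepting records at varying resolutions while keeping the stored data structured around individual artifacts — can be sketched as follows. This is an illustrative Python sketch with hypothetical field names; QuARI itself is a browser-based R Shiny interface, and its actual schema is not given here.

```python
def expand_aggregate(record):
    """Expand one aggregate lithics record (hypothetical schema) into
    one row per artifact, so that mixed-resolution data entry still
    yields an artifact-level table.

    A record for a single artifact (no 'count' field, or count == 1)
    passes through as a single row."""
    count = record.get("count", 1)
    shared = {k: v for k, v in record.items() if k != "count"}
    # Each artifact inherits the aggregate-level attributes; a real
    # system would also mint fresh unique artifact IDs at this point.
    return [dict(shared, artifact_no=i + 1) for i in range(count)]

# An aggregate entry of three chert pieces becomes three artifact rows:
rows = expand_aggregate({"material": "chert", "context": "SNAP-001", "count": 3})
```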

Author(s):  
Jigar Jain ◽  
Rushikesh Gaidhani ◽  
Pranjal Bahuguna

Non-blocking transactional data structures have been around for quite a while now. With a transactional structure we want to achieve atomicity of the multiple operations of a transaction and consistency of the structure after rollback. Older solutions such as software transactional memory (STM) and transactional boosting provide synchronization at an external layer over the structure itself, introducing overhead that is not strictly necessary. To address this, researchers proposed a solution that makes structural changes to the existing data structure itself, making it transactional and lock-free. In this work we present a sequential re-implementation of that solution. We applied the transactional transformation to a linked list and produced a sequential version of it. Next, we introduced a multi-resource lock version of the same structure, which uses locking to support multi-threaded operation on the linked list. As one would expect, we observe constant-time behavior for the sequential version of the transactional structure. The lock-based multi-threaded version behaves differently: its performance is worse than the sequential version's because of the overhead of maintaining a resource vector. As the number of threads grows, the size of the resource vector grows, and thus contention increases. In future work we plan to fully re-implement a lock-free transactional linked data structure and again compare its results with the results from this paper.
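The multi-resource lock idea the abstract describes can be sketched as follows: a transaction declares every resource its operations will touch, acquires the corresponding locks from a shared lock vector in a fixed order (to avoid deadlock), commits all operations, and releases. This is an assumed design for illustration in Python, not the paper's actual implementation; the class name, resource-hashing scheme, and operation format are all hypothetical.

```python
import threading

class MRLockList:
    """Sketch of a lock-based 'transactional' linked-list wrapper:
    a resource vector of locks, one per hashed key range, acquired
    in index order so that all of a transaction's operations take
    effect atomically with respect to other transactions."""

    def __init__(self, n_resources=4):
        self.locks = [threading.Lock() for _ in range(n_resources)]
        self.items = []  # stand-in for the underlying linked list

    def _resource(self, key):
        # Map a key to a slot in the resource vector.
        return hash(key) % len(self.locks)

    def transaction(self, ops):
        """ops: list of ("insert" | "delete", key) pairs."""
        # Lock every resource the transaction touches, in a fixed
        # (sorted) order to prevent deadlock between transactions.
        needed = sorted({self._resource(k) for _, k in ops})
        for i in needed:
            self.locks[i].acquire()
        try:
            for op, key in ops:
                if op == "insert":
                    self.items.append(key)
                elif op == "delete" and key in self.items:
                    self.items.remove(key)
        finally:
            for i in reversed(needed):
                self.locks[i].release()
```

Note how the cost the abstract mentions shows up directly: every transaction pays for sorting and acquiring a slice of the resource vector, and a larger vector (or more threads contending for the same slots) means more locking overhead.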


2018 ◽  
Author(s):  
Marten Postma ◽  
Joachim Goedhart

Abstract Reporting the actual data in graphs and plots increases transparency and enables independent evaluation. On the other hand, data summaries are often used in graphs because they aid interpretation. State-of-the-art data visualizations can be made with the ggplot2 package, which uses the ideas of a ‘grammar of graphics’ to generate a graphic from multiple layers of data. However, ggplot2 requires coding skills and an understanding of the tidy data structure. To democratize state-of-the-art visualization of raw data with a selection of statistical summaries, a web app was written in R/shiny that uses the ggplot2 package for generating plots. A multilayered approach together with adjustable transparency offers unique flexibility, enabling users to choose how to display the data and which of the data summaries to add. Four data summaries are provided (mean, median, boxplot, violin plot) to accommodate several types of data distributions. In addition, 95% confidence intervals can be added for visual inference. By adjusting the transparency of the layers, the visualization of the raw data together with the summary can be tuned for optimal presentation and interpretation. The app is dubbed PlotsOfData and is available at: https://huygens.science.uva.nl/PlotsOfData/
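Two of the summary layers named above, plus a 95% confidence interval, can be recomputed in a few lines. This Python sketch uses a normal-approximation CI of the mean purely for illustration; the app itself is built on R/ggplot2 and its exact CI method is not specified here.

```python
import math
import statistics

def summaries(data):
    """Compute mean, median, and an approximate 95% CI of the mean
    (mean +/- 1.96 * SEM, normal approximation) for a flat sample."""
    m = statistics.mean(data)
    half = 1.96 * statistics.stdev(data) / math.sqrt(len(data))
    return {
        "mean": m,
        "median": statistics.median(data),
        "ci95": (m - half, m + half),  # interval for visual inference
    }
```

In the app these summaries are drawn as semi-transparent layers over the raw data points, which is what lets the reader judge both the distribution and the summary at once.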


2018 ◽  
Vol 41 ◽  
Author(s):  
Benjamin C. Ruisch ◽  
Rajen A. Anderson ◽  
David A. Pizarro

Abstract We argue that existing data on folk-economic beliefs (FEBs) present challenges to Boyer & Petersen's model. Specifically, the widespread individual variation in endorsement of FEBs casts doubt on the claim that humans are evolutionarily predisposed towards particular economic beliefs. Additionally, the authors' model cannot account for the systematic covariance between certain FEBs, such as those observed in distinct political ideologies.


1975 ◽  
Vol 26 ◽  
pp. 341-380 ◽  
Author(s):  
R. J. Anderle ◽  
M. C. Tanenbaum

Abstract Observations of artificial earth satellites provide a means of establishing an origin, orientation, scale and control points for a coordinate system. Neither existing data nor future data are likely to provide significant information on the .001 angle between the axis of angular momentum and the axis of rotation. Existing data have provided results to about .01 accuracy for the pole position and to possibly a meter for the origin of the system and for control points. The longitude origin is essentially arbitrary. While these accuracies permit acquisition of useful data on tides and polar motion through dynamic analyses, they are inadequate for determination of crustal motion or significant improvement in polar motion. The limitations arise from gravity, drag and radiation forces on the satellites as well as from instrument errors. Improvements in laser equipment and the launch of the dense LAGEOS satellite in an orbit high enough to suppress significant gravity and drag errors will permit determination of crustal motion and more accurate, higher-frequency polar motion. However, the reference frame for the results is likely to be an average reference frame defined by the observing stations, so that significant corrections will have to be determined for the effects of changes in station configuration and data losses.


1988 ◽  
Vol 102 ◽  
pp. 107-110
Author(s):  
A. Burgess ◽  
H.E. Mason ◽  
J.A. Tully

Abstract A new way of critically assessing and compacting data for electron impact excitation of positive ions is proposed. This method allows one (i) to detect possible printing and computational errors in the published tables, (ii) to interpolate and extrapolate the existing data as a function of energy or temperature, and (iii) to simplify considerably the storage and transfer of data without significant loss of information. Theoretical or experimental collision strengths Ω(E) are scaled and then plotted as functions of the colliding electron energy, the entire range of which is conveniently mapped onto the interval (0,1). For a given transition the scaled Ω can be accurately represented - usually to within a fraction of a percent - by a 5-point least squares spline. Further details are given in (2). Similar techniques enable thermally averaged collision strengths Υ(T) to be obtained at arbitrary temperatures in the interval 0 < T < ∞. Application of the method is possible by means of an interactive program with graphical display (2). To illustrate this practical procedure we use the program to treat Ω for the optically allowed transition 2s → 2p in Ar XVI.
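The key trick above is compressing the semi-infinite energy range onto a finite interval so a small spline can represent the whole curve. One simple mapping with this property is sketched below in Python; it is illustrative only, since the published method uses transition-type-specific transforms and an adjustable scaling parameter.

```python
def scaled_energy(E, C=2.0):
    """Map a collision energy E in [0, inf) onto [0, 1):
    x = E / (E + C), with C an adjustable positive constant.
    Threshold (E = 0) maps to x = 0 and the high-energy limit
    to x -> 1, so a handful of spline knots on [0, 1] can cover
    the entire energy range. (Illustrative mapping, not the
    paper's exact transform.)"""
    return E / (E + C)
```

With such a mapping, a 5-point least-squares spline fitted in x can be evaluated at any x and mapped back to an arbitrary energy, which is what makes both the error-checking and the compact storage described above possible.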


2009 ◽  
Vol 29 (S 01) ◽  
pp. S7-S12
Author(s):  
M. Spannagl ◽  
W. Schramm ◽  
H. Krebs ◽  

Summary Since 1978 an annual multicentric survey on the epidemiology of patients suffering from haemophilia has been performed with the support of haemophilia treatment centres of all sizes. The current compilation again rests on a broad database reaching back over 30 years of inquiry, representing both the current and the retrospective status of mortality. Information was collected exclusively on patients with haemophilia A, haemophilia B and von Willebrand disease. In particular, anonymous data covering the last 12 months were requested on the number of treated patients, the type and severity of illness, HIV status, and detailed causes of death. These data were merged with existing data and analyzed statistically. In the 2007/2008 survey, a total


2019 ◽  
Vol 2 (1) ◽  
pp. 1-16
Author(s):  
Anjas Tryana

With the development of technology today, it is very important for every company to plan and develop a system to support its business processes. Achieving the goals of an enterprise involves challenges and changes that require strategies for effective measures and efficient use of resources. One important and increasingly widely used strategy is the use and improvement of information system support for the enterprise. Such a plan can employ an enterprise architecture planning methodology that produces a data architecture, an application architecture, a technology architecture, and the direction of their implementation plan for the enterprise.

CV Biensi Fesyenindo is engaged in garment retail, with branches throughout Indonesia covering Kalimantan, Sulawesi, NTB, NTT, Bali, Java and Sumatra. In their daily activities, they carry out production-to-distribution processes to meet market and employee needs.

The enterprise architecture model used in this study is Enterprise Architecture Planning (EAP). EAP is a process of defining enterprise architecture that focuses on data, application and technology architectures in supporting the business, together with plans to implement those architectures. The EAP method has several stages: planning initiation, business modeling, current systems and technology, data architecture, application architecture, technology architecture, and implementation plans.

The results of this study are information system recommendations for CV Biensi Fesyenindo in the form of an enterprise architecture planning blueprint that defines 5 main business processes and comprises the data architecture, application architecture and technology architecture proposals, organized into 5 chapters and 110 pages.

