data granularity
Recently Published Documents


TOTAL DOCUMENTS: 57 (five years: 29)

H-INDEX: 8 (five years: 2)

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Andreas Hecht

Purpose: Empirical evidence on the determinants of corporate FX speculation is ambiguous. We note that the conflicting findings of prior studies could be the result of different methodologies for determining speculation. Using a novel approach to defining speculative activities, we seek to help solve the puzzle of the determinants of speculation and examine which firms engage in such activities and why they do so.

Design/methodology/approach: This paper examines an unexplored regulatory environment that contains publicly reported FX risk data on firms' exposures before and after hedging, per year and currency. This unprecedented data granularity allows us to use actual reported volumes instead of proxy variables in defining speculation and to examine whether the convexity theories are empirically supported in FX risk management.

Findings: We find that frequent speculators are smaller, have more growth opportunities and possess lower internal resources, which provides unprecedented empirical evidence for the convexity theories in FX risk management. Further, we provide evidence that corporate speculation might be linked to the application of hedge accounting.

Practical implications: We help answer the questions of which firms engage in speculative activities and why. This can provide valuable information to stakeholders such as financial analysts, investors and regulators, which can help prevent damaging corporate losses and curb excessive speculative financial activities.

Originality/value: To address the unresolved issue of the determinants of speculation, this paper is the first to use openly available accounting data with actual reported FX exposure information before and after hedging in defining speculation, instead of relying on proxy variables for FX exposure and derivative usage with potential estimation errors.
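The abstract does not spell out the exact decision rule, but one plausible, purely illustrative operationalization of "speculation" from reported exposures before and after hedging might look like the sketch below; all column names, figures and the flagging rule are hypothetical, not the paper's definition.

```python
import pandas as pd

# Hypothetical layout of publicly reported FX risk data:
# one row per firm, year and currency, with exposure before and after hedging.
fx = pd.DataFrame({
    "firm":            ["A", "A", "B", "B"],
    "year":            [2019, 2020, 2019, 2020],
    "currency":        ["USD", "USD", "JPY", "JPY"],
    "exposure_before": [100.0, 80.0, 50.0, 60.0],   # gross exposure before hedging
    "exposure_after":  [20.0, 95.0, 10.0, 75.0],    # exposure remaining after hedging
})

# Assumed rule: hedging should reduce exposure, so an observation in which exposure
# after hedging exceeds exposure before hedging is flagged as speculative.
fx["speculative"] = fx["exposure_after"].abs() > fx["exposure_before"].abs()

# "Frequent speculators": firms flagged in more than half of their observations.
frequent = fx.groupby("firm")["speculative"].mean()
print(frequent[frequent > 0.5])
```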


2021 ◽  
Author(s):  
Zeeshan Ahmed

Advancing the frontiers of clinical research, we discuss the need for intelligent health systems to support a deeper investigation of COVID-19. We hypothesize that the convergence of healthcare data and staggering developments in artificial intelligence has the potential to elevate the recovery process with diagnostic and predictive analysis that identifies major causes of mortality, modifiable risk factors, and actionable information supporting the early detection and prevention of COVID-19. Current constraints, however, include the recruitment of COVID-19 patients for research; the translational integration of electronic health records and diversified public datasets; and the development of artificial intelligence systems for data-intensive computational modeling to assist clinical decision making. We propose a novel nexus of machine learning algorithms to examine COVID-19 data granularity from population studies to subgroup stratification and to ensure the best modeling strategies within the data continuum.


2021 ◽  
Vol 3 ◽  
Author(s):  
Wouter Lueks ◽  
Justus Benzler ◽  
Dan Bogdanov ◽  
Göran Kirchner ◽  
Raquel Lucas ◽  
...  

Digital proximity tracing (DPT) for SARS-CoV-2 pandemic mitigation is a complex intervention whose primary goal is to notify app users about possible risk exposures to infected persons. DPT relies not only on the technical functioning of the proximity tracing application and its backend server, but also on seamless integration of health system processes such as laboratory testing, communication of results (and their validation), generation of notification codes, manual contact tracing, and management of app-notified users. Policymakers and DPT operators need to know whether their system works as expected in terms of speed or yield (performance) and whether DPT makes an effective contribution to pandemic mitigation (also in comparison to, and beyond, established mitigation measures, particularly manual contact tracing). Performance and effectiveness, however, should not be confused: there are not only conceptual differences but also diverse data requirements. For example, comparative effectiveness measures may require information generated outside the DPT system, e.g., from manual contact tracing. This article describes the differences between performance and effectiveness measures and attempts to develop a terminology and classification system for DPT evaluation. We discuss key aspects for critically assessing whether integrating additional data measurements into DPT apps may facilitate understanding of the performance and effectiveness of planned and deployed DPT apps. The terminology and classification system may thus offer guidance to DPT system operators on which measurements to prioritize. DPT developers and operators may also make conscious decisions to integrate measures for epidemic monitoring, but should be aware that this introduces a secondary purpose to DPT. Ultimately, integrating further information (e.g., exact exposure time) into DPT involves a trade-off between data granularity and linkage on the one hand and privacy on the other. More data may yield better epidemiological information but may also increase the privacy risks associated with the system, and thus decrease public acceptance of DPT. Decision-makers should be aware of this trade-off and take it into account when planning and developing DPT systems or assessing the added value of DPT relative to existing contact tracing systems.


2021 ◽  
Vol 15 (4) ◽  
pp. 817-829 ◽  
Author(s):  
Davide Cirillo ◽  
Iker Núñez‐Carpintero ◽  
Alfonso Valencia

Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 377
Author(s):  
Ignacio González García ◽  
Alfonso Mateos Caballero

In this research, we used Spanish wealth distribution microdata for the period 2015–2020 to provide a general framework for comparing different models and explaining different empirical datasets related to wealth distribution. We present a methodology to estimate the current value of the assets and shareholdings held by the population in order to calculate their real, up-to-date distribution. We propose a new methodology for mixture analysis, in which we identify and analyze subpopulations and then study their influence on wealth distribution. We use concepts of symmetry to identify two internal processes characteristic of the wealth accumulation of the subpopulations of entrepreneurs and non-entrepreneurs. Finally, we propose a method to adjust these results to other empirical data from other countries and periods, providing a methodology for comparing results obtained from data with differing granularity.
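The paper's specific mixture model is not reproduced here; as a generic sketch of the underlying idea, separating subpopulations (for example entrepreneurs versus non-entrepreneurs) whose wealth follows different component distributions, one could fit a two-component lognormal mixture to synthetic data, for instance:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for wealth microdata: two lognormal subpopulations
# (the Spanish microdata and the paper's exact model are not reproduced here).
non_entrepreneurs = rng.lognormal(mean=10.5, sigma=0.8, size=8000)
entrepreneurs     = rng.lognormal(mean=12.0, sigma=1.2, size=2000)
wealth = np.concatenate([non_entrepreneurs, entrepreneurs])

# Fitting a two-component Gaussian mixture on log-wealth is equivalent to
# fitting a lognormal mixture on wealth.
log_w = np.log(wealth).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(log_w)

# Each component can then be analysed as a candidate subpopulation.
for k in range(2):
    print(f"component {k}: weight={gm.weights_[k]:.2f}, "
          f"median wealth ~ {np.exp(gm.means_[k, 0]):,.0f}")
```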


2021 ◽  
Vol 25 (1) ◽  
pp. 1305-1316
Author(s):  
Xue Mei ◽  
Carlos Jimenez-Bescos

Degree-days are used to normalise energy consumption data and to forecast energy demand, enabling comparisons between properties across different locations and years. The base temperature is the main factor governing the accuracy of degree-days. The aim of this study was to evaluate how data granularity affects the correlation between energy consumption and degree-days. Degree-days were calculated using the standard 18.3 °C base temperature adopted in the United States and compared against calculations using base temperatures derived from hourly, daily and monthly indoor temperature data. The methodology is based on the analysis of 23 houses located in Austin, Texas, from different construction periods and with a variety of total floor areas. The study demonstrates the effect of the granularity of the data used to generate degree-days on the correlation between energy consumption and degree-days for different base temperatures. Although the highest correlations are achieved at monthly granularity, this approach is not recommended because of the small number of data points; a daily approach is preferable and yields a more reliable correlation. Higher correlation values were obtained with the standard 18.3 °C base temperature, 70 % at daily granularity versus 56.67 % using indoor temperature, and the standard base temperature outperformed the indoor-temperature-based approach at all granularity levels.
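The study's exact procedure (deriving base temperatures from measured indoor data) is not reproduced here; as a simpler illustration of how granularity enters a degree-day calculation, the sketch below computes heating degree-days against the fixed 18.3 °C base from synthetic hourly, daily and monthly temperatures (not the Austin dataset).

```python
import numpy as np
import pandas as pd

BASE_TEMP = 18.3  # standard US base temperature used in the study, in degrees C

# Synthetic hourly outdoor temperatures for one year (a stand-in for real data).
idx = pd.date_range("2020-01-01", "2020-12-31 23:00", freq="h")
rng = np.random.default_rng(0)
temp = pd.Series(
    15 + 10 * np.sin(2 * np.pi * (idx.dayofyear - 100) / 365) + rng.normal(0, 2, len(idx)),
    index=idx,
)

# Heating degree-days at three granularities: finer input data captures more of the
# variation around the base temperature within each period.
hdd_hourly = (BASE_TEMP - temp).clip(lower=0).sum() / 24                 # hourly deficits summed to day-equivalents
hdd_daily = (BASE_TEMP - temp.resample("D").mean()).clip(lower=0).sum()  # one value per day

monthly_mean = temp.resample("MS").mean()
days_in_month = temp.resample("MS").count() / 24
hdd_monthly = ((BASE_TEMP - monthly_mean).clip(lower=0) * days_in_month).sum()

print(f"HDD hourly: {hdd_hourly:.0f}, daily: {hdd_daily:.0f}, monthly: {hdd_monthly:.0f}")
```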


Author(s):  
Oluwaseun David Adepoju ◽  
Ruti Dauphin

Data has been called the new oil that drives development both in academia and in the business world. It has been used to develop new products across all sectors of the global economy and, more recently, in agriculture through precision agriculture and robotic farming. However, no matter how much potential for change and development a dataset contains, it is essential to apply the right interpretation techniques to make the data useful and meaningful. This chapter explores themes such as data visualization tools and data granularity. It also explains the advantages of data visualization and the types of data African libraries should be collecting.


Author(s):  
Olga Yakimova ◽  
Timofey Samsonov ◽  
Daniil Potemkin ◽  
Elina Usmanova

The article addresses the problem of evaluating the detail of spatial data. In geoinformatics, the detail of spatial data determines how thoroughly a particular object is represented in a map image, and a detail score makes it possible to assess whether the accuracy of spatial objects is acceptable for a specific user task. An approach to defining the concept of detail is proposed. The evaluation of an object's detail depends on its geometric, semantic, and topological characteristics. A study was conducted to select the geometric characteristics of an object that reflect its detail. For linear objects, in addition to characteristics of the line as a whole (length, number of points, sinuosity, average rotation angle), it is proposed to consider smaller elements such as bends and triplets. A bend is a section of a line in which the rotation angle keeps the same sign; a triplet is a combination of three consecutive points. Based on the results of the study, the geometric characteristics that change consistently with scale were selected. The paper presents software developed for assessing map detail: the MapAnalyser toolbar for the QGIS geoinformation system. Its functional capabilities are described. The toolbar computes the geometric, semantic, and topological characteristics of a layer or set of layers, and evaluates the graphical complexity of a map image based on RLE encoding. The code is written in Python using PyQGIS. The software has passed state registration and is hosted on GitHub. With its help, new results were obtained on the evaluation of spatial data granularity. The software, embedded in QGIS, assesses the detail of maps and spatial data by taking into account geometric and symbology (display) parameters, calculates spatial data detail metrics, and assesses the complexity of the cartographic image. It can be used to integrate data obtained from different sources, to check whether data detail matches the map scale, and to assess map complexity for different purposes and scales.
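The MapAnalyser toolbar itself is not reproduced here; the following is a minimal stand-alone sketch (plain Python, outside QGIS) of the kind of per-line characteristics the article names: length, number of points, sinuosity, average rotation angle, and bends counted as maximal runs of same-signed turning.

```python
import numpy as np

def line_characteristics(points):
    """Geometric detail measures for a polyline given as an (n, 2) sequence of vertices."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)                      # segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    length = seg_len.sum()
    straight = np.hypot(*(pts[-1] - pts[0]))        # straight-line distance between endpoints
    sinuosity = length / straight if straight > 0 else np.inf

    # Signed turn angle at each interior vertex, from consecutive segment headings.
    headings = np.arctan2(seg[:, 1], seg[:, 0])
    turns = np.diff(headings)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]

    # A bend is a maximal run of turns with the same sign: count sign changes + 1.
    signs = np.sign(turns[turns != 0])
    bends = 1 + int(np.sum(signs[1:] != signs[:-1])) if signs.size else 0

    return {
        "n_points": len(pts),
        "length": float(length),
        "sinuosity": float(sinuosity),
        "mean_abs_turn_deg": float(np.degrees(np.abs(turns)).mean()) if turns.size else 0.0,
        "n_bends": bends,
    }

# Example: a small zig-zag line with three bends.
print(line_characteristics([(0, 0), (1, 0.5), (2, 0), (3, 0.6), (4, 0)]))
```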

