Non-Synoptic Winds Data Basis

Author(s):  
Uwe Ulbrich ◽  
Edmund P. Meredith

A high-quality data basis is essential for reliable assessment of non-synoptic wind hazards and determination of any mitigation measures needed. Common data sources, however, often come with many shortcomings, which, if not taken into account, may lead to unsound estimation of risks from non-synoptic wind hazards. In this chapter, the range of potential data sources for assessing non-synoptic winds is discussed, including observational and model-based products. Observational products include station-based observational networks and remote sensing techniques, while model products range from global analyses to high-resolution large-eddy simulations. Both traditional and latest generation products are presented, including an explanation of how the respective data are produced and any limitations that end users should be aware of when working with such data. Sources of data deficiencies are additionally discussed, as well as factors to consider when assessing the suitability of a chosen data source as a basis for decision-making (e.g., its representativeness).

BMJ Open ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. e034400
Author(s):  
Marianne Gillam ◽  
Matthew Leach ◽  
Jessica Muller ◽  
David Gonzalez-Chica ◽  
Martin Jones ◽  
...  

Introduction: The health workforce is an integral component of the healthcare system. Comprehensive, high-quality data on the health workforce are essential for identifying gaps in health service provision, as well as for informing future health workforce and health services planning, and health policy. While many data sources are used in Australia for these purposes, the quality of the data sources with respect to relevance, accessibility and accuracy is not clear.

Methods and analysis: This scoping review aims to identify and appraise publicly available data sources describing the Australian health workforce. The review will include any data source (e.g., registry, administrative database or survey) or document reporting a data source (e.g., journal article or report) on the Australian health workforce which is publicly available and describes the characteristics of the workforce. The search will be conducted in 10 bibliographic databases and the grey literature using an iterative process. Screening of titles and abstracts will be undertaken independently by two investigators using Covidence software. Any disagreement between investigators will be resolved by a third investigator. Documents/data sources identified as potentially eligible will be retrieved in full text and reviewed following the same process. Data will be extracted using a customised data extraction tool. A customised appraisal tool will be used to assess the relevance, accessibility and accuracy of included data sources.

Ethics and dissemination: The scoping review is a secondary analysis of existing, publicly available data sources and does not require ethics approval. The findings of this scoping review will further our understanding of the quality and availability of data sources used for health workforce and health services planning in Australia. The results will be submitted for publication in peer-reviewed journals and presented at conferences targeted at health workforce and public health topics.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Muhammad Sajid Qureshi ◽  
Ali Daud ◽  
Malik Khizar Hayat ◽  
Muhammad Tanvir Afzal

Purpose: Academic rankings face various issues, including the use of data sources that are not publicly verifiable, subjective parameters, a narrow focus on research productivity, and regional biases. This research work is intended to enhance the credibility of the ranking process by using objective indicators based on publicly verifiable data sources.

Design/methodology/approach: The proposed ranking methodology, OpenRank, derives objective indicators from two well-known publicly verifiable data repositories: ArnetMiner and DBpedia.

Findings: The resultant academic ranking reflects common tendencies of the international academic rankings published by the Shanghai Ranking Consultancy (SRC), Quacquarelli Symonds (QS) and Times Higher Education (THE). Evaluation of the proposed methodology demonstrates its effectiveness and quick reproducibility with low data-collection cost.

Research limitations/implications: Implementation of the OpenRank methodology faced the issue of the availability of quality data. In future, the accuracy of academic rankings can be further improved by employing more relevant public data sources such as the Microsoft Academic Graph, the millions of graduates' profiles available on LinkedIn, and the bibliographic data maintained by the Association for Computing Machinery and Scopus.

Practical implications: The suggested use of open data sources would offer new dimensions for evaluating the academic performance of higher education institutions (HEIs) and a more comprehensive understanding of the catalyst factors in higher education.

Social implications: The research work highlights the need for a purpose-built, publicly verifiable electronic data source for performance evaluation of global HEIs. Availability of such a global database would support better academic planning, monitoring and analysis, and would enable more transparent, reliable and less controversial academic rankings.

Originality/value: We propose a solution for improving the HEI ranking process by making the following contributions: (1) enhancing the credibility of the ranking results by employing only objective performance indicators extracted from publicly verifiable data sources, (2) developing an academic ranking methodology based on objective indicators using two well-known data repositories, DBpedia and ArnetMiner, and (3) demonstrating the effectiveness of the proposed ranking methodology on real data sources.
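As a rough illustration of the approach this abstract describes, the sketch below combines publicly verifiable per-institution indicators into a composite rank. The input file, column names and weights are hypothetical placeholders, not OpenRank's actual indicators or weighting:

```python
# Illustrative sketch only: combining publicly verifiable indicators into a
# composite institutional score. The input file, columns and weights are
# hypothetical assumptions, not OpenRank's actual procedure.
import pandas as pd

# Hypothetical per-institution indicators extracted from ArnetMiner/DBpedia.
df = pd.read_csv("indicators.csv")  # assumed columns: institution, pubs, cites, alumni

weights = {"pubs": 0.4, "cites": 0.4, "alumni": 0.2}  # assumed weighting

# Min-max normalise each indicator so no single scale dominates.
for col in weights:
    lo, hi = df[col].min(), df[col].max()
    df[col + "_norm"] = (df[col] - lo) / (hi - lo)

# Weighted sum of normalised indicators -> composite score -> rank.
df["score"] = sum(w * df[c + "_norm"] for c, w in weights.items())
df["rank"] = df["score"].rank(ascending=False, method="min").astype(int)
print(df.sort_values("rank")[["institution", "score", "rank"]].head())
```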


2016 ◽  
Vol 35 (3) ◽  
pp. 1-32 ◽  
Author(s):  
Roger Simnett ◽  
Elizabeth Carson ◽  
Ann Vanstraelen

Summary: We present a comprehensive review of the 130 international archival auditing and assurance research articles published in eight leading accounting and auditing journals over 1995–2014. To support evidence-based international standard setting and regulation, and to identify what has been learned to date, we map this research to the International Auditing and Assurance Standards Board's (IAASB) Framework for Audit Quality. For the areas that have been well researched, we summarise the findings and outline how they can inform standard setters and regulators. We also observe a significant evolution in international archival research over the 20 years of our study, as evidenced by the measures of audit quality, the data sources used, and the approaches used to address endogeneity concerns. Finally, we identify some challenges in undertaking international archival auditing and assurance research, and opportunities for future research. Our review is of interest to researchers, practitioners, and standard setters/regulators involved in international auditing and assurance activities.


Author(s):  
John L. Schroeder

This article reviews the techniques and approaches historically employed to measure non-synoptic wind storms. Although most of these efforts have originated from the atmospheric science community, the focus of this article is on meeting the requirements of the engineering community. Recognition of the importance of non-synoptic wind events is increasing, yet their engineering-relevant characteristics remain largely unknown. While the gaps in knowledge concerning these characteristics are plentiful, the focused application of high-resolution research instrumentation offers hope of removing many of these unknowns. Future engineering-oriented measurement campaigns will likely use both traditional anemometry and remote sensing technologies to document the characteristics of non-synoptic wind systems.
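To make "engineering-relevant characteristics" concrete, the sketch below computes one such quantity, a gust factor, from a high-frequency anemometer record. The 10 Hz sampling rate, 3 s gust window and 10 min averaging period are conventional assumptions, not values taken from the article:

```python
# Illustrative sketch: gust factor (peak short-duration gust over mean wind
# speed) from a high-frequency anemometer record. Sampling rate, gust window
# and averaging period are assumed conventional values.
import numpy as np

fs = 10                      # sampling frequency [Hz] (assumed)
# Synthetic 10 min wind-speed record standing in for real anemometer data.
speeds = np.random.default_rng(0).gamma(9.0, 1.5, size=fs * 600)

mean_speed = speeds.mean()   # 10 min mean wind speed

# 3 s moving-average gusts via a simple rolling mean.
win = 3 * fs
gusts = np.convolve(speeds, np.ones(win) / win, mode="valid")

gust_factor = gusts.max() / mean_speed
print(f"mean = {mean_speed:.2f} m/s, peak 3 s gust = {gusts.max():.2f} m/s, "
      f"gust factor = {gust_factor:.2f}")
```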


Epidemiologia ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 315-324
Author(s):  
Juan M. Banda ◽  
Ramya Tekumalla ◽  
Guanyu Wang ◽  
Jingyuan Yu ◽  
Tuo Liu ◽  
...  

As the COVID-19 pandemic continues to spread worldwide, an unprecedented amount of open data is being generated for medical, genetic, and epidemiological research. The unparalleled rate at which research groups around the world are releasing data and publications on the ongoing pandemic is allowing other scientists to learn from local experiences and from data generated on the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the social dynamics of such a unique worldwide event into biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 1.12 billion tweets related to COVID-19 chatter, growing daily, generated from 1 January 2020 to 27 June 2021 at the time of writing. It provides a freely available additional data source for researchers worldwide to conduct a wide and diverse range of research projects, such as epidemiological analyses, studies of emotional and mental responses to social distancing measures, identification of sources of misinformation, and stratified measurement of sentiment towards the pandemic in near real time, among many others.
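A minimal sketch of how such a tweet-ID dataset might be handled is shown below. Twitter's terms generally require sharing tweet IDs rather than full text, so the IDs must be re-fetched ("hydrated") before analysis; the file name and column names here are assumptions, not the dataset's documented schema:

```python
# Illustrative sketch: filtering a tweet-ID dataset by date before hydration.
# File name and columns are assumed, not the dataset's documented schema.
import pandas as pd

ids = pd.read_csv("covid19_tweet_ids.tsv", sep="\t",
                  dtype={"tweet_id": str})        # assumed columns: tweet_id, date
ids["date"] = pd.to_datetime(ids["date"])

# E.g. restrict to the first wave and inspect daily chatter volume.
wave1 = ids[(ids["date"] >= "2020-03-01") & (ids["date"] < "2020-06-01")]
daily = wave1.groupby(wave1["date"].dt.date).size()
print(daily.describe())

# The IDs can then be hydrated with a tool such as twarc:
#   twarc hydrate wave1_ids.txt > wave1_tweets.jsonl
wave1["tweet_id"].to_csv("wave1_ids.txt", index=False, header=False)
```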


2021 ◽  
Vol 37 (1) ◽  
pp. 161-169
Author(s):  
Dominik Rozkrut ◽  
Olga Świerkot-Strużewska ◽  
Gemma Van Halderen

Never has there been a more exciting time to be an official statistician. The data revolution is responding to the demands of the COVID-19 pandemic and a complex sustainable development agenda: improving how data are produced and used, closing data gaps to prevent discrimination, building capacity and data literacy, modernizing data collection systems, and liberating data to promote transparency and accountability. But can all data be liberated in the production and communication of official statistics? This paper explores the UN Fundamental Principles of Official Statistics in the context of eight new and big data sources. The paper concludes that each data source can be used for the production of official statistics in adherence with the Fundamental Principles, and argues that these data sources should be used if National Statistical Systems are to adhere to the first Fundamental Principle: compiling and making available official statistics that honor citizens' entitlement to public information.


2021 ◽  
pp. 1-11
Author(s):  
Yanan Huang ◽  
Yuji Miao ◽  
Zhenjing Da

The methods of multi-modal English event detection under a single data source, and of isomorphic event detection across different English data sources based on transfer learning, still need improvement. To improve the efficiency of English event detection across data sources, this paper proposes, based on a transfer learning algorithm, multi-modal event detection under a single data source and isomorphic event detection across different data sources. By stacking multiple classification models, the approach lets the features of each source merge with one another, and conducts adversarial training through the discrepancy between two classifiers to make the distributions of the different source data more similar. In addition, to validate the proposed algorithm, a multi-source English event detection dataset is collected. Finally, this dataset is used to evaluate the proposed method against the current mainstream transfer learning methods. Experimental analysis, convergence analysis, visual analysis and parameter evaluation demonstrate the effectiveness of the proposed algorithm.
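A minimal sketch of the "adversarial training through the discrepancy between two classifiers" idea, in the spirit of maximum classifier discrepancy domain adaptation, might look as follows; the architecture sizes, feature dimensions and update schedule are illustrative assumptions, not the paper's actual model:

```python
# Sketch of maximum-classifier-discrepancy-style adversarial training:
# two classifiers disagree maximally on target data, and the shared feature
# extractor is trained to minimise that disagreement, aligning the domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(300, 128), nn.ReLU())   # shared feature extractor
clf1 = nn.Linear(128, 5)                               # two event classifiers
clf2 = nn.Linear(128, 5)

opt_f = torch.optim.Adam(feat.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(list(clf1.parameters()) + list(clf2.parameters()), lr=1e-3)

def discrepancy(p1, p2):
    # L1 distance between the two classifiers' predictive distributions.
    return (F.softmax(p1, dim=1) - F.softmax(p2, dim=1)).abs().mean()

xs, ys = torch.randn(32, 300), torch.randint(0, 5, (32,))  # labelled source batch
xt = torch.randn(32, 300)                                  # unlabelled target batch

# Step A: fit both classifiers on labelled source data.
opt_f.zero_grad(); opt_c.zero_grad()
fs_ = feat(xs)
(F.cross_entropy(clf1(fs_), ys) + F.cross_entropy(clf2(fs_), ys)).backward()
opt_f.step(); opt_c.step()

# Step B: the classifiers maximise their disagreement on target data...
opt_c.zero_grad()
ft = feat(xt).detach()
(-discrepancy(clf1(ft), clf2(ft))).backward(); opt_c.step()

# Step C: ...and the feature extractor minimises it, aligning the domains.
opt_f.zero_grad()
ft = feat(xt)
discrepancy(clf1(ft), clf2(ft)).backward(); opt_f.step()
```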


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Xiao Fan Liu ◽  
Xiao-Ke Xu ◽  
Ye Wu

Abstract: The 2019 coronavirus disease (COVID-19) is linked to more than 100 million cases worldwide as of January 2021. High-quality data are needed, but lacking, for understanding and fighting COVID-19. We provide a complete and continuously updated hand-coded line-list dataset containing detailed information on cases in China outside the epicenter in Hubei province. The data are extracted from public disclosures by local health authorities, starting from January 19. This dataset contains a very rich set of features for characterizing COVID-19's epidemiological properties, including individual cases' demographic information, travel history, potential virus exposure scenario, contacts with known infections, and timelines of symptom onset, quarantine, infection confirmation, and hospitalization. These cases can be regarded as reflecting baseline COVID-19 transmissibility under extreme mitigation measures, and therefore serve as a reference for comparative scientific investigation and public policymaking.
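As a small usage sketch, one typical analysis such a line list supports is estimating the delay between symptom onset and confirmation. The file and column names below are assumed for illustration; the dataset's documentation defines the real schema:

```python
# Illustrative sketch: onset-to-confirmation delay from a line-list dataset.
# File and column names are assumptions, not the dataset's real schema.
import pandas as pd

cases = pd.read_csv("covid19_linelist.csv",
                    parse_dates=["symptom_onset", "confirmed"])  # assumed columns

delay = (cases["confirmed"] - cases["symptom_onset"]).dt.days
print(delay.describe())  # distribution of the onset-to-confirmation lag

# Compare median delay for cases with vs. without a recorded travel history
# (assumed column "travel_history").
print(delay.groupby(cases["travel_history"].notna()).median())
```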


2014 ◽  
Vol 668-669 ◽  
pp. 1374-1377 ◽  
Author(s):  
Wei Jun Wen

ETL refers to the process of data extraction, transformation and loading, and is a critical step in ensuring the quality, specification and standardization of marine environmental data. Marine data, owing to their complexity, field diversity and huge volume, remain decentralized, heterogeneous in origin and structure, and inconsistent in semantics, and hence are far from able to provide effective data sources for decision making. ETL enables the construction of a marine environmental data warehouse through the cleaning, transformation, integration, loading and periodic updating of basic marine data. This paper presents research on rules for the cleaning, transformation and integration of marine data, on the basis of which an ETL system for a marine environmental data warehouse is designed and developed. The system further guarantees data quality and correctness in future analysis and decision-making based on marine environmental data.
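A minimal ETL sketch in the spirit described above might look as follows; the cleaning rules, source file and target schema are invented for illustration, and a real marine data warehouse would encode domain-specific rules for each data source:

```python
# Illustrative extract-transform-load sketch. The cleaning rules, source file
# and warehouse schema are hypothetical, not the paper's actual system.
import sqlite3
import pandas as pd

# Extract: pull raw observations from a (hypothetical) staging file.
raw = pd.read_csv("marine_obs_raw.csv")  # assumed columns: station, ts, sst_c, salinity

# Transform: clean and standardise.
raw["ts"] = pd.to_datetime(raw["ts"], errors="coerce")
clean = raw.dropna(subset=["ts"])                     # drop unparseable timestamps
clean = clean[clean["sst_c"].between(-2, 40)]         # physically plausible SST range
clean = clean.drop_duplicates(["station", "ts"])      # one record per station/time
clean = clean.assign(sst_k=clean["sst_c"] + 273.15)   # unit standardisation

# Load: append into the warehouse table.
with sqlite3.connect("marine_dw.db") as con:
    clean.to_sql("fact_observations", con, if_exists="append", index=False)
```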


2020 ◽  
Vol 14 (3) ◽  
pp. 320-328
Author(s):  
Long Guo ◽  
Lifeng Hua ◽  
Rongfei Jia ◽  
Fei Fang ◽  
Binqiang Zhao ◽  
...  

With the rapid growth of e-commerce in recent years, e-commerce platforms are becoming a primary place for people to find, compare and ultimately purchase products. To improve the online shopping experience for consumers and increase sales for sellers, it is important to understand user intent accurately and to be notified of changes in it in a timely manner. In this way, the right information can be offered to the right person at the right time. To achieve this goal, we propose a unified deep intent prediction network, named EdgeDIPN, which is deployed at the edge, i.e., on the mobile device, and able to monitor multiple user intents at different granularities simultaneously and in real time. We propose to train EdgeDIPN with multi-task learning, by which EdgeDIPN can share representations between different tasks for better performance while saving edge resources. In particular, we propose a novel task-specific attention mechanism which enables different tasks to pick out the most relevant features from different data sources. To extract the shared representations more effectively, we utilize two kinds of attention mechanisms: a multi-level attention mechanism that identifies the important actions within each data source, and an inter-view attention mechanism that learns the interactions between different data sources. In experiments conducted on a large-scale industrial dataset, EdgeDIPN significantly outperforms the baseline solutions. Moreover, EdgeDIPN has been deployed in the operational system of Alibaba. Online A/B testing results in several business scenarios reveal the potential of monitoring user intent in real time. To the best of our knowledge, EdgeDIPN is the first full-fledged real-time user intent understanding system deployed at the edge and serving hundreds of millions of users on a large-scale e-commerce platform.
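One plausible reading of the task-specific attention mechanism is sketched below: each task learns its own query and attends over per-data-source feature vectors, so different tasks can weight the sources differently. The shapes and names are assumptions for illustration, not Alibaba's production code:

```python
# Sketch of task-specific attention over per-data-source features. Each task
# has its own learned query, so tasks weight the data sources differently.
# Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskSpecificAttention(nn.Module):
    def __init__(self, n_tasks: int, d: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_tasks, d))  # one query per task

    def forward(self, sources: torch.Tensor) -> torch.Tensor:
        # sources: (batch, n_sources, d) -- one feature vector per data source
        scores = torch.einsum("td,bsd->bts", self.queries, sources)
        weights = F.softmax(scores, dim=-1)                   # (batch, tasks, sources)
        return torch.einsum("bts,bsd->btd", weights, sources) # (batch, tasks, d)

attn = TaskSpecificAttention(n_tasks=3, d=64)
per_source = torch.randn(8, 4, 64)   # e.g. clicks, queries, carts, purchases
task_views = attn(per_source)        # each task gets its own fused view
print(task_views.shape)              # torch.Size([8, 3, 64])
```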

