Assessing the evidentiary value of secondary data analyses: A commentary on Gangestad, Dinh, Grebe, Del Giudice, and Thompson (2019)

2019 ◽  
Author(s):  
Benedict C Jones ◽  
Lisa Marie DeBruine ◽  
Urszula M Marcinkowska

Secondary data analyses (analyses of open data from published studies) can play a critical role in hypothesis generation and in maximizing the contribution of collected data to the accumulation of scientific knowledge. However, assessing the evidentiary value of results from secondary data analyses is often challenging because analytical decisions can be biased by knowledge of the results of (and analytical choices made in) the original study and by unacknowledged exploratory analyses of open data sets (Scott & Kline, 2019; Weston, Ritchie, Rohrer, & Przybylski, 2018). Using the secondary data analyses reported by Gangestad et al. (this issue) as a case study, we outline several approaches that, if implemented, would allow readers to assess the evidentiary value of results from secondary data analyses with greater confidence.

2016 ◽  
Vol 39 (11) ◽  
pp. 1477-1501 ◽  
Author(s):  
Victoria Goode ◽  
Nancy Crego ◽  
Michael P. Cary ◽  
Deirdre Thornlow ◽  
Elizabeth Merwin

Researchers need to evaluate the strengths and weaknesses of data sets to choose a secondary data set to use for a health care study. This research method review informs the reader of the major issues necessary for investigators to consider while incorporating secondary data into their repertoire of potential research designs and shows the range of approaches the investigators may take to answer nursing research questions in a variety of context areas. The researcher requires expertise in locating and judging data sets and in the development of complex data management skills for managing large numbers of records. There are important considerations such as firm knowledge of the research question supported by the conceptual framework and the selection of appropriate databases, which guide the researcher in delineating the unit of analysis. Other more complex issues for researchers to consider when conducting secondary data research methods include data access, management and security, and complex variable construction.


2014 ◽  
Vol 08 (04) ◽  
pp. 415-439 ◽  
Author(s):  
Amna Basharat ◽  
I. Budak Arpinar ◽  
Shima Dastgheib ◽  
Ugur Kursuncu ◽  
Krys Kochut ◽  
...  

Crowdsourcing is one of the emerging paradigms that exploit human computation for harvesting and processing complex, heterogeneous data to produce insight and actionable knowledge. Crowdsourcing is task-oriented, and hence the specification and management not only of tasks but also of workflows should play a critical role. Crowdsourcing research can still be considered in its infancy. There is a significant need for crowdsourcing applications to be equipped with well-defined task and workflow specifications, ranging from simple human intelligence tasks to more sophisticated and cooperative tasks that handle data- and control-flow among them. Addressing this need, we have devised a generic, flexible and extensible task specification and workflow management mechanism for crowdsourcing. We have contextualized this problem to linked data management as our domain of interest. More specifically, we develop CrowdLink, which uses an architecture for automated task specification, generation, publishing and reviewing to engage crowdworkers in the verification and creation of triples in the Linked Open Data (LOD) cloud. The LOD cloud incorporates various core data sets of the semantic web, yet it is not in full conformance with the guidelines for publishing high-quality linked data on the web. Our approach is not only useful for efficiently processing LOD management tasks; it can also help enrich and improve the quality of mission-critical links in the LOD cloud. We demonstrate the usefulness of our approach through various link creation and verification tasks and workflows using Amazon Mechanical Turk. Experimental evaluation shows promising results, not only in terms of ease of task generation, publishing and reviewing, but also in terms of the accuracy of the links created and verified by the crowdworkers.
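To make the idea of automated task generation concrete, the following is a minimal sketch of turning candidate LOD triples into human-verification tasks. The class name, fields, and question template are illustrative assumptions, not CrowdLink's actual schema or the Mechanical Turk API.

```python
from dataclasses import dataclass, field

# Hypothetical task specification for crowd verification of a single
# RDF triple; field names and answer options are assumptions.
@dataclass
class TripleTask:
    subject: str
    predicate: str
    obj: str
    question: str = ""
    answers: list = field(default_factory=lambda: ["correct", "incorrect", "unsure"])

    def __post_init__(self):
        # Render a human-readable question from the raw triple.
        if not self.question:
            self.question = (
                f"Is the statement '{self.subject} {self.predicate} {self.obj}' correct?"
            )

def generate_tasks(triples):
    """Turn candidate LOD triples into verification tasks for crowdworkers."""
    return [TripleTask(s, p, o) for (s, p, o) in triples]

tasks = generate_tasks([
    ("dbpedia:Bergamo", "dbo:country", "dbpedia:Italy"),
])
print(tasks[0].question)
# → Is the statement 'dbpedia:Bergamo dbo:country dbpedia:Italy' correct?
```

In a real pipeline, each generated task would then be published (e.g., as a Mechanical Turk HIT) and the collected answers aggregated before accepting or rejecting the triple.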


2015 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Rian Rahmat Hidayat ◽  
Irham Zaki

Sharia insurance in Indonesia has developed rapidly since the promulgation of MUI fatwa No. 21/DSN-MUI/X/2001 on sharia insurance. What remains questionable, however, is whether sharia insurance companies actually operate their products in accordance with the MUI fatwa. This study aims to determine whether the product operations of the sharia insurance provider AJB Bumiputera 1912 conform to sharia rules, following six indicators from the MUI fatwa: akad (contract), premiums, claims, investment, reinsurance, and fund management. The research method used is a case study with a qualitative descriptive approach. The data comprise primary data obtained from fieldwork and secondary data drawn from the literature and a wide range of written documents. The study uses data from the management of the sharia insurance division of AJB Bumiputera 1912 in the Surabaya branch and from sharia insurance participants of AJB Bumiputera 1912. The results show that the operational products of sharia insurance of AJB Bumiputera 1912 were in accordance with Indonesian Ulama Council fatwa DSN No. 21/DSN-MUI/X/2001. This conformity is reflected in the existence of akad tabarru' and akad tijarah as investment contracts (mudharabah), the management of premium funds based on sharia, claims funds based on the initial contract, investments made in accordance with participants' mandates, and reinsurance conducted only with sharia-based reinsurance companies.


2001 ◽  
Vol 39 (3) ◽  
pp. 771-799 ◽  
Author(s):  
Anthony B Atkinson ◽  
Andrea Brandolini

This paper examines the role of secondary data-sets in empirical economic research, taking the field of income distribution as a case study. We illustrate problems faced by users of “secondary” statistics, showing how both cross-country comparisons and time-series analysis can depend sensitively on the choice of data. After describing the genealogy of secondary data-sets on income inequality, we consider the main methodological issues and discuss their implications for comparisons of income inequality across OECD countries and over time. The lessons to be drawn for the construction and use of secondary data-sets are summarized at the end of the paper.


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 258
Author(s):  
Paolo Fosci ◽  
Giuseppe Psaila

How can analysts exploit the incredible variety of JSON data sets currently available on the Internet, for example on Open Data portals? The traditional approach would be to get them from the portals, store them in some JSON document store, and integrate them within that store. However, once data are integrated, the lack of a query language that provides flexible querying capabilities could prevent analysts from successfully completing their analysis. In this paper, we show how the J-CO Framework, a novel framework that we developed at the University of Bergamo (Italy) to manage large collections of JSON documents, is a unique and innovative tool that provides analysts with querying capabilities based on fuzzy sets over JSON data sets. Its query language, called J-CO-QL, is continuously evolving to broaden its potential applications; the most recent extensions give analysts the capability to retrieve data sets directly from web portals, as well as constructs that apply fuzzy set theory to JSON documents and allow analysts to perform imprecise queries by means of flexible soft conditions. This paper presents a practical case study in which real data sets are retrieved, integrated and analyzed to effectively show the unique and innovative capabilities of the J-CO Framework.
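The notion of a fuzzy "soft condition" over JSON documents can be illustrated with a small generic sketch (this is plain Python, not J-CO-QL; the field name, membership shape, and threshold are all assumptions for illustration). Instead of a hard true/false filter, each document receives a membership degree in [0, 1], and the query keeps documents above a threshold, annotated with that degree.

```python
import json

def membership_medium_city(pop):
    """Trapezoidal membership for 'medium-sized city' (assumed shape):
    0 below 50k and above 500k, 1 between 100k and 300k, linear in between."""
    if pop <= 50_000 or pop >= 500_000:
        return 0.0
    if 100_000 <= pop <= 300_000:
        return 1.0
    if pop < 100_000:
        return (pop - 50_000) / 50_000
    return (500_000 - pop) / 200_000

# Toy JSON documents standing in for an integrated open-data collection.
docs = [json.loads(s) for s in (
    '{"name": "A", "population": 80000}',
    '{"name": "B", "population": 200000}',
    '{"name": "C", "population": 600000}',
)]

# The soft condition: keep documents whose membership exceeds a threshold,
# carrying the degree along instead of discarding it.
threshold = 0.5
result = [
    {**d, "degree": membership_medium_city(d["population"])}
    for d in docs
    if membership_medium_city(d["population"]) >= threshold
]
print([(d["name"], d["degree"]) for d in result])
# → [('A', 0.6), ('B', 1.0)]
```

The design point is that the degree survives into the result, so downstream steps can rank or combine soft conditions rather than work with a crisp yes/no answer.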


2021 ◽  
Author(s):  
Anna Laurinavichyute ◽  
Shravan Vasishth

In 2019 the Journal of Memory and Language instituted an open data and code policy, which requires that, as a rule, code and data be released at the latest upon publication. Does this policy lead to reproducible results? We examined whether 57 papers published between 2019 and 2021 were reproducible, in the sense that the published summary statistics could be regenerated from the data and, when code was provided, the code. We found that for 10 of the 57 papers, data sets were inaccessible; 29 of the remaining 47 papers provided code, of which 16 were reproducible. Of the 18 papers that did not provide code, one was reproducible. Overall, the reproducibility rate was about 30%. This estimate is similar to those reported for psychology, economics, and other areas, but it should be possible to do better. We provide some suggestions on how reproducibility can be improved in future work.
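The counts in the abstract can be checked with a few lines of arithmetic; the variable names below are ours, but the numbers are exactly those reported.

```python
# Counts reported in the abstract.
total_papers = 57
data_inaccessible = 10
with_code = 29               # of the 47 papers with accessible data
with_code_reproducible = 16
without_code = 18
without_code_reproducible = 1

accessible = total_papers - data_inaccessible
assert accessible == with_code + without_code  # 47 = 29 + 18

reproducible = with_code_reproducible + without_code_reproducible  # 17
rate = reproducible / total_papers
print(f"{reproducible}/{total_papers} reproducible = {rate:.0%}")
# → 17/57 reproducible = 30%
```

Note that the roughly 30% figure is computed against all 57 papers, including those whose data were inaccessible, not only against the papers that could actually be checked.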


Author(s):  
S. ZAIETS

Meeting the needs and demands of consumers of statistical information requires appropriate tools to systematically determine the potential, strengths and weaknesses of state statistical institutions, as well as the associated risks. In this regard, the assessment of the quality of statistical information by data users is one of the key areas of work of a modern statistical service. The aim of the study is to consider approaches to assessing users' needs for high-quality statistical information in the context of the global, national and informational challenges of our time. The article explores ways to identify the needs of users of statistical information and summarizes the results of questionnaires, which are an integral part of quality reports. The components of the Open Data Barometer's evaluation of open data use are analyzed, based on surveys conducted during state self-assessment, expert assessment, and secondary data. Ukraine's leading positions and bottlenecks in the implementation of open data sets are identified. The article considers the advantages of, and presents proposals for improving, the methodology for calculating the user satisfaction index of statistical information introduced by the State Statistics Service of Ukraine to meet the needs and demands of consumers of statistical information. It also reviews the experience of other countries in assessing user satisfaction with services, which should inform a comprehensive assessment of the various aspects of the domestic statistical service and of the characteristics of statistical information that matter to users, such as comprehensibility of materials, visual presentation of information, and ease of use.
The results of the study support proposals for transforming the domestic statistical service into a coordinating center for the distribution of verified, processed and standardized data sets, discoverable through open catalogs and data lists, based on strategic partnerships with data providers, technology providers, scientists, researchers and the media.


Author(s):  
Mounir M. El Khatib ◽  
Khadeegha Alzouebi

This study investigates the concept of collaborative business intelligence (BI) in general and specifically in three Dubai governmental entities that are part of the Dubai Open Data Committee, namely Smart Dubai, Dubai Municipality and the RTA, in an attempt to improve collaborative BI between the three entities through a Smart City Project. A qualitative approach was used to collect data. Secondary data derived from academic articles and scholarly literature inform the literature review and clarify the concepts of business intelligence and collaborative business intelligence in general, and how BI works at the three entities in particular. In addition, primary data were derived from interviews conducted with three senior employees from the top management of Smart Dubai, Dubai Municipality and the RTA, to help gain an in-depth understanding of how the three entities collaborate with one another in the Smart City Project. The results of the study reveal that all three entities (Smart Dubai – Dubai Data Establishment, Dubai Municipality, and the RTA) are adopting business intelligence and collaborative BI, and that the three entities share data through Dubai Pulse for Smart City projects. The study also identifies the systems and software most used for analyzing and sharing data among government entities to support the decision-making process. The massive volume of data collected from different sources has required a large investment in technology, processes and people. In addition, because the smart city project is still new and under implementation, a few challenges have been reported with the implementation of BI, collaboration, and integration within the Municipality, the RTA, and Smart Dubai; the key challenge is raising awareness among personnel working within these three entities so that they embrace smart services typical of collaborative business intelligence.
Another challenge is privacy and data security, for which the Dubai Data Establishment has adopted many strategies and policies. Moreover, limited research on best practices of BI and collaborative BI in UAE-based organizations makes it difficult to confirm whether these organizations have successfully implemented collaborative BI, which makes this an area for further study.


Author(s):  
Reema Jenifer D’Silva ◽  
Ganesh Bhat S.

Purpose: In the Indian food processing sector, the cashew nut processing industry plays a critical role. The cashew is often considered 'both a poor person's crop and a rich person's meal.' From cultivators, traders, wholesalers and processors to supermarkets and retailers, the cashew processing sector is a vital source of income. Cashew processing is a labour-intensive sector that has always employed a significant number of rural women. The purpose of this study is to gain an in-depth understanding of the cashew processing sector, its position in the world market, the issues it is confronting and its future prospects. In doing so, the study examines the profile of the Indian cashew industry, including cashew processing and international trade. The quality, flavour and appearance of Indian cashew kernels are highly regarded abroad, and they are consumed in more than 60 countries worldwide. Unfortunately, cashew production in India has been fluctuating in recent years. Despite its tremendous expansion, India's cashew sector has been affected by low-quality cashew cultivated in some regions, mostly due to improper harvesting techniques, inadequate drying of the nuts and insufficient storage and warehouse facilities for dried cashew nuts. Design: For the purpose of analysis, this study used secondary data sources: Google Scholar articles, cashew industry websites and other related websites. The literature is used to analyse the position of the industry within the SWOC and PESTLE analysis frameworks. Findings: Based on the analysis, the cashew business needs certain incentives to attain a better rate of production and export growth in the future. Value: This paper emphasizes the growth of the cashew industry in India in relation to its current status and future opportunities.
Based on the findings and their interpretation, the Indian cashew industry must prepare itself for the ever-increasing demand of the domestic market and contribute more effectively to the country's economic growth. Paper Type: Case Study-based Research Analysis


Author(s):  
Pablo B. MARKIN

Objective. This exploratory literature review seeks to identify both emergent consensus areas and research gaps in recent scholarly literature on Open Educational Resources (OERs). Despite the perception of OERs as universally available, persistent barriers remain. The presence of institutional policies, adequate incentives and support frameworks for the use and sharing of OERs, as well as raising awareness of their availability, is likely to be critical for their successful deployment. Methods. This study used the case study method to arrive at its conclusions. As part of this, secondary data were collected from relevant article searches conducted in Google Scholar and at the Harvard Open Access Tagging Project website. Only papers published in the last five years, i.e., 2016-2021, were taken into consideration. Given that this study applies the methodology of qualitative comparison and case study construction, the validity of its conclusions is limited to the settings from which the original primary findings were obtained or for which OER recommendations were produced. Results. As part of this research, 16 scholarly articles and research reports were identified as relevant to this study. The research questions this study sought to answer are as follows: How have OERs developed in recent years? What was the impact of the pandemic period on OER use? What are the key barriers to OER deployment? What are the facilitating factors for OER implementation at libraries, colleges and universities? What are the effects of OERs? Conclusions. Recent reports indicate that the pandemic period has both increased awareness of OERs among educational institutions and provided an impetus for capacity-building efforts in this domain.
Yet OER effectiveness continues to be under-researched, despite a tentative consensus in the scholarly literature on the critical role of institutional support and collaboration frameworks for OER efficacy.

