An Essay on the Challenges of Doing Education Research in Canada

2021 ◽  
pp. 193672442110034
Author(s):  
Karen Robson

In this essay, I discuss the challenges faced by Canadian researchers in trying to undertake research, particularly in the area of education. I begin with the issue of data availability (particularly the lack of race data in Canada) and the severe limits it places on research into important Canadian education issues, and then discuss what I regard as hypervigilant data access protocols for Canadian data sets. I then turn to practical issues that arise when comparing education data across cities and countries and the process of "harmonizing" the data, addressing the compromises that must be made to render data comparable across different sites. I conclude by discussing how the larger context in which education occurs must be considered when interpreting observed comparative differences in educational outcomes.

2016 ◽  
Vol 2016 ◽  
pp. 1-13 ◽  
Author(s):  
Xiuguo Wu

Replication technology is commonly used to improve data availability and reduce data access latency in cloud storage systems by providing users with different replicas of the same service. Most current approaches focus largely on improving system performance and neglect management cost when deciding the number of replicas and where to store them. This places a heavy financial burden on cloud users, because in a pay-as-you-go paradigm the cost of replica storage and consistency maintenance grows with each new replica. In this paper, aiming to approach the minimum data set management cost benchmark in a practical manner, we propose a cost-effective replica placement strategy under the premise that system performance still meets requirements. First, we design data set management cost models covering storage cost and transfer cost. Second, we use access frequency and average response time to decide which data sets should be replicated. We then propose a method, based on a location-problem graph, for calculating the number of replicas and their storage locations with minimum management cost. Both theoretical analysis and simulations show that the proposed strategy achieves lower management cost with fewer replicas.
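The storage-versus-transfer trade-off described above can be sketched in a few lines. All formulas and parameter values here are illustrative assumptions, not the paper's actual cost models: storage cost grows linearly with the replica count, while transfer cost falls as more requests are served by a local replica.

```python
def management_cost(replicas, size_gb, storage_rate, accesses,
                    transfer_rate, hit_fraction):
    """Assumed model: storage of all replicas plus transfer for remote hits."""
    storage = replicas * size_gb * storage_rate
    # More replicas serve more requests locally (diminishing returns).
    remote = accesses * (1 - hit_fraction) ** replicas
    transfer = remote * size_gb * transfer_rate
    return storage + transfer

def cheapest_replica_count(max_replicas, **kw):
    """Pick the replica count with the lowest total management cost."""
    return min(range(1, max_replicas + 1),
               key=lambda r: management_cost(r, **kw))
```

Under such a model the optimum is an interior point: too few replicas inflate transfer cost, too many inflate storage cost, which mirrors the benchmark the paper seeks.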


Author(s):  
Kimberlyn McGrail ◽  
Michael Burgess ◽  
Kieran O'Doherty ◽  
Colene Bentley ◽  
Jack Teng

Introduction: Research using linked data sets can lead to new insights and discoveries that positively impact society. However, the use of linked data raises concerns relating to illegitimate use, privacy, and security (e.g., identity theft, marginalization of some groups). It is increasingly recognized that the public needs to be consulted to develop data access systems that consider both the potential benefits and risks of research. Indeed, there are examples of data sharing projects being derailed by backlash in the absence of adequate consultation (e.g., care.data in the UK). Objectives and methods: This talk will describe the results of public deliberations held in Vancouver, British Columbia in April 2018 and the fall of 2019. The purpose of these events was to develop informed and civic-minded public advice regarding the use and sharing of linked data for research in the context of rapidly evolving data availability and researcher aspirations. Results: In the first deliberation, participants developed and voted on 19 policy-relevant statements. Taken together, these statements provide a broad view of public support and concerns regarding the use of linked data sets for research, and offer guidance on measures that can be taken to improve the trustworthiness of policies and processes around data sharing and use. The second deliberation will focus on the interplay between public and private sources of data, and on the role of individual and collective or community consent in the future. Conclusion: Generally, participants were supportive of research using linked data because of the value such uses can provide to society. Participants expressed a desire to see the data access request process made more efficient to facilitate more research, as long as there are adequate protections in place around the security and privacy of the data. These protections include both physical and process-related safeguards as well as a high degree of transparency.


2021 ◽  
Author(s):  
Hoda R.K. Nejad

With the emergence of wireless devices, service delivery for ad-hoc networks has recently attracted considerable attention. Ad-hoc networks provide an attractive solution for networking in situations where network infrastructure or service subscription is unavailable. We believe that overlay networks, particularly peer-to-peer (P2P) systems, are a good abstraction for application design and deployment over ad-hoc networks. The principal benefit of this approach is that application state is maintained only by the nodes involved in executing the application, while all other nodes perform only networking-related functions. On the other hand, data access applications in ad-hoc networks suffer from restricted resources. In this thesis, we explore how cooperative caching can improve data access efficiency in ad-hoc networks. We propose a Resource-Aware Cooperative Caching P2P system (RACC) for data access applications in ad-hoc networks. The objective is to improve data availability by considering the energy of each node and the demand and supply of the network. We evaluated and compared the performance of RACC against the Simple Cache, CachePath, and CacheData schemes. Our simulation results show that RACC reduces query delay and the energy usage of the network compared to Simple Cache, CachePath, and CacheData.
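A minimal sketch of the resource-aware idea, assuming a scoring rule of our own invention (the thesis's actual RACC design is richer): a node's cache weighs item demand against its remaining energy when choosing a victim for eviction.

```python
class ResourceAwareCache:
    """Toy cache whose eviction considers node energy and item demand."""

    def __init__(self, capacity, energy):
        self.capacity = capacity
        self.energy = energy      # remaining battery fraction, 0.0-1.0
        self.store = {}           # item -> access count (local demand)

    def score(self, item):
        # Assumed rule: low-demand items on low-energy nodes go first.
        return self.store[item] * self.energy

    def access(self, item):
        """Return True on a cache hit, False on a miss (item then cached)."""
        if item in self.store:
            self.store[item] += 1
            return True
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self.score)
            del self.store[victim]
        self.store[item] = 1
        return False
```

In a cooperative setting each node would additionally consult neighbours' caches before fetching from the data source; that step is omitted here for brevity.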


1998 ◽  
Vol 14 (suppl 3) ◽  
pp. S117-S123 ◽  
Author(s):  
Anaclaudia Gastal Fassa ◽  
Luiz Augusto Facchini ◽  
Marinel Mór Dall'Agnol

The International Agency for Research on Cancer (IARC) proposed this international historical cohort study to resolve the controversy over an increased risk of cancer among workers in the pulp and paper industry. One of the most important aspects of this study in Brazil was the set of strategies used to overcome methodological challenges such as data access, data accuracy, data availability, multiple data sources, and the long follow-up period. Through multiple strategies it was possible to build a Brazilian cohort of 3,622 workers, to follow them with a 93 percent success rate, and to identify the cause of death in 99 percent of cases. This paper evaluates data access, data accuracy, the effectiveness of the strategies used, and the different sources of data.


2021 ◽  
Author(s):  
Benjamin Moreno-Torres ◽  
Christoph Völker ◽  
Sabine Kruschwitz

<div> <p>Non-destructive testing (NDT) data in civil engineering are regularly used for scientific analysis. However, there is as yet no uniform representation of the data, so analysing distributed data sets across different test objects is in most cases prohibitively difficult.</p> <p>To overcome this, we present an approach to the integrated management of distributed data sets based on Semantic Web technologies. The cornerstone of this approach is an ontology, a semantic knowledge representation of our domain. This NDT-CE ontology is then populated with the data sources. Using the properties and the relationships between concepts that the ontology contains, we make these data sets meaningful to machines as well. Furthermore, the ontology can be used as a central interface for database access. Non-domain data sources can be integrated by linking them with the NDT ontology, making them directly available for generic use in terms of digitization. Based on extensive literature research, we outline the resulting possibilities for NDT in civil engineering, such as computer-aided sorting and analysis of measurement data and the recognition and explanation of correlations.</p> <p>A common knowledge representation and data access allows the scientific exploitation of existing data sources with data-based methods (such as image recognition, measurement uncertainty calculations, factor analysis, or material characterization) and simplifies bidirectional knowledge and data transfer between engineers and NDT specialists.</p> </div>
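The payoff of an ontology-backed representation can be shown with a toy triple store in plain Python (no RDF library): because measurement types are related by subclass links, one generic query retrieves instances from heterogeneous sources. The class and instance names (`ndt:UltrasonicMeasurement`, `radar_scan_7`, etc.) are invented for the example and are not taken from the NDT-CE ontology.

```python
# A toy set of (subject, predicate, object) triples mimicking an
# ontology-backed store; names are illustrative, not from NDT-CE.
triples = {
    ("ultrasonic_scan_42", "rdf:type", "ndt:UltrasonicMeasurement"),
    ("ndt:UltrasonicMeasurement", "rdfs:subClassOf", "ndt:Measurement"),
    ("radar_scan_7", "rdf:type", "ndt:RadarMeasurement"),
    ("ndt:RadarMeasurement", "rdfs:subClassOf", "ndt:Measurement"),
}

def instances_of(cls):
    """All subjects typed as cls or as a direct subclass of cls."""
    subclasses = {s for (s, p, o) in triples
                  if p == "rdfs:subClassOf" and o == cls} | {cls}
    return {s for (s, p, o) in triples
            if p == "rdf:type" and o in subclasses}
```

Asking for `ndt:Measurement` returns both the ultrasonic and the radar scan, even though they were recorded by different methods; this is the kind of generic, machine-interpretable access the abstract describes.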


2021 ◽  
Vol 55 (2) ◽  
pp. 84-89
Author(s):  
D.V. Shutov ◽  
◽  
K.M. Arzamasov ◽  
D.V. Drozdov ◽  
A.E. Demkina ◽  
...  

We analysed the available Russian home-use health monitoring devices that can be connected to a smartphone or tablet for data transfer. Specifically, we sought devices capable of registering heart rate, blood pressure, ECG, blood glucose, and respiration rate. There are three options for data processing and storage: storage on, and authorized access to, the manufacturer's site, with minimal opportunity for data handling and interpretation; an autonomous server to hold and handle big data sets; and, finally, access protocols and templates enabling device integration with external services.


2021 ◽  
Author(s):  
Morten Loell Vinther ◽  
Torbjørn Eide ◽  
Aurelia Paraschiv ◽  
Dickon Bonvik-Stone

Abstract High quality environmental data are critical for any offshore activity that relies on data insights to form appropriate planning and risk-mitigation routines under challenging weather conditions. Such data are the most significant driver of future footprint reduction in offshore industries, in terms of cost savings as well as operational safety and efficiency, enabled through ease of data access for all relevant stakeholders. This paper describes recent advancements in the methods used by a dual-footprint pulse-Doppler radar to provide accurate and reliable ocean wave height measurements. Improvements achieved during low-wind weather conditions are presented and compared with data collected from other sources, such as buoys, acoustic Doppler wave and current profilers (ADCPs), and legacy equipment. The study is based on comparisons of recently developed algorithms applied to different data sets recorded at various sites, mostly covering calm weather conditions.


2019 ◽  
pp. 254-277 ◽  
Author(s):  
Ying Zhang ◽  
Chaopeng Li ◽  
Na Chen ◽  
Shaowen Liu ◽  
Liming Du ◽  
...  

Since large amounts of geospatial data are produced by various sources, geospatial data integration is difficult because of the shortage of semantics. Although standardised data formats and data access protocols, such as the Web Feature Service (WFS), enable end-users to access heterogeneous data stored in different formats from various sources, integration is still time-consuming and ineffective due to the lack of semantics. To solve this problem, a prototype for geospatial data integration is proposed that addresses four problems: geospatial data retrieving, modeling, linking, and integrating. We adopt four kinds of geospatial data sources to evaluate the performance of the proposed approach. The experimental results illustrate that the proposed linking method achieves high performance in generating matched candidate record pairs in terms of Reduction Ratio (RR), Pairs Completeness (PC), Pairs Quality (PQ), and F-score. The integration results show that each data source gains substantial Complementary Completeness (CC) and Increased Completeness (IC).
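The linking metrics named above are standard in the record-linkage literature, and their usual definitions can be computed directly; the counts used here are invented for illustration and are not this paper's results.

```python
def linkage_metrics(total_pairs, candidate_pairs,
                    true_matches, true_matches_in_candidates):
    """Standard blocking-evaluation metrics from the record-linkage
    literature: RR, PC, PQ, and the F-score combining PC and PQ."""
    rr = 1 - candidate_pairs / total_pairs              # Reduction Ratio
    pc = true_matches_in_candidates / true_matches      # Pairs Completeness
    pq = true_matches_in_candidates / candidate_pairs   # Pairs Quality
    f = 2 * pc * pq / (pc + pq) if pc + pq else 0.0     # F-score
    return {"RR": rr, "PC": pc, "PQ": pq, "F": f}
```

For example, reducing 1,000 possible pairs to 100 candidates that retain 45 of 50 true matches gives RR = 0.9, PC = 0.9, PQ = 0.45, and F = 0.6 — a high-RR, high-PC linking step of the kind the abstract reports.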




Author(s):  
Mary Magdalene Jane.F ◽  
R. Nadarajan ◽  
Maytham Safar

Data caching in mobile clients is an important technique for enhancing data availability and improving data access time. Because of cache size limitations, cache replacement policies are used to find a suitable subset of items to evict from the cache. In this paper, the authors study cache replacement for location-dependent data under a geometric location model and propose a new cache replacement policy, RAAR (Re-entry probability, Area of valid scope, Age, Rate of access), that takes spatial and temporal parameters into account. Mobile queries experience popularity drift: an item loses its popularity once the user exhausts the corresponding service, so once-popular documents quickly become cold (small active sets). Experimental evaluations using synthetic datasets for regular and small active sets show that this replacement policy is effective in improving system performance in terms of the cache hit ratio of mobile clients.
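A policy of this shape can be sketched as a scoring function over the four RAAR factors. The particular combination below (keep items with high re-entry probability, large valid scope, and high access rate; penalise age) is an assumption for illustration — the paper's actual formula may weight the factors differently.

```python
def raar_score(reentry_prob, valid_area, age, access_rate):
    """Assumed RAAR-style utility: the lowest-scoring item is evicted.
    reentry_prob: chance the client re-enters the item's valid scope
    valid_area:   area of the item's valid scope
    age:          time since the item was cached
    access_rate:  recent access frequency"""
    return (reentry_prob * valid_area * access_rate) / (1 + age)

def choose_victim(cache):
    """cache maps item -> (reentry_prob, valid_area, age, access_rate)."""
    return min(cache, key=lambda k: raar_score(*cache[k]))
```

A fresh, frequently accessed item in a large scope the user is likely to revisit thus survives, while an old, rarely accessed item in a scope the user has left is evicted first, matching the popularity-drift behaviour described above.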

