Archiving Large-Scale Legacy Multimedia Research Data: A Case Study

2018 ◽  
Vol 12 (2) ◽  
pp. 157-176
Author(s):  
Claudia Yogeswaran ◽  
Kearsy Cormier

In this paper we provide a case study of the creation of the DCAL Research Data Archive at University College London. In doing so, we assess the various challenges associated with archiving large-scale legacy multimedia research data, given the lack of literature on archiving such datasets. We address issues such as the anonymisation of video research data, the ethical challenges of managing legacy data and historic consent, ownership considerations, the handling of large-size multimedia data, as well as the complexity of multi-project data from a number of researchers and legacy data from eleven years of research.

2003 ◽  
Vol 03 (01) ◽  
pp. 95-117 ◽  
Author(s):  
Sunil Prabhakar ◽  
Rahul Chari

Multimedia data poses challenges for efficient storage and retrieval due to its large size and playback timing requirements. For applications that store very large volumes of multimedia data, hierarchical storage offers a scalable and economical alternative to storing all data on magnetic disks. In a hierarchical storage architecture, data is stored on a tape- or optical-disk-based tertiary storage layer, with the secondary storage disks serving as a cache or buffer. Due to the need for swapping media on drives, retrieving multimedia data from tertiary storage can result in large delays before playback begins (startup latency) as well as during playback (jitter). In this paper we address the important problem of reducing startup latency and jitter for very large multimedia repositories. We propose that secondary storage should not be used as a cache in the traditional manner; instead, most of the secondary storage should be used to permanently store partial objects. Furthermore, replication is employed at the tertiary storage level to avoid expensive media switching. In particular, we show that by saving the initial segments of documents permanently on secondary storage and replicating them on tertiary storage, startup latency can be significantly reduced. Since this effectively reduces the amount of secondary storage available for buffering data from tertiary storage, an increase in jitter might be expected; however, our results show that the technique reduces jitter as well. Our technique exploits the pattern of data access. Advance knowledge of the access pattern is helpful but not essential: lack of this information, or changes in access patterns, are handled through adaptive techniques. Our study addresses both single- and multiple-user scenarios. Our results show that startup latency can be reduced by as much as 75% and jitter practically eliminated through the use of these techniques.
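The central idea, pinning the initial segment of each object on secondary storage so playback can start from disk while the remainder is staged from tape, can be illustrated with a short sketch. This is a minimal illustration under assumed segment sizes, bandwidths, and a 60-second media-switch penalty; it is not the authors' implementation, and all class and parameter names are invented.

```python
"""Minimal sketch of prefix pinning in a hierarchical multimedia store.

Assumptions (not from the paper): segment sizes, bandwidths, class names.
"""

from dataclasses import dataclass, field


@dataclass
class MediaObject:
    name: str
    size_mb: int       # total object size
    prefix_mb: int     # initial segment pinned permanently on secondary storage


@dataclass
class HierarchicalStore:
    tape_switch_s: float = 60.0   # assumed media-exchange penalty on tertiary storage
    objects: dict = field(default_factory=dict)

    def add(self, obj: MediaObject) -> None:
        self.objects[obj.name] = obj

    def startup_latency_s(self, name: str, playback_mbps: float) -> float:
        """Delay before playback can begin.

        With a pinned prefix, playback starts from disk immediately and the
        prefix's playback time hides (part of) the tape-switch delay.
        Without a prefix, the viewer waits for the media exchange.
        """
        obj = self.objects[name]
        if obj.prefix_mb == 0:
            return self.tape_switch_s
        prefix_play_time = obj.prefix_mb / playback_mbps
        return max(0.0, self.tape_switch_s - prefix_play_time)


if __name__ == "__main__":
    store = HierarchicalStore()
    store.add(MediaObject("lecture.mpg", size_mb=4000, prefix_mb=600))
    store.add(MediaObject("raw_footage.mpg", size_mb=4000, prefix_mb=0))
    # A 600 MB prefix at 8 MB/s playback covers 75 s, more than the 60 s switch.
    print(store.startup_latency_s("lecture.mpg", playback_mbps=8.0))      # 0.0
    print(store.startup_latency_s("raw_footage.mpg", playback_mbps=8.0))  # 60.0
```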


Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 278 ◽  
Author(s):  
Nuria Novas ◽  
Aránzazu Fernández-García ◽  
Francisco Manzano-Agugliaro

Renewable energy today is no longer just an affordable alternative, but a requirement for mitigating global environmental problems such as climate change. Among renewable energies, the use of solar energy is one of the most widespread. Concentrating Solar Power (CSP) systems, however, are not yet fully widespread despite having demonstrated great efficiency, mainly thanks to parabolic-trough collector (PTC) technology, both on a large scale and on a small scale for heating water in industry. One of the main drawbacks of this energy solution is the large size of the facilities. For this reason, several models have been developed to avoid shadowing between the PTC rows as much as possible. In this study, the classic shadowing models between the PTC rows are reviewed. One of their major limitations is that they treat the installation geometrically as fixed, whereas PTC facilities are moving installations that track the sun. In this work, a new model is proposed to avoid shadowing by taking into account the movement of the facilities depending on their latitude. Secondly, the model is tested on an existing facility in southern Spain as a real case study. The model is then applied to the main existing installations in the northern hemisphere, showing its usefulness for any PTC installation in the world. The shadow S projected by a standard PTC has been obtained by means of a polynomial approximation as a function of the latitude (Lat), given by S = 0.001·Lat² + 0.0121·Lat + 10.9, with R² of 99.8%. Finally, the model has been simplified to obtain, for the standard case, the shadows during the running time of a PTC facility.
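For reference, the fitted polynomial can be evaluated directly. The sketch below assumes the reconstructed reading of the formula above, with latitude expressed in degrees, and is illustrative only; the units of S follow the paper's standard-case definition.

```python
def ptc_shadow(lat_deg: float) -> float:
    """Shadow S projected by a standard PTC as a function of latitude.

    Polynomial fit reported above (R² of 99.8%), assuming the reconstructed
    reading S = 0.001·Lat² + 0.0121·Lat + 10.9 with latitude in degrees.
    """
    return 0.001 * lat_deg ** 2 + 0.0121 * lat_deg + 10.9


# Example: a facility at roughly 37° N, the approximate latitude of southern Spain.
print(round(ptc_shadow(37.0), 2))  # ~12.72
```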


Requirement Engineering is an extremely significant phase of the SDLC. Successfully building software and its functionalities depends exclusively on the requirements gathered from the users of the project, and the quality of the end product is directly related to this requirements phase. The Requirement Prioritization Process, one of the processes within Requirement Engineering, helps engineers work out and identify the priorities among requirements. Among the available methods for prioritizing requirements, AHP is viable, but not for large-size projects. This work primarily concentrates on applying AHP to bigger projects. In this paper, we construct a framework to prioritize requirements with AHP that considers implementation simplicity for large-scale projects, a reduced number of pairwise comparisons, and precise stakeholder participation. The proposed framework has been assessed through an exploratory case study, conducted in a software company, that starts with a fixed number of requirements and examines the status of the priority list after new requirements arrive, in order to test the practicality of the framework. The main findings and lessons learned from the effort are presented.
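As background on how AHP turns pairwise judgements into priorities, the sketch below computes requirement weights from a comparison matrix using the row geometric-mean approximation of the principal eigenvector. It is a generic AHP illustration, not the proposed framework; the requirement names and judgement values are invented.

```python
import math


def ahp_weights(matrix: list[list[float]]) -> list[float]:
    """Approximate AHP priority weights via the row geometric mean."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]


# Hypothetical pairwise comparisons (Saaty's 1-9 scale) for three requirements.
# comparisons[i][j] = how strongly requirement i is preferred over requirement j.
comparisons = [
    [1.0,   3.0, 5.0],  # R1 vs R1, R2, R3
    [1 / 3, 1.0, 3.0],  # R2
    [1 / 5, 1 / 3, 1.0],  # R3
]

for req, w in zip(["R1", "R2", "R3"], ahp_weights(comparisons)):
    print(f"{req}: {w:.3f}")  # R1 receives the highest priority
```

For large projects, the practical difficulty this addresses is the O(n²) growth in pairwise comparisons, which is why the paper's framework reduces the number of comparisons rather than applying plain AHP to every requirement pair.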


2021 ◽  
Vol 8 ◽  
Author(s):  
Claire Collins ◽  
Ana Nuno ◽  
Annette Broderick ◽  
David J. Curnick ◽  
Asha de Vos ◽  
...  

Area coverage of large-scale marine protected areas (LSMPAs, > 100,000 km²) is rapidly increasing globally. Their effectiveness largely depends on successful detection and management of non-compliance. However, for LSMPAs this can be difficult due to their large size, often remote locations and a lack of understanding of the social drivers of non-compliance. Taking a case-study approach, we review current knowledge of illegal fishing within the British Indian Ocean Territory (BIOT) LSMPA. Data stemming from enforcement reports (2010–20) and from fieldwork in fishing communities (2018–19) were combined to explore and characterise drivers of non-compliance. Enforcement data included vessel investigation reports (n = 188), transcripts of arrests (20) and catch seizures (58). Fieldwork data included fisher interviews (95) and focus groups (12), conducted in two communities in Sri Lanka previously associated with non-compliance in the BIOT LSMPA. From 2010 to 2020, there were 126 vessels suspected of non-compliance, 76% of which were Sri Lankan. The majority of non-compliant vessels targeted sharks (97%), catching an estimated 14,340 individuals during the study period. Sri Lankan vessels were primarily registered to one district (77%) and 85% operated from just two ports within the fieldwork sites. Social Network Analysis (SNA) showed that 66% of non-compliant vessels were linked by social ties, including sharing crew members, compared with only 34% of compliant vessels. Thematic analysis of qualitative data suggested that perceptions of higher populations of sharks and social ties between vessels may both be important drivers. We discuss our findings within a global context to identify potential solutions for LSMPA management.
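A minimal sketch of the kind of shared-crew network analysis described above is shown here using networkx. The vessel IDs and crew manifests are invented, and this is not the authors' analysis pipeline; it only illustrates how ties via shared crew members can be counted.

```python
import networkx as nx

# Hypothetical crew manifests: vessel ID -> crew member IDs (not real data).
crews = {
    "V1": {"c1", "c2"},
    "V2": {"c2", "c3"},
    "V3": {"c4"},
    "V4": {"c5", "c6"},
}

# Build an undirected graph linking vessels that share at least one crew member.
G = nx.Graph()
G.add_nodes_from(crews)
vessels = list(crews)
for i, a in enumerate(vessels):
    for b in vessels[i + 1:]:
        if crews[a] & crews[b]:
            G.add_edge(a, b)

# Share of vessels linked to at least one other vessel by a social tie.
linked = [v for v in G if G.degree(v) > 0]
print(f"{len(linked) / len(G):.0%} of vessels linked by shared crew")  # 50%
```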


1996 ◽  
Vol 5 (1) ◽  
pp. 23-32 ◽  
Author(s):  
Chris Halpin ◽  
Barbara Herrmann ◽  
Margaret Whearty

The family described in this article provides an unusual opportunity to relate findings from genetic, histological, electrophysiological, psychophysical, and rehabilitative investigation. Although the total number evaluated is large (49), the known, living affected population is smaller (14), and these individuals are spread from age 20 to age 59. As a result, the findings described here are those of a large-scale case study. Clearly, more data will be available through longitudinal study of the individuals documented in the course of this investigation but, given the slow nature of the progression in this disease, such studies will be undertaken after an interval of several years. The general picture presented to the audiologist who must rehabilitate these cases is that of a progressive cochlear degeneration that affects only thresholds at first, and then rapidly diminishes speech intelligibility. The expected result is that, after normal language development, the patient may accept hearing aids well, encouraged by the support of the family. Performance and satisfaction with the hearing aids are good until the onset of the speech intelligibility loss, at which time the patient will encounter serious difficulties and may reject hearing aids as unhelpful. As the histological and electrophysiological results indicate, however, the eighth nerve remains viable, especially in the younger affected members, and success with cochlear implantation may be expected. Audiologic counseling efforts are aided by the presence of role models and support from the other affected members of the family. Speech-language pathology services were not considered important by the members of this family since their speech production developed normally and has remained very good. Self-correction of speech was supported by hearing aids and cochlear implants (Case 5’s speech production was documented in Perkell, Lane, Svirsky, & Webster, 1992). These patients received genetic counseling and, due to the high penetrance of the disease, exhibited serious concerns regarding future generations and the hope of a cure.


2008 ◽  
Author(s):  
D. L. McMullin ◽  
A. R. Jacobsen ◽  
D. C. Carvan ◽  
R. J. Gardner ◽  
J. A. Goegan ◽  
...  

Author(s):  
Lori Stahlbrand

This paper traces the partnership between the University of Toronto and the non-profit Local Food Plus (LFP) to bring local sustainable food to its St. George campus. At its launch, the partnership represented the largest purchase of local sustainable food at a Canadian university, as well as LFP’s first foray into supporting institutional procurement of local sustainable food. LFP was founded in 2005 with a vision to foster sustainable local food economies. To this end, LFP developed a certification system and a marketing program that matched certified farmers and processors to buyers. LFP emphasized large-scale purchases by public institutions. Using information from in-depth semi-structured key informant interviews, this paper argues that the LFP project was a disruptive innovation that posed a challenge to many dimensions of the established food system. The LFP case study reveals structural obstacles to operationalizing a local and sustainable food system. These include a lack of mid-sized infrastructure serving local farmers, the domination of a rebate system of purchasing controlled by an oligopolistic foodservice sector, and embedded government support of export agriculture. This case study is an example of praxis, as the author was the founder of LFP, as well as an academic researcher and analyst.

