Participative Decision Making and the Sharing of Benefits: Laws, ethics, and data protection for building extended global communities

Author(s):  
Jutta Buschbom ◽  
Breda Zimkus ◽  
Andrew Bentley ◽  
Mariko Kageyama ◽  
Christopher Lyal ◽  
...  

Transdisciplinary and cross-cultural cooperation and collaboration are needed to build extended, densely interconnected information resources. These are the prerequisites for the successful implementation and execution of, for example, an ambitious monitoring framework accompanying the post-2020 Global Biodiversity Framework (GBF) of the Convention on Biological Diversity (CBD; SCBD 2021). Data infrastructures that meet the requirements and preferences of concerned communities can focus and attract community involvement, thereby promoting participatory decision making and the sharing of benefits. Community acceptance, in turn, drives the development of the data resources and data use. Earlier this year, the alliance for biodiversity knowledge (2021a) conducted forum-based consultations seeking community input on the design of the next generation of digital specimen representations and the correspondingly enhanced infrastructures. The multitude of connections that arises from extending digital specimen representations through linkages in all “directions” will form a powerful network of information for research and application. Yet, with the power of an extended, accessible data network comes the responsibility to protect sensitive information (e.g., the locations of threatened populations, culturally context-sensitive traditional knowledge, or businesses’ fundamental data and infrastructure assets). In addition, existing legislation regulates access and the fair and equitable sharing of benefits. Current negotiations on ‘Digital Sequence Information’ under the CBD suggest that such obligations might increase and become more complex in the context of extensible information networks. For example, in the case of data and resources funded by taxpayers in the EU, access should follow the general principle of being “as open as possible; as closed as is legally necessary” (cp. EC 2016). At the same time, the international regulations of the CBD’s Nagoya Protocol (SCBD 2011) need to be taken into account. Summarizing the main outcomes of the consultation discussions in the forum thread “Meeting legal/regulatory, ethical and sensitive data obligations” (alliance for biodiversity knowledge 2021b), we propose a framework of ten guidelines and functionalities to achieve community building and drive application:

1. Substantially contribute to the conservation and protection of biodiversity (cp. EC 2020).
2. Use language that is CBD-conformant.
3. Show the importance of the digital and extensible specimen infrastructure for the continuing design and implementation of the post-2020 GBF, as well as for the mobilisation and aggregation of data for its monitoring elements and indicators.
4. Strive to openly publish as much data and metadata as possible online.
5. Establish a powerful and well-thought-out layer of user and data access management, ensuring the security of ‘sensitive data’.
6. Encrypt data and metadata where necessary at the level of an individual specimen or digital object; provide access via digital cryptographic keys (see the sketch following this abstract).
7. Link obligations, rights and cultural information regarding use to the digital key (e.g., the CARE principles (Carroll et al. 2020), Local Contexts labels (Local Contexts 2021), licenses, permits, use and loan agreements).
8. Implement a transactional system that records every transaction.
9. Amplify workforce capacity across the digital realm, its work areas and workflows.
10. Do no harm (EC 2020): reduce the social and ecological footprint of the implementation, aiming for a long-term sustainable infrastructure across its life-cycle, including the development, implementation and management stages.

Balancing the needs for open access with those for protection, accountability and sustainability, the framework is designed to function as a robust interface between the (research) infrastructure implementing the extensible network of digital specimen representations and the myriad applications and operations in the real world. With the legal, ethical and data protection layers of the framework in place, the infrastructure will provide legal clarity and security for data providers and users, specifically in the context of access and benefit sharing under the CBD and its Nagoya Protocol. Forming layers of protection, the characteristics and functionalities of the framework are envisioned to be flexible and finely grained, adjustable to fulfill the needs and preferences of a wide range of stakeholders and communities, while remaining focused on the protection and rights of the natural world. Respecting different value systems and national policies, the framework is expected to allow a divergence of views to coexist and to balance differing interests. The infrastructure of the digital extensible specimen network is thus fair and equitable to many providers and users. This foundation has the capacity and potential to bring together the diverse global communities using, managing and protecting biodiversity.
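A minimal sketch of how guidelines 6 and 7 could be realized, assuming Python's `cryptography` package: each specimen record gets its own key, and the obligations attached to that key travel with any grant of access. The registry fields, label text and permit identifier are illustrative assumptions, not part of the consultation outcomes.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # per-specimen cryptographic key (guideline 6)
cipher = Fernet(key)
token = cipher.encrypt(b'{"locality": "precise coordinates of a threatened population"}')

# Guideline 7: obligations, rights and cultural context travel with the key.
# The labels and permit ID below are hypothetical examples.
key_registry = {
    key: {
        "labels": ["TK Attribution (Local Contexts)"],
        "permits": ["ABS-2021-0042"],
        "care_principles": True,
    }
}

# An authorized user who is granted the key also receives its obligations:
obligations = key_registry[key]
plaintext = Fernet(key).decrypt(token)
print(obligations, plaintext)
```

A production system would of course add key escrow, key rotation and the transactional logging called for in guideline 8; this sketch only shows the pairing of per-object encryption with key-linked obligations.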

Logistics ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 46
Author(s):  
Houssein Hellani ◽  
Layth Sliman ◽  
Abed Ellatif Samhat ◽  
Ernesto Exposito

Data transparency is essential in the modern supply chain to improve trust and boost collaboration among partners. In this context, Blockchain is a promising technology for providing full transparency across the entire supply chain. However, Blockchain was originally designed for full transparency and uncontrolled data access, which leads many market actors to avoid it for fear of compromising confidentiality. In this paper, we highlight the requirements and challenges of supply chain transparency. We then investigate a set of supply chain projects that tackle data transparency issues by utilizing Blockchain in their core platforms in different ways. Furthermore, we analyze the techniques and tools these projects utilize to customize transparency. Our analysis shows that further enhancements are needed to strike a balance between the data transparency and the process opacity required by different partners, to ensure the confidentiality of their processes and to control access to sensitive data.
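One widely used way to reconcile on-chain transparency with process opacity, sketched minimally in Python under assumed record fields: publish only a salted hash commitment on the ledger and keep the payload off-chain, so a partner holding the off-chain data can verify integrity without the chain exposing anything confidential. This is a generic commitment technique, not the specific design of any of the surveyed projects.

```python
import hashlib
import json
import os

def commit(record: dict) -> tuple[str, bytes]:
    """Return (on-chain commitment, salt kept off-chain with the record)."""
    salt = os.urandom(16)  # random salt prevents dictionary attacks on the hash
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest(), salt

def verify(record: dict, salt: bytes, commitment: str) -> bool:
    """An authorized partner recomputes the hash from the off-chain data."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest() == commitment

# Hypothetical shipment record; only the hex digest would go on-chain.
record = {"shipment": "SC-001", "origin": "Rotterdam", "temp_c": 4.2}
onchain, salt = commit(record)
assert verify(record, salt, onchain)
```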


2010 ◽  
Vol 5 (2) ◽  
pp. 86
Author(s):  
Scott Marsalis

A Review of: Enger, K. B. (2009). Using citation analysis to develop core book collections in academic libraries. Library & Information Science Research, 31(2), 107-112.

Objective – To test whether acquiring books written by authors of highly cited journal articles is an effective method for building a collection in the social sciences.

Design – Comparison study.

Setting – Academic library at a public university in the US.

Subjects – A total of 1,359 book titles, selected by traditional means (n = 1,267) or based on citation analysis (n = 92).

Methods – The researchers identified highly ranked authors, defined as the most frequently cited authors publishing in journals with an impact factor greater than one, with no more than six journals in any category, using 1999 ISI data. They included authors in the categories Business, Anthropology, Criminology & Penology, Education & Education Research, Political Science, Psychology, Sociology/Anthropology, and General Social Sciences. The Books in Print bibliographic tool was searched to identify monographs published by these authors, and any titles not already owned were purchased. All books in the study were available to patrons by Fall 2005. The researchers collected circulation data in Spring 2007 and used it to compare titles acquired by this method with titles selected by traditional means.

Main Results – Overall, books selected by traditional methods circulated more than those selected by citation analysis, with differences significant at the .001 level. However, at the subject-category level there was no significant difference at the .05 level. Most books selected by the test method circulated one to two times.

Conclusion – Citation analysis can be an effective method for building a relevant book collection, and may be especially effective for identifying works relevant to a discipline beyond the local context.
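A minimal sketch of the selection rule described in the Methods, assuming small illustrative tables rather than the study's actual ISI data: keep journals with an impact factor above one (at most six per category), then rank authors by citation counts in those journals.

```python
import pandas as pd

# Toy stand-ins for the 1999 ISI journal list and author citation counts.
journals = pd.DataFrame({
    "journal": ["J Psych", "Psych Rev", "Soc Forces", "Am J Soc"],
    "category": ["Psychology", "Psychology", "Sociology", "Sociology"],
    "impact_factor": [1.8, 2.4, 0.9, 1.5],
})
citations = pd.DataFrame({
    "author": ["Smith", "Smith", "Jones", "Lee"],
    "journal": ["J Psych", "Psych Rev", "Soc Forces", "Am J Soc"],
    "cite_count": [120, 80, 95, 60],
})

# Impact factor > 1, capped at six journals per subject category.
eligible = (journals[journals.impact_factor > 1]
            .sort_values("impact_factor", ascending=False)
            .groupby("category").head(6))

# Most frequently cited authors publishing in those journals; their
# monographs (located via Books in Print) become purchase candidates.
top_authors = (citations[citations.journal.isin(eligible.journal)]
               .groupby("author")["cite_count"].sum()
               .sort_values(ascending=False))
print(top_authors)
```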


Author(s):  
Christian Groes-Green

Current studies of human poverty and suffering seem to lack reflexivity on the kinds of alternatives or solutions that might be found in the local context of study. With reference to experiences from fieldwork among indigenous Sateré-Mawé immigrants in Manaus, Brazil, it is illustrated how “participation” as an anthropological method is more than a tool for collecting ethnographic data. It is a practice like any other, involving social obligations towards our informants and a necessary engagement in the field of study. As an inseparable part of the production of anthropological knowledge, this engagement should be more explicitly addressed and reflected in anthropological writing and in the very idea of the anthropological project. If this engagement is properly reflected and addressed, a constructive anthropological critique might evolve that points to viable solutions to the social problems and sufferings encountered. Re-inscribing “participation” in the anthropological project as fundamental to any knowledge might also remind us that fieldwork, like any other social engagement, is fraught with social obligations that do not vanish with theoretical distance and abstraction.


2021 ◽  
Author(s):  
Mark Howison ◽  
Mintaka Angell ◽  
Michael Hicklen ◽  
Justine S. Hastings

A Secure Data Enclave is a system that allows data owners to control data access and ensure data security while facilitating approved uses of data by other parties. This model of data use offers additional protections and technical controls for the data owner compared to the more commonly used approach of transferring data from the owner to another party through a data sharing agreement. Under the data use model, the data owner retains full transparency and auditing over the other party’s access, which can be difficult to achieve in practice with even the best legal instrument for data sharing. We describe the key technical requirements for a Secure Data Enclave and provide a reference architecture for its implementation on the Amazon Web Services platform using managed cloud services.
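A minimal sketch of two such technical controls on AWS using boto3, not the authors' full reference architecture: a bucket policy that denies requests arriving from outside an approved VPC endpoint, and a CloudTrail trail that preserves the owner's independent audit log of every access. All resource names and the endpoint ID are illustrative assumptions.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny every S3 request that does not arrive through the enclave's
# approved VPC endpoint (endpoint ID is a hypothetical placeholder).
s3.put_bucket_policy(
    Bucket="enclave-data",  # hypothetical bucket holding the owner's data
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideEnclave",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::enclave-data",
                "arn:aws:s3:::enclave-data/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }],
    }),
)

# Record every API call against the data, so the owner retains the
# transparency and auditing over access that the abstract describes.
trail = boto3.client("cloudtrail")
trail.create_trail(Name="enclave-audit", S3BucketName="enclave-audit-logs")
trail.start_logging(Name="enclave-audit")
```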


2014 ◽  
Vol 8 (2) ◽  
pp. 13-24 ◽  
Author(s):  
Arkadiusz Liber

Introduction: Medical documentation ought to be accessible with the preservation of its integrity as well as the protection of personal data. One of the ways of protecting it against disclosure is anonymization. Contemporary methods ensure anonymity without the possibility of sensitive data access control. It seems that the future of sensitive data processing systems belongs to personalized methods. In the first part of the paper, the k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity and (k,e)-Anonymity methods are discussed. These methods are well-known elementary methods that are the subject of a significant number of publications. The works of Samarati, Sweeney, Wang, Wong and Zhang are credited as the source papers for this part. The selection of these publications is justified by the wider review work built on them, led, for instance, by Fung, Wang, Fu and Yu. It should be noted, however, that the methods of anonymization derive from the methods of statistical database protection developed in the 1970s. Due to their interrelated content and literature references, the first and second parts of this article constitute an integral whole.

Aim of the study: The analysis of methods of anonymization and of the protection of anonymized data, and the study of a new type of privacy protection that enables the entity concerned to control the disclosure of its sensitive data.

Material and methods: Analytical methods, algebraic methods.

Results: Material supporting the choice and analysis of ways of anonymizing medical data, and a new privacy protection solution enabling the control of sensitive data by the entities the data concern.

Conclusions: The paper analyzes solutions for data anonymization that ensure privacy protection in medical data sets. The methods of k-Anonymity, (X,Y)-Anonymity, (α,k)-Anonymity, (k,e)-Anonymity, (X,Y)-Privacy, LKC-Privacy, l-Diversity, (X,Y)-Linkability, t-Closeness, Confidence Bounding and Personalized Privacy are described, explained and analyzed, together with solutions for controlling sensitive data by their owner. Beyond the existing methods of anonymization, the analysis covers methods of protecting anonymized data, in particular δ-Presence, ε-Differential Privacy, (d,γ)-Privacy, (α,β)-Distributing Privacy and protection against (c,t)-isolation. Moreover, the author introduces a new solution for the controlled protection of privacy. The solution is based on marking a protected field and the multi-key encryption of the sensitive value. The suggested way of marking the fields conforms to the XML standard. For encryption, an (n,p) multiple-key cipher was selected, in which any p of the n keys suffice to decrypt the content. The proposed solution makes it possible to apply new methods of controlling the privacy of disclosed sensitive data.
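The paper's (n,p) multiple-key cipher is not specified in this abstract, but a standard way to obtain the "any p keys of n decrypt" property is Shamir's threshold secret sharing. The sketch below, with an arbitrary field prime and a toy secret, illustrates that general construction rather than the author's exact scheme.

```python
import random

PRIME = 2**127 - 1  # Mersenne prime defining the finite field (arbitrary choice)

def split_secret(secret: int, n: int, p: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any p of them reconstruct it."""
    # Random polynomial of degree p-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(p - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(secret=1234567890, n=5, p=3)
assert recover_secret(shares[:3]) == 1234567890  # any 3 of 5 shares suffice
```

In the paper's setting, the shared secret would presumably be the content-encryption key for the marked XML field rather than the sensitive value itself.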


2021 ◽  
Author(s):  
Bridget Payne

Forest carbon farming offers customary landowners an alternative livelihood to socially and environmentally unsustainable logging through the sale of carbon offset credits. REDD+, the global forest carbon scheme to address deforestation in developing countries, has attracted scholarly criticism for the risks it poses to communities. Critics warn that (1) REDD+ benefits may be captured by elites, (2) the scheme threatens forest-dependent livelihoods, (3) it reduces local forest governance, and (4) its results-based payments mechanism can undermine conservation. Community-owned forest carbon farming may mitigate these risks by empowering communities to manage forest resources locally. The Loru project in Vanuatu is the first of its kind: Indigenous landowners legally own the carbon rights and manage the carbon project. This thesis examines community ownership and the social impact of the Loru project on its Indigenous project owners, the ni-Vanuatu Ser clan. The thesis uses a ‘semi’-mixed-methods approach, based primarily on interviews conducted in Espiritu Santo, Vanuatu with Indigenous landowners and supplemented with quantitative data from a monitoring exercise conducted by the author. Grounded in social constructivism, the thesis makes a genuine attempt to decolonize the research process, adopting a self-reflexive approach. The research finds that the project is leading to positive social and economic impacts at the community level. Further, the Loru project is legitimately community-owned and driven, meaning it adapts effectively to the local context. Overall, the findings suggest that implementing REDD+ through a multi-scalar institutional network and building local capacity could mitigate the risks of REDD+ to forest communities.


2009 ◽  
pp. 101-124
Author(s):  
Nicola Adduci

The Italian Social Republic as a historiographic problem proposes an interpretive key for a broader analysis of the Italian Social Republic (Rsi), from its formation to its collapse. The Fascist Republican Party (Pfr) is seen both as the central actor of the Social Republic and as the voice of its overall political project, within a prolonged confrontation and clash with the State. The relations of the Pfr with the different actors in the city of Turin are also explored: the urban community, the Church, the industrialists, the Germans and the Resistance. The interpretation reflects a micro-historical methodological approach and proposes themes hitherto ignored, such as juvenile discontent and the resulting generational break. The purpose is to propose new research tracks that make it possible to go beyond the local context, redefining some wider historiographic questions.
Key words: Fascist Republican Party, Italian Social Republic, Turin, Generation, Community.
Parole chiave: Pfr, Rsi, Torino, generazione, comunità.


2016 ◽  
pp. 1756-1773
Author(s):  
Grzegorz Spyra ◽  
William J. Buchanan ◽  
Peter Cruickshank ◽  
Elias Ekonomou

This paper proposes a new model of identity and of its underlying meta-data. The approach enables the secure spanning of identity meta-data across many boundaries, such as health-care, financial and educational institutions, including all others that store and process sensitive personal data. It introduces the new concepts of a Compound Personal Record (CPR) and a Compound Identifiable Data (CID) ontology, which aim to move toward an ‘own your own data’ model. The CID model ensures the authenticity of identity meta-data; high availability via a unified Cloud-hosted XML data structure; and privacy through encryption, obfuscation and anonymity applied to Ontology-based XML distributed content. Additionally, CID is enabled for identity federation via XML ontologies. The paper also suggests that access to sensitive data should be strictly governed through an access control model with granular policy enforcement on the service side. This includes the involvement of relevant access control model entities, which are enabled to authorize ad-hoc break-glass data access, which should give high accountability for data access attempts.
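A minimal sketch of the break-glass idea in Python: a request that fails the granular policy can still be authorized ad hoc in an emergency, but only with a stated reason and a mandatory audit record, which is where the accountability comes from. The roles, resource names and logging sink are illustrative assumptions, not the paper's access control entities.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("cid.access")  # hypothetical audit sink

# Granular, service-side policy: (role, resource) -> permitted actions.
POLICY = {("clinician", "CPR/patient-42"): {"read"}}

def authorize(user: str, role: str, resource: str, action: str,
              break_glass: bool = False, reason: str = "") -> bool:
    if action in POLICY.get((role, resource), set()):
        return True                      # normal policy decision
    if break_glass and reason:
        # Ad-hoc emergency override: always allowed, never silent.
        audit.warning("BREAK-GLASS %s %s %s on %s reason=%r",
                      datetime.now(timezone.utc).isoformat(),
                      user, action, resource, reason)
        return True
    audit.info("DENY %s %s on %s", user, action, resource)
    return False

# A paramedic outside the policy invokes the override in an emergency:
authorize("dr_lee", "paramedic", "CPR/patient-42", "read",
          break_glass=True, reason="unconscious patient, emergency care")
```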


Author(s):  
Bernhard Kittel ◽  
Sylvia Kritzinger ◽  
Hajo Boomgaarden ◽  
Barbara Prainsack ◽  
Jakob-Moritz Eberl ◽  
...  

Systematic and openly accessible data are vital to the scientific understanding of the social, political, and economic consequences of the COVID-19 pandemic. This article introduces the Austrian Corona Panel Project (ACPP), which has generated a unique, publicly available data set from late March 2020 onwards. ACPP has been designed to capture the social, political, and economic impact of the COVID-19 crisis on the Austrian population on a weekly basis. The thematic scope of the study covers several core dimensions related to the individual and societal impact of the COVID-19 crisis. The panel survey has a sample size of approximately 1500 respondents per wave. It contains questions that are asked every week, complemented by domain-specific modules to explore specific topics in more detail. The article presents details on the data collection process, data quality, the potential for analysis, and the modalities of data access pertaining to the first ten waves of the study.

