Twitter as Research Data: Tools, Costs, Skillsets, and Lessons Learned

2021 ◽  
pp. 1-23
Author(s):  
Kaiping Chen ◽  
Sijia Yang ◽  
Zening Duan

2017 ◽  
Vol 24 ◽  
Author(s):  
M. David Merrill

In this paper I chronicle my 50+ year career, from my interest in making education more effective, through an epiphany about theories, to some of my published work that, for a time, gained the attention of others in the field of instructional technology. My extensive experience with computer-assisted learning ranges from early efforts to teach concepts to attempts to design automated authoring systems. My most recent work attempts to identify underlying principles common to most theories of instruction. The professional press publishes reports of theory, research, data, prescriptions, and opinions, but seldom do we get the back story. Where did these ideas originate? What events led to a particular theoretical or research approach? What were the challenges—personal and interpersonal—that affected a given approach, theory or research study? In this paper, in addition to identifying a few of the most notable contributions to this literature, I provide some of the back story that shaped my career and inspired or significantly influenced my work. I also highlight some of the lessons learned along the way.


FORUM ◽  
2017 ◽  
Vol 15 (1) ◽  
pp. 51-66
Author(s):  
Edina Robin

Abstract According to the results of translation-based empirical research within the descriptive paradigm, transfer operations and the shifts that occur as a result of translators’ interventions are governed by norms, which represent general, standard practices built on informal social consensus (Toury 1995). Based on the scientific analysis of norms and general rules, the so-called translation universals were formulated, describing the factors and qualities that distinguish translations both from source texts and from authentic texts not produced through translation but originally written in the target language (Baker 1993). In the present study, I aim to summarise the theoretical conclusions drawn so far from the description of these observed translational features, as well as the results of research into the linguistic phenomena and laws that characterise translations in general. I then synthesise and graphically represent the lessons learned in a theoretical model, which will hopefully help readers understand and process the research data gathered so far and in the future.


2017 ◽  
Vol 11 (2) ◽  
pp. 39-47 ◽  
Author(s):  
Laura Rueda ◽  
Martin Fenner ◽  
Patricia Cruse

Data are the infrastructure of science and serve as the groundwork for scientific pursuits. Data publication has emerged as a game-changing development in scholarly communication: data are not only the outputs of research but also a gateway to new hypotheses, enabling new scientific insights and driving innovation. And yet stakeholders across the scholarly ecosystem, including practitioners, institutions, and funders of scientific research, are increasingly concerned about the lack of sharing and reuse of research data. Across disciplines and countries, researchers, funders, and publishers are pushing for a more effective research environment, minimizing the duplication of work and maximizing the interaction between researchers. Availability, discoverability, and reproducibility of research outputs are key factors to support data reuse and make possible this new environment of highly collaborative research. An interoperable e-infrastructure is imperative in order to develop new platforms and services for data publication and reuse. DataCite has been working to establish and promote methods to locate, identify and share information about research data. Along with service development, DataCite supports and advocates for the standards behind persistent identifiers for data and other research outputs, in particular DOIs (Digital Object Identifiers). Persistent identifiers allow different platforms to exchange information consistently and unambiguously and provide a reliable way to track citations and reuse. Because of this, data publication can become a reality from a technical standpoint, but the adoption of data publication and data citation as a practice by researchers is still in its early stages. Since 2009, DataCite has been developing a series of tools and services to foster the adoption of data publication and citation among the research community. Over the years, DataCite has worked in close collaboration with interdisciplinary partners on these issues and has gained insight into the development of data publication workflows. This paper describes the different types of actions taken and the lessons learned by DataCite.
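To make the identifier mechanism concrete, here is a minimal sketch (not taken from the paper) of how any platform can retrieve the citation metadata behind a dataset DOI through DataCite's public REST API. The DOI used below is a hypothetical placeholder and would need to be replaced with a real, registered one.

```python
# Minimal sketch: fetch the citation metadata that a DataCite DOI resolves to,
# via DataCite's public REST API (https://api.datacite.org). The DOI below is
# a hypothetical placeholder; substitute any DataCite-registered dataset DOI.
import requests


def fetch_datacite_metadata(doi: str) -> dict:
    """Return the metadata attributes DataCite holds for a registered DOI."""
    resp = requests.get(
        f"https://api.datacite.org/dois/{doi}",
        headers={"Accept": "application/vnd.api+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]


record = fetch_datacite_metadata("10.1234/example-dataset")  # placeholder DOI
# Titles, creators, publisher, and year form the core of a data citation.
print(record["titles"][0]["title"])
print(", ".join(c["name"] for c in record["creators"]))
print(record["publisher"], record["publicationYear"])
```

Because every registered DOI exposes the same metadata schema through this endpoint, platforms can exchange and track citation information without bilateral agreements, which is the interoperability the abstract refers to.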


2011 ◽  
Vol 6 (2) ◽  
pp. 232-244 ◽  
Author(s):  
Robin Rice ◽  
Jeff Haywood

During the last decade, national and international attention has increasingly focused on issues of research data management and access to publicly funded research data. The pressure brought to bear on researchers to improve their data management and data sharing practice has come from research funders seeking to add value to expensive research and to solve cross-disciplinary grand challenges; from publishers seeking to be responsive to calls for transparency and reproducibility of the scientific record; and from the public seeking to gain and re-use knowledge for their own purposes using new online tools. Meanwhile, higher education institutions have been rather reluctant to assert their role in either incentivising or supporting their academic staff in meeting these more demanding requirements for research practice, partly due to a lack of knowledge as to how to provide suitable assistance or facilities for data storage and curation/preservation. This paper discusses the activities and drivers behind one institution’s recent attempts to address this gap, with reflection on lessons learned and future directions.


2021 ◽  
Author(s):  
Jennifer M. Schopf ◽  
Katrina Turner ◽  
Dan Doyle ◽  
Andrew Lake ◽  
Jason Leigh ◽  
...  

Abstract Data sharing is required for research collaborations, but effective data transfer performance continues to be difficult to achieve. The NetSage Measurement and Analysis Framework can assist in understanding research data movement. It collects a broad set of monitoring data and builds performance Dashboards to visualize the data. Each Dashboard is specifically designed to address a well-defined analysis need of the stakeholders. This paper describes the design methodology, the resulting architecture, the development approach and lessons learned, and a set of discoveries that NetSage Dashboards made possible.
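The abstract names the Dashboards but not their internals; as a rough illustration of the kind of per-path aggregation a transfer-performance dashboard might visualize, the sketch below computes aggregate throughput per source-destination pair. The flow records and field names are hypothetical, not NetSage's actual data model.

```python
# Illustrative sketch only: aggregate throughput per source-destination pair
# from flow records (source, destination, bytes transferred, elapsed seconds).
# All records and field names here are hypothetical.
from collections import defaultdict

flows = [
    {"src": "site-A", "dst": "site-B", "bytes": 8e9, "seconds": 60},
    {"src": "site-A", "dst": "site-B", "bytes": 2e9, "seconds": 90},
    {"src": "site-C", "dst": "site-B", "bytes": 5e9, "seconds": 30},
]

totals = defaultdict(lambda: {"bytes": 0.0, "seconds": 0.0})
for f in flows:
    key = (f["src"], f["dst"])
    totals[key]["bytes"] += f["bytes"]
    totals[key]["seconds"] += f["seconds"]

for (src, dst), t in sorted(totals.items()):
    gbps = t["bytes"] * 8 / t["seconds"] / 1e9  # aggregate Gbit/s on this path
    print(f"{src} -> {dst}: {gbps:.2f} Gbit/s")
```

A per-path summary like this is what lets stakeholders spot chronically slow transfer routes at a glance, which is the sort of well-defined analysis need each Dashboard targets.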


2015 ◽  
Vol 49 (4) ◽  
pp. 461-474 ◽  
Author(s):  
Birgit Schmidt ◽  
Jens Dierkes

Purpose – The purpose of this paper is to describe the design and implementation of policies, digital infrastructures and hands-on support for eResearch at the University of Göttingen. Core elements of this activity are to provide support for research data management to researchers of all disciplines and to coordinate on-campus activities. These activities are actively aligned with disciplinary, national and international policies and e-infrastructures.
Design/methodology/approach – The process of setting up and implementing an institutional data policy and its necessary communications and workflows are described and analysed. A first assessment of service development and uptake is provided in the area of embedded research data support.
Findings – A coordination unit for eResearch brings together knowledge about methods and tools that are otherwise scattered across disciplinary units. This provides a framework for policy implementation and improves the quality of institutional research environments.
Practical implications – The study provides information about an institutional implementation strategy for infrastructure and services related to research data. The lessons learned allow insights into current challenges and the work ahead.
Originality/value – With a cross-cutting, “horizontal” approach, the Göttingen eResearch Alliance brings together two research-orientated infrastructure providers, a library and an IT service, which combine their services and expertise to develop an eResearch service and support portfolio for the Göttingen Campus.


2013 ◽  
Vol 8 (2) ◽  
pp. 123-133
Author(s):  
Laura Molloy ◽  
Simon Hodson ◽  
Meik Poschen ◽  
Jonathan Tedds

The work of the Jisc Managing Research Data programme is – along with the rest of the UK higher education sector – taking place in an environment of increasing pressure on research funding. In order to justify the investment made by Jisc in this activity – and to help make the case more widely for the value of investing time and money in research data management – individual projects and the programme as a whole must be able to express clearly the resultant benefits to the host institutions and to the broader sector. This paper describes a structured approach to measuring and describing the benefits that these projects provide to funders, institutions and researchers. We outline the context of the programme and its work; discuss the drivers and challenges of gathering evidence of benefits; specify benefits as distinct from aims and outputs; present emerging findings and the types of metrics and other evidence that projects have provided; explain the value of gathering evidence in a structured way to demonstrate the benefits generated by work in this field; and share lessons learned from progress to date.

