Research-Embedded Health Librarians as Facilitators of a Multidisciplinary Scoping Review

Author(s):  
Gina Brander ◽  
Colleen Pawliuk

Program objective: To advance the methodology and improve the data management of the scoping review through the integration of two health librarians onto the clinical research team. Participants and setting: Two librarians were embedded on a multidisciplinary, geographically dispersed pediatric palliative and end-of-life research team conducting a scoping review headquartered at the British Columbia Children's Hospital Research Institute. Program: The team's embedded librarians guided and facilitated all stages of a scoping review of 180 conditions and 10 symptoms. Outcomes: The scoping review was enhanced in quality and efficiency through the integration of librarians onto the team. Conclusions: Health librarians embedded on clinical research teams can help guide and facilitate the scoping review process to improve workflow management and overall methodology. Librarians are particularly well equipped to solve challenges arising from large data sets, broad research questions with a high level of specificity, and geographically dispersed team members. Knowledge of emerging and established citation-screening and bibliographic software and review tools can help librarians to address these challenges and provide efficient workflow management.

2016 ◽  
Vol 12 (11) ◽  
pp. 1020-1028 ◽  
Author(s):  
David E. Gerber ◽  
Torsten Reimer ◽  
Erin L. Williams ◽  
Mary Gill ◽  
Laurin Loudat Priddy ◽  
...  

This article describes the care processes for a 64-year-old man with newly diagnosed advanced non–small-cell lung cancer who was enrolled in a first-line clinical trial of a new immunotherapy regimen. The case highlights the concept of multiteam systems in cancer clinical research and clinical care. Because clinical research represents a highly dynamic entity—with studies frequently opening, closing, and undergoing modifications—concerted efforts of multiple teams are needed to respond to these changes while continuing to provide consistent, high-level care and timely, accurate clinical data. The case illustrates typical challenges of multiteam care processes. Compared with clinical tasks that are routinely performed by single teams, multiple-team care greatly increases the demands for communication, collaboration, cohesion, and coordination among team members. As the case illustrates, the research team and clinical team described here are separated, resulting in suboptimal function. Individual team members interact predominantly with members of their own team. A considerable number of team members lack regular interaction with anyone outside their team. Accompanying this separation, the teams enact rivalries that impede collaboration. The teams have misaligned goals and competing priorities. Collective identity and cohesion across the two teams are low. Research team and clinical team members have limited knowledge of the roles and work of individuals outside their team. Recommendations to increase trust and collaboration are provided. Clinical providers and researchers may incorporate these themes into development and evaluation of multiteam systems, multidisciplinary teams, and cross-functional teams within their own institutions.


2013 ◽  
Vol 2013 ◽  
pp. 1-15 ◽  
Author(s):  
Domenico Talia

The wide availability of high-performance computing systems, Grids, and Clouds has allowed scientists and engineers to implement increasingly complex applications that access and process large data repositories and run scientific experiments in silico on distributed computing platforms. Most of these applications are designed as workflows that include data analysis, scientific computation methods, and complex simulation techniques. Scientific applications require tools and high-level mechanisms for designing and executing complex workflows. For this reason, in recent years, many efforts have been devoted to the development of distributed workflow management systems for scientific applications. This paper discusses basic concepts of scientific workflows and presents workflow system tools and frameworks used today for the implementation of applications in science and engineering on high-performance computers and distributed systems. In particular, the paper reports on a selection of workflow systems widely used for solving scientific problems and discusses some open issues and research challenges in the area.
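As a concrete illustration of the workflow concept discussed above, the sketch below expresses a small analysis pipeline as a dependency graph of tasks and runs it in topological order. The task names and steps are hypothetical stand-ins for the data analysis, computation, and simulation stages such systems orchestrate; real workflow management systems add scheduling, distribution across resources, and fault tolerance.

```python
# Minimal sketch of a scientific workflow as a dependency graph (DAG) of tasks.
# Task names and processing steps are hypothetical; workflow management systems
# like those surveyed in the paper add scheduling, distribution, and recovery.
from graphlib import TopologicalSorter

def fetch_data():      print("fetching raw data from repository")
def clean_data():      print("cleaning and normalizing records")
def run_simulation():  print("running in-silico simulation")
def analyze():         print("statistical analysis of simulation output")
def report():          print("writing summary report")

# Each task maps to the set of tasks it depends on.
dag = {
    "clean_data":     {"fetch_data"},
    "run_simulation": {"clean_data"},
    "analyze":        {"run_simulation"},
    "report":         {"analyze"},
}
tasks = {
    "fetch_data": fetch_data, "clean_data": clean_data,
    "run_simulation": run_simulation, "analyze": analyze, "report": report,
}

# Execute tasks in an order that respects all dependencies.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()
```

Declaring the pipeline as a graph, rather than as a fixed script, is what allows a workflow engine to parallelise independent branches and to resume a run after a failure.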


2020 ◽  
Vol 1 (1) ◽  
pp. 31-40
Author(s):  
Hina Afzal ◽  
Arisha Kamran ◽  
Asifa Noreen

Because of the rapid changes happening in technology, today's market requires a high level of interaction between educators and the new graduates entering it. The demand for IT-related jobs is higher than in all other fields. In this paper, we discuss a survival analysis of two parallel programming languages in the market, Python and R. Data sets are growing large and traditional methods are not capable of handling them, so we applied current data mining techniques through the Python and R programming languages. It took several months of effort to gather this amount of data and process it with data mining techniques in Python and R, but the results showed that both languages have had the same rate of growth over the past years.
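The abstract reports a comparison of growth rates rather than a specific procedure; a minimal sketch of such a comparison, using hypothetical yearly job-posting counts and pandas, might look like the following.

```python
# Hypothetical yearly counts of job postings mentioning each language; the real
# study gathered its data over several months from market sources.
import pandas as pd

counts = pd.DataFrame(
    {"python": [1200, 1500, 1900, 2400], "r": [800, 1000, 1270, 1600]},
    index=[2016, 2017, 2018, 2019],
)

# Year-over-year growth rate for each language.
growth = counts.pct_change().dropna()
print(growth)
print("mean growth:", growth.mean().round(3).to_dict())
```

With these toy numbers both columns grow at a similar average rate, which is the kind of comparison the study draws from its mined data.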


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10545
Author(s):  
Matt A. White ◽  
Nicolás E. Campione

Classifying isolated vertebrate bones to a high level of taxonomic precision can be difficult. Many of Australia's Cretaceous terrestrial vertebrate fossil-bearing deposits, for example, produce large numbers of isolated bones and very few associated or articulated skeletons. Identifying these often fragmentary remains beyond high-level taxonomic ranks, such as Ornithopoda or Theropoda, is difficult, and those classified to lower taxonomic levels are often debated. The ever-increasing accessibility of 3D-based comparative techniques has allowed palaeontologists to undertake a variety of shape analyses, such as geometric morphometrics, that, although powerful and often ideal, require the recognition of diagnostic landmarks and the generation of sufficiently large data sets to detect clusters and accurately describe major components of morphological variation. As a result, such approaches are often outside the scope of basic palaeontological research that aims to simply identify fragmentary specimens. Herein we present a workflow in which pairwise comparisons between fragmentary fossils and better known exemplars are digitally achieved through three-dimensional mapping of their surface profiles and the iterative closest point (ICP) algorithm. To showcase this methodology, we compared a fragmentary theropod ungual (NMV P186153) from Victoria, Australia, identified as a neovenatorid, with the manual unguals of the megaraptoran Australovenator wintonensis (AODF604). We discovered that NMV P186153 was a near identical match to AODF604 manual ungual II-3, differing only in size, which, given their 10–15 Ma age difference, suggests stasis in megaraptoran ungual morphology throughout this interval. Although useful, our approach is not free of subjectivity; care must be taken to eliminate the effects of broken and incomplete surfaces and to identify the human errors incurred during scaling, such as through replication. Nevertheless, this approach will help to evaluate and identify fragmentary remains, adding a quantitative perspective to an otherwise qualitative endeavour.
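The ICP step itself is standard rigid registration. The following is a minimal, self-contained sketch of the general algorithm (nearest-neighbour correspondences plus an SVD-based rigid fit), not the authors' actual pipeline; the point clouds here are synthetic rather than surface scans of unguals.

```python
# Minimal iterative closest point (ICP) sketch for aligning two point clouds.
# Illustrative only: the "scan" below is a synthetic cloud and its rotated copy.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 3))                       # "exemplar" cloud
theta = np.radians(20)
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
fragment = reference @ rot.T + np.array([0.5, -0.2, 0.1])   # misaligned "fragment"

aligned = icp(fragment, reference)
print("mean residual after ICP:", np.linalg.norm(aligned - reference, axis=1).mean())
```

In practice the residual distances between the aligned surfaces, rather than a single mean, are what allow a fragmentary specimen to be compared against several candidate exemplars.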


2021 ◽  
Author(s):  
Stephen Taylor

Molecular biology experiments are generating an unprecedented amount of information from a variety of different experimental modalities. DNA sequencing machines, proteomics, mass cytometry, and microscopes generate huge amounts of data every day. Not only is the data large, but it is also multidimensional. Understanding trends and getting actionable insights from these data requires techniques that allow comprehension at a high level but also insight into what underlies these trends. Many small errors or poor summarization can lead to false results and reproducibility issues in large data sets. Hence it is essential that we do not cherry-pick results to suit a hypothesis but instead examine all data and publish accurate insights in a data-driven way. This article gives an overview of some of the problems faced by the researcher in understanding epigenetic changes (which are related to changes in the physical structure of DNA) when presented with raw analysis results using visualization methods. We also discuss the new challenges posed by machine learning, which can be helped by visualization.
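The article is an overview rather than a protocol, but its warning about poor summarization is easy to illustrate: a single summary statistic can hide structure that a plot of the full data reveals. The toy data below are synthetic.

```python
# Toy illustration of why summarization can mislead: two samples with nearly the
# same mean but very different distributions. Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
unimodal = rng.normal(0.0, 1.0, 5000)
bimodal = np.concatenate([rng.normal(-2, 0.5, 2500), rng.normal(2, 0.5, 2500)])

print("means:", unimodal.mean().round(2), bimodal.mean().round(2))  # nearly equal

fig, axes = plt.subplots(1, 2, sharey=True)
axes[0].hist(unimodal, bins=60)
axes[0].set_title("unimodal")
axes[1].hist(bimodal, bins=60)
axes[1].set_title("bimodal, same mean")
plt.show()
```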


2020 ◽  
pp. 0887302X2093119 ◽  
Author(s):  
Rachel Rose Getman ◽  
Denise Nicole Green ◽  
Kavita Bala ◽  
Utkarsh Mall ◽  
Nehal Rawat ◽  
...  

With the proliferation of digital photographs and the increasing digitization of historical imagery, fashion studies scholars must consider new methods for interpreting large data sets. Computational methods to analyze visual forms of big data have been underway in the field of computer science through computer vision, where computers are trained to “read” images through a process called machine learning. In this study, fashion historians and computer scientists collaborated to explore the practical potential of this emergent method by examining a trend related to one particular fashion item—the baseball cap—across two big data sets—the Vogue Runway database (2000–2018) and the Matzen et al. Streetstyle-27K data set (2013–2016). We illustrate one implementation of high-level concept recognition to map a fashion trend. Tracking trend frequency helps visualize larger patterns and cultural shifts while creating sociohistorical records of aesthetics, which benefits fashion scholars and industry alike.
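The trend-frequency measure described above amounts to counting, per year, the share of images in which the classifier detects the item. A minimal sketch of that tabulation, with hypothetical detection results in place of the real model outputs, could look like this.

```python
# Hypothetical per-image classifier output: year of the image and whether a
# baseball cap was detected. The real study used trained computer-vision models
# on the Vogue Runway and Streetstyle-27K data sets.
import pandas as pd

detections = pd.DataFrame({
    "year":    [2013, 2013, 2014, 2014, 2015, 2015, 2016, 2016],
    "has_cap": [0,    1,    0,    1,    1,    0,    1,    1],
})

# Trend frequency = fraction of images per year containing the item.
trend = detections.groupby("year")["has_cap"].mean()
print(trend)
```

Plotting such a per-year series for each data set is what allows runway and street-style trajectories of the same item to be compared side by side.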


2010 ◽  
Vol 13 (1) ◽  
pp. 101-108 ◽  
Author(s):  
Johan Fellman ◽  
Aldur W. Eriksson

Attempts have been made to identify factors influencing the sex ratio at birth (number of males per 100 females). Statistical analyses have shown that comparisons between sex ratios demand large data sets. The secondary sex ratio has been believed to vary inversely with the frequency of prenatal losses. This hypothesis suggests that the ratio is highest among singletons, medium among twins and lowest among triplets. Birth data in Sweden for the period 1869–2004 showed that among live births the secondary sex ratio was on average 105.9 among singletons, 103.2 among twins and 99.1 among triplets. The secondary sex ratio among stillbirths for both singletons and twins started at a high level, around 130, in the 1860s, but approached live birth values in the 1990s. This trend is associated with the decrease and convergence of stillbirth rates among males and females. For detailed studies, we considered data for Sweden in 1869–1878 and in 1901–1967. Marital status or place of residence (urban or rural) had no marked influence on the secondary sex ratio among twins. For triplets, the sex ratio showed large random fluctuations and was on average low. During the period 1901–1967, 20 quadruplet, two quintuplet and one sextuplet set were registered. The sex ratio was low, around 92.0.
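The observation that comparisons between sex ratios demand large data sets follows from binomial sampling variability; the short calculation below, using illustrative sample sizes rather than the Swedish birth data, shows how wide the uncertainty on a sex ratio is at small n.

```python
# Illustrative calculation: 95% normal-approximation confidence interval for the
# secondary sex ratio (males per 100 females) at different sample sizes.
# Sample sizes are arbitrary; the assumed proportion of male births (0.514,
# a ratio near 105.9) matches the singleton figure quoted in the abstract.
import math

p = 0.514  # assumed proportion of male births

def ratio(q):
    """Convert a proportion of males to males per 100 females."""
    return 100 * q / (1 - q)

for n in (1_000, 10_000, 1_000_000):
    se = math.sqrt(p * (1 - p) / n)
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"n={n:>9,}: sex-ratio 95% CI roughly {ratio(lo):.1f} to {ratio(hi):.1f}")
```

At a few thousand births the interval spans several points of sex ratio, which is why differences of the size reported here (105.9 vs. 103.2 vs. 99.1) only become detectable with very large registries.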


2011 ◽  
Vol 2011 ◽  
pp. 1-9 ◽  
Author(s):  
Robert Oostenveld ◽  
Pascal Fries ◽  
Eric Maris ◽  
Jan-Mathijs Schoffelen

This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages.
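FieldTrip itself is a MATLAB toolbox; purely as an illustration of one analysis step it offers, multitaper spectral estimation, the sketch below shows the underlying idea in Python with DPSS (Slepian) tapers from SciPy. It does not use FieldTrip's API, and the signal is synthetic.

```python
# Illustration of multitaper spectral estimation (one technique the toolbox
# offers); this is not FieldTrip code, just the underlying idea with SciPy.
import numpy as np
from scipy.signal.windows import dpss

fs = 250.0                                   # sampling rate in Hz (arbitrary)
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# DPSS (Slepian) tapers: time-bandwidth product NW=4, using 2*NW-1 tapers.
tapers = dpss(t.size, NW=4, Kmax=7)

# Average the periodograms of the tapered signal across tapers.
spectra = np.abs(np.fft.rfft(tapers * signal, axis=1)) ** 2
mt_spectrum = spectra.mean(axis=0)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

print("peak frequency:", freqs[mt_spectrum.argmax()], "Hz")   # close to 10 Hz
```

Averaging across orthogonal tapers trades a little frequency resolution for a much lower-variance spectral estimate, which is the usual motivation for multitaper methods in electrophysiology.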


Author(s):  
Б.А. Абжалова ◽  
А.Е. Шахарова ◽  
B. Abzhalova ◽  
A. Shakharova

The article examines the key aspects of informatization of the external state audit bodies in the Republic of Kazakhstan, which are assessed as being at a fairly high level. However, the author notes that the analysis of large data sets is not feasible because the data are stored in disparate sources and because the data provided are of poor quality, inaccurate, obsolete, and so on. Given the huge amount of information that must be analyzed continuously to ensure fast and accurate decisions, effective state audit cannot exist and develop without a highly effective management system based on digital technologies. The article analyzes the main results of the transformation of state audit over 2015-2019 and identifies the main directions for improving the activities of external state audit bodies through the use of modern digital technologies. The author also draws conclusions and suggests ways to solve many problems in the field of informatization of state audit bodies, in particular the Accounts Committee of the Republic of Kazakhstan. For further digital transformation of audit activities, it is proposed to increase the efficiency of the existing information system and to create a qualitatively new unified digital transaction environment by integrating the databases of state bodies.


Author(s):  
Penelope Smyth ◽  
Clair Birkman ◽  
Carol S Hodgson

Background: It is challenging to develop professionalism curricula for all members of a medical community of practice. We collected and developed professionalism vignettes for an interactive professionalism curriculum around our institutional professionalism norms, following social constructivist learning theory principles. Methods: Medical students, residents, physicians, nurses and research team members provided real-life professionalism vignettes. We collected stories about professionalism framed within the categories of our Faculty's code of conduct: honesty; confidentiality; respect; responsibility; and excellence. Altruism was taken from the Nursing Code of Ethics. Two expert committees anonymously rated and then discussed vignettes on their educational value and degree of unprofessional behaviour. Through consensus, the research team finalized vignette selection. Results: Eighty cases were submitted: 22 from another study; 20 from learners and nurses; 30 from physicians; and eight from research team members. Two expert committees reviewed 53 and 42 vignettes, respectively. The final 18 were selected based upon: educational value; diversity in professionalism ratings; and representation of the professionalism categories. Conclusion: Realistic and relevant professionalism vignettes can be systematically gathered from a community of practice, and their representation of an institutional norm, educational value, and level of professional behaviour can be judged by experts with a high level of consensus.

