What Is the Best Way For Developers to Learn New Software Tools? An Empirical Comparison Between a Text and a Video Tutorial

Author(s):  
Verena Käfer ◽  
Daniel Kulesz ◽  
Stefan Wagner
2016 ◽  

Tutorials for software developers are supposed to help them adapt to new tools quickly. While in the early days of computing mostly text tutorials were available, nowadays software developers can choose among a huge number of tutorials for almost any popular software tool. However, little research has been conducted to understand how text tutorials differ from other tutorials, which tutorial types are preferred and, especially, which tutorial types yield the best learning experience in terms of efficiency and effectiveness. To evaluate these questions, we converted a “proven” video tutorial for a novel software tool into a content-equivalent text tutorial. We then conducted an experiment in three groups in which 42 undergraduate students from a software engineering course were asked to operate the software tool after using a tutorial: the first group was provided only with the video tutorial, the second group only with the text tutorial and the third group with both. Surprisingly, the differences in terms of efficiency are almost negligible: participants using only the text tutorial completed the tutorial faster than participants with the video tutorial, but the participants using only the video tutorial applied the learned content faster, achieving roughly the same bottom-line performance. We also found that when both tutorial types are offered, participants clearly prefer video tutorials for learning new content but text tutorials for looking up “missed” information. So while it would be ideal if software tool makers offered both tutorial types, we think it is more efficient to produce only text tutorials – provided you manage to motivate developers to use them.


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2 - 4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer intensive, and the medium-term future is full of suggestions entailing an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential of order-of-magnitude increases in computing power and should, therefore, be considered for the most computing-intensive tasks. We have studied both shared-memory-based computer architectures, like the BBN Butterfly, and local-memory-based architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens (“single particles”) using the so-called Random Conical Tilt Series Method. We start from a pair of images presenting the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on the 3D reconstruction process may be run automatically.
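The matrix relating the tilted and untilted fields, derived from a few interactively marked point pairs, can be estimated by a least-squares fit. The sketch below is a minimal illustration of that general idea (a 2D affine fit with NumPy), not the authors' actual implementation; the function name `fit_affine` and the example points are assumptions for demonstration.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of corresponding 2D coordinates, N >= 3.
    Returns a 2x3 matrix M such that dst ~= M @ [x, y, 1].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Design matrix with a homogeneous column of ones.
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ X = dst in the least-squares sense (X is 3x2).
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T  # shape (2, 3): linear part plus translation column

# Hypothetical example: marked features related by a pure 30-degree rotation.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
untilted = np.array([[0, 0], [10, 0], [0, 10], [7, 3]], dtype=float)
tilted = untilted @ R.T
M = fit_affine(untilted, tilted)
```

With exact correspondences the recovered linear part equals the rotation and the translation column is zero; with noisy marked features the fit averages the error over all pairs.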


Author(s):  
Debi A. LaPlante ◽  
Heather M. Gray ◽  
Pat M. Williams ◽  
Sarah E. Nelson

Abstract. Aims: To discuss and review the latest research related to gambling expansion. Method: We completed a literature review and empirical comparison of peer-reviewed findings related to gambling expansion and subsequent gambling-related changes among the population. Results: Although gambling expansion is associated with changes in gambling and gambling-related problems, empirical studies suggest that these effects are mixed and the available literature is limited. For example, the peer-reviewed literature suggests that most post-expansion gambling outcomes (i.e., 22 of 34 possible expansion outcomes; 64.7%) indicate no observable change or a decrease in gambling outcomes, and a minority (i.e., 12 of 34 possible expansion outcomes; 35.3%) indicate an increase in gambling outcomes. Conclusions: Empirical data related to gambling expansion suggest that its effects are more complex than frequently considered; however, evidence-based intervention might help prepare jurisdictions to deal with potential consequences. Jurisdictions can develop and evaluate responsible gambling programs to try to mitigate the impacts of expanded gambling.


2019 ◽  
Vol 45 (7) ◽  
pp. 1151-1165 ◽  
Author(s):  
Antonia Krefeld-Schwalb ◽  
Chris Donkin ◽  
Ben R. Newell ◽  
Benjamin Scheibehenne

2016 ◽  
Vol 21 (10) ◽  
pp. 48-49
Author(s):  
Guntram Doelfs
At Asklepios, thanks to a software tool, managers and chief physicians always know exactly how quality currently stands across all of the group's hospitals. In this interview, project manager Stefan Kruse describes the advantages of the IT solution.

