A National Survey of Microcomputer Use by Academic Psychologists

1989 ◽  
Vol 16 (3) ◽  
pp. 145-147 ◽  
Author(s):  
James V. Couch ◽  
Michael L. Stoloff

A national survey of academic psychologists indicated that the use of microcomputers for instructional purposes was increasing and that such use was unrelated to department size. Apple and IBM microcomputers, the predominant brands, were represented about equally. Software was used most frequently for statistical analysis and word processing. Microcomputers were used most often in research methods and statistics courses.

1987 ◽  
Vol 14 (2) ◽  
pp. 92-94 ◽  
Author(s):  
Michael L. Stoloff ◽  
James V. Couch

The various uses of computers in instruction, faculty research, and departmental administration were assessed by a survey of the 36 psychology departments at four-year colleges in Virginia. Complete responses were obtained from 29 schools. The results indicated that many faculty and clerical staff use microcomputers for a variety of purposes, including word processing, statistical analysis, data-base management, and test generation. Students frequently use microcomputers for statistical analysis and word processing. Simulation and tutorial programs are in use at over half of the responding departments. More than 50% of the schools indicated that computer use is required in undergraduate statistics or research courses, and computers are being used in many other courses as well. Apple II computers are the most popular, although IBM and 13 other brands are also being used. Our data may be useful for academic psychologists who need to know how computers are used in psychology programs, and especially for those who are planning to expand their use of computers.


2017 ◽  
Vol 9 (1) ◽  
pp. 120 ◽  
Author(s):  
Gina Valdivieso ◽  
Efstathios Stefos ◽  
Ruth Lalama

The present study describes the social and educational characteristics of the population of the Ecuadorian Amazon, using data from the 2014 National Survey of Employment, Unemployment and Underemployment. A descriptive statistical analysis presents the frequencies, percentages, and graphs of variables covering the area in which people live, gender, age, ethnic self-identification, language spoken, marital status, and level of instruction. Other variables include computer and internet use, place of birth, the reason people live in the Amazon region, type of activity or inactivity, how they feel about their jobs, and occupational group. A factorial analysis was also used to identify the main criteria of differentiation and the clusters of people with similar characteristics.
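A minimal sketch of the descriptive step named in the abstract, tabulating frequencies and percentages of categorical survey variables. The column names and values are hypothetical stand-ins, not the actual survey variables:

```python
# Frequencies and percentages of categorical survey variables.
# Column names and values are hypothetical; the real data come from
# the 2014 survey.
import pandas as pd

df = pd.DataFrame({
    "area": ["urban", "rural", "rural", "urban", "rural"],
    "uses_internet": ["yes", "no", "no", "yes", "no"],
})

for col in df.columns:
    freq = df[col].value_counts()
    pct = df[col].value_counts(normalize=True).mul(100).round(1)
    print(pd.DataFrame({"frequency": freq, "percent": pct}), "\n")
```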


2017 ◽  
Vol 9 (2) ◽  
pp. 35
Author(s):  
Andrés Bonilla Marchán ◽  
Ramiro Delgado ◽  
Efstathios Stefos

The purpose of this study is to investigate the social characteristics of postgraduate students in Ecuador. The study applies descriptive and multidimensional statistical analyses to data from the 2015 National Survey of Employment, Unemployment and Underemployment. The descriptive analysis shows the frequencies and percentages of the research variables. The multidimensional analysis identifies the main criteria of differentiation and classifies the people studied into clusters. The methods used are factorial analysis of multiple correspondences, which presents the criteria of differentiation, and hierarchical clustering, which defines groups of people according to their common characteristics.
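A minimal sketch of the two multidimensional steps named above, not the authors' pipeline: the factorial analysis of multiple correspondences is approximated here by an SVD of the one-hot indicator matrix (a common stand-in for MCA), followed by Ward hierarchical clustering of respondents. Column names and data are hypothetical:

```python
# MCA-style factorial axes via SVD of the indicator matrix, then
# hierarchical clustering of respondents. All data are hypothetical.
import pandas as pd
from sklearn.decomposition import TruncatedSVD
from scipy.cluster.hierarchy import linkage, fcluster

df = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f", "m"],
    "field": ["health", "law", "health", "education", "law", "health"],
    "employed": ["yes", "yes", "no", "yes", "no", "yes"],
})

# Indicator (disjunctive) matrix: one binary column per category.
indicators = pd.get_dummies(df)

# Leading factorial axes: the main dimensions of variation.
svd = TruncatedSVD(n_components=2, random_state=0)
coords = svd.fit_transform(indicators)

# Ward hierarchical clustering on the factorial coordinates.
labels = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
print(labels)
```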


2020 ◽  
Vol 134 (1) ◽  
pp. 15-25
Author(s):  
Sabri Soussi ◽  
Gary S. Collins ◽  
Peter Jüni ◽  
Alexandre Mebazaa ◽  
Etienne Gayat ◽  
...  

Interest in developing and using novel biomarkers in critical care and perioperative medicine is increasing. Biomarker studies are often presented with flaws in the statistical analysis that preclude them from providing a scientifically valid and clinically relevant message for clinicians. To improve scientific rigor, proper application and reporting of traditional and emerging statistical methods (e.g., machine learning) in biomarker studies are required. This Readers' Toolbox article aims to be a starting point for nonexpert readers and investigators seeking to understand traditional and emerging research methods for assessing biomarkers in critical care and perioperative medicine.
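As a hedged illustration of one traditional biomarker-assessment method of the kind such a toolbox covers (the data below are synthetic, not from the article): discrimination is commonly summarized by the area under the ROC curve.

```python
# Discrimination of a biomarker measured by ROC AUC, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical measurements: higher on average in patients with the outcome.
outcome = rng.integers(0, 2, size=200)                # 0 = no event, 1 = event
biomarker = rng.normal(loc=outcome * 0.8, scale=1.0)  # synthetic values

auc = roc_auc_score(outcome, biomarker)
print(f"AUC = {auc:.2f}")  # 0.5 = no discrimination, 1.0 = perfect
```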


2021 ◽  
pp. 53-57
Author(s):  
V.P. Musina

This study examines gender-specific characteristics of the midlife crisis among university teachers. It empirically reveals gender differences in the parameters of the crisis experience: the severity of the crisis and of crisis events, as well as the perception of time perspective and of subjective age. The research methods were testing and statistical analysis. Based on the data obtained, recommendations are offered on psychological assistance to young university teachers who are experiencing a midlife crisis.


1986 ◽  
Vol 30 (1) ◽  
pp. 14-18 ◽  
Author(s):  
Andrew M. Cohill ◽  
David M. Gilfoil ◽  
John V. Pilitsis

A methodology for evaluating applications software is proposed, using five categories of criteria. Three of the categories, functionality, usability, and performance, are tailored to each class of applications software. The other two categories, support and documentation, have generic criteria that apply to all types of applications software. After a software package has been scored according to the criteria of a category, statistical analysis is used to convert the raw data into a numeric score that can be used for between-product comparisons. The methodology has been successfully tested with UNIX-based word processing and data base packages.
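A minimal sketch of the conversion step described above, under hypothetical weights and ratings (the article does not publish its scoring formula): raw per-criterion ratings in each category are rescaled to a single comparable score.

```python
# Convert raw per-criterion ratings into per-category scores on a 0-100
# scale for between-product comparison. Ratings are hypothetical.
from statistics import mean

raw = {
    "functionality": [4, 5, 3],
    "usability":     [3, 4, 4],
    "performance":   [5, 4, 5],
    "support":       [2, 3, 3],
    "documentation": [4, 4, 5],
}

def category_score(ratings, scale_max=5):
    """Mean rating rescaled to 0-100."""
    return 100 * mean(ratings) / scale_max

scores = {cat: round(category_score(r), 1) for cat, r in raw.items()}
overall = round(mean(scores.values()), 1)
print(scores, overall)
```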


2019 ◽  
Vol 40 (1) ◽  
pp. 231-248
Author(s):  
Andrew Wedel ◽  
Adam Ussishkin ◽  
Adam King

Listeners incrementally process words as they hear them, progressively updating inferences about which word is intended as the phonetic signal unfolds in time. As a consequence, phonetic cues positioned early in the signal for a word are on average more informative about word identity, because they disambiguate the intended word from more lexical alternatives than do cues late in the word. In this contribution, we review two new findings about structure in lexicons and phonological grammars, and argue that both arise through the same biases on phonetic reduction and enhancement resulting from incremental processing.

(i) Languages optimize their lexicons over time with respect to the amount of signal allocated to words relative to their predictability: words that are on average less predictable in context tend to be longer, while those that are on average more predictable tend to be shorter. However, the fact that phonetic material earlier in the word plays a larger role in word identification suggests that languages should also optimize the distribution of that information across the word. We review recent work on a range of different languages that supports this hypothesis: less frequent words are not only on average longer, but also contain more highly informative segments early in the word.

(ii) All languages are characterized by phonological grammars of rules describing predictable modifications of pronunciation in context. Because speakers appear to pronounce informative phonetic cues more carefully than less informative cues, it has been predicted that languages should be less likely to evolve phonological rules that reduce lexical contrast at word beginnings. A recent statistical analysis of a cross-linguistic dataset of phonological rules strongly supports this hypothesis.

Taken together, we argue that these findings suggest that the incrementality of lexical processing has wide-ranging effects on the evolution of phonotactic patterns.
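A toy sketch, not the authors' analysis, of the quantity the argument turns on: the informativeness (surprisal) of each segment given the words still consistent with the prefix heard so far, computed over a small hypothetical lexicon.

```python
# Per-segment surprisal over a toy lexicon: each segment's informativeness
# is measured against the words still consistent with the prefix so far.
# Lexicon and frequencies are hypothetical.
import math

lexicon = {"cat": 30, "cap": 20, "can": 25, "dog": 25}

def segment_surprisal(word, lexicon):
    """Surprisal (-log2 p) of each segment given the preceding prefix."""
    surprisals = []
    for i, seg in enumerate(word):
        # Words still consistent with the prefix before this segment.
        candidates = {w: f for w, f in lexicon.items() if w.startswith(word[:i])}
        total = sum(candidates.values())
        # Probability mass of candidates continuing with this segment.
        match = sum(f for w, f in candidates.items() if len(w) > i and w[i] == seg)
        surprisals.append(-math.log2(match / total))
    return surprisals

for w in lexicon:
    print(w, [round(s, 2) for s in segment_surprisal(w, lexicon)])
```

For "dog" in this lexicon, the first segment carries all the information (the later segments are fully predictable), illustrating how word-initial material can do most of the disambiguating work.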


1983 ◽  
Vol 16 (02) ◽  
pp. 182-188 ◽  
Author(s):  
Carl Grafton ◽  
Anne Permaloff

An impressive number of reasonably priced personal computer software packages of interest to political scientists are now on the market. Owners of computers such as the TRS-80 Model III and the Apple II can purchase software for word processing and statistical analysis which can substantially increase their productivity. Scholars trying to meet publication deadlines need no longer be delayed by harried secretaries trying single-handedly to meet the needs of an entire department. A computer/word processor used by a typist of average ability is nearly the equal of a good professional secretary. And those with even fairly large statistical analysis requirements may no longer be tied to the university's hectic “computer center” where they must wait in line for terminals, try to think amid constant movement and never-ending conversation, or suffer errors produced by noise injected between their terminal and the main frame along telephone lines.

This is an analysis of statistical packages sold by four companies for use on a variety of low-, moderate-, and high-priced personal computers. Our focus on these packages reflects our statistical needs for research and teaching. We were looking for programs capable of handling relatively large data bases and with the capacity to perform multiple regression and time series analyses. We needed a program that could be used to analyze data generated from small survey samples. This required both frequency distribution and contingency table development and analysis. Finally, we needed a program or programs in an affordable price range.

