Guidelines for the Development of Research Proposals following a Structured, Holistic Approach for a Research Proposal (SHARP)

1998 ◽  
Vol 19 (3) ◽  
pp. 268-282 ◽  
Author(s):  
Rainer Gross ◽  
Darwin Karyadi ◽  
Soemilah Sastroamidjojo ◽  
Werner Schultink

SHARP (a Structured, Holistic Approach for a Research Proposal) is a structured method for developing a research proposal that can be used either by individuals or by teams of researchers. The eight steps in SHARP are (1) setting up a causal model, (2) establishing a fact–hypothesis matrix (FaHM), (3) developing a variable–indicator–method matrix (VIM), (4) selecting the study design, (5) defining the sampling procedure and calculating the sample size, (6) selecting the statistical methods, (7) considering the ethical aspects, and (8) setting up an operational plan. The objectives of the research proposal are to help the researcher to define the contents and to plan and execute a research project, and to inform potential collaborators and supporters about the topic. The proposal that is produced during the process can be submitted to agencies for possible funding.

2021 ◽  
Author(s):  
Liliya Baranova

A conclusive fish tumour prevalence assessment has never been conducted in the lower part of the St. Clair River Area of Concern, despite possible re-contamination of the river and anecdotal evidence of fish abnormalities. This paper provides a study design for a comprehensive fish tumour prevalence assessment of the Lower St. Clair River with special focus on Walpole Island First Nation and surrounding waters. Study details such as area of focus, sentinel species, suggested sampling locations, sample size, field protocols and statistical methods are identified. A brief guide for histopathological examination and interpretation is provided. An alternate method of sampling location siting is suggested. This study design is intended to provide a guide and background reference for the implementation of a future full scale fish tumour assessment in the Lower St. Clair River.


Author(s):  
Janet Peacock ◽  
Philip Peacock

Written in an easily accessible style, the Oxford Handbook of Medical Statistics provides doctors and medical students with a concise and thorough account of this often difficult subject. It promotes understanding and interpretation of statistical methods across a wide range of topics, from study design and sample size considerations, through t- and chi-squared tests, to complex multifactorial analyses, using examples from published research.


Author(s):  
Sajjad Bahariniya ◽  
Farzan Madadizadeh

Background: Nowadays, statistical methods are used frequently in research articles. This review study aimed to determine the statistical methods used in original articles published in the Iranian Journal of Public Health (IJPH). Methods: Original articles published from 2015 to 2019 (volumes 44 to 48, numbers 1 to 12) were reviewed by a three-member committee consisting of a statistician and two health researchers. The statistical methods, sample size, study design and population, and type of software used were investigated. Multiple response analysis (MRA), the Kruskal–Wallis test, and the Spearman correlation coefficient were used for data analysis. All analyses were performed in SPSS 21. The significance level was set at 0.05. Results: The statistical population in most of the articles consisted of human samples at the field level (36%, 297 articles). A sample size of fewer than 500 cases was used in 66.6% (549 articles). The study design in most articles was analytical observational (56.2%, 464 cases). The acceptance period was 115.5 ± 52.27 days, and none of the mentioned variables had a significant relationship with it (P>0.05). Among both the total tests and the articles, descriptive statistics were the most frequently used methods (34.4%, 75.8%, and 532 articles), and the most frequently used tests were the chi-square test and the t-test (29%, 450 articles). Conclusion: The study design in most of the articles was analytical; to increase thematic diversity, accepting different types of articles seems necessary. The statistical tests used in most articles were simple, so accepting articles with advanced statistical methods is recommended.



Author(s):  
Janet L. Peacock ◽  
Sally M. Kerry

Chapter 1 gives an introduction to and description of the resource in general, and describes the presentation of required statistical information throughout the entire research process, from the development of the research proposal, through applying for ethical approval, to analysing the data and writing up the results. We use the term ‘statistical information’ to include describing the study design, the calculation of sample size, and data processing, as well as the data analysis and reporting of results. It is written for researchers in medicine and in the professions allied to medicine.


2021 ◽  
Vol 4 (2) ◽  
pp. 251524592110181
Author(s):  
Manikya Alister ◽  
Raine Vickers-Jones ◽  
David K. Sewell ◽  
Timothy Ballard

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.


2017 ◽  
Vol 23 (5) ◽  
pp. 644-646 ◽  
Author(s):  
Maria Pia Sormani

The calculation of the sample size needed for a clinical study is the challenge most frequently put to statisticians, and it is one of the most relevant issues in study design. The correct sample size optimizes the number of patients needed to obtain the result, that is, to detect the minimum treatment effect that is clinically relevant. Minimizing the sample size of a study reduces costs, enhances feasibility, and also has ethical implications. In this brief report, I will explore the main concepts on which the sample size calculation is based.
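The abstract does not give the formula itself, but the usual basis for such a calculation when comparing two means is n = 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group. A minimal sketch in Python, using only the standard library; the numeric inputs below are illustrative assumptions, not values from the article:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative values: SD of 10 units, minimum clinically relevant
# difference of 5 units -> 63 patients per group.
print(sample_size_two_means(sigma=10, delta=5))  # -> 63
```

Note how the formula makes the trade-offs explicit: halving the detectable difference δ quadruples the required sample, while raising power from 80% to 90% increases it more modestly.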


2020 ◽  
Vol 1 (1) ◽  
pp. 14-19
Author(s):  
Durga Prasanna Misra ◽  
Vikas Agarwal

A hypothesis is a statement of the expected outcome of a research study, generally based on analysis of prior published knowledge or on the previous work of the investigators. The hypothesis forms the foundation of a research proposal. A study based, and planned, on a sound hypothesis may have a greater likelihood of meaningfully contributing to science. After a hypothesis has been generated, it is equally important to appropriately design and adequately power a study (by ensuring a sufficient sample size) in order to test it. Adhering to the principles discussed herein will help young researchers to generate and test their own hypotheses; these skills are best learnt with experience.


2020 ◽  
Author(s):  
Chia-Lung Shih ◽  
Te-Yu Hung

Abstract Background: A small sample size (n < 30 per treatment group) is usually enrolled to investigate differences in efficacy between treatments for knee osteoarthritis (OA). The objective of this study was to use simulation to compare the power of four statistical methods for the analysis of small samples in detecting differences in efficacy between two treatments for knee OA. Methods: A total of 10,000 replicates of 5 sample sizes (n = 10, 15, 20, 25, and 30 per group) were generated based on previously reported measures of treatment efficacy. Four statistical methods were used to compare differences in efficacy between treatments: the two-sample t-test (t-test), the Mann–Whitney U-test (M-W test), the Kolmogorov–Smirnov test (K-S test), and the permutation test (perm-test). Results: The bias of the simulated parameter means showed a decreasing trend with sample size, but the CV% of the simulated parameter means varied with sample size for all parameters. For the largest sample size (n = 30), the CV% reached a small level (<20%) for almost all parameters, but the bias did not. Among the non-parametric tests for small samples, the perm-test had the highest statistical power, and its false-positive rate was not affected by sample size. However, the power of the perm-test did not reach a high value (80%) even at the largest sample size (n = 30). Conclusion: The perm-test is suggested for the analysis of small samples when comparing differences in efficacy between two treatments for knee OA.
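The permutation test favoured by this abstract can be sketched with the Python standard library alone. This is a generic two-sample permutation test on the difference in means, not the authors' code; the sample data are invented for illustration:

```python
import random
from statistics import mean

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means between two
    small samples, using random relabelling of the pooled observations."""
    rng = random.Random(seed)
    observed = abs(mean(x) - mean(y))
    pooled = list(x) + list(y)
    n = len(x)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            extreme += 1
    # Add 1 to numerator and denominator so p is never exactly zero.
    return (extreme + 1) / (n_perm + 1)

# Two clearly separated groups of n = 10 give a small p value;
# two identical groups give p = 1.
print(permutation_test(range(1, 11), range(8, 18)))
```

Because the test's reference distribution is built from the data themselves, it makes no normality assumption, which is why it remains well calibrated (a stable false-positive rate) at the small sample sizes studied here.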


2021 ◽  
Vol 15 (8) ◽  
pp. 1827-1828
Author(s):  
Faiza Gohar ◽  
Syed Sajid Munir ◽  
Sami Ul Haq

Aim: To determine the frequency of sensorineural hearing loss among children presenting with acute bacterial meningitis. Place of study: Pediatric wards of Khyber Teaching Hospital, Peshawar, with the help of the audiology department of the same hospital. Study design & duration: Descriptive cross-sectional study; 5 months, from 23/10/2018 to 23/03/2019. Sample size: The sample size was 149, using a 44.4% proportion of SNHL among children with bacterial meningitis, a 95% confidence level, and 8% absolute precision, according to the WHO sample size calculation. Methodology: 149 cases (90 males and 59 females) aged 2 to 144 months were included, all with a diagnosis of bacterial meningitis. Laboratory tests and CSF examination were performed. Hearing was assessed before discharge by BERA and PTA, and all findings of the hearing assessment were entered in a proforma. Results: The mean ± SD of age was 28 ± 35.7 months; 60.4% were males and 39.6% were females. Of the 149 cases, 10 (6.7%) had sensorineural hearing loss, while 139 (93.3%) had normal findings on hearing assessment. Conclusion: Sensorineural hearing loss was present in 6.7% of patients with bacterial meningitis. Keywords: Sensorineural Hearing Loss, Meningitis, Bacterial Meningitis
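The WHO-style single-proportion sample size quoted above (44.4% anticipated prevalence, 95% confidence, 8% absolute precision) follows the standard formula n = z²p(1−p)/d². A minimal check in Python, assuming that formula is the one behind the quoted figure:

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(p, d, conf=0.95):
    """Sample size for estimating a single proportion p with
    absolute precision d: n = z^2 * p * (1 - p) / d^2."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 at 95%
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

# 44.4% anticipated SNHL proportion, 8% absolute precision,
# 95% confidence -> 149, matching the abstract.
print(sample_size_proportion(p=0.444, d=0.08))  # -> 149
```

The calculation is most sensitive to the precision d: tightening it from 8% to 5% roughly multiplies the required sample by (8/5)² ≈ 2.6.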

