Evaluating IT Investments — Matching Techniques to Projects

1990 ◽  
Vol 5 (4) ◽  
pp. 215-221 ◽  
Author(s):  
Beat Hochstrasser

This paper presents part of a three-year Kobler Unit study into current practices of managing IT investments involving 60 managers from 34 British companies. Guidelines were collected to assess the true costs of deploying IT, taking into account technological costs, human costs and organizational costs. By identifying examples of best practice, an evaluation methodology was then developed which concentrates both on the primary objectives of systems and on the inevitable second-order effects resulting from the broader human and organizational impact. The methodology is eclectic in that it matches specific evaluation techniques to distinct classes of IT projects.

Author(s):  
K G Swift ◽  
A J Allen

The design of a product largely predetermines its cost and quality, and there are therefore limits to the benefits that can be obtained by the application of best practice in manufacturing and quality control. The paper introduces a general model of design for quality and describes a systematic quality evaluation methodology to aid the development of quality competitive products. The application and performance of the methodology are described and its integration with techniques in design for manufacture and assembly is discussed.


Author(s):  
Michael Shaughnessy

From 1980 to 2000, many articles were written on the subject of software review and evaluation. An initial investigation of educational software evaluation suggests that there are as many evaluation methodologies as there are authors presenting them. Several articles (methodology analyses) have been written describing these evaluation techniques (Bryson & Cullen, 1984; Eraut, 1989; Holznagel, 1983; Jones et al., 1999; McDougall & Squires, 1995; Reiser & Kegelmann, 1994, 1996; Russell & Blake, 1988). Each of these articles describes various methodologies and presents the most current evaluation methodology available, but fails to provide a complete history of the types of evaluation methodologies. These analyses focus on individual methodologies but refrain from placing them in a greater systematic context.


2021 ◽  
Author(s):  
Chia-wen Fang

Ontologies are formal specifications of shared conceptualizations of a domain. Important applications of ontologies include distributed knowledge-based systems, such as the semantic web, and the evaluation of modelling languages, e.g. for business process or conceptual modelling. These applications require formal ontologies of good quality. In this thesis, we present a multi-method ontology evaluation methodology, consisting of two techniques (sentence verification task and recall) based on principles of cognitive psychology, to test how well a specification of a formal ontology corresponds to the ontology users' conceptualization of a domain. Two experiments were conducted, each evaluating the SUMO ontology and WordNet with an experimental technique, as demonstrations of the multi-method evaluation methodology. We also tested the applicability of the two evaluation techniques by conducting a replication study for each. The replication studies obtained findings that point in the same direction as the original studies, although they did not reach statistical significance. Overall, the evaluation using the multi-method methodology suggests that neither of the two ontologies we examined is a good specification of the conceptualization of the domain. Both the terminology and the structure of the ontologies may benefit from improvement.
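
As an illustration of the recall-based technique mentioned in the abstract, the sketch below compares concepts freely recalled by participants against the labels in an ontology specification. The normalisation step, function names, and term lists are illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of a recall-based comparison: participants freely recall
# domain concepts, and we measure how many of those concepts have a matching
# label in the ontology specification. Data and normalisation are invented.

def normalise(term: str) -> str:
    """Lower-case and strip whitespace so surface variants can match."""
    return term.strip().lower()

def recall_coverage(recalled_terms: list[str], ontology_labels: set[str]) -> float:
    """Fraction of participant-recalled concepts found among ontology labels."""
    recalled = {normalise(t) for t in recalled_terms}
    labels = {normalise(l) for l in ontology_labels}
    if not recalled:
        return 0.0
    return len(recalled & labels) / len(recalled)

# Toy example (invented data):
participant_recall = ["vehicle", "car", "engine", "driver"]
ontology_terms = {"Vehicle", "Engine", "TransportationDevice"}
print(f"coverage = {recall_coverage(participant_recall, ontology_terms):.2f}")  # 0.50
```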


2020 ◽  
Author(s):  
Martin Archer

Evaluation of drop-in engagement activities, particularly when trying to demonstrate impact or change, is difficult given their transient nature and many logistical factors. Many typical evaluation techniques, such as surveys, are often unsuitable, and current best practice recommends integrating evaluation methods into the activity itself. We present a novel implementation and analysis of an established evaluation method, which has the ability to demonstrate change even from a drop-in activity.

A space soundscapes exhibit saw young families taken on a journey experiencing the real sounds of near-Earth space recorded by satellites, which are normally inaudible to humans due to their weakness and extremely low pitch. Graffiti walls were placed at the start and end of this journey, where participants were prompted by event staff to reflect on what they think space is like. Thematic analysis of the words and drawings from the two walls showed a change from obvious space-themed bodies and typical misconceptions about the lack of sound in space to much more reflective and reactionary results afterwards. Applying quantitative linguistics shows an evolution of the distribution of words, demonstrating a greater diversity following the experience. Similar techniques have been applied to evaluating children's language as they age; however, we are unaware of their prior application to public engagement activities. We therefore propose that these methods may be useful in evaluating other drop-in engagement activities and demonstrating the impact that they had.
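
To illustrate the quantitative-linguistic comparison described above, the following sketch computes two common lexical-diversity measures (type-token ratio and Shannon entropy) for the words gathered from a "before" and an "after" wall. The word lists are invented placeholders, not the study's data.

```python
# Minimal sketch of a word-distribution diversity comparison between two
# graffiti walls, using type-token ratio and Shannon entropy. Data invented.
import math
from collections import Counter

def diversity(words: list[str]) -> tuple[float, float]:
    """Return (type-token ratio, Shannon entropy in bits) for a word list."""
    counts = Counter(w.lower() for w in words)
    total = sum(counts.values())
    ttr = len(counts) / total
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return ttr, entropy

before = ["planets", "stars", "silent", "silent", "rocket", "stars"]
after = ["whistling", "eerie", "alive", "waves", "humming", "unexpected"]
for label, wall in [("before", before), ("after", after)]:
    ttr, h = diversity(wall)
    print(f"{label}: TTR = {ttr:.2f}, entropy = {h:.2f} bits")
```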


2020 ◽  
Vol 5 (1) ◽  
Author(s):  
Jean-Christophe Servotte ◽  
T. Bram Welch-Horan ◽  
Paul Mullan ◽  
Justine Piazza ◽  
Alexandre Ghuysen ◽  
...  

Abstract Background Multiple guidelines recommend debriefing after clinical events in the emergency department (ED) to improve performance, but their implementation has been limited. We aimed to start a clinical debriefing program to identify opportunities to address teamwork and patient safety during the COVID-19 pandemic. Methods We reviewed existing literature on best-practice guidelines to answer key clinical debriefing program design questions. An end-of-shift huddle format for the debriefs allowed multiple cases of suspected or confirmed COVID-19 illness to be discussed in the same session, promoting situational awareness and team learning. A novel ED-based clinical debriefing tool was implemented and titled Debriefing In Situ COVID-19 to Encourage Reflection and Plus-Delta in Healthcare After Shifts End (DISCOVER-PHASE). A facilitator experienced in simulation debriefings would lead a short (10–25 min) discussion of the relevant cases by following a scripted series of debriefing stages. Data on the number of debriefing opportunities, frequency of utilization of debriefing, debriefing location, and professional background of the facilitator were analyzed. Results During the study period, the ED treated 3386 suspected or confirmed COVID-19 cases, with 11 deaths and 77 ICU admissions. Of the 187 debriefing opportunities in the first 8-week period, 163 (87.2%) were performed. Of the 24 debriefings not performed, 21 (87.5%) occurred during the first four weeks. Clinical debriefings had a median duration of 10 min (IQR 7–13). They were mostly facilitated by a nurse (85.9%) and mainly performed remotely (89.8%). Conclusion Debriefings with DISCOVER-PHASE during the COVID-19 pandemic were performed often, were relatively brief, and were most often led remotely by a nurse facilitator. Future research should describe the clinical and organizational impact of the DISCOVER-PHASE approach.


Author(s):  
Assion Lawson-Body

Firms rely on IT investments (Demirhan et al., 2002; Tuten, 2003) because a growing number of executives believe that investments in information technology (IT) (i.e., wireless technologies) help boost firm performance. The use of wireless communications and computing is growing quickly (Kim & Steinfield, 2004; Leung & Cheung, 2004; Yang et al., 2004). But issues of risk and uncertainty due to technical, organizational, and environmental factors continue to hinder executive efforts to produce meaningful evaluations of investment in wireless technology (Smith et al., 2002). Despite the use of investment appraisal techniques, executives are often forced to rely on instinct when finalizing wireless investment decisions. A key problem with evaluation techniques, it emerges, is their treatment of uncertainty and their failure to account for the fact that, beyond a decision to reject an investment outright, firms may have an option to defer it until a later period (Tallon et al., 2002).
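
The deferral option mentioned at the end of the passage can be illustrated with a standard one-period binomial real-options calculation. The sketch below uses invented figures and is a generic illustration, not the model of Tallon et al. (2002).

```python
# One-period binomial sketch of the value of deferring an IT investment:
# waiting has value when investing later, once uncertainty is resolved,
# beats committing now. All figures are illustrative.

def defer_option_value(v_up: float, v_down: float, p_up: float,
                       cost: float, discount_rate: float) -> dict:
    """Compare investing now versus deferring one period."""
    invest_now = p_up * v_up + (1 - p_up) * v_down - cost
    # If the firm waits, it invests only in the state where the payoff is positive.
    wait = (p_up * max(v_up - cost, 0)
            + (1 - p_up) * max(v_down - cost, 0)) / (1 + discount_rate)
    return {"invest_now": invest_now, "defer": wait,
            "option_value": max(wait - invest_now, 0)}

# Toy wireless-technology project: value is 150 if adoption takes off, 60 if not.
print(defer_option_value(v_up=150, v_down=60, p_up=0.5, cost=100, discount_rate=0.1))
```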


2019 ◽  
Vol 22 (4) ◽  
pp. 109-114
Author(s):  
Ján Jobbágy ◽  
Norbert Michlian ◽  
Peter Dačanin ◽  
Ivan Rigó

Abstract Considering the global tendency towards water saving, this research focuses on practical measurements of the even distribution of water. Performance quality is determined by the values of coefficients of distribution uniformity and non-uniformity, given in percentages. The objects of investigation were seven belt irrigators with varying input conditions. Testing of hose-reel irrigators took place in Southern and Western Slovakia. Tests were carried out during irrigation of selected agricultural crops (potatoes, vegetables); in these areas, rainwater vessels were distributed at a spacing of 1 or 2 m, perpendicular to the direction of movement of the bracket or tripod with a gun sprinkler. The input conditions, such as machine specifications and weather conditions, were monitored and evaluated for all variants. The data were also analysed with a linear model using statistical analysis software (one-way analysis of variance, ANOVA). Considering the results, it is possible to conclude that statistically significant differences were recorded for uniformity coefficients, depending not only on the site but also on the specific evaluation methodology (P > 0.05). If the input conditions (site, type of irrigator, sprinkler) were changed, the effect of dependence was demonstrated to a much greater extent (P < 0.05; F = 7.08 > Fcrit). The results of the non-uniformity coefficients confirmed the statistically significant differences not only in the sample sets of coefficients but also in the selection sets of conditions.
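
As a rough illustration of the analysis pipeline described above, the sketch below computes a distribution uniformity coefficient from catch-can (rain vessel) depths and runs a one-way ANOVA across sites. Christiansen's coefficient is used here as a common definition; the paper's own coefficients and data may differ, and the depths are invented.

```python
# Illustrative sketch: Christiansen's uniformity coefficient per site, then a
# one-way ANOVA comparing sites. Catch-can depths are invented placeholders.
import numpy as np
from scipy.stats import f_oneway

def christiansen_cu(depths: np.ndarray) -> float:
    """Christiansen uniformity coefficient, in percent."""
    mean = depths.mean()
    return 100.0 * (1.0 - np.abs(depths - mean).sum() / (depths.size * mean))

# Invented catch-can depths (mm) for three test sites.
site_a = np.array([11.2, 10.8, 9.9, 10.5, 11.0])
site_b = np.array([12.4, 8.7, 10.1, 13.0, 9.2])
site_c = np.array([10.0, 10.2, 9.8, 10.1, 9.9])

for name, depths in [("A", site_a), ("B", site_b), ("C", site_c)]:
    print(f"site {name}: CU = {christiansen_cu(depths):.1f} %")

f_stat, p_value = f_oneway(site_a, site_b, site_c)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```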


2007 ◽  
Vol 30 (1) ◽  
pp. 3-26 ◽  
Author(s):  
David Nadeau ◽  
Satoshi Sekine

This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress.
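
As an example of the "intuitive exact match" evaluation mentioned above, the sketch below scores predicted entities against gold annotations at the entity level, counting a prediction as correct only when both span and type match exactly. The entities and offsets are invented.

```python
# Minimal sketch of exact-match NER evaluation: entity-level precision,
# recall and F1, requiring identical span offsets and entity type.

def exact_match_prf(gold: set[tuple[int, int, str]],
                    predicted: set[tuple[int, int, str]]) -> tuple[float, float, float]:
    """Entity-level precision, recall and F1 under exact span + type matching."""
    true_pos = len(gold & predicted)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Entities as (start_offset, end_offset, type) triples over some document.
gold = {(0, 12, "PERSON"), (20, 26, "ORG"), (40, 48, "LOC")}
pred = {(0, 12, "PERSON"), (20, 26, "LOC"), (40, 48, "LOC")}
print(exact_match_prf(gold, pred))  # ≈ (0.667, 0.667, 0.667)
```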

