Ex Ante Evaluations of Alternate Data Structures for End User Queries

2008 ◽  
pp. 2096-2123
Author(s):  
Paul L. Bowen ◽  
Fiona H. Rohde ◽  
Jay Basford

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. It theorizes that the data structure yielding the lowest weighted-average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics, each of which provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures, and to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
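
As a rough illustration of how such a metric can be computed (a minimal sketch; the paper's exact operator/operand classification for query languages is not reproduced here), the standard Halstead formulas can be applied to a SQL query string. The keyword list and tokenizer below are illustrative assumptions:

```python
import math
import re

def halstead_metrics(query: str) -> dict:
    """Compute basic Halstead metrics for a SQL query string.

    Simplified illustration: keywords and punctuation count as operators,
    identifiers and literals as operands. Real studies define this
    classification more carefully.
    """
    operator_tokens = {
        "select", "from", "where", "join", "on", "group", "by", "having",
        "order", "and", "or", "not", "=", "<", ">", "<=", ">=", "<>",
        ",", "(", ")", "*", "+", "-", "/",
    }
    tokens = re.findall(
        r"\d+(?:\.\d+)?|[A-Za-z_][A-Za-z0-9_.]*|<=|>=|<>|[=<>,()*+\-/]", query)
    operators, operands = [], []
    for tok in tokens:
        (operators if tok.lower() in operator_tokens else operands).append(tok.lower())

    n1, n2 = len(set(operators)), len(set(operands))  # distinct operators/operands
    N1, N2 = len(operators), len(operands)            # total occurrences
    vocabulary, length = n1 + n2, N1 + N2
    volume = length * math.log2(vocabulary) if vocabulary > 1 else 0.0
    difficulty = (n1 / 2) * (N2 / n2) if n2 else 0.0
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty,
            "effort": difficulty * volume}

print(halstead_metrics(
    "SELECT name, total FROM orders JOIN customers ON orders.cid = customers.id "
    "WHERE total > 100 ORDER BY total"))
```

Under this scheme, the schema whose representative queries produce the lower weighted-average volume (or effort) would be the preferred data structure.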

2004 ◽  
Vol 15 (4) ◽  
pp. 45-70 ◽  
Author(s):  
Paul L. Bowen ◽  
Fiona H. Rohde ◽  
Jay Basford

1994 ◽  
Vol 116 (2) ◽  
pp. 522-530 ◽  
Author(s):  
D. L. Thurston ◽  
C. A. Crawford

Expert systems for design often include provisions for comparing preliminary design alternatives. Historically, this task has been done on an ad hoc basis (or not at all) due to two difficulties. The first is evaluating designs over multiple attributes. The second is taking into account highly subjective end-user preferences. Design experts have developed techniques for dealing with these two difficulties: weighted-average methods for the former, and heuristic “rules of thumb” that categorize end-users for the latter. The limitations of these techniques are that the accuracy and precision of weighted-average methods are inadequate, and that the “rules of thumb” may be reasonable and valid for most end-users but not for others. This paper brings quantitative rigor to the modelling of end-user preferences equal to that used in other phases of engineering analysis. We present a technique by which a heuristic rule base derived from technical experts can be analyzed and modified to integrate quantitative assessment of end-users’ subjective preferences. The operations research tool of multiattribute utility analysis is integrated with artificial intelligence techniques to facilitate preliminary evaluation of multiattribute design alternatives specific to individual users. The steps of the methodology are: develop the heuristic rule base, analyze the rule base to separate subjective from objective rules, add a subjective multiattribute utility assessment module, add an uncertainty assessment module, make objective rules explicit, and express performance attributes in terms of design decision variables. The key step is distinguishing the subjective from the objective aspects of rules and replacing the former with utility analysis. These steps are illustrated through an expert system for materials selection for a sailboat mast. Results indicate improved expert system performance for both “typical” and “atypical” end-users.
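
To make the contrast with plain weighted averages concrete, here is a minimal sketch of an additive multiattribute utility evaluation for the sailboat-mast example. The attribute names, ranges, weights, and the exponential (risk-averse) single-attribute utility form are illustrative assumptions, not the paper's assessed values:

```python
import math

def exponential_utility(x, worst, best, risk_tolerance):
    """Single-attribute utility scaled to [0, 1] over [worst, best].

    A concave exponential captures risk aversion; a plain weighted
    average corresponds to the linear (risk-neutral) special case.
    """
    z = (x - worst) / (best - worst)  # works whether higher or lower is better
    return (1 - math.exp(-z / risk_tolerance)) / (1 - math.exp(-1 / risk_tolerance))

# Hypothetical attributes: weight (kg, lower better), stiffness
# (kN*m^2, higher better), cost ($, lower better). k-weights sum to 1.
attributes = {
    "weight":    {"worst": 30.0,   "best": 12.0,  "k": 0.40, "rt": 0.8},
    "stiffness": {"worst": 40.0,   "best": 90.0,  "k": 0.35, "rt": 0.8},
    "cost":      {"worst": 3000.0, "best": 800.0, "k": 0.25, "rt": 0.8},
}

def additive_utility(design):
    """Additive multiattribute utility: sum of k_i * u_i(x_i)."""
    return sum(a["k"] * exponential_utility(design[n], a["worst"], a["best"], a["rt"])
               for n, a in attributes.items())

aluminum = {"weight": 20.0, "stiffness": 60.0, "cost": 1200.0}
carbon   = {"weight": 14.0, "stiffness": 85.0, "cost": 2600.0}
for name, design in (("aluminum", aluminum), ("carbon", carbon)):
    print(f"{name:9s} utility: {additive_utility(design):.3f}")
```

Assessing the curvature and weights per end-user is what lets the system serve the “atypical” users that fixed rules of thumb misclassify.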


2006 ◽  
Vol 7 (8) ◽  
pp. 514-544 ◽  
Author(s):  
Paul Bowen ◽  
Robert O'Farrell ◽  
Fiona Rohde ◽  
...  

2020 ◽  
Vol 30 (Supplement 5) ◽  
Author(s):  
S Houwaart

Abstract End-user (e.g. patient or public) testing of information material is becoming more common in the German public health care system. However, including the end-user (in this case patients) in an optimisation process, and thus enabling close collaboration while developing patient information materials (PIMs), is still rare. This is surprising, given that patients provide the exact perspective one is trying to address. Within the isPO project, a patient organization is included as a legal project partner to act as the patient representative and provide the patients' perspective. As such, the patient organization was included in the PHR approach as part of the PIM-optimisation team. During the optimisation process, the patients gave practical insights into the procedures of diagnosing and treating different types of cancer, as well as into patients' changing priorities and challenges at different time points. This was crucial information for the envisioned application of the individual PIMs and their hierarchical overview. Moreover, the developed PIM checklist enabled the patients to give detailed feedback on the PIMs. With their experience of being in the exact situation in which the PIMs will be applied, their recommendations, especially on the wording and layout of the materials, have been a valuable contribution to the PIM optimisation process. In this part of the seminar, we will take a closer look at the following skill-building aspects:
- What is gained from including patients as end-users in the development and optimisation of PIMs?
- How can we reach patients to contribute to a PIM optimisation process?
- Which requirements and prerequisites do patients have to provide to successfully work on an optimisation team?
- How to compromise and weigh opinions when different ideas occur?
Altogether, this part will construct a structured path of productive patient involvement and help to overcome uncertainties regarding collaboration with patient organizations.


2021 ◽  
Vol 13 (4) ◽  
pp. 559
Author(s):  
Milto Miltiadou ◽  
Neill D. F. Campbell ◽  
Darren Cosker ◽  
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points remains the most widely used way of interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult. It is therefore important to research which data structures are more appropriate for storing and interpreting the data. The data structures are tested in terms of time efficiency and memory consumption at run-time, and are the following: (1) 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) Octree; (4) Integral Volumes, which allows finding the sum of any cuboid area in constant time; (5) Octree Max/Min, which is an upgraded octree; and (6) Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. We show that Integral Volumes is the most time-efficient data structure, but it requires the most memory allocation. Furthermore, 1D-Array and Integral Volumes require the allocation of coherent space in memory, including the empty voxels, while Voxel Hashing and the octree-related data structures do not need to allocate memory for empty voxels and therefore, as the tests show, allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory. Each tested data structure has different benefits and downsides; therefore, each application should be examined individually.
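
The Integral Volumes idea is the 3D analogue of a summed-area table. As a minimal sketch (assuming a dense NumPy voxel grid; the paper's actual implementation details may differ), the table is built once with cumulative sums, after which any cuboid sum is an O(1) inclusion-exclusion lookup:

```python
import numpy as np

def build_integral_volume(grid: np.ndarray) -> np.ndarray:
    """Summed-volume table: iv[x, y, z] = sum of grid[:x, :y, :z].

    One cumulative sum per axis; a leading zero plane is padded on
    each axis so cuboid queries need no bounds checks.
    """
    iv = grid.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
    return np.pad(iv, ((1, 0), (1, 0), (1, 0)))

def cuboid_sum(iv, x0, y0, z0, x1, y1, z1):
    """Sum of grid[x0:x1, y0:y1, z0:z1] in O(1) via inclusion-exclusion."""
    return (iv[x1, y1, z1]
            - iv[x0, y1, z1] - iv[x1, y0, z1] - iv[x1, y1, z0]
            + iv[x0, y0, z1] + iv[x0, y1, z0] + iv[x1, y0, z0]
            - iv[x0, y0, z0])

rng = np.random.default_rng(0)
voxels = rng.random((64, 64, 64))  # stand-in for voxelised waveform intensities
iv = build_integral_volume(voxels)
assert np.isclose(cuboid_sum(iv, 5, 5, 5, 20, 30, 40),
                  voxels[5:20, 5:30, 5:40].sum())
```

The memory trade-off reported above is visible here: the table is dense, so empty voxels cost as much as occupied ones, which is exactly what the hash-based and octree-based structures avoid.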


2018 ◽  
Vol 18 (3-4) ◽  
pp. 470-483 ◽  
Author(s):  
GREGORY J. DUCK ◽  
JOXAN JAFFAR ◽  
ROLAND H. C. YAP

Abstract Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, w.r.t. given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
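
The paper's analysis is built on SMCHR-generated solvers; as a much simpler illustration of what "shape neutral" means, the Python sketch below checks a simulated heap for dangling or ill-typed pointers using only the type definitions, so lists, trees, and DAGs all pass the same check. The heap encoding and field typing are illustrative assumptions, not the paper's representation:

```python
# Simulated heap: address -> (type name, {field: value}). Pointer fields
# hold addresses (ints) or None. A shape-neutral integrity check only asks
# that every pointer is None or refers to a live object of the declared
# type; it does not care whether the graph forms a list, tree, or DAG.
TYPE_DEFS = {
    "node": {"next": "node", "data": int},  # serves lists and DAGs alike
    "tree": {"left": "tree", "right": "tree", "key": int},
}

def check_heap(heap: dict) -> list:
    """Return a list of integrity violations (empty means the heap is OK)."""
    errors = []
    for addr, (tname, fields) in heap.items():
        for field, ftype in TYPE_DEFS[tname].items():
            value = fields.get(field)
            if isinstance(ftype, str):  # pointer field
                if value is not None and value not in heap:
                    errors.append(f"{addr}.{field}: dangling pointer {value}")
                elif value is not None and heap[value][0] != ftype:
                    errors.append(f"{addr}.{field}: expected {ftype}, "
                                  f"got {heap[value][0]}")
            elif not isinstance(value, ftype):  # scalar field
                errors.append(f"{addr}.{field}: bad scalar {value!r}")
    return errors

heap = {
    1: ("node", {"next": 2, "data": 10}),
    2: ("node", {"next": None, "data": 20}),
    3: ("node", {"next": 99, "data": 30}),  # 99 was never allocated
}
print(check_heap(heap))  # reports the dangling pointer at 3.next
```

The real analysis checks such properties symbolically as the program mutates the heap, rather than on a concrete snapshot as done here.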


Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6674
Author(s):  
Sebastian Hoffmann ◽  
Fabian Adelt ◽  
Johannes Weyer

This paper presents an agent-based model (ABM) of residential end-users, which is part of a larger, interdisciplinary co-simulation framework that helps to investigate the performance of future power distribution grids (i.e., smart grid scenarios). Different modes of governance (strong control, soft control and self-organization) as well as end-users' heterogeneous behavior represent key influential factors. Feedback was implemented as a measure to foster grid-beneficial behavior; it encompasses a range of monetary and non-monetary incentives (e.g., via social comparison). The model of frame selection (MFS) serves as the theoretical background for modelling end-users' decision-making. Additionally, we conducted an online survey to ground the end-user sub-model in empirical data. Despite these empirical and theoretical foundations, the model presented here should be viewed as a conceptual framework that requires further data collection. Using an example scenario representing a sparsely populated residential area (167 households) with a high share of photovoltaic systems (30%), the different modes of governance were compared with regard to their suitability for improving system stability (measured as cumulated load). Both soft and strong control were able to decrease overall fluctuations as well as the mean cumulated load (by approx. 10%, based on weekly observation). However, we argue that soft control could be sufficient and more societally desirable.
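
As a minimal sketch of the feedback mechanism (the agent decision rule, parameter values, and the mapping of governance modes to feedback strengths are illustrative stand-ins, not the paper's MFS-based model), a toy ABM might look like this:

```python
import random

random.seed(42)

class Household:
    """Toy residential end-user that shifts flexible load when nudged."""
    def __init__(self, responsiveness: float):
        self.base_load = random.uniform(0.5, 2.0)  # kW, inflexible
        self.flex_load = random.uniform(0.0, 1.0)  # kW, shiftable
        self.responsiveness = responsiveness       # 0..1, heterogeneity

    def load_at_peak(self, feedback_strength: float) -> float:
        # Probability of shifting flexible load away from the peak hour
        # grows with feedback strength and the agent's responsiveness.
        shifts = random.random() < feedback_strength * self.responsiveness
        return self.base_load + (0.0 if shifts else self.flex_load)

# Governance modes expressed as feedback strengths (illustrative values).
modes = {"self-organization": 0.0, "soft control": 0.5, "strong control": 0.9}
households = [Household(random.random()) for _ in range(167)]  # scenario size

for mode, strength in modes.items():
    peak = sum(h.load_at_peak(strength) for h in households)
    print(f"{mode:18s} cumulated peak load: {peak:7.1f} kW")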


Author(s):  
Sudeep Sarkar ◽  
Dmitry Goldgof

There is a growing need for expertise both in image analysis and in software engineering. To date, these two areas have been taught separately in undergraduate computer and information science curricula. However, we have found that an introduction to image analysis can easily be integrated into data-structure courses without detracting from the original goal of teaching data structures. Some image processing tasks offer a natural way to introduce basic data structures such as arrays, queues, stacks, trees and hash tables. Not only does this integrated strategy expose students to image-related manipulations at an early stage of the curriculum, it also imparts cohesiveness to the data-structure assignments and brings them closer to real life. In this paper we present a set of programming assignments that integrates undergraduate data-structure education with image processing tasks. These assignments can be incorporated into existing data-structure courses with low time and software overheads. We have used these assignment sets three times: once in a 10-week data-structure course at the University of California, Santa Barbara, and twice in 15-week courses at the University of South Florida, Tampa.
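
An assignment in this spirit (an illustrative example, not one of the paper's actual assignments) is connected-component labelling of a binary image: the image exercises 2D arrays, and the flood fill makes the FIFO behaviour of a queue directly visible:

```python
from collections import deque

def label_components(image):
    """Label 4-connected foreground regions of a binary image.

    The image is a list of lists of 0/1 values; labels start at 1.
    A queue drives the breadth-first flood fill.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
for row in label_components(img):
    print(row)  # two components: labels 1 and 2
```

Swapping the deque for a stack turns the same assignment into a depth-first fill, a one-line change that contrasts the two data structures.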


Author(s):  
Nitin Vishnu Choudhari ◽  
Dr. Ashish B Sasankar

Abstract – Today, security is the foremost problem in the cloud computing environment, causing serious discomfort to governance bodies and end-users. Numerous security solutions and policies are available, but they are largely ineffective in practice. Most security solutions are centered on cloud technology and cloud service providers only; little consideration has been given to network, access, and device security at the end-user level. The security of the various public and private networks, the variety of devices used by end-users, and the accessibility and capacity of end-users have been left untreated. This points to a strong need to modify the security architecture so that data are secured at all levels, hosting and service delivery are secured, and the security gap between cloud service providers and end-users is reduced. This paper studies and analyzes the security architecture of the GI Cloud, the cloud environment of the Government of India, and suggests modifications to that architecture to match the changing scenario and to meet future needs for secure service delivery from the central level down to the end-user level.

Keywords: Cloud Security, Security in GI Cloud, Cloud Security measures, Security Assessment in GI Cloud, Proposed Security for GI cloud

