Data Animator — Software that Visualizes Data as Computer-Generated Animation on Personal Computers: an Application to Hamilton Harbour

1996 ◽  
Vol 31 (3) ◽  
pp. 609-622
Author(s):  
Efraim Halfon

Abstract Data Animator, V1.0, is a scientific visualization package for microcomputers. Its main purpose is to generate two-dimensional animations from any data set collected over time. Geographical references, such as a shore outline or bathymetry information, may be added for additional clarity. Visualization of data as animations greatly simplifies the interpretation of field measurements. Data Animator is designed (but not restricted) to display data collected in aquatic environments (lakes, rivers, estuaries, oceans, etc.) in a clear, concise way, using colour to represent ranges of data values. Data sets can also be displayed as static images (keyframes). All of Data Animator's options can be accessed through a graphical user interface (GUI); point-and-click mouse operations allow the user to choose the viewpoint, fonts, colour palette, data and keyframes, and to manipulate many features with immediate on-screen feedback. Animations are generated by defining keyframes of known data, each located at a specific time. The program can then interpolate over time, between keyframes, to create smoothly animated transitions (in-between frames). Two types of graphs can be rendered with Data Animator: plane-type graphs are horizontal slices at a depth specified by the user, and transect-type graphs are vertical slices along a straight line defined by the user. Data Animator can make use of both shore-outline information and three-dimensional bathymetry information, which allows for the generation of realistic-looking graphs that follow the shape of the aquatic environment. Animations can be displayed on a computer monitor or transferred to video tape. pH data from Hamilton Harbour have been visualized and the results are discussed.
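The in-between frames described above can be sketched as a per-cell linear blend between two keyframe grids. The function name and the list-of-lists grid representation are illustrative assumptions, not part of Data Animator itself:

```python
def interpolate_frames(key_a, key_b, t_a, t_b, t):
    """Linearly interpolate between two keyframe grids at time t.

    key_a and key_b are 2-D grids (lists of rows) of data values
    located at times t_a and t_b; t must lie between them.
    """
    if not (t_a <= t <= t_b):
        raise ValueError("t must lie between the keyframe times")
    w = (t - t_a) / (t_b - t_a)  # blend weight: 0 at key_a, 1 at key_b
    return [[(1 - w) * a + w * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(key_a, key_b)]
```

Plane- and transect-type graphs would apply the same per-cell blend to their respective horizontal or vertical slices.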

1994 ◽  
Vol 02 (04) ◽  
pp. 443-452
Author(s):  
EFRAIM HALFON ◽  
MORLEY HOWELL

DATA ANIMATOR is a software program to develop and display limnological data as computer-generated animations. The purpose of the program is to visualize, in a dynamic fashion, a variety of data collected in lakes. Examples originate from Hamilton Harbour, Lake Ontario. Data collected at different stations and different times are interpolated in space and in time. Lake topography and lake bathymetry files are used to relate data collected in the lake(s) to topographical features. A graphical user interface allows the user to choose two- or three-dimensional views, a viewpoint, fonts, colour palette, data and keyframes. A typical 1800-frame animation can be displayed in a minute at 30 frames per second; rendering time is about 12 hours. Animations can be displayed on a monitor or transferred to video tape.


2020 ◽  
Vol 122 (11) ◽  
pp. 1-32
Author(s):  
Michael A. Gottfried ◽  
Vi-Nhuan Le ◽  
J. Jacob Kirksey

Background It is of grave concern that kindergartners are missing more school than students in any other year of elementary school; therefore, documenting which students are absent and for how long is of utmost importance. Yet, doing so for students with disabilities (SWDs) has received little attention. This study addresses this gap by examining two cohorts of SWDs, separated by more than a decade, to document changes in attendance patterns. Research Questions First, for SWDs, has the number of school days missed or chronic absenteeism rates changed over time? Second, how are changes in the number of school days missed and chronic absenteeism rates related to changes in academic emphasis, presence of teacher aides, SWD-specific teacher training, and preschool participation? Subjects This study uses data from the Early Childhood Longitudinal Study (ECLS), a nationally representative data set of children in kindergarten. We rely on both ECLS data sets: the kindergarten classes of 1998–1999 and 2010–2011. Measures were identical in both data sets, making it feasible to compare children across the two cohorts. Given identical measures, we combined the data sets into a single data set with an indicator for being in the older cohort. Research Design This study examined two sets of outcomes: the first was the number of days absent, and the second was the likelihood of being chronically absent. These outcomes were regressed on a measure for being in the older cohort (our key measure for changes over time) and numerous control variables. The error term was clustered by classroom. Findings We found that SWDs are absent more often now than they were a decade earlier, and this growth in absenteeism was larger than what students without disabilities experienced. Absenteeism among SWDs was higher for those enrolled in full-day kindergarten, although having attended center-based care mitigates this disparity over time. Implications are discussed.
Conclusions Our study calls for additional attention and supports to combat the increasing rates of absenteeism for SWDs over time. Understanding contextual shifts and trends in rates of absenteeism for SWDs in kindergarten is pertinent to crafting effective interventions and research geared toward supporting the academic and social needs of these students.
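The cohort comparison at the heart of the research design can be illustrated with a stripped-down sketch: with a single 0/1 cohort dummy and no other regressors, the OLS coefficient reduces to a difference in group means. The control variables and classroom-clustered standard errors used in the study are omitted here and would require a regression package:

```python
def cohort_gap(days_absent, in_older_cohort):
    """Difference in mean days absent, newer cohort minus older.

    Equivalent to (minus) the OLS slope on a lone older-cohort dummy;
    a positive value indicates growth in absenteeism over time.
    """
    newer = [d for d, c in zip(days_absent, in_older_cohort) if c == 0]
    older = [d for d, c in zip(days_absent, in_older_cohort) if c == 1]
    return sum(newer) / len(newer) - sum(older) / len(older)
```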


2020 ◽  
Vol 41 (4/5) ◽  
pp. 247-268 ◽  
Author(s):  
Starr Hoffman ◽  
Samantha Godbey

Purpose This paper explores trends over time in library staffing and staffing expenditures among two- and four-year colleges and universities in the United States. Design/methodology/approach Researchers merged and analyzed data from 1996 to 2016 from the National Center for Education Statistics for over 3,500 libraries at postsecondary institutions. This study is primarily descriptive in nature and addresses the research questions: How do staffing trends in academic libraries over this period of time relate to Carnegie classification and institution size? How do trends in library staffing expenditures over this period of time correspond to these same variables? Findings Across all institutions, on average, total library staff decreased from 1998 to 2012. Numbers of librarians declined at master's and doctoral institutions between 1998 and 2016. Numbers of students per librarian increased over time in each Carnegie and size category. Average inflation-adjusted staffing expenditures have remained steady for master's, baccalaureate and associate's institutions. Salaries as a percent of library budget decreased only among doctoral institutions and institutions with 20,000 or more students. Originality/value This is a valuable study of trends over time, which has been difficult without downloading and merging separate data sets from multiple government sources. As a result, few studies have taken such an approach to this data. Consequently, institutions and libraries are making decisions about resource allocation based on only a fraction of the available data. Academic libraries can use this study and the resulting data set to benchmark key staffing characteristics.
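The merge-then-summarize workflow the authors describe can be sketched minimally. The record layout (year, institution, students, librarians) is a hypothetical simplification of the merged NCES yearly files, not their actual schema:

```python
def students_per_librarian(records):
    """Aggregate merged yearly records into a students-per-librarian trend.

    records: iterable of (year, institution, students, librarians) tuples,
    as might result from merging the separate NCES survey files.
    """
    by_year = {}
    for year, _inst, students, librarians in records:
        s, l = by_year.get(year, (0, 0))
        by_year[year] = (s + students, l + librarians)
    # Ratio of totals per year, in chronological order.
    return {y: s / l for y, (s, l) in sorted(by_year.items())}
```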


2001 ◽  
Vol 28 (1) ◽  
pp. 87 ◽  
Author(s):  
N. D. Barlow ◽  
G. L. Norbury

Introduced ferrets (Mustela furo) in New Zealand are subject to population control to reduce their threat to native fauna and the incidence of bovine tuberculosis (Tb) in livestock. To help in evaluating control options and to contribute to a multi-species model for Tb dynamics, a simple Ricker model was developed for ferret population dynamics in a semi-arid environment. The model was based on two data sets and suggested an intrinsic rate of increase for ferrets of 1.0–1.3 year⁻¹ and a carrying capacity of 0.5–2.9 ferrets km⁻². There was evidence for direct density-dependence in both data sets, and the effect appeared to act mainly on recruitment. Dependence of the rate of increase of predators on the density of wild rabbits (Oryctolagus cuniculus) was exhibited in one of the two data sets, together with a numerical response relating current density of predators asymptotically to current density of rabbits, their primary prey. Predators in this data set included both cats and ferrets, estimated from spotlight counts, but the other data set demonstrated a direct proportionality between predator (cat and ferret) spotlight counts and minimum ferrets known to be alive by trapping. The model suggested, firstly, that populations are hard to suppress by continuous culling, with at least a 50% removal per year necessary to effect a suppression of 50% in long-term average density. Secondly, if control is episodic rather than continuous, culling in autumn gives a greater degree of suppression (280% accumulated over time) than culling in spring (180%). A differential equation version of the model provides a component for a general Anderson/May bovine Tb/wildlife (possum/deer/ferret) model.
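Ricker dynamics with proportional annual culling can be sketched as follows; the cull-before-growth ordering, parameter values, and function names are illustrative assumptions, not the authors' exact formulation:

```python
import math

def ricker_step(n, r, k, cull=0.0):
    """One annual step of a Ricker model with a proportional cull.

    n: density at the start of the year; r: intrinsic rate of
    increase; k: carrying capacity; cull: fraction removed per year.
    """
    n = n * (1.0 - cull)                 # remove the culled fraction
    return n * math.exp(r * (1.0 - n / k))  # density-dependent growth

def simulate(n0, r, k, years, cull=0.0):
    """Iterate the annual step and return the final (post-growth) density."""
    n = n0
    for _ in range(years):
        n = ricker_step(n, r, k, cull)
    return n
```

With r = 1.0, a sustained 50% annual cull settles the long-run post-growth density near 61% of carrying capacity, i.e. well under 50% suppression, broadly in line with the abstract's conclusion that at least 50% removal per year is needed to halve long-term average density.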


2005 ◽  
Vol 4 ◽  
pp. 9-16 ◽  
Author(s):  
D. Hofman

Abstract. The LIANA Model Integration System is a shell application supporting the model integration and user interface functionality required for the rapid construction and run-time support of environmental decision support systems (EDSS). Internally it is constructed as a framework of C++ classes and functions covering the most common tasks performed by an EDSS (such as managing alternative strategies, running chains of models, supporting visualisation of the data with tables and graphs, and keeping ranges and default values for input parameters). An EDSS is constructed by integrating the LIANA system with the models or other applications such as GIS or MAA software. The basic requirements for a model or other application to be integrated are minimal: it should be a Windows or DOS .exe file and receive input and provide output as text files. To the user, the EDSS is presented as a number of data sets describing a scenario or giving the results of evaluating a scenario via modelling. Internally, data sets correspond to the I/O files of the models. During integration, the parameters included in each data set, as well as the specifications necessary to present the data set in the GUI and to export or import it to/from a text file, are provided in the MIL_LIANA language. The Visual C++ version of LIANA was developed in the framework of the MOIRA project and is used as the basis for the MOIRA Software Framework, the shell and user interface component of the MOIRA Decision Support System. At present, using LIANA to create a new EDSS requires changes to be made in its C++ code. The possibility of using LIANA for new EDSS construction without extending the source code is achieved by substituting MIL_LIANA with the object-oriented LIANA language.
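The text-file contract LIANA relies on (integrated models read input and write output as plain text files) can be sketched as a tiny round-trip helper. The `name=value` layout is a hypothetical stand-in for the actual MIL_LIANA data-set specifications:

```python
def write_dataset(path, params):
    """Write a scenario data set as name=value lines, one per parameter."""
    with open(path, "w") as f:
        for name, value in params.items():
            f.write(f"{name}={value}\n")

def read_dataset(path):
    """Read a name=value text file back into a dict, skipping blank lines."""
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)
```

A shell like LIANA would export such a file before launching the model executable, then import the model's output file for display.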


2021 ◽  
Author(s):  
Alexander K. Bartella ◽  
Josefine Laser ◽  
Mohammad Kamal ◽  
Dirk Halama ◽  
Michael Neuhaus ◽  
...  

Abstract Introduction: Three-dimensional facial scan images have been playing an increasingly important role in peri-therapeutic management of oral and maxillofacial and head and neck surgery cases. Face scan images can be obtained using optical facial scanners utilizing line-laser, stereophotography, or structured-light modalities, or from volumetric data obtained from cone beam computed tomography (CBCT). The aim of this study is to evaluate whether two low-cost procedures for creating three-dimensional face scan images are able to produce a sufficient data set for clinical analysis. Materials and methods: 50 healthy volunteers were included in the study. Two test objects with defined dimensions were attached to the forehead and the left cheek. Anthropometric values were first measured manually; consecutively, face scans were performed with a smart device and with manual photogrammetry and compared to the manually measured data sets. Results: Anthropometric distances on average deviated 2.17 mm from the manual measurement (smart device scanning 3.01 mm vs. photogrammetry 1.34 mm), with 7 out of 8 deviations being statistically significant. Of a total of 32 angles, 19 values showed a significant difference from the original 90° angles. The average deviation was 6.5° (smart device scanning 10.1° vs. photogrammetry 2.8°). Conclusion: Manual photogrammetry with a regular photo camera shows higher accuracy than scanning with a smart device. However, the smart device was more intuitive to handle, and further technical improvements of the cameras used should be watched closely.


Author(s):  
Daniel Chung

Abstract Magnetic resonance imaging techniques were used to collect three-dimensional velocity measurements of scaled models of a canyon in New Mexico, for comparison with simulations in which a gas was released inside the canyon. The first canyon model covers an area of 1850 m × 1030 m at a scale of 1:5250, while the second model covers an area of 290 m × 160 m at a scale of 1:825. A fully turbulent flow with a Reynolds number of 36,000, based on the channel hydraulic diameter, passes through the canyon geometry for both models. With Magnetic Resonance Velocimetry (MRV), more than 13 million data points were measured to represent flow velocity. The MRV experiment with the 1:5250 scale model helped to identify key terrain features to be included in the next set of measurements on a higher-resolution model; MRV thus served not only as a method of analysis but also as a method for design. The analysis of the data resulted in a new design at 1:825 scale, which had a higher resolution of the terrain surrounding the gas release point. The preliminary scans from the 1:825 scale model showed a much more dynamic flow around the release point than observed in the 1:5250 scale model; counter-rotating vortices and circulation can be observed in the 1:825 scale model. This data set will be used for comparison with Sandia National Laboratories' simulations of turbulent flows in complex terrain.
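The scale arithmetic behind the two models is simple to check. The working fluid and its viscosity in the second helper are assumptions (MRV experiments are typically run in water), not values stated in the abstract:

```python
def scaled_length(full_scale_m, ratio):
    """Physical model length for a reduction ratio written as 1:ratio."""
    return full_scale_m / ratio

def flow_velocity(reynolds, hydraulic_diameter_m, kinematic_viscosity=1.0e-6):
    """Bulk velocity needed to hit a target Reynolds number.

    Re = U * D_h / nu, solved for U. The default nu is that of
    water at about 20 C (an assumption, not from the abstract).
    """
    return reynolds * kinematic_viscosity / hydraulic_diameter_m
```

Notably, 1850 m at 1:5250 and 290 m at 1:825 both give a model extent of about 0.35 m, so the finer-scale model fits the same test section while resolving the release-point terrain more finely.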


2021 ◽  
Vol 87 (12) ◽  
pp. 879-890
Author(s):  
Sagar S. Deshpande ◽  
Mike Falk ◽  
Nathan Plooster

Rollers are an integral part of a hot-rolling steel mill: they transport hot metal from one end of the mill to the other, and the quality of the steel depends strongly on the surface quality of the rollers. This paper presents semi-automated methodologies to extract roller parameters from terrestrial lidar points. The procedure was divided into two steps. First, the three-dimensional points were converted to a two-dimensional image to detect the extents of the rollers using fast Fourier transform image matching. Lidar points for every roller were iteratively fitted to a circle; the radius and center of the fitted circle were taken as the average radius and average rotation axis of the roller, respectively. These parameters were also extracted manually and compared to the measured parameters for accuracy analysis. The proposed methodology was able to extract roller parameters at the millimeter level. Erroneously identified rollers were detected by moving average filters. In the second step, roller parameters were determined using the filtered roller points. Two data sets were used to validate the proposed methodologies. In the first data set, 366 out of 372 rollers (97.3%) were identified and modeled. The second, smaller data set consisted of 18 rollers, all of which were identified and modeled accurately.
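The circle-fitting step can be illustrated with an algebraic least-squares (Kåsa) fit; the abstract does not say which fitting method the authors used, so this is one common choice, not necessarily theirs:

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method).

    Fits x^2 + y^2 = A*x + B*y + C in the least-squares sense;
    the centre is (A/2, B/2) and the radius sqrt(C + A^2/4 + B^2/4).
    """
    # Build the 3x3 normal equations for the unknowns (A, B, C).
    m = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 3):
                m[r][c] -= f * m[col][c]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * 3
    for r in range(2, -1, -1):
        sol[r] = (rhs[r] - sum(m[r][c] * sol[c] for c in range(r + 1, 3))) / m[r][r]
    a, b, c = sol
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, math.sqrt(c + cx * cx + cy * cy)
```

For points lying exactly on a circle the fit recovers the centre and radius exactly; with noisy lidar points it minimizes an algebraic residual, which is generally adequate for millimeter-level work of the kind reported.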


2021 ◽  
Author(s):  
Mohammad Shehata ◽  
Hideki Mizunaga

<p>Long-period magnetotelluric and gravity data were acquired to investigate the crustal structure of the US Cordillera. The magnetotelluric data are being acquired across the continental USA on a quasi-regular grid of ∼70 km spacing as an electromagnetic component of the National Science Foundation EarthScope/USArray Program. The International Gravimetric Bureau compiled the gravity data at high spatial resolution. Due to the difference in data coverage density, geostatistical joint integration was utilized to map the subsurface structures with adequate resolution. First, a three-dimensional inversion of each data set was applied separately.</p><p>The inversion results of the two data sets show similar structures. The individual results of both data sets are resampled at the same locations using the kriging method, with each inversion model considered in estimating the coefficients. Then, the density distribution enhanced by the Layer Density Correction (LDC) process was applied to the spatial expansion of the MT data. Simple Kriging with varying Local Means (SKLM) was applied for the residual analysis and integration. For this purpose, the varying local means of the resistivity were estimated from the corrected gravity data using the Non-Linear Indicator Transform (NLIT), taking the spatial correlation into account. After that, the spatial expansion of the sparsely sampled MT data was carried out using the estimated local means and the SKLM method, both along the sections where the MT survey was conducted and for the entire area where density distributions exist. This research presents the integration results and the stand-alone three-dimensional inversion results of the gravity and magnetotelluric data.</p>
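A minimal sketch of simple kriging with varying local means (SKLM), the interpolation scheme named above: residuals from a locally varying mean are kriged with a covariance model. The exponential covariance and its parameters are illustrative assumptions; in the study the local means come from the NLIT-corrected gravity data:

```python
import math

def exp_cov(h, sill=1.0, rng=10.0):
    """Exponential covariance model C(h) = sill * exp(-3h / range)."""
    return sill * math.exp(-3.0 * h / rng)

def solve(a, b):
    """Tiny Gauss-Jordan solver for the kriging system a @ x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def sklm_estimate(target, data, local_mean):
    """Simple kriging with varying local means.

    data: list of (location, value) pairs; local_mean(loc) returns the
    locally varying mean at loc. Kriging is applied to value - mean.
    """
    locs = [p for p, _ in data]
    resid = [v - local_mean(p) for p, v in data]
    a = [[exp_cov(math.dist(p, q)) for q in locs] for p in locs]
    b = [exp_cov(math.dist(target, p)) for p in locs]
    lam = solve(a, b)
    return local_mean(target) + sum(l * r for l, r in zip(lam, resid))
```

An estimate at a data location reproduces the datum exactly, while far from all data it falls back to the local mean; that fallback is what lets the densely sampled gravity-derived means fill the gaps between sparse MT stations.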


Animals ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 72
Author(s):  
Rodrigo I. Albornoz ◽  
Khageswor Giri ◽  
Murray C. Hannah ◽  
William J. Wales

Body condition scoring is a valuable tool used to assess the changes in subcutaneous tissue reserves of dairy cows throughout the lactation resulting from changes to management or nutritional interventions. A subjective visual method is typically used to assign a body condition score (BCS) to a cow following a standardized scale, but this method is subject to operator bias and is labor-intensive, limiting the number of animals that can be scored and the frequency of measurement. An automated three-dimensional body condition scoring camera system is commercially available (DeLaval Body Condition Scoring, BCS DeLaval International AB, Tumba, Sweden), but the reliability of the BCS data for research applications is still unknown, as the system’s sensitivity to change in BCS over time within cows has yet to be investigated. The objective of this study was to evaluate the suitability of an automated body condition scoring system for dairy cows for research applications as an alternative to visual body condition scoring. Thirty-two multiparous Holstein-Friesian cows (9 ± 6.8 days in milk) were body condition scored visually by three trained staff weekly and automatically twice each day by the camera for at least 7 consecutive weeks. Measurements were performed in early lactation, when the greatest differences in BCS of a cow over the lactation are normally present, and changes in BCS occur rapidly compared with later stages, allowing for detectable changes in a short timeframe by each method. Two data sets were obtained from the automatic body condition scoring camera: (1) raw daily BCS camera values and (2) a refined data set obtained from the raw daily BCS camera data by fitting a robust smooth loess function to identify and remove outliers. Agreement, precision, and sensitivity properties of the three data sets (visual, raw, and refined camera BCS) were compared in terms of the weekly average for each cow.
Sensitivity was estimated as the ratio of response to precision, providing an objective performance criterion for independent comparison of methods. The camera body condition scoring method, using raw or refined camera data, performed better on this criterion compared with the visual method. Sensitivities of the raw BCS camera method, the refined BCS camera method, and the visual BCS method for changes in weekly mean score were 3.6, 6.2, and 1.7, respectively. To detect a change in BCS of an animal, assuming a decline of about 0.2 BCS (1–8 scale) per month, as was observed on average in this experiment, it would take around 44 days with the visual method, 21 days with the raw camera method, or 12 days with the refined camera method. This represents an increased capacity of both camera methods to detect changes in BCS over time compared with the visual method, which improved further when raw camera data were refined as per our proposed method. We recommend the use of the proposed refinement of the camera’s daily BCS data for research applications.
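The outlier-screening step that produces the refined data set can be sketched with a rolling median, a stand-in for the robust loess smoother described above (loess needs a fitting library, but the screening idea is the same: flag daily readings that sit far from a smooth local trend). The window and threshold values are illustrative assumptions:

```python
def flag_outliers(values, window=5, threshold=0.3):
    """Return a keep/discard flag per daily BCS reading.

    A reading is kept if it lies within `threshold` of the median of
    its `window`-wide neighbourhood (truncated at the series ends).
    """
    half = window // 2
    keep = []
    for i, v in enumerate(values):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        neigh = sorted(values[lo:hi])
        med = neigh[len(neigh) // 2]  # local median of the neighbourhood
        keep.append(abs(v - med) <= threshold)
    return keep
```

Readings flagged False would be dropped before computing the weekly per-cow averages used in the sensitivity comparison.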

