What more than a hundred project groups reveal about teaching visualization

2020
Vol 23 (5)
pp. 895-911
Author(s):  
Michael Burch ◽  
Elisabeth Melby

Abstract The growing number of students poses a challenge for teaching visualization courses: lecturing, supervision, evaluation, and grading all become harder at scale. A major goal when designing such courses is to match the students' varied experiences and skills, i.e., to find a common task that all of them can solve. The given task matters in particular for pursuing a common project goal and collaborating in small project groups, but also for gaining, practicing, or extending programming skills. In this article, we survey our experiences from teaching 116 student project groups across 6 bachelor courses on information visualization with varying topics. Two teaching strategies were tried: 2 courses were held without lectures and assignments but with weekly scrum sessions (denoted TS1), and 4 courses were guided by weekly lectures and assignments (denoted TS2). A total of 687 students took part in these 6 courses. Managing the ever-growing number of students in computer and data science is a major challenge these days; in our courses, the students apply a design-based active learning scenario while being supported by weekly lectures, assignments, or scrum sessions. As a major outcome, we identified regular supervision, either by lectures and assignments or by regular scrum sessions, as important, because the students were relatively inexperienced bachelor students with a wide range of programming skills but nearly no visualization background. We explain the successive stages used to handle the problems that arose and describe how much supervision was involved in the development of the visualization projects. The project task description has a minimal number of requirements but can be extended in many directions, with most decisions, such as programming languages, visualization approaches, or interaction techniques, left to the students. Finally, we discuss the benefits and drawbacks of both teaching strategies.

Author(s):  
Norihiro Yamada ◽  
Samson Abramsky

Abstract The present work achieves a mathematical, in particular syntax-independent, formulation of dynamics and intensionality of computation in terms of games and strategies. Specifically, we give game semantics of a higher-order programming language that distinguishes programmes with the same value yet different algorithms (or intensionality) and the hiding operation on strategies that precisely corresponds to the (small-step) operational semantics (or dynamics) of the language. Categorically, our games and strategies give rise to a cartesian closed bicategory, and our game semantics forms an instance of a bicategorical generalisation of the standard interpretation of functional programming languages in cartesian closed categories. This work is intended to be a step towards a mathematical foundation of intensional and dynamic aspects of logic and computation; it should be applicable to a wide range of logics and computations.
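For orientation, here is a minimal sketch, in standard textbook notation not taken from the paper, of the usual interpretation of simply-typed λ-terms in a cartesian closed category; the bicategorical game semantics described above generalises exactly these clauses.

```latex
% Standard CCC interpretation of simply-typed lambda-terms (textbook
% clauses; the paper's semantics is a bicategorical refinement of this).
% Requires \usepackage{amsmath} and \usepackage{stmaryrd}.
\[
\begin{aligned}
\llbracket \Gamma, x : A \vdash x : A \rrbracket
  &= \pi_x && \text{(projection)} \\
\llbracket \Gamma \vdash \lambda x.\,M : A \Rightarrow B \rrbracket
  &= \Lambda\bigl(\llbracket \Gamma, x : A \vdash M : B \rrbracket\bigr)
  && \text{(currying)} \\
\llbracket \Gamma \vdash M\,N : B \rrbracket
  &= \mathrm{ev} \circ \langle \llbracket M \rrbracket, \llbracket N \rrbracket \rangle
  && \text{(evaluation)}
\end{aligned}
\]
```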


2000
Vol 10 (3)
pp. 269-303
Author(s):  
XAVIER LEROY

A simple implementation of an SML-like module system is presented as a module parameterized by a base language and its type-checker. This implementation is useful both as a detailed tutorial on the Harper–Lillibridge–Leroy module system and its implementation, and as a constructive demonstration of the applicability of that module system to a wide range of programming languages.
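As a rough illustration of the parameterization idea, and not of Leroy's actual ML implementation, the following Python sketch builds a module-level checker from two assumed ingredients supplied by the base language: a type-checker for core expressions and a subtyping test on core types.

```python
# Illustrative sketch: a module checker parameterized by a base-language
# type-checker, mirroring the idea of implementing the module layer as a
# functor over the core language. Names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable, Dict

CoreType = Any   # whatever the base language uses for types
CoreExpr = Any   # base-language expressions

@dataclass
class Signature:
    """Module interface: component names mapped to core-language types."""
    components: Dict[str, CoreType]

@dataclass
class Structure:
    """Module implementation: component names mapped to core expressions."""
    bindings: Dict[str, CoreExpr]

def make_module_checker(core_typecheck: Callable[[CoreExpr], CoreType],
                        subtype: Callable[[CoreType, CoreType], bool]):
    """Build module-level operations from the base language's checker.

    The module layer never inspects core expressions itself; it only
    delegates to `core_typecheck` and `subtype`, which is what makes the
    construction reusable across base languages."""
    def infer_signature(struct: Structure) -> Signature:
        return Signature({name: core_typecheck(expr)
                          for name, expr in struct.bindings.items()})

    def matches(struct: Structure, sig: Signature) -> bool:
        inferred = infer_signature(struct)
        # Width and depth subtyping: the structure may export extra
        # components, and each required component must match its type.
        return all(name in inferred.components
                   and subtype(inferred.components[name], ty)
                   for name, ty in sig.components.items())

    return infer_signature, matches
```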


Optics
2020
Vol 2 (1)
pp. 25-42
Author(s):  
Ioseph Gurwich ◽  
Yakov Greenberg ◽  
Kobi Harush ◽  
Yarden Tzabari

The present study aims at designing an anti-reflective (AR) engraving on the input and output surfaces of a rectangular light-guide. We estimate AR efficiency by the transmittance level in the angular range determined by the light-guide. Using nano-engraving, we achieve uniformly high transmission over a wide range of wavelengths. In the past, we used smoothed conical pins or indentations on the faces of the light-guide crystal as the engraved structure. Here, we widen the class of pins under consideration, following the physical model developed in the previous paper, and analyze smoothed pyramidal pins with different base shapes. The possible effect of randomizing the pins' parameters is also examined. The results demonstrate an optimized engraved structure whose parameters depend on the required spectral range and facet format. The predicted level of transmittance is close to 99%, and its flatness (estimated by the standard deviation) in the required wavelength range is 0.2%. The theoretical analysis and numerical calculations indicate that these results represent the best transmission (reflection) that can be expected for a facet of the given shape and size in the required spectral band; the approach is equally applicable to facets of other shapes and sizes. We also discuss a simple way of comparing experimental and theoretical results for a light-guide with the designed input and output features. In this study, as in our previous work, we restrict ourselves to rectangular facets. We also consider the limitations on maximal transmission imposed by the size and shape of the light-guide facets. The theoretical analysis is performed for an infinite structure and serves as an upper bound on the transmittance of smaller apertures.
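As a small illustration of the flatness metric quoted above, the following sketch computes the mean transmittance and its standard deviation over a wavelength band; the transmittance curve here is a hypothetical placeholder, not the paper's data.

```python
# Flatness of a transmittance spectrum, estimated as the standard
# deviation over the required band. Numbers below are placeholders.
import numpy as np

wavelengths_nm = np.linspace(400, 700, 31)                    # spectral band
transmittance = 0.99 + 0.002 * np.sin(wavelengths_nm / 50.0)  # placeholder

mean_T = transmittance.mean()
flatness = transmittance.std()   # the std-based flatness measure

print(f"mean transmittance: {mean_T:.4f}, flatness (std): {flatness:.4f}")
```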


Author(s):  
Bin Wang ◽  
Haocen Zhao ◽  
Ling Yu ◽  
Zhifeng Ye

The fuel system of an aero-engine typically operates over a wide range of temperatures. This can affect both the characteristics and the precision of the fuel metering unit (FMU), and even the performance and safety of the whole engine. This paper provides a theoretical analysis of the effect that fuel temperature fluctuation has on the controllability of the FMU and clarifies the drawbacks of purely mathematical models of the FMU that account for fuel temperature variation. Taking an electrohydraulic servovalve-controlled FMU as the numerical case study, a thermal-hydraulic model is simulated in AMESim at temperatures ranging from −10 to 60 °C to confirm the effectiveness and precision of the model on the basis of the steady-state and dynamic characteristics of the FMU. Meanwhile, an FMU testing workbench with a temperature adjustment device employing a fuel cooler and heater is established to conduct an experiment on the fuel temperature characteristics. Results show that the experiment matches the simulation well, with a relative error of no more than 5%, and that a 0–50 °C fuel temperature variation produces up to a 5.2% decrease in fuel rate. In addition, the step response increases with fuel temperature. Fuel temperature has virtually no impact on the steady-state and dynamic characteristics of the FMU under the testing conditions in this paper, implying that the FMU can operate normally in the given temperature range.
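A hedged sketch of the agreement metric used above, the relative error between measured and simulated fuel rates; all values below are placeholders, not the paper's data.

```python
# Relative error between experiment and simulation (placeholder values).
import numpy as np

fuel_rate_sim = np.array([100.0, 98.5, 97.0, 95.2])  # simulated, placeholder
fuel_rate_exp = np.array([101.2, 99.0, 96.1, 94.8])  # measured, placeholder

rel_error = np.abs(fuel_rate_exp - fuel_rate_sim) / np.abs(fuel_rate_exp)
print(f"max relative error: {rel_error.max():.1%}")  # should stay below 5%
```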


2020
Vol 8
Author(s):  
Devasis Bassu ◽  
Peter W. Jones ◽  
Linda Ness ◽  
David Shallcross

Abstract In this paper, we present a theoretical foundation for representing a data set as a measure in a very large, hierarchically parametrized family of positive measures, whose parameters can be computed explicitly (rather than estimated by optimization), and illustrate its applicability to a wide range of data types. The preprocessing step then consists of representing data sets as simple measures. The theoretical foundation consists of a dyadic product formula representation lemma and a visualization theorem. We also define an additive multiscale noise model that can be used to sample from dyadic measures and a more general multiplicative multiscale noise model that can be used to perturb continuous functions, Borel measures, and dyadic measures. The first two results are based on theorems in [15, 3, 1]. The representation uses the very simple concept of a dyadic tree and hence is widely applicable, easily understood, and easily computed. Since the data sample is represented as a measure, subsequent analysis can exploit statistical and measure-theoretic concepts and theories. Because the representation uses the very simple concept of a dyadic tree defined on the universe of a data set, and because the parameters are simply and explicitly computable as well as easily interpretable and visualizable, we hope that this approach will be broadly useful to mathematicians, statisticians, and computer scientists who are intrigued by or involved in data science, including its mathematical foundations.
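A minimal sketch of the explicit parameter computation, assuming the common convention that a dyadic measure splits as mu(I_left) = mu(I)(1 + a_I)/2; the authors' construction may differ in details.

```python
# Explicit dyadic parameters of an empirical measure on [0, 1):
# for each dyadic interval I with positive mass, recover a_I from the
# masses of its two children, assuming mu(I_left) = mu(I) (1 + a_I) / 2.
import numpy as np

def dyadic_parameters(samples, depth):
    """Return {(level, k): a_I} for the empirical measure of `samples`."""
    samples = np.asarray(samples)
    params = {}
    for level in range(depth):
        n_bins = 2 ** (level + 1)  # bins are the children at this level
        counts, _ = np.histogram(samples, bins=n_bins, range=(0.0, 1.0))
        for k in range(2 ** level):
            left, right = counts[2 * k], counts[2 * k + 1]
            total = left + right
            if total > 0:
                params[(level, k)] = (left - right) / total
    return params

rng = np.random.default_rng(0)
print(dyadic_parameters(rng.beta(2.0, 5.0, size=1000), depth=3))
```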


A sorting algorithm deals with arranging alphanumeric data in a particular order and plays an important role in the field of data science. Selection sort is one of the simplest algorithms and can be applied to large numbers of elements. Given a list of unsorted data, the algorithm divides the list into two partitions: one section holds the sorted data and the other holds the remaining unsorted data. The algorithm repeatedly finds the smallest element within the unsorted partition and swaps it with the leftmost unsorted element, eventually putting the whole list in order. This research presents implementations of selection sort in C/C++, Python, and Rust and measures their time complexity. After the experiments, we collected the results in terms of running time and analyzed the outcomes. It was observed that the Python implementation has a very small number of lines of code, consumes less storage, and ran faster than the other two languages.
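For reference, a straightforward Python version of the algorithm described above:

```python
def selection_sort(items):
    """In-place selection sort: grow a sorted prefix by repeatedly
    swapping the smallest remaining element into position."""
    n = len(items)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix.
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # Swap it with the leftmost unsorted element.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```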


2020
Vol 10 (1)
Author(s):  
Ali Rohani ◽  
Jennifer A. Kashatus ◽  
Dane T. Sessions ◽  
Salma Sharmin ◽  
David F. Kashatus

Abstract Mitochondria are highly dynamic organelles that can exhibit a wide range of morphologies. Mitochondrial morphology can differ significantly across cell types, reflecting different physiological needs, but can also change rapidly in response to stress or the activation of signaling pathways. Understanding both the cause and consequences of these morphological changes is critical to fully understanding how mitochondrial function contributes to both normal and pathological physiology. However, while robust and quantitative analysis of mitochondrial morphology has become increasingly accessible, there is a need for new tools to generate and analyze large data sets of mitochondrial images in high throughput. The generation of such datasets is critical to fully benefit from rapidly evolving methods in data science, such as neural networks, that have shown tremendous value in extracting novel biological insights and generating new hypotheses. Here we describe a set of three computational tools, Cell Catcher, Mito Catcher and MiA, that we have developed to extract extensive mitochondrial network data on a single-cell level from multi-cell fluorescence images. Cell Catcher automatically separates and isolates individual cells from multi-cell images; Mito Catcher uses the statistical distribution of pixel intensities across the mitochondrial network to detect and remove background noise from the cell and segment the mitochondrial network; MiA uses the binarized mitochondrial network to perform more than 100 mitochondria-level and cell-level morphometric measurements. To validate the utility of this set of tools, we generated a database of morphological features for 630 individual cells that encode 0, 1 or 2 alleles of the mitochondrial fission GTPase Drp1 and demonstrate that these mitochondrial data could be used to predict Drp1 genotype with 87% accuracy. Together, this suite of tools enables the high-throughput and automated collection of detailed and quantitative mitochondrial structural information at a single-cell level. Furthermore, the data generated with these tools, when combined with advanced data science approaches, can be used to generate novel biological insights.
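The following Python sketch illustrates the general pipeline such tools automate, intensity-based background thresholding, binarization, and per-component morphometrics, using scikit-image; it is an illustration, not the authors' Cell Catcher, Mito Catcher, or MiA code.

```python
# Illustrative single-cell pipeline: threshold a fluorescence channel
# from its intensity distribution, binarize the mitochondrial network,
# and take simple per-component morphometric measurements.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def mito_morphometrics(image: np.ndarray) -> list:
    """image: 2D fluorescence intensity array for a single cell."""
    threshold = threshold_otsu(image)   # data-driven background cutoff
    network = image > threshold         # binarized mitochondrial mask
    components = regionprops(label(network))
    return [{"area": c.area,
             "eccentricity": c.eccentricity,
             "perimeter": c.perimeter} for c in components]
```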


2022
Author(s):  
Md Mahbub Alam ◽  
Luis Torgo ◽  
Albert Bifet

Due to the surge in spatio-temporal data volume, the popularity of location-based services and applications, and the importance of knowledge extracted from spatio-temporal data for solving a wide range of real-world problems, a plethora of research and development work has been done in the area of spatial and spatio-temporal data analytics in the past decade. The main goal of existing work has been to develop algorithms and technologies to capture, store, manage, analyze, and visualize spatial or spatio-temporal data. Researchers have contributed either by adding spatio-temporal support to existing systems, by developing new systems from scratch, or by implementing algorithms for processing spatio-temporal data. The existing ecosystem of spatial and spatio-temporal data analytics systems can be categorized into three groups: (1) spatial databases (SQL and NoSQL), (2) big spatial data processing infrastructures, and (3) programming languages and GIS software. Since existing surveys mostly investigated infrastructures for processing big spatial data, this survey explores the whole ecosystem of spatial and spatio-temporal analytics. It also portrays the importance and future of spatial and spatio-temporal data analytics.


2019
Vol 9 (2)
pp. 14-20
Author(s):  
Mădălina Viorica ION (MANU)
Ilie VASILE

This paper inventories some of the essential traits of the software preferred by researchers, students, and professors, such as R (with RStudio) and Matlab, together with their possible uses. To fill a gap in the Romanian literature and help finance students choose the proper tools for their research purposes, this comparative study aims to bring a fresh, useful perspective to the relevant literature. In Romania, the use of R was the focus of several international conferences on official statistics held in Bucharest, and of others devoted to business excellence, innovation, and sustainability. Meanwhile, at the global scale, the R and Python programming languages are considered the lingua franca of data science, as common statistical software used in both corporations and academia. In this paper, I analyze the basic features of such software, with a view to applications in finance.


Author(s):  
Belén Rubio Ballester ◽  
Fabrizio Antenucci ◽  
Martina Maier ◽  
Anthony C. C. Coolen ◽  
Paul F. M. J. Verschure

Abstract Introduction After a stroke, a wide range of deficits can occur with varying onset latencies. As a result, assessing impairment and recovery are enormous challenges in neurorehabilitation. Although several clinical scales are generally accepted, they are time-consuming, show high inter-rater variability, have low ecological validity, and are vulnerable to biases introduced by compensatory movements and action modifications. Alternative methods need to be developed for efficient and objective assessment. In this study, we explore the potential of computer-based body tracking systems and classification tools to estimate the motor impairment of the more affected arm in stroke patients. Methods We present a method for estimating clinical scores from movement parameters extracted from kinematic data recorded during unsupervised computer-based rehabilitation sessions. We identify a number of kinematic descriptors that characterise the patients' hemiparesis (e.g., movement smoothness, work area), implement a double-noise model, and perform a multivariate regression using clinical data from 98 stroke patients who completed a total of 191 sessions with RGS. Results Our results reveal a new digital biomarker of arm function, the Total Goal-Directed Movement (TGDM), which relates to the patients' work area during the execution of goal-oriented reaching movements. The model's performance in estimating FM-UE scores reaches an accuracy of $R^2 = 0.38$ with an error of $\sigma = 12.8$. Next, we evaluate its reliability ($r = 0.89$ for test-retest), longitudinal external validity (95% true positive rate), sensitivity, and generalisation to other tasks that involve planar reaching movements ($R^2 = 0.39$). The model achieves comparable accuracy for the Chedoke Arm and Hand Activity Inventory ($R^2 = 0.40$) and the Barthel Index ($R^2 = 0.35$). Conclusions Our results highlight the clinical value of kinematic data collected during unsupervised goal-oriented motor training with the RGS combined with data science techniques, and provide new insight into factors underlying recovery and its biomarkers.
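A hedged sketch of the estimation step, a multivariate regression from kinematic descriptors to a clinical score, using synthetic placeholder data; the authors' double-noise model is not reproduced here, and the feature names are hypothetical.

```python
# Multivariate regression from kinematic descriptors to a clinical
# score, on synthetic data standing in for the real sessions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sessions = 191
X = rng.normal(size=(n_sessions, 3))  # e.g. smoothness, work area, TGDM
y = 30 + X @ np.array([4.0, 6.0, 3.0]) + rng.normal(scale=10, size=n_sessions)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"held-out R^2: {r2_score(y_test, model.predict(X_test)):.2f}")
```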

