Trialling the use of Google Apps together with online marking to enhance collaborative learning and provide effective feedback

F1000Research ◽  
2017 ◽  
Vol 4 ◽  
pp. 177
Author(s):  
Nicky J. D. Slee ◽  
Marty H. Jacobs

This paper describes a new approach to an ecology practical in which 76 Level 4 students were divided into four groups (n = 20 ± 2) to collect data. Each group studied a different habitat and was further divided into seven subgroups (n = 2 or 3) to collect field data. Each of the four groups collaborated through Google Drive on descriptions and images of the habitat site, and also collaborated at the subgroup level on their own habitat data. The four groups then shared habitat descriptions with the aim of providing enough information to enable everyone to understand the entire data set. The three-stage assignment was assessed, and feedback was issued at group and individual level, via the University’s online submission service (FASER), with some additional feedback given via Moodle, the University’s Virtual Learning Environment. Two separate submissions were made to FASER: the first comprised the group and subgroup work (stage 1), and the second included the peer assessment task (stage 2) and the individual evaluation of the habitats (stage 3). Feedback was given after the second submission had been uploaded to FASER and again when the assessment for the second submission was complete. The group and subgroup data sets were provided to all students via Moodle, so that individuals could carry out their own analysis of all four habitats. The use of Google Drive and Google Apps helped to improve the digital literacy of the staff and students involved. All three stages of the assignment were successful; over 85% of students passed the first two stages, and 82.9% passed stage 3. The collaborative work enabled students to produce high-quality descriptive ecology documents valuable for the subsequent stages of the assignment. The peer assessment encouraged students to gain information on expected Undergraduate Minimum Standards, and gave students the opportunity to study multiple habitats. The final stage was open ended and challenged students to make sense of large ecological data sets. There was a positive correlation between levels of success at stages 1 and 3 for students who achieved less than 65% for the independent work, i.e. they benefited from carrying out group work. This collaborative, three-stage approach is recommended, especially as it helps lower-ability students gain subject knowledge and improve their presentation skills. However, some modifications are recommended: 1) simplifying the sample and data collection, and 2) providing more guidance for the peer assessment task and individual analysis. Learner autonomy enabled self-directed learning to take place and enriched large-scale teaching, as it encouraged student-student interaction. Significant differences in performance by gender and ability are discussed.

F1000Research ◽  
2015 ◽  
Vol 4 ◽  
pp. 177
Author(s):  
Nicky J. D. Slee ◽  
Marty H. Jacobs

This paper describes a new approach to an ecology practical where the cohort was divided into four groups to collect data. Each group studied a different habitat; the cohort was further subdivided into seven groups to collect field data. Each of the four groups collaborated through Google Drive on descriptions and images of the habitat site, and also collaborated at the subgroup level on their own habitat data. The four groups then shared habitat descriptions with the aim of providing enough information to enable everyone to understand the entire data set. Group work was assessed online and feedback was given at both the group and subgroup levels. At the end of the first stage, peer assessment of all the work was carried out on an individual basis to engage students in other habitats. A complete set of data was finally provided to all students, so that individuals could carry out their own analysis of all four habitats; work was again assessed online and feedback given to each individual. The three-stage assignment from group work to peer assessment to individual analysis was a success. The collaborative work through Google Drive enabled students to produce high-quality documents that were valuable for the next step. The peer assessment enabled students to gain information on expected Minimum Standards and exposed them to a variety of habitats. The final stage was open ended and challenged students. This approach is recommended, but the data collection process needs modification, and students need more guidance when completing the final stage of the assignment.


Author(s):  
Lior Shamir

Abstract Several recent observations using large data sets of galaxies showed non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to have gravitational interaction. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each data set was annotated using a different annotation method. Both data sets exhibit a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence yields a dipole axis with statistical significance of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^\circ,\delta=47^\circ)$, well within the $1\sigma$ error range of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^\circ,\delta=61^\circ)$.
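As a rough illustration of the cosine-dependence fit described above (a sketch, not the author's actual analysis code), the snippet below scans candidate dipole axes on a coarse grid, computes the angular distance between each galaxy and the candidate axis, and fits the signed spin directions to A·cos(φ) by least squares; the axis maximizing the fitted amplitude relative to its uncertainty is taken as the most likely dipole. The galaxy coordinates, spin labels, and grid resolution are placeholders.

```python
import numpy as np

def angular_distance(ra1, dec1, ra2, dec2):
    """Great-circle angle (radians) between sky positions given in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    return np.arccos(np.clip(
        np.sin(dec1) * np.sin(dec2) +
        np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1.0, 1.0))

def fit_dipole(ra, dec, spin, grid_step=5.0):
    """Scan candidate axes; fit spin labels (+1/-1) to A*cos(phi) at each axis.

    Returns (score, alpha_deg, delta_deg) for the axis with the largest |A|/sigma_A.
    """
    best = None
    for alpha in np.arange(0.0, 360.0, grid_step):
        for delta in np.arange(-90.0, 90.0 + grid_step, grid_step):
            cos_phi = np.cos(angular_distance(ra, dec, alpha, delta))
            # Least-squares amplitude of spin ~ A*cos(phi) and its formal error
            # (assumes unit variance of the +1/-1 spin labels).
            a = np.sum(spin * cos_phi) / np.sum(cos_phi ** 2)
            sigma_a = 1.0 / np.sqrt(np.sum(cos_phi ** 2))
            score = abs(a) / sigma_a
            if best is None or score > best[0]:
                best = (score, alpha, delta)
    return best

# Hypothetical usage with placeholder RA/Dec (degrees) and spin signs (+1/-1):
# score, alpha, delta = fit_dipole(ra_deg, dec_deg, spin_sign)
```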


2015 ◽  
Vol 8 (1) ◽  
pp. 421-434 ◽  
Author(s):  
M. P. Jensen ◽  
T. Toto ◽  
D. Troyan ◽  
P. E. Ciesielski ◽  
D. Holdridge ◽  
...  

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles that are used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
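To make concrete how the choice of surface parcel affects a derived convective level, here is a small, hedged sketch (not the campaign's processing code) that computes the lifting condensation level with Bolton's (1980) approximation, once from the surface observation and once from a parcel averaged over an assumed mixed layer; the sounding arrays and the 500 m mixed-layer depth are placeholders.

```python
import numpy as np

RD_OVER_CP = 287.04 / 1005.7  # dry-air gas constant over specific heat at constant pressure

def lcl_bolton(p_hpa, t_k, td_k):
    """LCL temperature (K) and pressure (hPa) for a parcel, Bolton (1980) eq. 15."""
    t_lcl = 1.0 / (1.0 / (td_k - 56.0) + np.log(t_k / td_k) / 800.0) + 56.0
    p_lcl = p_hpa * (t_lcl / t_k) ** (1.0 / RD_OVER_CP)  # dry-adiabatic ascent to the LCL
    return t_lcl, p_lcl

def mixed_layer_parcel(p, t, td, z, depth_m=500.0):
    """Average T and Td over the lowest depth_m metres (simple mean as a stand-in)."""
    mask = z <= z[0] + depth_m
    return p[0], t[mask].mean(), td[mask].mean()

# Hypothetical usage with placeholder sounding arrays (pressure hPa, temperature K,
# dewpoint K, height m), comparing a surface parcel with a 500 m mixed-layer parcel:
# print(lcl_bolton(p[0], t[0], td[0]))
# print(lcl_bolton(*mixed_layer_parcel(p, t, td, z)))
```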


2020 ◽  
Vol 223 (2) ◽  
pp. 1378-1397
Author(s):  
Rosemary A Renaut ◽  
Jarom D Hogue ◽  
Saeed Vatankhah ◽  
Shuang Liu

SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block Toeplitz Toeplitz block structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10⁶. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use the Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
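As a loose illustration of why the block Toeplitz Toeplitz block (BTTB) structure makes the forward operation cheap (a sketch under assumed shapes, not the authors' implementation): a BTTB matrix acting on a gridded model is a 2-D discrete convolution, so the matrix-vector product and its transpose can be evaluated with FFTs instead of forming the dense sensitivity matrix. The kernel array standing in for one depth layer of the sensitivity matrix is a placeholder.

```python
import numpy as np
from scipy.signal import fftconvolve

def bttb_matvec(kernel, x_grid):
    """Product of a BTTB matrix with a gridded vector via FFT convolution.

    kernel: (2M-1, 2N-1) array; kernel[i+M-1, j+N-1] is the matrix entry at
            row-column offset (i, j), i.e. one depth layer's point response.
    x_grid: (M, N) model values on the surface grid.
    """
    return fftconvolve(kernel, x_grid, mode="valid")

def bttb_rmatvec(kernel, y_grid):
    """Transpose product: flipping the kernel turns convolution into correlation."""
    return fftconvolve(kernel[::-1, ::-1], y_grid, mode="valid")

# Sanity check against the explicitly built dense BTTB matrix on a tiny grid.
M, N = 4, 5
rng = np.random.default_rng(0)
kernel = rng.standard_normal((2 * M - 1, 2 * N - 1))
x = rng.standard_normal((M, N))
dense = np.array([[kernel[i - k + M - 1, j - l + N - 1]
                   for k in range(M) for l in range(N)]
                  for i in range(M) for j in range(N)])
assert np.allclose(dense @ x.ravel(), bttb_matvec(kernel, x).ravel())
assert np.allclose(dense.T @ x.ravel(), bttb_rmatvec(kernel, x).ravel())
```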


2009 ◽  
Vol 2 (1) ◽  
pp. 87-98 ◽  
Author(s):  
C. Lerot ◽  
M. Van Roozendael ◽  
J. van Geffen ◽  
J. van Gent ◽  
C. Fayt ◽  
...  

Abstract. Total O3 columns have been retrieved from six years of SCIAMACHY nadir UV radiance measurements using SDOAS, an adaptation of the GDOAS algorithm previously developed at BIRA-IASB for the GOME instrument. GDOAS and SDOAS have been implemented by the German Aerospace Center (DLR) in version 4 of the GOME Data Processor (GDP) and in version 3 of the SCIAMACHY Ground Processor (SGP), respectively. The processors are being run at the DLR processing centre on behalf of the European Space Agency (ESA). We first focus on the description of the SDOAS algorithm with particular attention to the impact of uncertainties on the reference O3 absorption cross-sections. Second, the resulting SCIAMACHY total ozone data set is globally evaluated through large-scale comparisons with results from GOME and OMI as well as with ground-based correlative measurements. The various total ozone data sets are found to agree within 2% on average. However, a negative trend of 0.2–0.4%/year has been identified in the SCIAMACHY O3 columns; this probably originates from instrumental degradation effects that have not yet been fully characterized.
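As a loose sketch of how such a drift can be quantified (an assumed, generic approach, not the processors' validation code), one can regress the relative difference between SCIAMACHY and a reference total-ozone record against time; the slope, expressed in percent per year, corresponds to the reported 0.2–0.4%/year figure. The arrays below are placeholders for monthly mean columns.

```python
import numpy as np

def ozone_drift_percent_per_year(time_years, o3_test, o3_reference):
    """Linear drift of the relative difference (test - reference) / reference.

    time_years: decimal years of each (e.g. monthly mean) comparison point.
    Returns (drift in %/yr, mean bias in %).
    """
    rel_diff = 100.0 * (o3_test - o3_reference) / o3_reference  # percent difference
    slope, intercept = np.polyfit(time_years, rel_diff, 1)      # least-squares line
    return slope, rel_diff.mean()

# Hypothetical usage with placeholder monthly means over six years:
# drift, bias = ozone_drift_percent_per_year(t, sciamachy_o3, reference_o3)
```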


2017 ◽  
Vol 14 (4) ◽  
pp. 172988141770907 ◽  
Author(s):  
Hanbo Wu ◽  
Xin Ma ◽  
Zhimeng Zhang ◽  
Haibo Wang ◽  
Yibin Li

Human daily activity recognition has been a hot spot in the field of computer vision for many decades. Despite best efforts, activity recognition in naturally uncontrolled settings remains a challenging problem. Recently, by being able to perceive depth and visual cues simultaneously, RGB-D cameras have greatly boosted the performance of activity recognition. However, due to some practical difficulties, the publicly available RGB-D data sets are not sufficiently large for benchmarking when considering the diversity of their activities, subjects, and backgrounds. This severely affects the applicability of complicated learning-based recognition approaches. To address the issue, this article provides a large-scale RGB-D activity data set by merging five public RGB-D data sets that differ from each other in many aspects, such as length of actions, nationality of subjects, or camera angles. This data set comprises 4528 samples depicting 7 action categories (up to 46 subcategories) performed by 74 subjects. To verify how challenging the data set is, three feature representation methods are evaluated: depth motion maps, spatiotemporal depth cuboid similarity feature, and curvature space scale. Results show that the merged large-scale data set is more realistic and challenging and therefore more suitable for benchmarking.
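For readers unfamiliar with the first of the evaluated baselines, the sketch below gives a minimal, hedged version of a depth motion map: absolute frame-to-frame differences of a depth video, projected onto front, side, and top views and accumulated over time (following the common DMM formulation, not necessarily the exact variant used in the article). The depth-array shape and the number of depth bins are placeholders.

```python
import numpy as np

def project_views(depth_frame, n_depth_bins=64, max_depth=None):
    """Project one depth frame onto front (H, W), side (H, bins), and top (bins, W) views."""
    h, w = depth_frame.shape
    max_depth = max_depth or depth_frame.max() or 1.0
    bins = np.clip((depth_frame / max_depth * (n_depth_bins - 1)).astype(int),
                   0, n_depth_bins - 1)
    side = np.zeros((h, n_depth_bins))
    top = np.zeros((n_depth_bins, w))
    rows, cols = np.indices(depth_frame.shape)
    valid = depth_frame > 0                      # ignore missing depth pixels
    side[rows[valid], bins[valid]] = 1.0
    top[bins[valid], cols[valid]] = 1.0
    return depth_frame.astype(float), side, top

def depth_motion_maps(depth_video, eps=0.0):
    """Sum of thresholded absolute differences of consecutive projected frames."""
    maps, prev = None, None
    for frame in depth_video:                    # depth_video: (T, H, W)
        views = project_views(frame)
        if prev is not None:
            diffs = [np.where(np.abs(v - p) > eps, np.abs(v - p), 0.0)
                     for v, p in zip(views, prev)]
            maps = diffs if maps is None else [m + d for m, d in zip(maps, diffs)]
        prev = views
    return dict(zip(("front", "side", "top"), maps))

# Hypothetical usage: dmm = depth_motion_maps(depth_video)
```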


2017 ◽  
Vol 44 (2) ◽  
pp. 203-229 ◽  
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
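To make the idea of structural metrics over RDF concrete, here is a small, hedged sketch (illustrative only; the paper defines its own metrics) that, given a list of (subject, predicate, object) triples, computes subject out-degrees and the ratio of distinct predicate combinations ("characteristic sets") to subjects, a simple proxy for how regular, and hence redundant and compressible, the data structure is.

```python
from collections import defaultdict

def structural_metrics(triples):
    """Simple structural indicators for a set of RDF triples.

    triples: iterable of (subject, predicate, object) strings.
    """
    preds_per_subject = defaultdict(set)
    out_degree = defaultdict(int)
    for s, p, _ in triples:
        preds_per_subject[s].add(p)
        out_degree[s] += 1
    subjects = len(preds_per_subject)
    characteristic_sets = {frozenset(ps) for ps in preds_per_subject.values()}
    return {
        "subjects": subjects,
        "mean_out_degree": sum(out_degree.values()) / subjects,
        # Few characteristic sets relative to subjects => highly regular, redundant structure.
        "characteristic_set_ratio": len(characteristic_sets) / subjects,
    }

# Hypothetical usage:
# metrics = structural_metrics([("ex:alice", "foaf:knows", "ex:bob"),
#                               ("ex:alice", "foaf:name", '"Alice"'),
#                               ("ex:bob", "foaf:name", '"Bob"')])
```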


1983 ◽  
Vol 66 ◽  
pp. 411-425
Author(s):  
Frank Hill ◽  
Juri Toomre ◽  
Laurence J. November

Abstract Two-dimensional power spectra of solar five-minute oscillations display prominent ridge structures in (k, ω) space, where k is the horizontal wavenumber and ω is the temporal frequency. The positions of these ridges in k and ω can be used to probe temperature and velocity structures in the subphotosphere. We have been carrying out a continuing program of observations of five-minute oscillations with the diode array instrument on the vacuum tower telescope at Sacramento Peak Observatory (SPO). We have sought to establish whether power spectra taken on separate days show shifts in ridge locations; these may arise from different velocity and temperature patterns having been brought into our sampling region by solar rotation. Power spectra have been obtained for six days of observations of Doppler velocities using the Mg I λ5173 and Fe I λ5434 spectral lines. Each data set covers 8 to 11 hr in time and samples a region 256″ × 1024″ in spatial extent, with a spatial resolution of 2″ and temporal sampling of 65 s. We have detected shifts in ridge locations between certain data sets which are statistically significant. The character of these displacements when analyzed in terms of eastward and westward propagating waves implies that changes have occurred in both temperature and horizontal velocity fields underlying our observing window. We estimate the magnitude of the velocity changes to be on the order of 100 m s⁻¹; we may be detecting the effects of large-scale convection akin to giant cells.
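As a minimal illustration of the kind of (k, ω) diagram described here (a generic sketch, not the SPO reduction pipeline), a two-dimensional power spectrum can be formed by Fourier transforming a space-time array of Doppler velocities over one spatial dimension and time; the default sampling values mirror the stated 2″ spatial and 65 s temporal cadence, but the velocity array itself is a placeholder.

```python
import numpy as np

def k_omega_power(velocity, dx_arcsec=2.0, dt_sec=65.0):
    """Two-dimensional power spectrum of v(x, t).

    velocity: array of shape (nx, nt), Doppler velocity vs. position and time.
    Returns (k, omega, power) with k in rad/arcsec and omega in rad/s,
    keeping only non-negative wavenumbers and frequencies.
    """
    nx, nt = velocity.shape
    spectrum = np.fft.fft2(velocity - velocity.mean())   # remove the mean flow
    power = np.abs(spectrum) ** 2
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx_arcsec)
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt_sec)
    keep_k, keep_w = k >= 0, omega >= 0
    return k[keep_k], omega[keep_w], power[np.ix_(keep_k, keep_w)]

# Hypothetical usage: k, w, p = k_omega_power(doppler_map)  # ridges appear as maxima of p
```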


2019 ◽  
Vol 34 (9) ◽  
pp. 1369-1383 ◽  
Author(s):  
Dirk Diederen ◽  
Ye Liu

Abstract With the ongoing development of distributed hydrological models, flood risk analysis calls for synthetic, gridded precipitation data sets. The availability of large, coherent, gridded re-analysis data sets, in combination with the increase in computational power, accommodates the development of new methodology to generate such synthetic data. We tracked moving precipitation fields and classified them using self-organising maps. For each class, we fitted a multivariate mixture model and generated a large set of synthetic, coherent descriptors, which we used to reconstruct moving synthetic precipitation fields. We introduced randomness in the original data set by replacing the observed precipitation fields in the original data set with the synthetic precipitation fields. The output is a continuous, gridded, hourly precipitation data set of a much longer duration, containing physically plausible and spatio-temporally coherent precipitation events. The proposed methodology implicitly provides an important improvement in the spatial coherence of precipitation extremes. We investigate the issue of unrealistic, sudden changes on the grid and demonstrate how a dynamic spatio-temporal generator can provide spatial smoothness in the probability distribution parameters and hence in the return level estimates.
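The per-class "fit a multivariate mixture model, then sample synthetic descriptors" step can be sketched roughly as below (a generic stand-in using a Gaussian mixture; the paper's mixture family, descriptor set, and number of components are not specified here). The descriptor matrices per self-organising-map class are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def synthesize_descriptors(descriptors_by_class, n_samples_per_class,
                           n_components=5, random_state=0):
    """Fit one mixture model per class of tracked precipitation events and sample from it.

    descriptors_by_class: dict mapping class label -> (n_events, n_features) array
                          of event descriptors (e.g. duration, extent, mean intensity).
    Returns a dict mapping class label -> synthetic (n_samples, n_features) array.
    """
    synthetic = {}
    for label, x in descriptors_by_class.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                              random_state=random_state).fit(x)
        samples, _ = gmm.sample(n_samples_per_class)
        synthetic[label] = samples
    return synthetic

# Hypothetical usage:
# synthetic = synthesize_descriptors({"som_class_3": event_descriptors}, 1000)
```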


2020 ◽  
pp. 1-51
Author(s):  
Ivan Vulić ◽  
Simon Baker ◽  
Edoardo Maria Ponti ◽  
Ulla Petti ◽  
Ira Leviant ◽  
...  

We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering data sets for 12 typologically diverse languages, including major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-resourced ones (e.g., Welsh, Kiswahili). Each language data set is annotated for the lexical relation of semantic similarity and contains 1,888 semantically aligned concept pairs, providing a representative coverage of word classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity intervals, lexical fields, and concreteness levels. Additionally, owing to the alignment of concepts across languages, we provide a suite of 66 crosslingual semantic similarity data sets. Because of its extensive size and language coverage, Multi-SimLex provides entirely novel opportunities for experimental evaluation and analysis. On its monolingual and crosslingual benchmarks, we evaluate and analyze a wide array of recent state-of-the-art monolingual and crosslingual representation models, including static and contextualized word embeddings (such as fastText, monolingual and multilingual BERT, XLM), externally informed lexical representations, as well as fully unsupervised and (weakly) supervised crosslingual word embeddings. We also present a step-by-step data set creation protocol for creating consistent, Multi-SimLex-style resources for additional languages. We make these contributions—the public release of Multi-SimLex data sets, their creation protocol, strong baseline results, and in-depth analyses which can be helpful in guiding future developments in multilingual lexical semantics and representation learning—available via a Web site that will encourage community effort in further expansion of Multi-SimLex to many more languages. Such a large-scale semantic resource could inspire significant further advances in NLP across languages.
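The standard way such a benchmark is used, evaluating a representation model by correlating its predicted similarities with the human ratings, can be sketched as follows (a generic evaluation recipe, not the authors' released scripts); the word-vector lookup and the gold score list are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(word_vectors, concept_pairs, gold_scores):
    """Spearman correlation between model and human similarity judgements.

    word_vectors: dict mapping word -> 1-D numpy array.
    concept_pairs: list of (word1, word2) tuples from the benchmark.
    gold_scores: human similarity ratings aligned with concept_pairs.
    """
    predicted, gold = [], []
    for (w1, w2), score in zip(concept_pairs, gold_scores):
        if w1 in word_vectors and w2 in word_vectors:    # skip out-of-vocabulary pairs
            v1, v2 = word_vectors[w1], word_vectors[w2]
            cosine = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            predicted.append(cosine)
            gold.append(score)
    rho, _ = spearmanr(predicted, gold)
    return rho, len(predicted)

# Hypothetical usage:
# rho, covered_pairs = evaluate_similarity(fasttext_vectors, pairs, scores)
```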

