rich data
Recently Published Documents


TOTAL DOCUMENTS

738
(FIVE YEARS 365)

H-INDEX

25
(FIVE YEARS 8)

2022 ◽  
Vol 12 ◽  
Author(s):  
Rachel A. Reeb ◽  
Naeem Aziz ◽  
Samuel M. Lapp ◽  
Justin Kitzes ◽  
J. Mason Heberling ◽  
...  

Community science image libraries offer a massive, but largely untapped, source of observational data for phenological research. The iNaturalist platform offers a particularly rich archive, containing more than 49 million verifiable, georeferenced, open access images, encompassing seven continents and over 278,000 species. A critical limitation preventing scientists from taking full advantage of this rich data source is labor. Each image must be manually inspected and categorized by phenophase, which is both time-intensive and costly. Consequently, researchers may only be able to use a subset of the total number of images available in the database. While iNaturalist has the potential to yield enough data for high-resolution and spatially extensive studies, it requires more efficient tools for phenological data extraction. A promising solution is automation of the image annotation process using deep learning. Recent innovations in deep learning have made these open-source tools accessible to a general research audience. However, it is unknown whether deep learning tools can accurately and efficiently annotate phenophases in community science images. Here, we train a convolutional neural network (CNN) to annotate images of Alliaria petiolata into distinct phenophases from iNaturalist and compare the performance of the model with non-expert human annotators. We demonstrate that researchers can successfully employ deep learning techniques to extract phenological information from community science images. A CNN classified two-stage phenology (flowering and non-flowering) with 95.9% accuracy and classified four-stage phenology (vegetative, budding, flowering, and fruiting) with 86.4% accuracy. The overall accuracy of the CNN did not differ from humans (p = 0.383), although performance varied across phenophases. 
We found that a primary challenge of using deep learning for image annotation was not related to the model itself, but instead in the quality of the community science images. Up to 4% of A. petiolata images in iNaturalist were taken from an improper distance, were physically manipulated, or were digitally altered, which limited both human and machine annotators in accurately classifying phenology. Thus, we provide a list of photography guidelines that could be included in community science platforms to inform community scientists in the best practices for creating images that facilitate phenological analysis.
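The abstract does not specify the CNN architecture, but the core mechanism of convolutional image classification can be illustrated with a minimal pure-Python sketch: convolve the image with learned filters, apply a nonlinearity, pool, and pick the class with the strongest response. The kernel values and the two-filter decision rule below are illustrative assumptions, not the authors' trained model.

```python
def conv2d(image, kernel):
    """Valid cross-correlation of a 2-D grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    rows = len(image) - kh + 1
    cols = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(cols)]
            for i in range(rows)]

def relu(fmap):
    """Zero out negative responses."""
    return [[max(0.0, v) for v in row] for row in fmap]

def global_max_pool(fmap):
    """Reduce a feature map to its strongest activation."""
    return max(v for row in fmap for v in row)

def classify_two_stage(image, flower_kernel, leaf_kernel):
    """Score the image with one 'flowering' and one 'non-flowering'
    filter and return the phenophase with the stronger pooled response."""
    scores = {
        "flowering": global_max_pool(relu(conv2d(image, flower_kernel))),
        "non-flowering": global_max_pool(relu(conv2d(image, leaf_kernel))),
    }
    return max(scores, key=scores.get)
```

In a real CNN the kernels are learned from annotated training images and stacked over many layers; this sketch only shows why convolutional responses can separate visually distinct phenophases.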


ZDM ◽  
2022 ◽  
Author(s):  
Markku S. Hannula ◽  
Eeva Haataja ◽  
Erika Löfström ◽  
Enrique Garcia Moreno-Esteva ◽  
Jessica F. A. Salminen-Saari ◽  
...  

In this reflective methodological paper we focus on affordances and challenges of video data. We compare and analyze two research settings that use the latest video technology to capture classroom interactions in mathematics education, namely, The Social Unit of Learning (SUL) project of the University of Melbourne and the MathTrack project of the University of Helsinki. While using these two settings as examples, we have structured our reflections around themes pertinent to video research in general, namely, research methods, data management, and research ethics. SUL and MathTrack share an understanding of mathematics learning as social multimodal practice, and provide possibilities for zooming into the situational micro interactions that construct collaborative problem-solving learning. Both settings provide rich data for in-depth analyses of peer interactions and learning processes. The settings share special needs for technical support and data management, as well as attention to ethical aspects from the perspective of the participants’ security and discretion. SUL data are especially suitable for investigating interactions on a broad scope, addressing how multiple interactional processes intertwine. MathTrack, on the other hand, enables exploration of participants’ visual attention in detail and its role in learning. Both settings could provide tools for teachers’ professional development by showing them aspects of classroom interactions that would otherwise remain hidden.


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262093
Author(s):  
Mary K. Horton ◽  
Shannon McCurdy ◽  
Xiaorong Shao ◽  
Kalliope Bellesis ◽  
Terrence Chinn ◽  
...  

Background: Adverse childhood experiences (ACEs) are linked to numerous health conditions but understudied in multiple sclerosis (MS). This study’s objective was to test for associations between ACEs and MS risk and several clinical outcomes. Methods: We used a sample of adult, non-Hispanic MS cases (n = 1422) and controls (n = 1185) from Northern California. Eighteen ACEs were assessed, including parent divorce, parent death, and abuse. Outcomes included MS risk, age of MS onset, Multiple Sclerosis Severity Scale score, and use of a walking aid. Logistic and linear regression estimated odds ratios (ORs) (and beta coefficients) and 95% confidence intervals (CIs) for ACEs operationalized as any/none, counts, individual events, and latent factors/patterns. Results: Overall, more MS cases than controls experienced ≥1 ACE (54.5% vs. 53.8%). After adjusting for sex, birthyear, and race, this small difference was attenuated (OR = 1.01, 95% CI: 0.87, 1.18). There was no trend of increasing or decreasing odds of MS across ACE count categories. No consistent associations were detected between individual ACEs experienced at ages 0–10 or 11–20 years and MS risk. Factor analysis identified five latent ACE factors, but their associations with MS risk were approximately null. Age of MS onset and other clinical outcomes were not associated with ACEs after multiple testing correction. Conclusion: Despite rich data and multiple approaches to operationalizing ACEs, no consistent and statistically significant associations were observed between ACEs and MS. This highlights the challenges of studying sensitive, retrospective events that occurred decades before data collection.
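The core calculation behind the reported OR and CI is standard and can be sketched with the stdlib. The cell counts below are reconstructed from the reported percentages (54.5% of 1422 cases, 53.8% of 1185 controls with ≥1 ACE) and give a crude, unadjusted estimate; the paper's OR of 1.01 is additionally adjusted for sex, birthyear, and race.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Counts reconstructed from the abstract's percentages (illustrative).
or_, lo, hi = odds_ratio_ci(775, 647, 638, 547)
```

The crude OR comes out near 1.03 with a confidence interval spanning 1, consistent with the null finding reported in the abstract.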


2022 ◽  
Vol 12 (1) ◽  
pp. 35
Author(s):  
Michael A. Schwartz ◽  
Brent C. Elder ◽  
Monu Chhetri ◽  
Zenna Preli

Members of the Deaf New American community reported that they arrived in the United States with no formal education, unable to read or write in their native language, and with no fluency in English. Efforts to educate them have floundered, and this study aims to find out why and how to fix the problem. Interviews with eight Deaf New Americans yielded rich data demonstrating how education policy in the form of the Individuals with Disabilities Education Act (IDEA) and other laws fails to address their needs, because these laws do not include them in their coverage. The study’s main findings are the deleterious effect of the home countries’ failure to educate their Deaf citizens, America’s failure to provide accessible and effective instruction, and the combined effect of these institutional failures on the ability of Deaf New Americans to master English and find gainful employment. This article argues for a change in education policy that recognizes the unique nature of this community and provides a role for Deaf educators in teaching Deaf New Americans.


2022 ◽  
Vol 11 (1) ◽  
pp. 38
Author(s):  
Qiong Luo ◽  
Hong Shu ◽  
Zhongyuan Zhao ◽  
Rui Qi ◽  
Youxin Huang ◽  
...  

The evaluation of community livability quantifies the demands of human settlement at the micro scale, supporting urban governance decision-making at the macro scale. Big data generated by the urban management of government agencies can provide an accurate, real-time, and rich data set for livability evaluation. However, these data are intertwined across the overlapping geographical management boundaries of different government agencies, which makes data integration and utilization difficult when evaluating community livability. To address this problem, this paper proposes a scheme for partitioning basic geographical space into grids by optimally integrating the various geographical management boundaries relevant to enterprise-level big data. Furthermore, a system of community livability indexes is created, and an evaluation model of community livability is constructed. Taking Wuhan as an example, the effectiveness of the model is verified. The experimental results show that livability evaluation with reference to our basic geographic grids can effectively make use of governmental big data to spatially identify the multi-dimensional characteristics of a community, including management, environment, facility services, safety, and health. Our technical solution for evaluating community livability using gridded basic urban geographical data has great potential for producing thematic community data, constructing a 15-minute community living circle in Wuhan, and enhancing the ability of communities to resist risks.
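The paper's grids are derived by integrating real management boundaries, which is beyond a short sketch; a simplified version with a uniform square grid still shows the key step of mapping point records from different agencies into common cells and aggregating per indicator dimension. Cell size, coordinates, and the dimension names below are illustrative assumptions.

```python
from collections import defaultdict

def grid_cell(x, y, size):
    """Map a planar coordinate to the (col, row) index of a square grid cell."""
    return (int(x // size), int(y // size))

def aggregate_by_cell(records, size):
    """Count records per grid cell and per indicator dimension
    (e.g. 'facility', 'safety') -- one input to a livability score."""
    counts = defaultdict(lambda: defaultdict(int))
    for x, y, dim in records:
        counts[grid_cell(x, y, size)][dim] += 1
    return {cell: dict(dims) for cell, dims in counts.items()}

# Hypothetical point records from different agencies, now on one grid.
records = [
    (12.0, 3.0, "facility"),
    (14.0, 4.0, "facility"),
    (14.0, 4.0, "safety"),
    (25.0, 3.0, "facility"),
]
cells = aggregate_by_cell(records, size=10.0)
```

Once every agency's records land in the same cells, per-dimension counts can be normalized and weighted into a composite livability index per cell.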


2022 ◽  
Vol 6 (1) ◽  
pp. 3
Author(s):  
Riccardo Cantini ◽  
Fabrizio Marozzo ◽  
Domenico Talia ◽  
Paolo Trunfio

Social media platforms are part of everyday life, allowing the interconnection of people around the world in large discussion groups on every topic, including important social and political issues. Social media have therefore become a valuable source of information-rich data, commonly referred to as Social Big Data, that can be exploited to study people’s behavior, opinions, moods, interests, and activities. However, these powerful communication platforms can also be used to manipulate conversation, polluting online content and altering the popularity of users through spamming and the spread of misinformation. Recent studies have documented the use on social media of automated accounts, known as social bots, which pose as legitimate users by imitating human behavior in order to influence discussions of any kind, including political issues. In this paper we present a new methodology, TIMBRE (Time-aware opInion Mining via Bot REmoval), aimed at discovering the polarity of social media users during election campaigns characterized by the rivalry of political factions. The methodology is temporally aware and relies on a keyword-based classification of posts and users. Moreover, it recognizes and filters out data produced by social media bots, which aim to alter public opinion about political candidates, thus avoiding heavily biased information. The proposed methodology has been applied to a case study analyzing the polarization of a large number of Twitter users during the 2016 US presidential election. The results show the benefits of both removing bots and taking temporal aspects into account in the forecasting process, demonstrating the accuracy and effectiveness of the proposed approach. Finally, we investigated how the presence of social bots may affect political discussion in the 2016 US presidential election, analyzing the main differences between human and artificial political support and estimating the influence of social bots on legitimate users.
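TIMBRE itself is only described at a high level here, but two of its core steps — dropping posts from known bot accounts, then assigning each remaining user a polarity from faction keywords — can be sketched directly. The keyword sets, account names, and the majority-vote tie-breaking rule below are illustrative assumptions, not the paper's exact classifier.

```python
def user_polarity(posts, bot_accounts, keywords):
    """Score each non-bot user as the faction whose keywords dominate
    their posts; users with no keywords or a tie are 'neutral'."""
    scores = {}
    for author, text in posts:
        if author in bot_accounts:
            continue  # filter out bot-generated content before scoring
        tally = scores.setdefault(author, {f: 0 for f in keywords})
        words = text.lower().split()
        for faction, terms in keywords.items():
            tally[faction] += sum(w in terms for w in words)
    polarity = {}
    for author, tally in scores.items():
        best = max(tally.values())
        top = [f for f, v in tally.items() if v == best]
        polarity[author] = top[0] if best > 0 and len(top) == 1 else "neutral"
    return polarity

# Hypothetical posts, factions, and a known bot account.
keywords = {"blue": {"blueteam"}, "red": {"redteam"}}
posts = [("alice", "go blueteam"), ("bot1", "redteam redteam"), ("bob", "hello")]
result = user_polarity(posts, {"bot1"}, keywords)
```

The time-aware part of TIMBRE would additionally bucket posts by date so a user's polarity can drift over the campaign; the sketch collapses all posts into one window.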


2022 ◽  
pp. 192-213
Author(s):  
Karim Hesham Shaker Ibrahim

The potential of digital gaming to facilitate foreign language (FL) learning has been established in many empirical investigations; however, the pedagogical implications of these investigations remain rather limited. A potential reason for this limitation is that the FL learning potential of digital games is embedded in the gaming ecology and shaped by different forces in that ecology. However, to date most empirical studies in the field have focused primarily on the linguistic behavior of gamers rather than the gaming ecology. A potential reason for this is the lack of a robust methodological approach to examining game-based language use as an ecological, multidimensional activity. To address this research gap, this chapter proposes the diamond reconstruction model, a dynamic, multidimensional, and ecology-sensitive approach to de- and re-constructing game-based FL use. Grounded in theories of gameplay, and informed by a conceptual model of game-based FL use, the model reconstructs gameplay episodes by gathering detail-rich data on social, cognitive, and virtual dimensions.


Author(s):  
Abdelkader Khobzaoui ◽  
Kadda Benyahia ◽  
Boualem Mansouri ◽  
Sofiane Boukli-Hacene

Internet of Things (IoT) is a set of connected smart devices providing and sharing rich data in real time without involving a human being. However, IoT is a security nightmare because, as in early computer systems, security issues were not considered at the design stage. Thereby, each IoT system can be susceptible to malicious users and uses. To avoid such situations, many approaches and techniques have been proposed by both academic and industrial researchers. DNA computing is an emerging and relatively new field dealing with data encryption using DNA computing concepts. This technique allows rapid and secure data transfer between connected objects with low power consumption. In this paper, the authors propose a symmetric cryptography method based on DNA. The method consists of splitting the message to encrypt/decrypt into blocks of characters and using a symmetric key extracted from a chromosome for encryption and decryption. Implemented on the embedded platform of a Raspberry Pi, the proposed method shows good performance in terms of robustness, complexity, and attack resistance.
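The paper's exact block scheme and chromosome-based key derivation are not given in the abstract; the general idea of DNA-style symmetric encryption can be sketched as a two-layer process: mask each byte with a repeating symmetric key, then encode the result as nucleotides at 2 bits per base. The XOR masking and the literal byte-string key below are simplifying assumptions standing in for the chromosome-derived key.

```python
BASES = "ACGT"  # each nucleotide encodes 2 bits

def xor_key(data: bytes, key: bytes) -> bytes:
    """Mask data with a repeating symmetric key (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def to_dna(data: bytes) -> str:
    """Encode each byte as 4 nucleotides, most significant bit pair first."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def from_dna(dna: str) -> bytes:
    """Decode 4 nucleotides back into one byte."""
    out = []
    for i in range(0, len(dna), 4):
        b = 0
        for ch in dna[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

def encrypt(message: bytes, key: bytes) -> str:
    return to_dna(xor_key(message, key))

def decrypt(dna: str, key: bytes) -> bytes:
    return xor_key(from_dna(dna), key)
```

Note that XOR with a short repeating key is not secure on its own (it falls to known-plaintext attacks); the sketch is only meant to show the nucleotide encoding layer and the symmetric round trip.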


2022 ◽  
pp. 452-472
Author(s):  
Naila Iqbal Khan

Qualitative case study methodology provides tools for researchers to study complex phenomena within their contexts. When the approach is applied correctly, it becomes a valuable method for health science research to develop theory, evaluate programs, and develop interventions. The purpose of this chapter is to guide the novice researcher in identifying the key elements for designing and implementing qualitative case study research projects. An overview of the types of case study designs is provided along with general recommendations for writing the research questions, developing propositions, determining the “case” under study, binding the case, and a discussion of data sources and triangulation. To facilitate application of these principles, clear examples of research questions, study propositions, and the different types of case study designs are provided. The great contribution of qualitative research is the culturally specific and contextually rich data it produces. This is proving critical in the design of comprehensive solutions to general problems in developing countries.


2021 ◽  
Vol 17 (12) ◽  
pp. e1009626
Author(s):  
Phuc Nguyen ◽  
Sylvia Chien ◽  
Jin Dai ◽  
Raymond J. Monnat ◽  
Pamela S. Becker ◽  
...  

Identification of cell phenotypic states within heterogeneous populations, along with elucidation of their switching dynamics, is a central challenge in modern biology. Conventional single-cell analysis methods typically provide only indirect, static phenotypic readouts. Transmitted light images, on the other hand, provide direct morphological readouts and can be acquired over time to provide a rich data source for dynamic cell phenotypic state identification. Here, we describe an end-to-end deep learning platform, UPSIDE (Unsupervised Phenotypic State IDEntification), for discovering cell states and their dynamics from transmitted light movies. UPSIDE uses the variational auto-encoder architecture to learn latent cell representations, which are then clustered for state identification, decoded for feature interpretation, and linked across movie frames for transition rate inference. Using UPSIDE, we identified distinct blood cell types in a heterogeneous dataset. We then analyzed movies of patient-derived acute myeloid leukemia cells, from which we identified stem-cell associated morphological states as well as the transition rates to and from these states. UPSIDE opens up the use of transmitted light movies for systematic exploration of cell state heterogeneity and dynamics in biology and medicine.
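The variational auto-encoder at the heart of UPSIDE is too large to sketch here, but the final step — linking per-frame state labels across a movie and estimating transition rates — reduces to counting consecutive-frame state pairs and row-normalizing. The two-state labeling below is a hypothetical stand-in for UPSIDE's clustered latent states.

```python
def transition_probabilities(state_sequence, n_states):
    """Estimate a row-stochastic transition matrix from a sequence of
    per-frame cell-state labels in {0, ..., n_states - 1}."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(state_sequence, state_sequence[1:]):
        counts[a][b] += 1  # one observed frame-to-frame transition
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Frames labeled with two hypothetical states: 0 = stem-like, 1 = non-stem.
P = transition_probabilities([0, 0, 1, 1, 1, 0], 2)
```

Dividing these per-frame probabilities by the imaging interval gives approximate transition rates between states, which is the quantity the study reports for stem-cell-associated morphologies.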

