Learning and Emotional Outcomes after the Application of Invention Activities in a Sample of University Students

2020 ◽  
Vol 12 (18) ◽  
pp. 7306
Author(s):  
Eduardo González-Cabañes ◽  
Trinidad García ◽  
Celestino Rodríguez ◽  
Marcelino Cuesta ◽  
José Carlos Núñez

Invention activities can promote reflective learning processes. However, their inclusion in educational practice can generate doubts because they take up time that can otherwise be invested in explaining content, and because some students might experience frustration and anxiety while trying to solve them. This study experimentally evaluated the efficacy of invention activities in a university statistics class, considering both emotions (self-reported) and learning achieved. In total, 43 students were randomly assigned to either (a) inventing variability measures before receiving instruction about the topic of statistical variability, or (b) completing a similar problem-solving activity, but only after they had received guidance with a worked example concerning the target concepts. Students in the first condition acquired greater conceptual knowledge, which is an indicator of deep learning. The emotions experienced during the learning activities were similar in both learning conditions. However, it was notable that enjoyment during the invention phase of the invention condition was strongly associated with higher achievement. Invention activities are a promising educational strategy that require students to play an active role, and can promote deep learning. This study also provides implementation guidelines for teachers while discussing the possibilities offered by new technologies.

Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 68
Author(s):  
Jiwei Fan ◽  
Xiaogang Yang ◽  
Ruitao Lu ◽  
Xueli Xie ◽  
Weipeng Li

Unmanned aerial vehicles (UAVs) and related technologies have played an active role in the prevention and control of the novel coronavirus both domestically and abroad, especially in epidemic prevention, surveillance, and elimination. However, existing UAVs serve a single function and offer limited processing capacity and poor interaction. To overcome these shortcomings, we designed an intelligent anti-epidemic patrol detection and warning flight system that integrates UAV autonomous navigation, deep learning, intelligent voice, and other technologies. Based on convolutional neural networks and deep learning, the system provides crowd density detection and face mask detection methods that can locate dense crowds. Intelligent voice alarm technology was used to raise alarms in abnormal situations, such as crowd-gathering areas and people without masks, and to disseminate epidemic prevention policies, providing a powerful technical means for preventing epidemics and delaying their spread. To verify the superiority and feasibility of the system, high-precision online analysis was carried out for crowds in the inspection area, and pedestrians’ faces were detected on the ground to identify whether they were wearing masks. The experimental results show that the mean absolute error (MAE) of crowd density detection was less than 8.4, and the mean average precision (mAP) of face mask detection was 61.42%. The system provides convenient and accurate evaluation information for decision-makers and meets the requirements of real-time, accurate detection.
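The MAE figure reported above can be illustrated with a minimal sketch. Assuming the detector regresses a per-pixel density map whose sum approximates the head count (a common crowd-counting formulation, assumed here rather than taken from the paper):

```python
import numpy as np

def mean_absolute_error(pred_maps, true_counts):
    """MAE between predicted crowd counts (sum of each density map)
    and ground-truth head counts."""
    pred_counts = np.array([m.sum() for m in pred_maps])
    return float(np.mean(np.abs(pred_counts - np.array(true_counts))))

# Hypothetical example: two density maps whose sums are 12.0 and 30.0
maps = [np.full((4, 4), 0.75), np.full((5, 6), 1.0)]
print(mean_absolute_error(maps, [10, 33]))  # |12-10| and |30-33| average to 2.5
```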


Processes ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 575
Author(s):  
Jelena Ochs ◽  
Ferdinand Biermann ◽  
Tobias Piotrowski ◽  
Frederik Erkens ◽  
Bastian Nießing ◽  
...  

Laboratory automation is a key driver in biotechnology and an enabler for powerful new technologies and applications. In particular, in the field of personalized therapies, automation in research and production is a prerequisite for achieving cost efficiency and broad availability of tailored treatments. For this reason, we present the StemCellDiscovery, a fully automated robotic laboratory for the cultivation of human mesenchymal stem cells (hMSCs) at small scale and in parallel. While the system can handle different kinds of adherent cells, here we focus on the cultivation of adipose-derived hMSCs. The StemCellDiscovery provides in-line visual quality control for automated confluence estimation, realized by combining high-speed microscopy with deep learning-based image processing. We demonstrate the ability of the algorithm to detect hMSCs in culture at different densities and to calculate confluence from the resulting images. Furthermore, we show that the StemCellDiscovery is capable of expanding adipose-derived hMSCs in a fully automated manner using the confluence estimation algorithm. To estimate the system capacity under high-throughput conditions, we modeled the production environment in simulation software. The simulations of the production process indicate that the robotic laboratory is capable of handling more than 95 cell culture plates per day.
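The confluence estimation above reduces to a simple idea: once a segmentation model has labelled each pixel as cell or background, confluence is the covered fraction of the image. A minimal sketch, where the binary mask is a hypothetical model output rather than the paper's actual pipeline:

```python
import numpy as np

def confluence(mask):
    """Confluence as the fraction of pixels classified as cell-covered.
    `mask` is a binary array from a (hypothetical) segmentation model."""
    return float(mask.mean())

# Hypothetical 8x8 mask with a 4x4 block of cell pixels -> 25% confluent
mask = np.zeros((8, 8), dtype=np.uint8)
mask[:4, :4] = 1
print(f"{confluence(mask):.0%}")  # 25%
```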


2008 ◽  
Vol 39 (4) ◽  
pp. 379-394 ◽  
Author(s):  
Kenneth Ruthven

This article examines three important facets of the incorporation of new technologies into educational practice, focusing on emergent usages of the mathematical tools of computer algebra and dynamic geometry. First, it illustrates the interpretative flexibility of these tools, highlighting important differences in ways of conceptualizing and employing them that reflect their appropriation to contrasting practices of mathematics teaching. Second, it examines the cultural process of instrumental evolution in which mathematical frameworks and teaching practices are adapted in response to new possibilities created by these tools, showing that such evolution remains at a relatively early stage. Third, it points to crucial prerequisites, at both classroom and systemic levels, for effective institutional adoption of such tools: explicit recognition of the interplay between the development of instrumental and mathematical knowledge, including the establishment of a recognized repertoire of tool-mediated mathematical techniques supported by appropriate discourses of explanation and justification.


2021 ◽  
pp. 1-55
Author(s):  
Emma A. H. Michie ◽  
Behzad Alaei ◽  
Alvar Braathen

Generating an accurate model of the subsurface to assess the feasibility of a CO2 storage site is crucial. In particular, how faults are interpreted is likely to influence the predicted capacity and integrity of the reservoir, whether by identifying high-risk areas along the fault where fluid is likely to flow across it, or by assessing the fault's reactivation potential under increased pressure, which could cause fluid to flow up the fault. New technologies such as Deep Learning allow users to interpret faults effortlessly and much more quickly. These Deep Learning techniques use trained Neural Networks to compute areas where faults are likely to occur. Although these new technologies may be attractive due to reduced interpretation time, it is important to understand the inherent uncertainties in their ability to predict accurate fault geometries. Here, we compare Deep Learning fault interpretation with manual fault interpretation and observe distinct differences for faults where significant ambiguity exists due to poor seismic resolution at the fault: Deep Learning methods produce more irregular fault surfaces than conventional manual interpretation. This can lead to significant differences in the resulting analyses, such as fault reactivation potential. Conversely, for well-imaged faults the resulting fault surfaces are closely similar whether Deep Learning or manual interpretation is employed, and hence so are the derived attributes and fault analyses.


Author(s):  
Gagan Kukreja

Almost all financial services in China (especially digital payments) are affected by new innovations and technologies. New technologies such as blockchain, artificial intelligence, machine learning, deep learning, and data analytics have immensely influenced almost all aspects of financial services, such as deposits, transactions, billing, remittances, credit (B2B and P2P), underwriting, insurance, and so on. Fintech companies are enabling broader financial inclusion, changes in lifestyle and spending behavior, better and faster financial services, and much more. This chapter covers the development, opportunities, and challenges of the financial sector arising from new technologies in China. It sheds light on the opportunities that emerged from a large population of 1.4 billion people, high penetration of and access to the latest affordable technology, the affordable cost of smartphones, and government policies and regulations. Lastly, the chapter portrays the untapped potential of Fintech in China.


Author(s):  
Silvia Uribe ◽  
Alberto Belmonte ◽  
Francisco Moreno ◽  
Álvaro Llorente ◽  
Juan Pedro López ◽  
...  

Abstract
Universal access on equal terms to audiovisual content is key to the full inclusion of people with disabilities in activities of daily life. A real challenge for the current Information Society, it has been identified but not yet efficiently achieved, because current access solutions are mainly based on the traditional television standard and other non-automated, high-cost approaches. The arrival of new technologies within the hybrid television environment, together with the application of different artificial intelligence techniques to the content, will ensure the deployment of innovative solutions that enhance the user experience for all. In this paper, a set of tools for image enhancement based on the combination of deep learning and computer vision algorithms is presented. These tools provide automatic descriptive information about the media content, based on face detection for magnification and character identification. This information is finally fused to provide a customizable description of the visual information, with the aim of improving the accessibility of the content through an efficient and low-cost solution for all.
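The face-based magnification step can be sketched as a crop-and-upscale of a detected bounding box. The box coordinates and the integer nearest-neighbour upscaling below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def magnify_face(frame, box, factor=2):
    """Crop a detected face region and upscale it by integer nearest-neighbour
    repetition. `box` = (top, left, height, width) from a hypothetical detector."""
    t, l, h, w = box
    crop = frame[t:t + h, l:l + w]
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

# Hypothetical 100x100 grayscale frame with a 30x40 detected face region
frame = np.arange(100 * 100).reshape(100, 100)
enlarged = magnify_face(frame, (10, 20, 30, 40), factor=2)
print(enlarged.shape)  # (60, 80)
```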


1989 ◽  
Vol 23 (1) ◽  
pp. 67-72 ◽  
Author(s):  
John McGrath

Recent developments in molecular genetics are examined with particular reference to psychiatry. The new technologies available have allowed significant advances in the understanding of certain illnesses, such as familial Alzheimer's disease and Huntington's chorea, and will provide powerful tools to explore many other important psychiatric illnesses. The area of genetic counselling is already characterized by complex ethical issues. We can expect that as the new technologies raise the prospect of positive germ-line genetic engineering, these ethical issues will become more complex. It is important that psychiatrists prepare themselves for these future developments and take an active role in leading the debate.


10.6036/10007 ◽  
2021 ◽  
Vol 96 (5) ◽  
pp. 528-533
Author(s):  
XAVIER LARRIVA NOVO ◽  
MARIO VEGA BARBAS ◽  
VICTOR VILLAGRA ◽  
JULIO BERROCAL

Cybersecurity has stood out in recent years with the aim of protecting information systems. Different methods, techniques, and tools have been used to exploit the existing vulnerabilities in these systems. Therefore, it is essential to develop and improve new technologies, as well as intrusion detection systems that can detect possible threats. However, the use of these technologies requires highly qualified cybersecurity personnel to analyze the results and reduce the large number of false positives that these technologies present. This generates the need to research and develop new high-performance cybersecurity systems that allow efficient analysis and resolution of these results. This research presents the application of machine learning techniques to classify real traffic in order to identify possible attacks. The study was carried out using machine learning tools, applying deep learning algorithms such as the multi-layer perceptron and long short-term memory (LSTM). Additionally, this document presents a comparison between the results obtained by applying the aforementioned algorithms and non-deep-learning algorithms, such as random forest and decision tree. Finally, the results obtained show that the LSTM algorithm provides the best results in terms of precision and logarithmic loss.
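Logarithmic loss, the metric used above to rank the algorithms, heavily penalizes confident wrong predictions, so a well-calibrated classifier scores lower than a merely accurate one. A minimal sketch of the binary case (the labels and probabilities are illustrative, not the paper's data):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary logarithmic loss; lower is better. `y_prob` is the
    predicted probability of the positive (attack) class."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A confident, correct classifier scores lower (better) than a hesitant one
confident = log_loss([1, 0, 1], [0.9, 0.1, 0.8])
hesitant = log_loss([1, 0, 1], [0.6, 0.4, 0.6])
print(confident < hesitant)  # True
```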


Author(s):  
Luciana Ferreira Santos ◽  
Rosinalda Aurora de Melo Teles

Abstract
In this article, based on a state of the art study, we analyse the theme of geometric knowledge of teachers of the early years in mathematics education research carried out in Brazil over a 19-year interval, between 2000 and 2019. The reading of 31 studies at the master's and doctoral level points out that, among the theoretical contributions supporting the teacher's geometric knowledge, Shulman's (1986, 1987) and Tardif's (2002) stand out. Over the years, these theoretical models have become the main references for the analysis of teachers' knowledge. Regarding the objectives, most studies sought to analyse or identify how in-service or continuing education can influence teachers' mobilisation of knowledge. Although the object of analysis was practically the same and the studies used a qualitative approach, the methodological procedures were diverse, including case study, action research, and documentary analysis. The data-collection instruments included diagnostic tests; records produced by the participants; the researcher's field diary; audio and/or video recordings; and the production of didactic sequences, among others. There was a tendency to collect information through formative meetings, workshops, and mathematics laboratories, possibly so that the researchers could intervene in the development of teachers' geometric knowledge. The results of the studies analysed point to weaknesses in teachers' conceptual and practical knowledge of geometry. They also indicate that education processes enable changes in conceptual knowledge and educational practice through reflection on that practice and the construction of learning.
Keywords: Geometry, Teachers' knowledge, Mathematics education.


Author(s):  
Alex Dexter ◽  
Spencer A. Thomas ◽  
Rory T. Steven ◽  
Kenneth N. Robinson ◽  
Adam J. Taylor ◽  
...  

Abstract
High-dimensionality omics and hyperspectral imaging datasets present difficult challenges for feature extraction and data mining, due to huge numbers of features that cannot be examined simultaneously. The sample numbers and variables of these methods are constantly growing as new technologies are developed, and computational analysis needs to evolve to keep up with growing demand. Current state-of-the-art algorithms can handle some routine datasets but struggle once datasets grow above a certain size. We present an approach that trains deep neural networks to perform non-linear dimensionality reduction, in particular t-distributed stochastic neighbour embedding (t-SNE), to overcome prior limitations of these methods.
One-Sentence Summary: Analysis of prohibitively large datasets by combining deep learning via neural networks with non-linear dimensionality reduction.
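As a sketch of what such a network must learn to preserve, the symmetrised Gaussian input affinities used by t-SNE can be computed as follows. This uses a fixed bandwidth `sigma` for simplicity; real t-SNE calibrates a per-point bandwidth from a target perplexity, and the paper's training setup is not reproduced here:

```python
import numpy as np

def tsne_affinities(X, sigma=1.0):
    """Symmetrised Gaussian input similarities P used by t-SNE.
    Fixed-bandwidth simplification of the perplexity-calibrated original."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise sq. distances
    P = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)               # no self-similarity
    P /= P.sum(axis=1, keepdims=True)      # row-wise conditional p_{j|i}
    return (P + P.T) / (2 * len(X))        # symmetrise; entries sum to 1

rng = np.random.default_rng(0)
P = tsne_affinities(rng.normal(size=(5, 10)))
print(round(P.sum(), 6))  # affinity matrix sums to 1 by construction
```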

