Artificial Intelligence in Education

Seminar.net ◽  
2021 ◽  
Vol 17 (2) ◽  
Author(s):  
Xavier Giró Gràcia ◽  
Juana M. Sancho-Gil

Digital technology is constantly permeating and transforming all social systems, and education is no exception. In the last decade, the unstoppable development of Artificial Intelligence, based on machine learning algorithms and fuelled by Big Data, has given a new push to the hope of improving learning-based machines and providing educational systems with 'effective' solutions. Educators, educational researchers and policymakers generally lack the knowledge and expertise to understand the underlying logic of these new 'black boxes', and we do not have sufficient research-based evidence to understand the consequences that excessive use of screens has on students' development. This paper first discusses the notions behind what Big Data is and what it means in our current society; how data is the new currency that has driven the use of algorithms in all areas of our society, and specifically in the field of Artificial Intelligence; and the concept of 'black boxes' and their possible impact on education. It then discusses the underlying educational discourses, pointing out the need to analyse not only their contributions but also their possible negative effects. It finishes with considerations and a proposed agenda for further study of this phenomenon.

2021 ◽  
Vol 1 (1) ◽  
pp. 76-87
Author(s):  
Alexander Buhmann ◽  
Christian Fieseler

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects, as they may produce biases against users or reinforce social injustices. What marks them as a unique grand challenge, however, is not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential of, and obstacles to, managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion of the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organisational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.


Author(s):  
Fernando Enrique Lopez Martinez ◽  
Edward Rolando Núñez-Valdez

IoT, big data, and artificial intelligence are currently three of the most relevant and trending technologies for innovation and predictive analysis in healthcare. Many healthcare organizations are already working on developing their own home-centric data collection networks and intelligent big data analytics systems based on machine-learning principles. The benefit of using IoT, big data, and artificial intelligence for community and population health is better health outcomes for populations and communities. The new generation of machine-learning algorithms can use large standardized data sets generated in healthcare to improve the effectiveness of public health interventions. Much of these data come from sensors, devices, electronic health records (EHR), data generated by public health nurses, mobile data, social media, and the internet. This chapter shows a high-level implementation of a complete IoT, big data, and machine learning solution implemented in the city of Cartagena, Colombia, for hypertensive patients, using an eHealth sensor and Amazon Web Services components.
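As an illustration of the kind of rule-based triage such a vitals pipeline might apply before machine-learning models are trained, the sketch below classifies simulated eHealth sensor readings against commonly published blood-pressure categories. The sensor stream, field names, and thresholds are illustrative assumptions, not details taken from the chapter.

```python
# Hypothetical sketch: triaging simulated blood-pressure readings from an
# eHealth sensor stream before they feed into downstream ML analytics.
# Thresholds follow widely published hypertension categories (assumed here).

def classify_bp(systolic, diastolic):
    """Map a blood-pressure reading (mmHg) to a coarse category."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertension-stage-2"
    if systolic >= 130 or diastolic >= 80:
        return "hypertension-stage-1"
    if systolic >= 120:
        return "elevated"
    return "normal"

# Simulated readings as they might arrive from IoT sensors (invented data).
readings = [
    {"patient": "p1", "systolic": 118, "diastolic": 76},
    {"patient": "p2", "systolic": 134, "diastolic": 82},
    {"patient": "p3", "systolic": 150, "diastolic": 95},
]

flags = {r["patient"]: classify_bp(r["systolic"], r["diastolic"])
         for r in readings}
```

In a real deployment the flagged readings, rather than raw streams, would typically be forwarded to cloud analytics components for model training and alerting.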


Author(s):  
Arul Murugan R. ◽  
Sathiyamoorthi V.

Machine learning (ML) is one of the most exciting sub-fields of artificial intelligence (AI). The term is generally defined as the ability of a system to learn without being explicitly programmed. In recent years, machine learning has become one of the thrust areas of research across various business verticals. Technical advancements in the field of big data have provided easy access to large volumes of diversified data. This massive amount of data can be processed at high speed in a reasonable amount of time with the help of emerging hardware capabilities. Hence, machine learning algorithms have been the most effective at leveraging big data to provide near real-time solutions even for complex business problems. This chapter aims to give a solid introduction to widely adopted machine learning techniques and their applications, categorized into supervised, unsupervised, and reinforcement learning, and will serve as a simplified guide for aspiring data science and machine learning enthusiasts.
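A minimal, dependency-free sketch of the supervised/unsupervised distinction drawn above (the data and implementations are illustrative, not taken from the chapter): a nearest-centroid classifier learns from labelled points, while a one-dimensional k-means groups unlabelled values.

```python
# Illustrative sketch of two of the ML families categorized above:
# supervised learning (labelled data) vs. unsupervised learning (no labels).

def nearest_centroid_fit(points, labels):
    """Supervised: compute one centroid per class from labelled 1-D points."""
    groups = {}
    for x, y in zip(points, labels):
        groups.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in groups.items()}

def nearest_centroid_predict(centroids, x):
    """Predict the class whose centroid is closest to x."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

def kmeans_1d(values, k=2, iters=20):
    """Unsupervised: cluster unlabelled 1-D values around k means."""
    means = sorted(values)[:k]  # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(means[i] - v))].append(v)
        means = [sum(c) / len(c) if c else means[i]
                 for i, c in enumerate(clusters)]
    return sorted(means)

model = nearest_centroid_fit([1.0, 1.2, 4.8, 5.1], ["low", "low", "high", "high"])
pred = nearest_centroid_predict(model, 4.5)              # supervised prediction
clusters = kmeans_1d([1.0, 1.2, 1.1, 9.8, 10.2, 10.0])   # unsupervised grouping
```

Reinforcement learning, the third family, needs an environment and reward signal and does not reduce to a few lines as naturally, so it is omitted from this sketch.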


Author(s):  
Alja Videtič Paska ◽  
Katarina Kouter

In psychiatry, compared to other medical fields, the need for biological markers that would complement the current clinical interview, enable more objective and faster clinical diagnosis, and allow accurate monitoring of treatment response and remission, is pressing. Current technological development enables the analysis of various biological markers at high-throughput scale and reasonable cost, and 'omic' studies are therefore entering psychiatry research. However, big data demands a whole new set of skills in data processing before clinically useful information can be extracted. So far, the classical approach to data analysis has not contributed much to the identification of biomarkers in psychiatry, but the extensive amounts of data might be taken to a higher level if artificial intelligence, in the form of machine learning algorithms, were applied. Few studies on machine learning in psychiatry have been published, but this handful of studies already shows that the potential exists to build a screening portfolio of biomarkers for different psychopathologies, including suicide.
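As a hedged sketch of what machine-learning-style marker screening might look like on omic data (synthetic numbers and invented marker names; the review itself reports no such analysis), the snippet below ranks candidate markers by how well they separate two groups, the kind of filtering step that typically precedes model building:

```python
# Hypothetical sketch: ranking candidate biomarkers by a crude separation
# score (difference of group means over the pooled value range), a simple
# stand-in for the feature-screening step in ML biomarker pipelines.

def separation_score(cases, controls):
    """Crude effect-size-like score; larger means better group separation."""
    mean_diff = abs(sum(cases) / len(cases) - sum(controls) / len(controls))
    spread = (max(cases + controls) - min(cases + controls)) or 1.0
    return mean_diff / spread

# Synthetic expression levels for three invented markers:
# (values in cases, values in controls).
markers = {
    "marker_A": ([5.1, 5.3, 5.2], [5.0, 5.2, 5.1]),   # little separation
    "marker_B": ([8.9, 9.2, 9.0], [4.1, 4.3, 4.0]),   # strong separation
    "marker_C": ([6.0, 6.4, 6.1], [5.5, 5.8, 5.6]),   # moderate separation
}

ranked = sorted(markers, key=lambda m: separation_score(*markers[m]),
                reverse=True)
```

A real omics screen would use proper statistics and cross-validated classifiers rather than this toy score, but the ranking step it illustrates is the same.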


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Ashwin A. Phatak ◽  
Franz-Georg Wieland ◽  
Kartik Vempala ◽  
Frederik Volkmar ◽  
Daniel Memmert

Abstract: With the rising amount of data in the sports and health sectors, a plethora of applications using big data mining have become possible. Multiple frameworks have been proposed to mine, store, preprocess, and analyze physiological vitals data using artificial intelligence and machine learning algorithms. Comparatively less research has been done on collecting potentially high-volume, high-quality 'big data' in an organized, time-synchronized, and holistic manner to solve similar problems in multiple fields. Although a large number of data collection devices exist in the form of sensors, they are either highly specialized, univariate, and fragmented in nature, or exist only in a lab setting. The current study proposes the artificial intelligence-based body sensor network framework (AIBSNF), a framework for the strategic use of body sensor networks (BSN) that combines real-time location systems (RTLS) and wearable biosensors to collect multivariate, low-noise, and high-fidelity data. This facilitates the gathering of time-synchronized location and physiological vitals data, which allows artificial intelligence and machine learning (AI/ML)-based time-series analysis. The study gives a brief overview of wearable sensor technology and RTLS, and provides use cases of AI/ML algorithms in the field of sensor fusion. It also elaborates on sample scenarios using a specific sensor network consisting of pressure sensors (insoles), accelerometers, gyroscopes, ECG, EMG, and RTLS position detectors for particular applications in health care and sports. The AIBSNF may provide a solid blueprint for research and development, forming a smooth end-to-end pipeline from data collection using BSN and RTLS to final-stage analytics based on AI/ML algorithms.
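The time-synchronization step such a framework depends on can be sketched with a nearest-timestamp join of two sensor streams. The streams, field names, and values below are invented for illustration; a production BSN/RTLS pipeline would handle clock drift, dropouts, and resampling as well.

```python
# Illustrative sketch: aligning an RTLS position stream with a vitals
# stream by nearest timestamp before joint AI/ML time-series analysis.

from bisect import bisect_left

def align_nearest(base, other, key="t"):
    """For each sample in `base`, attach the `other` sample nearest in time.

    Assumes `other` is sorted ascending by the timestamp field `key`.
    """
    times = [s[key] for s in other]
    fused = []
    for sample in base:
        i = bisect_left(times, sample[key])
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other)]
        j = min(candidates, key=lambda j: abs(times[j] - sample[key]))
        fused.append({**sample, **other[j]})  # other's fields win on clash
    return fused

# Invented streams: RTLS positions and ECG-derived heart rate (seconds).
rtls = [{"t": 0.00, "x": 1.0}, {"t": 0.50, "x": 1.4}, {"t": 1.00, "x": 2.1}]
ecg = [{"t": 0.02, "hr": 61}, {"t": 0.48, "hr": 63}, {"t": 0.97, "hr": 66}]

fused = align_nearest(rtls, ecg)
```

Each fused record now carries both a location and a vitals value, which is the precondition for the joint time-series analysis the framework targets.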


Author(s):  
Amit Kumar Tyagi ◽  
Poonam Chahal

With recent developments in technology and the integration of millions of internet of things devices, a lot of data is being generated every day (known as Big Data). These data are needed to improve the growth of organizations and of applications such as e-healthcare. We are also entering an era of the smart world, in which robotics will take a place in most applications to solve real-world problems. Implementing robotics in applications such as medicine and automobiles is a goal of computer vision. Computer vision (CV) is realized through several components: artificial intelligence (AI), machine learning (ML), and deep learning (DL). Here, machine learning and deep learning techniques/algorithms are used to analyze Big Data. Today, organizations such as Google and Facebook use ML techniques to search for particular data or to recommend posts. Hence, the requirements of computer vision are fulfilled through these three terms: AI, ML, and DL.


Author(s):  
Dharmapriya M S

Abstract: The concept of machine learning emerged in the 1950s as a subfield of artificial intelligence, but there was little significant development or research on it for decades; the field has grown and expanded mainly since the 1990s. It will continue to develop in the future because of the difficulty of analysing and processing data as the number of records and documents increases. As data grows, machine learning focuses on finding the best model for new data while taking all previous data into account, so machine learning research will continue in step with this growth. This research focuses on the history of machine learning, its methods and applications, and the research that has been conducted on the topic. Our study aims to give researchers a deeper understanding of machine learning, an area of research that is becoming much more popular today, and of its applications. Keywords: Machine Learning, Machine Learning Algorithms, Artificial Intelligence, Big Data.


Author(s):  
Fati Tahiru ◽  
Samuel Agbesi

The key accelerating factor in the growth of AI is the availability of historical datasets, and this has influenced the adoption of artificial intelligence and machine learning in education. This is possible because data can be accessed through various learning management systems (LMS) and through the increased use of the internet. Over the years, research on the use of AI and ML in education has grown appreciably, and studies have also demonstrated its success. Machine learning algorithms have successfully been implemented in institutions for predicting students' performance, recommending courses, and counseling students, among other uses. This chapter discusses the use of AI- and ML-assisted systems in education, the importance of AI in education, and the future of AI in education, to provide educators with information on the AI transformation in education.
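The student-performance use case mentioned above can be sketched with a tiny logistic-regression model trained by gradient descent. The LMS features (logins per week, assignment completion rate), the data, and the pass/fail labels are invented for illustration and are not drawn from the chapter.

```python
# Hypothetical sketch of the "predicting students' performance" use case:
# logistic regression fitted by plain stochastic gradient descent on
# invented LMS features [logins/week, assignment completion rate].

from math import exp

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid float overflow
    return 1.0 / (1.0 + exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented LMS records: [logins/week, completion rate]; 1 = passed.
X = [[1, 0.2], [2, 0.3], [8, 0.9], [7, 0.8], [3, 0.4], [9, 0.95]]
y = [0, 0, 1, 1, 0, 1]

w, b = train(X, y)

def predict(x):
    """Return 1 if the model rates the student as likely to pass."""
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)
```

A real deployment would draw these features from LMS logs, hold out data for evaluation, and audit the model for bias before using it to counsel students.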

