Enabling Intelligence through Deep Learning using IoT in a Classroom Environment based on a multimodal approach

Author(s):  
Lakshaga Jyothi M, et al.

Smart classrooms are becoming very popular nowadays. Recent technologies such as the Internet of Things (IoT) are rapidly equipping every corner of a diverse set of fields, and every educational institution has set benchmarks for adopting these technologies in daily practice. Due to various constraints and setbacks, however, these IoT technological embodiments in the educational sector are still at a premature stage. The success of any technological evolution rests on its full-fledged implementation to fit society's broader concerns. In recent years, deep learning has achieved breakthroughs, outperforming traditional machine learning models on many tasks, especially computer vision and natural language processing problems. The fusion of computer vision and natural language processing is an astonishing new field that has established itself in recent years, yet combining such mixtures with IoT platforms is a challenging task that has not reached the eyes of many researchers across the globe. Many past researchers have shown interest in designing intelligent classrooms in different contexts. To fill this gap, we propose an approach, or conceptual model, through which deep learning architectures fused into IoT systems yield an intelligent classroom via such hybrid systems. Apart from this, we also discuss the major challenges, limitations, and opportunities that can arise with deep-learning-based IoT solutions, and we summarize the available applications of these technologies that suit our solution. Thus, this paper can be taken as a kickstart for our research, offering a glimpse of the available literature supporting our proposed approach.

Author(s):  
Gowhar Mohiuddin Dar ◽  
Ashok Sharma ◽  
Parveen Singh

The chapter explores the implications of deep learning in the medical sciences, focusing on deep learning as it relates to natural language processing, computer vision, reinforcement learning, big data, and blockchain, their influence on several areas of medicine, and the construction of end-to-end systems with the help of these computational techniques. The discussion of computer vision is mainly concerned with medical imaging, with natural language processing further applied to spheres such as electronic health record data. The application of deep learning to genetic mapping and DNA sequencing, termed genomics, and the implications of reinforcement learning for robot-assisted surgery are also overviewed.


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5526
Author(s):  
Andrew A. Gumbs ◽  
Isabella Frigerio ◽  
Gaya Spolverato ◽  
Roland Croner ◽  
Alfredo Illanes ◽  
...  

Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV) and natural language processing (NLP). All of these facets of AI will be fundamental to the development of more autonomous actions in surgery; unfortunately, only a limited number of surgeons have or seek expertise in this rapidly evolving field. As opposed to AI in medicine, AI surgery (AIS) involves autonomous movements. Fortuitously, as the field of robotics in surgery has improved, more surgeons are becoming interested in technology and in the potential of autonomous actions in procedures such as interventional radiology, endoscopy and surgery. The lack of haptics, or the sensation of touch, has hindered the wider adoption of robotics by many surgeons; however, now that the true potential of robotics can be comprehended, the embracing of AI by the surgical community is more important than ever before. Although current complete surgical systems are mainly examples of tele-manipulation, haptics is perhaps not the most important aspect of reaching more autonomously functioning robots. If the goal is for robots ultimately to become more and more independent, research should perhaps focus not on haptics as it is perceived by humans, but on haptics as it is perceived by robots/computers. This article discusses aspects of ML, DL, CV and NLP as they pertain to the modern practice of surgery, with a focus on current AI issues and the advances that will enable more autonomous actions in surgery. Ultimately, a paradigm shift may need to occur in the surgical community, as more surgeons with expertise in AI may be needed to fully unlock the potential of AIS in a safe, efficacious and timely manner.


Author(s):  
Sarojini Yarramsetti ◽  
Anvar Shathik J ◽  
Renisha. P.S.

In this digital world, experience sharing, knowledge exploration, thought posting and related social activities are common to every individual, and social media/networks such as Facebook and Twitter play a vital role in such activities. In general, many schemes and logics exist for extracting sentiment features from social networks, and many researchers have worked in this domain over the last few years. However, that research has largely been narrowed to estimating the opinions and sentiments of the tweets and posts users raise on a social network or other related web interfacing medium. Many social network platforms also allow users to post voice tweets and voice messages, and such voice messages may contain harmful content as well as normal, important content. In this paper, a new methodology called the Intensive Deep Learning based Voice Estimation Principle (IDLVEP) is designed to identify voice message content and extract its features based on Natural Language Processing (NLP) logic. The association of deep learning and NLP provides an efficient approach to building a powerful data processing model for identifying sentiment features in a social networking medium; this hybrid logic supports sentiment feature estimation for both text-based and voice-based tweets. The NLP principles assist the proposed IDLVEP approach in extracting the voice content from an input message to produce raw text; based on that text, the deep learning principles classify messages as harmful or normal tweets. Tweets raised by a user are first subdivided into two categories, voice tweets and text tweets. Voice tweets are transcribed by the NLP component, after which both the transcribed voice tweets and the text tweets are classified by the deep learning component. The social network has two different faces: it supports development, but it equally provides a way for harmful material to be accessed. The IDLVEP approach therefore identifies harmful content in user tweets and removes it in an intelligent manner using the proposed classification strategies. This paper concentrates on identifying sentiment features from user tweets and providing a harm-free social network environment to society.
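The two-stage routing described in this abstract (transcribe voice tweets to text, then classify everything with one model) can be sketched as follows. All names here are hypothetical illustrations, not the authors' code: the transcription stub and the keyword lexicon stand in for a real speech-to-text model and the paper's deep learning classifier.

```python
# Minimal sketch of an IDLVEP-style pipeline (hypothetical names).
# Stage 1: route each tweet -- voice tweets are transcribed to raw text first.
# Stage 2: a single classifier labels the resulting text "harmful" or "normal".

HARMFUL_TERMS = {"attack", "threat", "abuse"}  # toy lexicon, illustration only

def transcribe_voice(audio_bytes: bytes) -> str:
    """Stand-in for the NLP speech-to-text step; a real system would run an ASR model."""
    return audio_bytes.decode("utf-8")  # pretend the 'audio' already carries its words

def classify_text(text: str) -> str:
    """Toy stand-in for the deep learning classifier: flag known harmful terms."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return "harmful" if tokens & HARMFUL_TERMS else "normal"

def process_tweet(tweet) -> str:
    kind, payload = tweet  # ("voice", bytes) or ("text", str)
    text = transcribe_voice(payload) if kind == "voice" else payload
    return classify_text(text)

tweets = [("text", "Lovely weather today"), ("voice", b"This is a threat!")]
print([process_tweet(t) for t in tweets])  # -> ['normal', 'harmful']
```

The design point the abstract makes survives even in this toy form: once voice is reduced to text, one classification model serves both tweet categories.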


Author(s):  
Santosh Kumar Mishra ◽  
Rijul Dhir ◽  
Sriparna Saha ◽  
Pushpak Bhattacharyya

Image captioning is the process of generating a textual description of an image, aiming to describe the salient parts of the given image. It is an important problem because it involves both computer vision and natural language processing: computer vision for understanding images, and natural language processing for language modeling. A great deal of work has been done on image captioning for the English language. In this article, we develop a model for image captioning in the Hindi language. Hindi is the official language of India and the fourth most spoken language in the world, spoken in India and South Asia. To the best of our knowledge, this is the first attempt to generate image captions in Hindi. A dataset was manually created by translating the well-known MSCOCO dataset from English to Hindi. Finally, different types of attention-based architectures are developed for image captioning in Hindi; these attention mechanisms are new for the Hindi language, as they have never previously been used for it. The results of the proposed model are compared with several baselines in terms of BLEU scores, and they show that our model performs better than the others. Manual evaluation of the obtained captions in terms of adequacy and fluency also reveals the effectiveness of our proposed approach. Availability of resources: the code for the article is available at https://github.com/santosh1821cs03/Image_Captioning_Hindi_Language ; the dataset will be made available at http://www.iitp.ac.in/∼ai-nlp-ml/resources.html .

