Case Studies in Amalgamation of Deep Learning and Big Data

Author(s):  
Balajee Jeyakumar ◽  
M.A. Saleem Durai ◽  
Daphne Lopez

Deep learning is now one of the most popular research domains in machine learning and pattern recognition. It has been widely successful across a far-reaching range of applications such as speech recognition, computer vision, natural language processing, and reinforcement learning. With the sheer amount of data accessible nowadays, big data brings opportunities and transformative potential for several sectors; on the other hand, it also poses unprecedented challenges in connecting data and information. As data volumes keep growing, deep learning is poised to play a vital role in big data predictive analytics solutions. In this paper, we provide a brief outline of deep learning and highlight recent research efforts and challenges in the fields of science, medicine, and water resource systems.


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Kazi Nabiul Alam ◽  
Md Shakib Khan ◽  
Abdur Rab Dhruba ◽  
Mohammad Monirujjaman Khan ◽  
Jehad F. Al-Amri ◽  
...  

The COVID-19 pandemic has had a devastating effect on many people, creating severe anxiety, fear, and complicated feelings or emotions. After the initiation of vaccinations against coronavirus, people's feelings have become more diverse and complex. Our aim is to understand and unravel their sentiments in this research using deep learning techniques. Social media is currently the best way to express feelings and emotions, and with the help of Twitter, one can have a better idea of what is trending and going on in people's minds. Our motivation for this research was to understand the diverse sentiments of people regarding the vaccination process. In this research, the timeline of the collected tweets was from December 21 to July 21. The tweets contained information about the most common vaccines recently available across the world. The sentiments of people regarding vaccines of all sorts were assessed using the natural language processing (NLP) tool Valence Aware Dictionary for sEntiment Reasoner (VADER). Grouping the polarities of the obtained sentiments into three classes (positive, negative, and neutral) helped us visualize the overall scenario; our findings included 33.96% positive, 17.55% negative, and 48.49% neutral responses. In addition, we included an analysis of the timeline of the tweets in this research, as sentiments fluctuated over time. A recurrent neural network- (RNN-) oriented architecture, including long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM), was used to assess the performance of the predictive models, with LSTM achieving an accuracy of 90.59% and Bi-LSTM achieving 90.83%. Other performance metrics such as precision, F1-score, and a confusion matrix were also used to validate our models and findings more effectively. This study improves understanding of the public's opinion on COVID-19 vaccines and supports the aim of eradicating coronavirus from the world.
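The three-way polarity grouping described above can be sketched as follows. The thresholds (compound ≥ 0.05 positive, ≤ −0.05 negative, otherwise neutral) follow VADER's documented defaults; the tiny lexicon and averaging scheme are purely illustrative stand-ins for VADER's real lexicon and scoring, not the paper's pipeline.

```python
# Toy valence lexicon: illustrative values only, not VADER's lexicon.
TOY_LEXICON = {"good": 0.6, "great": 0.8, "safe": 0.4,
               "bad": -0.6, "fear": -0.7, "side-effects": -0.3}

def toy_compound(text):
    """Average the valence of known words; 0.0 if none match."""
    scores = [TOY_LEXICON[w] for w in text.lower().split() if w in TOY_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def polarity_group(compound):
    """VADER's default thresholds for the three polarity classes."""
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

tweets = ["the vaccine is good and safe",
          "fear of bad side-effects",
          "got my second dose today"]
print([polarity_group(toy_compound(t)) for t in tweets])
# ['positive', 'negative', 'neutral']
```

Counting how many tweets fall into each class then yields the percentage breakdown the study reports.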


2021 ◽  
Author(s):  
Khloud Al Jallad

Abstract New attacks are used by attackers every day, but many of them are not detected by intrusion detection systems, as most IDS ignore raw packet information and only consider basic statistical features extracted from PCAP files. Using networking programs to extract fixed statistical features from packets is useful but may not be enough to detect today's challenges. We think it is time to utilize big data and deep learning for automatic, dynamic feature extraction from packets, and to take inspiration from pre-trained deep learning models in computer vision and natural language processing, so that security deep learning solutions will have their own models pre-trained on big datasets for use in future research. In this paper, we propose a new approach for embedding packets based on character-level embeddings, inspired by the success of FastText on text data. We call this approach FastPacket. Results are measured on subsets of the CIC-IDS-2017 dataset, but we expect promising results from big data pre-trained models. We suggest building a pre-trained FastPacket model on the MAWI big dataset and making it available to the community, similar to FastText, in order to outperform currently used NIDS and start a new era of packet-level NIDS that can better detect complex attacks.
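The character-level embedding idea can be sketched as a FastText-style bag of hashed n-grams over a packet's byte content. This is a hedged illustration of the general technique, not the FastPacket implementation: the hex representation, the n-gram sizes, and the bucket count are all illustrative choices.

```python
import zlib

def packet_ngrams(payload: bytes, n_sizes=(2, 3)):
    """Yield character n-grams over the packet's hex representation."""
    hexstr = payload.hex()
    for n in n_sizes:
        for i in range(len(hexstr) - n + 1):
            yield hexstr[i:i + n]

def embed_packet(payload: bytes, buckets: int = 64):
    """Hash each n-gram into one of `buckets` counters, giving a
    fixed-size bag-of-n-grams vector, in the spirit of FastText's
    hashed subword embeddings."""
    vec = [0] * buckets
    for gram in packet_ngrams(payload):
        vec[zlib.crc32(gram.encode()) % buckets] += 1
    return vec

# A 6-byte payload has a 12-char hex string: 11 bigrams + 10 trigrams.
v = embed_packet(b"\x45\x00\x00\x3c\xab\xcd")
print(len(v), sum(v))  # 64 21
```

In a learned variant, each bucket would index a trainable embedding vector rather than a raw count, so similar byte patterns across packets share parameters.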


Author(s):  
Renuka Mahajan

In today's world everything is connected and is either consuming or generating data. The world is changing so fast that even one-year-old data may not be useful; hence, big data analysis plays a vital role in decision making for the higher management of any organization. Data warehousing helps in gathering and storing verifiable information as a single entity. Data can be of different types, like speech, text, etc., and can be structured or unstructured; each data source is characterized in terms of volume and variety. This chapter gives an overview of how to utilize the learner interaction data from a particular website and how patterns can be captured by analyzing learner interaction data with big data analytic tools. Big data has risen in the field of education and brings many challenges, such as storage, combining, analysis, and scalability. The chapter covers tools and techniques that can be used. The results from this study will have implications for new learners to the e-learning website, website designers, and academicians.


Author(s):  
Neha Warikoo ◽  
Yung-Chun Chang ◽  
Wen-Lian Hsu

Abstract Motivation Natural language processing techniques are constantly being advanced to accommodate the influx of data as well as to provide exhaustive and structured knowledge dissemination. Within the biomedical domain, relation detection between bio-entities, known as the Bio-Entity Relation Extraction (BRE) task, has a critical function in knowledge structuring. Although recent advances in deep learning-based biomedical domain embedding have improved BRE predictive analytics, these works are often task selective or use external knowledge-based pre-/post-processing. In addition, deep learning-based models do not account for local syntactic contexts, which have improved data representation in many kernel classifier-based models. In this study, we propose a universal BRE model, i.e. LBERT, which is a Lexically aware Transformer-based Bidirectional Encoder Representation model, and which explores both local and global context representations for sentence-level classification tasks. Results This article presents one of the most exhaustive BRE studies ever conducted over five different bio-entity relation types. Our model outperforms state-of-the-art deep learning models in protein–protein interaction (PPI), drug–drug interaction and protein–bio-entity relation classification tasks by 0.02%, 11.2% and 41.4%, respectively. LBERT representations show a statistically significant improvement over BioBERT in detecting true bio-entity relations for large corpora like PPI. Our ablation studies clearly indicate the contribution of the lexical features and distance-adjusted attention in improving prediction performance by learning additional local semantic context along with the bi-directionally learned global context. Availability and implementation GitHub: https://github.com/warikoone/LBERT. Supplementary information Supplementary data are available at Bioinformatics online.


News is a routine in everyone's life; it helps in enhancing one's knowledge of what happens around the world. Fake news is fictional information made up with the intention to delude, so the knowledge acquired from it becomes of no use. As fake news spreads extensively, it has a negative impact on society, and so fake news detection has become an emerging research area. The paper presents a solution to fake news detection using deep learning and natural language processing (NLP). A deep neural network is trained on the dataset, which needs to be well formatted before being given to the network; this is made possible using NLP techniques, and the model then predicts whether a news item is fake or not.
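The formatting step the abstract refers to, turning raw headlines into fixed-length integer sequences a neural network can consume, can be sketched as below. The special token ids, vocabulary scheme, and padding length are illustrative assumptions, not the paper's actual preprocessing.

```python
import re

def tokenize(text):
    """Lowercase and split on non-letter characters."""
    return re.findall(r"[a-z']+", text.lower())

def build_vocab(corpus, pad=0, unk=1):
    """Assign an integer id to every token seen in the training corpus."""
    vocab = {"<pad>": pad, "<unk>": unk}
    for doc in corpus:
        for tok in tokenize(doc):
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text, vocab, max_len=8):
    """Map tokens to ids, then pad or truncate to a fixed length."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokenize(text)]
    return (ids + [vocab["<pad>"]] * max_len)[:max_len]

corpus = ["Scientists confirm the findings", "Shocking miracle cure revealed"]
vocab = build_vocab(corpus)
print(encode("Shocking cure confirmed", vocab))
# [6, 8, 1, 0, 0, 0, 0, 0]
```

Each fixed-length sequence can then feed an embedding layer at the input of the deep network, with the fake/real label as the training target.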


Author(s):  
Shaila S. G. ◽  
Sunanda Rajkumari ◽  
Vadivel Ayyasamy

Deep learning is playing a vital role, with great success, in various applications such as digital image processing, human-computer interaction, computer vision and natural language processing, robotics, biological applications, etc. Unlike traditional machine learning approaches, deep learning has an effective ability to learn and makes better use of the data set for feature extraction. Because of its iterative learning ability, deep learning has become more popular in present-day research works.


With the evolution of artificial intelligence into deep learning, the age of perspicacious machines has begun, machines that can even mimic a human. A conversational software agent is one of the best-suited examples of such intuitive machines; commonly known as a chatbot, it is actuated with natural language processing. The paper lists some existing popular chatbots along with their details, technical specifications, and functionalities. Research shows that most customers have experienced poor service. Also, generating meaningful and instructive feedback remains a demanding and exigent task, as chatbot implementations rely mostly on templates and hand-written rules. Current chatbot models fall short in generating the required responses and thus compromise conversation quality, a gap that deep neural networks can fill. Some of the deep neural networks utilized for this so far are stacked auto-encoders, sparse auto-encoders, predictive sparse auto-encoders, and denoising auto-encoders. But these DNNs are unable to handle big data involving large amounts of heterogeneous data, while the tensor auto-encoder, which overcomes this drawback, is time-consuming. This paper proposes a chatbot that can handle big data in a manageable time.
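The template-and-rule approach the paper identifies as the limitation of current chatbots can be sketched minimally: every response comes from a hand-written pattern, so anything outside the rule set falls through to a canned fallback, which is precisely what generative deep models aim to overcome. The patterns and replies below are illustrative, not from any cited system.

```python
import re

# Hand-written rules: (pattern, canned response) pairs.
RULES = [
    (re.compile(r"\b(hi|hello)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\border status\b", re.I), "Please share your order id."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
]

def reply(message, fallback="Sorry, I did not understand that."):
    """Return the first matching template's response, else the fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return fallback

print(reply("hello there"))      # Hello! How can I help you?
print(reply("train a chatbot"))  # Sorry, I did not understand that.
```

A generative model replaces the fixed `RULES` table with a learned mapping from conversation history to a response, at the cost of the training-data and scalability issues the abstract discusses.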


2020 ◽  
Vol 19 (01) ◽  
pp. A02
Author(s):  
Gernot Rieder ◽  
Thomas Voelker

As the digital revolution continues and our lives become increasingly governed by smart technologies, there is a rising need for reflection and critical debate about where we are, where we are headed, and where we want to be. Against this background, the paper suggests that one way to foster such discussion is by engaging with the world of fiction, with imaginative stories that explore the spaces, places, and politics of alternative realities. Hence, after a concise discussion of the concept of speculative fiction, we introduce the notion of datafictions as an umbrella term for speculative stories that deal with the datafication of society in both imaginative and imaginable ways. We then outline and briefly discuss fifteen datafictions subdivided into five main categories: surveillance; social sorting; prediction; advertising and corporate power; hubris, breakdown, and the end of Big Data. In a concluding section, we argue for the increased use of speculative fiction in education, but also as a tool to examine how specific technologies are culturally imagined and what kind of futures are considered plausible given current implementations and trajectories.


2022 ◽  
Vol 31 (1) ◽  
pp. 113-126
Author(s):  
Jia Guo

Abstract Emotional recognition has arisen as an essential field of study that can expose a variety of valuable inputs. Emotion can be articulated in several observable ways, such as speech, facial expressions, written text, and gestures. Emotion recognition in a text document is fundamentally a content-based classification issue, including notions from natural language processing (NLP) and deep learning fields. Hence, in this study, deep learning assisted semantic text analysis (DLSTA) has been proposed for human emotion detection using big data. Emotion detection from textual sources can be done utilizing notions of natural language processing. Word embeddings are extensively utilized for several NLP tasks, like machine translation, sentiment analysis, and question answering. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The numerical outcomes demonstrate that the suggested method achieves a markedly superior human emotion detection rate of 97.22% and a classification accuracy rate of 98.02% compared with different state-of-the-art methods, and can be enhanced by other emotional word embeddings.
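The embedding-based emotion detection the abstract outlines can be sketched as follows: average a sentence's word vectors, then pick the emotion whose anchor vector is closest by cosine similarity. The 3-d vectors, anchor values, and emotion set are toy assumptions standing in for trained embeddings, not the DLSTA method itself.

```python
import math

# Toy 3-d word embeddings (illustrative values, not trained vectors).
EMBED = {"happy": [0.9, 0.1, 0.0], "joy": [0.8, 0.2, 0.1],
         "sad":   [0.1, 0.9, 0.1], "cry": [0.2, 0.8, 0.2],
         "angry": [0.1, 0.2, 0.9]}
ANCHORS = {"joy": [0.85, 0.15, 0.05], "sadness": [0.15, 0.85, 0.15],
           "anger": [0.1, 0.2, 0.9]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def sentence_vec(text, dims=3):
    """Average the embeddings of the known words in the sentence."""
    vecs = [EMBED[w] for w in text.lower().split() if w in EMBED]
    if not vecs:
        return [0.0] * dims
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dims)]

def detect_emotion(text):
    v = sentence_vec(text)
    return max(ANCHORS, key=lambda e: cosine(v, ANCHORS[e]))

print(detect_emotion("so happy joy today"))  # joy
```

In the learning-based setting, the hand-picked anchors would be replaced by a classifier trained on labeled text, with the embeddings supplying the semantic features the abstract credits for the performance gain.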

