Deep Learning for the Russian Language

Author(s):  
Ekaterina Artemova

Abstract
Deep learning is a term used to describe artificial intelligence (AI) technologies. AI deals with how computers can be used to solve complex problems in the same way that humans do. Such technologies as computer vision (CV) and natural language processing (NLP) are distinguished as the largest AI areas. To imitate human vision and the ability to express meaning and feelings through language, deep learning exploits artificial neural networks that are trained on real-life evidence. While most vision-related tasks are solved using common methods nearly irrespective of target domains, NLP methods strongly depend on the properties of a given language. Linguistic diversity complicates deep learning for NLP. This chapter focuses on deep learning applications to processing the Russian language.

With the evolution of artificial intelligence into deep learning, an age of perceptive machines has begun in which software can even mimic a human. A conversational software agent, commonly known as a chatbot and driven by natural language processing, is one of the best examples of such intuitive machines. The paper surveys some existing popular chatbots along with their details, technical specifications, and functionalities. Research shows that many customers have experienced poor service from them. Generating meaningful and instructive feedback also remains a demanding task, since most deployed chatbots rely on templates and hand-written rules. Current chatbot models fall short of generating the required responses and thus compromise the quality of conversation; involving deep learning in these models can overcome this shortcoming with deep neural networks. Deep neural networks used for this purpose so far include stacked auto-encoders, sparse auto-encoders, and predictive sparse and denoising auto-encoders. These DNNs, however, are unable to handle big data involving large amounts of heterogeneous data, while the tensor auto-encoder, which overcomes this drawback, is time-consuming. This paper proposes a chatbot that handles big data in a manageable time.
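The abstract above names denoising auto-encoders among the networks applied to this problem. A minimal NumPy sketch of the denoising-auto-encoder idea (corrupt the input, train to reconstruct the clean version) might look as follows; the single hidden layer, the toy bag-of-words data, and all sizes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "clean" data: 200 binary bag-of-words vectors over a 30-word vocabulary.
X = (rng.random((200, 30)) < 0.2).astype(float)

# Corrupt inputs by randomly zeroing 30% of the entries (masking noise).
mask = rng.random(X.shape) > 0.3
X_noisy = X * mask

n_in, n_hid = X.shape[1], 10
W1 = rng.normal(0, 0.1, (n_in, n_hid))   # encoder weights
b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in))   # decoder weights
b2 = np.zeros(n_in)

lr = 0.5
losses = []
for epoch in range(200):
    h = sigmoid(X_noisy @ W1 + b1)        # encode the corrupted input
    X_hat = sigmoid(h @ W2 + b2)          # decode back to input space
    loss = np.mean((X_hat - X) ** 2)      # reconstruct the CLEAN input
    losses.append(loss)

    # Backpropagation of the mean-squared-error loss.
    d_out = 2 * (X_hat - X) / X.size * X_hat * (1 - X_hat)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X_noisy.T @ d_hid, d_hid.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The key point is that the loss compares the reconstruction against the clean input even though the network only ever sees the corrupted one, which forces the hidden layer to learn robust features rather than the identity map.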


2022, Vol 12 (1), pp. 491
Author(s):  
Alexander Sboev, Sanna Sboeva, Ivan Moloshnikov, Artem Gryaznov, Roman Rybka, ...

The paper presents the full-size Russian corpus of Internet users’ reviews on medicines with complex named entity recognition (NER) labeling of pharmaceutically relevant entities. We evaluate the accuracy levels reached on this corpus by a set of advanced deep learning neural networks for extracting mentions of these entities. The corpus markup includes mentions of the following entities: medication (33,005 mentions), adverse drug reaction (1778), disease (17,403), and note (4490). Two of them—medication and disease—include a set of attributes. A part of the corpus has a coreference annotation with 1560 coreference chains in 300 documents. A multi-label model based on a language model and a set of features has been developed for recognizing entities of the presented corpus. We analyze how the choice of different model components affects the entity recognition accuracy. Those components include methods for vector representation of words, types of language models pre-trained for the Russian language, ways of text normalization, and other pre-processing methods. The sufficient size of our corpus allows us to study the effects of particularities of annotation and entity balancing. We compare our corpus to existing ones by the occurrences of entities of different types and show that balancing the corpus by the number of texts with and without adverse drug event (ADR) mentions improves the ADR recognition accuracy with no notable decline in the accuracy of detecting entities of other types. As a result, the state of the art for the pharmacological entity extraction task for the Russian language is established on a full-size labeled corpus. For the ADR entity type, the accuracy achieved is 61.1% by the F1-exact metric, which is on par with the accuracy level for other language corpora with similar characteristics and ADR representativeness. 
The accuracy of the coreference relation extraction evaluated on our corpus is 71%, which is higher than the results achieved on the other Russian-language corpora.


Author(s):  
Imran Aslan

Developments in technology have opened new doors for healthcare, improving treatment methods and preventing illnesses proactively. Internet of things (IoT) technologies have also improved the self-management of care and, through data analytics, provided doctors with more useful data and decisions. Avoiding unnecessary visits, utilizing better-quality resources, and improving allocation and planning are the main advantages of IoT in healthcare. Moreover, governments and private institutions have become a part of this state-of-the-art development in order to decrease costs and gain more benefits in the management of services. In this chapter, IoT technologies and applications are explained with examples. Furthermore, the use of deep learning and artificial intelligence (AI) in healthcare and their benefits are described: artificial neural networks (ANN) can monitor, learn, and predict, so overall health severity can be estimated and serious health loss prevented.


Author(s):  
JZT Sim, QW Fong, WM Huang, CH Tan

With the advent of artificial intelligence (AI), machines are increasingly being used to complete complicated tasks, yielding remarkable results. Machine learning (ML) is the most relevant subset of AI in medicine, and it will soon become an integral part of our everyday practice. Therefore, physicians should acquaint themselves with ML and AI, and with their role as an enabler rather than a competitor. Herein, we introduce basic concepts and terms used in AI and ML, and aim to demystify commonly used AI/ML algorithms through specific examples, covering learning methods such as neural networks/deep learning and decision trees, and application domains such as computer vision and natural language processing. We discuss how machines are already being used to augment the physician's decision-making process, and postulate the potential impact of ML on medical practice and medical research based on its current capabilities and known limitations. Moreover, we discuss the feasibility of full machine autonomy in medicine.


2022
Author(s):  
Ms. Aayushi Bansal, Dr. Rewa Sharma, Dr. Mamta Kathuria

Recent advancements in deep learning architectures have increased their utility in real-life applications. Deep learning models require a large amount of data for training, yet in many application domains, such as marketing, computer vision, and medical science, only a limited set of data is available because collecting new data is either infeasible or resource-intensive. Without sufficient data, these models are prone to overfitting. One data-space solution to the problem of limited data is data augmentation. This study focuses on the various data augmentation techniques that can be used to further improve the accuracy of a neural network. Augmenting available data saves the cost and time required to collect new data for training deep neural networks; it also regularizes the model and improves its capability of generalization. The need for large datasets in different fields such as computer vision, natural language processing, security, and healthcare is also covered in this survey paper. The goal of this paper is to provide a comprehensive survey of recent advancements in data augmentation techniques and their applications in various domains.
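As a concrete illustration of the data-space augmentation the survey describes, a minimal NumPy sketch can expand a small image set with flips and additive noise; the image shapes, the fourfold expansion, and the noise level are arbitrary assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(images, noise_std=0.05):
    """Return the original images plus flipped and noise-perturbed copies."""
    flipped_h = images[:, :, ::-1]   # horizontal flip (mirror columns)
    flipped_v = images[:, ::-1, :]   # vertical flip (mirror rows)
    noisy = np.clip(images + rng.normal(0, noise_std, images.shape), 0.0, 1.0)
    return np.concatenate([images, flipped_h, flipped_v, noisy], axis=0)

# Toy dataset: 10 grayscale "images" of 8x8 pixels with values in [0, 1].
X = rng.random((10, 8, 8))
X_aug = augment(X)

print(X.shape, "->", X_aug.shape)  # (10, 8, 8) -> (40, 8, 8)
```

Each augmented copy keeps the original label, so the effective training-set size grows without any new data collection, which is exactly the cost saving the survey emphasizes.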


2020, pp. 1-38
Author(s):  
Amandeep Kaur, Anjum Mohammad Aslam

In this chapter, we discuss the core concepts of Artificial Intelligence. We define the term Artificial Intelligence and its interconnected terms, such as machine learning, deep learning, and neural networks. We describe these concepts from the perspective of their usage in the area of business. We further analyze various applications and case studies that can be realized using Artificial Intelligence and its subfields. Numerous Artificial Intelligence applications are already being utilized in the area of business, and more are expected in the future, where machines will augment the artificial intelligence, natural language processing, and machine learning abilities of humans in various zones.


2021, pp. 30-33
Author(s):  
Neha Sharma, Dr. S Veenadhari, Rachna Kulhare

Deep learning is a type of artificial intelligence that employs neural networks, a multi-layered structure of algorithms. It is a family of artificial-intelligence methods, based on artificial neural networks, for learning hierarchies of features, and it is also applied in sentiment analysis. This paper begins with an overview of deep learning before moving on to a detailed examination of its present uses in sentiment analysis.


Author(s):  
Deepa Joshi, Shahina Anwarul, Vidyanand Mishra

A branch of artificial intelligence (AI) known as deep learning consists of statistical analysis algorithms called artificial neural networks (ANN), inspired by the structure and function of the brain. The accuracy of predicting a task has improved tremendously with the implementation of deep neural networks, which incorporate deep layers into the model, allowing the system to learn complex data. This chapter intends to give a straightforward, easy-to-understand manual for the complexities of Google's Keras framework. The basic steps for the installation of Anaconda and CUDA, along with deep learning libraries, specifically Keras and TensorFlow, are discussed. A practical approach to solving deep learning problems, identifying objects in the CIFAR-10 dataset, is explained in detail. This will help the audience understand deep learning through substantial practical examples that let them perceive algorithms instead of only discussing theory.
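The chapter's actual examples use Keras; as a framework-free illustration of the same preprocessing pipeline (normalize pixels, flatten, one-hot encode the ten CIFAR-10 classes, score with a final softmax layer), a NumPy sketch might look like this. Only the CIFAR-10 image shape (32x32 RGB) and class count are taken from the dataset; the fake batch, the random weights, and the single softmax layer standing in for a full network are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Fake a small batch of CIFAR-10-shaped images (32x32 RGB, pixel values 0-255).
images = rng.integers(0, 256, size=(16, 32, 32, 3)).astype(np.float32)
labels = rng.integers(0, 10, size=16)

# Step 1: normalize pixel values to [0, 1] and flatten each image.
x = (images / 255.0).reshape(len(images), -1)   # shape (16, 3072)

# Step 2: one-hot encode the 10 CIFAR-10 class labels.
y = np.eye(10)[labels]

# Step 3: a single softmax layer standing in for the network's output layer.
W = rng.normal(0, 0.01, (x.shape[1], 10))
b = np.zeros(10)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(x @ W + b)
# Cross-entropy loss; near ln(10) ~ 2.3 for an untrained 10-class model.
loss = -np.mean(np.sum(y * np.log(probs + 1e-12), axis=1))
print(probs.shape, round(float(loss), 3))
```

In the chapter's Keras workflow the same three steps happen before training, and a convolutional network replaces the random softmax layer shown here.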


Author(s):  
Saad Sadiq, Mei-Ling Shyu, Daniel J. Feaster

Deep Neural Networks (DNNs) are best known for being the state of the art in artificial intelligence (AI) applications, including natural language processing (NLP), speech processing, and computer vision. In spite of all the recent achievements of deep learning, it has yet to achieve the semantic learning required to reason about data. This lack of reasoning is partially attributed to rote memorization of patterns and curves from millions of training samples while ignoring spatiotemporal relationships. The proposed framework puts forward a novel approach based on variational autoencoders (VAEs), using the potential-outcomes model to develop counterfactual autoencoders. The framework transforms any sort of multimedia input distribution into a meaningful latent space while giving more control over how that latent space is created. This allows us to model data that is better suited to answering inference-based queries, which is very valuable in reasoning-based AI applications.
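The counterfactual autoencoders described above build on standard VAE machinery, whose core is the reparameterization trick and the KL regularizer that shapes the latent space. A NumPy sketch of just that core follows; the linear toy "encoder", the input size, and the latent dimension are placeholder assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 4

# A toy "encoder": map an input vector to the mean and log-variance of a
# Gaussian over the latent space (weights are random placeholders).
x = rng.random(16)
W_mu, W_logvar = rng.normal(0, 0.1, (2, latent_dim, 16))
mu = W_mu @ x
log_var = W_logvar @ x

# Reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through mu and log_var during training.
eps = rng.standard_normal(latent_dim)
z = mu + np.exp(0.5 * log_var) * eps

# KL divergence between N(mu, sigma^2) and the standard normal prior N(0, I);
# this is the regularizer that gives the VAE a meaningful latent space.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

print(z.shape, kl >= 0.0)
```

The extra control over the latent space mentioned in the abstract comes from modifying exactly this stage: how z is sampled and how the latent distribution is regularized.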


Author(s):  
Dr. Suma V.

The paper is a review of computer vision as an aid to interaction between humans and machines. Computer vision, termed a subfield of artificial intelligence and machine learning, is capable of training the computer to visualize, interpret, and respond to the visual world in a way similar to human vision. Nowadays, computer vision has found application in broad areas such as health care, safety, security, and surveillance, owing to the progress, developments, and latest innovations in artificial intelligence, deep learning, and neural networks. The paper presents the enhanced capabilities of computer vision experienced in various applications involving interactions between humans and machines through artificial intelligence, deep learning, and neural networks.

