Comparison of Speech Recognition and Natural Language Understanding Frameworks for Detection of Dangers with Smart Wearables

Author(s):  
Dariusz Mrozek ◽  
Szymon Kwaśnicki ◽  
Vaidy Sunderam ◽  
Bożena Małysiak-Mrozek ◽  
Krzysztof Tokarz ◽  
...  
Author(s):  
Dr. Jaydeep Patil ◽  
Atharva Shewale ◽  
Ekta Bhushan ◽  
Alister Fernandes ◽  
Rucha Khartadkar

The Virtual Personal Assistant (VPA) is one of the most successful applications of Artificial Intelligence, offering people a new way to have their work done by a machine. This paper gives a brief survey of the methodologies and concepts used in building a Virtual Personal Assistant (VPA) and their use in different software applications. Speech recognition systems, also known as Automatic Speech Recognition (ASR), play an important role in virtual assistants by enabling users to converse with the system. In this project, we build a Virtual Personal Assistant, ERAA, that includes the key features needed to assist users with their everyday needs. With the user experience in mind, we aim to make it as appealing as other established VPAs. Several natural language understanding platforms, such as IBM Watson and Google Dialogflow, were studied for this purpose; in our project, we use Google Dialogflow as the NLU platform for the implementation of the software application. The user interface of the application is designed with the Flutter framework. All models used in this VPA are designed to work as efficiently as possible, and the features commonly available in most VPAs are included. ERAA is implemented as a smartphone application, and as future work we aim to bring it to the desktop environment. The paper describes the methodologies used to develop the application, reports the outcomes of the features implemented within it, and shows how available natural language understanding platforms can reduce the burden on the user and thus support the development of a robust software application.
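Since the abstract names Google Dialogflow as the NLU platform, the following minimal sketch shows how a single text turn can be sent to a Dialogflow agent with the official google-cloud-dialogflow Python client. It is not the authors' code: the project ID, session ID, and example utterance are placeholders, and the paper's Flutter front end would typically reach such a backend over HTTP rather than call it directly.

# Minimal sketch: send one user utterance to a Dialogflow agent and read back
# the matched intent and fulfillment text (google-cloud-dialogflow v2 client).
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str,
                  language_code: str = "en-US"):
    """Send one text turn to the Dialogflow agent; return (intent name, reply)."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    return result.intent.display_name, result.fulfillment_text

if __name__ == "__main__":
    # "eraa-demo" and "session-1" are illustrative placeholders, not the paper's project.
    intent, reply = detect_intent("eraa-demo", "session-1", "set an alarm for 7 am")
    print(intent, "->", reply)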


2021 ◽  
pp. 1-12
Author(s):  
Manaal Faruqui ◽  
Dilek Hakkani-Tür

Abstract As more users across the world are interacting with dialog agents in their daily life, there is a need for better speech understanding that calls for renewed attention to the dynamics between research in automatic speech recognition (ASR) and natural language understanding (NLU). We briefly review these research areas and lay out the current relationship between them. In light of the observations we make in this paper, we argue that (1) NLU should be cognizant of the presence of ASR models being used upstream in a dialog system’s pipeline, (2) ASR should be able to learn from errors found in NLU, (3) there is a need for end-to-end datasets that provide semantic annotations on spoken input, (4) there should be stronger collaboration between ASR and NLU research communities.
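To make point (1) concrete, the sketch below shows one simple way an NLU component can be made cognizant of the upstream ASR model: intent scores are aggregated over the ASR n-best list, weighted by recognizer confidence, instead of trusting only the 1-best transcript. The hypothesis format and keyword classifier are illustrative stand-ins under assumed names, not a method proposed in the paper.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AsrHypothesis:
    text: str          # candidate transcript from the recognizer
    confidence: float  # ASR confidence / posterior for this hypothesis

def classify_intent(text: str) -> Dict[str, float]:
    # Stand-in NLU scorer; a real system would use a trained intent model.
    keywords = {"set_alarm": ["alarm", "wake"], "play_music": ["play", "song"]}
    return {intent: float(sum(word in text.lower() for word in words))
            for intent, words in keywords.items()}

def intent_over_nbest(hypotheses: List[AsrHypothesis]) -> str:
    # Aggregate intent scores over the n-best list, weighted by ASR confidence,
    # so a misrecognized 1-best transcript does not decide the intent alone.
    combined: Dict[str, float] = {}
    for hyp in hypotheses:
        for intent, score in classify_intent(hyp.text).items():
            combined[intent] = combined.get(intent, 0.0) + hyp.confidence * score
    return max(combined, key=combined.get)

if __name__ == "__main__":
    nbest = [
        AsrHypothesis("set an arm for seven", 0.55),    # 1-best, misrecognized
        AsrHypothesis("set an alarm for seven", 0.40),  # 2-best, correct
    ]
    print(intent_over_nbest(nbest))  # -> set_alarm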

