Generating Vowel Nasality for a Rule-Based Bangla Speech Synthesizer

Bangla is a useful language for studying nasal vowels because every vowel has a corresponding nasal counterpart. Generating vowel nasality is an important task for producing artificial nasality in a speech synthesizer, and researchers have employed various methods for it. Vowel nasality generation for a rule-based speech synthesizer has not yet been studied for Bangla. This study discusses several methods, using the full spectrum and the partial spectrum, for generating vowel nasality for use in a rule-based, demisyllable-based Bangla text-to-speech (TTS) system. In a demisyllable-based Bangla TTS system, 1400 demisyllables need to be stored in the database. Transforming the vowel part of a demisyllable into its nasal counterpart reduces the speech database to 700 demisyllables. Comparative study of the e
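The abstract does not give the transformation details, but a common textbook approximation of vowel nasality is to add a nasal resonance and a nearby antiresonance to the oral vowel's spectrum. The sketch below illustrates that idea on the vowel segment of a stored demisyllable; the frequencies, bandwidths, and function names are assumptions for illustration, not values or methods from the study.

```python
# A minimal sketch (not the paper's exact method): nasalize the vowel
# portion of a demisyllable by adding one nasal pole pair (resonance)
# and one nasal zero pair (antiresonance). All parameters are assumed.
import numpy as np
from scipy.signal import lfilter

def resonator(freq_hz, bw_hz, fs):
    """Second-order resonator coefficients (b, a) at freq_hz, bandwidth bw_hz."""
    r = np.exp(-np.pi * bw_hz / fs)
    theta = 2.0 * np.pi * freq_hz / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]   # complex pole pair
    b = [sum(a)]                                  # unity gain at DC
    return b, a

def nasalize_vowel(vowel, fs, pole_hz=300.0, zero_hz=600.0, bw_hz=100.0):
    """Apply a nasal resonance and antiresonance to an oral vowel signal."""
    b_res, a_res = resonator(pole_hz, bw_hz, fs)
    b_anti, a_anti = resonator(zero_hz, bw_hz, fs)
    y = lfilter(b_res, a_res, vowel)     # add the nasal resonance
    y = lfilter(a_anti, b_anti, y)       # swapped coefficients -> antiresonance
    # Rescale to the original peak so loudness is roughly preserved.
    return y * (np.max(np.abs(vowel)) / (np.max(np.abs(y)) + 1e-9))
```

Applied to the vowel portion of each stored oral demisyllable, a transformation of this kind would let the nasal counterparts be synthesized on the fly rather than stored, which is the motivation for the 1400-to-700 reduction described above.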

Electronics
2020
Vol 9 (2)
pp. 267
Author(s):
Fernando Alonso Martin
María Malfaz
Álvaro Castro-González
José Carlos Castillo
Miguel Ángel Salichs

The success of social robotics is directly linked to their ability to interact with people. Humans possess verbal and non-verbal communication skills, and both are therefore essential for social robots to achieve natural human–robot interaction. This work focuses on the former, since the majority of social robots implement an interaction system endowed with verbal capacities. To implement such a system, social robots must be equipped with an artificial voice system. In robotics, a text-to-speech (TTS) system is the most common speech synthesis technique. The performance of a speech synthesizer is mainly evaluated by its similarity to the human voice in terms of intelligibility and expressiveness. In this paper, we present a comparative study of eight off-the-shelf TTS systems used in social robots. To carry out the study, 125 participants evaluated the performance of the following TTS systems: Google, Microsoft, Ivona, Loquendo, Espeak, Pico, AT&T, and Nuance. The evaluation was performed after observing videos in which a social robot communicates verbally using one TTS system. The participants completed a questionnaire to rate each TTS system on four features: intelligibility, expressiveness, artificiality, and suitability. Four research questions were posed to determine whether it is possible to present a ranking of TTS systems for each evaluated feature or whether, on the contrary, there are no significant differences between them. Our study shows that participants found differences between the evaluated TTS systems in terms of intelligibility, expressiveness, and artificiality. The experiments also indicated a relationship between the physical appearance of the robots (embodiment) and the suitability of the TTS systems.
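The abstract does not state which statistical analysis was used. Purely as a hypothetical illustration, the sketch below shows how per-feature Likert ratings from 125 participants could be compared across the eight systems with a Kruskal-Wallis test and then ranked by median; the data are simulated, not the study's results.

```python
# Hypothetical illustration (simulated data, not the study's results):
# compare questionnaire ratings of several TTS systems on one feature
# (e.g. intelligibility) and rank them by median rating.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
systems = ["Google", "Microsoft", "Ivona", "Loquendo",
           "Espeak", "Pico", "AT&T", "Nuance"]
# Simulated 1-5 Likert ratings from 125 participants per system.
ratings = {name: rng.integers(1, 6, size=125) for name in systems}

stat, p = kruskal(*ratings.values())
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")

ranking = sorted(systems, key=lambda s: np.median(ratings[s]), reverse=True)
print("Ranking by median rating:", ranking)
```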


1982
Vol CE-28 (3)
pp. 250-256
Author(s):
Katsunobu Fushikida
Yukio Mitome
Yuji Inoue

2013
pp. 498-512
Author(s):
Erik Cuevas
Daniel Zaldivar
Marco Perez-Cisneros

Reliable corner detection is an important task in pattern recognition applications. This chapter presents a fuzzy-rule-based approach to detecting corners even under imprecise information. Uncertainties arise due to various types of imaging defects such as blurring, illumination changes, and noise. Fuzzy systems are well known for handling imprecision efficiently, so to cope with the incompleteness arising from imperfect data it is reasonable to model corner properties with a fuzzy rule-based system. The robustness of the proposed algorithm is compared with that of well-known conventional detectors. Performance is tested on a number of benchmark images to illustrate the efficiency of the algorithm in the presence of noise.
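As a simplified illustration of the idea (not the chapter's actual algorithm), the sketch below fires a single fuzzy rule at every pixel: IF the horizontal intensity change is HIGH AND the vertical intensity change is HIGH THEN cornerness is HIGH, with HIGH modelled by a sigmoid membership and AND by the min t-norm. The membership parameters are assumed for illustration.

```python
# Simplified fuzzy-rule corner scoring (illustrative only, not the
# chapter's algorithm). One rule: IF |dI/dx| is HIGH AND |dI/dy| is HIGH
# THEN cornerness is HIGH; min implements the fuzzy AND.
import numpy as np

def high(x, center, width):
    """Sigmoid membership function for the fuzzy set HIGH."""
    return 1.0 / (1.0 + np.exp(-(x - center) / width))

def fuzzy_cornerness(image, center=20.0, width=5.0):
    img = image.astype(float)
    gx = np.abs(np.gradient(img, axis=1))   # horizontal intensity change
    gy = np.abs(np.gradient(img, axis=0))   # vertical intensity change
    return np.minimum(high(gx, center, width), high(gy, center, width))

# Usage: only pixels where both gradients are strong score near 1.
img = np.zeros((8, 8)); img[4:, 4:] = 255   # one synthetic corner
print(np.round(fuzzy_cornerness(img), 2))
```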


2011
pp. 456-477
Author(s):
Vassilis Papataxiarhis
Vassileios Tsetsos
Isambo Karali
Panagiotis Stamatopoulos

Embedding rules into Web applications, and into distributed applications in general, is a significant task for achieving the desired expressivity in such environments. Various methodologies and reasoning modules have been proposed to manage rules and knowledge on the Web. The main objective of the chapter is to survey related work in this area and to discuss relevant theories, methodologies, and tools that can be used to develop rule-based applications for the Web. The chapter deals with both formally defined approaches to modeling a domain of interest: the first based on standard logics, the second stemming from the logic programming perspective. Furthermore, a comparative study is presented that evaluates the reasoning engines and the various knowledge representation methodologies, with a focus on rules.
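To make the logic-programming side of that distinction concrete, the sketch below is a minimal forward-chaining engine over ground facts and Horn-clause-like rules. It is not tied to any particular engine surveyed in the chapter, and the fact and rule names are invented for illustration.

```python
# Minimal, illustrative forward-chaining rule engine (not any specific
# engine from the survey): rules are body -> head over ground facts,
# applied repeatedly until no new fact can be derived.
facts = {("hasRole", "alice", "admin"), ("ownsPage", "alice", "home")}

# Each rule: (list of body patterns, head pattern); "?x" marks a variable.
rules = [
    ([("hasRole", "?u", "admin")], ("canEdit", "?u", "anyPage")),
    ([("ownsPage", "?u", "?p")], ("canEdit", "?u", "?p")),
]

def substitute(pattern, binding):
    return tuple(binding.get(t, t) for t in pattern)

def match(pattern, fact, binding):
    """Extend binding so pattern unifies with fact, or return None."""
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return b

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            bindings = [{}]
            for pat in body:                 # join body patterns left to right
                bindings = [b2 for b in bindings for f in facts
                            if (b2 := match(pat, f, b)) is not None]
            for b in bindings:               # assert the instantiated head
                new = substitute(head, b)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(forward_chain(set(facts), rules))
```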

