Language Processor
Recently Published Documents

Total documents: 68 (6 in the past five years)
H-index: 7 (past five years: 1)

2022 · pp. 72-86

This chapter presents the design and development process for the Socrates Digital™ system. It describes the four phases of design and development: Understand, Explore, Materialize, and Realize. Completing these four phases results in a Socrates Digital™ system that leverages artificial intelligence services, including a natural language processor available from several providers such as Apple, Microsoft, Google, IBM, and Amazon.
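
The abstract above names hosted natural language services from several providers. As a rough sketch of what delegating utterance analysis to such a service can look like (the endpoint URL, credential, and response fields below are hypothetical placeholders, not any vendor's actual API):

```python
# Hypothetical sketch: calling a hosted NLP service over HTTP.
# NLU_ENDPOINT, API_KEY, and the response fields are placeholders,
# not the API of Apple, Microsoft, Google, IBM, or Amazon.
import requests

NLU_ENDPOINT = "https://nlp.example.com/v1/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"                             # placeholder credential

def analyze_utterance(text: str) -> dict:
    """Send one user utterance to the service and return its analysis
    (intent, entities, and so on; the shape varies by provider)."""
    response = requests.post(
        NLU_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# A Socratic dialogue loop would route the recognized intent into its
# next question-selection step.
result = analyze_utterance("Why do we believe that assumption holds?")
print(result.get("intent"), result.get("entities"))
```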


2022 · pp. 197-233

This chapter shows how software development professionals use the provided flow charts and pseudo-code to create the Dialog Development Manager. Analysts then use the Dialog Development Manager to build the problem-specific knowledge a natural language processor needs to support the conversation between Socrates Digital™ and end users. The Dialog Development Manager guides analysts through the Understand, Explore, Materialize, and Realize phases of design and development to create the conversational interface for Socrates Digital™.
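
As a minimal sketch of the phased workflow this abstract describes (class, method, and artifact names are illustrative assumptions, not the chapter's actual flow charts or pseudo-code), the four phases could be enforced in order like this:

```python
# Illustrative sketch of a Dialog Development Manager that walks an
# analyst through the four phases in order. All names here are
# assumptions, not the chapter's pseudo-code.
from enum import Enum, auto

class Phase(Enum):
    UNDERSTAND = auto()
    EXPLORE = auto()
    MATERIALIZE = auto()
    REALIZE = auto()

class DialogDevelopmentManager:
    ORDER = (Phase.UNDERSTAND, Phase.EXPLORE, Phase.MATERIALIZE, Phase.REALIZE)

    def __init__(self) -> None:
        # Problem-specific knowledge captured at each completed phase.
        self.artifacts: dict[Phase, str] = {}

    @property
    def done(self) -> bool:
        return len(self.artifacts) == len(self.ORDER)

    def complete_phase(self, phase: Phase, artifact: str) -> None:
        """Record a phase's output, refusing to skip ahead."""
        if self.done:
            raise RuntimeError("all phases are already complete")
        expected = self.ORDER[len(self.artifacts)]
        if phase is not expected:
            raise ValueError(f"finish {expected.name} before {phase.name}")
        self.artifacts[phase] = artifact

mgr = DialogDevelopmentManager()
mgr.complete_phase(Phase.UNDERSTAND, "problem framing notes")
mgr.complete_phase(Phase.EXPLORE, "candidate question flows")
```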


2021 · pp. 39
Author(s): Marta Dosaiguas Canal, Jèssica Pérez-Moreno

It is well established that musical interactions between siblings occur within the family and that these have an important bearing on the musical development of both participants. This article presents data from a study of these still little-understood interactions. The participants are siblings between two and six years of age from three Catalan families with similar characteristics. Data are collected with the LENA® DLP (Digital Language Processor), an audio recorder worn by the younger sibling that can record up to 16 hours at high quality and non-intrusively. Recordings are made periodically over a full day and are supplemented with voice notes narrated by the families to provide context for that day. The data are analyzed with a validated table that extracts information along four dimensions: 1) order of participation; 2) place; 3) type of intervention; and 4) source. Among other findings, the results reveal that: a) in most interactions the older sibling starts and the younger ends the interaction; b) interactions mostly take place at home; c) imitation and synchrony are the most frequently used interaction types; and d) interactions draw equally on songs and on improvisations.


2021
Author(s): Yongheng Chen, Rui Zhong, Hong Hu, Hangfan Zhang, Yupeng Yang, ...

EMBO Reports · 2020 · Vol 21 (12)
Author(s): Philip Hunter

Electronics · 2019 · Vol 8 (6) · pp. 681
Author(s): Praveen Edward James, Hou Kit Mun, Chockalingam Aravind Vaithilingam

The purpose of this work is to develop a spoken language processing system for smart-device troubleshooting through human-machine interaction. The system combines a software Bidirectional Long Short-Term Memory (BLSTM)-based speech recognizer with a hardware LSTM-based language processor for Natural Language Processing (NLP), connected over a serial RS232 interface. Mel Frequency Cepstral Coefficient (MFCC)-based feature vectors from the speech signal are fed directly into the BLSTM network, and a dropout layer is added after the BLSTM layer to reduce over-fitting and improve robustness. The speech recognition component, which combines an acoustic modeler, a pronunciation dictionary, and the BLSTM network for generating query text, executes in real time with an 81.5% Word Error Rate (WER) and an average training time of 45 s. The language processor comprises a vectorizer, a lookup dictionary, a key encoder, an LSTM-based training and prediction network, and a dialogue manager; it transforms query intent to generate response text with a processing time of 0.59 s, 5% hardware utilization, and an F1 score of 95.2%. The proposed system shows a 4.17% decrease in accuracy compared with existing systems, which use parallel processing and high-speed cache memories to perform additional training that improves accuracy. However, the language processor achieves a 36.7% decrease in processing time and a 50% decrease in hardware utilization, making it suitable for troubleshooting smart devices.
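
The software half of this pipeline, a bidirectional LSTM over MFCC frames with a dropout layer, can be sketched in a few lines of Keras. This is an illustration only: the layer sizes, 13-coefficient MFCC frames, padded sequence length, and output vocabulary below are assumptions, not the paper's reported configuration.

```python
# Sketch of a BLSTM speech-recognition network over MFCC feature
# vectors, with dropout after the BLSTM layer as in the abstract.
# All sizes are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_MFCC = 13      # MFCC coefficients per frame (assumed)
MAX_FRAMES = 200   # padded utterance length in frames (assumed)
VOCAB_SIZE = 500   # output tokens for the generated query text (assumed)

model = models.Sequential([
    layers.Input(shape=(MAX_FRAMES, NUM_MFCC)),            # MFCC frames in
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Dropout(0.3),                                   # reduce over-fitting
    layers.TimeDistributed(layers.Dense(VOCAB_SIZE, activation="softmax")),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```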


2018 · Vol 29 (04) · pp. 279-291
Author(s): Kelsey E. Klein, Yu-Hsiang Wu, Elizabeth Stangl, Ruth A. Bentler

Abstract

Auditory environments can influence the communication function of individuals with hearing loss and the effects of hearing aids. Therefore, a tool that can objectively characterize a patient's real-world auditory environments is needed.

The purpose of this study was to use the Language Environment Analysis (LENA) system to quantify the auditory environments of adults with hearing loss, to examine whether the use of hearing aids changes a user's auditory environment, and to determine the association between LENA variables and self-report hearing aid outcome measures.

This study used a crossover design. Participants included 22 adults with mild-to-moderate hearing loss, aged 64-82 yr, who were fitted with bilateral behind-the-ear hearing aids from a major manufacturer.

The LENA system consists of a digital language processor (DLP) that is worn by an individual and records up to 16 hr of the individual's auditory environment. The LENA algorithms then automatically categorize the recording by time spent in different types of auditory environments (e.g., meaningful speech and TV/electronic sound) and report the sound levels of the different auditory categories. Participants wore a LENA DLP in an unaided condition and an aided condition, each lasting six to eight days; bilateral hearing aids were worn in the aided condition. The percentage of time spent in each auditory environment, as well as the median levels of TV/electronic sounds and speech, were compared between the unaided and aided conditions using paired-sample t tests. LENA data were also compared with self-report measures of hearing disability and hearing aid benefit using Pearson correlations.

Overall, participants spent the greatest percentage of time in silence (∼40%) relative to the other auditory environments, and ∼12% and 26% of their time in meaningful speech and TV/electronic sound environments, respectively. No significant differences were found between the mean percentages of time spent in each auditory environment in the unaided and aided conditions. Median TV/electronic sound levels were on average 2.4 dB lower in the aided condition than in the unaided condition; speech levels did not differ significantly between the two conditions. TV/electronic sound and speech levels did not correlate significantly with the self-report data.

The LENA system can provide rich data to characterize the everyday auditory environments of older adults with hearing loss. Although the TV/electronic sound level was significantly lower in the aided condition than in the unaided condition, the use of hearing aids seemed not to substantially change users' auditory environments. Because there was no significant association between the objective LENA variables and the self-report questionnaire outcomes, these two types of measures may assess different aspects of communication function. The feasibility of using LENA in clinical settings is discussed.
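
The two statistical comparisons in this study, paired-sample t tests between the unaided and aided conditions and Pearson correlations against self-report scores, are straightforward to reproduce in outline. The sketch below uses simulated stand-in values (not the study's data) to show the shape of the analysis with scipy.

```python
# Outline of the study's analyses on simulated stand-in data
# (NOT the study's measurements): paired t test between conditions,
# then Pearson correlation with a self-report score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 22  # the study's sample size

# Simulated median TV/electronic sound levels (dB) per participant.
unaided_tv = rng.normal(60.0, 5.0, size=n)
aided_tv = unaided_tv - rng.normal(2.4, 1.0, size=n)  # ~2.4 dB lower when aided

t_stat, p_val = stats.ttest_rel(unaided_tv, aided_tv)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

# Simulated self-report outcome scores for the correlation step.
self_report = rng.normal(50.0, 10.0, size=n)
r, p = stats.pearsonr(aided_tv, self_report)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```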


2014 · Vol 35 (8) · pp. 1301-1305
Author(s): Lingsheng Li, Ami R. Vikani, Gregory C. Harris, Frank R. Lin
