Speech Analysis Systems

1992 ◽  
Vol 35 (2) ◽  
pp. 314-332 ◽  
Author(s):  
Charles Read ◽  
Eugene H. Buder ◽  
Raymond D. Kent

Performance characteristics are reviewed for seven systems marketed for acoustic speech analysis: CSpeech, CSRE, ILS-PC, Kay Elemetrics model 5500 Sona-Graph, MacSpeech Lab II, MSL, and Signalyze. The characteristics reviewed include system components, basic capabilities (signal acquisition, waveform operations, analysis, and other functions), documentation, user interface, data formats and journaling, speed and precision of spectral analysis, and speed and precision of fundamental frequency analysis. Basic capabilities are also tabulated for three recently introduced systems: the Sensimetrics SpeechStation, the Kay Elemetrics Computerized Speech Lab (CSL), and the LSI Speech Workstation. In addition to the capability and performance summaries, this article offers suggestions for continued development of speech analysis systems, particularly in data exchange, journaling, display features, spectral analysis, and fundamental frequency analysis.
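
The fundamental frequency analysis that the review benchmarks can be illustrated with a minimal autocorrelation-based F0 estimator; this is a generic sketch of the technique, not the algorithm used by any of the reviewed systems, and the parameter names and search range are illustrative.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) via the autocorrelation peak."""
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo = int(sample_rate / fmax)   # shortest plausible pitch period (samples)
    hi = int(sample_rate / fmin)   # longest plausible pitch period (samples)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# Synthetic 200 Hz tone sampled at 10 kHz (0.2 s)
fs = 10_000
t = np.arange(2000) / fs
tone = np.sin(2 * np.pi * 200 * t)
print(estimate_f0(tone, fs))  # → 200.0
```

Real systems add voicing decisions, octave-error checks, and interpolation around the peak, which is where the precision differences the review measures come from.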

2012 ◽  
Vol 220-223 ◽  
pp. 1472-1475
Author(s):  
Qiu Lin Tan ◽  
Xiang Dong Pei ◽  
Si Min Zhu ◽  
Ji Jun Xiong

Based on a survey of the state of automatic test systems at home and abroad, and an analysis of the functions and performance of existing integrated test systems, a design for an integrated test system is proposed, with an FPGA as the core logic controller of the hardware circuit. The hardware design comprises a digital signal source output module, an analog output module, and a PCM codec module. The design of the hardware circuit is described in detail, together with an analysis of several key technologies in the design process. Data exchange with the host computer takes place over a PCI card, so the data link and bandwidth can be expanded according to actual needs. The entire system is designed on the modular principle, which gives it strong scalability.
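
The PCM codec module mentioned above converts between analog sample values and digital codes; the conversion itself can be sketched as simple linear quantization. This is a software illustration of the principle only, not the paper's hardware design, and the bit depth is an assumption.

```python
import numpy as np

def pcm_encode(samples, bits=8):
    """Uniformly quantize samples in [-1, 1) to signed integer codes."""
    levels = 2 ** (bits - 1)
    return np.clip(np.round(samples * levels), -levels, levels - 1).astype(int)

def pcm_decode(codes, bits=8):
    """Map integer codes back to sample values in [-1, 1)."""
    return codes / 2 ** (bits - 1)

x = np.array([0.0, 0.5, -0.25])
codes = pcm_encode(x)
print(codes.tolist())           # → [0, 64, -32]
print(pcm_decode(codes).tolist())  # → [0.0, 0.5, -0.25]
```

A hardware codec performs the same mapping with an ADC/DAC pair; companding variants (A-law, µ-law) use a nonlinear step instead of the uniform one shown here.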


2015 ◽  
Vol 12 (2) ◽  
pp. 655-681 ◽  
Author(s):  
Tomas Cerny ◽  
Miroslav Macik ◽  
Michael Donahoo ◽  
Jan Janousek

Increasing demands on user interface (UI) usability, adaptability, and dynamic behavior drive ever-growing development and maintenance complexity. Traditional UI design techniques result in complex descriptions for data presentations with significant information restatement. In addition, multiple concerns in UI development lead to descriptions that exhibit concern tangling, which results in high fragment replication. Concern-separating approaches address these issues; however, they fail to maintain the separation of concerns for execution tasks like rendering or UI delivery to clients. During the rendering process on the server side, the separation collapses into entangled concerns that are provided to clients. Such client-side entanglement may seem inconsequential, since the clients simply display what is sent to them; however, it compromises client performance through problems such as replication and fragment granularity ill-suited for effective caching. This paper considers the advantages brought by concern separation from both perspectives. It proposes an extension to aspect-oriented UI design with distributed concern delivery (DCD) for client-server applications. This extension lessens the server-side involvement in UI assembly and reduces fragment replication in the provided UI descriptions. The server provides clients with individual UI concerns, and the clients become partially responsible for UI assembly. This change increases client-side concern reuse and extends caching opportunities, reducing the volume of information transmitted between client and server and thereby improving UI responsiveness and performance. The underlying aspect-oriented UI design automates the server-side derivation of concerns related to data presentations adapted to runtime context, security, conditions, etc. The approach is evaluated in a case study applying DCD to an existing production web application. Our results demonstrate decreased volumes of UI descriptions assembled by the server side and extended client-side caching abilities, reducing the required data/fragment transmission and improving UI responsiveness. Furthermore, we evaluate the potential benefits and implications of DCD integration in selected UI frameworks.
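
The core idea of delivering UI concerns separately and assembling them on the client can be sketched in a few lines. This is a conceptual illustration under assumed names (the cache keys, `fetch` helper, and template syntax are all hypothetical), not the paper's actual DCD API.

```python
# Sketch of distributed concern delivery: the server ships the presentation
# concern (a template) and the data concern separately; the client assembles
# them and caches each concern independently.
from string import Template

cache = {}  # client-side cache, keyed per concern

def fetch(key, loader):
    """Return a concern from the cache, loading it from the 'server' once."""
    if key not in cache:
        cache[key] = loader()
    return cache[key]

def server_template():
    # Presentation concern: reusable across many records, cached once.
    return Template("<li>$name ($email)</li>")

def server_data():
    # Data concern: a small, record-specific payload.
    return {"name": "Ada", "email": "ada@example.org"}

tpl = fetch("user-row.tpl", server_template)
row = tpl.substitute(fetch("user:1", server_data))
print(row)  # → <li>Ada (ada@example.org)</li>
```

Because the template is cached separately from the data, rendering a second record re-fetches only the small data concern, which is the replication and caching benefit the abstract describes.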


2011 ◽  
pp. 133-145 ◽  
Author(s):  
James G. Anderson

Information technologies such as electronic medical records (EMRs), electronic prescribing, and clinical decision support systems are recognized as essential tools in all developed countries. However, the U.S. lags significantly behind other countries that are members of the Organization for Economic Cooperation and Development (OECD). Significant barriers impede wide-scale adoption of these tools in the U.S., especially EMR systems. These barriers include healthcare providers' lack of access to capital, system complexity, the lack of data standards that would permit exchange of clinical data, privacy concerns and legal barriers, and provider resistance. Overcoming these barriers will require subsidies and performance incentives from payers and government, certification and standardization of vendor applications to permit clinical data exchange, removal of legal barriers, and convincing evidence of the cost-effectiveness of these IT applications.


2020 ◽  
Vol 23 (2) ◽  
pp. 101-109
Author(s):  
Hao Dong ◽  
Zhongji Li ◽  
Leina Gao

2002 ◽  
Vol 1804 (1) ◽  
pp. 144-150
Author(s):  
Kenneth G. Courage ◽  
Scott S. Washburn ◽  
Jin-Tae Kim

The proliferation of traffic software programs on the market has resulted in many very specialized programs, intended to analyze one or two specific items within a transportation network. Consequently, traffic engineers use multiple programs on a single project, which ironically has resulted in new inefficiency for the traffic engineer. Most of these programs deal with the same core set of data, for example, physical roadway characteristics, traffic demand levels, and traffic control variables. However, most of these programs have their own formats for saving data files. Therefore, these programs cannot share information directly or communicate with each other because of incompatible data formats. Thus, the traffic engineer is faced with manually reentering common data from one program into another. In addition to inefficiency, this also creates additional opportunities for data entry errors. XML is catching on rapidly as a means for exchanging data between two systems or users who deal with the same data but in different formats. Specific vocabularies have been developed for statistics, mathematics, chemistry, and many other disciplines. The traffic model markup language (TMML) is introduced as a resource for traffic model data representation, storage, rendering, and exchange. TMML structure and vocabulary are described, and examples of their use are presented.
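
The kind of shared representation TMML enables can be illustrated with a small XML fragment. The element and attribute names below are illustrative inventions, not the published TMML vocabulary; the point is only that a common vocabulary lets any conforming program read the same core data without manual re-entry.

```python
import xml.etree.ElementTree as ET

# Hypothetical TMML-style fragment covering the shared core data the
# abstract mentions: roadway characteristics, demand, and control.
doc = """
<network>
  <link id="L1" lanes="2" length_ft="1320" speed_mph="45"/>
  <demand link="L1" volume_vph="850"/>
  <control link="L1" type="signal" cycle_s="90" green_s="42"/>
</network>
"""

root = ET.fromstring(doc)
link = root.find("link")
demand = root.find("demand")
print(link.get("lanes"), demand.get("volume_vph"))  # → 2 850
```

Each specialized analysis program would parse the same document and take only the elements relevant to it, eliminating both the re-keying effort and the data entry errors the abstract describes.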


2020 ◽  
Vol 91 (2A) ◽  
pp. 803-813 ◽  
Author(s):  
Telluri Ramakrushana Reddy ◽  
Pawan Dewangan ◽  
Lalit Arya ◽  
Pabitra Singha ◽  
Kattoju Achuta Kamesh Raju

Abstract We observed a harmonic noise (HN) in DEutscher Geräte-Pool für Amphibische Seismologie ocean-bottom seismometers (OBSs) data recorded from the Andaman–Nicobar region. The HN is characterized by sharp spectral peaks with a fundamental frequency and several overtones occurring at integer multiples of the fundamental frequency. We used an automated algorithm to quantify the occurrence of HN for the entire four-month deployment period (1 January 2014 to 30 April 2014). The algorithm detected more than 23 days of HN for some OBS stations. The spectral analysis of the hourly count of HN shows distinct lunar and solar tidal periodicities at 4.14, 6.1, 6.22, 12, and 12.4 hr as well as 13.66 days. The observed periodicities provide evidence of tidal triggering of HN. The HN is generated by the strumming of head buoys due to seafloor currents initiated by oceanic tides in the Andaman–Nicobar region.
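
The signature the detection algorithm looks for, a sharp fundamental with overtones at integer multiples, can be sketched as a harmonic-comb score over the spectrum. This is a generic illustration of the idea, not the authors' algorithm; the frequency range, candidate step, and overtone count are assumptions.

```python
import numpy as np

def harmonic_score(segment, fs, f0_range=(1.0, 10.0), n_overtones=4):
    """Score how strongly a segment's spectrum matches a harmonic comb.

    Sums spectral power at a candidate fundamental and its integer
    multiples; a high ratio to total power suggests harmonic noise.
    """
    spec = np.abs(np.fft.rfft(segment * np.hanning(len(segment)))) ** 2
    freqs = np.fft.rfftfreq(len(segment), 1.0 / fs)
    best = 0.0
    for f0 in np.arange(*f0_range, 0.1):
        idx = [np.argmin(np.abs(freqs - k * f0))
               for k in range(1, n_overtones + 1)]
        best = max(best, spec[idx].sum() / spec.sum())
    return best

# Synthetic 3 Hz harmonic strum vs. broadband noise, 30 s at 100 Hz
fs = 100.0
t = np.arange(int(30 * fs)) / fs
hn = sum(np.sin(2 * np.pi * 3.0 * k * t) / k for k in range(1, 5))
noise = np.random.default_rng(0).standard_normal(t.size)
print(harmonic_score(hn, fs) > harmonic_score(noise, fs))  # → True
```

Thresholding such a score per time window yields an hourly count of harmonic-noise occurrence like the one whose spectral analysis revealed the tidal periodicities.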


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Hyun Jae Baek ◽  
Min Hye Chang ◽  
Jeong Heo ◽  
Kwang Suk Park

Brain-computer interfaces (BCIs) aim to enable people to interact with the external world through an alternative, nonmuscular communication channel that uses brain signal responses to complete specific cognitive tasks. BCIs have been growing rapidly during the past few years, with most of the BCI research focusing on system performance, such as improving accuracy or information transfer rate. Despite these advances, BCI research and development is still in its infancy and requires further consideration to significantly affect human experience in most real-world environments. This paper reviews the most recent studies and findings about ergonomic issues in BCIs. We review dry electrodes that can be used to detect brain signals with high enough quality to apply in BCIs and discuss their advantages, disadvantages, and performance. Also, an overview is provided of the wide range of recent efforts to create new interface designs that do not induce fatigue or discomfort during everyday, long-term use. The basic principles of each technique are described, along with examples of current applications in BCI research. Finally, we demonstrate a user-friendly interface paradigm that uses dry capacitive electrodes that do not require any preparation procedure for EEG signal acquisition. We explore the capacitively measured steady-state visual evoked potential (SSVEP) response to an amplitude-modulated visual stimulus and the auditory steady-state response (ASSR) to an auditory stimulus modulated by familiar natural sounds to verify their availability for BCI. We report the first results of an online demonstration that adopted this ergonomic approach to evaluating BCI applications. We expect BCI to become a routine clinical, assistive, and commercial tool through advanced EEG monitoring techniques and innovative interface designs.
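
SSVEP-based classification of the kind demonstrated above rests on a simple principle: the stimulus frequency with the strongest spectral response in the EEG wins. The sketch below illustrates that principle on synthetic data; the bandwidth, epoch length, and candidate frequencies are assumptions, not the study's protocol.

```python
import numpy as np

def ssvep_power(eeg, fs, stim_hz, bw=0.5):
    """Band power around the stimulus frequency, as a fraction of total power."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    band = (freqs > stim_hz - bw) & (freqs < stim_hz + bw)
    return spec[band].sum() / spec.sum()

def classify(eeg, fs, candidates):
    """Pick the candidate stimulus frequency with the strongest response."""
    return max(candidates, key=lambda f: ssvep_power(eeg, fs, f))

# Synthetic 4 s epoch at 250 Hz: a weak 12 Hz SSVEP buried in noise
fs = 250
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(1)
eeg = 0.5 * np.sin(2 * np.pi * 12.0 * t) + rng.standard_normal(t.size)
print(classify(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # → 12.0
```

Practical systems improve on this with methods such as canonical correlation analysis and handle the harmonics of the stimulus frequency as well, but the frequency-tagging logic is the same.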


2013 ◽  
Vol 765-767 ◽  
pp. 1287-1290
Author(s):  
Ken Chen ◽  
Fang Wang ◽  
Fang Miao ◽  
Fu Chao Cheng

Spatial data exhibit the characteristics of massive volume, multiple sources, heterogeneity, and multiple temporal states; their organization and management mechanisms are an important research direction for Digital Earth. Managing grave emergencies involves gathering and handling a series of spatial and non-spatial data, which places higher demands on client-side information-gathering mechanisms. Existing client access mechanisms fall short: the C/S model lacks unified data exchange standards, while the B/S model cannot handle spatial data effectively. It is also difficult to display complex, massive spatial data visually and in real time, and efficiency depends entirely on the network environment and the performance of the storage equipment. To realize unified dispatching and efficient sharing of massive spatial data based on the principles of information gathering and service polymerization, we put forward the concept of a spatial data cloud based on the G/S model, supported by HGML as the standard and criterion for spatial data exchange, presentation, organization, storage, and management. On this basis, a new working mechanism can be established in which a geo-information browser aggregates multiple, massive, complex spatial and non-spatial data. This provides a lightweight client, the geo-information browser, built on the information-gathering and service-polymerization principles. It offers technical support for emergency management, including intelligent decision support, comprehensive research and judgment, and rapid disposal, and constitutes basic research toward a novel model of Digital Earth.

