focus detection
Recently Published Documents

Total documents: 65 (last five years: 16)
H-index: 13 (last five years: 1)

2021 ◽  
pp. 002383092110460
Author(s):  
Martin Ho Kwan Ip ◽  
Anne Cutler

Many different prosodic cues can help listeners predict upcoming speech. However, no research to date has assessed listeners’ processing of preceding prosody from different speakers. The present experiments examine (1) whether individual speakers (of the same language variety) are likely to vary in their production of preceding prosody; (2) to the extent that there is talker variability, whether listeners are flexible enough to use any prosodic cues signaled by the individual speaker; and (3) whether types of prosodic cues (e.g., F0 versus duration) vary in informativeness. Using a phoneme-detection task, we examined whether listeners can entrain to different combinations of preceding prosodic cues to predict where focus will fall in an utterance. We used unsynthesized sentences recorded by four female native speakers of Australian English who happened to have used different preceding cues to produce sentences with prosodic focus: a combination of pre-focus overall duration cues, F0 and intensity (mean, maximum, range), and longer pre-target interval before the focused word onset (Speaker 1), only mean F0 cues, mean and maximum intensity, and longer pre-target interval (Speaker 2), only pre-target interval duration (Speaker 3), and only pre-focus overall duration and maximum intensity (Speaker 4). Listeners could entrain to almost every speaker’s cues (the exception being Speaker 4’s use of only pre-focus overall duration and maximum intensity), and could use whatever cues were available even when one of the cue sources was rendered uninformative. Our findings demonstrate both speaker variability and listener flexibility in the processing of prosodic focus.
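As an illustration of the kinds of pre-focus prosodic measures the abstract mentions (mean, maximum, and range of F0), the sketch below summarizes a hypothetical F0 contour with NumPy. The contour values are invented for illustration; a real study would extract F0 with a pitch tracker, which typically leaves unvoiced frames undefined (here, NaN).

```python
import numpy as np

# Hypothetical F0 contour (Hz) over the pre-focus region of an utterance;
# NaNs stand in for unvoiced frames, as a pitch tracker would return.
f0 = np.array([210.0, 215.0, np.nan, 220.0, 232.0, np.nan, 228.0, 240.0])

voiced = f0[~np.isnan(f0)]
cues = {
    "f0_mean": voiced.mean(),                  # mean F0 cue
    "f0_max": voiced.max(),                    # maximum F0 cue
    "f0_range": voiced.max() - voiced.min(),   # F0 range cue
}
print(cues)
```

Analogous summaries (mean, maximum, range) over an intensity contour, plus the duration of the pre-target interval, would give the full cue set described for each speaker.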


2021 ◽  
Author(s):  
Si-Jia Xu ◽  
Yan-Zhao Duan ◽  
Yan-Hao Yu ◽  
Zhen-Nan Tian ◽  
Qi-Dai Chen

Author(s):  
Maksim Levental ◽  
Ryan Chard ◽  
Kyle Chard ◽  
Ian Foster ◽  
Gregg A. Wildenberg

2021 ◽  
Vol 15 ◽  
Author(s):  
Daichi Sone

Accurately localizing the epileptogenic focus in drug-resistant focal epilepsy has long been a clinically important challenge, because more intensive intervention at the detected focus, including resective neurosurgery, can provide significant seizure reduction. In addition to neurophysiological examinations, neuroimaging plays a crucial role in focus detection by providing morphological and neuroanatomical information. On the other hand, epileptogenic lesions may show only subtle or even invisible abnormalities on conventional MRI sequences, and efforts have therefore been made to better visualize and detect focus lesions. Recent advances in neuroimaging have attracted attention both for their potential to better visualize epileptogenic lesions and for the novel information they provide about the pathophysiology of epilepsy. Newer imaging techniques, including non-Gaussian diffusion models and arterial spin labeling, can non-invasively detect decreased neurite parameters or hypoperfusion within focus lesions, while advances in analytic technology may benefit both focus detection and the understanding of epilepsy; clinical and experimental applications of machine learning and network analysis in epilepsy are increasing. This review article sheds light on recent advances in neuroimaging for focal epilepsy, covering both technical progress in imaging and newer analytical methodologies, and discusses their potential usefulness in clinical practice.
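To make the machine-learning angle of the review concrete, here is a minimal sketch of focus classification from imaging-derived features. All numbers are synthetic, and the nearest-centroid rule is only a stand-in for the ML methods the review surveys; the two features reflect the abstract's point that focus regions tend to show decreased neurite parameters and hypoperfusion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-region features: [neurite density index, perfusion].
# Focus regions are simulated with lower values of both.
healthy = rng.normal(loc=[0.60, 50.0], scale=[0.05, 5.0], size=(40, 2))
focus = rng.normal(loc=[0.45, 38.0], scale=[0.05, 5.0], size=(40, 2))

X = np.vstack([healthy, focus])
y = np.array([0] * 40 + [1] * 40)  # 0 = healthy, 1 = epileptogenic focus

# Minimal nearest-centroid classifier: assign each region to the class
# whose feature centroid is closest.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

A real application would use cross-validated models on patient-derived region or voxel features, but the pipeline shape (features per region, labels, classifier, accuracy) is the same.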


2021 ◽  
Author(s):  
Weiwei Wang ◽  
Xinjie Zhao ◽  
Yanshu Jia

Abstract To improve the efficiency and accuracy of corona virus disease 2019 (COVID-19) diagnosis, and to study the application of artificial intelligence (AI) in COVID-19 diagnosis and public health management, computed tomography (CT) images of 200 COVID-19 patients were collected and input into a deep-learning-based AI-assisted diagnosis software, the "uAI COVID-19 intelligent auxiliary analysis system", for focus detection. The software automatically identifies and marks pneumonia lesions in batches and automatically calculates lesion volumes. The results show that the CT manifestations mainly involve multiple lobes and that, in terms of density, ground-glass opacity is the most common shadow. The manual method achieved a detection rate of 95.30%, a misdiagnosis rate of 0.20%, and a missed-diagnosis rate of 4.50%; the deep-learning-based AI focus-detection method achieved a detection rate of 99.76%, a misdiagnosis rate of 0.08%, and a missed-diagnosis rate of 0.08%. The software can therefore effectively identify COVID-19 lesions and provide objective lesion data to support COVID-19 diagnosis and public health management.
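The reported percentages are ratios over detected, misdiagnosed, and missed lesions. The sketch below shows the arithmetic with hypothetical counts (the abstract gives only the percentages, not the underlying counts); the chosen counts are picked so that the manual-method rates come out to the reported 95.30% / 0.20% / 4.50%.

```python
# Hypothetical lesion-level counts for illustration only.
def rates(true_positive, false_positive, false_negative):
    total = true_positive + false_positive + false_negative
    return {
        "detection_rate": true_positive / total,
        "misdiagnosis_rate": false_positive / total,
        "missed_rate": false_negative / total,
    }

manual = rates(true_positive=953, false_positive=2, false_negative=45)
print({k: f"{v:.2%}" for k, v in manual.items()})
```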


2021 ◽  
Author(s):  
Alisson Steffens Henrique ◽  
Esteban Walter Gonzalez Clua ◽  
Rodrigo Lyra ◽  
Anita Maria da Rocha Fernandes ◽  
Rudimar Luis Scaranto Dazzi

Game Analytics is an important research topic in digital entertainment. Data logs are usually the key to understanding players' behavior in a game. However, alpha and beta builds may need special attention to player focus and immersion. In this paper, we propose the use of player focus detection through the classification of pictures. Results show that pictures can be used as a new source of data for Game Analytics, giving developers a better understanding of player enjoyment during testing phases.
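The abstract does not describe the classifier used, so the sketch below is only a toy illustration of picture-based focus classification: each "picture" is a tiny synthetic grayscale array, summarized by one invented feature, and thresholded. The feature, threshold, and data are all assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for player pictures: 8x8 grayscale arrays. "Focused"
# pictures get extra intensity in the centre region.
def gaze_feature(img):
    # Invented feature: how much intensity is concentrated in the centre.
    return img[2:6, 2:6].mean() - img.mean()

focused = [rng.random((8, 8)) + np.pad(np.ones((4, 4)), 2) for _ in range(20)]
unfocused = [rng.random((8, 8)) for _ in range(20)]

threshold = 0.5  # assumed decision boundary for this toy feature
preds = [gaze_feature(im) > threshold for im in focused + unfocused]
labels = [True] * 20 + [False] * 20
accuracy = np.mean([p == l for p, l in zip(preds, labels)])
print(f"accuracy: {accuracy:.2f}")
```

A real system would of course feed actual player pictures to a trained image classifier rather than a hand-crafted threshold, but the evaluation loop (predictions against labels) is the same.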


Agronomy ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 731
Author(s):  
Pedro Faria ◽  
Telmo Nogueira ◽  
Ana Ferreira ◽  
Cristina Carlos ◽  
Luís Rosado

The increasingly alarming impacts of climate change are already apparent in viticulture, with unexpected pest outbreaks among the most concerning consequences. Pest monitoring is currently done by deploying chromotropic and delta traps, which attract insects present in the production environment and allow human operators to identify and count them. While these traps are still mostly monitored through visual inspection by winegrowers, smartphone image acquisition of the traps is starting to play a key role in assessing pest evolution, as well as enabling remote monitoring by taxonomy specialists to better assess the onset of outbreaks. This paper presents a new methodology that embeds artificial intelligence into mobile devices to establish hand-held image capture of insect traps for pest detection in vineyards. Our methodology combines different computer vision approaches that improve several aspects of image capture quality and adequacy, namely: (i) image focus validation; (ii) shadow and reflection validation; (iii) trap type detection; (iv) trap segmentation; and (v) perspective correction. A total of 516 images were collected, divided into three different datasets, and manually annotated to support the development and validation of the different functionalities. Following this approach, we achieved an accuracy of 84% for focus detection, accuracies of 80% and 96% for shadow/reflection detection (for delta and chromotropic traps, respectively), and a mean Jaccard index of 97% for trap segmentation.
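The Jaccard index reported for trap segmentation is the intersection-over-union of a predicted mask and a ground-truth mask. A minimal sketch on small synthetic masks (the masks here are invented; the paper's segmentation model is not described in the abstract):

```python
import numpy as np

# Jaccard index (intersection over union) between two binary masks,
# the metric reported for trap segmentation (mean 97%).
def jaccard(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0

truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True   # ground-truth trap region (36 px)
pred = np.zeros((10, 10), dtype=bool)
pred[3:9, 3:9] = True    # prediction shifted by one pixel

print(f"Jaccard index: {jaccard(pred, truth):.3f}")
```

Averaging this score over all annotated trap images gives the mean Jaccard index quoted in the abstract.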


2020 ◽  
Vol 138 ◽  
pp. 497-506
Author(s):  
Gelaysi Moreno ◽  
Jefferson S. Ascaneo ◽  
Jorge O. Ricardo ◽  
Leandro T. De La Cruz ◽  
Yaumel Arias ◽  
...  
