text input
Recently Published Documents

TOTAL DOCUMENTS: 338 (FIVE YEARS: 104)
H-INDEX: 19 (FIVE YEARS: 5)

2022 ◽  
Vol 28 (1) ◽  
pp. 30-36
Author(s):  
Matthias Zuchowski ◽  
Aydan Göller

Background/Aims Medical documentation is an important and unavoidable part of a health professional's working day. However, the time required for medical documentation is often viewed negatively, particularly by clinicians with heavy workloads. Digital speech recognition has become more prevalent and is being used to optimise working time. This study evaluated the time and cost savings associated with speech recognition technology, and its potential for improving healthcare processes. Methods Clinicians were directly observed while completing medical documentation. A total of 313 samples were collected, of which 163 used speech recognition and 150 used typing. The time taken to complete the medical form, the error rate and the error correction time were recorded. A survey of 31 clinicians gauged their level of acceptance of speech recognition software for medical documentation. Two-sample t-tests and Mann–Whitney U tests were performed to determine statistical trends and significance. Results Medical documentation using speech recognition software took 5.11 minutes on average to complete the form, compared with 8.9 minutes for typing, a significant time saving. The error rate was also lower with speech recognition software. However, 55% of clinicians surveyed stated that they would prefer to type their notes rather than use speech recognition software, and perceived its error rate to be higher than that of typing. Conclusions The results show both temporal and financial advantages of speech recognition over text input for medical documentation. However, the technology had low acceptance among staff, which could have implications for the uptake of this method.
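The abstract does not include the raw timing samples, but the two-sample comparison it mentions can be sketched with Welch's t-test; the timing values below are invented for illustration only, not the study's data.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    se2 = va / na + vb / nb                    # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical completion times in minutes (invented, roughly matching the reported means).
speech = [4.8, 5.3, 5.0, 5.6, 4.9, 5.1]
typing = [8.6, 9.2, 8.8, 9.1, 8.7, 9.0]
t, df = welch_t(speech, typing)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A strongly negative t with these samples mirrors the reported direction of the effect (speech recognition faster than typing).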


2021 ◽  
Vol 12 (1) ◽  
pp. 338
Author(s):  
Ömer Köksal ◽  
Bedir Tekinerdogan

Software bug report classification is a critical process for understanding the nature, implications, and causes of software failures. Classification also enables a fast and appropriate reaction to software bugs. However, large-scale projects must deal with a broad set of bugs of multiple types, and manually classifying them becomes cumbersome and time-consuming. Although several studies have addressed automated bug classification using machine learning techniques, they have mainly focused on academic case studies, open-source software, and unilingual text input. This paper presents our automated bug classification approach, applied and validated in an industrial case study. In contrast to earlier studies, ours targets a commercial software system with unstructured bilingual bug reports written in English and Turkish. The approach adopts and integrates machine learning (ML), text mining, and natural language processing (NLP) techniques to support the classification of software bugs. Compared to manual classification, our results show that bug classification can be automated and even performs better than manual classification, and that the presented approach and the corresponding tools effectively reduce manual classification time and effort.
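The paper's pipeline is not reproduced here, but the core idea, bag-of-words features feeding a probabilistic classifier over mixed-language text, can be sketched with a tiny Naive Bayes model. The bug reports, labels, and Turkish examples below are invented for illustration, not the study's data.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns class priors and per-class word counts."""
    priors = Counter()
    words = defaultdict(Counter)
    for text, label in docs:
        priors[label] += 1
        words[label].update(text.lower().split())
    return priors, words

def classify(text, priors, words):
    """Multinomial Naive Bayes with Laplace smoothing; language-agnostic tokens."""
    vocab = {w for c in words.values() for w in c}
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / sum(priors.values()))
        total = sum(words[label].values())
        for w in text.lower().split():
            lp += math.log((words[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy bilingual reports (hypothetical bug categories).
docs = [
    ("null pointer crash on startup", "crash"),
    ("uygulama açılışta çöküyor", "crash"),   # "app crashes on startup"
    ("button label misaligned in dialog", "ui"),
    ("düğme metni hizalanmıyor", "ui"),       # "button text misaligned"
]
priors, words = train(docs)
print(classify("crash on startup", priors, words))  # → crash
```

Because tokens from both languages share one vocabulary, the same model handles English and Turkish reports without separate pipelines, which is the spirit of the bilingual setup described above.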


2021 ◽  
Vol 1 (3) ◽  
pp. 21-41
Author(s):  
Yudan Su

Purpose In recent years, the incorporation of multimedia into linguistic input has opened a new horizon in the field of second language acquisition (SLA). For reading, the advent of virtual reality (VR) technology extends the reading repertoire by engaging learners with auditory, visual and tactile multimodal input. This study examined the pedagogical potential of VR technology for enhancing learners' reading comprehension. Methods Three classes comprising 131 Chinese eighth-grade EFL students participated. The study adopted a mixed-methods design, triangulating pre-, post- and retention tests, questionnaires, learning journals and interview data to compare three modes of text input in terms of learners' reading performance and cognitive processing. Results VR-assisted multimodal input significantly improved learners' macrostructural comprehension in the short term, whereas there was no significant difference in retention performance. The findings revealed that reading multimodal text did not exceed learners' memory capacity or impose extraneous cognitive load, and participants mainly reported favorably on the efficacy of multimodal input in assisting their reading. Conclusion This study is a first attempt to integrate VR technology with input presentation and cognitive processing, and offers a new line of theorization of VR-assisted multimodal learning in the cognitive field of SLA.


Author(s):  
HARIS AL QODRI MAARIF

A Language Processing Unit (LPU) is a system built to process text-based data so that it complies with the grammatical rules of sign language, and it was developed as an important part of a sign language synthesizer. Sign language (SL) uses grammatical rules that differ from those of spoken/verbal language, retaining only the key words that deaf and hard-of-hearing people need to understand a sentence. The LPU must therefore classify words to produce grammatically correct sentences for the synthesizer. However, existing language processing units in SL synthesizers suffer from time lag and complexity problems, resulting in long processing times: computational time and success rate are in a trade-off, so processing time grows as a higher success rate is pursued. This paper proposes an adaptive LPU that converts spoken-language sentences into Malaysian SL grammatical structure with relatively fast processing time and a good success rate. It combines n-grams, NLP, and Hidden Markov Models (HMM)/Bayesian networks as the classifier for the text-based input. The proposed LPU achieved efficient (fast) processing times and a good success rate compared with LPUs based on other edit distances (Mahalanobis, Levenshtein, and Soundex). The system was tested on 130 text-input sentences of 3 to 10 words. Results showed that the proposed LPU achieved around 1.497 ms processing time with an average success rate of 84.23% for sentences of up to ten words.
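For context, the Levenshtein edit distance named in the comparison above can be sketched in a few lines; the toy lexicon and the "recognized" word below are hypothetical, not from the paper.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (0 if equal)
        prev = cur
    return prev[-1]

# Match a hypothetical input word against a small lexicon by minimum edit distance.
lexicon = ["makan", "minum", "pergi"]
word = "makam"
best = min(lexicon, key=lambda w: levenshtein(word, w))
print(best)  # → makan
```

A word-matching step like this is one plausible use of edit distance inside such a pipeline; the paper's actual integration of the distance measures is not detailed in the abstract.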


Author(s):  
Valentin Benzing ◽  
Sanaz Nosrat ◽  
Alireza Aghababa ◽  
Vassilis Barkoukis ◽  
Dmitriy Bondarev ◽  
...  

The COVID-19 pandemic and the associated governmental restrictions suddenly changed everyday life and potentially affected exercise behavior. The aim of this study was to explore whether individuals changed their preference for certain types of physical exercise during the pandemic and to identify risk factors for inactivity. An international online survey with 13,881 adult participants from 18 countries/regions was conducted during the initial COVID-19 related lockdown (between April and May 2020). Data on types of exercise performed during and before the initial COVID-19 lockdown were collected, translated, and categorized (free-text input). Sankey charts were used to investigate these changes, and a mixed-effects logistic regression model was used to analyze risks for inactivity. Many participants managed to continue exercising but switched from playing games (e.g., football, tennis) to running, for example. In our sample, the most popular exercise types during the initial COVID-19 lockdown included endurance, muscular strength, and multimodal exercise. Regarding risk factors, higher education, living in rural areas, and physical activity before the COVID-19 lockdown reduced the risk for inactivity during the lockdown. In this relatively active multinational sample of adults, most participants were able to continue their preferred type of exercise despite restrictions, or changed to endurance type activities. Very few became physically inactive. It seems people can adapt quickly and that the constraints imposed by social distancing may even turn into an opportunity to start exercising for some. These findings may be helpful to identify individuals at risk and optimize interventions following a major context change that can disrupt the exercise routine.
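Fitting the mixed-effects logistic regression mentioned above requires the raw survey data, but the protective association of prior physical activity can be illustrated with a plain odds ratio on a 2x2 table. The counts below are invented for illustration and are not the study's data.

```python
import math

# Hypothetical 2x2 counts:
#   rows   = physically active before lockdown (yes / no)
#   column = became inactive during lockdown (count) vs. stayed active (count)
a, b = 30, 970    # active before: 30 became inactive, 970 stayed active
c, d = 120, 380   # not active before: 120 inactive, 380 active

odds_ratio = (a * d) / (b * c)                 # OR < 1 means prior activity is protective
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR) (Woolf's method)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An odds ratio well below 1 with a confidence interval excluding 1 would correspond to the reported finding that pre-lockdown activity reduced the risk of inactivity; the real analysis additionally adjusted for education, residence, and country-level random effects.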


2021 ◽  
Author(s):  
◽  
Samuel Hindmarsh

Assistive technologies aim to help those who cannot perform various everyday tasks without tremendous difficulty, including, among other things, communicating with others. Augmentative and adaptive communication (AAC) is a branch of assistive technologies that aims to make communication easier for people whose disabilities would otherwise prevent them from communicating efficiently (or, in some cases, at all). The input rate of these communication aids, however, is often constrained by the limited number of inputs found on the devices and the speed at which the user can toggle these inputs. A similar restriction is often found on smaller devices such as mobile phones: these also require the user to enter text with a smaller input set, which often results in slower typing speeds. Several technologies exist to improve the text input rates of these devices: ambiguous keyboards, which let users enter text with a single keypress per character while the system predicts the intended word; word prediction systems, which attempt to predict the word the user is entering before it is complete; and word auto-completion systems, which complete the entry of predicted words before all the corresponding inputs have been pressed. This thesis discusses the design and implementation of a system incorporating these three assistive input methods, and poses several questions regarding the nature of these technologies. The designed system is found to outperform a standard computer keyboard in many situations, a vast improvement over many other AAC technologies.
A set of experiments was designed and performed to answer the proposed questions; the results show that the corpus used to train the system, along with other tuning parameters, has a great impact on the system's performance. Finally, the thesis also discusses the impact that corpus size has on the memory usage and response time of the system.
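The ambiguous-keyboard idea described above can be sketched as a T9-style lookup: every word maps to a digit sequence, and the candidates for a sequence are ranked by corpus frequency. The key layout, word list, and frequencies below are hypothetical, not the thesis's system.

```python
from collections import defaultdict

# T9-style digit groups (a common layout; real devices vary).
KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
CHAR_TO_KEY = {ch: k for k, chars in KEYS.items() for ch in chars}

def build_index(corpus_freq):
    """Map each key sequence to its candidate words, most frequent first."""
    index = defaultdict(list)
    for word, freq in corpus_freq.items():
        keys = "".join(CHAR_TO_KEY[ch] for ch in word)
        index[keys].append((freq, word))
    return {k: [w for _, w in sorted(v, reverse=True)] for k, v in index.items()}

# Tiny invented frequency list; a real system trains on a large corpus,
# which is exactly why the training corpus matters so much (see above).
index = build_index({"home": 120, "good": 80, "gone": 95, "hood": 10})
print(index["4663"])  # → ['home', 'gone', 'good', 'hood']
```

All four words share the sequence 4663, so disambiguation rests entirely on the frequency ranking, which illustrates why corpus choice dominates the performance of such systems.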



2021 ◽  
Author(s):  
Sloan Swieso ◽  
Powen Yao ◽  
Mark Miller ◽  
Adityan Jothi ◽  
Andrew Zhao ◽  
...  

2021 ◽  
Vol 38 (5) ◽  
pp. 1413-1421
Author(s):  
Vallamchetty Sreenivasulu ◽  
Mohammed Abdul Wajeed

Image-based spam emails readily evade text-based spam filters, and more and more spammers are adopting this technique, so recognizing image content is essential when assessing an email. Web-based social networking is a method of communication between information owners and end users for online exchanges that use social network data in the form of images and text. Nowadays, information reaches users faster through social networks, and the spread of fraudulent material on these networks has become a major issue. It is critical to determine which features filters require to combat spammers. Spammers also embed text in images, causing text filters to fail; detecting visual spam content has thus become a prominent research topic in Internet spam filtering. The suggested approach includes a supplementary detection engine that uses images as well as text input. This paper proposes a system for assessing information, detecting fraud-based emails, and preventing their distribution to end users, with the aim of enhancing data protection and preventing safety problems. The proposed model uses machine learning and Convolutional Neural Network (CNN) methods to recognize fraudulent information and prevent it from being transmitted to end users.
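The supplementary-engine design pairs an image classifier with a text filter. As a much-simplified sketch of that late-fusion idea (the trigger-word list, weights, and threshold are invented, and the image score here is a placeholder for what the CNN would produce):

```python
def text_spam_score(text, spam_words=frozenset({"free", "winner", "click", "prize"})):
    """Fraction of tokens that are known spam trigger words (toy heuristic)."""
    tokens = text.lower().split()
    return sum(t in spam_words for t in tokens) / max(len(tokens), 1)

def combined_verdict(text_score, image_score, threshold=0.5, w_text=0.5):
    """Late fusion: weighted average of the text and image engines' scores."""
    return (w_text * text_score + (1 - w_text) * image_score) >= threshold

# image_score would come from the CNN on the attached images; 0.9 is hypothetical.
print(combined_verdict(text_spam_score("click here winner prize"), image_score=0.9))
```

The point of the second engine is visible here: an email whose text score is near zero (because the spam text lives inside an image) can still be flagged when the image engine scores it highly.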


2021 ◽  
Vol 14 (10) ◽  
pp. 6541-6569
Author(s):  
Phillip D. Alderman

Abstract. The Decision Support System for Agrotechnology Transfer Cropping Systems Model (DSSAT-CSM) is a widely used crop modeling system that has been integrated into large-scale modeling frameworks. Existing frameworks generate spatially explicit simulated outputs at grid points through an inefficient process of translation from binary spatially referenced inputs to point-specific text input files, followed by translation and aggregation back from point-specific text output files to binary spatially referenced outputs. The main objective of this paper was to document the design and implementation of a parallel gridded simulation framework for DSSAT-CSM. A secondary objective was to provide preliminary analysis of execution time and scaling of the new parallel gridded framework. The parallel gridded framework includes improved code for model-internal data transfer, gridded input–output with the Network Common Data Form (NetCDF) library, and parallelization of simulations using the Message Passing Interface (MPI). Validation simulations with the DSSAT-CSM-CROPSIM-CERES-Wheat model revealed subtle discrepancies in simulated yield due to the rounding of soil parameters in the input routines of the standard DSSAT-CSM. Utilizing NetCDF for direct input–output produced a 3.7- to 4-fold reduction in execution time compared to R- and text-based input–output. Parallelization improved execution time for both versions with between 12.2- (standard version) and 13.4-fold (parallel gridded version) speed-up when comparing 1 to 16 compute cores. Estimates of parallelization of computation ranged between 99.2 % (standard version) and 97.3 % (parallel gridded version), indicating potential for scaling to higher numbers of compute cores.
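The reported 1-to-16-core speed-ups imply a parallel fraction via Amdahl's law. A single-point inversion, sketched below, gives estimates in the same high-90s range as the paper's figures, though not identical values, since the paper's estimates were presumably derived from runs at several core counts.

```python
def amdahl_speedup(p, n):
    """Predicted speed-up on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law to estimate p from one measured speed-up on n cores."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

# Measured 1-to-16-core speed-ups reported in the abstract.
for label, s in [("standard", 12.2), ("parallel gridded", 13.4)]:
    print(f"{label}: p ~ {parallel_fraction(s, 16):.1%}")
```

Both estimates land above 97%, consistent with the abstract's conclusion that the framework should scale to higher core counts before the serial fraction dominates.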

