selection accuracy: Recently Published Documents

TOTAL DOCUMENTS: 64 (five years: 24)
H-INDEX: 14 (five years: 2)

Author(s): Kiwako Ito, Wynne Wong

Abstract Effects of phonetically variable input (PVI) in processing instruction (PI) training, together with the number of training items, were tested with a picture-selection eye-tracking task. Intermediate second language (L2) learners of French (n = 174) were tested before and after receiving short (24 items), medium (48 items), or long (96 items) training on the causative structure with either single- or multivoice input. PI improved picture-selection accuracy from about 10% to above 50% regardless of training size. Eye-tracking data showed a reduction in looks to the incorrect picture only after the short and medium training: the reduction surfaced regardless of voice variability after the short training, whereas multivoice training led to a greater reduction after the medium training. Long training did not yield a reliable reduction in incorrect looks regardless of voice variability. Taken together, these findings indicate that PVI does not hinder L2 syntactic learning and that learners may benefit more from relatively short training with PVI.


2021, Vol 2021, pp. 1-10
Author(s): Zhihuan Liu

To address the low shortest-path selection accuracy, long response time, and poor selection performance of current cold chain logistics transportation methods, a shortest-path selection algorithm based on an improved artificial bee colony is proposed. The improved algorithm initializes the food sources, reevaluates their fitness values, generates new food sources, and optimizes the objective function and the food-source evaluation strategy. On this basis, the swarm-adaptive mechanism of the particle swarm algorithm is introduced to randomly initialize the position and velocity of each particle. A dynamic detection factor and an octree algorithm are adopted to dynamically update paths through the modeled environment. Swarm-adaptive behavior is controlled according to the information-sharing mechanism between individual particles. After the maximum number of cycles, path planning is complete, the shortest path is output, and shortest-path selection for cold chain logistics transportation is realized. Experimental results show that the proposed algorithm selects shortest paths more effectively, improving selection accuracy and reducing selection time.
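The core bee-colony loop described above can be sketched as follows. This is a generic artificial bee colony minimizer, not the paper's improved variant (the dynamic detection factor, octree updates, and particle-swarm hybridization are omitted), and the toy cost function and all parameter values are illustrative assumptions.

```python
import random

random.seed(42)

# Minimal artificial bee colony sketch for minimizing a cost function.
# Shows only the generic loop: initialize food sources, perturb toward a
# random partner (employed-bee phase), keep improvements greedily, and
# abandon stagnant sources (scout phase).

def abc_minimize(cost, dim, bounds, n_sources=10, limit=20, max_cycles=100):
    lo, hi = bounds
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(sources, key=cost)
    for _ in range(max_cycles):
        for i, src in enumerate(sources):
            # Perturb one dimension toward (or away from) a random partner.
            k = random.randrange(n_sources)
            d = random.randrange(dim)
            cand = src[:]
            cand[d] += random.uniform(-1, 1) * (src[d] - sources[k][d])
            cand[d] = min(hi, max(lo, cand[d]))
            if cost(cand) < cost(src):      # greedy replacement
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:           # scout: abandon exhausted source
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min(sources + [best], key=cost)
    return best

# Toy cost: squared distance from the origin stands in for a path cost.
best = abc_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
print(best)
```

In the paper's setting the cost function would instead score candidate transportation paths; the greedy-replacement and scout phases are what the improved evaluation strategy refines.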


2021, Vol 2021, pp. 1-15
Author(s): Haixia Zhang, Wenao Cheng

With the continuous development of artificial intelligence technology, the value of massive power-system data has attracted wide attention. To address single-phase-to-ground fault line selection in resonant grounding systems, a fault line selection method based on a transfer-learning depthwise separable convolutional neural network (DSCNN) is proposed. The method uses two pixel-level image fusions to transform the three-phase current of each feeder into an RGB color image, which serves as the input to the DSCNN. After the DSCNN extracts features automatically, fault line selection is completed. Because not all power distribution systems can obtain large amounts of data in practical applications, a transfer learning strategy is adopted to transplant the trained line selection model; the small number of DSCNN parameters makes the model portable. Test results show that the proposed method not only extracts distinct features but also reaches a line selection accuracy of 99.76%. It also adapts well to different sampling frequencies, noise environments, and distribution network topologies, with a line selection accuracy above 97.43%.
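The portability claim rests on the parameter savings of depthwise separable convolution, which can be checked with simple arithmetic: a depthwise pass (one k×k filter per input channel) followed by a 1×1 pointwise pass replaces a full k×k convolution over all channel pairs. The layer sizes below are illustrative assumptions, not the paper's architecture.

```python
# Parameter counts (ignoring biases) for a standard convolution versus a
# depthwise separable convolution (depthwise + 1x1 pointwise).

def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k      # one k x k filter per input channel
    pointwise = c_in * c_out      # 1x1 convolution mixes channels
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)        # 64*128*9 = 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 64*9 + 64*128 = 8768
print(std, sep, round(std / sep, 1))  # 73728 8768 8.4
```

For these sizes the separable layer needs roughly 8x fewer parameters, which is why the trained model is small enough to transplant to data-poor distribution systems.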


2021, Vol 12
Author(s): Sang He, Yong Jiang, Rebecca Thistlethwaite, Matthew J. Hayden, Richard Trethowan, ...

Increasing the number of environments for phenotyping crop lines at earlier stages of breeding programs can improve selection accuracy, but this is often not feasible due to cost. In our study, we investigated a sparse phenotyping method that does not test all entries in all environments, but instead capitalizes on genomic prediction to predict missing phenotypes in additional environments without extra phenotyping expenditure. The breeders' main interest, response to selection, was directly simulated to evaluate the effectiveness of the sparse genomic phenotyping method in a wheat and a rice data set. Whether sparse phenotyping resulted in greater selection response depended on the correlations of phenotypes between environments. The sparse phenotyping method consistently showed statistically significantly higher responses to selection than complete phenotyping when the majority of completely phenotyped environments were negatively (wheat) or weakly positively (rice) correlated and any extension environment was highly positively correlated with any of the completely phenotyped environments. When all environments were positively correlated (wheat) or any highly positively correlated environments existed (wheat and rice), sparse phenotyping did not improve response. Our results indicate that genomics-based sparse phenotyping can improve selection response in the middle stages of crop breeding programs.
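The quantity simulated above, response to selection, can be illustrated with a toy Monte Carlo sketch: when the criterion used for selection is more strongly correlated with the true genetic value (as when well-correlated extension environments contribute information), selecting the top fraction yields a larger genetic gain. The accuracies, population size, and selected fraction below are assumptions for illustration, not values from the study.

```python
import random
import statistics

random.seed(1)

# Each candidate gets a true genetic value g ~ N(0, 1) and a selection
# criterion p correlated with g at a chosen accuracy. Selecting the top
# 10% on p and averaging the selected g values gives the realized
# response to selection.

def response(accuracy, n=2000, top=0.10):
    pop = []
    for _ in range(n):
        g = random.gauss(0, 1)
        p = accuracy * g + (1 - accuracy ** 2) ** 0.5 * random.gauss(0, 1)
        pop.append((p, g))
    selected = sorted(pop, reverse=True)[: int(n * top)]
    return statistics.mean(g for _, g in selected)  # mean genetic gain

low = response(accuracy=0.4)   # e.g. few, poorly correlated environments
high = response(accuracy=0.8)  # e.g. well-correlated environments added
print(round(low, 2), round(high, 2))
```

Doubling the accuracy roughly doubles the gain here, which mirrors why predicted phenotypes in well-correlated extension environments can raise response without extra phenotyping cost.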


2021
Author(s): Sang He, Yong Jiang, Rebecca Thistlethwaite, Matthew Hayden, Richard Trethowan, ...

Abstract Increasing the number of environments for phenotyping crop lines at earlier stages of breeding programs can improve selection accuracy, but this is often not feasible due to cost. In our study, we investigated a partial phenotyping strategy that does not test all entries in all environments, but instead capitalizes on genomic prediction to predict missing phenotypes in additional environments without extra phenotyping expenditure. The breeders' main interest, response to selection, was directly simulated to evaluate the effectiveness of the partial genomic phenotyping strategy in a wheat dataset. Whether the partial phenotyping strategy resulted in greater selection response depended on the correlations of phenotypes between environments. The partial phenotyping strategy consistently showed statistically significantly higher simulated responses to selection than complete phenotyping when the majority of completely phenotyped environments were negatively correlated and any extension environment was highly positively correlated with any of the completely phenotyped environments. Our results indicate that genomics-based partial phenotyping can improve selection response in the middle stages of crop breeding programs.


2021
Author(s): Michail Schwab, Aditeya Pandey, Michelle Borkin

In spite of growing demand for mobile data visualization, few design guidelines exist to address its many challenges, including small screens and low touch-interaction precision. Both of these challenges can restrict the number of data points a user can reliably select and view in more detail, which is a core requirement for interactive data visualization. In this study, we present a comparison of the conventional tap technique for selection with three variations that include visual feedback, to understand which interaction technique allows for optimal selection accuracy. Based on the results of the user study, we provide actionable solutions to improve interaction design for mobile visualizations. We find that visual feedback, such as selection with a handle, improves selection accuracy three- to fourfold compared to tap selection. At 75% accuracy, users could select a target item from among 176 items using the handle, but only from among 60 items using tap. On the other hand, techniques with visual feedback took about twice as long per selection as tap. We conclude that designers should use selection techniques with visual feedback when data density is high and improved selection precision is required for a visualization.
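A rough sense of what those item counts imply for touch-target size: the assumed 360 dp screen width below is a hypothetical value for illustration (the study does not state one); only the item counts (60 for tap, 176 for handle, both at 75% accuracy) come from the abstract.

```python
# Back-of-envelope per-item target widths implied by the reported
# 75%-accuracy limits, under an assumed 360 dp screen width.

screen_width_dp = 360  # assumption: a common phone width, not from the study

def item_width(n_items, width=screen_width_dp):
    return width / n_items

print(round(item_width(60), 2))   # tap: 6.0 dp per item
print(round(item_width(176), 2))  # handle: 2.05 dp per item
```

Under this assumption, the handle technique lets users resolve targets roughly a third the width that tap requires, consistent with the three- to fourfold accuracy improvement.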


2021, Vol 6
Author(s): Seohyun Kim, Xin Tong, Zijun Ke

Growth mixture modeling is a popular analytic tool for longitudinal data analysis. It detects latent groups based on the shapes of growth trajectories. Traditional growth mixture modeling assumes that outcome variables are normally distributed within each class. When data violate this normality assumption, however, it is well documented that traditional growth mixture modeling misleads researchers in determining the number of latent classes as well as in estimating parameters. To address nonnormal data in growth mixture modeling, robust methods based on various nonnormal distributions have been developed. As a new robust approach, growth mixture modeling based on conditional medians has been proposed. In this article, we present the results of two simulation studies that evaluate the performance of median-based growth mixture modeling in identifying the correct number of latent classes when data follow the normality assumption or contain outliers. We also compared the performance of median-based growth mixture modeling to that of traditional growth mixture modeling and of robust growth mixture modeling based on t distributions. For identifying the number of latent classes, three Bayesian model comparison criteria were considered: the deviance information criterion, the Watanabe-Akaike information criterion, and leave-one-out cross-validation. Our results showed that median-based and t-based growth mixture modeling maintained quite high model selection accuracy across all conditions in this study (ranging from 87% to 100%). In traditional growth mixture modeling, however, model selection accuracy was greatly influenced by the proportion of outliers. When the sample size was 500 and the proportion of outliers was 0.05, the correct model was preferred in about 90% of the replications, but the percentage dropped to about 40% as the proportion of outliers increased to 0.15.
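The robustness that motivates median-based estimation can be seen in a minimal example: a single outlying observation shifts a class mean substantially but leaves the median nearly unchanged. The trajectory values below are made up for illustration.

```python
import statistics

# Within-class location estimates with and without one outlier.
clean = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1]
contaminated = clean + [8.0]   # one outlier (~11% contamination)

print(round(statistics.mean(clean), 2), statistics.median(clean))
# -> 1.05 1.05  (mean and median agree on clean data)
print(round(statistics.mean(contaminated), 2), statistics.median(contaminated))
# -> 1.82 1.1   (mean is pulled toward the outlier; median barely moves)
```

This is the same mechanism, applied conditionally within each latent class, that keeps median-based growth mixture modeling's class-enumeration accuracy high as the outlier proportion grows.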


Author(s): Christopher S Graffeo, Avital Perry, Lucas P Carlstrom, Maria Peris-Celda, Amy Alexander, ...

Abstract Background: 3D printing, also known as additive manufacturing, has a wide range of applications. Reproduction of low-cost, high-fidelity, disease- or patient-specific models presents a key developmental area in simulation and education research for complex cranial surgery. Methods: Using cadaveric dissections as source materials, skull base models were created, printed, and tested for educational value in teaching complex cranial approaches. In this pilot study, assessments were made of the value of 3D printed models demonstrating the retrosigmoid and posterior petrosectomy approaches. Models were assessed and tested in a small cohort of neurosurgery residents (n = 3) using a series of 10 radiographic and 2 printed case examples, with efficacy determined via an agreement survey and approach selection accuracy. Results: All subjects agreed or strongly agreed, for all study endpoints, that 3D printed models provided significant improvements in understanding of neuroanatomic relationships and principles of approach selection compared to 2D dissections or patient cross-sectional imaging alone. Models were not superior to in-person, hands-on teaching. Mean approach selection accuracy was 90% (±13%) for the 10 imaging-based cases, and 92% (±7%) overall. Trainees strongly agreed that approach decision-making was enhanced by adjunctive use of 3D models for both radiographic and printed cases. Conclusion: 3D printed models incorporating skull base approaches and/or pathologies provide a compelling addition to the complex cranial education armamentarium. Based on our preliminary analysis, 3D printed models offer substantial potential pedagogical value as dissection guides, adjuncts to preoperative study and case preparation, or tools for approach selection training and evaluation.


2021, Vol 11 (1)
Author(s): Maia Jacobs, Melanie F. Pradier, Thomas H. McCoy, Roy H. Perlis, Finale Doshi-Velez, ...

Abstract Decision support systems embodying machine learning models offer the promise of an improved standard of care for major depressive disorder, but little is known about how clinicians' treatment decisions will be influenced by machine learning recommendations and explanations. We used a within-subject factorial experiment to present 220 clinicians with patient vignettes, each with or without a machine learning (ML) recommendation and one of multiple forms of explanation. We found that interacting with ML recommendations did not significantly improve clinicians' treatment selection accuracy, assessed as concordance with expert psychopharmacologist consensus, compared to baseline scenarios in which clinicians made treatment decisions independently. Interacting with incorrect recommendations paired with explanations that included limited but easily interpretable information did lead to a significant reduction in treatment selection accuracy compared to baseline questions. These results suggest that incorrect ML recommendations may adversely impact clinician treatment selections and that explanations are insufficient for addressing overreliance on imperfect ML algorithms. More generally, our findings challenge the common assumption that clinicians interacting with ML tools will perform better than either clinicians or ML algorithms individually.

