accuracy speed
Recently Published Documents

TOTAL DOCUMENTS: 126 (five years: 48)
H-INDEX: 17 (five years: 2)

Author(s):  
Tabitha Cheng ◽  
Katherine Staats ◽  
Amy H. Kaji ◽  
Nicole D'Arcy ◽  
Kian Niknam ◽  
...  

Author(s):  
Ivan Rodriguez-Conde ◽  
Celso Campos ◽  
Florentino Fdez-Riverola

Abstract Convolutional neural networks have pushed forward image analysis research and computer vision over the last decade, and constitute the state-of-the-art approach to object detection today. The design of increasingly deeper and wider architectures has made it possible to achieve unprecedented levels of detection accuracy, albeit at the cost of both a dramatic computational burden and a large memory footprint. In such a context, cloud systems have become a mainstream technological solution due to their tremendous scalability, providing researchers and practitioners with virtually unlimited resources. However, these resources are typically made available as remote services that must be accessed over the network, compromising the speed of response, availability, and security of the implemented solution. In view of these limitations, the on-device paradigm has emerged as a recent yet widely explored alternative, pursuing more compact and efficient networks that enable the derived models to run directly on resource-constrained client devices. This study provides an up-to-date review of the most relevant scientific research carried out in this vein, restricted to the object detection problem. In particular, the paper contributes a comprehensive architectural overview of both the existing lightweight object detection frameworks targeted at mobile and embedded devices, and the underlying convolutional neural networks that make up their internal structure. More specifically, it addresses the main structural-level strategies used to conceive the various components of a detection pipeline (i.e., backbone, neck, and head), as well as the most salient techniques proposed for adapting such structures and the resulting architectures to more austere deployment environments. Finally, the study concludes with a discussion of the specific challenges and next steps toward a more convenient accuracy–speed trade-off.
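Where the review discusses the backbone/neck/head decomposition of a detection pipeline, the following sketch makes the structure concrete. It is a minimal illustration assuming PyTorch, with hypothetical layer sizes and a depthwise-separable backbone as one common lightweight strategy; it is not any specific framework covered by the review.

```python
# Minimal structural sketch of a one-stage detector's backbone/neck/head
# decomposition. Hypothetical layer sizes; illustrative only.
import torch
import torch.nn as nn

def dw_separable(c_in, c_out, stride=1):
    # Depthwise-separable convolution: a common lightweight backbone block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False),
        nn.BatchNorm2d(c_in), nn.ReLU(inplace=True),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyDetector(nn.Module):
    def __init__(self, num_classes=20, num_anchors=3):
        super().__init__()
        # Backbone: extracts feature maps at two scales.
        self.stage1 = nn.Sequential(dw_separable(3, 32, 2), dw_separable(32, 64, 2))
        self.stage2 = dw_separable(64, 128, 2)
        # Neck: fuses the deep map back into the shallow one (FPN-style).
        self.lateral = nn.Conv2d(128, 64, 1)
        # Head: per-anchor box offsets (4) + objectness (1) + class scores.
        self.head = nn.Conv2d(64, num_anchors * (5 + num_classes), 1)

    def forward(self, x):
        c1 = self.stage1(x)                      # stride-4 features
        c2 = self.stage2(c1)                     # stride-8 features
        up = nn.functional.interpolate(self.lateral(c2), size=c1.shape[-2:])
        return self.head(c1 + up)                # raw detection map

preds = TinyDetector()(torch.randn(1, 3, 224, 224))
print(preds.shape)  # torch.Size([1, 75, 56, 56])
```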


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zachary P. Neal ◽  
Rachel Domagalski ◽  
Bruce Sagan

Abstract Projections of bipartite or two-mode networks capture co-occurrences, and are used in diverse fields (e.g., ecology, economics, bibliometrics, politics) to represent unipartite networks. A key challenge in analyzing such networks is determining whether an observed number of co-occurrences between two nodes is significant, and therefore whether an edge exists between them. One approach, the fixed degree sequence model (FDSM), evaluates the significance of an edge's weight by comparison to a null model in which the degree sequences of the original bipartite network are fixed. Although the FDSM is an intuitive null model, it is computationally expensive because it requires Monte Carlo simulation to estimate each edge's p-value, and is therefore impractical for large projections. In this paper, we explore four potential alternatives to the FDSM: the fixed fill model, the fixed row model, the fixed column model, and the stochastic degree sequence model (SDSM). We compare these models to the FDSM in terms of accuracy, speed, statistical power, similarity, and ability to recover known communities. We find that the computationally fast SDSM offers a statistically conservative but close approximation of the computationally impractical FDSM under a wide range of conditions, and that it correctly recovers a known community structure even when the signal is weak. Therefore, although each backbone model may have particular applications, we recommend the SDSM for extracting the backbone of bipartite projections when the FDSM is impractical.
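As a concrete illustration of the FDSM idea, the sketch below estimates an edge's p-value by comparing its observed co-occurrence count to counts from degree-preserving randomizations of the bipartite matrix. It is a hedged toy implementation in Python/NumPy (checkerboard-swap randomization, illustrative trial counts), not the authors' code or the exact Monte Carlo scheme they benchmark.

```python
# Toy FDSM-style significance test: null distribution from random bipartite
# matrices with the same row and column sums (degree sequences).
import numpy as np

rng = np.random.default_rng(0)

def degree_preserving_shuffle(B, swaps=1000):
    """Randomize a binary matrix via checkerboard swaps, fixing all degrees."""
    B = B.copy()
    n, m = B.shape
    for _ in range(swaps):
        r = rng.choice(n, 2, replace=False)
        c = rng.choice(m, 2, replace=False)
        sub = B[np.ix_(r, c)]
        # A 2x2 "checkerboard" can be flipped without changing row/col sums.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            B[np.ix_(r, c)] = 1 - sub
    return B

def fdsm_pvalue(B, i, j, trials=500):
    """P(null co-occurrence >= observed) for rows i, j of the projection."""
    observed = (B[i] * B[j]).sum()
    null = [(S[i] * S[j]).sum()
            for S in (degree_preserving_shuffle(B) for _ in range(trials))]
    return (np.sum(np.array(null) >= observed) + 1) / (trials + 1)

B = (rng.random((12, 30)) < 0.3).astype(int)   # toy bipartite matrix
print(fdsm_pvalue(B, 0, 1))
```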


2021 ◽  
Vol 7 ◽  
pp. e691
Author(s):  
Jorge Azorin-Lopez ◽  
Marc Sebban ◽  
Andres Fuster-Guillo ◽  
Marcelo Saval-Calvo ◽  
Amaury Habrard

Planes are the core geometric model, present everywhere in the three-dimensional real world. There are many examples of man-made constructions based on planar patches: facades, corridors, packages, boxes, etc. In these constructions, planar patches must satisfy orthogonality constraints by design (e.g., walls with a ceiling and floor). The hypothesis is that, by exploiting orthogonality constraints where the scene permits, we can reconstruct a set of points captured by 3D cameras with high accuracy and a low response time. We introduce a method that iteratively fits a planar model in the presence of noise, according to three main steps: a clustering-based unsupervised step that builds pre-clusters from the set of (noisy) points; a linear regression-based supervised step that optimizes a set of planes from the clusters; and a reassignment step that challenges the members of the current clusters in a way that minimizes the residuals of the linear predictors. The main contribution is that the method can simultaneously fit different planes in a point cloud, providing a good accuracy/speed trade-off even in the presence of noise and outliers, with a smaller processing time than previous methods. An extensive experimental study on synthetic data compares our method with the most recent and representative methods. The quantitative results provide indisputable evidence that our method can generate very accurate models faster than the baseline methods. Moreover, two case studies for reconstructing planar-based objects using a Kinect sensor provide qualitative evidence of the efficiency of our method in real applications.
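The fit/reassign loop described above follows the general shape of a k-planes scheme. The sketch below is a minimal NumPy illustration under that assumption, not the authors' exact method: each cluster's plane is fit by total least squares (SVD), and points are reassigned to the plane with the smallest residual.

```python
# Minimal k-planes loop: pre-cluster -> fit planes -> reassign -> repeat.
import numpy as np

def fit_plane(P):
    """Best-fit plane through points P (n x 3): returns centroid, unit normal."""
    c = P.mean(axis=0)
    # Smallest right singular vector of the centered cloud is the normal.
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[-1]

def k_planes(P, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(P))          # crude pre-clustering
    for _ in range(iters):
        planes = []
        for j in range(k):
            Q = P[labels == j]
            if len(Q) < 3:                          # reseed a degenerate cluster
                Q = P[rng.choice(len(P), 3, replace=False)]
            planes.append(fit_plane(Q))
        # Reassign each point to the plane with the smallest residual.
        d = np.stack([np.abs((P - c) @ n) for c, n in planes], axis=1)
        new = d.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels, planes

# Toy usage: noisy points on two orthogonal planes (z ~ 0 and x ~ 0).
rng = np.random.default_rng(1)
A = np.c_[rng.uniform(0, 1, (200, 2)), 0.01 * rng.standard_normal(200)]
B = np.c_[0.01 * rng.standard_normal(200), rng.uniform(0, 1, (200, 2))]
labels, planes = k_planes(np.vstack([A, B]), k=2)
```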


Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1706
Author(s):  
Pu Lan ◽  
Kewen Xia ◽  
Yongke Pan ◽  
Shurui Fan

An improved equilibrium optimizer (IEO) algorithm is proposed in this paper to address the premature and slow convergence of the standard equilibrium optimizer (EO). Firstly, a highly stochastic chaotic mechanism is adopted to initialize the population and expand its coverage of the search range. Secondly, the ability of the global search to escape local optima is enhanced by assigning adaptive weights and setting adaptive convergence factors. In addition, 25 classical benchmark functions are used to validate the algorithm. As revealed by the analysis of the accuracy, speed, and stability of convergence, the proposed IEO algorithm significantly outperforms other meta-heuristic algorithms. In practice, the distribution is asymmetric because most logging data are unlabeled, so traditional classification models have difficulty accurately predicting the location of oil layers. In this paper, the oil layers relevant to oil exploration are predicted using long short-term memory (LSTM) networks. Due to the large amount of data involved, however, the parameters are difficult to tune. For this reason, the IEO algorithm is applied to optimize the parameters of the LSTM for improved performance, and the resulting IEO-LSTM is applied to oil layer prediction. As indicated by the results, the proposed model outperforms models tuned with currently popular optimization algorithms, including particle swarm optimization (PSO) and the genetic algorithm (GA), in terms of accuracy, absolute error, root mean square error, and mean absolute error.
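As one concrete example of the ingredients named above, the sketch below shows a logistic-map chaotic initialization of the population, a common way to spread initial candidates over the search range. The map parameter r = 4.0, the seeds, and the bounds are illustrative assumptions, not the paper's exact settings.

```python
# Chaotic population initialization via the logistic map.
import numpy as np

def chaotic_init(pop_size, dim, lo, hi, r=4.0):
    # Distinct, non-degenerate seeds in (0, 1), one per dimension.
    x = (0.137 + 0.231 * np.arange(dim)) % 1.0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = r * x * (1.0 - x)          # logistic map iterate, stays in (0, 1)
        pop[i] = lo + x * (hi - lo)    # scale into the search bounds
    return pop

print(chaotic_init(5, 3, lo=-10.0, hi=10.0))
```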


2021 ◽  
Vol 34 (05) ◽  
pp. 345-352
Author(s):  
Nadine Hachach-Haram ◽  
Danilo Miskovic

Abstract Compared with other fields, the adoption of robotics in colorectal surgery remains relatively slow. One reason is that the expected benefits of robotics, such as greater accuracy, speed, and better patient outcomes, are not borne out by the evidence comparing robotic colorectal procedures with conventional laparoscopy. The evidence also suggests that outcomes with robotic colorectal procedures depend on the experience of the surgeon, indicating that a steep learning curve is acting as a barrier to realizing the benefits of robotics. In this paper, we analyze exactly why surgeon skill and proficiency are such critical factors in colorectal surgery, especially for the most complex procedures associated with cancer. Shortening the learning curve is crucial both for the adoption of the technique and for the efficient use of expert trainers. Looking beyond the basics of training and embracing a new generation of digital learning technologies that facilitate peer-to-peer collaboration and development beyond the confines of individual institutions may be an important contributor to achieving these goals in the future.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Saeed Bajalan ◽  
Nastaran Bajalan

The main aim of this study is to introduce a 2-layered artificial neural network (ANN) for solving the Black–Scholes partial differential equation (PDE) of either fractional or ordinary order. Firstly, a discretization method is employed to turn the model into a sequence of ordinary differential equations (ODEs). Subsequently, each of these ODEs is solved with the aid of an ANN. Adam optimization is employed as the learning paradigm, since it adaptively slows the optimization process as it approaches the optimum solution. The model also takes advantage of fine-tuning to speed up the process and of domain mapping to handle the infinite-domain issue. Finally, the accuracy, speed, and convergence of the method on several types of the Black–Scholes model are reported.
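A hedged sketch of this pipeline: after discretizing time (here, in the time-to-maturity variable), each step leaves an ODE in the asset price S, which a small 2-layer network solves by minimizing the equation residual with Adam. The coefficients, payoff, and network sizes below are illustrative assumptions, not the paper's configuration.

```python
# One implicit time step of the (ordinary) Black-Scholes model, solved by a
# 2-layer network minimizing the residual with Adam. After the change of
# variables to time-to-maturity, the PDE reads V_t = 0.5*s^2*S^2*V_SS + r*S*V_S - r*V.
import torch

r, sigma, dt = 0.05, 0.2, 0.01
S = torch.linspace(1.0, 200.0, 256).reshape(-1, 1).requires_grad_(True)
V_prev = torch.clamp(S.detach() - 100.0, min=0.0)        # call payoff at the prior step

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))         # the 2-layer ANN
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    V = net(S)
    dV = torch.autograd.grad(V.sum(), S, create_graph=True)[0]
    d2V = torch.autograd.grad(dV.sum(), S, create_graph=True)[0]
    # Backward-Euler residual of the Black-Scholes operator at this step.
    res = (V - V_prev) / dt - (0.5 * sigma**2 * S**2 * d2V + r * S * dV - r * V)
    loss = (res**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```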


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4803
Author(s):  
Addie Ira Borja Parico ◽  
Tofael Ahamed

This study aimed to produce a robust real-time pear fruit counter for mobile applications using only RGB data, variants of the state-of-the-art object detection model YOLOv4, and the multiple-object-tracking algorithm Deep SORT. The study also provides a systematic and pragmatic methodology for choosing the model best suited to a desired application in the agricultural sciences. In terms of accuracy, YOLOv4-CSP was the optimal model, with an AP@0.50 of 98%. In terms of speed and computational cost, YOLOv4-tiny was the ideal model, with a speed of more than 50 FPS and FLOPS of 6.8–14.5. Considering the balance of accuracy, speed, and computational cost, YOLOv4 was found to be the most suitable: it had the highest accuracy metrics while satisfying a real-time speed of at least 24 FPS. Between the two methods of counting with Deep SORT, the unique-ID method was the more reliable, with an F1count of 87.85%, because YOLOv4 had a very low false-negative rate in detecting pear fruits. The ROI-line method is more restrictive, but flickering in detection meant that it failed to count some pears even though they were detected.
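The unique-ID counting method can be illustrated in a few lines of Python: every distinct track ID ever emitted by the tracker counts as one fruit. The tracker interface below is a stand-in for illustration, not the actual Deep SORT API.

```python
# Count fruits as the number of distinct track IDs seen across all frames.
from typing import Iterable, Tuple

def count_unique_ids(tracked_frames: Iterable[Iterable[Tuple[int, float, float, float, float]]]) -> int:
    """tracked_frames yields, per frame, (track_id, x1, y1, x2, y2) tuples."""
    seen = set()
    for frame in tracked_frames:
        for track_id, *_ in frame:
            seen.add(track_id)
    return len(seen)

# Toy usage: IDs 1 and 2 persist across frames, ID 3 appears later -> 3 pears.
frames = [
    [(1, 10, 10, 50, 50), (2, 60, 10, 90, 40)],
    [(1, 12, 11, 52, 51), (2, 61, 12, 91, 41)],
    [(1, 14, 12, 54, 52), (3, 100, 30, 130, 60)],
]
print(count_unique_ids(frames))  # 3
```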


2021 ◽  
Vol 15 ◽  
Author(s):  
Yanling Zhao ◽  
Jiashuo Zhang

Background: Bearings are widely used in the automotive, aerospace, and electronics industries, among other fields. Bearing pre-tightening is one of the most critical technologies, with an important influence on the accuracy, speed, stiffness, and temperature rise of the spindle. A proper pre-tightening force can eliminate bearing clearance and improve the machining accuracy and efficiency of machine tools. The development of bearing pre-tightening mechanisms has therefore received growing attention.
Objective: To improve the processing efficiency and accuracy of the bearing system by continuously enhancing the structure and function of the bearing pre-tightening device.
Methods: This paper reviews various current representative patents related to bearing pre-tightening mechanisms.
Results: Through the investigation of several patents on bearing pre-tightening devices, the principles and effects of different devices are classified and reviewed, and the future development trend of such devices is discussed.
Conclusion: Optimizing the bearing pre-tightening device helps improve processing efficiency and quality. A controllable pre-tightening force is the main research direction, and more related patents will be invented in the future.


2021 ◽  
Vol 3 (2) ◽  
Author(s):  
Immo Weber ◽  
Hauke Niehaus ◽  
Kristina Krause ◽  
Lena Molitor ◽  
Martin Peper ◽  
...  

Abstract Whereas the effect of vagal nerve stimulation on emotional states is well established, its effect on cognitive functions is still unclear. Recent rodent studies show that vagal activation enhances reinforcement learning and neuronal dopamine release. The influence of vagal nerve stimulation on reinforcement learning in humans is still unknown. Here, we studied the effect of transcutaneous vagal nerve stimulation on reinforcement learning in eight long-standing seizure-free epilepsy patients, using a well-established forced-choice reward-based paradigm in a cross-sectional, within-subject study design. We investigated the effects of vagal nerve stimulation on overall accuracy using non-parametric cluster-based permutation tests, and modelled sub-components of the decision process using drift-diffusion modelling. We found higher accuracies in the vagal nerve stimulation condition than under sham stimulation. Modelling suggests a stimulation-dependent increase in reward sensitivity and a shift of the accuracy-speed trade-off towards maximizing rewards. Moreover, vagal nerve stimulation was associated with increased non-decision times, suggesting enhanced sensory or attentional processing. No differences in starting bias were detected between the conditions. Accuracies in the extinction phase were higher in later trials of the vagal nerve stimulation condition, suggesting a perseverative effect compared with sham. Together, our results provide the first evidence of a causal vagal influence on human reinforcement learning and might have clinical implications for the use of vagal stimulation in learning deficits.
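For readers unfamiliar with drift-diffusion modelling, the sketch below simulates the basic process: evidence accumulates with drift v and noise until it hits a boundary (0 or a), and a non-decision time t0 is added to the response time. All parameter values are illustrative, not the fitted values from this study.

```python
# Euler-Maruyama simulation of a simple drift-diffusion decision process.
import numpy as np

def simulate_ddm(v=0.3, a=1.0, z=0.5, t0=0.3, dt=0.001, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, t = z * a, 0.0                       # z is the relative starting bias
    while 0.0 < x < a:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= a), t0 + t                 # (upper-boundary choice?, response time)

choices, rts = zip(*(simulate_ddm(seed=s) for s in range(500)))
print(f"accuracy={np.mean(choices):.2f}, mean RT={np.mean(rts):.2f}s")
```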

