Using group sizes to optimise archery equipment

Author(s):  
James L Park

One method used by archers and coaches to optimise archery equipment is to measure the size of arrow groups on the target before and after an adjustment. The group sizes are then used to judge whether the equipment change helped or hindered the archer's performance. A model based on a Monte Carlo method, together with group size measurements from seven elite archers, was used to test the validity of this process. The results showed that the method was neither effective nor useful, because the probability of false positives or false negatives was too great. A better approach to optimising archery equipment is to monitor the archer's skill level or average score over an extended period following any change.
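The before/after comparison the abstract critiques can be reproduced in a few lines. The sketch below is a minimal illustration of the general idea, not the paper's actual model: arrow impacts are drawn from an assumed 2D Gaussian, "group size" is taken as the extreme spread of a six-arrow end, and a group shot before a hypothetical 5% precision improvement is compared with one shot after it. The sigma values, group definition, and trial count are all illustrative assumptions.

```python
# A minimal sketch (not the paper's model): Monte Carlo estimate of how often a
# single before/after group-size comparison misjudges a small equipment change.
# Arrow impacts are assumed to follow an isotropic 2D Gaussian; "group size" is
# the extreme spread (largest pairwise distance) of one six-arrow end.
import numpy as np

rng = np.random.default_rng(0)

def group_size(sigma, arrows=6):
    """Extreme spread of one simulated group of `arrows` impacts."""
    pts = rng.normal(0.0, sigma, size=(arrows, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d.max()

def misjudgement_rate(sigma_before=30.0, sigma_after=28.5, trials=20000):
    """Fraction of trials in which the 'after' group is at least as large as
    the 'before' group even though the change genuinely improved precision
    (i.e. a false negative for the before/after comparison)."""
    wrong = sum(group_size(sigma_after) >= group_size(sigma_before)
                for _ in range(trials))
    return wrong / trials

if __name__ == "__main__":
    # With a genuine 5% precision improvement, a single comparison is close to
    # a coin flip, illustrating the false-positive/false-negative concern.
    print(f"false-negative rate: {misjudgement_rate():.2f}")
```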

2019, Vol 12 (1)
Author(s):
Yeong-min Na, Hyun-seok Lee, Jong-kyu Park

Abstract: This paper proposes a continuum robot that can be controlled automatically using image recognition. The proposed robot can operate in narrower spaces than existing robots composed of links and joints. In addition, because it is automatically controlled through image recognition, the robot can be operated irrespective of the human operator's skill level. The manipulator is divided into two stages, with three wires connected to each stage to minimize the energy used to control the manipulator's posture. The posture is controlled by adjusting the lengths of the wires, similar to the relaxation and contraction of muscles. The Denavit–Hartenberg transformation and the Monte Carlo method were used to analyze the robot's kinematics and workspace. In a performance test, an experimental plate with nine targets was fabricated and the manipulator speed was set to 5, 10, and 20 mm/s. Experimental results show that the manipulator was automatically controlled and reached all targets, with errors of 2.58, 3.28, and 9.18 mm at the respective speeds.
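As a rough illustration of the kinematics analysis the abstract names, the sketch below combines a standard Denavit–Hartenberg transform with Monte Carlo sampling of joint angles to approximate a reachable workspace. It is not the authors' continuum-robot model; the two-link geometry, link lengths, and joint limits are assumptions chosen only to keep the example self-contained.

```python
# A minimal sketch of the general technique (not the authors' model):
# Denavit-Hartenberg forward kinematics plus Monte Carlo sampling of joint
# angles to approximate a manipulator's reachable workspace.
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard 4x4 Denavit-Hartenberg homogeneous transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(thetas, link_lengths=(50.0, 50.0)):
    """End-effector position of a planar two-joint chain (assumed geometry)."""
    T = np.eye(4)
    for theta, a in zip(thetas, link_lengths):
        T = T @ dh_matrix(theta, d=0.0, a=a, alpha=0.0)
    return T[:3, 3]

def sample_workspace(n=10000, limit=np.deg2rad(60)):
    """Monte Carlo workspace estimate: sample joint angles uniformly."""
    rng = np.random.default_rng(1)
    thetas = rng.uniform(-limit, limit, size=(n, 2))
    return np.array([forward_kinematics(t) for t in thetas])

if __name__ == "__main__":
    pts = sample_workspace()
    print(f"x reach: {pts[:, 0].min():.1f} to {pts[:, 0].max():.1f} mm")
```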


Measurement, 2019, Vol 137, pp. 323-331
Author(s):
Mengbao Fan, Genlong Wu, Binghua Cao, Thomson Sarkodie-Gyan, Zhixiong Li, ...

2017, Vol 54 (11), pp. 110102
Author(s):
王晓芳 Wang Xiaofang, 张新 Zhang Xin, 张继真 Zhang Jizhen, 王灵杰 Wang Lingjie

2020, Vol 41 (1), pp. 116-131
Author(s):
Connor WJ Bevington, Ju-Chieh (Kevin) Cheng, Ivan S Klyuzhin, Mariya V Cherkasova, Catharine A Winstanley, ...

Current methods that use a single PET scan to detect voxel-level transient dopamine release, based on F-test significance and cluster size thresholding, have limited detection sensitivity for release clusters that are small and/or have low release levels. Specifically, simulations show that voxels near the peripheries of such clusters are often rejected, becoming false negatives and ultimately distorting the F-distribution of rejected voxels. We suggest a Monte Carlo method that incorporates these two observations into a cost function, allowing erroneously rejected voxels to be accepted under specified criteria. In simulations, the proposed method improves detection sensitivity by up to 50% while preserving the cluster size threshold, or by up to 180% when optimizing for sensitivity. A further parametric voxelwise thresholding step is then suggested to better estimate the release dynamics in detected clusters. We apply the Monte Carlo method to a pilot scan from a human gambling study, where additional parametrically unique clusters are detected compared with the current best methods, results consistent with our simulations.
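The sketch below is a schematic illustration of the kind of Monte Carlo cost-function search described in the abstract, not the authors' algorithm: voxels that fail a hard F-threshold can be re-accepted when flipping their label lowers a cost that rewards statistic strength and spatial contiguity with already-accepted voxels. The toy 2D F-statistic grid, cost weights, and Metropolis temperature are all assumptions.

```python
# A schematic sketch (not the published algorithm): Metropolis-style flips of
# voxel accept/reject labels, driven by a cost that penalises accepting weak
# voxels and penalises accepted voxels with no accepted neighbours.
import numpy as np

rng = np.random.default_rng(2)

def cost(accepted, f_map, f_thresh=3.0, w_stat=1.0, w_contig=0.5):
    """Lower is better: reward strong accepted voxels, reward contiguity."""
    stat_term = w_stat * np.sum(accepted * (f_thresh - f_map))
    padded = np.pad(accepted, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    isolated = np.sum(accepted * (neighbours == 0))  # isolated accepted voxels
    return stat_term + w_contig * isolated

def monte_carlo_refine(f_map, f_thresh=3.0, steps=5000, temp=0.5):
    """Start from a hard F-threshold, then propose single-voxel label flips."""
    accepted = (f_map > f_thresh).astype(float)
    c = cost(accepted, f_map, f_thresh)
    for _ in range(steps):
        i, j = rng.integers(f_map.shape[0]), rng.integers(f_map.shape[1])
        trial = accepted.copy()
        trial[i, j] = 1.0 - trial[i, j]
        c_trial = cost(trial, f_map, f_thresh)
        # Accept improvements always; accept worse states with Boltzmann prob.
        if c_trial < c or rng.random() < np.exp((c - c_trial) / temp):
            accepted, c = trial, c_trial
    return accepted

if __name__ == "__main__":
    f_map = rng.gamma(2.0, 1.0, size=(16, 16))   # toy F-statistic image
    f_map[6:10, 6:10] += 2.5                     # embedded "release" cluster
    labels = monte_carlo_refine(f_map)
    print("voxels accepted:", int(labels.sum()))
```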

