Accuracy of SSH Measurement by USV Equipped With GPS - A Comparison with the GPS Buoy

Author(s):  
Zhai Wanlin ◽  
Yan Longhao ◽  
Wang He ◽  
Qiao Jiguo ◽  
Liang Hao
BMJ Open ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. e040269
Author(s):  
Stephen Gilbert ◽  
Alicia Mehl ◽  
Adel Baluch ◽  
Caoimhe Cawley ◽  
Jean Challiner ◽  
...  

Objectives: To compare breadth of condition coverage, accuracy of suggested conditions and appropriateness of urgency advice of eight popular symptom assessment apps.

Design: Vignettes study.

Setting: 200 primary care vignettes.

Intervention/comparator: For eight apps and seven general practitioners (GPs): breadth of coverage and condition-suggestion and urgency advice accuracy measured against the vignettes' gold standard.

Primary outcome measures: (1) Proportion of conditions 'covered' by an app, that is, not excluded because the user was too young/old or pregnant, or not modelled; (2) proportion of vignettes with the correct primary diagnosis among the top 3 conditions suggested; (3) proportion of 'safe' urgency advice (ie, at gold-standard level, more conservative, or no more than one level less conservative).

Results: Condition-suggestion coverage was highly variable, with some apps not offering a suggestion for many users: in alphabetical order, Ada: 99.0%; Babylon: 51.5%; Buoy: 88.5%; K Health: 74.5%; Mediktor: 80.5%; Symptomate: 61.5%; Your.MD: 64.5%; WebMD: 93.0%. Top-3 suggestion accuracy was GPs (average): 82.1%±5.2%; Ada: 70.5%; Babylon: 32.0%; Buoy: 43.0%; K Health: 36.0%; Mediktor: 36.0%; Symptomate: 27.5%; WebMD: 35.5%; Your.MD: 23.5%. Some apps excluded certain user demographics or conditions, and their performance was generally greater with the exclusion of corresponding vignettes. For safe urgency advice, tested GPs had an average of 97.0%±2.5%. For the vignettes with advice provided, only three apps had safety performance within 1 SD of the GPs: Ada: 97.0%; Babylon: 95.1%; Symptomate: 97.8%. One app had a safety performance within 2 SDs of GPs: Your.MD: 92.6%. Three apps had a safety performance outside 2 SDs of GPs: Buoy: 80.0% (p<0.001); K Health: 81.3% (p<0.001); Mediktor: 87.3% (p=1.3×10⁻³).

Conclusions: The utility of digital symptom assessment apps relies on coverage, accuracy and safety. While no digital tool outperformed GPs, some came close, and the nature of iterative improvements to software offers scalable improvements to care.


Author(s):  
Toshihiko Nagai ◽  
Koji Kawaguchi ◽  
Yutaka Yoshimura ◽  
Takeshi Yoshioka ◽  
Ryoichi Tanikawa ◽  
...  

Author(s):  
Chuntao Chen ◽  
Wanlin Zhai ◽  
Longhao Yan ◽  
Qian Zhang ◽  
Xiaoxu Zhang ◽  
...  

Author(s):  
Hiroyasu KAWAI ◽  
Koji KAWAGUCHI ◽  
Katsumi SEKI ◽  
Tsutomu INOMATA
2008 ◽  
Vol 244 ◽  
pp. 1071-1079 ◽  
Author(s):  
Christopher Watson ◽  
Richard Coleman ◽  
Roger Handsworth
Author(s):  
Jingjin Huang ◽  
Guoqing Zhou ◽  
Tao Yue ◽  
Wei Zhao ◽  
Xiaodong Tao ◽  
...  
Author(s):  
T. Nagai ◽  
K. Shimizu ◽  
J. H. Lee ◽  
M. Iwasaki ◽  
T. Fujita ◽  
...  