gps buoy
Recently Published Documents

TOTAL DOCUMENTS: 39 (FIVE YEARS: 4)
H-INDEX: 7 (FIVE YEARS: 1)
2021 · Vol 9 (7) · pp. 729
Author(s): Yukiharu Hisaki

Drifting buoys collect wave data in the open ocean, far from land and in areas with strong currents. However, validation of drifting buoy wave data has been limited. Here, we compared drifting buoy wave data, ERA5 wave data, and moored GPS buoy wave data, using data collected from 2009 to 2018 near the coast of Japan. The agreement of the drifting buoy-observed wave parameters with the moored GPS buoy-observed wave parameters is better than that of the ERA5 wave parameters, and the difference is statistically significant. In particular, the accuracy of the ERA5 wave heights tends to be lower where ocean currents are fast, whereas the agreement between the drifting buoy-observed and moored GPS buoy-observed wave heights remains good even in areas with strong currents. This confirms that drifting buoy wave data can be used as reference data for wave modeling studies.
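The comparison described in this abstract amounts to computing agreement statistics between collocated wave-height series. Below is a minimal sketch of such a comparison, assuming hypothetical arrays of collocated significant wave heights; the variable names, sample values, and collocation step are illustrative and are not the author's actual processing pipeline.

```python
import numpy as np

def agreement_stats(reference, candidate):
    """Bias, RMSE and correlation of candidate wave heights against a reference series.

    Both inputs are assumed to be collocated (same times/locations) significant
    wave heights in metres; NaNs mark missing matchups and are dropped pairwise.
    """
    ref = np.asarray(reference, dtype=float)
    cand = np.asarray(candidate, dtype=float)
    ok = ~np.isnan(ref) & ~np.isnan(cand)
    ref, cand = ref[ok], cand[ok]
    diff = cand - ref
    return {
        "bias": float(diff.mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
        "corr": float(np.corrcoef(ref, cand)[0, 1]),
        "n": int(ok.sum()),
    }

# Hypothetical collocated significant wave heights (m): the moored GPS buoy is
# the reference, with drifting-buoy and ERA5 values matched to the same events.
hs_moored = np.array([1.2, 2.0, 3.1, 1.8, 2.6])
hs_drifter = np.array([1.3, 1.9, 3.0, 1.9, 2.5])
hs_era5 = np.array([1.0, 1.7, 2.6, 1.6, 2.1])

print("drifter vs moored:", agreement_stats(hs_moored, hs_drifter))
print("ERA5 vs moored:   ", agreement_stats(hs_moored, hs_era5))
```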


BMJ Open · 2020 · Vol 10 (12) · pp. e040269
Author(s): Stephen Gilbert, Alicia Mehl, Adel Baluch, Caoimhe Cawley, Jean Challiner, ...

Objectives: To compare breadth of condition coverage, accuracy of suggested conditions and appropriateness of urgency advice of eight popular symptom assessment apps.
Design: Vignettes study.
Setting: 200 primary care vignettes.
Intervention/comparator: For eight apps and seven general practitioners (GPs): breadth of coverage and condition-suggestion and urgency advice accuracy measured against the vignettes' gold standard.
Primary outcome measures: (1) Proportion of conditions 'covered' by an app, that is, not excluded because the user was too young/old or pregnant, or not modelled; (2) proportion of vignettes with the correct primary diagnosis among the top 3 conditions suggested; (3) proportion of 'safe' urgency advice (ie, at gold standard level, more conservative, or no more than one level less conservative).
Results: Condition-suggestion coverage was highly variable, with some apps not offering a suggestion for many users: in alphabetical order, Ada: 99.0%; Babylon: 51.5%; Buoy: 88.5%; K Health: 74.5%; Mediktor: 80.5%; Symptomate: 61.5%; WebMD: 93.0%; Your.MD: 64.5%. Top-3 suggestion accuracy was GPs (average): 82.1%±5.2%; Ada: 70.5%; Babylon: 32.0%; Buoy: 43.0%; K Health: 36.0%; Mediktor: 36.0%; Symptomate: 27.5%; WebMD: 35.5%; Your.MD: 23.5%. Some apps excluded certain user demographics or conditions, and their performance was generally greater when the corresponding vignettes were excluded. For safe urgency advice, tested GPs had an average of 97.0%±2.5%. For the vignettes with advice provided, only three apps had safety performance within 1 SD of the GPs (Ada: 97.0%; Babylon: 95.1%; Symptomate: 97.8%). One app had a safety performance within 2 SDs of the GPs (Your.MD: 92.6%). Three apps had a safety performance outside 2 SDs of the GPs (Buoy: 80.0%, p<0.001; K Health: 81.3%, p<0.001; Mediktor: 87.3%, p=1.3×10⁻³).
Conclusions: The utility of digital symptom assessment apps relies on coverage, accuracy and safety. While no digital tool outperformed GPs, some came close, and the iterative nature of software improvement offers scalable gains in care.
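The two headline metrics defined in this abstract, top-3 condition-suggestion accuracy and the proportion of 'safe' urgency advice, can be computed directly from per-vignette results. A minimal sketch follows, assuming hypothetical vignette records; the field names, urgency scale and sample data are illustrative and are not taken from the study's protocol.

```python
from dataclasses import dataclass
from typing import List, Optional

# Ordinal urgency scale, from most to least conservative (illustrative only).
URGENCY_LEVELS = ["emergency", "urgent_care", "gp_appointment", "self_care"]

@dataclass
class VignetteResult:
    gold_condition: str
    suggested_conditions: List[str]   # app suggestions, best first
    gold_urgency: str
    advised_urgency: Optional[str]    # None if the app gave no advice

def top3_accuracy(results: List[VignetteResult]) -> float:
    """Proportion of vignettes whose gold condition appears in the top 3 suggestions."""
    hits = sum(r.gold_condition in r.suggested_conditions[:3] for r in results)
    return hits / len(results)

def safe_urgency_proportion(results: List[VignetteResult]) -> float:
    """Advice counts as 'safe' if it is at the gold-standard level, more conservative,
    or no more than one level less conservative; vignettes without advice are skipped."""
    rank = {level: i for i, level in enumerate(URGENCY_LEVELS)}
    advised = [r for r in results if r.advised_urgency is not None]
    safe = sum(rank[r.advised_urgency] <= rank[r.gold_urgency] + 1 for r in advised)
    return safe / len(advised)

# Hypothetical results for two vignettes.
demo = [
    VignetteResult("appendicitis", ["gastroenteritis", "appendicitis", "ibs"],
                   "emergency", "urgent_care"),
    VignetteResult("migraine", ["tension headache", "sinusitis", "cluster headache"],
                   "gp_appointment", "self_care"),
]
print("top-3 accuracy:", top3_accuracy(demo))
print("safe urgency advice:", safe_urgency_proportion(demo))
```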


2019 · Vol 37 (5) · pp. 1533-1541
Author(s): Chuntao Chen, Jianhua Zhu, Wanlin Zhai, Longhao Yan, Yili Zhao, ...

2016 · Vol 61 (7) · pp. 335-339
Author(s): Kh. Kh. Il’yasov, V. E. Nazaikinskii, S. Ya. Sekerzh-Zen’kovich, A. A. Tolchennikov

Polar Science · 2016 · Vol 10 (2) · pp. 132-139
Author(s): Yuichi Aoyama, Tae-Hee Kim, Koichiro Doi, Hideaki Hayakawa, Toshihiro Higashi, ...

2016 · Vol 75 (sp1) · pp. 1242-1246
Author(s): Dongseob Song, Giyoung Kim, Junglyul Lee

2015 · Vol 520 · pp. 397-406
Author(s): R. Hostache, P. Matgen, L. Giustarini, F.N. Teferle, C. Tailliez, ...
