quantitative rating
Recently Published Documents

TOTAL DOCUMENTS: 51 (FIVE YEARS: 10)
H-INDEX: 12 (FIVE YEARS: 1)


2021 ◽  
Vol 11 (19) ◽  
pp. 8946
Author(s):  
Ioakeim Konstantinidis ◽  
Vassilis Marinos ◽  
George Papathanassiou

Rockfall events constitute one of the most hazardous geological phenomena in mountainous landscapes, with the potential to become catastrophic when they occur near the anthropogenic environment. Rockfall hazard and risk assessments are recognized as some of the most challenging surveys within the geoengineering community, owing to the need for accurate prediction of likely rockfall areas, together with their magnitude and impact. In recent decades, with the introduction of remote sensing technologies such as Unmanned Aerial Vehicles (UAVs), qualitative and quantitative analyses of rockfall events have become more precise. This study aims primarily to take advantage of the UAV’s capabilities in order to produce a detailed hazard and risk assessment by proposing a new semi-quantitative rating system. The area of application is the cultural heritage site of Kipinas Monastery in Epirus, Greece, which is characterized by the absence of pre-existing data on previous rockfall events. The results show that the suggested methodology, combining innovative remote sensing technologies with traditional engineering geological field surveys, can provide all the quantitative input data required by the proposed rating system for any natural slope.
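The abstract does not spell out the parameters or weights of the proposed rating system, so the sketch below is only a hypothetical illustration of how a semi-quantitative hazard and risk rating of this kind can be computed; the factor names, score ranges, weights, and the hazard-times-exposure risk product are assumptions, not the authors' scheme.

```python
# Hypothetical sketch of a semi-quantitative rockfall rating. The factor names,
# score ranges, and weights below are illustrative assumptions only; the paper's
# actual rating parameters are not given in the abstract.

HAZARD_FACTORS = {
    # factor: (weight, max_score) -- assumed values
    "block_volume": (0.30, 10),
    "slope_angle": (0.25, 10),
    "discontinuity_condition": (0.25, 10),
    "rockfall_activity": (0.20, 10),
}

def hazard_rating(scores: dict[str, float]) -> float:
    """Weighted, normalized hazard rating in [0, 100] from per-factor scores."""
    total = 0.0
    for factor, (weight, max_score) in HAZARD_FACTORS.items():
        total += weight * (scores[factor] / max_score)
    return 100.0 * total

def risk_rating(hazard: float, exposure: float, vulnerability: float) -> float:
    """Simple risk = hazard x exposure x vulnerability, each normalized to [0, 1]."""
    return hazard * exposure * vulnerability

# Example: scores derived from UAV point-cloud measurements and field mapping.
slope_scores = {"block_volume": 7, "slope_angle": 8,
                "discontinuity_condition": 6, "rockfall_activity": 5}
h = hazard_rating(slope_scores)            # 0-100 hazard score
r = risk_rating(h / 100, exposure=0.9, vulnerability=0.8)
print(f"hazard={h:.1f}, risk={r:.2f}")
```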


Buildings ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 304
Author(s):  
Efstathios Adamopoulos ◽  
Fulvio Rinaudo

The detailed documentation of degradation constitutes a fundamental step for weathering diagnosis and, consequently, for the successful planning and implementation of conservation measures for stone heritage. Mapping the surface patterns of stone is a non-destructive procedure critical for the qualitative and quantitative rating of the preservation state. Furthermore, mapping is employed for the annotation of weathering categories and the calculation of damage indexes. However, it is often a time-consuming task that is conducted manually. Thus, practical methods need to be developed to automate degradation mapping without significantly increasing the cost of the diagnostic process for conservation specialists. This work aims to develop and evaluate a methodology based on affordable close-range sensing techniques, image processing, and free and open-source software for the spatial description, annotation, qualitative analysis, and rating of stone weathering-induced damage. Low-cost cameras were used to record images in the visible, near-infrared, and thermal-infrared spectra. The application of photogrammetric techniques allowed the generation of the necessary base imagery, which was further processed to extract thematic information. Digital image processing of the spatially and radiometrically corrected images and image mosaics enabled a straightforward transition to a spatial information environment, simplifying the development of degradation maps. The digital thematic maps facilitated the rating of stone damage and the extraction of useful statistical data.
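As a rough illustration of the kind of pipeline described (classifying a corrected image mosaic into weathering classes and deriving area statistics), the following sketch uses simple NumPy thresholding; the chosen band, thresholds, class labels, and pixel size are assumed values, not the authors' actual workflow.

```python
# Illustrative sketch only: threshold a radiometrically corrected image mosaic
# into weathering classes and compute per-class area statistics. The band,
# thresholds, class names, and pixel size are assumptions, not the paper's values.
import numpy as np

def classify_degradation(band: np.ndarray, thresholds=(0.3, 0.6)) -> np.ndarray:
    """Map normalized pixel values to 3 classes: 0=sound, 1=moderate, 2=severe."""
    classes = np.zeros(band.shape, dtype=np.uint8)
    classes[band >= thresholds[0]] = 1
    classes[band >= thresholds[1]] = 2
    return classes

def damage_statistics(classes: np.ndarray, pixel_size_m=0.002) -> dict:
    """Area (m^2) and percentage per class, given the ground pixel size."""
    pixel_area = pixel_size_m ** 2
    total = classes.size
    return {int(c): {"area_m2": int(n) * pixel_area, "percent": 100.0 * n / total}
            for c, n in zip(*np.unique(classes, return_counts=True))}

# Example with a synthetic normalized mosaic in [0, 1]
mosaic = np.random.default_rng(0).random((1000, 1000))
stats = damage_statistics(classify_degradation(mosaic))
print(stats)
```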


2021 ◽  
Author(s):  
Kim Martinez ◽  
Maria Isabel Menéndez-Menéndez ◽  
David Checa ◽  
Andres Bustillo

BACKGROUND The design of Virtual Reality Serious Games (VR-SG) is still a developing subject. One of its open issues is the definition of metrics to evaluate both the fun and the learning outcomes. In this way, weaknesses and strengths in the design of serious games can be identified for future work in this research field. OBJECTIVE This paper aims to create a metric that can be used to rate the gameplay of VR-SGs. The novelty of this metric is that it evaluates the different fun and learning features and gives them a quantitative rating. A case study shows how this evaluation can be applied to identify the strengths and weaknesses of VR-SGs. METHODS The new VR-SG metric is developed on the basis of the Mechanics, Dynamics and Aesthetics (MDA) framework, but it incorporates User Experience (UX) elements and adapts them to VR-SGs. The metric includes 1) UX aspects: VR headsets, training tutorials, and interactive adaptations to avoid VR discomfort; and 2) MDA aspects: VR-exclusive audiovisual elements and their aesthetic interactions. RESULTS The selected indie serious game is Hellblade, developed to raise awareness of the difficulties faced by people suffering from psychosis, with two versions: one for 2D screens and the other for VR devices. The comparison of the metric’s scores for both versions shows that 1) some VR dynamics increase the gameplay impact and, therefore, the educational capacity; and 2) there are flaws in the game design where the scores drop. Some of these flaws are a reduced number of levels, missions, and items; the lack of a tutorial to enhance usability; and the lack of long-term strategies and rewards to increase motivation. CONCLUSIONS This metric makes it possible to identify the elements of gameplay and UX that are necessary for learning in VR experiences. The case study shows that this research is useful for evaluating the educational utility of VR-SGs. Further work will analyze VR applications to synthesize every game element influencing its intrinsic sensations. CLINICALTRIAL The trials have not been registered, as testing of this metric has not involved people with mental conditions or addressed other medical applications. Hellblade is a commercial video game that anyone can purchase and play. The trials were carried out to obtain results on the gaming experience of different people in relation to the educational purpose of raising awareness of psychosis.
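A minimal sketch of how a metric of this kind might aggregate evaluator scores across MDA and UX categories is shown below; the item names, weights, and the 0-5 scale are illustrative assumptions, since the paper's actual rubric is not reproduced in the abstract.

```python
# Hypothetical aggregation for a VR serious-game metric. Item names, weights,
# and the 0-5 scale are assumptions for illustration, not the paper's rubric.
from dataclasses import dataclass

@dataclass
class MetricItem:
    name: str
    category: str   # "MDA" or "UX"
    weight: float   # relative importance within its category
    score: float    # evaluator's rating on a 0-5 scale

def category_score(items: list[MetricItem], category: str) -> float:
    """Weighted average of the items in one category, normalized to 0-100."""
    selected = [i for i in items if i.category == category]
    total_weight = sum(i.weight for i in selected)
    return 100.0 * sum(i.weight * i.score / 5.0 for i in selected) / total_weight

items = [
    MetricItem("audiovisual_aesthetics", "MDA", 0.4, 4.5),
    MetricItem("long_term_rewards",      "MDA", 0.6, 2.0),
    MetricItem("tutorial_usability",     "UX",  0.5, 1.5),
    MetricItem("comfort_adaptations",    "UX",  0.5, 4.0),
]
print({c: round(category_score(items, c), 1) for c in ("MDA", "UX")})
```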


2020 ◽  
Vol 25 (10) ◽  
pp. 4116
Author(s):  
S. I. Karas ◽  
E. V. Grakova ◽  
M. V. Balakhonova ◽  
M. B. Arzhanik ◽  
E. E. Kara-Sal

Aim. To create a methodological basis for the distance learning of cardiology healthcare professionals: multimedia clinical diagnostic tasks. Material and methods. The interdisciplinary team used text and multimedia formats for clinical diagnostic data. Web technologies provided remote access to information located on the server. Results. The report presents the experience of the practical implementation of multimedia clinical diagnostic tasks in cardiology, including augmented reality. The multimedia clinical diagnostic tasks implement variable presentation of information to students and are integrated with a rating system for evaluating decisions. The solution path is determined by the students’ actions in the interactive trigger blocks and is evaluated by the rating system. The personal rating is a numerical value that integrally characterizes the decision-making competence of students. The conversion of the quantitative rating into the conventional form (‘pass/fail’, ‘excellent’, ‘good’, ‘passing grade’) will be provided after the trial period of the software. Conclusion. The created Web service and computer simulations can become a methodological basis for distance learning in cardiology. This technology may be in demand in continuing medical education.
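Since the abstract notes that the conversion of the quantitative rating into conventional grades will only be fixed after the software's trial period, the following sketch merely illustrates one way such a conversion could look; the cut-off values are placeholders.

```python
# Sketch of converting a numerical personal rating (0-100) into the conventional
# grades mentioned in the abstract. The cut-off values are placeholders: the
# abstract states the conversion will only be defined after the trial period.
ASSUMED_CUTOFFS = [(90, "excellent"), (75, "good"), (60, "passing grade")]

def convert_rating(rating: float, pass_mark: float = 60) -> tuple[str, str]:
    """Return (pass/fail verdict, conventional grade) for a 0-100 rating."""
    verdict = "pass" if rating >= pass_mark else "fail"
    for cutoff, grade in ASSUMED_CUTOFFS:
        if rating >= cutoff:
            return verdict, grade
    return verdict, "fail"

print(convert_rating(82.5))   # ('pass', 'good')
```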


10.2196/18374 ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. e18374
Author(s):  
Stuart McLennan

Background Previous international research has only analyzed publicly available feedback on physician rating websites (PRWs). However, it appears that many PRWs are not publishing all the feedback they receive. Analysis of this rejected feedback could provide a better understanding of the types of feedback that are currently not published and whether this is appropriate. Objective The aim of this study was to examine (1) the number of patient feedback items rejected by the Swiss PRW Medicosearch, (2) the evaluation tendencies of the rejected patient feedback, and (3) the types of issues raised in the rejected narrative comments. Methods The Swiss PRW Medicosearch provided all the feedback that had been rejected between September 16, 2008, and September 22, 2017. The feedback was analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues. Results Between September 16, 2008, and September 22, 2017, Medicosearch rejected a total of 2352 patient feedback items. The majority of the rejected feedback (1754/2352, 74.6%) had narrative comments in the German language. However, 11.9% (279/2352) of the rejected feedback only provided a quantitative rating with no narrative comment. Overall, 25% (588/2352) of the rejected feedback was positive, 18.7% (440/2352) was neutral, and 56% (1316/2352) was negative. The average rating of the rejected feedback was 2.8 (SD 1.4). In total, 44 subcategories addressing the physician (n=20), staff (n=9), and practice (n=15) were identified. In total, 3804 distinct issues were identified within the 44 subcategories of the categorization framework; 75% (2854/3804) of the issues were related to the physician, 6.4% (242/3804) were related to the staff, and 18.6% (708/3804) were related to the practice. Frequently mentioned issues identified from the rejected feedback included (1) satisfaction with treatment (533/1903, 28%); (2) the overall assessment of the physician (392/1903, 20.6%); (3) recommending the physician (345/1903, 18.1%); (4) the physician’s communication (261/1903, 13.7%); (5) the physician’s caring attitude (220/1903, 11.6%); and (6) the physician’s friendliness (203/1903, 10.6%). Conclusions It is unclear why the majority of the feedback was rejected. This is problematic and raises concerns that online patient feedback is being inappropriately manipulated. If online patient feedback is going to be collected, there need to be clear policies and practices about how it is handled. It cannot be left to the whims of PRWs, which may have financial incentives to suppress negative feedback, to decide which feedback is or is not published online. Further research is needed to examine how many PRWs are using criteria to determine which feedback is published, what those criteria are, and what measures PRWs are using to address the manipulation of online patient feedback.
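The percentages above follow from straightforward tallies of coded issues per category and subcategory; a minimal sketch of such a tally is shown below, using invented example records rather than data from the study.

```python
# Illustrative tally of coded feedback issues by category/subcategory, the kind
# of count behind figures such as "75% of issues related to the physician".
# The example records are invented; they are not data from the study.
from collections import Counter

coded_issues = [
    ("physician", "communication"),
    ("physician", "satisfaction_with_treatment"),
    ("staff", "friendliness"),
    ("practice", "waiting_time"),
    ("physician", "communication"),
]

by_category = Counter(category for category, _ in coded_issues)
by_subcategory = Counter(coded_issues)
total = len(coded_issues)

for category, n in by_category.most_common():
    print(f"{category}: {n}/{total} ({100 * n / total:.1f}%)")
```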


10.2196/14336 ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. e14336
Author(s):  
Stuart McLennan

Background The majority of physician rating websites (PRWs) provide users the option to leave narrative comments about their physicians. Narrative comments potentially provide richer insights into patients’ experiences and feelings that cannot be fully captured in predefined quantitative rating scales and are increasingly being examined. However, the content and nature of narrative comments on Swiss PRWs have not been examined to date. Objective This study aimed to examine (1) the types of issues raised in narrative comments on Swiss PRWs and (2) the evaluation tendencies of the narrative comments. Methods A random stratified sample of 966 physicians was generated from the regions of Zürich and Geneva. Every selected physician was searched for on 3 PRWs (OkDoc, DocApp, and Medicosearch) and Google, and narrative comments were collected. Narrative comments were analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues. Results The selected physicians had a total of 849 comments. In total, 43 subcategories addressing the physician (n=21), staff (n=8), and practice (n=14) were identified. None of the PRWs’ comments covered all 43 subcategories of the categorization framework; comments on Google covered 86% (37/43) of the subcategories, Medicosearch covered 72% (31/43), DocApp covered 60% (26/43), and OkDoc covered 56% (24/43). In total, 2441 distinct issues were identified within the 43 subcategories of the categorization framework; 83.65% (2042/2441) of the issues related to the physician, 6.63% (162/2441) related to the staff, and 9.70% (237/2441) related to the practice. Overall, 95% (41/43) of the subcategories of the categorization framework and 81.60% (1992/2441) of the distinct issues identified concerned aspects of performance (interpersonal skills of the physician and staff, infrastructure, and organization and management of the practice) that are considered assessable by patients. Overall, 83.0% (705/849) of comments were classified as positive, 2.5% (21/849) as neutral, and 14.5% (123/849) as negative. However, there were significant differences between PRWs, regions, and specialties regarding negative comments: 90.2% (111/123) of negative comments were on Google, 74.7% (92/123) were regarding physicians in Zurich, and 73.2% (90/123) were from specialists. Conclusions From the narrative comments analyzed, interpersonal issues make up nearly half of all negative issues identified, and it is recommended that physicians focus on improving these areas. The current suppression of negative comments by Swiss PRWs is concerning, and there is a need for consensus-based criteria to be developed to determine which comments should be published. Finally, it would be helpful if Swiss patients were made aware of the current large differences between Swiss PRWs regarding the frequency and nature of ratings, to help them determine which PRW will provide them with the most useful information.


Author(s):  
Stuart McLennan

BACKGROUND Physician rating websites (PRWs) have been developed as part of a wider move toward transparency around health care quality; they allow patients to anonymously rate, comment on, and discuss physicians’ quality on the Web. The first Swiss PRWs were established in 2008, at the same time as many international PRWs. However, there has been limited research conducted on PRWs in Switzerland to date. International research has indicated that a key shortcoming of PRWs is that they have an insufficient number of ratings. OBJECTIVE The aim of this study was to examine the frequency of quantitative ratings and narrative comments on the Swiss PRWs. METHODS In November 2017, a random stratified sample of 966 physicians was generated from the regions of Zürich and Geneva. Every selected physician was searched for on 4 rating websites (OkDoc, DocApp, Medicosearch, and Google) between November 2017 and July 2018. It was recorded whether the physician could be identified, what the physician’s quantitative rating was, and whether the physician had received narrative comments. In addition, Alexa Internet was used to examine the number of visitors to the PRWs, compared with other websites. RESULTS Overall, the proportion of physicians who could be identified on the PRWs ranged from 42.4% (410/966) on OkDoc to 87.3% (843/966) on DocApp. Of the identifiable physicians, only a few had been rated quantitatively (4.5% [38/843] on DocApp to 49.8% [273/548] on Google) or had received narrative comments (4.5% [38/843] on DocApp to 31.2% [171/548] on Google) at least once. Rated physicians also had, on average, a low number of quantitative ratings (1.47 ratings on OkDoc to 3.74 ratings on Google) and narrative comments (1.23 comments on OkDoc to 3.03 comments on Google). All 3 websites allowing ratings used the same rating scale (1-5 stars) and had a very positive average rating: DocApp (4.71), Medicosearch (4.69), and Google (4.41). There were significant differences among the PRWs (with the majority of ratings being posted on Google in the past 2 years) and regions (with physicians in Zurich more likely to have been rated and to have more ratings on average). Only Google (position 1) and Medicosearch (position 8358) rank among the top 10,000 visited websites in Switzerland. CONCLUSIONS This appears to be the first time Google has been included in a study examining physician ratings internationally, and it is notable that Google has had substantially more ratings than the 3 dedicated PRWs in Switzerland over the past 2 and a half years. Overall, this study indicates that Swiss PRWs are not yet a reliable source of unbiased information regarding patient experiences and satisfaction with Swiss physicians; many of the selected physicians could not be identified, only a few physicians had been rated, and the ratings posted were overwhelmingly positive.
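The per-website figures reported above (identification rate, share of identified physicians with at least one rating, mean number of ratings) can be derived from a simple table of physician-website lookups; the sketch below illustrates such a summary with invented placeholder records, not the study's data.

```python
# Sketch of the per-website summary statistics described in the abstract
# (identification rate, share of identified physicians with >=1 rating, mean
# number of ratings). The records below are invented placeholders, not study data.
from statistics import mean

lookups = [  # one record per (physician, website) search
    {"site": "Google",       "identified": True,  "n_ratings": 4},
    {"site": "Google",       "identified": True,  "n_ratings": 0},
    {"site": "DocApp",       "identified": True,  "n_ratings": 1},
    {"site": "OkDoc",        "identified": False, "n_ratings": 0},
    {"site": "Medicosearch", "identified": True,  "n_ratings": 2},
]

for site in sorted({r["site"] for r in lookups}):
    rows = [r for r in lookups if r["site"] == site]
    found = [r for r in rows if r["identified"]]
    rated = [r for r in found if r["n_ratings"] > 0]
    mean_ratings = mean(r["n_ratings"] for r in rated) if rated else 0.0
    print(f"{site}: identified {len(found)}/{len(rows)}, "
          f"rated {len(rated)}/{len(found) or 1}, "
          f"mean ratings {mean_ratings:.2f}")
```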

