Low Comparability of Nutrition-Related Mobile Apps against the Polish Reference Method—A Validity Study

Nutrients ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 2868
Author(s):  
Agnieszka Bzikowska-Jura ◽  
Piotr Sobieraj ◽  
Filip Raciborski

Nutrition-related mobile applications (apps) are commonly used to provide information about the user’s dietary intake; however, limited research has been carried out to assess to what extent their results agree with those from the reference method (RM). The main aim of this study was to evaluate the agreement of popular nutrition-related apps with the Polish RM (Dieta 6.0). Dietary data from two days of dietary records previously obtained from adults (60 males and 60 females) were compared with values calculated in five selected apps (FatSecret, YAZIO, Fitatu, MyFitnessPal, and Dine4Fit). The apps were selected between January and February 2021 on the basis of developed criteria (e.g., availability in the Polish language, access to the food composition database, and the number of downloads). The data were entered by experienced clinical dietitians and checked by one more researcher. The mean age of the study participants was 41.7 ± 14.8 years. We observed that all the apps tended to overestimate energy intake, whereas for macronutrient intake both over- and underestimation were observed. According to our assumed criterion (±5% as perfect agreement, ±10% as sufficient agreement), none of the apps can be recommended as a replacement for the reference method for either scientific or clinical use. In the Bland-Altman analysis, the smallest bias was observed for Dine4Fit in relation to energy, protein, and fat intake (−23 kcal, −0.7 g, and 3 g, respectively); however, a wide range between the upper and lower limits of agreement was reported. For carbohydrate intake, the lowest bias was observed when FatSecret and Fitatu were used. These results indicate that the leading nutrition-related apps have critical limitations in the assessment of energy and macronutrient intake. Therefore, validation studies for quality assessment are crucial to developing apps of satisfactory quality.
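The Bland-Altman comparison described above reduces to a bias (mean app-minus-reference difference) and 95% limits of agreement. A minimal sketch; the paired energy intakes below are invented for illustration, not the study's data:

```python
# Bland-Altman agreement: bias and 95% limits of agreement for paired
# measurements. Values are illustrative, not taken from the study.

def bland_altman(app_values, ref_values):
    """Return (bias, lower_loa, upper_loa) for app-minus-reference differences."""
    diffs = [a - r for a, r in zip(app_values, ref_values)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical energy intakes (kcal): app estimate vs. reference method
app = [2100, 1850, 2400, 1990, 2250]
ref = [2000, 1900, 2300, 2050, 2150]
bias, lower, upper = bland_altman(app, ref)
```

A small bias with wide limits of agreement, as reported for Dine4Fit, means an app is close to the reference on average but unreliable for any individual record.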

Healthcare ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 232
Author(s):  
Agnieszka Zimmermann ◽  
Anna Pilarska ◽  
Aleksandra Gaworska-Krzemińska ◽  
Jerzy Jankau ◽  
Marsha N. Cohen

Background: Informed consent is important in clinical practice, as a person’s written consent is required prior to many medical interventions, yet many informed consent forms fail to communicate simply and clearly. The aim of our study was to create an easy-to-understand form. Methods: We assessed a Polish-language plastic surgery informed consent form with a Polish-language comprehension analysis program (jasnopis.pl, SWPS University) that rates the readability of texts for people of various education levels; this enabled us to modify the form by shortening sentences and simplifying words. The form was re-assessed with the same software and subsequently given to 160 adult volunteers to rate its degree of difficulty or readability. Results: The first software analysis found the language suitable for people with a university degree or higher education; after revision and re-assessment, the form became suitable for persons with 4–6 years of primary school education and above. Most study participants also assessed the form as completely comprehensible. Conclusions: Improving the comprehensibility of written informed consent forms offers significant benefits for patients and practitioners.
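The abstract does not describe jasnopis.pl's scoring model; as a rough stand-in, the sketch below measures the two quantities the revision targeted, average sentence length and average word length. The sample texts are invented for illustration.

```python
import re

def crude_readability(text):
    """Return (avg words per sentence, avg letters per word) as a crude proxy."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    return len(words) / len(sentences), sum(len(w) for w in words) / len(words)

# Invented consent-form sentence before and after simplification
before = ("The undersigned hereby acknowledges comprehension of the "
          "aforementioned perioperative contraindications and sequelae.")
after = "I understand the risks of this surgery. I have read this form."

s_before, w_before = crude_readability(before)
s_after, w_after = crude_readability(after)
# The revised text scores lower on both proxies.
```

Real readability tools add language-specific weighting (syllable counts, word-frequency lists), but shorter sentences and simpler words drive the score in the same direction as here.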


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4582
Author(s):  
Changjie Cai ◽  
Tomoki Nishimura ◽  
Jooyeon Hwang ◽  
Xiao-Ming Hu ◽  
Akio Kuroda

Fluorescent probes can be used to detect various types of asbestos (serpentine and amphibole groups); however, fiber counting with our previously developed software was not accurate for samples with low fiber concentrations. Machine learning-based techniques for image analysis, particularly Convolutional Neural Networks (CNN), have been widely applied in many areas. The objectives of this study were to (1) create a laboratory database of fluorescence microscopy (FM) images spanning a wide range of asbestos concentrations (0–50 fibers/liter); and (2) determine the applicability of the state-of-the-art object detection CNN model, YOLOv4, to accurately detect asbestos. We captured fluorescence microscopy images containing asbestos and labeled the individual fibers in the images. We trained the YOLOv4 model on the labeled images using one GTX 1660 Ti Graphics Processing Unit (GPU). Our results demonstrated the exceptional capacity of the YOLOv4 model to learn the fluorescent asbestos morphologies. The mean average precision at a threshold of 0.5 (mAP@0.5) was 96.1% ± 0.4%, using the National Institute for Occupational Safety and Health (NIOSH) fiber counting Method 7400 as a reference method. Compared to our previous counting software (Intec/HU), YOLOv4 achieved higher accuracy (0.997 vs. 0.979), and in particular much higher precision (0.898 vs. 0.418), recall (0.898 vs. 0.780), and F-1 score (0.898 vs. 0.544). In addition, YOLOv4 performed much better on low-fiber-concentration samples (<15 fibers/liter) than Intec/HU. Therefore, the FM method coupled with YOLOv4 is remarkably effective at detecting asbestos fibers and differentiating them from other, non-asbestos particles.
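The precision, recall, and F-1 comparison above follows directly from true-positive, false-positive, and false-negative detection counts. A minimal sketch; the counts below are invented so that all three metrics come out at the reported YOLOv4 value of 0.898, and are not the paper's actual counts:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F-1 score from raw detection counts."""
    precision = tp / (tp + fp)          # fraction of detections that are real fibers
    recall = tp / (tp + fn)             # fraction of real fibers that were detected
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts chosen to reproduce the reported 0.898 metrics
precision, recall, f1 = detection_metrics(tp=898, fp=102, fn=102)
```

The Intec/HU gap in precision (0.418) with decent recall (0.780) corresponds to many false positives per true fiber, which is exactly where a learned detector helps most.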


2018 ◽  
Vol 38 (1) ◽  
pp. 3-9 ◽  
Author(s):  
Wenhong Chen ◽  
Anabel Quan-Haase

The hype around big data does not seem to abate nor do the scandals. Privacy breaches in the collection, use, and sharing of big data have affected all the major tech players, be it Facebook, Google, Apple, or Uber, and go beyond the corporate world including governments, municipalities, and educational and health institutions. What has come to light is that enabled by the rapid growth of social media and mobile apps, various stakeholders collect and use large amounts of data, disregarding the ethics and politics. As big data touch on many realms of daily life and have profound impacts in the social world, the scrutiny around big data practice becomes increasingly relevant. This special issue investigates the ethics and politics of big data using a wide range of theoretical and methodological approaches. Together, the articles provide new understandings of the many dimensions of big data ethics and politics, showing it is important to understand and increase awareness of the biases and limitations inherent in big data analysis and practices.


2018 ◽  
Vol 17 (1) ◽  
pp. 160940691878345 ◽  
Author(s):  
Benjamin L. Read

Many qualitative social scientists conduct single-session interviews with large numbers of individuals so as to maximize the sample size and obtain a wide range of study participants. Yet in some circumstances, one-shot interviews cannot produce information of adequate quality, quantity, and validity. This article explains the several conditions that call for an alternative approach, serial interviewing, that entails interviewing participants on multiple occasions. This method is appropriate when studying complex or ill-defined issues, when interviews are subject to time constraints, when exploring change or variation over time, when participants are reluctant to share valid information, and when working with critical informants. A further benefit is the opportunity it provides for verifying and cross-checking information. This article delineates the general features of this technique. Through a series of encounters, the researcher builds familiarity and trust, probes a range of key topics from multiple angles, explores different facets of participants’ experiences, and learns from events that happen to take place during the interviews. This helps overcome biases associated with one-off interviews, including a tendency toward safe, simple answers in which participants flatten complexity, downplay sociopolitical conflict, and put themselves in a flattering light. This article illustrates the utility of this approach through examples drawn from published work and through a running illustration based on the author’s research on elected neighborhood leaders in Taipei. Serial interviewing helped produce relatively accurate and nuanced data concerning the power these leaders wield and their multiple roles as intermediaries between state and society.


Author(s):  
Tess Grynoch

Objective: To examine how Canadian academic medical libraries support mobile apps, what apps these libraries currently provide, and what types of promotion are used. Methods: A survey of the library websites of the 17 medical schools in Canada was completed. For each library website surveyed, the medical apps listed on the website, any services mentioned through this medium, and any app promotion events were noted. When Facebook and Twitter accounts were evident, the tweets were searched and the past two years of Facebook posts scanned for mention of medical apps or mobile services/events. Results: All seventeen academic medical libraries had lists of mobile medical apps, with a large range in the number of medically relevant apps (average = 31, median = 23). A total of 275 different apps were noted, covering a wide range of subjects. Five of the 14 Facebook accounts scanned had posts about medical apps in the past two years, while 11 of the 15 Twitter accounts had tweets about medical apps. Social media was only one of many promotional methods noted. Beyond app lists and mobile resource guides, Canadian academic medical libraries provide workshops, presentations, and drop-in sessions for mobile medical apps. Conclusion: While librarians cannot simply compare mobile services and resources between academic medical libraries without factoring in a number of other circumstances, they can learn from mobile resource strategies employed at other libraries, such as using research guides to increase medical app literacy.


2021 ◽  
Vol 9 (6) ◽  
pp. 13-25
Author(s):  
Michail Angelopoulos ◽  
Yannis Pollalis

This research focuses on providing insights toward a solution for collecting, storing, analyzing, and visualizing data on customer energy consumption patterns. The data analysis part of our research provides models for knowledge discovery that can be used to improve energy efficiency at both the producer and consumer ends. The study sets out a new analytical framework for assessing the role of behavioral knowledge in energy efficiency, drawing on a wide range of case studies, experiments, research, and Information and Communication Technologies (ICT), in combination with modern econometric methods and large analytical datasets that take into account the characteristics of the study participants (household energy customers).


2010 ◽  
Vol 29 (2) ◽  
pp. 203
Author(s):  
Jasmina Petreska ◽  
Ljupco Pejov

Three numerical methods were applied to compute the anharmonic O–H stretching vibrational frequencies of the free and aqueous hydroxide ion on the basis of one-dimensional vibrational potential energies computed at various levels of theory: (i) a simple Hamiltonian matrix diagonalization technique based on representation of the vibrational potential in Simons-Parr-Finlan (SPF) coordinates; (ii) the Numerov algorithm; and (iii) the Fourier grid Hamiltonian (FGH) method. Taking the Numerov algorithm as the reference method, the diagonalization technique performs remarkably well over a very wide range of frequencies and frequency shifts (up to 300 cm–1). The FGH method, on the other hand, though also performing very well, exhibits more significant (and non-uniform) discrepancies with the Numerov algorithm, even for rather modest frequency shifts.
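The Numerov algorithm used as the reference above propagates a one-dimensional Schrödinger solution across a grid; combined with shooting on the energy, it yields vibrational eigenvalues. A minimal sketch for a harmonic potential V(x) = x²/2 in units with ħ = m = 1, where the exact ground-state energy is 0.5; the actual O–H potentials in the study are anharmonic and computed ab initio:

```python
def numerov_endpoint(E, x0=-6.0, x1=6.0, n=1000):
    """Numerov-propagate psi from x0 to x1 for V(x) = x^2/2; return psi(x1)."""
    h = (x1 - x0) / n

    def k2(x):
        return 2.0 * (E - 0.5 * x * x)   # k^2 = 2(E - V) with hbar = m = 1

    psi_prev, psi = 0.0, 1e-6            # tiny start deep in the forbidden region
    x = x0 + h
    for _ in range(1, n):
        c_prev = 1.0 + h * h * k2(x - h) / 12.0
        c_cur = 1.0 - 5.0 * h * h * k2(x) / 12.0
        c_next = 1.0 + h * h * k2(x + h) / 12.0
        psi_prev, psi = psi, (2.0 * psi * c_cur - psi_prev * c_prev) / c_next
        x += h
    return psi

# Shooting: psi(x1; E) changes sign exactly at an eigenvalue, so bisect on E.
lo, hi = 0.3, 0.7
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if numerov_endpoint(lo) * numerov_endpoint(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)                     # converges to the exact 0.5
```

The SPF-coordinate diagonalization and FGH approaches compared in the abstract instead build an explicit Hamiltonian matrix and diagonalize it, which is why their accuracy is naturally benchmarked against a grid propagator like this one.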


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Bradley A Cahn ◽  
Jill T Shah ◽  
Samantha By ◽  
E. B Welch ◽  
Laura Sacolick ◽  
...  

Background: Radiographic diagnosis of intracranial hemorrhage (ICH) is a critical determinant of stroke care pathways and ordinarily requires patient transport to a neuroimaging suite. Advances in low-field MRI have made it possible to obtain clinically useful imaging at the point of care (POC). Aim: The aim of this study was to obtain preliminary data on the ability of a bedside POC MRI scanner to detect ICH. Methods: We studied 36 patients with a diagnosis of ICH (n=18) or ischemic stroke (n=18). Five blinded readers independently evaluated T2W and FLAIR exams acquired prospectively on a 64 mT, portable bedside MRI system (Hyperfine Research, Inc.). Kappa coefficients (κ) were calculated to determine inter-rater agreement. Ground truth was obtained from the clinical report of the closest conventional imaging study (17.9 ± 10.4 hours) and verified by a core reader. For each exam, majority consensus among raters was used to determine sensitivity. Results: ICH volume ranged from 4 to 101 cc (median 13 cc). Exams were acquired within 7 days of symptom onset (51.1 ± 28.8 hours). A pathologic lesion was identified on every exam with 100% sensitivity. Sensitivity for distinguishing any hemorrhage was 89% and specificity was 83%; the mean sensitivity and specificity for individual raters were 79% and 69%, respectively. When limited to supratentorial hemorrhage, consensus sensitivity was 94%. For ICH cases detected by all raters (n=9), there was 100% accuracy for localizing the bleed (lobar vs. non-lobar), with perfect agreement among raters (κ = 1, p < 0.0001). There was substantial agreement for identifying intraventricular hemorrhage (IVH) (κ = 0.72, p < 0.0001); sensitivity for IVH was 100% based on rater consensus. Figure 1 shows a POC exam with an ICH and IVH. Conclusions: These data suggest that low-field, POC MRI may be used to detect hemorrhagic stroke at the bedside. Further work is needed to evaluate this approach in the hyperacute setting and across a wide range of ICH characteristics.
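The κ statistics above measure chance-corrected inter-rater agreement. A minimal two-rater Cohen's kappa sketch (the study pooled five readers, which calls for a multi-rater generalization such as Fleiss' kappa; the labels below are invented):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's label frequencies
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Perfect agreement on lesion location gives kappa = 1, as reported above.
k_perfect = cohens_kappa(["lobar", "non-lobar", "lobar"],
                         ["lobar", "non-lobar", "lobar"])

# Partial agreement is discounted by the agreement expected by chance.
k_partial = cohens_kappa(["ICH", "ICH", "none", "ICH"],
                         ["ICH", "none", "none", "ICH"])
```

Note that 75% raw agreement in the second example yields κ = 0.5, because half of that agreement would occur by chance given the raters' label frequencies.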


Author(s):  
Soumya Raychaudhuri

Successful use of text mining algorithms to facilitate genomics research hinges on the ability to recognize the names of genes in scientific text. In this chapter we address the critical issue of gene name recognition. Once gene names can be recognized in scientific text, we can begin to understand what the text says about those genes. This is a much more challenging issue than one might appreciate at first glance. Gene names can be inconsistent and confusing; automated gene name recognition efforts have therefore turned out to be quite challenging to implement with high accuracy. Gene name recognition algorithms have a wide range of useful applications. Until this chapter we have been avoiding this issue and have been using only gene-article indices. In practice these indices are manually assembled. Gene name recognition algorithms offer the possibility of automating and expediting the laborious task of building reference indices. Article indices can be built that associate articles with genes based on whether or not an article mentions a gene by name. In addition, gene name recognition is the first step in more detailed sentence-by-sentence text analysis. For example, in Chapter 10 we will talk about identifying relationships between genes from text. Frequently, this requires identifying sentences referring to two gene names and understanding what sort of relationship the sentence describes between those genes. Sophisticated natural language processing techniques to parse sentences and understand gene function cannot be applied in a meaningful way without first recognizing where the gene names are. The major concepts of this chapter are presented in the frame box. We begin by describing the commonly used strategies that can be applied alone or in concert to identify gene names. At the end of the chapter we introduce one successful name-finding algorithm that combines many of the different strategies.
There are several commonly used approaches that can be exploited to recognize gene names in text (Chang, Shutze, et al. 2004). Often these approaches can be combined into even more effective multifaceted algorithms.
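A toy illustration of combining two such strategies, dictionary lookup plus a pattern for gene-like symbols; the lexicon and regex here are invented for the example, not taken from the chapter:

```python
import re

# Hypothetical mini-lexicon of known gene symbols (real systems use curated
# nomenclature databases with thousands of entries and synonyms).
GENE_LEXICON = {"BRCA1", "TP53", "EGFR"}

# Crude pattern for gene-like tokens: an uppercase letter followed by
# 2-5 uppercase letters or digits.
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{2,5}\b")

def find_gene_mentions(sentence):
    """Split pattern matches into high-confidence lexicon hits and guesses."""
    candidates = set(GENE_PATTERN.findall(sentence))
    return {"known": candidates & GENE_LEXICON,
            "candidates": candidates - GENE_LEXICON}

hits = find_gene_mentions("BRCA1 interacts with RAD51 in repair pathways.")
```

Combining the strategies is what makes this multifaceted: the dictionary supplies precision, while the pattern supplies recall for symbols the lexicon has never seen.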


1996 ◽  
Vol 42 (5) ◽  
pp. 738-743 ◽  
Author(s):  
N Harris ◽  
V Galpchian ◽  
N Rifai

We compared the performance of three methods for quantifying high-density lipoprotein cholesterol (HDL-C) with the Reference Method for HDL-C, using samples with a wide range of triglyceride (TG) concentrations (290–18,000 mg/L). All three comparison assays (a magnetic dextran sulfate precipitating reagent, a direct method, and a standard MgCl2-dextran sulfate reagent) were precise, with a run-to-run CV of less than or equal to 4.1%. However, the systematic error of these assays exceeded the National Cholesterol Education Program (NCEP) performance goal of less than or equal to 10% in half of the concentration ranges tested. Nevertheless, the total error of the assays generally meets the current 22% limit set by the NCEP. Although both the magnetic dextran sulfate precipitation reagent and the direct assay can be performed more rapidly than the MgCl2-dextran sulfate assay, the direct assay involves no sample preparation and requires only 4 microliters of sample, excluding the dead space. Although precipitation is frequently inadequate with the MgCl2-dextran sulfate reagent at TG concentrations >6000 mg/L, both the magnetic and the direct reagents show no interference from TG concentrations as great as 18,000 mg/L.
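The NCEP criteria cited above combine systematic error and imprecision; a commonly used total-error formulation is TE% = |bias%| + 1.96 × CV%. A sketch with illustrative numbers (the abstract does not report per-range biases, so the bias below is invented):

```python
def total_error_check(bias_pct, cv_pct, te_limit_pct=22.0):
    """Estimate total error as |bias| + 1.96*CV and compare to the NCEP limit."""
    te = abs(bias_pct) + 1.96 * cv_pct
    return te, te <= te_limit_pct

# A bias above the 10% systematic-error goal can still pass the 22%
# total-error limit when precision is good, mirroring the finding above.
te, passes = total_error_check(bias_pct=12.0, cv_pct=4.1)
```

This is why the abstract can report failed systematic-error goals yet an acceptable total error: the run-to-run CV of 4.1% or less keeps the imprecision term small.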

