code algorithm
Recently Published Documents

TOTAL DOCUMENTS: 97 (five years: 29)
H-INDEX: 9 (five years: 1)

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Kristin M. Lenoir ◽  
Lynne E. Wagenknecht ◽  
Jasmin Divers ◽  
Ramon Casanova ◽  
Dana Dabelea ◽  
...  

Abstract
Background: Disease surveillance of diabetes among youth has relied mainly upon manual chart review. However, increasingly available structured electronic health record (EHR) data have been shown to yield accurate determinations of diabetes status and type. Validated algorithms to determine date of diabetes diagnosis are lacking. The objective of this work is to validate two EHR-based algorithms to determine date of diagnosis of diabetes.
Methods: A rule-based ICD-10 algorithm identified youth with diabetes from structured EHR data over the period of 2009 through 2017 within three children’s hospitals that participate in the SEARCH for Diabetes in Youth Study: Cincinnati Children’s Hospital, Cincinnati, OH; Seattle Children’s Hospital, Seattle, WA; and Children’s Hospital Colorado, Denver, CO. Previous research and a multidisciplinary team informed the creation of two algorithms based upon structured EHR data to determine date of diagnosis among diabetes cases. An ICD-code algorithm was defined by the year of occurrence of a second ICD-9 or ICD-10 diabetes code. A multiple-criteria algorithm consisted of the year of first occurrence of any of the following: a diabetes-related ICD code, elevated glucose, elevated HbA1c, or a diabetes medication. We assessed algorithm performance by percent agreement with a gold-standard date of diagnosis determined by chart review.
Results: Among 3777 cases, both algorithms demonstrated high agreement with the true diagnosis year and differed in classification (p = 0.006): 86.5% agreement for the ICD-code algorithm and 85.9% for the multiple-criteria algorithm. Agreement was high for both type 1 and type 2 cases for the ICD-code algorithm. Performance improved over time.
Conclusions: The year of occurrence of the second diabetes-related ICD code in the EHR yields an accurate diagnosis date within these pediatric hospital systems. This may lead to increased efficiency and sustainability of surveillance methods for the incidence of diabetes among youth.
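The ICD-code rule above reduces to "take the year of each patient's second diabetes code." Below is a minimal sketch in Python, assuming a flat list of (patient_id, year, icd_code) records and an illustrative set of code prefixes; the field layout and code set are assumptions, not the study's exact specification.

```python
from collections import defaultdict

# Illustrative ICD-9/ICD-10 diabetes code prefixes; the study's actual
# code set is defined in its methods and may differ.
DIABETES_PREFIXES = ("250", "E08", "E09", "E10", "E11", "E13")

def icd_code_diagnosis_year(encounters):
    """Year of the *second* diabetes ICD code per patient, mirroring the
    ICD-code algorithm described above.

    `encounters`: iterable of (patient_id, year, icd_code) tuples, a
    hypothetical flattening of structured EHR data.
    """
    years_by_patient = defaultdict(list)
    for patient_id, year, code in encounters:
        if code.startswith(DIABETES_PREFIXES):
            years_by_patient[patient_id].append(year)
    return {
        pid: sorted(years)[1]  # second occurrence defines the diagnosis year
        for pid, years in years_by_patient.items()
        if len(years) >= 2     # patients with fewer than two codes are not dated
    }

# Patient "A" has diabetes codes in 2012 and 2014 -> diagnosis year 2014.
print(icd_code_diagnosis_year([("A", 2012, "E10.9"), ("A", 2014, "E10.9")]))
```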


2021 ◽  
Vol 27 (3) ◽  
pp. 205-214
Author(s):  
Xin Niu ◽  
Jingjing Jiang

Traditional multimedia classroom systems are inconvenient to use, difficult to maintain, and redundant in data storage. To solve these problems and apply cloud storage to the integration of university teaching resources, this paper designs a virtualized cloud storage platform for university multimedia classrooms. The platform has many advantages, such as reducing the initial investment in multimedia classrooms, simplifying management tasks, making maximum use of actual resources, and providing easy access to resources. Experiments and analysis show the feasibility and effectiveness of the platform. To address the problems of existing single-node repair algorithms in multimedia cloud storage systems, namely a large finite field, high codec complexity, high disk I/O (input/output) cost, and an imbalance between storage overhead and repair bandwidth, a network-coding-based single-node repair algorithm for multimedia cloud storage systems is proposed. The algorithm partitions multimedia file data into groups stored across the system and performs XOR (exclusive OR) operations on the data within each group over the finite field GF(2). When a node fails, the replacement node only needs to connect to two or three non-faulty nodes in the same group to accurately repair the data of the failed node. Theoretical analysis and simulation results show that the algorithm reduces codec complexity, repair cost, and disk I/O overhead; its storage cost is consistent with that of the minimum-storage regenerating code algorithm, while its repair bandwidth cost is close to that of the minimum-bandwidth regenerating code algorithm.
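The repair scheme's core idea, XOR parity within a small group, fits in a few lines. A minimal sketch, assuming one parity block per group of three data blocks; the group size, block layout, and byte-level granularity are illustrative, not the paper's exact design.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks; over GF(2), addition is bitwise XOR."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One group: three data nodes plus one XOR parity node.
data = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
parity = xor_blocks(data)

# Node 1 fails. Its block is the XOR of the surviving in-group blocks
# and the parity, so the replacement node contacts only group members.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Because repair touches only two or three in-group nodes, the bandwidth and disk I/O per repair stay small, which is the trade-off the abstract highlights against the regenerating-code baselines.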



Circulation ◽  
2021 ◽  
Vol 143 (Suppl_1) ◽  
Author(s):  
Kelly Cho ◽  
Nicholas Link ◽  
Petra Schubert ◽  
Zeling He ◽  
Jacqueline P Honerlaw ◽  
...  

Introduction: The majority of population-based studies of myocardial infarction (MI) rely on billing codes for classification. Classification algorithms employing machine learning (ML) are increasingly used for phenotyping with electronic health record (EHR) data. Hypothesis: ML algorithms integrating billing codes and information extracted from narrative notes using natural language processing (NLP) can improve classification of MI compared to billing code algorithms. Improved classification will improve power to compare risk factors across population subgroups. Methods: Retrospective cohort study of nationwide Veterans Affairs (VA) EHR data. MI was classified using 2 approaches: (1) a published billing code algorithm, (2) a published phenotyping pipeline incorporating NLP and ML. Results were compared against gold-standard chart review of MI outcomes in 308 Veterans. We also tested the known association between high-density lipoprotein cholesterol (HDL-C) and MI outcomes classified using the 2 approaches among Black and White Veterans, stratified by sex and race; a prior study showed HDL-C to be less protective for Black compared to White individuals. Results: We studied 17,176,658 Veterans; mean age 69 years, 94% male, 12% self-reported Black, 71% White. The billing code algorithm classified MI with a positive predictive value (PPV) of 0.64, compared to a PPV of 0.90 for the published ML approach; the latter classified a modestly higher percentage of non-White Veterans. Using the ML algorithm for MI, we replicated the reduced protective effect of HDL-C in Black vs White male and female Veterans (Table); with the billing code algorithm, no association was observed between low-density lipoprotein cholesterol (LDL-C) or HDL-C and MI among Black female Veterans. Conclusions: Using nationwide VA data, application of an ML approach improved classification of MI, particularly among non-White Veterans, resulting in improved power to study differences in association of MI risk factors among Black and White Veterans.
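For concreteness, a minimal sketch of the PPV comparison against chart review, on made-up labels rather than VA data; `billing`, `ml`, and `chart` below are hypothetical arrays, not study results.

```python
def ppv(predicted, gold):
    """Positive predictive value: among cases an algorithm calls MI, the
    fraction confirmed by gold-standard chart review."""
    called = [g for p, g in zip(predicted, gold) if p == 1]
    return sum(called) / len(called)

billing = [1, 1, 0, 1, 0, 1]  # billing-code algorithm calls (made up)
ml      = [1, 0, 0, 1, 0, 1]  # NLP/ML pipeline calls (made up)
chart   = [1, 0, 0, 1, 0, 1]  # chart-review gold standard (made up)
print(ppv(billing, chart), ppv(ml, chart))  # billing PPV below ML PPV
```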


2021 ◽  
Author(s):  
Boris Backovic

The project deals with the operation of a source-channel codec for a WCDMA-based multimedia system. The system is meant to transmit and receive both digitized speech and still-image signals. It uses part of the WCDMA technology to combine the transmitted signals through the implementation of direct-sequence spread spectrum and chip-sequencing methodologies. The Walsh code algorithm is used to ensure orthogonality among the different chip sequences. On the transmitter side, the system begins with a formatting stage in which both a speech signal and a still-image signal are digitized. The next stage applies a significant degree of data compression using appropriate algorithms: Lempel-Ziv-Welch for the speech signal and the Huffman code algorithm for the still image. These compression algorithms are implemented in the Source Encoder stage of the system. The system also provides basic FEC (forward error correction) capabilities, using both linear block code and convolutional code algorithms introduced in the Channel Encoder stage. The goal of these FEC algorithms is to detect and correct errors introduced during data transmission by channel imperfections. At the WCDMA stage the two signals are added together, forming an aggregate signal that is transmitted through the channel. On the receiver side, a digital demodulator separates the aggregate signal into two signals using the orthogonality of the code vectors. The Channel Decoder stage follows, where both signals, corrupted during transmission by channel imperfections, are recovered. The channel imperfections are simulated by random noise added to the aggregate signal in the WCDMA stage of the system. The last stage, the Source Decoder stage, converts the received signals from digital to analog form and reconstructs them so that they can be heard (speech) and seen (still image). Each stage of the system is simulated in MATLAB. The report consists of three major parts: a theoretical part, where the theory behind each stage is explained; an example part, where applicable numerical examples are provided and analyzed for better understanding of both the theory and the MATLAB code; and a results part, where the MATLAB results for each stage are analyzed.
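The orthogonality that Walsh codes provide, and why the receiver can separate the aggregate signal by correlation, can be shown in a few lines. A minimal sketch in Python (the project itself is in MATLAB), using the Sylvester construction of the Hadamard matrix; the two-user spreading example is illustrative, not the project's code.

```python
import numpy as np

def walsh_codes(order):
    """Rows of the 2^order x 2^order Hadamard matrix serve as Walsh codes."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])  # Sylvester construction
    return h

codes = walsh_codes(3)  # eight length-8 codes
n = len(codes)
# Distinct rows are orthogonal: the Gram matrix is n times the identity.
assert np.array_equal(codes @ codes.T, n * np.eye(n))

# Two users' bits spread with distinct codes and added on the channel;
# correlating with each user's code recovers each bit despite the mixing.
b1, b2 = 1, -1
aggregate = b1 * codes[1] + b2 * codes[2]
assert aggregate @ codes[1] / n == b1 and aggregate @ codes[2] / n == b2
```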



2021 ◽  
Vol 2 (2) ◽  
pp. 68
Author(s):  
Daniel Setiawan Cahyono ◽  
Shinta Estri Wahyuningrum

Optical Character Recognition (OCR) is a method by which a computer processes an image containing text, locates the characters in that image, and converts them to digital text. In this research, the Advanced Local Binary Pattern and Chain Code algorithms are tested for identifying alphabetic characters in images. Several image preprocessing methods are also needed, such as image transformation, image rescaling, grayscale conversion, edge detection, and edge thinning.
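A minimal sketch of 8-direction Freeman chain coding, the second technique the abstract names; the boundary below is a hand-fed pixel sequence, whereas the paper's pipeline would obtain it from edge detection and thinning.

```python
# Directions 0..7 counterclockwise from east: E, NE, N, NW, W, SW, S, SE,
# expressed as (row, col) offsets on an image grid (row grows downward).
MOVES = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Encode an ordered list of boundary pixels (row, col) as Freeman codes."""
    return [MOVES.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(points, points[1:])]

# A tiny boundary: two steps east, then one step south.
print(chain_code([(0, 0), (0, 1), (0, 2), (1, 2)]))  # [0, 0, 6]
```

The resulting code sequence is a compact shape descriptor that can then be matched against per-character templates.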


2021 ◽  
Vol 1098 (4) ◽  
pp. 042042
Author(s):  
A Kodir ◽  
R Fajar ◽  
A S Awalluddin ◽  
U Ruswandi ◽  
N Ismail ◽  
...  
