data framework
Recently Published Documents
Saadaldeen Rashid Ahmed ◽  
Zainab Ali Abbood ◽  
Hameed Mutlag Farhan ◽  
Baraa Taha Yasen ◽  
Mohammed Rashid Ahmed ◽  

This study aims to establish a small, text-independent speaker recognition system for a relatively small group of speakers on a sound stage. The problem was motivated by the International Space Station (ISS) use case of detecting whether the astronauts are speaking at a specific time. In this work, we employed machine learning, specifically a direct deep neural network (DNN)-based approach in which the posterior probabilities of the output layer are used to determine the speaker’s presence. In line with the small-footprint design objective, a simple DNN model with only as many hidden layers and hidden units per layer as necessary was designed, reducing the parameter cost through deliberate training to avoid the usual overfitting problem and to optimize algorithmic aspects such as context-based training, activation functions, validation, and the learning rate. The reference model was tested on two commercially available databases, namely, the TIMIT clean-speech database and the HTIMIT multi-handset communication database, as well as a noise-added TIMIT data framework built from four sound categories at three distinct signal-to-noise ratios. Briefly, we used a dynamic pruning method in which the weights of all layers are pruned simultaneously, and the pruning mechanism is reassigned. The usefulness of this approach was evaluated on all the above databases.
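The abstract does not spell out the dynamic pruning schedule, but the core idea it names, pruning the parameters of all layers simultaneously rather than layer by layer, can be sketched with a minimal magnitude-based pass over plain Python lists (the layer values and the pruning fraction below are illustrative assumptions, not the paper's actual configuration):

```python
def prune_all_layers(layers, fraction):
    """Zero out the smallest-magnitude weights across ALL layers at once.

    A single global threshold is computed over every weight in the network,
    so each layer loses its share of small weights in the same pruning step,
    mirroring the simultaneous-pruning idea described in the abstract.
    """
    flat = sorted(abs(w) for layer in layers for w in layer)
    k = int(len(flat) * fraction)          # number of weights to remove
    if k == 0:
        return layers
    threshold = flat[k - 1]                # k-th smallest magnitude
    return [[0.0 if abs(w) <= threshold else w for w in layer]
            for layer in layers]

# Toy two-layer "network": half the weights are pruned globally.
layers = [[0.9, -0.01, 0.5], [0.02, -0.8, 0.03]]
pruned = prune_all_layers(layers, 0.5)
```

Because the threshold is global, a layer dominated by small weights can lose more than half of its parameters in one step, which is exactly the behavior a per-layer pruning scheme would not exhibit.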

Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 166
Sudip Paul ◽  
Maheshrao Maindarkar ◽  
Sanjay Saxena ◽  
Luca Saba ◽  
Monika Turk ◽  

Background and Motivation: Diagnosis of Parkinson’s disease (PD) is often based on medical attention and clinical signs. It is subjective and does not have a good prognosis. Artificial Intelligence (AI) has played a promising role in the diagnosis of PD. However, it introduces bias due to a lack of sample size, poor validation, poor clinical evaluation, and the lack of a big data configuration. The purpose of this study is to compute the risk of bias (RoB) automatically. Method: The PRISMA search strategy was adopted to select the best 39 AI studies out of 85 PD studies closely associated with the early diagnosis of PD. The studies were used to compute 30 AI attributes (based on 6 AI clusters) using AP(ai)Bias 1.0 (AtheroPointTM, Roseville, CA, USA), and the mean aggregate score was computed. The studies were ranked and two cutoffs (Moderate-Low (ML) and High-Moderate (MH)) were determined to segregate the studies into three bins: low-, moderate-, and high-bias. Result: The ML and MH cutoffs were 3.50 and 2.33, respectively, yielding 7, 13, and 6 studies in the low-, moderate-, and high-bias bins. The best and worst architectures were “deep learning with sketches as outcomes” and “machine learning with electroencephalography,” respectively. We recommend (i) the usage of power analysis in a big data framework, (ii) scientific validation using unseen AI models, and (iii) clinical evaluation for reliability and stability tests. Conclusion: AI is a vital component for the diagnosis of early PD, and the recommendations must be followed to lower the RoB.
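The two-cutoff binning step is simple enough to sketch directly. Assuming (as the cutoff values suggest) that a higher mean aggregate score means a lower risk of bias, segregating the 39 studies into the three bins might look like this (the example scores are invented for illustration):

```python
def bias_bin(score, ml_cutoff=3.50, mh_cutoff=2.33):
    """Assign a study to a bias bin from its mean aggregate score.

    Assumes higher scores indicate lower risk of bias:
      score >= ML cutoff          -> low-bias
      MH cutoff <= score < ML     -> moderate-bias
      score < MH cutoff           -> high-bias
    """
    if score >= ml_cutoff:
        return "low"
    if score >= mh_cutoff:
        return "moderate"
    return "high"

# Hypothetical mean aggregate scores for three studies.
scores = [4.1, 3.0, 2.0]
bins = [bias_bin(s) for s in scores]
```

Counting the bin labels over all 39 ranked studies would then reproduce the 7/13/6 split reported in the abstract.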

2022 ◽  
Veerle Buffel ◽  
Katrien Danhieux ◽  
Philippe Bos ◽  
Roy Remmen ◽  
Josefien Van Olmen ◽  

Abstract

Background. To assess the quality of integrated diabetes care, we should be able to follow the patient throughout the care path, monitor his/her care process, and link these to his/her health outcomes, while simultaneously linking this information to the primary care system and its performance on structure- and organization-related quality indicators. However, the development of such a data framework is challenging, even in a period of increasing and improving health data storage and management. This study aims to develop an integrated multi-level data framework for the quality of diabetes care and to operationalize this framework in the fragmented Belgian health care and data landscape.

Methods. Based on document reviews and iterative expert consultations, theoretical approaches and quality indicators were identified and assessed. After mapping and assessing the validity of existing health information systems and available data sources through expert consultations, the theoretical framework was translated into a data framework with measurable quality indicators. The construction of the database included sampling procedures, data collection, and several technical and privacy-related aspects of linking and accessing Belgian datasets.

Results. To address three dimensions of quality of care, we integrated the chronic care model and the cascade-of-care approach, addressing respectively the structure-related quality indicators and the process- and outcome-related indicators. The corresponding data framework is based on self-collected data at the primary care practice level (using the Assessment of Quality of Integrated Care tool), and on health insurance data linked with lab data at the patient level.

Conclusion. In this study, we have described the transition from a theoretical quality-of-care framework to a unique multilevel database, which allows assessing the quality of diabetes care by considering the complete care continuum (process and outcomes) as well as organizational characteristics of primary care practices.
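The patient-level linkage the framework depends on, attaching lab results to health insurance records via a shared patient identifier, can be sketched as a simple left join. The field names (`patient_id`, `reimbursed`, `hba1c`) are illustrative assumptions, not the actual Belgian dataset schema:

```python
def link_by_patient(insurance_rows, lab_rows):
    """Left-join insurance records with lab results on a (pseudonymised)
    patient identifier; patients without lab data keep an empty list."""
    labs_by_patient = {}
    for row in lab_rows:
        labs_by_patient.setdefault(row["patient_id"], []).append(row)
    return [{**claim, "labs": labs_by_patient.get(claim["patient_id"], [])}
            for claim in insurance_rows]

# Hypothetical records: one patient with a lab result, one without.
claims = [{"patient_id": "p1", "reimbursed": True},
          {"patient_id": "p2", "reimbursed": False}]
labs = [{"patient_id": "p1", "hba1c": 6.8}]
linked = link_by_patient(claims, labs)
```

In practice such a join runs on pseudonymised identifiers under strict privacy controls, which is precisely the "technical and privacy-related aspects of linking" the Methods section refers to.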

2022 ◽  
Wilfried Yves Hamilton Adoni ◽  
Tarik Nahhal ◽  
Najib Ben Aoun ◽  
Moez Krichen ◽  
Mohammed Alzahrani

Abstract In this paper, we present a scalable and real-time intelligent transportation system based on a big data framework. The proposed system allows for the use of existing data from road sensors to better understand traffic flow and traveler behavior and to improve road network performance. Our transportation system is designed to process large-scale stream data to analyze traffic events such as incidents, crashes, and congestion. The experiments performed on the public transportation modes of the city of Casablanca in Morocco reveal that the proposed system achieves significant time savings, gathers large-scale data from many road sensors, and is inexpensive in terms of hardware resource consumption.
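The abstract does not describe the event-detection logic itself, but a common minimal pattern for flagging congestion from streaming sensor data is a sliding-window speed average per road segment. The window size and speed threshold below are invented for illustration, not the paper's parameters:

```python
from collections import deque

class CongestionDetector:
    """Flag congestion on one road segment when the average speed over the
    last `window` sensor readings drops below `threshold` (km/h)."""

    def __init__(self, window=3, threshold=20.0):
        self.readings = deque(maxlen=window)  # oldest readings fall off
        self.threshold = threshold

    def update(self, speed_kmh):
        """Ingest one reading; return True if the segment is congested."""
        self.readings.append(speed_kmh)
        avg = sum(self.readings) / len(self.readings)
        return avg < self.threshold

# Speeds stream in; congestion is flagged once the window average drops.
detector = CongestionDetector(window=3, threshold=20.0)
events = [detector.update(s) for s in [60, 15, 10, 12]]
```

A production system would run one such windowed aggregation per segment inside a stream-processing engine, which is where the "large-scale stream data" framing of the paper comes in.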

2022 ◽  
pp. 1865-1875
Krishan Tuli ◽  
Amanpreet Kaur ◽  
Meenakshi Sharma

Cloud computing offers various IT services to many users on a pay-as-you-use basis. As data grows day by day, there is a great need for cloud applications that can manage such huge amounts of data; a big data framework is the natural solution for analyzing and handling such large datasets. Various companies provide such frameworks for particular applications. A cloud framework is the assembly of different components, such as development tools, middleware for particular applications, and the database management services needed for the deployment, development, and management of cloud applications. This results in an effective model for scaling huge amounts of data on dynamically allocated resources while solving their complex problems. This article surveys the performance of cloud-based big data frameworks from various endeavors, which assists ventures in picking a suitable framework for their work and obtaining the desired outcome.
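The pay-as-you-use model the survey starts from is simple metered billing: each resource dimension is charged by consumption. A minimal sketch, with entirely hypothetical rates:

```python
def monthly_cost(usage_hours, rate_per_hour, storage_gb, rate_per_gb):
    """Pay-as-you-use bill: compute time and storage are each metered
    separately and summed (illustrative two-dimension model only)."""
    return usage_hours * rate_per_hour + storage_gb * rate_per_gb

# 120 compute-hours at $0.05/h plus 200 GB at $0.02/GB-month.
cost = monthly_cost(usage_hours=120, rate_per_hour=0.05,
                    storage_gb=200, rate_per_gb=0.02)   # ~$10
```

Real cloud pricing adds further metered dimensions (network egress, requests, tiered rates), but the principle of paying only for allocated resources is the same.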

2022 ◽  
Neetu Singh ◽  
Upkar Varshney ◽  
Sumantra Sarkar
