BrainApp: Using near-patient sensing through a mobile app and machine learning in brain tumour patients

2021, Vol 23 (Supplement_4), pp. iv2-iv2
Author(s): Nur Aizaan Anwar, Matthew Williams

Abstract

Aims
1. To assess the feasibility, acceptability, and performance of a mobile app, 'BRIAN', developed by The Brain Tumour Charity (BTC), in collecting data on quality of life (QOL), activity, and sleep for predicting disease progression in adult brain tumour patients.
2. To generate a prospectively collected dataset of patient measures obtained through mobile devices in brain tumour patients and healthy volunteers.
3. To assess compliance with, and performance of, micro-challenges (hand coordination, visual memory, speech, and facial features) in study participants using a mobile application.
4. To assess differences and systematic variation in micro-challenge performance between healthy volunteers and brain tumour patients.
5. To assess factors associated with micro-challenge performance in brain tumour patients, and the relationship between micro-challenges and standard measures of QOL and disease progression.
6. To assess the diagnostic performance of different machine learning models in detecting brain tumour progression.

Method
This abstract describes the protocol for a multi-centre, observational, non-randomised phase II trial in adult brain tumour patients and healthy volunteers in the UK. Participants will use the BRIAN mobile app, developed by BTC to help individuals cope with a brain tumour and share their journey with both researchers and clinicians. Participants will be required to enter information on their medical background, mood, and QOL; have the option to link fitness trackers to the app; and perform mini-games that assess speech, coordination, facial features, and reaction time. Patients will have their brain imaging and histopathology reports submitted to the sponsor. We will then investigate the correlation and temporal relationship of these multimodal data with conventional measures of disease progression. We will initially use traditional statistical methods (descriptive statistics and multilevel modelling), which will then inform the development of a machine learning model for predicting brain tumour progression.

Results
The BRIAN app is currently being beta-tested by healthy volunteers from the Computational Oncology Lab, Imperial College London. We are in the process of obtaining ethical approval for the trial.

Conclusion
This study may enable the development of a tool that detects earlier signs of disease progression, allowing earlier treatment, preservation of quality of life, and a better-informed choice of the best course of action. Such a tool would also be non-invasive, cheap, and quick, and could be used by patients in the comfort of their own homes.
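As a purely illustrative sketch of the kind of model the protocol ultimately aims to develop, the snippet below trains a simple classifier on hypothetical app-derived features (sleep, activity, micro-challenge scores, self-reported QOL). All feature names, labels, and data are invented placeholders, not the study's pipeline or dataset.

```python
# Minimal sketch (not the study's actual pipeline): app-derived measures used
# as features for a progression classifier. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-patient-week features:
# [sleep_hours, step_count_thousands, reaction_time_ms, visual_memory_score, self_reported_qol]
X = rng.normal(size=(200, 5))
# Hypothetical label: 1 = radiological progression in the following interval
y = (X[:, 2] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```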

Author(s): Nisha Yadav, Kakoli Banerjee, Vikram Bali

In the software industry, where the quality of the output depends on human performance, fatigue can be a cause of performance degradation. Fatigue not only degrades quality but is also a health risk factor: it contributes to sleep disorders, depression, and stress, which in turn can lead to serious health problems. This article presents a comparative study of different techniques that can be used for detecting fatigue in programmers and data miners who spend long hours in front of a computer screen. Machine learning can also be used for worker fatigue detection, but some factors are specific to software workers. One such factor is screen illumination: the light from a computer or laptop screen that is cast on the worker's face and makes it difficult for a machine learning algorithm to extract facial features. The article compares the techniques applicable to general fatigue detection and identifies the best-performing ones.
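One common preprocessing step for mitigating the screen-illumination problem described above is local contrast normalisation before extracting facial features. The sketch below uses OpenCV's CLAHE as an example; the file name is a hypothetical placeholder, and the article does not prescribe this particular technique.

```python
# Illustrative sketch only: reduce uneven screen illumination on a face image
# with CLAHE (contrast-limited adaptive histogram equalisation) before
# facial-feature extraction.
import cv2

frame = cv2.imread("worker_face.jpg")            # hypothetical captured frame
if frame is None:
    raise SystemExit("Place a sample face image at worker_face.jpg first")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # work on luminance only
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
normalized = clahe.apply(gray)                   # locally equalised contrast

cv2.imwrite("worker_face_normalized.jpg", normalized)
```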


Author(s): Pritom Bhowmik, Arabinda Saha Partha
Machine learning teaches computers to think in a way similar to how humans do. An ML model works by exploring data and identifying patterns with minimal human intervention. A supervised ML model learns by mapping an input to an output based on labelled examples of input-output (X, y) pairs, whereas an unsupervised ML model works by discovering patterns and information previously undetected in unlabelled data. As an ML project is an extensively iterative process, there is always a need to change the ML code/model and the datasets. When an ML model achieves 70-75% accuracy, the code or algorithm most probably works fine. Nevertheless, in many cases, e.g., medical or spam detection models, 75% accuracy is too low to deploy in production. A medical model used in sensitive tasks such as detecting certain diseases must have an accuracy level of 98-99%, which is a major challenge to achieve. In that scenario we may already have a good working model, so a model-centric approach may not help much in reaching the desired accuracy threshold; improving the dataset, however, will improve the overall performance of the model. Improving the dataset does not always require adding more and more data. Improving the quality of the data by establishing a reasonable baseline level of performance, labeler consistency, error analysis, and performance auditing will thoroughly improve the model's accuracy. This review paper focuses on the data-centric approach to improving the performance of a production machine learning model.
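As a minimal sketch of two data-centric checks mentioned above, labeler consistency and error analysis, the snippet below computes inter-annotator agreement (Cohen's kappa) and per-slice accuracy. The labels and slice names are hypothetical placeholders, not taken from the review.

```python
# Minimal sketch of two data-centric checks: labeler consistency and
# error analysis by data slice. All data are hypothetical.
from sklearn.metrics import cohen_kappa_score, accuracy_score

# Hypothetical labels from two annotators on the same 10 examples
annotator_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print("Labeler agreement (kappa):", cohen_kappa_score(annotator_a, annotator_b))

# Hypothetical model predictions, true labels, and a slice tag per example
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
slices = ["email", "sms", "email", "sms", "email",
          "sms", "email", "sms", "email", "sms"]

# Accuracy per slice points at where additional or cleaner data is needed
for s in set(slices):
    idx = [i for i, tag in enumerate(slices) if tag == s]
    acc = accuracy_score([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(f"Accuracy on slice '{s}': {acc:.2f}")
```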


2019, Vol 8 (4), pp. 1426-1430

Continuous Integration and Continuous Deployment (CI-CD) is a trending practice in agile software development. Continuous Integration helps developers find bugs before they reach production by running unit tests, smoke tests, etc. Continuous Deployment deploys the components of the application to production, so each new release reaches the client faster. Continuous Security ensures that the application is less prone to vulnerabilities by running static scans on the code and dynamic scans on the deployed releases. The goal of this study is to identify the benefits of adopting Continuous Integration - Continuous Deployment in application software. The pipeline involves the implementation of CI-CS-CD on ClubSoc, a club management web application, and the use of unsupervised machine learning algorithms to detect anomalies in the CI-CS-CD process. Continuous Integration is implemented using the Jenkins CI tool, Continuous Security using the Veracode tool, and Continuous Deployment using Docker and Jenkins. The results show that by adopting this methodology the author is able to improve code quality, find vulnerabilities using static scans, gain portability, and save time through automated deployment of applications with Docker and Jenkins. Applying machine learning helps in predicting defects, failures, and trends in the Continuous Integration pipeline, and it can help in predicting the business impact in Continuous Delivery. Unsupervised learning algorithms such as K-means clustering, Symbolic Aggregate Approximation (SAX), and Markov models are used for quality and performance regression analysis in the CI-CD model. Using the CI-CD model, developers can fix bugs pre-release, which benefits the company as a whole by raising profit and attracting more customers. The updated application reaches the client faster through Continuous Deployment. By analysing failure trends with unsupervised machine learning, developers may be able to predict where the next error is likely to occur and prevent it in the pre-build stage.
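The following is an illustrative sketch of the kind of unsupervised analysis described: clustering pipeline metrics with K-means and flagging runs far from their cluster centre as candidate anomalies. The metrics, data, and selection rule are hypothetical and are not taken from the ClubSoc pipeline.

```python
# Illustrative sketch: K-means on synthetic CI pipeline metrics, flagging the
# runs farthest from their assigned cluster centre as candidate anomalies.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical [build_duration_min, failed_tests] for 60 pipeline runs
runs = rng.normal(loc=[8.0, 1.0], scale=[1.5, 0.8], size=(60, 2))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(runs)
# Distance of each run to its assigned cluster centre
distances = np.linalg.norm(runs - km.cluster_centers_[km.labels_], axis=1)

# Report the three most distant runs for manual review
for i in np.argsort(distances)[-3:][::-1]:
    print(f"Run {i}: duration {runs[i, 0]:.1f} min, "
          f"failed tests {runs[i, 1]:.1f}, distance {distances[i]:.2f}")
```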


2011, Vol 223 (06)
Author(s): J Bode, A Sabag, S Kietz, G Neufeld, M Lakomek

Author(s): Feidu Akmel, Ermiyas Birihanu, Bahir Siraj

Software systems are any software products or applications that support business domains such as manufacturing, aviation, health care, insurance, and so on. Software quality is a means of measuring how software is designed and how well it conforms to that design. Some of the variables we look for in software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, the quality standard used by one organisation differs from that of another, so it is better to apply software metrics to measure the quality of software. Attributes gathered from source code through software metrics can be input for a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, this study reviews the application of machine learning to software defect data gathered from previous research works.
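As a minimal sketch of feeding source-code metrics into a defect predictor, the snippet below trains a classifier on hypothetical per-module metrics (lines of code, cyclomatic complexity, churn). The metric values, labels, and model choice are assumptions for illustration, not results from the study.

```python
# Minimal sketch: software metrics as features for a defect predictor.
# All metric values and defect labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
# Hypothetical per-module metrics: [lines_of_code, cyclomatic_complexity, churn]
X = rng.integers(low=[50, 1, 0], high=[2000, 40, 100], size=(300, 3)).astype(float)
# Hypothetical defect label correlated with complexity and churn
y = ((X[:, 1] > 20) & (X[:, 2] > 50)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```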


2019, Vol 1 (1), pp. 92
Author(s): Fazidah Hanim Husain

Lighting is one of the key elements in any space and building infrastructure. A good design for an area in a building requires sufficient light to support the efficiency of the activities carried out there. The correct method allows natural light to be transmitted while reducing heat and glare, providing a conducive learning environment. Light has a significant influence on the quality of a space and contributes to the focus of students in an architecture studio. Previous research has shown that light also affects the emotions, behaviour, and mood of students. The artificial lighting used most of the time in an architecture studio, during both day and night, may create excess and inadequacy at the same time. Therefore, this paper focuses on identifying the quality of light in the architecture studio at UiTM (Perak) in order to instil a creative learning environment. Several methodologies were adopted in this study, such as illuminance level measurement using a lux meter (LM-8100) and a questionnaire survey gauging the lighting comfort level from the students' perspective. The study revealed that the illuminance level in the architecture studio is insufficient, outside the acceptable range stated in Malaysian Standard MS 1525:2007, and not evenly distributed. The study also concluded that the current studio environment is not conducive and appears monotonous.
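For illustration only, the snippet below shows how average illuminance and a uniformity ratio could be computed from lux-meter readings and compared with a recommended range. The readings and the 300-500 lx range are hypothetical placeholders, not the study's measurements or the exact MS 1525:2007 values.

```python
# Illustrative sketch: average illuminance and uniformity from lux-meter
# readings, checked against a hypothetical recommended range.
readings_lux = [210, 185, 240, 320, 150, 275, 198, 230]   # hypothetical grid points

average = sum(readings_lux) / len(readings_lux)
uniformity = min(readings_lux) / average    # common uniformity ratio (Emin / Eavg)

recommended_min, recommended_max = 300, 500  # placeholder range, not MS 1525:2007
print(f"Average illuminance: {average:.0f} lx, uniformity: {uniformity:.2f}")
print("Within recommended range" if recommended_min <= average <= recommended_max
      else "Outside recommended range")
```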


2020, Vol 16 (4), pp. 730-744
Author(s): V.I. Loktionov

Subject. The article reviews how strategic threats to energy security influence the quality of people's life. Objectives. The study extends the theory of analysing strategic threats to energy security to cover the quality of people's life. Methods. To analyse how strategic threats to energy security spread across cross-sectoral commodity and production chains and influence the quality of people's life, I applied factor analysis and the general scientific methods of analysis and synthesis. Results. I suggest interpreting strategic threats to energy security as risks to people's quality of life arising from a reduction in the volume of energy supply. I identified mechanisms reflecting how the fuel and energy complex and its development influence the quality of people's life. The article sets out a method to assess such quality-of-life risks arising from strategic threats to energy security. Conclusions and Relevance. In the current geopolitical situation, strategic threats to energy security cause long-standing adverse consequences for the quality of people's life. If strategic threats to energy security are further construed as risks to the quality of people's life, this will facilitate the preparation and implementation of a more effective governmental energy policy, which will subsequently raise the economic well-being of people.


2020
Author(s): Saeed Nosratabadi, Amir Mosavi, Puhong Duan, Pedram Ghamisi, Ferdinand Filip, ...

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward advancements in sophisticated hybrid deep learning models.


2015, Vol 6 (1), pp. 50-57
Author(s): Rizqa Raaiqa Bintana, Putri Aisyiyah Rakhma Devi, Umi Laili Yuhana

The quality of software can be measured by its return on investment. Factors which may affect the return on investment (ROI) are tangible factors (such as cost) and intangible factors (such as the impact of the software on its users or stakeholders). The factors of the software itself are assessed through reviewing, testing, process audits, and the performance of the software. This paper discusses return on investment (ROI) assessment criteria derived from the software and its users. These criteria indicate that the approach may support a rational consideration of all relevant criteria when evaluating software, and examples of actual return on investment models are shown. An analysis was conducted of the assessment criteria that affect the return on investment; when these criteria require disproportionate effort, the return on investment of the software decreases. Index Terms - Assessment criteria, Quality assurance, Return on Investment, Software product
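As a minimal sketch of a basic ROI calculation in the spirit of the criteria discussed, the snippet below combines tangible costs with a monetised estimate of intangible impact. All figures are hypothetical placeholders.

```python
# Minimal sketch: ROI from tangible costs and a monetised intangible estimate.
# Every figure is a hypothetical placeholder.
development_cost = 120_000            # tangible: build and test effort
assessment_effort_cost = 15_000       # tangible: reviews, testing, audits
tangible_benefit = 160_000            # e.g. measured cost savings
intangible_benefit_estimate = 25_000  # e.g. monetised user-satisfaction impact

total_cost = development_cost + assessment_effort_cost
total_benefit = tangible_benefit + intangible_benefit_estimate
roi = (total_benefit - total_cost) / total_cost
print(f"ROI: {roi:.2%}")  # positive means the investment pays back
```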

