A large, open source dataset of stroke anatomical brain images and manual lesion segmentations

2017
Author(s):
Sook-Lei Liew
Julia M. Anglin
Nick W. Banks
Matt Sondag
Kaori L. Ito
...

Abstract. Stroke is the leading cause of adult disability worldwide, with up to two-thirds of individuals experiencing long-term disabilities. Large-scale neuroimaging studies have shown promise in identifying robust biomarkers (e.g., measures of brain structure) of long-term stroke recovery following rehabilitation. However, analyzing large rehabilitation-related datasets is problematic due to barriers in accurate stroke lesion segmentation. Manually-traced lesions are currently the gold standard for lesion segmentation on T1-weighted MRIs, but are labor intensive and require anatomical expertise. While algorithms have been developed to automate this process, the results often lack accuracy. Newer algorithms that employ machine-learning techniques are promising, yet these require large training datasets to optimize performance. Here we present ATLAS (Anatomical Tracings of Lesions After Stroke), an open-source dataset of 304 T1-weighted MRIs with manually segmented lesions and metadata. This large, diverse dataset can be used to train and test lesion segmentation algorithms and provides a standardized dataset for comparing the performance of different segmentation methods. We hope ATLAS release 1.1 will be a useful resource to assess and improve the accuracy of current lesion segmentation methods.

2021
Author(s):
Sook-Lei Liew
Bethany Lo
Miranda R. Donnelly
Artemis Zavaliangos-Petropulu
Jessica N. Jeong
...

Abstract. Accurate lesion segmentation is critical in stroke rehabilitation research for the quantification of lesion burden and accurate image processing. Current automated lesion segmentation methods for T1-weighted (T1w) MRIs, commonly used in rehabilitation research, lack accuracy and reliability. Manual segmentation remains the gold standard, but it is time-consuming, subjective, and requires significant neuroanatomical expertise. We previously released a large, open-source dataset of stroke T1w MRIs and manually segmented lesion masks (ATLAS v1.2, N=304) to encourage the development of better algorithms. However, many methods developed with ATLAS v1.2 report low accuracy, are not publicly accessible or are improperly validated, limiting their utility to the field. Here we present ATLAS v2.0 (N=955), a larger dataset of T1w stroke MRIs and manually segmented lesion masks that includes both training (public) and test (hidden) data. Algorithm development using this larger sample should lead to more robust solutions, and the hidden test data allows for unbiased performance evaluation via segmentation challenges. We anticipate that ATLAS v2.0 will lead to improved algorithms, facilitating large-scale stroke rehabilitation research.
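Segmentation algorithms developed on datasets like ATLAS are typically scored against the manual masks with the Dice coefficient. A minimal sketch of that metric, using toy masks rather than ATLAS data:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping 2D "lesion" masks (real masks are 3D volumes)
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1   # 16 voxels
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 3:7] = 1    # 16 voxels, 9 of which overlap the ground truth
print(round(dice_score(pred, truth), 4))  # → 0.5625
```

The hidden test set in ATLAS v2.0 allows this kind of score to be computed by challenge organizers without the labels ever being released.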


Author(s):  
O. Smith
H. Cho

Abstract. Studying deforestation has long been an important topic in forestry research. In particular, canopy classification using remotely sensed data plays an essential role in monitoring tree canopy on a large scale. As remote sensing technologies advance, the quality and resolution of satellite imagery have significantly improved. Oftentimes, leveraging high-resolution imagery such as National Agriculture Imagery Program (NAIP) imagery requires proprietary software. However, the lack of insight into the inner workings of such software and the inability to modify its code lead many researchers towards open-source solutions. In this research, we introduce CanoClass, an open-source, cross-platform canopy classification system written in Python. CanoClass utilizes the Random Forest and Extra Trees algorithms provided by scikit-learn to classify canopy using remote sensing imagery. Based on our benchmark tests, this new canopy classification system was 283% to 464% faster than the commercial Feature Analyst, while producing comparable results with a similarity of 87.56% to 87.62%.
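The scikit-learn workflow the abstract describes can be sketched as follows. The pixel data below is a synthetic stand-in for a NAIP tile, the labels are derived rather than hand-digitized, and the variable names are illustrative, not CanoClass's actual API:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a 4-band tile: (rows, cols, bands) flattened to (pixels, bands)
bands = rng.random((100, 100, 4))
X = bands.reshape(-1, 4)
# Synthetic labels: 1 = canopy, 0 = non-canopy (in practice from digitized training sites)
y = (bands[..., 3] > 0.5).astype(int).reshape(-1)

# CanoClass supports both ensemble learners; train each and predict a canopy raster
for Model in (RandomForestClassifier, ExtraTreesClassifier):
    clf = Model(n_estimators=50, n_jobs=-1, random_state=0)
    clf.fit(X, y)
    canopy_map = clf.predict(X).reshape(100, 100)  # back to raster shape
    print(Model.__name__, "canopy fraction:", round((canopy_map == 1).mean(), 2))
```

Real usage would read the imagery with a raster library (e.g. rasterio or GDAL) and hold out separate validation pixels; the sketch only shows the ensemble-classification core.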


2021
Vol 1804 (1)
pp. 012133
Author(s):
Mahmood Shakir Hammoodi
Hasanain Ali Al Essa
Wial Abbas Hanon

2021
Author(s):
Sophie de Bruin
Jannis Hoch
Nina von Uexkull
Halvard Buhaug
Nico Wanders

The socioeconomic impacts of changes in climate-related and hydrology-related factors are increasingly acknowledged to affect the onset of violent conflict. Consensus on the general mechanisms linking these factors with conflict is, however, still limited. The incomplete understanding of the non-linearities between the components involved, together with the lack of sufficient data, makes it hard to address long-term violent conflict risk.

Although it is neither desirable nor feasible to make exact predictions, projections are a viable means of providing insight into potential future conflict risks and their uncertainties. Constructing diverse scenarios is therefore a legitimate way to handle and understand these uncertainties, since each scenario delivers insight into a possible realization of the future.

Through machine learning techniques, we (re)assess the major drivers of conflict for the current situation in Africa, which are then applied to project the regions at risk under different scenarios. The model accurately reproduces observed historical patterns, with a high ROC score of 0.91. We show that socioeconomic factors are the most dominant when projecting conflicts over the African continent. The projections show an overall reduction in conflict risk, as increased economic welfare offsets the adverse impacts of climate change and hydrological variables. It must be noted, however, that these projections are based on current relations; if the relations between drivers and conflict change in the future, the resulting regions at risk may change too. By identifying the most prominent drivers, conflict risk mitigation measures can be tuned more accurately to reduce the direct and indirect consequences of climate change on the population of Africa.
As new and improved data become available, the model can be updated for more robust projections of conflict risk in Africa under climate change.
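The reported ROC score corresponds to evaluating a binary conflict/no-conflict classifier against held-out observations. A minimal sketch of that evaluation, using synthetic driver data and a random forest as a stand-in for the abstract's model (feature meanings are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic grid cells: columns stand in for socioeconomic and hydro-climatic drivers
n = 2000
X = rng.random((n, 5))  # e.g. income level, population, drought index, ... (illustrative)
# Synthetic "conflict onset" label loosely tied to the first (socioeconomic) driver
y = (X[:, 0] + 0.3 * rng.standard_normal(n) < 0.35).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# ROC AUC on held-out cells, analogous to the 0.91 the abstract reports
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.2f}")
```

The same fitted model, fed scenario-specific driver values instead of held-out data, would yield the forward projections the abstract describes.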


2021
Author(s):
Nikos Fazakis
Elias Dritsas
Otilia Kocsis
Nikos Fakotakis
Konstantinos Moustakas

2018
Vol 27 (03)
pp. 1850011
Author(s):
Athanasios Tagaris
Dimitrios Kollias
Andreas Stafylopatis
Georgios Tagaris
Stefanos Kollias

Neurodegenerative disorders, such as Alzheimer’s and Parkinson’s, constitute a major factor in long-term disability and are becoming a more and more serious concern in developed countries. As there are, at present, no effective therapies, early diagnosis along with avoidance of misdiagnosis seems critical to ensuring a good quality of life for patients. In this sense, the adoption of computer-aided diagnosis tools can offer significant assistance to clinicians. In the present paper, we first provide a comprehensive recording of medical examinations relevant to those disorders. Then, a review is conducted concerning the use of Machine Learning techniques in supporting diagnosis of neurodegenerative diseases, with reference to the medical datasets these techniques have used. Special attention is given to the field of Deep Learning. In addition, we announce the launch of a newly created dataset for Parkinson’s disease, containing epidemiological, clinical and imaging data, which will be publicly available to researchers for benchmarking purposes. To assess the potential of the new dataset, an experimental study in Parkinson’s diagnosis is carried out, based on state-of-the-art Deep Neural Network architectures, yielding very promising accuracy results.
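As an illustration of the benchmarking workflow such a dataset enables, the sketch below cross-validates a small fully connected network on synthetic tabular features. The paper's actual architectures operate on imaging data and are far deeper; everything here (features, labels, layer sizes) is an invented placeholder:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic cohort: rows = subjects, columns = epidemiological/clinical features
n = 400
X = rng.random((n, 10))
# Synthetic binary diagnosis label (illustrative, not a clinical rule)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Small fully connected network; real studies use much deeper nets on imaging data
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy like this is the kind of baseline a public benchmark dataset lets different groups compare directly.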


Author(s):  
Prachi

This chapter describes how botnets, now the leading cyber threat on the web, serve as the key platform for carrying out large-scale distributed attacks. Despite a substantial amount of research in the fields of botnet detection and analysis, bot-masters continue to introduce new techniques that make botnets more sophisticated, destructive, and hard to detect, with the help of code encryption and obfuscation. This chapter proposes a new model to detect botnet behavior on the basis of traffic analysis and machine learning techniques. Because traffic analysis does not depend on payload inspection, the proposed technique is immune to code encryption and other evasion techniques generally used by bot-masters. This chapter analyzes benchmark datasets as well as real-time generated traffic to determine the feasibility of botnet detection using traffic flow analysis. Experimental results clearly indicate that the proposed model is able to classify network traffic as botnet or normal traffic with high accuracy and low false-positive rates.
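The payload-independent approach can be sketched as follows. The flow features and labels below are synthetic placeholders, not the chapter's actual feature set or dataset, and the classifier choice is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)

# Synthetic flow records: duration, packet count, mean packet size, bytes/s, inter-arrival
# time. These are payload-independent, so encryption/obfuscation does not affect them.
n = 1000
X = rng.random((n, 5))
y = (X[:, 1] * X[:, 4] > 0.25).astype(int)  # synthetic "botnet" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Low false-positive rate is the key operational requirement the abstract highlights
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"false-positive rate: {fp / (fp + tn):.3f}")
```

On real traffic, the feature vectors would come from a flow exporter (e.g. NetFlow-style records) rather than random numbers, but the classification step is the same.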


Author(s):  
Stijn Hoppenbrouwers
Bart Schotten
Peter Lucas

Many model-based methods in AI require formal representation of knowledge as input. For the acquisition of highly structured, domain-specific knowledge, machine learning techniques still fall short, and knowledge elicitation and modelling remain the standard. However, obtaining formal models from informants who have few or no formal skills is a non-trivial aspect of knowledge acquisition, and can be viewed as an instance of the well-known “knowledge acquisition bottleneck”. Based on the authors’ work in conceptual modelling and method engineering, this paper casts methods for knowledge modelling in the framework of games. The resulting games-for-modelling approach is illustrated by a first prototype of such a game. The authors’ long-term goal is to lower the threshold for formal knowledge acquisition and modelling.


2020
pp. 146144482093944
Author(s):
Aimei Yang
Adam J Saffer

Social media can offer strategic communicators cost-effective opportunities to reach millions of individuals. However, in practice it can be difficult to be heard in these crowded digital spaces. This study takes a strategic network perspective and draws from recent research in network science to propose the network contingency model of public attention. This model argues that in the networked social-mediated environment, an organization’s ability to attract public attention on social media is contingent on its ability to fit its network position with the network structure of the communication context. To test the model, we combine data mining, social network analysis, and machine-learning techniques to analyze a large-scale Twitter discussion network. The results of our analysis of Twitter discussion around the refugee crisis in 2016 suggest that in high core-periphery network contexts, “star” positions were most influential, whereas in low core-periphery network contexts a “community” strategy was crucial to attracting public attention.
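Identifying “star” positions in a core-periphery network can be sketched with networkx degree centrality on a toy graph (the node names and structure below are invented for illustration, not the study's data):

```python
import networkx as nx

# Toy retweet/mention network: a densely connected core plus peripheral accounts
G = nx.Graph()
core = ["ngo_a", "ngo_b", "ngo_c", "ngo_d"]
G.add_edges_from((u, v) for i, u in enumerate(core) for v in core[i + 1:])  # full core
for i in range(8):  # peripheral accounts, each tied to a single core member
    G.add_edge(f"user{i}", core[i % len(core)])

# Degree centrality as a simple proxy for "star" positions in a core-periphery structure
centrality = nx.degree_centrality(G)
stars = sorted(centrality, key=centrality.get, reverse=True)[:4]
print(stars)  # the four core accounts rank highest
```

The study's actual analysis combines this kind of structural measure with data mining and machine-learning steps over a much larger Twitter network; the sketch only shows the network-position side.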

