A Data Augmentation Method for War Trauma Using the War Trauma Severity Score and Deep Neural Networks

Electronics, 2021, Vol 10 (21), pp. 2657
Author(s): Jibin Yin, Pengfei Zhao, Yi Zhang, Yi Han, Shuoyu Wang

The demand for large-scale analysis and research of data on trauma from modern warfare is increasing day by day, but the amount of existing data is not sufficient to meet such demand. In this study, an integrated modeling approach incorporating a war trauma severity scoring algorithm (WTSS) and deep neural networks (DNN) is proposed. First, the proposed WTSS, which applies multiple non-linear regression based on the characteristics of war trauma data and the medical evaluation of an expert panel, performs a standardized assessment of an injury and predicts its trauma consequences. Second, to generate virtual injuries, injured parts, injury types, and complications were randomly sampled and combined according to their probabilities of occurrence, and WTSS was then used to assess the consequences of each virtual injury. Third, to evaluate the accuracy of the predicted injury consequences, we built a DNN classifier, trained it with the generated data, and tested it with real data. Finally, we used the Delphi method to filter out unreasonable injuries and improve data rationality. The experimental results verified that the proposed approach surpassed traditional artificial generation methods, achieved a prediction accuracy of 84.43%, and realized large-scale, credible war trauma data augmentation.
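To make the sampling-and-scoring step concrete, here is a minimal sketch of such a pipeline. The occurrence tables and the wtss_score() function are hypothetical placeholders: the paper's actual WTSS is a multiple non-linear regression calibrated by an expert panel, whose coefficients are not given here.

```python
# A minimal sketch of virtual-injury generation; all probability values
# and the scoring function are illustrative assumptions, not the paper's.
import random

# Hypothetical occurrence probabilities (illustrative values only).
INJURED_PARTS = {"head": 0.15, "chest": 0.20, "abdomen": 0.20, "limbs": 0.45}
INJURY_TYPES = {"gunshot": 0.40, "blast": 0.35, "burn": 0.15, "crush": 0.10}
COMPLICATIONS = {"none": 0.50, "hemorrhage": 0.25, "infection": 0.15, "shock": 0.10}

def sample_category(table):
    """Draw one key from a {category: probability} table."""
    keys, probs = zip(*table.items())
    return random.choices(keys, weights=probs, k=1)[0]

def wtss_score(part, injury_type, complication):
    """Placeholder for the paper's WTSS regression model."""
    base = {"head": 8, "chest": 6, "abdomen": 6, "limbs": 3}[part]
    modifier = {"none": 0.0, "hemorrhage": 2.5, "infection": 1.5, "shock": 3.0}[complication]
    return base + modifier  # the real model is non-linear and expert-calibrated

def generate_virtual_injury():
    """Randomly combine injury attributes, then score the result."""
    part = sample_category(INJURED_PARTS)
    injury_type = sample_category(INJURY_TYPES)
    complication = sample_category(COMPLICATIONS)
    return {"part": part, "type": injury_type, "complication": complication,
            "severity": wtss_score(part, injury_type, complication)}

virtual_dataset = [generate_virtual_injury() for _ in range(10000)]
```

In the paper's full pipeline, a dataset generated this way would then be screened with the Delphi method and used to train the DNN classifier.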

2021, Vol 15
Author(s): Chao He, Jialu Liu, Yuesheng Zhu, Wencai Du

Classification of electroencephalogram (EEG) signals is a key approach to measuring the rhythmic oscillations of neural activity and is one of the core technologies of brain-computer interface (BCI) systems. However, extracting features from non-linear and non-stationary EEG signals remains a challenging task for current algorithms. With the development of artificial intelligence, various advanced algorithms have been proposed for signal classification in recent years. Among them, deep neural networks (DNNs) have become the most attractive type of method due to their end-to-end structure and powerful ability to extract features automatically. However, it is difficult to collect large-scale datasets in practical BCI applications, which may lead to overfitting or weak generalizability of the classifier. To address these issues, data augmentation (DA) has been proposed as a promising technique for improving the performance of decoding models. In this article, we review recent studies and developments in DA strategies for DNN-based EEG classification. The review addresses three questions: which EEG-based BCI paradigms are used, what types of DA methods are adopted to improve the DNN models, and what accuracy can be obtained. Our survey summarizes current practices and performance outcomes, aiming to promote and guide the deployment of DA for EEG classification in future research and development.
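As a concrete illustration of the family of DA strategies such surveys cover, the sketch below applies three transforms commonly reported for raw EEG epochs: Gaussian noise addition, time shifting, and amplitude scaling. The parameter values are illustrative assumptions, not recommendations from the review.

```python
# A minimal sketch of three common EEG data-augmentation transforms;
# sigma, shift range, and scale range are assumed values for illustration.
import numpy as np

def add_gaussian_noise(epoch, sigma=0.01):
    """epoch: (channels, samples) array; jitter every sample."""
    return epoch + np.random.normal(0.0, sigma, size=epoch.shape)

def time_shift(epoch, max_shift=50):
    """Circularly shift the signal along the time axis."""
    shift = np.random.randint(-max_shift, max_shift + 1)
    return np.roll(epoch, shift, axis=-1)

def amplitude_scale(epoch, low=0.9, high=1.1):
    """Rescale the overall signal amplitude."""
    return epoch * np.random.uniform(low, high)

# Example: augment one 64-channel, 4-second epoch sampled at 250 Hz.
epoch = np.random.randn(64, 1000)
augmented = amplitude_scale(time_shift(add_gaussian_noise(epoch)))
```

Each transform yields a new labeled epoch, letting a small recorded dataset be expanded many-fold before DNN training.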


Author(s): Matteo Magnani, Alexandra Segerberg

Visual politics is becoming increasingly salient online. The qualitative methods of this research tradition do not scale to complex media ecologies, but advances in deep neural networks open an unprecedented path to large-scale analysis on the basis of actual visual content. However, the analysis of social visuals is challenging, since social and political scenes are semantically rich and convey complex narratives and ideas. This paper examines validity conditions for integrating deep neural network tools into the study of digitally augmented social visuals. It argues that the complexity of social visuals needs to be reflected in the validation process and its communication: it is necessary to move beyond the conventionally dichotomous approach to neural network validation, which focuses on the data and the neural network separately, and instead acknowledge the interdependency between data and tool. The final definition of good data is not available until the end of the process, which itself relies on a tool that needs good data to be trained. Themes change during the process not just because of our interaction with the data, but also because of our interactions with the tool and the specific way in which it mediates our analysis. An upshot is that the conventional approach to performance assessment, i.e., counting errors, is potentially misleading in this context. We explore our argument experimentally in the context of a study of climate communication on YouTube. Climate themes such as polar bears in Arctic landscapes and elite people/events present tough cases of social visuals.
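A toy numerical example, with made-up labels and counts, of the point about error counting: aggregate accuracy can look strong while a rare but substantively important theme fails almost completely.

```python
# Made-up predictions for 100 frames: 10 of a rare theme, 90 of a common one.
from collections import defaultdict

y_true = ["polar_bear"] * 10 + ["elite_event"] * 90
y_pred = ["landscape"] * 8 + ["polar_bear"] * 2 + ["elite_event"] * 90

correct, total = defaultdict(int), defaultdict(int)
for t, p in zip(y_true, y_pred):
    total[t] += 1
    correct[t] += int(t == p)

print("overall accuracy:", sum(correct.values()) / len(y_true))  # 0.92
for theme in total:
    # per-theme accuracy: polar_bear collapses to 0.2 despite 0.92 overall
    print(theme, "accuracy:", correct[theme] / total[theme])
```

This is exactly the situation where reporting a single error count would obscure the tool's failure on the theme that the analysis cares about.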


Author(s): Alex Hernández-García, Johannes Mehrer, Nikolaus Kriegeskorte, Peter König, Tim C. Kietzmann

2019, Vol 20 (1)
Author(s): Fuyong Xing, Yuanpu Xie, Xiaoshuang Shi, Pingjun Chen, Zizhao Zhang, ...

Abstract

Background: Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed.

Results: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs may not deliver desirable results and would require model fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and other/non-target data does not always yield higher nucleus detection accuracy; proper data manipulation during model training may be required to achieve good performance.

Conclusions: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report a few significant findings, some of which may not have been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
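As a rough illustration of the model family evaluated here, the following PyTorch sketch implements a pixel-to-pixel fully convolutional regression network; the layer sizes and loss are assumptions for illustration, not the authors' exact architecture.

```python
# A minimal sketch, assuming an encoder-decoder FCN that regresses a
# continuous nucleus-proximity map; architecture details are illustrative.
import torch
import torch.nn as nn

class FCNRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),  # one regression value per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train against a proximity map with MSE; local maxima of the predicted
# map then give candidate nucleus centers.
model = FCNRegressor()
pred = model(torch.randn(1, 3, 256, 256))   # -> (1, 1, 256, 256)
loss = nn.functional.mse_loss(pred, torch.rand(1, 1, 256, 256))
```

Cross-organ experiments of the kind described above amount to training such a model on one organ's images and measuring detection quality on another's, with or without fine-tuning.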


2019, Vol 134, pp. 53-65
Author(s): Paolo Vecchiotti, Giovanni Pepe, Emanuele Principi, Stefano Squartini

2021, Vol 5 (3), pp. 1-10
Author(s): Melih Öz, Taner Danışman, Melih Günay, Esra Zekiye Şanal, Özgür Duman, ...

The human eye contains valuable information about an individual's identity and health. Segmenting the eye into distinct regions is therefore an essential step towards gathering this information precisely. The main challenges in segmenting the human eye include low-light conditions, reflections on the eye, variations in the eyelids, and head positions that make an eye image hard to segment. For this reason, deep neural networks are preferred, given their success in segmentation problems. However, deep neural networks need a large amount of manually annotated data for training. Manual annotation is a labor-intensive task, and to tackle this problem, we used data augmentation methods to improve synthetic data. In this paper, we explore whether, given limited data, segmentation performance can be enhanced by using similar-context data together with image augmentation methods. Our training and test sets consist of 3D synthetic eye images generated with the UnityEyes application and manually annotated real-life eye images, respectively. We examined the effect of using synthetic eye images with the Deeplabv3+ network under different conditions, applying image augmentation methods to the synthetic data. According to our experiments, the network trained with processed synthetic images alongside real-life images produced better mIoU results than the network trained only with real-life images in the Base dataset. We also observed an mIoU increase on the test set we created from MICHE II competition images.
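For illustration, the sketch below shows the kind of augmentations, here via torchvision, that one might apply to UnityEyes renders before training; the specific transforms and parameters are assumptions, since the paper does not enumerate them in this abstract.

```python
# A minimal sketch of augmenting a synthetic eye image; transform choices
# and parameters are assumed for illustration, not the paper's settings.
from PIL import Image
import torchvision.transforms as T

augment = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4),         # mimic low-light variation
    T.GaussianBlur(kernel_size=5),                       # soften synthetic sharpness
    T.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # head-pose jitter
])

synthetic_eye = Image.new("RGB", (320, 240))  # stand-in for a UnityEyes render
augmented_eye = augment(synthetic_eye)
```

Note that for a segmentation task, any geometric transform (such as the affine jitter above) must be applied identically to the annotation mask so that pixels and labels stay aligned; photometric transforms apply to the image only.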

