A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data

2021 ◽  
Vol 210 ◽  
pp. 106371
Author(s):  
Elisa Moya-Sáez ◽  
Óscar Peña-Nogales ◽  
Rodrigo de Luis-García ◽  
Carlos Alberola-López
2020 ◽  
Vol 4 (4) ◽  
pp. 1-4
Author(s):  
Mohammad Nabati ◽  
Hojjat Navidan ◽  
Reza Shahbazian ◽  
Seyed Ali Ghorashi ◽  
David Windridge

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4756
Author(s):  
Irvin Hussein Lopez-Nava ◽  
Luis M. Valentín-Coronado ◽  
Matias Garcia-Constantino ◽  
Jesus Favela

Activity recognition is one of the most active areas of research in ubiquitous computing. In particular, gait activity recognition is useful for identifying risk factors in people's health that are directly related to their physical activity. One of the issues in activity recognition, and gait recognition in particular, is that datasets are often unbalanced (i.e., the distribution of classes is not uniform), and due to this disparity, models tend to favor the classes with more instances. In the present study, two methods for classifying gait activities using accelerometer and gyroscope data from a large-scale public dataset were evaluated and compared. The gait activities in this dataset are: (i) going down an incline, (ii) going up an incline, (iii) walking on level ground, (iv) going down stairs, and (v) going up stairs. The proposed methods are based on conventional (shallow) and deep learning techniques. In addition, the data were evaluated under three treatments: the original unbalanced data, sampled data, and augmented data. The last treatment generated synthetic data from segmented gait data. The best results were obtained with classifiers built from augmented data, with F-measures of 0.812 (σ = 0.078) for the shallow learning approach and 0.927 (σ = 0.033) for the deep learning approach. Moreover, the data augmentation strategy proposed to deal with class imbalance increased classification performance with both techniques.
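The abstract does not specify how the synthetic gait windows are generated; a common approach for inertial-sensor data is to jitter and scale existing windows of the minority classes until the class counts match. The sketch below illustrates that idea with hypothetical data; the window size, channel count, and augmentation parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_window(window, sigma=0.05, scale_range=(0.9, 1.1)):
    """Create a synthetic variant of one gait window by random
    amplitude scaling plus additive Gaussian jitter.

    window: (n_samples, n_channels) array of accelerometer/gyroscope readings.
    """
    scale = rng.uniform(*scale_range)
    noise = rng.normal(0.0, sigma, size=window.shape)
    return window * scale + noise

def oversample_minority(windows, labels):
    """Augment minority classes until every class matches the majority count."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_w, out_y = list(windows), list(labels)
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(labels == cls)
        for i in range(target - count):
            # Cycle through the existing minority windows as templates.
            out_w.append(augment_window(windows[idx[i % len(idx)]]))
            out_y.append(cls)
    return np.stack(out_w), np.array(out_y)

# Tiny unbalanced example: 4 "walk" windows vs 1 "stairs_up" window,
# each 128 samples x 6 channels (3-axis accelerometer + 3-axis gyroscope).
windows = [rng.normal(size=(128, 6)) for _ in range(5)]
labels = ["walk", "walk", "walk", "walk", "stairs_up"]
Xb, yb = oversample_minority(windows, labels)
```

After oversampling, both classes contribute the same number of windows, so a shallow or deep classifier trained on `(Xb, yb)` no longer sees the skewed class distribution.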


Author(s):  
Bijayananda Dalai ◽  
Prakash Kumar ◽  
Uppala Srinu ◽  
Mrinal K Sen

Summary Converted-wave data (P-to-s or S-to-p), traditionally termed receiver functions, are often contaminated with noise of different origins, which may lead to erroneous identification of phases and thus influence interpretations. Here we utilize an unsupervised deep learning approach called Patchunet to de-noise converted-wave data. We divide the input data into several patches, which are fed to an encoder-decoder network to extract meaningful features. The method de-noises an image patch by patch and exploits the redundant information shared by similar patches to obtain the final de-noised results. The method is first tested on a suite of synthetic data contaminated with various amounts of Gaussian and realistic noise, and then on observed data from three permanent seismic stations: HYB (Hyderabad, India), LBTB (Lobatse, Botswana), and COR (Corvallis, Oregon, USA). The method works very well even when the signal-to-noise ratio is poor or in the presence of spike noise and deconvolution artifacts. The field data demonstrate the effectiveness of the method for attenuating random noise, especially for the mantle phases, which show significant improvements over conventional receiver-function-based images.
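The patch-by-patch workflow described above has two generic steps that are independent of the network itself: slicing a 2-D section into overlapping patches, and averaging the processed patches back into a full image. The sketch below shows only that plumbing, with a placeholder identity "model" standing in for Patchunet; the patch size and stride are illustrative assumptions.

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a square window over a 2-D section and collect overlapping patches."""
    h, w = image.shape
    patches, coords = [], []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(image[i:i + patch, j:j + patch])
            coords.append((i, j))
    return np.stack(patches), coords

def reassemble(patches, coords, shape, patch):
    """Average overlapping (de-noised) patches back into a full section."""
    acc = np.zeros(shape)
    weight = np.zeros(shape)
    for p, (i, j) in zip(patches, coords):
        acc[i:i + patch, j:j + patch] += p
        weight[i:i + patch, j:j + patch] += 1.0
    return acc / weight

# Toy example: a 16 x 16 section (e.g. a receiver-function gather),
# processed with a placeholder identity model instead of the real network.
rng = np.random.default_rng(1)
section = rng.normal(size=(16, 16))
patches, coords = extract_patches(section, patch=8, stride=4)
denoised_patches = patches            # a trained encoder-decoder would go here
restored = reassemble(denoised_patches, coords, section.shape, patch=8)
```

Because the stride is smaller than the patch size, each pixel is covered by several patches, and the overlap averaging in `reassemble` is what lets redundant information from similar patches suppress random noise.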


2020 ◽  
Vol 55 (4) ◽  
pp. 249-256 ◽  
Author(s):  
Shohei Fujita ◽  
Akifumi Hagiwara ◽  
Yujiro Otsuka ◽  
Masaaki Hori ◽  
Naoyuki Takei ◽  
...  

2018 ◽  
Vol 6 (3) ◽  
pp. 122-126
Author(s):  
Mohammed Ibrahim Khan ◽  
Akansha Singh ◽  
Anand Handa ◽  
...  

2020 ◽  
Vol 17 (3) ◽  
pp. 299-305 ◽  
Author(s):  
Riaz Ahmad ◽  
Saeeda Naz ◽  
Muhammad Afzal ◽  
Sheikh Rashid ◽  
Marcus Liwicki ◽  
...  

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. This paper contributes in three main aspects: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning extra white spaces and de-skewing skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes and fine inflections. Data augmentation combined with the deep learning approach yields a promising improvement, raising the Character Recognition (CR) rate from the 75.08% baseline to 80.02%.
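CTC, mentioned above, maps the network's per-frame label scores to a character string by collapsing consecutive repeats and removing a reserved blank symbol. A minimal greedy-decoding sketch, with a toy two-character alphabet standing in for the Arabic character set (the scores and charset are illustrative assumptions):

```python
import numpy as np

BLANK = 0  # CTC reserves one label index for the blank symbol

def ctc_greedy_decode(logits, charset):
    """Greedy CTC decoding: pick the best label per frame, collapse
    consecutive repeats, then drop blanks."""
    best = np.argmax(logits, axis=1)   # (T,) best label per time step
    decoded, prev = [], BLANK
    for label in best:
        if label != prev and label != BLANK:
            decoded.append(charset[label - 1])  # shift past the blank index
        prev = label
    return "".join(decoded)

# Toy frame-wise scores over {blank, 'a', 'b'} for 6 time steps.
charset = "ab"
logits = np.array([
    [0.1, 0.8, 0.1],    # 'a'
    [0.1, 0.8, 0.1],    # 'a' again: collapsed with the previous frame
    [0.9, 0.05, 0.05],  # blank: separates repeated characters
    [0.1, 0.8, 0.1],    # 'a': a new emission after the blank
    [0.1, 0.1, 0.8],    # 'b'
    [0.9, 0.05, 0.05],  # blank
])
print(ctc_greedy_decode(logits, charset))  # -> "aab"
```

The intermediate blank is what allows genuinely doubled characters ("aa") to survive the repeat-collapsing step, which is why CTC pairs naturally with frame-wise sequence models such as MDLSTM.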

