Initial Clinical Experience With a State-of-the-Art Linear Accelerator for Radiotherapy in a Low-Resource Setting: The First 35 Patients Treated Via a Guatemalan-American Partnership

2020, Vol 108 (3), pp. e427-e428
Author(s): K. Lee, A. Velarde, K.D. Najera, L. Sobrevilla, E. Palacios, ...
Author(s): Rui Wang, Xu Tan, Renqian Luo, Tao Qin, Tie-Yan Liu

Neural approaches have achieved state-of-the-art accuracy in machine translation but depend on large-scale parallel data, which is costly to collect. Much research has therefore been conducted on neural machine translation (NMT) with very limited parallel data, i.e., the low-resource setting. In this paper, we provide a survey of low-resource NMT and classify related work into three categories according to the auxiliary data used: (1) exploiting monolingual data of the source and/or target languages, (2) exploiting data from auxiliary languages, and (3) exploiting multi-modal data. We hope this survey helps researchers better understand the field and inspires them to design better algorithms, and helps industry practitioners choose appropriate algorithms for their applications.
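A common instance of category (1), exploiting target-side monolingual data, is back-translation: a reverse (target-to-source) model generates pseudo-sources for monolingual target sentences, yielding synthetic parallel pairs. The sketch below is illustrative only; `reverse_translate` stands in for a trained target-to-source NMT model, and the toy stub that reverses word order exists only so the example runs self-contained.

```python
def back_translate(monolingual_target, reverse_translate):
    """Create synthetic (source, target) pairs from target-side monolingual text."""
    synthetic_pairs = []
    for target_sentence in monolingual_target:
        # A real system would decode with a trained target->source NMT model here.
        pseudo_source = reverse_translate(target_sentence)
        synthetic_pairs.append((pseudo_source, target_sentence))
    return synthetic_pairs

# Toy stand-in for a target->source translation model (word-order reversal).
toy_reverse_model = lambda s: " ".join(reversed(s.split()))

pairs = back_translate(["the cat sat"], toy_reverse_model)
# pairs == [("sat cat the", "the cat sat")]
```

The synthetic pairs are then mixed with the scarce genuine parallel data to train the forward model; the target side of every synthetic pair is real text, which is what makes the signal useful.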


2020, Vol 30 (01), pp. 2050001
Author(s): Takumi Maruyama, Kazuhide Yamamoto

Inspired by machine translation, recent text simplification approaches regard the task as monolingual text-to-text generation, and neural machine translation models have significantly improved simplification performance. Such models require a large-scale parallel corpus, but parallel corpora for text simplification are few in number and small in size compared to those for machine translation. We have therefore attempted to facilitate the training of simplification rewriting by pre-training on a large-scale monolingual corpus such as Wikipedia articles. In addition, we propose a translation language model that seamlessly carries the pre-trained language model over to fine-tuning on text simplification. The experimental results show that the translation language model substantially outperforms a state-of-the-art model in a low-resource setting. Moreover, a pre-trained translation language model with only 3,000 supervised examples can achieve performance comparable to that of the state-of-the-art model trained on 30,000 supervised examples.
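As a rough intuition for the translation-language-model objective, the paired sentences are concatenated and random tokens on either side are masked, so the model can attend across the pair while recovering the masked tokens. The sketch below is a minimal, assumed illustration of that input construction (not the authors' implementation); the `[MASK]`/`[SEP]` tokens and the 15% mask rate are conventional choices, not taken from the paper.

```python
import random

MASK = "[MASK]"

def make_tlm_example(complex_tokens, simple_tokens, mask_rate=0.15, rng=None):
    """Build a masked input/label pair over the concatenated sentence pair."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    tokens = complex_tokens + ["[SEP]"] + simple_tokens
    inputs, labels = [], []
    for tok in tokens:
        if tok != "[SEP]" and rng.random() < mask_rate:
            inputs.append(MASK)
            labels.append(tok)       # the model must recover this token
        else:
            inputs.append(tok)
            labels.append(None)      # not predicted at this position
    return inputs, labels
```

Because the mask can fall on either side of the `[SEP]`, pre-training already exposes the model to complex-to-simple correspondences before any supervised fine-tuning.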


Diabetes, 2018, Vol 67 (Supplement 1), pp. 93-LB
Author(s): Eddy Jean Baptiste, Philippe Larco, Marie-Nancy Charles Larco, Julia E. von Oettingen, Eddlys Dubois, ...

2019, Vol 4 (4), pp. 571-578
Author(s): Andrew R. Barsky, Fionnbarr O'Grady, Christopher Kennedy, Neil K. Taunk, Lei Dong, ...

2021, Vol 14 (4), pp. e239250
Author(s): Vijay Anand Ismavel, Moloti Kichu, David Paul Hechhula, Rebecca Yanadi

We report a case of right paraduodenal hernia with strangulation of almost the entire small bowel at presentation. Since resecting all bowel of doubtful viability would have left too little residual length to sustain life, a Bogota bag was fashioned from the transparent plastic of a urine drainage bag and the patient was monitored intensively for 18 hours. At re-laparotomy, clear demarcation lines had formed with an adequate length of viable bowel (100 cm), and resection with anastomosis was performed, with a good outcome at follow-up 9 months after surgery. We report this rare cause of strangulated intestinal obstruction and a novel method of maximising the length of viable bowel for its successful outcome in a low-resource setting.


Author(s): Víctor Lopez-Lopez, Ana Morales, Elisa García-Vazquez, Miguel González, Quiteria Hernandez, ...

Author(s): Navin Kumar, Mukur Dipi Ray, D. N. Sharma, Rambha Pandey, Kanak Lata, ...

Author(s): Shumin Shi, Dan Luo, Xing Wu, Congjun Long, Heyan Huang

Dependency parsing is an important task in Natural Language Processing (NLP). However, a mature parser requires a large treebank for training, which remains extremely costly to create. Tibetan is an extremely low-resource language for NLP: no Tibetan dependency treebank is available, and existing annotations are obtained manually. Furthermore, there is little related research on treebank construction. We propose a novel method of multi-level chunk-based syntactic parsing to perform constituent-to-dependency treebank conversion for Tibetan under these scarce conditions. Our method mines more dependencies from Tibetan sentences, builds a high-quality Tibetan dependency tree corpus, and makes fuller use of the inherent regularities of the language itself. We train dependency parsing models on the treebank obtained by this preliminary conversion. The model achieves 86.5% accuracy, 96% LAS, and 97.85% UAS, exceeding the best results of existing conversion methods. The experimental results show that our method is effective in a low-resource setting: we not only address the scarcity of Tibetan dependency treebanks but also avoid needless manual annotation. The method embodies the regularity of strongly knowledge-guided linguistic analysis, which is of great significance for advancing research on Tibetan information processing.
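The abstract reports LAS and UAS; as a reminder of what these standard parsing metrics measure, here is a minimal scorer over per-token (head, label) predictions. UAS counts tokens whose predicted head is correct; LAS additionally requires the dependency label to match. The example trees are invented for illustration.

```python
def attachment_scores(gold, predicted):
    """gold/predicted: one (head_index, dep_label) pair per token, same order."""
    assert len(gold) == len(predicted)
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, predicted))  # head only
    las_hits = sum(g == p for g, p in zip(gold, predicted))        # head + label
    n = len(gold)
    return uas_hits / n, las_hits / n

# Three-token sentence: all heads correct, one label wrong ("obj" vs "obl").
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
uas, las = attachment_scores(gold, pred)
# uas == 1.0, las == 2/3
```

By construction LAS can never exceed UAS, consistent with the 96% LAS vs. 97.85% UAS reported above.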

