Disconnection for protection (D4P): an addition to the disconnection repertoire

2021 ◽  
pp. 016344372110226
Author(s):  
Abdul Rohman ◽  
Peng Hwa Ang

This article responds to Crosscurrent articles (Treré et al., 2020) published in this journal by positing the potential usefulness of Disconnection for Protection (D4P) for calming unrest and managing volatile times. We first use a vignette from Ambon, Indonesia, to illuminate the need for D4P to throttle the spread of mis/disinformation during communal violence, and then discuss the existing repertoire of disconnections. Building on this, we propose temporal/spatial context, information flow, externality, and motivation as constitutive elements of D4P. We elaborate on their terms and conditions and suggest research directions at the end.

2019 ◽  
Vol 41 (5) ◽  
pp. 1692-1708
Author(s):  
Atilio Grondona ◽  
Bijeesh Kozhikkodan Veettil ◽  
Silvia Beatriz Alves Rolim ◽  
Luciana Paulo Gomes

Author(s):  
Bo Yan ◽  
Chuming Lin ◽  
Weimin Tan

For video super-resolution, current state-of-the-art approaches either process multiple low-resolution (LR) frames to produce each output high-resolution (HR) frame separately in a sliding-window fashion, or recurrently exploit the previously estimated HR frames to super-resolve the following frame. The main weaknesses of these approaches are: 1) generating each output frame separately may yield high-quality HR estimates but results in unsatisfactory flickering artifacts, and 2) combining previously generated HR frames can produce temporally consistent results when the information flow is short, but causes significant jitter and jagged artifacts because previous super-resolution errors accumulate in subsequent frames. In this paper, we propose a fully end-to-end trainable frame- and feature-context video super-resolution (FFCVSR) network that consists of two key sub-networks: a local network and a context network. The former explicitly utilizes a sequence of consecutive LR frames to generate a local feature and a local SR frame; the latter combines the outputs of the local network with the previously estimated HR frames and features to super-resolve the subsequent frame. Our approach takes full advantage of the inter-frame information from multiple LR frames and the context information from previously predicted HR frames, producing temporally consistent, high-quality results while maintaining real-time speed by directly reusing previous features and frames. Extensive evaluations and comparisons demonstrate that our approach achieves state-of-the-art results on a standard benchmark dataset, with advantages in accuracy, efficiency, and visual quality over existing approaches.
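The two-sub-network recurrence described in the abstract can be sketched as follows. This is a minimal illustration of the control flow only: `local_net`, `context_net`, and the nearest-neighbour `upscale` are hypothetical stand-ins for the paper's learned networks, not the FFCVSR implementation itself.

```python
import numpy as np

SCALE = 2  # hypothetical upscaling factor

def upscale(frame, scale=SCALE):
    # Stand-in for a learned upsampler: nearest-neighbour repetition.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def local_net(lr_window):
    # Stand-in for the local network: fuses a window of consecutive
    # LR frames into a local SR frame plus a local feature map.
    fused = np.mean(lr_window, axis=0)
    return upscale(fused), fused

def context_net(local_sr, local_feat, prev_hr, prev_feat):
    # Stand-in for the context network: blends the local estimate with
    # the previously predicted HR frame and features (here a simple average).
    if prev_hr is None:
        return local_sr, local_feat
    return 0.5 * local_sr + 0.5 * prev_hr, 0.5 * local_feat + 0.5 * prev_feat

def super_resolve(lr_frames, window=3):
    hr_frames = []
    prev_hr, prev_feat = None, None
    for t in range(len(lr_frames)):
        lo = max(0, t - window + 1)
        lr_window = np.stack(lr_frames[lo:t + 1])
        local_sr, local_feat = local_net(lr_window)
        # Directly reuse the previous HR frame and feature, as in the paper.
        prev_hr, prev_feat = context_net(local_sr, local_feat, prev_hr, prev_feat)
        hr_frames.append(prev_hr)
    return hr_frames

lr_video = [np.random.rand(8, 8) for _ in range(5)]
hr_video = super_resolve(lr_video)
print(len(hr_video), hr_video[0].shape)  # 5 (16, 16)
```

The key structural point is that each step consumes both a sliding window of LR frames (the local path) and the recurrent HR frame/feature state (the context path), which is what lets the approach balance per-frame quality against temporal consistency.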


2016 ◽  
Vol 686 ◽  
pp. 168-173 ◽  
Author(s):  
Péter Tamás

Nowadays, the lean philosophy plays an important role in improving manufacturing processes. Value stream mapping is a fundamental lean tool that can reduce waste in material and information flows. This paper introduces the static and dynamic value stream mapping methods in detail and summarizes their application possibilities. Numerous research directions in this topic are also outlined.
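A static value stream map typically contrasts value-adding processing time with the total lead time of a product family; the ratio is the process cycle efficiency, which makes the material-flow waste visible. A small sketch with hypothetical station data (the station names and times are illustrative, not from the paper):

```python
# Hypothetical current-state data for three stations: value-adding
# processing time and the queue/wait time before each, in minutes.
stations = [
    {"name": "cutting",  "process_min": 2.0, "wait_min": 120.0},
    {"name": "welding",  "process_min": 5.0, "wait_min": 240.0},
    {"name": "assembly", "process_min": 3.0, "wait_min": 60.0},
]

value_added = sum(s["process_min"] for s in stations)
lead_time = value_added + sum(s["wait_min"] for s in stations)
efficiency = value_added / lead_time  # process cycle efficiency

print(f"lead time: {lead_time:.0f} min, value-added: {value_added:.0f} min")
print(f"process cycle efficiency: {efficiency:.1%}")
```

Here only 10 of 430 minutes add value, so the map points improvement efforts at the waiting times rather than the processing steps.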


2014 ◽  
Vol 556-562 ◽  
pp. 4788-4791
Author(s):  
Zhen Wei Li ◽  
Jing Zhang ◽  
Xin Liu ◽  
Li Zhuo

Recently, the bag-of-words (BoW) model has been widely used as an image feature in content-based image retrieval. Most existing approaches to creating a BoW ignore spatial context information. To better describe image content, this paper creates a BoW with spatial context information. First, the image's regions of interest are detected and the focus-of-attention shift is produced through a visual attention model. Color and SIFT features are extracted from the regions of interest, and the BoW is created through cluster analysis. Second, the spatial context information among the objects in an image is generated by a spatial coding method based on the focus-of-attention shift, and the image is then represented by the spatial-context BoW model. Finally, the spatial-context BoW model is applied to image retrieval to evaluate the performance of the proposed method. Experimental results show that the proposed method can effectively improve the accuracy of image retrieval.
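The pipeline in the abstract can be sketched as below. This is an illustrative reduction, not the paper's method: the descriptors are random stand-ins for the colour/SIFT features, the tiny k-means replaces the cluster-analysis step, and the spatial coding is simplified to a word-to-word transition matrix along the attention-shift order.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # hypothetical vocabulary size

def build_vocabulary(descriptors, k=K, iters=10):
    # Tiny k-means stand-in for the clustering step that builds the
    # visual vocabulary from training descriptors.
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def quantize(descriptors, centers):
    # Assign each descriptor to its nearest visual word.
    return np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)

def bow_histogram(words, k=K):
    # Classic BoW: normalized word-frequency histogram.
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

def spatial_context(words, k=K):
    # Simplified spatial coding: which word follows which along the
    # attention-shift order, as a normalized KxK co-occurrence matrix.
    mat = np.zeros((k, k))
    for a, b in zip(words[:-1], words[1:]):
        mat[a, b] += 1
    total = mat.sum()
    return (mat / total).ravel() if total else mat.ravel()

# Hypothetical data: training descriptors, then one image's region-of-interest
# descriptors listed in attention-shift order.
train_desc = rng.random((200, 16))
image_desc = rng.random((12, 16))

vocab = build_vocabulary(train_desc)
words = quantize(image_desc, vocab)
feature = np.concatenate([bow_histogram(words), spatial_context(words)])
print(feature.shape)  # (72,) = K appearance bins + K*K spatial-context bins
```

Concatenating the appearance histogram with the spatial-context vector is what distinguishes this representation from a plain BoW: two images with the same word counts but different spatial arrangements now map to different features.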

