A novel efficient m-out-of-n oblivious transfer scheme
Author(s): H. Vanishree, S.R.S. Iyengar

2018, Vol 714, pp. 15-26
Author(s): Jianchang Lai, Yi Mu, Fuchun Guo, Rongmao Chen, Sha Ma

2013, Vol 38 (3), pp. 36-41
Author(s): Karen Di Franco

Since 2010, Book Works has been digitising material from its archive – whether finished works, ephemera, correspondence, photographs, or manuscripts – to give access to the working processes of the organisation (at www.bookworks.org.uk). The archive database is constructed around a chronological timeline and includes a search facility that allows visitors to filter and select material using a bespoke classification system. It currently comprises detailed content relating to two case studies from Book Works' back catalogue, After the Freud Museum by Susan Hiller and Erasmus is late by Liam Gillick, as well as ephemera and material from other works. The project has been developed in collaboration with the Ligatus Research Centre, University of the Arts London, with support from the AHRC Knowledge Transfer scheme.
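To make the description of the archive's search facility concrete, the following is a minimal sketch in Python of timeline-plus-classification filtering over archive records. It is not Book Works' actual system: the record fields, classification terms, and function names are illustrative assumptions.

```python
# A minimal sketch, not Book Works' actual database: illustrates the kind of
# chronological-timeline browsing and classification-based filtering described
# above. Record fields and example values are invented for illustration.
from dataclasses import dataclass
from datetime import date


@dataclass
class ArchiveRecord:
    title: str
    kind: str                  # e.g. "correspondence", "photograph", "manuscript"
    created: date
    classifications: set[str]  # terms from a bespoke classification system


def filter_records(records, *, kinds=None, tags=None, start=None, end=None):
    """Return records matching the requested material kinds, classification
    tags, and date range, sorted chronologically (timeline order)."""
    out = []
    for r in records:
        if kinds and r.kind not in kinds:
            continue
        if tags and not tags <= r.classifications:
            continue
        if start and r.created < start:
            continue
        if end and r.created > end:
            continue
        out.append(r)
    return sorted(out, key=lambda r: r.created)


# Example usage with invented records.
records = [
    ArchiveRecord("Installation photograph", "photograph", date(1994, 5, 2),
                  {"After the Freud Museum"}),
    ArchiveRecord("Production correspondence", "correspondence", date(1995, 1, 10),
                  {"Erasmus is late"}),
]
print(filter_records(records, kinds={"photograph"}))
```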


2021, Vol 32 (3)
Author(s): Dimitrios Bellos, Mark Basham, Tony Pridmore, Andrew P. French

Abstract: Over recent years, many approaches have been proposed for the denoising or semantic segmentation of X-ray computed tomography (CT) scans. In most cases, high-quality CT reconstructions are used; however, such reconstructions are not always available. When the X-ray exposure time has to be limited, the resulting tomograms are undersampled in terms of their component projections, and this low number of projections yields low-quality reconstructions that are difficult to segment. Here, we consider CT time-series (i.e. 4D data), where the limited time for capturing fast-occurring temporal events means the time-series tomograms are necessarily undersampled. Fortunately, in these collections it is common practice to obtain representative, highly sampled tomograms before or after the time-critical portion of the experiment. In this paper, we propose an end-to-end network that learns to denoise and segment the time-series' undersampled CTs by training on the earlier, highly sampled representative CTs. A single network, trained once, provides both desired outputs, with the denoised output improving the accuracy of the final segmentation. Our method outperforms state-of-the-art methods in semantic segmentation and offers comparable results in denoising. Additionally, we propose a knowledge transfer scheme using synthetic tomograms, which not only allows accurate segmentation and denoising with less real-world data but also increases segmentation accuracy. Finally, we make our datasets, as well as the code, publicly available.
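As a rough illustration of the dual-output idea described in the abstract, the following is a minimal sketch (not the authors' released code) of a network with a shared encoder, a denoising head, and a segmentation head that also consumes the denoised slice, trained jointly against highly sampled reference reconstructions and their labels. The layer sizes, loss weights, class count, and function names are assumptions chosen for brevity.

```python
# A minimal PyTorch sketch of a joint denoising + segmentation network for
# undersampled CT slices. This is an illustrative stand-in, not the paper's
# architecture: layer widths, losses, and hyperparameters are assumptions.
import torch
import torch.nn as nn


class DenoiseSegNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        # Shared feature extractor over the undersampled reconstruction.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Denoising head: predicts a cleaned-up slice.
        self.denoise_head = nn.Conv2d(32, 1, 3, padding=1)
        # Segmentation head: sees both the shared features and the denoised
        # slice, so better denoising can improve the segmentation.
        self.seg_head = nn.Sequential(
            nn.Conv2d(32 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        denoised = self.denoise_head(feats)
        seg_logits = self.seg_head(torch.cat([feats, denoised], dim=1))
        return denoised, seg_logits


def training_step(model, undersampled, clean, labels, optimiser,
                  denoise_weight: float = 1.0, seg_weight: float = 1.0):
    """One joint update: L2 loss on the denoised slice plus cross-entropy on
    the segmentation, both supervised by the highly sampled reference data."""
    optimiser.zero_grad()
    denoised, seg_logits = model(undersampled)
    loss = (denoise_weight * nn.functional.mse_loss(denoised, clean)
            + seg_weight * nn.functional.cross_entropy(seg_logits, labels))
    loss.backward()
    optimiser.step()
    return loss.item()


# Example usage with random stand-in data (batch of 2, 64x64 slices).
if __name__ == "__main__":
    model = DenoiseSegNet(n_classes=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    undersampled = torch.randn(2, 1, 64, 64)
    clean = torch.randn(2, 1, 64, 64)
    labels = torch.randint(0, 3, (2, 64, 64))
    print(training_step(model, undersampled, clean, labels, opt))
```

Feeding the denoised output into the segmentation head mirrors the abstract's point that the denoised output improves the final segmentation; the knowledge-transfer step described above would correspond to first running such training on synthetic tomograms and then fine-tuning on real, highly sampled data.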

