DESIGN OF TWO-LEVEL PIPELINED SYSTOLIC ARRAY AND ITS APPLICATION TO IMAGE RESAMPLING

1992 ◽  
Vol 02 (03) ◽  
pp. 247-263
Author(s):  
CHEIN-WEI JEN ◽  
CHI-MIN LIU

A two-level pipelined systolic array can attain parallelism down to lower levels and provide much higher throughput and computational speed than conventional arrays. This paper presents a design procedure that starts from an algorithm representation called the Dependence Graph (DG). Arrays with different performance characteristics can be obtained by applying various linear transformation matrices to the DG. Image resampling is a process for image construction and display, with important applications in image processing and in digital TV. In this paper, two design considerations are applied to build a high-performance VLSI image resampler. First, a two-level pipelined systolic array is designed to maximize parallelism and also make VLSI implementation highly feasible. Second, a modified two-pass resampling scheme is devised to reduce the amount of required storage and increase the concurrency between the two passes of resampling. The image resampler achieves a throughput of one pixel per clock period, with a clock period smaller than the latency of an adder; the storage requirement is only several line buffers.
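The core of the design procedure is a linear space-time mapping: a transformation matrix sends each DG node index to a (clock step, processor) pair. The sketch below illustrates this step on a hypothetical 1-D convolution DG; the index space and matrix are illustrative choices, not the paper's actual transformations.

```python
# Hypothetical sketch of the space-time mapping step in systolic array
# design: a linear transformation T maps each Dependence Graph (DG) node
# index to (clock step, processor index). The DG and T below are an
# illustrative 1-D convolution example, not taken from the paper.

# DG index space for y[i] = sum_j w[j] * x[i - j], 0 <= i < 4, 0 <= j < 3
nodes = [(i, j) for i in range(4) for j in range(3)]

# Rows of T: the schedule vector (time) and the projection vector (space).
T = [(1, 1),   # t = i + j : clock step at which node (i, j) fires
     (0, 1)]   # p = j     : processor that executes node (i, j)

def space_time(node, T):
    """Apply the linear transformation T to a DG node index."""
    return tuple(sum(r * x for r, x in zip(row, node)) for row in T)

mapping = {n: space_time(n, T) for n in nodes}

# A valid transformation assigns at most one node to each (t, p) slot.
assert len(set(mapping.values())) == len(nodes)
```

Choosing a different schedule or projection row yields a different array with different throughput and latency, which is the sense in which "arrays with different performances can be obtained" from one DG.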

2012 ◽  
Vol 17 (4) ◽  
pp. 207-216 ◽  
Author(s):  
Magdalena Szymczyk ◽  
Piotr Szymczyk

Abstract MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization, and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", which are specialized libraries for a variety of scientific domains, and a simplified interface to high-performance libraries (LAPACK, BLAS, and FFTW, among others). MATLAB is now enriched by the possibility of parallel computing with the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™. In this article we present some of the key features of MATLAB parallel applications, focused on using GPU processors for image processing.


Author(s):  
Hiroshi Yamamoto ◽  
Yasufumi Nagai ◽  
Shinichi Kimura ◽  
Hiroshi Takahashi ◽  
Satoko Mizumoto ◽  
...  

Author(s):  
Philip C. Kendall ◽  
Jonathan S. Comer

This chapter describes methodological and design considerations central to the scientific evaluation of treatment efficacy and effectiveness. Matters of design, procedure, measurement, data analysis, and reporting are examined and discussed. The authors consider key concepts of controlled comparisons, random assignment, the use of treatment manuals, integrity and adherence checks, sample and setting selection, treatment transportability, handling missing data, assessing clinical significance, identifying mechanisms of change, and consolidated standards for communicating study findings to the scientific community. Examples from the treatment outcome literature are offered, and guidelines are suggested for conducting treatment evaluations that maximize both scientific rigor and clinical relevance.


2013 ◽  
Vol 21 (3) ◽  
pp. 552-562
Author(s):  
Hsuan-Chun Liao ◽  
Mochamad Asri ◽  
Tsuyoshi Isshiki ◽  
Dongju Li ◽  
Hiroaki Kunieda

2007 ◽  
Vol 121-123 ◽  
pp. 1351-1354
Author(s):  
Yu Sheng Chien ◽  
Che Hsin Lin ◽  
Fu Jen Kao ◽  
Cheng Wen Ko

This paper proposes a novel microfluidic system for cell/microparticle recognition and manipulation that combines digital image processing (DIP) and an optical tweezer in a microfluidic configuration. Digital image processing is used to count and recognize the cell/particle samples and then issues a control signal that generates a laser pulse to manipulate the target cell/particle optically. The optical tweezer system is capable of catching, moving, and switching the target cells at the downstream end of the microchannel. The trapping force of the optical tweezer is also characterized using the Stokes-drag method and electroosmotic flow. The proposed system provides a simple but high-performance solution for microparticle manipulation in a microfluidic device.
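The counting step described above can be sketched with a generic threshold-and-label pass; the abstract does not give the authors' algorithm, so the flood-fill approach and the synthetic frame below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: threshold a grayscale frame, then count
# 4-connected bright blobs, a common DIP step a particle counter could
# use before triggering the tweezer's laser pulse.

from collections import deque

def count_particles(image, threshold):
    """Count 4-connected bright blobs in a grayscale image (list of rows)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                count += 1                     # new blob found
                q = deque([(y, x)])
                seen[y][x] = True
                while q:                       # flood-fill the whole blob
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

frame = [[0, 9, 0, 0],
         [0, 9, 0, 8],
         [0, 0, 0, 8]]
print(count_particles(frame, threshold=5))  # → 2
```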


Perception ◽  
1986 ◽  
Vol 15 (4) ◽  
pp. 373-386 ◽  
Author(s):  
Nigel D Haig

For recognition of a target there must be some form of comparison process between the image of that target and a stored representation of that target. In the case of faces there must be a very large number of such stored representations, yet human beings seem able to perform comparisons at phenomenal speed. It is possible that faces are memorised by fitting unusual features or combinations of features onto a bland prototypical face, and such a data-compression technique would help to explain our computational speed. If humans do indeed function in this fashion, it is necessary to ask just what are the features that distinguish one face from another, and also, what are the features that form the basic set of the prototypical face. The distributed apertures technique was further developed in an attempt to answer both questions. Four target faces, stored in an image-processing computer, were each divided up into 162 contiguous squares that could be displayed in their correct positions in any combination of 24 or fewer squares. Each observer was required to judge which of the four target faces was displayed during a 1 s presentation, and the proportion of correct responses for each individual square was computed. The resultant response distributions, displayed as brightness maps, give a vivid impression of the relative saliency of each feature square, both for the individual targets and for all of them combined. The results, while broadly confirming previous work, contain some very interesting and surprising details about the differences between the target faces.
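The scoring procedure described above reduces to a simple per-square tally: for every trial, credit each visible square when the observer's response is correct, then normalize. The sketch below is a hedged reconstruction with synthetic trial data; the data layout and names are assumptions, not taken from the study.

```python
# Hedged reconstruction of the per-square scoring step: accumulate, for
# each of the display squares, the proportion of correct identifications
# on trials in which that square was visible. Trial data is synthetic.

def square_accuracy(trials, n_squares=162):
    """trials: list of (visible_square_indices, response_was_correct)."""
    shown = [0] * n_squares
    right = [0] * n_squares
    for visible, correct in trials:
        for s in visible:
            shown[s] += 1
            if correct:
                right[s] += 1
    # Proportion correct per square; 0.0 for squares never shown.
    return [r / n if n else 0.0 for r, n in zip(right, shown)]

trials = [({3, 7}, True), ({3, 9}, False), ({7}, True)]
acc = square_accuracy(trials, n_squares=10)
# square 3: shown twice, correct once -> 0.5; square 7: always correct -> 1.0
```

Rendering these per-square proportions as pixel intensities yields the brightness maps of feature saliency that the abstract describes.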


2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places high demands on processor performance and memory capacity, demands that a single-core processor and traditional memory cannot satisfy. This paper introduces cloud computing into a massive image processing system. Through cloud computing, the system expands its virtual space, saves computing resources, and improves the efficiency of image processing. The system uses a multi-core DSP parallel processor, and a visualization window for parameter setting and result output was developed with VC software. Simulation yields the image processing speed curve and the system's image adaptation curve, providing a technical reference for the design of large-scale image processing systems.

