System complexity criteria and synthesis of high-performance multifunctional parallel ADC in Rademacher's and Haar-Krestenson's theoretical and numerical bases

Author(s):  
Nataliia Vozna ◽  
Yaroslav Nykolaichuk ◽  
Oleg Zastavnyy ◽  
Volodymyr Pikh


2008 ◽  
Vol 50 (5) ◽  
Author(s):  
Rainer Buchty ◽  
Wolfgang Karl

Abstract: Already today we face architectures featuring up to several hundred processors and able to manage several thousand concurrent threads. Future architectures, however, will not only see an increase in parallelism but will also feature increased heterogeneity and reconfigurability. Judging from current production and prototype architectures, we also see that such systems will be tiled, i.e., individual cores with local memory interconnected through some means of on-chip communication. Current discussions show that existing approaches to application mapping, parallelization, data locality optimization, and system management do not match these upcoming architectures well, hampering rather than harnessing the power of future systems. We therefore outline the requirements of upcoming architectures and demonstrate how self-organization techniques, including bio-inspired ones, may help to manage system complexity. Key to these techniques is a sophisticated decentralized, hierarchical monitoring approach suitable for sustained real-time monitoring and event correlation on current and future high-performance architectures.
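As one way to picture the hierarchical monitoring idea the abstract describes, here is a minimal Python sketch: tile-level monitors aggregate local events, and a higher-level monitor correlates their compact summaries. All class names, event names, and thresholds are illustrative assumptions, not the authors' design.

```python
# Sketch of decentralized, hierarchical monitoring: each tile-level monitor
# collects raw events; only compact summaries travel up the hierarchy, where
# a cluster-level monitor correlates them. Names and values are illustrative.
from collections import Counter

class TileMonitor:
    """Collects raw events (e.g., cache misses, queue stalls) for one tile."""
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.events = Counter()

    def record(self, event_type, count=1):
        self.events[event_type] += count

    def summary(self):
        # Only a compact summary is passed upward, bounding monitoring traffic.
        return dict(self.events)

class ClusterMonitor:
    """Correlates summaries from a group of tile monitors."""
    def __init__(self, children):
        self.children = children

    def correlate(self, threshold=100):
        total = Counter()
        for child in self.children:
            total.update(child.summary())
        # Flag event types whose system-wide rate exceeds a threshold; a real
        # system would react here (remapping, reconfiguration, throttling).
        return [ev for ev, n in total.items() if n >= threshold]

tiles = [TileMonitor(i) for i in range(4)]
tiles[0].record("cache_miss", 80)
tiles[2].record("cache_miss", 40)
tiles[3].record("queue_stall", 10)
cluster = ClusterMonitor(tiles)
print(cluster.correlate())   # ['cache_miss']
```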


2021 ◽  
Author(s):  
Victor Dumitriu

A number of modern digital processing systems implement complex multi-mode applications with high performance requirements and strict operating constraints; examples include video processing and telecommunication applications. A number of these systems use increasingly large FPGAs as the implementation medium, due to reduced development costs. The combination of increases in FPGA capacity and system complexity has led to a non-linear increase in system implementation effort. If left unchecked, implementation effort for such systems will reach the point where it becomes a design and development bottleneck. At the same time, the reduction in transistor size used to manufacture these devices can lead to increased device fault rates. To address these two problems, the Multi-mode Adaptive Collaborative Reconfigurable self-Organized System (MACROS) Framework and design methodology is proposed and described in this work. The MACROS Framework offers the ability for run-time architecture adaptation by integrating FPGA configuration into regular operation. The MACROS Framework allows for run-time generation of Application-Specific Processors (ASPs) through the deployment, assembly, and integration of pre-built functional units; the framework further allows the relocation of functional units without affecting system functionality. The use of functional units as building blocks allows the system to be implemented on a piece-by-piece basis, which reduces the complexity of mapping, placement, and routing tasks; the ability to relocate functional units allows fault mitigation by avoiding faulty regions in a device. The proposed framework has been used to implement multiple video processing systems which were used as verification and testing instruments. The MACROS Framework was found to successfully support run-time architecture adaptation in the form of functional unit deployment and relocation in high-performance systems. For large systems (more than 100 functional units), the MACROS Framework implementation effort, measured as time cost, was found to be one third that of a traditional (monolithic) system; more importantly, in MACROS systems this time cost was found to increase linearly with system complexity (the number of functional units). When considering fault mitigation capabilities, the resource overhead associated with the MACROS Framework was found to be up to 85% smaller than that of a traditional Triple Module Redundancy (TMR) solution.
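The two mechanisms the abstract names, piece-by-piece deployment of pre-built functional units and relocation away from faulty regions, can be illustrated with a small Python sketch. This is not the MACROS implementation; the region model, unit names, and placement policy are assumptions made for illustration.

```python
# Illustrative model: a device is a pool of reconfigurable regions; units are
# deployed one at a time, and a unit occupying a region later marked faulty
# is relocated to a healthy free region without touching the other units.
class Device:
    def __init__(self, n_regions):
        self.free = set(range(n_regions))   # healthy, unoccupied regions
        self.faulty = set()
        self.placement = {}                 # unit name -> region

    def deploy(self, unit):
        # Piece-by-piece placement: each unit only needs one free region,
        # so mapping/placement effort does not grow with total system size.
        region = min(self.free)
        self.free.remove(region)
        self.placement[unit] = region
        return region

    def mark_faulty(self, region):
        # Fault mitigation: quarantine the region and relocate its occupant.
        self.faulty.add(region)
        self.free.discard(region)
        for unit, reg in list(self.placement.items()):
            if reg == region:
                del self.placement[unit]
                new = self.deploy(unit)
                print(f"relocated {unit}: region {region} -> {new}")

dev = Device(n_regions=4)
for u in ("fir_filter", "scaler", "encoder"):
    dev.deploy(u)
dev.mark_faulty(1)        # whichever unit sat in region 1 moves elsewhere
print(dev.placement)      # {'fir_filter': 0, 'encoder': 2, 'scaler': 3}
```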


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
Viviane M. Gomes ◽  
Joao R. B. Paiva ◽  
Marcio R. C. Reis ◽  
Gabriel A. Wainer ◽  
Wesley P. Calixto

This work proposes a complexity metric which maps the internal connections of a system and its relationship with the environment through the application of sensitivity analysis. The proposed methodology presents (i) a system complexity metric, (ii) a system sensitivity metric, and (iii) two models as case studies. Based on the system dynamics, the complexity metric maps the internal connections through the states of the system, and the sensitivity metric evaluates the contribution of each parameter to the output variability. The models are simulated in order to quantify complexity and sensitivity and to analyze the behavior of the systems, leading to the assumption that system complexity is closely linked to the most sensitive parameters. The results suggest that systems may exhibit high performance as a result of optimized configurations given by their natural complexity.
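To make the sensitivity side of the methodology concrete, the following Python sketch perturbs one parameter at a time of a simple dynamic model and uses the resulting output variance as a sensitivity score. The logistic model and the variance-based score are illustrative stand-ins, not the authors' definitions.

```python
# One-at-a-time sensitivity sketch: randomly perturb each parameter around
# its base value, simulate, and measure how much the output varies. A larger
# variance marks a more sensitive parameter.
import random

def simulate(r, k, x0=0.1, steps=50):
    """Logistic growth: a small dynamic system with two parameters."""
    x = x0
    for _ in range(steps):
        x += r * x * (1 - x / k)
    return x

def sensitivity(param_names, base, n=200, spread=0.05):
    """Output variance under +/-5% random perturbation of one parameter
    at a time, keeping the others fixed at their base values."""
    out = {}
    for name in param_names:
        samples = []
        for _ in range(n):
            p = dict(base)
            p[name] *= 1 + random.uniform(-spread, spread)
            samples.append(simulate(**p))
        mean = sum(samples) / n
        out[name] = sum((s - mean) ** 2 for s in samples) / n
    return out

print(sensitivity(["r", "k"], {"r": 0.3, "k": 1.0}))
```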


2013 ◽  
Vol 433-435 ◽  
pp. 1419-1422
Author(s):  
Yang Liu ◽  
Yong Tie ◽  
Ming Li Xiao ◽  
Shun Na

The color bar signal generator is one of the most important measurement instruments for PAL TV applications. However, its use in resource-constrained applications such as color TV video adjustment is limited because of the high architectural complexity involved. Due to the broad range of applications presently offered by FPGAs (field-programmable gate arrays), their use has become increasingly widespread in different areas and sectors. In this paper, we present the design and implementation of a color bar signal generator based on an FPGA and the MC1377, offering high performance and low system complexity. FPGA technology proved to be a proper platform to meet these two contrasting requirements. The signal structure, generation principle, and FPGA implementation of the composite PAL synchronization signal and the RGB color signal are described. The interface circuits, hardware, and software of the system are introduced. The software is simulated using the MAX+plus II platform. Simulation results indicate the effectiveness and stability of the signal generator.
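For readers unfamiliar with the signal structure being generated, the Python sketch below lays out the eight standard bars (white, yellow, cyan, green, magenta, red, blue, black) across one active video line as RGB samples. The 720-pixel line width and the 75% amplitude are assumptions for illustration, not values taken from the paper.

```python
# Sketch of the RGB colour-bar structure: eight vertical bars of equal width
# spanning one active line, here at 75% amplitude on an 8-bit scale.
BARS = [  # (R, G, B), 75% amplitude
    (191, 191, 191),  # white
    (191, 191, 0),    # yellow
    (0, 191, 191),    # cyan
    (0, 191, 0),      # green
    (191, 0, 191),    # magenta
    (191, 0, 0),      # red
    (0, 0, 191),      # blue
    (0, 0, 0),        # black
]

def colour_bar_line(width=720):
    """Return one active video line as a list of (R, G, B) pixels."""
    return [BARS[x * len(BARS) // width] for x in range(width)]

line = colour_bar_line()
print(line[0], line[-1])   # (191, 191, 191) (0, 0, 0)
```

In hardware, an equivalent structure would be a pixel counter addressing a small colour lookup table, with the RGB output fed to the MC1377 encoder alongside the composite synchronization signal.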


Author(s):  
A. V. Crewe ◽  
M. Isaacson ◽  
D. Johnson

A double focusing magnetic spectrometer has been constructed for use with a field emission electron gun scanning microscope in order to study the electron energy loss mechanism in thin specimens. It is of the uniform field sector type with curved pole pieces. The shape of the pole pieces is determined by requiring that all particles be focused to a point at the image slit (point 1). The resultant shape gives perfect focusing in the median plane (Fig. 1) and first order focusing in the vertical plane (Fig. 2).
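As background for the median-plane focusing mentioned above (and not as the authors' derivation), the textbook first-order imaging condition for a uniform-field sector magnet with straight, normal-entry boundaries relates object distance, image distance, bending radius, and deflection angle; the instrument's curved pole pieces refine this simple case to add vertical focusing.

```latex
% First-order median-plane imaging condition for a uniform-field sector
% magnet with straight, normal-entry boundaries (standard result, stated
% as background): object distance l_1, image distance l_2, bending
% radius R, deflection angle \varphi.
\[
  (l_1 l_2 - R^2)\tan\varphi = R\,(l_1 + l_2)
\]
% Equivalently, Barber's rule: the object point, the apex of the sector,
% and the image point are collinear.
```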


Author(s):  
N. Yoshimura ◽  
K. Shirota ◽  
T. Etoh

One of the most important requirements for a high-performance EM, especially an analytical EM using a fine beam probe, is to prevent specimen contamination by providing a clean high vacuum in the vicinity of the specimen. However, in almost all commercial EMs, the pressure in the vicinity of the specimen under observation is usually more than ten times higher than the pressure measured at the pumping line. The EM column inevitably requires the use of greased Viton O-rings for fine movement, and specimens and films need to be exchanged frequently; several attachments may also be exchanged. For these reasons, a high-speed pumping system, as well as a clean vacuum system, is now required. A newly developed electron microscope, the JEM-100CX, features clean high vacuum in the vicinity of the specimen, realized by the use of a CASCADE-type diffusion pump system which has been substantially improved over its predecessor employed on the JEM-100C.
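The tenfold pressure difference quoted above follows from elementary vacuum bookkeeping: in molecular flow, outgassing throughput through a path of finite conductance adds a pressure drop on top of the pump-line reading. The Python sketch below illustrates this with assumed numbers; none of the values are taken from the paper.

```python
# Back-of-the-envelope vacuum sketch: specimen-area pressure equals the
# pump-line pressure plus the drop Q/C caused by outgassing throughput Q
# flowing through a path of conductance C (molecular-flow approximation).
def specimen_pressure(p_pump, q_outgas, conductance):
    """p_pump [Pa], q_outgas [Pa*L/s], conductance [L/s] -> pressure [Pa]."""
    return p_pump + q_outgas / conductance

p = specimen_pressure(p_pump=1e-5, q_outgas=1e-4, conductance=1.0)
print(f"{p:.1e} Pa")   # 1.1e-04 Pa, roughly ten times the pump-line reading
```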


Author(s):  
John W. Coleman

In the design engineering of high-performance electromagnetic lenses, the direct conversion of electron-optical design data into drawings for reliable hardware is often difficult, especially in terms of how to mount parts to each other, how to tolerance dimensions, and how to specify finishes. One answer is the use of magnetostatic analytics corresponding to the boundary conditions of the optical design. With such models, the magnetostatic force on a test pole along the axis may be examined, and in this way one may obtain priority listings for holding dimensions, relieving stresses, etc. The development of magnetostatic models most easily proceeds from the derivation of scalar potentials of separate geometric elements. These potentials can then be combined at will because of the superposition characteristic of conservative force fields.
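The superposition step can be shown in a few lines of Python: build the on-axis scalar potential as a sum of simple element potentials, then obtain the axial field as the negative axial derivative. The point-pole elements and the numbers are illustrative assumptions, not the geometric elements used in the paper.

```python
# Superposition sketch: the potential of a model is the sum of the potentials
# of its separate elements (here, point-like poles on the axis); the on-axis
# field follows by numerical differentiation of the summed potential.
import math

def pole_potential(z, z0, strength):
    """Scalar potential on the axis from a point-like pole at z0."""
    return strength / (4 * math.pi * abs(z - z0))

def total_potential(z, elements):
    # Conservative fields superpose, so element potentials simply add.
    return sum(pole_potential(z, z0, s) for z0, s in elements)

def axial_field(z, elements, h=1e-6):
    """B_z ~ -d(phi)/dz, estimated by central difference."""
    return -(total_potential(z + h, elements)
             - total_potential(z - h, elements)) / (2 * h)

gap = [(-0.01, +1.0), (+0.01, -1.0)]   # two opposing poles forming a lens gap
for z in (-0.005, 0.0, 0.005):
    print(f"z = {z:+.3f} m, B_z ~ {axial_field(z, gap):.3f}")
```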


Author(s):  
J W Steeds ◽  
R Vincent

We review the analytical powers which will become more widely available as medium-voltage (200-300 kV) TEMs with facilities for CBED on a nanometre scale come onto the market. Of course, high-performance cold field emission STEMs have now been in operation for about twenty years, but it is only in relatively few laboratories that special modification has permitted the performance of CBED experiments. Most notable amongst these pioneering projects is the work in Arizona by Cowley and Spence and, more recently, that in Cambridge by Rodenburg and McMullan. There are a large number of potential advantages of a high-intensity, small-diameter, focussed probe. We discuss first the advantages for probes larger than the projected unit cell of the crystal under investigation. In this situation we are able to perform CBED on local regions of good crystallinity. Zone axis patterns often contain information which is very sensitive to thickness changes as small as 5 nm. In conventional CBED, with a 10 nm source, it is very likely that the information will be degraded by thickness averaging within the illuminated area.


Author(s):  
Klaus-Ruediger Peters

A new generation of high-performance field emission scanning electron microscopes (FSEM) is now commercially available (JEOL 890, Hitachi S 900, ISI DS-130F), characterized by an "in-lens" position of the specimen, where probe diameters are reduced and signal collection improved. Additionally, low-voltage operation is extended down to 1 kV. Compared to the first generation of FSEMs (JEOL JSM 30, Hitachi S 800), which utilized a specimen position below the final lens, specimen size had to be reduced, but useful magnification could be impressively increased in both low (1-4 kV) and high (5-40 kV) voltage operation, i.e., from 50,000× to 200,000× and from 250,000× to 1,000,000×, respectively. At high accelerating voltage and magnification, contrasts on biological specimens are well characterized [1] and are produced by the entering probe electrons in the outermost surface layer within ~1 nm depth. Backscattered electrons produce only a background signal. Under these conditions (Fig. 1) image quality is similar to conventional TEM (Fig. 2) and is limited only at magnifications >1,000,000× by probe size (0.5 nm) or non-localization effects (~0.5 nm).

