Implementation of ARINC 659 Bus Controller for Space-Borne Computers

Electronics ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 435 ◽  
Author(s):  
Shuang Jiang ◽  
Shibin Liu ◽  
Chenguang Guo ◽  
Xu Fan ◽  
Teng Ma ◽  
...  

As one of Honeywell's key technologies, the Aeronautical Radio, Incorporated (ARINC) 659 bus is widely used in current space-borne computers. However, Honeywell does not offer the ARINC 659 bus controller as a standalone part, and only a few papers describe FPGA-based ARINC 659 bus controllers. Accordingly, to meet the demanding performance requirements of space-borne computers, this paper presents an ARINC 659 bus controller chip that integrates two independent bus interface units (BIUs), an 8-bit MCU, and several peripheral interfaces (UART, SPI, and I2C). Because the two BIUs are identical and mutually checked, the design pays particular attention to their symmetry, and effective timing closure is achieved, allowing the bus controller to operate reliably and stably. In addition, given the circuit's large scale, design for testability (DFT) is also considered: an on-chip clock (OCC) enables at-speed testing, and scan compression shortens test time.
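The paper does not publish controller code; purely as an illustration of the mutual-checking arrangement between the two identical BIUs, the following Python sketch runs two toy BIU models in lock-step and suppresses transmission on any miscompare. All names and the framing logic are hypothetical.

```python
# Illustrative sketch only (not from the paper): two identical bus
# interface units (BIUs) run in lock-step; a comparator checks that
# their outputs agree before anything is driven onto the bus.
# All class and function names here are hypothetical.

class BiuModel:
    """Toy model of one bus interface unit: frames a data word."""
    def __init__(self, biu_id: int):
        self.biu_id = biu_id

    def encode(self, word: int) -> int:
        # Both BIUs must produce bit-identical output for the same input;
        # a simple parity bit stands in for the real framing/CRC logic.
        parity = bin(word).count("1") & 1
        return (word << 1) | parity

def lockstep_transmit(word: int, biu_x: BiuModel, biu_y: BiuModel) -> int:
    """Drive the bus only if both BIUs agree; otherwise fail safe."""
    out_x = biu_x.encode(word)
    out_y = biu_y.encode(word)
    if out_x != out_y:
        # A real controller would isolate the bus drivers here.
        raise RuntimeError("BIU miscompare: transmission suppressed")
    return out_x

if __name__ == "__main__":
    frame = lockstep_transmit(0x5A, BiuModel(0), BiuModel(1))
    print(f"frame driven on bus: {frame:#x}")
```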

Author(s):  
Zhiyuan He ◽  
Zebo Peng ◽  
Petru Eles

High temperature has become a technological barrier to the testing of high-performance systems-on-chip, especially when deep-submicron technologies are employed. In order to reduce test time while keeping the temperature of the cores under test within a safe range, thermal-aware test scheduling techniques are required. In this chapter, the authors address the test-time minimization problem: how to generate the shortest test schedule such that the temperature limits of the individual cores and the limit on the test-bus bandwidth are both satisfied. To avoid overheating during the test, the authors partition test sets into shorter test sub-sequences and add cooling periods in between, so that applying a test sub-sequence does not drive the core temperature beyond the limit. Furthermore, based on this partitioning scheme, the authors interleave the test sub-sequences from different test sets in such a manner that a cooling period reserved for one core is utilized for the test transportation and application of another core. The authors propose an approach that minimizes the test application time by exploring alternative partitioning and interleaving schemes with variable-length test sub-sequences and cooling periods, as well as alternative test schedules. Experimental results demonstrate the efficiency of the proposed approach.
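The chapter's optimization procedure is not reproduced in the abstract; the sketch below only illustrates the partition-and-interleave idea under a deliberately simple thermal model, where a core heats while under test and cools while idle. The rates and temperature limit are invented for the example.

```python
# Illustrative sketch of thermal-aware test interleaving (not the
# chapter's actual algorithm). Each core heats while its test runs and
# cools while idle; the scheduler interleaves sub-sequences so that one
# core's cooling period is used to test another core. The thermal
# model, rates, and limits below are invented for the example.

HEAT_PER_CYCLE = 2.0   # temperature rise per tested cycle (made up)
COOL_PER_CYCLE = 1.0   # temperature drop per idle cycle (made up)
T_LIMIT = 80.0         # per-core temperature limit (made up)
T_AMBIENT = 25.0

def interleave_schedule(test_lengths):
    """Greedy scheduler: each cycle, test the coolest core that still
    has test cycles left and would stay under T_LIMIT; others cool."""
    temps = [T_AMBIENT] * len(test_lengths)
    remaining = list(test_lengths)
    schedule = []  # one entry per cycle: core index, or None (all cool)
    while any(remaining):
        candidates = [i for i, r in enumerate(remaining)
                      if r > 0 and temps[i] + HEAT_PER_CYCLE <= T_LIMIT]
        pick = min(candidates, key=lambda i: temps[i]) if candidates else None
        for i in range(len(temps)):
            if i == pick:
                temps[i] += HEAT_PER_CYCLE
                remaining[i] -= 1
            else:
                temps[i] = max(T_AMBIENT, temps[i] - COOL_PER_CYCLE)
        schedule.append(pick)
    return schedule

if __name__ == "__main__":
    sched = interleave_schedule([40, 25])
    print(f"total cycles: {len(sched)}")
```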


1995 ◽  
Vol 380 ◽  
Author(s):  
R. Fabian Pease

ABSTRACT The drive to increasingly higher density ultra-large-scale integration (ULSI) of electronic circuits is fuelled primarily by cost: on-chip interconnects are far cheaper than the less dense off-chip interconnects. At the same time, the escalating cost of an IC factory ('fab') is making headlines as it passes $1B, and a large part of this escalation is the cost of high-performance lithography tools. The lithographic technology for features below 0.1 μm will almost certainly be very different from an extension of today's optical projection, and the cost of replacing today's technology will be enormous. A second drawback of higher density is the resistance of narrow interconnects. As a result, some have suggested that this situation is analogous to airliner speed, which increased over thirty years from about 100 mph to close to 600 mph but has not increased in the last 35 years: still faster speed was technically possible, and hence pursued by the military, but is uneconomical for most commercial use. Current technology might take us to 0.1 μm, which will probably be state of the art ten years hence, so technologies for replacing optical lithography, e.g., scanned arrays of proximal probes, should be researched now. Other challenges include how to achieve useful interconnect networks employing 50 nm features.


2020 ◽  
Vol 12 (4) ◽  
pp. 64 ◽  
Author(s):  
Qaiser Ijaz ◽  
El-Bay Bourennane ◽  
Ali Kashif Bashir ◽  
Hira Asghar

Modern datacenters are reinforcing their computational power and energy efficiency by assimilating field-programmable gate arrays (FPGAs). The sustainability of this large-scale integration depends on enabling multi-tenant FPGAs, which amplifies the importance of the communication architecture and of virtualization methods with the required features. Consequently, over the last decade, academia and industry have proposed several virtualization techniques and hardware architectures addressing resource management, scheduling, adaptability, segregation, scalability, performance overhead, availability, programmability, time-to-market, security, and, above all, multi-tenancy. This paper provides an extensive survey covering three important aspects: a discussion of non-standard terms used in the existing literature, network-on-chip evaluation choices as a means to explore the communication architecture, and virtualization methods under the latest classification. The purpose is to emphasize the importance of choosing an appropriate communication architecture, virtualization technique, and standard language to evolve multi-tenant FPGAs in datacenters. No previous survey has covered all of these aspects in a single work. Open problems for the scientific community are also indicated.


Author(s):  
Hung Kiem Nguyen ◽  
Tu Xuan Tran

The requirements for high performance and low power consumption have become ever more pressing in the design of modern embedded systems, especially for next-generation multi-mode multimedia and communication standards. Ultra-large-scale-integration reconfigurable Systems-on-Chip (SoCs) have been proposed to achieve not only better performance and lower energy consumption but also higher flexibility and versatility than conventional architectures. The unique characteristic of such systems is the integration of many types of heterogeneous reconfigurable processing fabrics over a Network-on-Chip. This paper analyzes and emphasizes the key research trends of reconfigurable SoCs. First, the emerging hardware architecture of these SoCs is highlighted. The key issues in designing reconfigurable SoCs are then discussed, with a focus on the challenges of designing reconfigurable hardware fabrics and reconfigurable Networks-on-Chip. Finally, some state-of-the-art reconfigurable SoCs are briefly reviewed.


2012 ◽  
Vol 2012 ◽  
pp. 1-15 ◽  
Author(s):  
Andrew G. Schmidt ◽  
William V. Kritikos ◽  
Shanyuan Gao ◽  
Ron Sass

As the number of cores per discrete integrated circuit (IC) grows, the importance of the network-on-chip (NoC) increases. However, research in this area has focused on discrete IC devices alone, which may not serve the high-performance computing community, which needs to assemble many such devices into very-large-scale parallel computing machines. This paper describes an integrated on-chip/off-chip network implemented on an all-FPGA computing cluster. The system supports MPI-style point-to-point messages, collectives, and other novel communication. Results include resource utilization and performance in terms of latency and bandwidth.
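The abstract names the programming model but not the API; as a generic illustration of the MPI-style point-to-point messaging such a cluster supports, the following uses the standard mpi4py bindings purely as a stand-in for the system's own interface.

```python
# Generic illustration of MPI-style point-to-point messaging (the
# abstract names the model but not the API; mpi4py is used here as a
# stand-in for the cluster's own hardware interface).
# Run with, e.g.: mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 sends a small payload to rank 1 and waits for the reply.
    comm.send({"seq": 1, "payload": "ping"}, dest=1, tag=0)
    reply = comm.recv(source=1, tag=0)
    print("rank 0 got:", reply)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    comm.send({"seq": msg["seq"], "payload": "pong"}, dest=0, tag=0)
```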


Author(s):  
C.K. Wu ◽  
P. Chang ◽  
N. Godinho

Recently, the use of refractory metal silicides as low-resistivity, high-temperature, oxidation-resistant gate materials in large-scale integrated (LSI) circuits has become an important approach in advanced MOS process development (1). This research is a systematic study of the structure and properties of molybdenum silicide thin films and their applicability to high-performance LSI fabrication.


2008 ◽  
Author(s):  
D. L. McMullin ◽  
A. R. Jacobsen ◽  
D. C. Carvan ◽  
R. J. Gardner ◽  
J. A. Goegan ◽  
...  

Author(s):  
A. Ferrerón Labari ◽  
D. Suárez Gracia ◽  
V. Viñals Yúfera

In recent years, embedded systems have evolved to offer capabilities previously found only in high-performance systems. Portable devices already have on-chip multiprocessors (such as the PowerPC 476FP or ARM Cortex-A9 MP), usually multi-threaded, and a powerful multi-level on-chip cache hierarchy. As most of these systems are battery-powered, power consumption is a critical issue. Achieving high performance and low power consumption together is a highly complex challenge for which several proposals have already been made. Suarez et al. proposed a new on-chip cache hierarchy, the LP-NUCA (Low Power NUCA), which reduces access latency by taking advantage of non-uniform cache architecture (NUCA) properties. The key points are decoupling functionality and using three specialized on-chip networks. This structure has proven efficient for data hierarchies, achieving good performance and reducing energy consumption. Instruction caches, however, have different requirements and characteristics from data caches, which conflict with the requirements of low-power embedded systems, especially in simultaneous multi-threading (SMT) environments. We want to study the benefits of small tiled caches for the instruction hierarchy, so we propose a new design, ID-LP-NUCAs. We therefore need to completely re-evaluate our previous design in terms of structure, interconnection networks (including topologies, flow control, and routing), content management (with special interest in hardware/software content allocation policies), and structure sharing. In chip-multiprocessor (CMP) environments with parallel workloads, coherence plays an important role and must be taken into consideration.
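As a rough illustration of the NUCA property that LP-NUCA exploits, namely access latency that grows with a line's distance from the requesting core, the sketch below models a small tiled cache whose lookup cost depends on the Manhattan distance to the tile holding the line. The grid size, mapping, and cycle counts are invented, not taken from the papers.

```python
# Rough illustration of the NUCA property LP-NUCA exploits: access
# latency grows with the tile's distance from the core. The tile grid,
# address mapping, and cycle counts below are invented for the example.

GRID_W, GRID_H = 4, 4      # 4x4 grid of cache tiles (made up)
HOP_CYCLES = 2             # extra latency per network hop (made up)
TILE_ACCESS_CYCLES = 3     # fixed tile lookup cost (made up)
CORE_XY = (0, 0)           # the requesting core sits next to tile (0, 0)

def tile_of(addr: int) -> tuple:
    """Static address-interleaved mapping of 64-byte lines to tiles."""
    t = (addr >> 6) % (GRID_W * GRID_H)
    return (t % GRID_W, t // GRID_W)

def access_latency(addr: int) -> int:
    """Latency = tile lookup + per-hop cost over the Manhattan distance."""
    tx, ty = tile_of(addr)
    hops = abs(tx - CORE_XY[0]) + abs(ty - CORE_XY[1])
    return TILE_ACCESS_CYCLES + HOP_CYCLES * hops

if __name__ == "__main__":
    for addr in (0x0000, 0x0040, 0x03C0):
        print(f"addr {addr:#06x} -> tile {tile_of(addr)}, "
              f"{access_latency(addr)} cycles")
```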


Author(s):  
Rudolf Schlangen ◽  
Jon Colburn ◽  
Joe Sarmiento ◽  
Bala Tarun Nelapatla ◽  
Puneet Gupta

Abstract Driven by the need for higher test compression, increasingly many chip-makers are adopting new DFT architectures such as "Extreme Compression" (XTR, supported by Synopsys), with on-chip pattern generation and MISR-based compression of scan-chain output data. This paper discusses test-loop requirements in general and gives Advantest 93k-specific guidelines on the test-pattern release and ATE setup needed to enable the most established EFA techniques, such as LVP and SDL (aka DLS, LADA), within the XTR test architecture.
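The MISR itself is a standard structure; the sketch below is a common software model of one, a linear-feedback shift register that XORs the scan-chain outputs into its state each capture cycle, compacting the whole test response into a single signature. The 16-bit width and feedback taps are arbitrary examples, not the XTR configuration.

```python
# Software model of a MISR (multiple-input signature register): an
# LFSR that XORs the scan-chain output word into its state every
# cycle, compacting the whole response into one signature. The 16-bit
# width and feedback taps are arbitrary examples, not the XTR setup.

WIDTH = 16
TAPS = (15, 13, 12, 10)   # example feedback polynomial taps

def misr_step(state: int, chain_bits: int) -> int:
    """One capture cycle: LFSR feedback shift, then XOR in the
    parallel scan-chain output word."""
    feedback = 0
    for t in TAPS:
        feedback ^= (state >> t) & 1
    state = ((state << 1) | feedback) & ((1 << WIDTH) - 1)
    return state ^ chain_bits

def signature(response_words) -> int:
    """Compact a sequence of per-cycle scan-out words into a signature."""
    state = 0
    for word in response_words:
        state = misr_step(state, word)
    return state

if __name__ == "__main__":
    good = signature([0xA5A5, 0x0F0F, 0x3C3C])
    bad = signature([0xA5A5, 0x0F0E, 0x3C3C])   # single-bit fail
    print(f"good signature: {good:#06x}, failing: {bad:#06x}")
```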


Author(s):  
Ray Talacka ◽  
Nandu Tendolkar ◽  
Cynthia Paquette

Abstract Memory arrays have long been used to drive yield enhancement, and many technologies have been developed around them. The uniformity of the arrays allows for easy testing and defect localization. Unfortunately, the complexities of the logic circuitry are not well represented in memory arrays. As technologies push to smaller geometries and the layout and timing of logic circuitry become more problematic, the ability to address yield issues in logic becomes critical. This paper presents the added yield-enhancement capabilities of using e600 core Scan Chain and Scan Pattern testing for logic debug, ways to interpret the fail data, and test methodologies that balance test time against data acquisition. Selecting a suitable test methodology and using today's advanced tools, such as Freescale's DFT/FA, has been proven to find more yield issues earlier, enabling quicker resolution.
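The paper's fail-data interpretation flow is not given in the abstract; a minimal sketch of the underlying idea, assuming a single scan chain of known cell order, is to map each failing shift-out cycle back to the scan cell that captured the bad value.

```python
# Minimal sketch of scan fail-data interpretation (not the paper's
# flow): with a single scan chain whose cell order is known, a failing
# shift-out cycle maps directly back to the scan cell that captured
# the bad value. The chain length and cell names are invented.

CHAIN = [f"u_core/reg_{i}" for i in range(8)]  # scan-out order, cell 0 first

def failing_cells(fail_cycles, chain=CHAIN):
    """Map failing shift-out cycles to scan cell instance names.

    Cycle 0 of shift-out unloads the cell nearest the scan-out pin
    (index 0 here), cycle 1 the next one, and so on."""
    cells = []
    for cycle in fail_cycles:
        if not 0 <= cycle < len(chain):
            raise ValueError(f"cycle {cycle} exceeds chain length")
        cells.append(chain[cycle])
    return cells

if __name__ == "__main__":
    # Suppose the ATE reported miscompares on shift cycles 2 and 5.
    print(failing_cells([2, 5]))
```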

