Design of a General Purpose 8-bit RISC Processor for Computer Architecture Learning

2015 ◽  
Vol 19 (2) ◽  
Author(s):  
Antonio Hernandez Zavala ◽  
Oscar Camacho Nieto ◽  
Jorge Adalberto Huerta Ruelas ◽  
Arodí Rafael Carvallo Dominguez


2014 ◽  
Vol 981 ◽  
pp. 58-61 ◽  
Author(s):  
Hui Jing Yang ◽  
Hao Fan ◽  
Huai Guo Dong

This paper targets computer architecture courses and presents a Field Programmable Gate Array (FPGA) implementation of a RISC processor designed in Verilog HDL. The processor uses 8-bit instruction words, has 4 general-purpose registers, and supports two instruction formats. The design was written in Verilog HDL, synthesized with Quartus II 12.0, simulated with the ModelSim simulator, and then implemented on an Altera Cyclone IV FPGA with 484 available Input/Output pins and a 50 MHz clock oscillator. The overall simulation results verify the correctness of the processor.
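
As a rough illustration of how such a design can be organized, the sketch below decodes an 8-bit instruction word into fields for two hypothetical formats. The paper does not publish its actual encoding, so the layout here (1 format bit, 3-bit opcode, 2-bit register selectors, 4-bit immediate) is an assumption chosen only to fit an 8-bit word and 4 general-purpose registers.

```python
# Illustrative sketch only: the field widths and format bit below are
# assumptions, not the encoding used in the paper.

def decode(word: int) -> dict:
    """Decode one 8-bit instruction word into its fields."""
    assert 0 <= word <= 0xFF
    if (word >> 7) & 0x1 == 0:                 # assumed format bit: 0 = register form
        return {
            "format": "register",
            "opcode": (word >> 4) & 0x7,       # 3-bit operation selector (assumed)
            "rd":     (word >> 2) & 0x3,       # destination register R0-R3
            "rs":     word & 0x3,              # source register R0-R3
        }
    return {
        "format": "immediate",                 # assumed format bit: 1 = immediate form
        "opcode": (word >> 4) & 0x7,           # 3-bit operation selector (assumed)
        "imm":    word & 0xF,                  # 4-bit immediate operand
    }

# 0b0_001_10_01 -> register format, opcode 1, rd = R2, rs = R1
print(decode(0b00011001))
```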


Impact ◽  
2019 ◽  
Vol 2019 (10) ◽  
pp. 44-46
Author(s):  
Masato Edahiro ◽  
Masaki Gondo

The pace of technology's advancements is ever-increasing and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules such as artificial intelligence (AI) and powertrain control modules that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip, increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on the eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. Once the hardware description is written, eMBP bridges the gap between software and hardware, so that hardware vendors gain an efficient ecosystem and software vendors no longer need to adapt code to each particular platform.
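
A conceptual sketch of the idea behind mapping a block-based model onto multiple cores is given below. The block names, the dependency structure, and the level-by-level scheduling are illustrative assumptions; this is not eMBP's actual mapping algorithm or generated code.

```python
# Conceptual sketch only -- not eMBP. Once dependencies between model blocks
# are known, blocks with no unmet dependencies can run concurrently on
# separate cores, level by level.
from concurrent.futures import ProcessPoolExecutor

def sense(): return "sensor data"        # hypothetical model blocks
def plan():  return "trajectory"
def steer(): return "actuator command"

# Hypothetical model: block name -> (function, names of blocks it depends on).
# "plan" and "steer" depend only on "sense", so they can run in parallel.
MODEL = {
    "sense": (sense, []),
    "plan":  (plan,  ["sense"]),
    "steer": (steer, ["sense"]),
}

def schedule(model):
    """Group blocks into dependency levels; blocks within a level are independent."""
    done, levels = set(), []
    while len(done) < len(model):
        level = [name for name, (_, deps) in model.items()
                 if name not in done and all(d in done for d in deps)]
        levels.append(level)
        done.update(level)
    return levels

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for level in schedule(MODEL):
            futures = {name: pool.submit(MODEL[name][0]) for name in level}
            print({name: fut.result() for name, fut in futures.items()})
```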


1996 ◽  
Vol 33 (3) ◽  
pp. 251-260 ◽  
Author(s):  
James O. Hamblen

Using VHDL based modelling, synthesis, and simulation in an introductory computer architecture laboratory

Recent research advances in CAD tools and rapid prototyping using logic synthesis are notably absent from many existing curricula. This paper describes a novel introductory computer architecture laboratory that utilizes these new developments. VHDL based logic synthesis and timing simulations are used to design a RISC processor.


2017 ◽  
Author(s):  
Ben Langmead ◽  
Christopher Wilks ◽  
Valentin Antonescu ◽  
Rone Charles

General-purpose processors can now contain many dozens of processor cores and support hundreds of simultaneous threads of execution. To make best use of these threads, genomics software must contend with new and subtle computer architecture issues. We discuss some of these and propose methods for improving thread scaling in tools that analyze each read independently, such as read aligners. We implement these methods in new versions of Bowtie, Bowtie 2 and HISAT. We greatly improve thread scaling in many scenarios, including on the recent Intel Xeon Phi architecture. We also highlight how bottlenecks are exacerbated by variable-record-length file formats like FASTQ and suggest changes that enable superior scaling.
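
The pattern discussed above can be sketched as follows. This is not Bowtie's implementation; the file name reads.fastq, the placeholder align function, and the thread count are assumptions for illustration. It shows how per-read work parallelizes cleanly while the shared parser of a variable-record-length FASTQ file must be serialized behind a lock, which is where scaling bottlenecks can arise.

```python
# Minimal sketch (not Bowtie's actual code): worker threads align reads
# independently, but all of them take turns pulling records from one shared
# FASTQ parser, so the synchronized input step can limit thread scaling.
import threading
from queue import Queue

def parse_fastq(path):
    """Yield (name, sequence, quality) records; records have variable length."""
    with open(path) as fh:
        while True:
            name = fh.readline().rstrip()
            if not name:
                return
            seq = fh.readline().rstrip()
            fh.readline()                     # '+' separator line
            qual = fh.readline().rstrip()
            yield name, seq, qual

def align(read):
    """Stand-in for the per-read alignment work (embarrassingly parallel)."""
    name, seq, _ = read
    return name, seq.count("G") + seq.count("C")   # placeholder computation

def worker(records, lock, results):
    while True:
        with lock:                            # only one thread may parse at a time
            rec = next(records, None)
        if rec is None:
            return
        results.put(align(rec))

if __name__ == "__main__":
    records = parse_fastq("reads.fastq")      # hypothetical input file
    lock, results = threading.Lock(), Queue()
    threads = [threading.Thread(target=worker, args=(records, lock, results))
               for _ in range(8)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(results.qsize(), "reads aligned")
```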


2020 ◽  
Vol 24 (23) ◽  
pp. 17525-17539 ◽  
Author(s):  
Alberto Falcone ◽  
Alfredo Garro ◽  
Marat S. Mukhametzhanov ◽  
Yaroslav D. Sergeyev

Numerical computing is a key part of the traditional computer architecture. Almost all traditional computers implement the IEEE 754-1985 binary floating point standard to represent and work with numbers. The architectural limitations of traditional computers make it impossible to work with infinite and infinitesimal quantities numerically. This paper is dedicated to the Infinity Computer, a new kind of supercomputer that allows one to perform numerical computations with finite, infinite, and infinitesimal numbers. The already available software simulator of the Infinity Computer is used in different research domains for solving important real-world problems, where precision represents a key aspect. However, the software simulator is not suitable for solving problems in control theory and dynamics, where visual programming tools like Simulink are used frequently. In this context, the paper presents an innovative solution that allows one to use the Infinity Computer arithmetic within the Simulink environment. It is shown that the proposed solution is user-friendly, general purpose, and domain independent.
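
A toy sketch of computing with numbers that carry finite, infinite, and infinitesimal parts is shown below. The GrossNumber class and its representation as coefficients of integer powers of a symbolic infinite unit H are illustrative assumptions, not the Infinity Computer's actual arithmetic or the Simulink integration described in the paper.

```python
# Toy illustration only: numbers are written as sums of coefficients times
# powers of a symbolic infinite unit H, so H**0 is the finite part and
# H**-1 is an infinitesimal. Addition and multiplication operate on these
# coefficient dictionaries.

class GrossNumber:
    def __init__(self, parts):
        # parts: {power_of_H: coefficient}, e.g. {1: 2.0, 0: 3.0, -1: 0.5}
        self.parts = {p: c for p, c in parts.items() if c != 0}

    def __add__(self, other):
        parts = dict(self.parts)
        for p, c in other.parts.items():
            parts[p] = parts.get(p, 0.0) + c
        return GrossNumber(parts)

    def __mul__(self, other):
        parts = {}
        for p1, c1 in self.parts.items():
            for p2, c2 in other.parts.items():
                parts[p1 + p2] = parts.get(p1 + p2, 0.0) + c1 * c2
        return GrossNumber(parts)

    def __repr__(self):
        terms = [f"{c}*H^{p}" for p, c in sorted(self.parts.items(), reverse=True)]
        return " + ".join(terms) or "0"

# (2H + 3) * (1 + 0.5H^-1)  ->  2H + 4 + 1.5H^-1
x = GrossNumber({1: 2.0, 0: 3.0})
eps = GrossNumber({0: 1.0, -1: 0.5})
print(x * eps)
```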


Very-large-scale integration (VLSI) offers new opportunities in computer architecture. The cost of a processor has been reduced to that of a few thousand bytes of memory, with the result that parallel computers can be constructed as easily and economically as their sequential predecessors. In particular, a parallel computer constructed by replication of a standard computing element is well suited to the mass-production economics of the technology. The emergence of the new parallel computers has stimulated the development of new programming languages and algorithms. One example is the Occam language which has been designed to enable applications to be expressed in a form suitable for execution on a variety of parallel architectures. Further developments in language and architecture will enable processing resources to be allocated and deallocated as freely as memory, giving rise to some hope that users of general-purpose parallel computers will be freed from the current need to design algorithms to suit specific architectures.
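
To make the channel-based style concrete, the sketch below mimics Occam's communicating processes with Python threads and queues. The producer/consumer processes and the use of a bounded queue as a stand-in for an Occam channel are assumptions made for illustration; this is an analogy, not Occam itself.

```python
# Hedged sketch: Occam programs are built from processes that communicate
# only over channels. Here two threads play the roles of two processes and
# a small queue plays the role of the channel between them.
import threading
from queue import Queue

def producer(out_chan: Queue):
    """Send the numbers 0..4 down the channel, then a sentinel."""
    for i in range(5):
        out_chan.put(i)
    out_chan.put(None)

def consumer(in_chan: Queue):
    """Receive values until the sentinel arrives and accumulate a sum."""
    total = 0
    while (value := in_chan.get()) is not None:
        total += value
    print("sum =", total)

if __name__ == "__main__":
    chan = Queue(maxsize=1)   # small buffer, loosely echoing Occam's rendezvous
    procs = [threading.Thread(target=producer, args=(chan,)),
             threading.Thread(target=consumer, args=(chan,))]
    for p in procs: p.start()
    for p in procs: p.join()
```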


2000 ◽  
Vol 10 (06) ◽  
pp. 475-481 ◽  
Author(s):  
AMOS R. OMONDI

The last decade saw a proliferation of research into the design of neurocomputers. Although such work still continues, much of it never goes beyond the prototype-machine stage. In this paper, we argue that, on the whole, neurocomputers are no longer viable; like, say, database computers before them, their time has passed before they became a common reality. We consider the implementation of hardware neural networks, from the level of arithmetic up to complete individual processors and parallel processors, and show that current trends in computer architecture and implementation do not support a case for custom neurocomputers. We argue that in the future, neural-network processing ought to be mostly restricted to general-purpose processors or to processors that have been designed for other widely used applications. There are just one or two, rather narrow, exceptions to this.
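
The kind of computation at issue can be illustrated with the sketch below: neural-network evaluation reduces to multiply-accumulate operations (a matrix-vector product per layer), which the floating-point and vector units of ordinary general-purpose processors already execute efficiently. The layer sizes, random weights, and sigmoid activation are arbitrary illustrative choices.

```python
# Illustrative sketch of the workload: one fully connected layer is a
# matrix-vector multiply-accumulate followed by an elementwise activation.
import numpy as np

def layer(x, weights, bias):
    """One fully connected layer: sigmoid(W.x + b)."""
    z = weights @ x + bias
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.standard_normal(16)          # input activations
w = rng.standard_normal((8, 16))     # synaptic weights
b = rng.standard_normal(8)           # biases
print(layer(x, w, b))                # outputs of the 8 neurons
```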

