How Fast Can We Multiply Large Integers on an Actual Computer?

Author(s):  
Martin Fürer
Author(s):  
Nova T. Zamora ◽  
Kam Meng Chong ◽  
Ashish Gupta

Abstract This paper presents a recent application of die power-up thermal imaging to the detection of defects that cause thermal failure in revenue products or units and that are not captured by other available techniques. Simulating the condition on an actual computer setup, an infrared (IR) camera captures images while the processor executes the entire boot-up process, yielding a series of images and thermal information for each step of the startup sequence. This metrology gives the failure analyst a better way to acquire information that supports root-cause analysis of thermal-related failures in revenue units, especially customer returns. Defective units were intentionally engineered in order to collect thermal response data and ultimately produce a plot of all known thermal-related defects.
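The method described above pairs timestamped IR frames with boot-process steps and compares each step against a known-good thermal signature. A minimal sketch of that bookkeeping, assuming illustrative data shapes and a hypothetical tolerance (none of these names come from the paper):

```python
# Hypothetical sketch: align timestamped IR-frame peak temperatures with
# boot-process steps and flag deviations from a known-good baseline.
# Data shapes, names, and the tolerance are illustrative assumptions,
# not the authors' actual tooling.

def flag_thermal_outliers(boot_steps, frames, baseline, tolerance_c=5.0):
    """boot_steps: list of (step_name, start_s, end_s);
    frames: list of (timestamp_s, peak_temp_c);
    baseline: dict step_name -> expected peak temperature (deg C).
    Returns (step_name, observed_peak) for steps hotter than expected."""
    flagged = []
    for name, start, end in boot_steps:
        temps = [temp for ts, temp in frames if start <= ts < end]
        if not temps:
            continue  # no IR frame fell inside this boot step
        peak = max(temps)
        if peak - baseline.get(name, peak) > tolerance_c:
            flagged.append((name, peak))
    return flagged
```

A defective unit would then show up as one or more flagged steps, localizing the thermal failure to a point in the startup sequence.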


2007 ◽  
Vol 23 (5) ◽  
pp. 2321-2344 ◽  
Author(s):  
Ulla Bunz ◽  
Carey Curry ◽  
William Voon

Author(s):  
Kevin Larkin ◽  
Glenn Finger

<blockquote>Although one-to-one laptop programs are being introduced in many schools, minimal research has been conducted regarding their effectiveness in primary schools. Evidence-based research is needed to inform significant funding, deployment and student use of computers. This article analyses key findings from a study conducted in four Year 7 classrooms in which students were provided with netbook computers as an alternative to more expensive laptop computers. Variable access was provided to students including computer to student ratios of one-to-one and one-to-two. Findings indicated that increased access to the netbook computers resulted in increases in computer usage by these students, compared with their minimal use of computers before the study. However, despite the increased access, actual computer usage remained limited. The article reports that factors contributing to the minimal use of computers included individual teacher agency, a crowded curriculum, and the historical use of computers. Implications for policy and practice are suggested.</blockquote>


2010 ◽  
Vol 39 ◽  
pp. 169-175
Author(s):  
Mahamad Saipunidzam ◽  
Mohd Taib Shakirah ◽  
Noor Ibrahim Mohammad

This paper presents empirical results from an experiment on the speed-accuracy trade-off in ‘drag and drop’ movements. Although previous work has modeled theorems, frameworks and performance, some researchers focused only on system error. In this paper, we design an analysis mechanism with software that behaves like an actual computer in producing errors (system bugs). The study investigates user anticipation across multiple modalities in relation to users’ performance. The multiple modalities comprise four feedback types, default, audio, text and visual, integrated with error enforcement that forces errors at a fixed percentage to simulate system error and actual computer behavior. The findings show an average mean of 4.0667 for feedback modality, from which we conclude that users who anticipate the feedback modality can react to improve their performance in the movement mechanism.
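The "error enforcement" idea described above, forcing an otherwise correct drop to fail with a configured probability, can be sketched in a few lines. This is a hypothetical illustration of the mechanism, not the authors' software; the function names and rates are assumptions:

```python
import random

# Minimal sketch of error enforcement: a drag-and-drop trial that the user
# performed correctly is rejected with probability `error_rate`, simulating
# a system bug in an otherwise correct program. Names are illustrative.

def run_trial(user_hit, error_rate, rng=random.random):
    """Return True if the drop registers. A correct drop is rejected
    with probability `error_rate` to mimic system error."""
    if not user_hit:
        return False          # the user genuinely missed the target
    return rng() >= error_rate  # otherwise, fail artificially at the set rate

def error_rate_observed(n, error_rate, seed=42):
    """Run n correct trials and measure the enforced failure fraction."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if not run_trial(True, error_rate, rng.random))
    return failures / n
```

Over many trials the observed failure fraction converges to the configured rate, which is what lets the experiment hold "system error" constant across the four feedback modalities.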


Author(s):  
Annette C. Easton ◽  
George Easton

The gaps that exist between students' self-perceived computer skills, their actual computer skills, and the computer skills deemed important in business pose an interesting challenge for business schools today and for the foreseeable future. One strategy for managing these literacy gaps is developing curriculum that better tailors content to the evolving literacy of students. To operationalize this strategy, we have undertaken a study to measure the magnitude of the literacy gaps and the effectiveness of an introductory computer course required in our undergraduate business program. This paper presents the initial results of that study.


Author(s):  
Wen-Shian Tzeng ◽  
P. R. Strutt

A computer method has been developed which facilitates the rapid use of the Philips 300 goniometer stage for quantitative diffraction contrast analyses of lattice imperfections in thin-foil specimens. Using this method it is a straightforward matter to obtain 20 (or even more) different two beam conditions on a single randomly oriented grain in a polycrystalline specimen. Furthermore, the method avoids ambiguity in determining the sense of the atomic displacement of planar defects. Thus it is particularly useful for the precise characterization of stacking faults and anti-phase boundaries. The generality of the method enables it to be used for thin-foil specimens with any crystallographic structure. Basic steps involved in the actual computer program are briefly summarized.
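A central calculation in two-beam diffraction contrast analysis of the kind automated above is the standard g·b invisibility criterion: a dislocation with Burgers vector b shows little or no contrast under a reflection g with g·b = 0, which is how candidate Burgers vectors are narrowed down across many two-beam conditions. A sketch of that check, illustrating the kind of bookkeeping such a program performs (this is not the authors' code):

```python
import numpy as np

# Standard g.b invisibility criterion from two-beam diffraction contrast
# analysis: reflections g with g.b = 0 leave the dislocation (nearly)
# invisible. Comparing visibility across many two-beam conditions narrows
# down the Burgers vector b. Illustrative sketch, not the authors' program.

def invisible_reflections(b, reflections, tol=1e-9):
    """Return the reflections g (as given) with |g.b| below tol."""
    b = np.asarray(b, dtype=float)
    return [g for g in reflections if abs(np.dot(g, b)) < tol]
```

For example, a dislocation with b = a/2[110] is predicted invisible under g = (1, -1, 1) but visible under g = (2, 0, 0).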


2011 ◽  
Vol 271-273 ◽  
pp. 1313-1317
Author(s):  
Li Xian Fan ◽  
Yong Zhao Xu ◽  
Hong Tao Li

Virtualization technology has revolutionized computer technology. In this paper, we present our virtual computer cluster. Through virtualization, the components of the computer cluster have been moved from actual computer servers to virtual server instances. A virtual computer cluster comprises a front-end machine and cluster nodes; the front-end machine can be a physical or virtual machine, and each cluster node can likewise be a virtual or physical machine. The front-end machine communicates with the cluster nodes through a physical or virtual network adapter, and connects to the cluster management software on each node so as to monitor and control every physical or virtual node in the cluster. A single virtual computer cluster extends its resources by adding physical or virtual resources, including computing resources, storage resources, etc., in secure and stable settings with special needs. The technology can be widely applied wherever cluster and parallel computing are needed but the system must be designed at lower cost.
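The front-end/node relationship described above can be sketched as a front-end object that polls each node, physical or virtual, through one uniform interface. All class and method names below are assumptions for illustration, not an API from the paper:

```python
# Illustrative sketch of the architecture described above: the front-end
# machine treats physical and virtual nodes uniformly, monitors their
# status, and extends the cluster by adding nodes. Names are hypothetical.

class Node:
    def __init__(self, name, virtual):
        self.name = name
        self.virtual = virtual   # True for a virtual machine node
        self.alive = True        # would be set by the node's management agent

    def status(self):
        return {"name": self.name, "virtual": self.virtual, "alive": self.alive}

class FrontEnd:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def monitor(self):
        """Poll every node, physical or virtual, through the same interface."""
        return [n.status() for n in self.nodes]

    def add_node(self, node):
        """Extend the cluster with a new physical or virtual resource."""
        self.nodes.append(node)
```

The point of the uniform interface is that monitoring and scaling logic need not care whether a node is backed by hardware or by a virtual server instance.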


2012 ◽  
Vol 253-255 ◽  
pp. 1324-1329
Author(s):  
Meng Nan Zhang ◽  
Hong Ze Xu

Accurate measurement of urban rail vehicle speed is the basis of normal train control operation. Since vehicle-borne speed-measuring devices are inevitably disturbed by the sensors or the external environment, a randomly varying deviation between the measured speed and the actual value is impossible to eliminate. This paper uses the method of minimum-variance prediction to predict the speed of the train, so that the variance of the deviation between the predicted value and the actual value of the speed is minimized. The model of the train's speed is also discretized, which overcomes the shortcoming that traditional models and control theory are limited to theoretical analysis and cannot be used in actual computer control systems. In the simulation section, the article shows actual simulation results, which demonstrate that this method has strong practicability.
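A scalar Kalman filter is one standard realization of minimum-variance prediction for a noisy speed measurement: it fuses the model prediction with each measurement using a gain chosen to minimize the posterior variance. The discrete model and the noise variances below are illustrative assumptions, not the paper's exact model:

```python
# Hedged sketch: scalar Kalman filtering as a minimum-variance speed
# predictor. The constant-acceleration model and the variances q (process)
# and r (measurement) are illustrative, not taken from the paper.

def kalman_speed(measurements, dt=0.1, accel=0.0, q=0.01, r=0.25):
    """measurements: noisy speed samples (m/s).
    Returns the minimum-variance speed estimate after each sample."""
    v, p = measurements[0], r       # initial estimate and its variance
    estimates = []
    for z in measurements:
        v_pred = v + accel * dt     # predict speed one step ahead
        p_pred = p + q              # prediction variance grows by q
        k = p_pred / (p_pred + r)   # gain that minimizes posterior variance
        v = v_pred + k * (z - v_pred)   # correct with the new measurement
        p = (1.0 - k) * p_pred          # posterior variance shrinks
        estimates.append(v)
    return estimates
```

Because the gain weights the prediction against the measurement by their variances, the filtered estimate fluctuates less than the raw sensor readings, which is exactly the deviation-variance the paper seeks to minimize.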


2001 ◽  
Vol 123 (01) ◽  
pp. 44-46 ◽  
Author(s):  
Paul Sharke

BMW’s Z9 study car combines a haptic input device, on the console, with a display screen. With the long-range problem in view, BMW began speaking to the engineers at Immersion about the possibility of designing a mouse for the car. The target vehicle would be the 2001 7 series. An actual computer mouse in a car is one of those products that would probably cause a crash. According to an expert, a strength of the mechatronics discipline is its notion of multivariable optimization, the idea of trying to solve the problem in the right place. Although the medical trainers for which Immersion provides tactile feedback look similar when seen from Schena’s favorite zoomed-out perspective, up close they are fundamentally distinct.


Author(s):  
Dietrich Hartmann ◽  
Karl R. Leimbach

Abstract Multilevel parallelization concepts for structural optimization have the potential to significantly improve productivity in CAE. In particular, if large-scale structural systems are to be designed with respect to a specified optimization criterion subject to a large set of constraints, substantial speedup factors are achievable. This contribution discusses generic as well as specific parallelization concepts for large structural optimization problems. Based upon these concepts, transputer systems are applied as a platform for the transition from concepts to actual computer implementation.
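One generic parallelization concept of the kind discussed above is evaluating a large set of independent design constraints concurrently. The constraint functions and pool below are assumptions for illustration; the paper targets transputer systems, sketched here with a thread pool standing in for the parallel processors:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: independent design constraints evaluated in
# parallel; the design is feasible iff every constraint holds. The
# constraint functions here are hypothetical placeholders.

def max_stress_ok(design):
    return design["stress"] <= design["stress_limit"]

def max_deflection_ok(design):
    return design["deflection"] <= design["deflection_limit"]

CONSTRAINTS = [max_stress_ok, max_deflection_ok]

def check_design(design, workers=2):
    """Evaluate all constraints concurrently and combine the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(c, design) for c in CONSTRAINTS]
        return all(f.result() for f in futures)
```

With thousands of constraints per candidate design, distributing these independent evaluations across processors is where the speedup factors mentioned above come from.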

