A novel design for hardware interface board with reduced resource utilization

Author(s):  
G. S. Ananth ◽  
N. Shylashree ◽  
Satish Tunga ◽  
Latha B. N.

The final cost of an integrated circuit (IC) is proportional to its testing time. One of the main goals of test engineers when building an IC test solution is to reduce test time. Test time is reduced through multi-site testing, in which multiple ICs are tested simultaneously using automated test equipment (ATE). During multi-site testing, if a certain test requires abundant resources, it is accomplished by testing one set of ICs at a time while the other ICs remain idle, thus lengthening the total test time. In digital-analog hybrid ICs, both analog and digital tests need to be performed, increasing the tester resource requirement and causing a digital resource shortage. This paper describes a hardware interface board (HIB) design for a test case of a digital-analog IC on Teradyne’s ETS-364 ATE. The HIB's design allows the ATE to perform multi-site I²C-based tests, which usually require a large number of tester resources, using only two digital resources and one measurement resource. This design halves the I²C test time while lowering the number of resources necessary for multi-site testing compared to set-by-set testing. The proposed work achieves up to 90.625% resource reduction for multi-site testing of a single test.
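
To make the reported reduction concrete, the short Python sketch below shows how a figure of this magnitude arises when a shared I²C interface replaces per-site digital channels. The site count and per-site channel count are assumptions chosen for illustration (so that the arithmetic reproduces the reported percentage); only the "two digital plus one measurement resource" figure comes from the abstract.

```python
# Illustrative resource-count comparison for multi-site I2C testing.
# The per-site counts below are assumptions for the sake of the example,
# not numbers taken from the paper.

sites = 16                              # assumed number of ICs tested in parallel
channels_per_site = 2                   # assumed, e.g. SDA + SCL per site

conventional = sites * channels_per_site    # 32 tester resources
with_hib = 2 + 1                        # two shared digital resources + one measurement resource

reduction = (conventional - with_hib) / conventional * 100
print(f"conventional: {conventional}, with HIB: {with_hib}, "
      f"reduction: {reduction:.3f}%")   # -> 90.625%
```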

Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1857
Author(s):  
A. D. Shrivathsan ◽  
R. Krishankumar ◽  
Arunodaya Raj Mishra ◽  
K. S. Ravichandran ◽  
Samarjit Kar ◽  
...  

This paper focuses on an exciting and essential problem in software companies. The software life cycle includes testing, which is often time-consuming and is a critical phase in the software development process. To reduce the time spent on testing while maintaining software quality, a systematic selection of test cases is needed. Motivated by this need, researchers have proposed test case prioritization (TCP) using the concepts of multi-criteria decision-making (MCDM). However, the literature on TCP suffers from the following issues: (i) difficulty in properly handling uncertainty; (ii) systematic evaluation of criteria by understanding the hesitation of experts; and (iii) rational prioritization of test cases by considering the nature of criteria. Motivated by these issues, this paper puts forward an integrated approach that circumvents them. The main aim of this research is to develop a decision model with integrated methods for TCP. The core importance of the proposed model is to (i) provide a systematic/methodical decision on TCP with a reduction in testing time and cost; (ii) help software personnel choose apt test cases from the suite for testing software; and (iii) reduce human bias by mitigating the intervention of personnel in the decision process. To this end, probabilistic linguistic information (PLI) is adopted as the preference structure, which flexibly handles uncertainty by associating an occurrence probability with each linguistic term. Furthermore, an attitude-based entropy measure is presented for criteria weight calculation, and finally, the EDAS ranking method is extended to PLI for TCP. An empirical study of TCP in a software company is presented to certify the integrated approach’s effectiveness. The strengths and weaknesses of the introduced approach are discussed by comparing it with relevant methods.
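
The EDAS step of the pipeline can be illustrated with a minimal sketch. The snippet below implements classical (crisp) EDAS on a numeric decision matrix with given criteria weights; the probabilistic-linguistic extension and the attitude-based entropy weighting described in the paper are omitted, and the matrix, weights and criterion types are invented for illustration.

```python
import numpy as np

def edas_rank(matrix, weights, benefit):
    """Rank alternatives (rows) with classical EDAS.
    matrix: alternatives x criteria, weights: criteria weights summing to 1,
    benefit: boolean array, True for benefit criteria, False for cost criteria."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    avg = X.mean(axis=0)                        # average solution per criterion
    d = np.where(benefit, X - avg, avg - X)     # signed distance from average
    pda = np.clip(d, 0, None) / avg             # positive distance from average
    nda = np.clip(-d, 0, None) / avg            # negative distance from average
    sp = (pda * w).sum(axis=1)                  # weighted sum of PDA
    sn = (nda * w).sum(axis=1)                  # weighted sum of NDA
    nsp = sp / sp.max() if sp.max() > 0 else sp
    nsn = 1 - (sn / sn.max() if sn.max() > 0 else sn)
    score = (nsp + nsn) / 2                     # appraisal score
    return np.argsort(-score), score

# Hypothetical test cases scored on three criteria:
# fault-detection ability (benefit), execution time (cost), setup effort (cost).
matrix = [[0.9, 12, 3],
          [0.7,  5, 1],
          [0.8,  8, 2]]
weights = [0.5, 0.3, 0.2]
benefit = np.array([True, False, False])
order, scores = edas_rank(matrix, weights, benefit)
print("priority order (best first):", order.tolist(), "scores:", scores.round(3).tolist())
```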


Author(s):  
Maggie Makar ◽  
Adith Swaminathan ◽  
Emre Kıcıman

The potential for using machine learning algorithms as a tool for suggesting optimal interventions has fueled significant interest in developing methods for estimating heterogeneous or individual treatment effects (ITEs) from observational data. While several methods for estimating ITEs have been recently suggested, these methods assume no constraints on the availability of data at the time of deployment or test time. This assumption is unrealistic in settings where data acquisition is a significant part of the analysis pipeline, meaning data about a test case has to be collected in order to predict the ITE. In this work, we present Data Efficient Individual Treatment Effect Estimation (DEITEE), a method which exploits the idea that adjusting for confounding, and hence collecting information about confounders, is not necessary at test time. DEITEE allows the development of rich models that exploit all variables at train time but identifies a minimal set of variables required to estimate the ITE at test time. Using 77 semi-synthetic datasets with varying data generating processes, we show that DEITEE achieves significant reductions in the number of variables required at test time with little to no loss in accuracy. Using real data, we demonstrate the utility of our approach in helping soon-to-be mothers make planning and lifestyle decisions that will impact newborn health.
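
A rough sketch of the general idea follows: train a rich model on all variables, then distill its ITE predictions into a compact model that needs only a few variables at test time. This is not the authors' DEITEE algorithm; the synthetic data, the outcome-model/Lasso-distillation shortcut, and all parameters are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.normal(size=(n, p))                           # covariates
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # treatment confounded by X0
tau = 1.0 + 2.0 * X[:, 1]                             # true ITE depends only on X1
Y = X[:, 0] + tau * T + rng.normal(scale=0.5, size=n)

# Train time: rich outcome model using all variables (adjusts for confounding).
outcome = RandomForestRegressor(n_estimators=100, random_state=0)
outcome.fit(np.column_stack([X, T]), Y)
ite_hat = (outcome.predict(np.column_stack([X, np.ones(n)]))
           - outcome.predict(np.column_stack([X, np.zeros(n)])))

# Distillation: sparse model of the estimated ITE -> minimal test-time variables.
distilled = Lasso(alpha=0.05).fit(X, ite_hat)
needed = np.flatnonzero(np.abs(distilled.coef_) > 1e-3)
print("variables required at test time:", needed)     # ideally just [1]
```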


Author(s):  
Tomislav Kurevija ◽  
Kristina Strpić ◽  
Sonja Koščak-Kolin

The theory of thermal response testing (TRT) is a well-known part of the sizing process of a geothermal exchange system. Multiple parameters influence the accuracy of the effective ground thermal conductivity measurement, such as testing time, variable power, climatic interference, and groundwater effects. To improve the accuracy of the TRT, we introduce a procedure to additionally analyze the falloff temperature decline after the power test. The method is based on the analogy between TRT and petroleum well testing, since both procedures originate from the diffusivity equation, with solutions for heat conduction or pressure analysis during radial flow. By applying the pressure build-up test interpretation technique to borehole heat exchanger testing, greater accuracy can be achieved, since ground conductivity can also be obtained from this period. The analysis was conducted on a coaxial exchanger with five different power steps and with both direct and reverse flow regimes. Each test consisted of 96 h of classical TRT followed by 96 h of temperature decline, amounting to almost 2000 hours of cumulative borehole testing. Results showed that the ground conductivity value can vary by as much as 25% depending on test time, seasonal period and power fluctuations, while the thermal conductivity obtained from the falloff period gives more stable values, with only 10% variation.
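
For context, the standard infinite line-source interpretation that such analyses build on can be sketched in a few lines: the effective ground thermal conductivity follows from the slope of the mean fluid temperature against the natural logarithm of time, λ = Q / (4πHk), where k is that slope, Q the injected heat power and H the borehole length. The numbers below are invented for illustration and are not data from the study.

```python
import numpy as np

# Illustrative TRT interpretation with the infinite line-source model.
# Q = injected heat power [W], H = borehole length [m]; values are assumptions.
Q, H = 6000.0, 100.0

# Synthetic late-time data: temperature rises linearly with ln(t).
t = np.linspace(20, 96, 200) * 3600.0        # seconds (late-time window)
true_lambda = 2.2                            # W/(m K), assumed ground conductivity
slope_true = Q / (4 * np.pi * H * true_lambda)
T_fluid = 12.0 + slope_true * np.log(t)      # mean fluid temperature [degC]

# Interpretation: fit T vs ln(t), recover conductivity from the slope.
k, _ = np.polyfit(np.log(t), T_fluid, 1)
lambda_eff = Q / (4 * np.pi * H * k)
print(f"effective ground thermal conductivity: {lambda_eff:.2f} W/(m K)")
```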


2021 ◽  
Author(s):  
Marlene Jensen ◽  
Juliane Wippler ◽  
Manuel Kleiner

Field studies are central to environmental microbiology and microbial ecology, as they enable studies of natural microbial communities. Metaproteomics, the study of protein abundances in microbial communities, allows these communities to be studied in situ, which requires protein preservation directly in the field, as protein abundance patterns can change rapidly after sampling. Ideally, a protein preservative for field deployment works rapidly, preserves the whole proteome, is stable in long-term storage, is non-hazardous and easy to transport, and is available at low cost. Although these requirements might be met by several protein preservatives, an assessment of their suitability under field conditions for metaproteomics is currently lacking. Here, we compared the protein preservation performance of flash freezing and the preservation solution RNAlater™ using the marine gutless oligochaete Olavius algarvensis and its symbiotic microbes as a test case. In addition, we evaluated long-term RNAlater™ storage after 1 day, 1 week and 4 weeks at room temperature (22-23 °C). We evaluated protein preservation using one-dimensional liquid chromatography tandem mass spectrometry (1D-LC-MS/MS). We found that RNAlater™ and flash freezing preserved proteins equally well in terms of the total number of identified proteins and the relative abundances of individual proteins, and none of the tested time points were altered compared to t0. Moreover, we did not find biases against specific taxonomic groups or proteins with particular biochemical properties. Based on our metaproteomics data and the logistical requirements for field deployment, we recommend RNAlater™ for protein preservation of field-collected samples intended for metaproteomics.


Software maintenance is one of the most expensive activities in the software life cycle, accounting for nearly 70% of the total cost of the software. Software undergoes maintenance either to accommodate new requirements or to correct functionality. As a consequence of maintenance activities, software undergoes many reforms. Newly added software components may affect the working of existing components and may also introduce faults into them. Regression testing tries to reveal the faults that might have been introduced by these modifications. Running all previously existing test cases may not be feasible due to constraints such as time, cost and resources. Test case prioritization can help by ordering the execution of test cases: testing a faulty or fault-prone component early in the testing process may reveal more faults per unit of time and hence reduce testing time. Many different criteria have been used for assigning priority to test cases, but none of the approaches so far has considered object-oriented design metrics for determining the priority of test cases. Object-oriented design metrics have been empirically studied for their impact on software maintainability, reliability, testability and quality, but the use of these metrics in test case prioritization is still an open area of research. The research reported in this paper evaluates a subset of the CK metrics: coupling between objects (CBO), depth of inheritance tree (DIT), weighted methods per class (WMC), number of children (NOC), and response for a class (RFC). The study also considers four other metrics, namely publicly inherited methods (PIM), weighted attributes per class (WAC), number of methods inherited (NMI) and number of methods overridden. A model is built on these metrics to predict software quality, and based on the quality measures, software modules are classified with the help of the Support Vector Machine (SVM) algorithm. The proposed approach is implemented in the WEKA tool and analysed on experimental data extracted from open-source software. The proposed work first helps the tester identify low-quality modules and then prioritizes the test cases based on a quality-centric approach. The work also attempts to automate test case prioritization in object-oriented testing. The results obtained are encouraging.
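
A minimal sketch of the classification step follows. The CK metric values, the quality labels, and the use of scikit-learn (rather than WEKA, which the paper uses) are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one row of design metrics per class/module
# [CBO, DIT, WMC, NOC, RFC, PIM, WAC, NMI, methods overridden]
X_train = np.array([
    [12, 4, 45, 0, 60, 5, 10, 8, 3],
    [ 3, 1, 10, 2, 15, 8,  4, 2, 0],
    [ 9, 3, 30, 1, 40, 6,  7, 5, 2],
    [ 2, 1,  8, 3, 12, 9,  3, 1, 0],
])
y_train = np.array([0, 1, 0, 1])   # 0 = low quality, 1 = high quality (assumed labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

# Modules predicted as low quality are tested first.
new_modules = np.array([[11, 4, 50, 0, 55, 4, 9, 7, 2],
                        [ 4, 2, 12, 1, 18, 7, 5, 3, 1]])
quality = model.predict(new_modules)
priority_order = np.argsort(quality)   # low-quality (0) modules come first
print("test these modules first:", priority_order.tolist())
```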


2004 ◽  
Vol 16 (4) ◽  
pp. 397-403 ◽  
Author(s):  
Kazuki Nakada ◽  
Tetsuya Asai ◽  
Yoshihito Amemiya

The present paper proposes an analog integrated circuit (IC) implementation of a biologically inspired controller for quadruped robot locomotion. Our controller is based on the central pattern generator (CPG), the biological neural network that generates fundamental rhythmic movements in animal locomotion. Many CPG-based controllers for robot locomotion have been proposed, but they have mostly been implemented in software on digital microprocessors. Such a digital processor operates accurately, but it can only process sequentially. Thus, increasing the degrees of freedom of a robot's physical parts degrades the performance of a CPG-based controller. We therefore implemented a CPG-based controller in an analog complementary metal-oxide-semiconductor (CMOS) circuit that is inherently parallel, making it suitable for real-time locomotion control in a multi-legged robot. Using the simulation program with integrated circuit emphasis (SPICE), we show that our controller generates stable rhythmic patterns for locomotion control in a quadruped walking robot and changes its rhythmic patterns promptly.
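
As a behavioural illustration of the CPG concept only (not the analog CMOS circuit the paper implements), the sketch below integrates four coupled phase oscillators with the phase lags of a walking gait; the oscillator model and all parameters are assumptions.

```python
import numpy as np

# Behavioural sketch of a four-oscillator CPG for a quadruped walk gait.
# Phase-oscillator model with Kuramoto-style coupling; parameters are assumptions.
omega = 2 * np.pi * 1.0                    # intrinsic frequency, 1 Hz
K = 4.0                                    # coupling strength
# Desired phase lags for a walk gait: LF, RF, LH, RH legs (fractions of a cycle).
lags = 2 * np.pi * np.array([0.0, 0.5, 0.25, 0.75])

phi = np.random.default_rng(1).uniform(0, 2 * np.pi, 4)   # random initial phases
dt, steps = 1e-3, 5000
for _ in range(steps):
    dphi = np.full(4, omega)
    for i in range(4):
        for j in range(4):
            # Pull oscillator i toward its prescribed lag relative to oscillator j.
            dphi[i] += K * np.sin(phi[j] - phi[i] - (lags[j] - lags[i]))
    phi = phi + dt * dphi

rel = (phi - phi[0]) % (2 * np.pi) / (2 * np.pi)
print("steady-state phase lags (cycles):", rel.round(2))   # ~ [0, 0.5, 0.25, 0.75]
```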


2019 ◽  
Vol 40 (6) ◽  
pp. 917-923
Author(s):  
Chern Sheng Lin ◽  
Chang-Yu Hung ◽  
Chung Ting Chen ◽  
Ke-Chun Lin ◽  
Kuo Liang Huang

Purpose: This study aims to present optical alignment and compensation control of a die bonder for chips containing through-silicon vias and to develop three-dimensional integrated circuit stacked packaging for compact size and multiple functions.
Design/methodology/approach: Machine vision, an optical alignment method and sub-pixel technology under dynamic imaging conditions are used. Through comparison with a reference image, chip alignment calibration can improve machine accuracy and stability.
Findings: According to the experimental data and preliminary analysis results, accuracy is achieved within the desired range and is much better than that of traditional die bonder equipment. The results support further research on die bonders for chips containing through-silicon vias.
Originality/value: In subsequent testing of the chip, the machine can simultaneously test multiple chips to save test time and increase productivity.
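
The sub-pixel alignment idea can be sketched as follows: locate the reference pattern with normalized cross-correlation, then refine the integer-pixel peak with a parabolic fit. This is a generic OpenCV illustration under assumed conditions, not the authors' implementation, and the image file names are placeholders.

```python
import cv2

def subpixel_offset(image, template):
    """Locate template in image with normalized cross-correlation and refine
    the peak to sub-pixel precision with a 1D parabolic fit in x and y."""
    res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)

    def parabolic(v_m, v_0, v_p):
        denom = v_m - 2 * v_0 + v_p
        return 0.0 if denom == 0 else 0.5 * (v_m - v_p) / denom

    dx = parabolic(res[y, x - 1], res[y, x], res[y, x + 1]) if 0 < x < res.shape[1] - 1 else 0.0
    dy = parabolic(res[y - 1, x], res[y, x], res[y + 1, x]) if 0 < y < res.shape[0] - 1 else 0.0
    return x + dx, y + dy

# Placeholder file names; in practice these would be the live camera frame and
# the stored reference (golden) chip image.
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_chip.png", cv2.IMREAD_GRAYSCALE)
if frame is None or reference is None:
    raise SystemExit("provide real image files to run this example")
print("chip position (sub-pixel):", subpixel_offset(frame, reference))
```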


Author(s):  
Chetan J. Shingadiya et al.

Software testing is an important aspect of the real-time software development process, and it assures the quality of the software product. There are a few very important issues in software testing that need attention during the software development process: the generation of effective test cases and test suites, the optimization of test cases and suites while testing the software product, and the testing time of the test case and test suite. It is very important that effective testing is performed after the software product is developed. To overcome these optimization issues, we propose a new approach for test suite optimization using a genetic algorithm (GA). Genetic algorithms are evolutionary in nature, so they are often used by researchers for optimization problems. In this paper, our aim is to study various selection methods, such as tournament selection, rank selection and roulette wheel selection, and then apply the genetic algorithm to various programs to generate an optimized test suite, using parameters such as the fitness values of test cases and test suites, with minimum execution time after a preset number of generations. As per our experimental investigation, we show that tournament selection works better than the other methods with respect to the fitness of selected test cases and test suites, the testing time of test cases and test suites, as well as the number of requirements covered.
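
A compact sketch of the tournament-selection GA loop is given below. The fitness function (requirements covered minus an execution-time penalty), the candidate test cases, and all GA parameters are illustrative assumptions, not the paper's experimental setup.

```python
import random

random.seed(42)

# Hypothetical test cases: (requirements covered, execution time in seconds).
test_cases = [({"R1", "R2"}, 4), ({"R2", "R3"}, 6), ({"R4"}, 2),
              ({"R1", "R5"}, 5), ({"R3", "R4", "R5"}, 9), ({"R2"}, 1)]

def fitness(suite):
    """Reward requirement coverage, penalize total execution time."""
    covered = set().union(*(test_cases[i][0] for i in range(len(suite)) if suite[i]))
    time = sum(test_cases[i][1] for i in range(len(suite)) if suite[i])
    return 10 * len(covered) - time

def tournament(population, k=3):
    """Pick the fittest of k randomly chosen individuals."""
    return max(random.sample(population, k), key=fitness)

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in test_cases] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(pop), tournament(pop)
            cut = random.randrange(1, len(test_cases))              # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in a[:cut] + b[cut:]]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print("selected test cases:", [i for i, g in enumerate(best) if g], "fitness:", fitness(best))
```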

