Productivity and Code Quality Improvement of Mixed-Signal Test Software by applying Software Engineering Methods

Author(s):  
Stefan Vock  
U. Flogaus  
Hans Martin von Staudt

Abstract Typical mixed-signal ICs are approaching 1000 or even more parametric tests. These tests are usually coded in a procedural or a semi-object-oriented language. The huge code base of these programs is a significant challenge for maintaining code quality, which translates directly into outgoing quality. The paper will present software metrics of typical mixed-signal power management and audio devices with regard to the number of tests conducted. It will be shown that classical ways of handling test programs are error prone and tend to systematically repeat known mistakes. The adoption of selected software engineering methods can avoid such mistakes and improves the productivity of mixed-signal test generation. Results of a pilot project show a significant productivity improvement. Open-source software is employed to provide the necessary tool support; it also establishes a potential roadmap toward independence from proprietary, tester-specific tool sets.
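As an illustration of the kind of simple size metric the abstract alludes to (a sketch, not taken from the paper), the following Python snippet walks a directory of test-program sources and reports file count, non-comment lines of code, and lines of code per test. The file extensions, the directory name, and the TEST( entry-point marker are assumptions made purely for the example.

# Minimal sketch: rough size metrics for a test-program code base.
# The "TEST(" marker and the file extensions are illustrative assumptions.
import os
import re
from collections import Counter

TEST_PATTERN = re.compile(r"\bTEST\s*\(\s*(\w+)")  # hypothetical test entry marker

def collect_metrics(root):
    metrics = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".cpp", ".bas")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as fh:
                text = fh.read()
            # count non-blank lines that are not pure // comments
            code_lines = [ln for ln in text.splitlines()
                          if ln.strip() and not ln.lstrip().startswith("//")]
            metrics["files"] += 1
            metrics["loc"] += len(code_lines)
            metrics["tests"] += len(TEST_PATTERN.findall(text))
    if metrics["tests"]:
        metrics["loc_per_test"] = metrics["loc"] // metrics["tests"]
    return dict(metrics)

if __name__ == "__main__":
    print(collect_metrics("testprogram_src"))  # hypothetical source directory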


2016  
Vol 6 (4)  
pp. 137-150
Author(s):  
Doohwan Kim  
YooJin Jung  
Jang-Eui Hong

2020  
Vol 10 (20)  
pp. 7088  
Author(s):  
Luka Pavlič  
Marjan Heričko  
Tina Beranič

In scientific research, evidence is often based on empirical data. Scholars tend to rely on students as participants in experiments in order to validate their theses. Students are an obvious choice for scientific research: they are usually willing to participate and are often themselves pursuing an education in the experiment’s domain. The software engineering domain is no exception. However, readers, authors, and reviewers do sometimes question the validity of experimental data gathered from students in controlled experiments. This is why we will address this difficult-to-answer question: are students a proper substitute for experienced professional engineers in a typical software engineering experiment? As we demonstrate in this paper, there is no simple “yes or no” answer. In some aspects, students were not outperformed by professionals, but in others, students not only gave different answers than professionals, their answers also diverged more. In this paper we will show and analyze the results of a controlled experiment in the source code quality domain, comparing student and professional responses. We will show that authors have to be careful when employing students in experiments, especially when complex and advanced domains are addressed. However, students may be a proper substitute in cases where only non-advanced aspects are required.
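To make the notion of “diverging” answers concrete, the short Python sketch below compares the mean and spread of two hypothetical response groups on a single questionnaire item. This is an illustration only, not the authors’ analysis, and all numbers are invented.

# Minimal sketch: divergence shows up as a larger standard deviation.
from statistics import mean, stdev

students      = [3, 5, 1, 4, 2, 5, 1, 4]   # hypothetical 1-5 ratings
professionals = [4, 4, 3, 4, 3, 4, 4, 3]   # hypothetical 1-5 ratings

for label, scores in (("students", students), ("professionals", professionals)):
    print(f"{label:>13}: mean={mean(scores):.2f}  stdev={stdev(scores):.2f}")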


Author(s):  
Marcos Kalinowski  
Gleison Santos  
Rafael Prikladnicki  
Ana Regina Rocha  
Kival Chaves Weber  
...

2016  
Vol 25 (5)  
pp. 393-399  
Author(s):  
Megan D. Herbers  
Joseph A. Heaser

Background The high risk and low volume of medical emergencies on 2 progressive care units at Mayo Clinic, Rochester, Minnesota, combined with long periods between training sessions, established the need to transform how nursing staff are trained to respond to medical emergencies. Objectives To increase confidence levels and improve nursing performance during medical emergencies via in situ simulation. Methods An in situ mock code quality improvement program was developed and implemented to increase nurses’ confidence while improving nursing performance when responding to medical emergencies. For 2 years, each unit conducted mock codes and collected data on confidence levels and response times based on the recommendations in the 2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Results Over those 2 years, nursing staff response times for calling for help improved 12%, time elapsed before initiating compressions improved 52%, and time to initial defibrillation improved 37%. Additionally, staff showed an increase in perceived confidence levels. Staff reported appreciating the opportunity for hands-on practice with the equipment, which reinforced their knowledge and refined their medical emergency skills. Conclusions In situ mock codes significantly improve response times and increase staff confidence levels. In situ mock codes are a quick and efficient way to provide hands-on practice and allow staff to work as a team.

