Artificial Intelligence Enhancements in the field of Functional Verification

2021 ◽  
Vol 69 (4) ◽  
pp. 95-102
Author(s):  
Diana DRANGA ◽  
Radu-Daniel BOLCAȘ

Functional verification is one of the main processes in the research and development of new systems-on-chip. As chips become more and more complex, this step becomes an extensive bottleneck that can vastly delay chip mass production. It is a mandatory step, as the design must not contain any faults, to ensure proper functioning. If this step is bypassed, major financial losses and customer dissatisfaction can occur later in the process. Additionally, if the verification process is prolonged to achieve a higher-quality product, it will also have a financial impact. Therefore, the solution is to find ways to optimize this activity. This paper reviews how artificial intelligence can reduce this blockage, taking into consideration the time spent on implementing the verification environment and the time needed to attain the target coverage percentage. The engineer decides which of the time-consuming causes presented in the paper to reduce, depending on project specifics and his or her experience. A candidate for optimizing the training of the neural network is Nvidia's Compute Unified Device Architecture (CUDA). CUDA is a parallel computing platform that makes use of the GPU, particularly the CUDA cores located inside Nvidia GPUs.
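As a minimal illustration of the kind of GPU acceleration the abstract refers to, the sketch below runs a small network's training loop on CUDA cores via PyTorch, a common way to use CUDA without writing kernels directly. The network shape and the random "coverage" data are placeholders, not taken from the paper:

```python
import torch
import torch.nn as nn

# Fall back to the CPU when no CUDA-capable GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy coverage-prediction network; the architecture is illustrative only.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 16, device=device)   # random stimulus features
y = torch.randn(256, 1, device=device)    # random coverage targets

for _ in range(100):                      # each step runs on the CUDA cores if available
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Because the tensors and model live on the same device, the same script exercises the GPU when one is available and still runs on a CPU otherwise.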

2020 ◽  
Vol 96 (3s) ◽  
pp. 585-588
Author(s):  
С.Е. Фролова ◽  
Е.С. Янакова

Methods have been proposed for building prototyping platforms for high-performance systems-on-chip for artificial intelligence tasks. The requirements for platforms of this class and the principles for modifying the SoC design for implementation in the prototype have been described, as well as methods of debugging projects on the prototyping platform. The results of running computer vision algorithms using neural network technologies on the FPGA prototype of the ELcore semantic cores have been presented.


BMJ Open ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. e046265
Author(s):  
Shotaro Doki ◽  
Shinichiro Sasahara ◽  
Daisuke Hori ◽  
Yuichi Oi ◽  
Tsukasa Takahashi ◽  
...  

Objectives: Psychological distress is a worldwide problem and a serious problem that needs to be addressed in the field of occupational health. This study aimed to use artificial intelligence (AI) to predict psychological distress among workers using sociodemographic, lifestyle and sleep factors, not subjective information such as mood and emotion, and to examine the performance of the AI models through a comparison with psychiatrists.
Design: Cross-sectional study.
Setting: We conducted a survey on psychological distress and living conditions among workers. An AI model for predicting psychological distress was created and then the results were compared in terms of accuracy with predictions made by psychiatrists.
Participants: An AI model of the neural network and six psychiatrists.
Primary outcome: The accuracies of the AI model and psychiatrists for predicting psychological distress.
Methods: In total, data from 7251 workers were analysed to predict moderate and severe psychological distress. An AI model of the neural network was created and accuracy, sensitivity and specificity were calculated. Six psychiatrists used the same data as the AI model to predict psychological distress and conduct a comparison with the AI model.
Results: The accuracies of the AI model and psychiatrists for predicting moderate psychological distress were 65.2% and 64.4%, respectively, showing no significant difference. The accuracies of the AI model and psychiatrists for predicting severe psychological distress were 89.9% and 85.5%, respectively, indicating that the AI model had significantly higher accuracy.
Conclusions: A machine learning model was successfully developed to screen workers with depressed mood. The explanatory variables used for the predictions did not directly ask about mood. Therefore, this newly developed model appears to be able to predict psychological distress among workers easily, regardless of their subjective views.
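The accuracy, sensitivity and specificity reported above are standard confusion-matrix metrics; a minimal sketch of how they are computed from binary predictions (the labels here are hypothetical, not from the study):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Hypothetical labels: 1 = psychological distress, 0 = none.
acc, sens, spec = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

The same function applies unchanged whether the predictions come from the AI model or from the psychiatrists, which is what makes the head-to-head comparison possible.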


Author(s):  
Indar Sugiarto ◽  
Doddy Prayogo ◽  
Henry Palit ◽  
Felix Pasila ◽  
Resmana Lim ◽  
...  

This paper describes a prototype of a computing platform dedicated to artificial intelligence explorations. The platform, dubbed PakCarik, is essentially a high-throughput computing platform with GPU (graphics processing unit) acceleration. PakCarik is an Indonesian acronym for Platform Komputasi Cerdas Ramah Industri Kreatif, which can be translated as “Creative Industry friendly Intelligence Computing Platform”. This platform aims to provide a complete development and production environment for AI-based projects, especially those that rely on machine learning and multiobjective optimization paradigms. PakCarik was constructed using a computer hardware assembling technique based on commercial off-the-shelf hardware and was tested on several AI-related application scenarios. The testing methods in this experiment include High-Performance Linpack (HPL) benchmarking, message passing interface (MPI) benchmarking, and TensorFlow (TF) benchmarking. From the experiment, the authors observe that PakCarik's performance is quite similar to that of commonly used cloud computing services such as Google Compute Engine and Amazon EC2, even though it falls a bit behind a dedicated AI platform such as the Nvidia DGX-1 used in the benchmarking experiment. Its maximum computing performance was measured at 326 Gflops. The authors conclude that PakCarik is ready to be deployed in real-world applications and can be made even more powerful by adding more GPU cards.
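Gflops figures like the 326 Gflops quoted above are typically derived by timing a dense linear-algebra kernel and dividing the known operation count by the elapsed time. A minimal sketch of that idea, in the spirit of (but much simpler than) the HPL and TensorFlow benchmarks used in the paper:

```python
import time
import numpy as np

def matmul_gflops(n=1024, repeats=3):
    """Rough GFLOP/s estimate from a dense n x n matrix multiplication.
    Takes the best of several runs to reduce timing noise."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    flops = 2 * n ** 3            # multiply-add count of an n x n matmul
    return flops / best / 1e9     # GFLOP/s

print(f"{matmul_gflops():.1f} GFLOP/s")
```

Production benchmarks such as HPL solve a full linear system and follow strict reporting rules, so numbers from a toy kernel like this are only indicative.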


The objective of this work is to apply neural networks to phishing email detection and to assess the effectiveness of this approach. We design the feature set, process the phishing dataset, and implement the neural network frameworks. We compare the neural network's performance against that of other established artificial intelligence techniques: decision trees (DT), k-nearest neighbours (KNN), naive Bayes (NB), and support vector machines (SVM). The same dataset and feature set are used in the comparison. From the statistical analysis, we conclude that neural networks with an appropriate number of hidden units can achieve acceptable accuracy even when training examples are scarce. Additionally, our feature selection is effective in capturing the characteristics of phishing emails, as most AI algorithms yield reasonable results with it.
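A comparison of this shape, where a neural network and the four baseline classifiers are trained on the same feature matrix, can be sketched with scikit-learn. The synthetic data below stands in for the phishing feature set, which is not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Stand-in for the phishing feature matrix: 1 = phishing, 0 = legitimate.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "NN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(),
}
# Identical train/test split for every model, mirroring the paper's setup.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

Holding the dataset and feature set fixed across all five models is what makes the resulting accuracy figures directly comparable.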


Author(s):  
Qiang Zhu ◽  
Tsuneo Nakata ◽  
Masataka Mine ◽  
Kenichiro Kuroki ◽  
Yoichi Endo ◽  
...  

Author(s):  
Meghna Babubhai Patel ◽  
Jagruti N. Patel ◽  
Upasana M. Bhilota

An ANN can work the way the human brain works and can learn the way we learn. A neural network is not an algorithm; it is a network with weights on its connections, and the weights can be adjusted so that it learns. It is taught through trials. A neural network can operate and improve its performance after being “taught”, but it needs to undergo a process of learning to acquire information and become familiar with it. Nowadays, the age of smart devices dominates the technological world, and no one can deny their great value and contributions to mankind. A dramatic rise in platforms, tools, and applications based on machine learning and artificial intelligence has been seen. These technologies have impacted not only the software and internet industries but also other verticals such as healthcare, legal, manufacturing, automotive, and agriculture. The chapter shows the importance of the latest technology used in ANNs and future trends in ANNs.
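The idea of "teaching through trials by adjusting weights" can be made concrete with the smallest possible example: a single perceptron whose weights are nudged after every wrong answer. The toy data (learning logical AND) is illustrative and not from the chapter:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one perceptron by repeated trials: after each example,
    shift the weights in proportion to the prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):                          # repeated trials
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                       # teaching signal
            w[0] += lr * err * x[0]                  # adjust the weights...
            w[1] += lr * err * x[1]
            b += lr * err                            # ...and the bias
    return w, b

# Learn logical AND through repeated trials.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the network classifies all four inputs correctly, having been given no algorithm for AND, only examples and a weight-update rule.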


Author(s):  
Anand Parey ◽  
Amandeep Singh Ahuja

Gearboxes are employed in a wide variety of applications, ranging from small domestic appliances to the rather gigantic power plants and marine propulsion systems. Gearbox failure may not only result in significant financial losses from downtime of machinery but may also place human life at risk. Gearbox failure in the transmission systems of warships and single-engine aircraft, besides other military applications, is unacceptable. The criticality of the gearbox in rotary machines has resulted in enormous effort on the part of researchers to develop new and efficient methods of diagnosing faults in gearboxes so that timely rectification can be undertaken before catastrophic failure occurs. Artificial intelligence (AI) has been a significant milestone in automated gearbox fault diagnosis (GFD). This chapter reviews over a decade of research efforts on fault diagnosis of gearboxes with AI techniques. Some areas of AI in GFD that still merit attention are identified and discussed at the end of the chapter.


Author(s):  
Jessica Gissella Maradey Lázaro ◽  
Kevin Cáceres ◽  
Gianina Garrido

Abstract In daily life, it is very common to witness scenes in which it is necessary to obtain different ranges of colors in the paints that are used, whether water- or oil-based. This range of colors comes from the fusion and homogenization of primary colors or tones. Frequently, the mixing and dosing processes are carried out by people who, by trial and error, determine the color desired by the user. The quality and precision of the paint then suffer, generating customer dissatisfaction, claims, waste, and low productivity. This article shows the design and start-up of an automatic mixer prototype that doses and mixes paint to create complex color shades by implementing a human-machine interface and a control and verification stage. The results of this investigation also show the engineering process carried out to obtain a functional, automatic, exact mixing machine prototype and a homogeneous, quality product that meets the customer's requirements. Possible improvements and future work are included as well.
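The dosing step that the prototype automates can be framed as a small optimization problem: find the proportions of the available primaries that best reproduce a target shade. The sketch below makes the simplifying assumption that tones mix linearly in RGB space (real pigments mix subtractively, so this is only an illustration, not the paper's control algorithm), and all colour values are invented:

```python
import numpy as np

# Columns: available primary tones in RGB (illustrative values).
primaries = np.array([
    [1.0, 0.0, 0.0],   # red
    [0.0, 1.0, 0.0],   # green
    [0.0, 0.0, 1.0],   # blue
    [1.0, 1.0, 1.0],   # white base
]).T

target = np.array([0.8, 0.4, 0.4])  # desired shade

# Least-squares dose of each primary, clipped to non-negative amounts
# and normalised so the doses sum to one full batch.
dose, *_ = np.linalg.lstsq(primaries, target, rcond=None)
dose = np.clip(dose, 0.0, None)
dose = dose / dose.sum()
```

Replacing trial and error with a computed dose like this is what lets a controller drive the pumps to a repeatable recipe.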


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1946
Author(s):  
Jae-Eun Lee ◽  
Chuljun Lee ◽  
Dong-Wook Kim ◽  
Daeseok Lee ◽  
Young-Ho Seo

In this paper, we propose an on-chip learning method that can overcome the poor characteristics of pre-developed practical synaptic devices, thereby increasing the accuracy of the neural network based on the neuromorphic system. The fabricated synaptic devices, based on Pr1−xCaxMnO3, LiCoO2, and TiOx, inherently suffer from undesirable characteristics, such as nonlinearity, discontinuities, and asymmetric conductance responses, which degrade the neuromorphic system performance. To address these limitations, we have proposed a conductance-based linear weighted quantization method, which controls conductance changes, and trained a neural network to predict the handwritten digits from the standard database MNIST. Furthermore, we quantitatively considered the non-ideal case, to ensure reliability by limiting the conductance level to that which synaptic devices can practically accept. Based on this proposed learning method, we significantly improved the neuromorphic system, without any hardware modifications to the synaptic devices or neuromorphic systems. Thus, the results emphatically show that, even for devices with poor synaptic characteristics, the neuromorphic system performance can be improved.
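The core of a conductance-based linear weighted quantization scheme is mapping continuous weights onto the limited, equally spaced set of conductance levels that a synaptic device can realise. A minimal sketch of that mapping (the conductance range and level count here are illustrative, not the paper's device parameters):

```python
import numpy as np

def quantize_to_conductance(w, g_min=0.0, g_max=1.0, levels=32):
    """Map continuous weights onto `levels` equally spaced conductance
    values in [g_min, g_max], limiting weights to what the synaptic
    device can practically accept."""
    w = np.clip(w, g_min, g_max)                   # stay inside the device range
    step = (g_max - g_min) / (levels - 1)          # linear level spacing
    return g_min + np.round((w - g_min) / step) * step

weights = np.array([0.03, 0.49, 0.97])
q = quantize_to_conductance(weights, levels=5)     # levels: 0, 0.25, 0.5, 0.75, 1
```

Because the quantization happens in software during training, the scheme improves accuracy without any hardware modification to the synaptic devices, which is the point the abstract emphasises.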

