STATE-OF-THE-ART REVIEW OF MACHINE LEARNING APPLICATIONS TO GEOTECHNICAL EARTHQUAKE ENGINEERING PROBLEMS

Author(s):  
Zeinep Achmet ◽  
Luigi Di Sarno
2019 ◽  
Vol 212 (1) ◽  
pp. 26-37 ◽  
Author(s):  
Eyal Lotan ◽  
Rajan Jain ◽  
Narges Razavian ◽  
Girish M. Fatterpekar ◽  
Yvonne W. Lui

2012 ◽  
Vol 256-259 ◽  
pp. 145-148
Author(s):  
Yong Zhi Wang ◽  
Wei Ming Wang ◽  
Xiao Ming Yuan

Centrifugal shakers are widely regarded as state-of-the-art testing facilities in geotechnical earthquake engineering because they provide an effective solution to the mismatch in gravity-induced stress between models and prototypes that is typical of traditional test methods. At present only two large centrifugal shakers exist, one in the United States and one in Japan, and none has been established in China. This situation lags remarkably behind both the severe seismic conditions and the world's largest civil engineering construction program, both of which China faces. Given the lack of experience and the many difficulties involved, one essential task in constructing a large-scale centrifugal shaker is a tracking study of the world's most advanced facilities. This paper outlines the technical parameters and components of the two existing large centrifugal shakers. By investigating and comparing the structural characteristics of the two facilities, their differences are summarized and analyzed. The analyses indicate that the key technologies center on centrifuge arms, centrifuge buckets, exciting devices, power sources, and guide-support devices. The results can serve as a reference for the construction of large-scale centrifugal shakers at home and abroad.
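The gravity-stress argument above follows from the standard centrifuge similitude relations: a 1/N-scale model spun at N times Earth gravity reproduces prototype stress levels. A minimal sketch of these textbook scaling laws (the function name and the 50 g example are illustrative, not taken from the paper):

```python
def centrifuge_scale_factors(n):
    """Model-to-prototype scale factors for a geotechnical centrifuge
    test spun at n times Earth gravity (n g)."""
    return {
        "length": 1.0 / n,        # model lengths are n times smaller
        "acceleration": n,        # model shaking is n times stronger
        "stress": 1.0,            # stresses match 1:1 -- the core benefit
        "dynamic_time": 1.0 / n,  # shaking events run n times faster
        "frequency": n,           # so input frequencies scale up by n
    }

# Example: testing a 10 m prototype embankment at 50 g
factors = centrifuge_scale_factors(50)
model_height_m = 10 * factors["length"]  # 0.2 m model
```

A 0.2 m model at 50 g thus experiences the same soil stresses as the 10 m prototype at 1 g, which is why centrifuge shakers avoid the unequal-stress problem of conventional shake-table tests.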


2020 ◽  
Vol 35 (33) ◽  
pp. 2043005
Author(s):  
Fernanda Psihas ◽  
Micah Groh ◽  
Christopher Tunnell ◽  
Karl Warburton

Neutrino experiments study the least understood of the Standard Model particles by observing their direct interactions with matter or searching for ultra-rare signals. The study of neutrinos typically requires overcoming large backgrounds, elusive signals, and small statistics. The introduction of state-of-the-art machine learning tools for analysis tasks has had a major impact on these challenges across neutrino experiments. Machine learning algorithms have become an integral tool of neutrino physics, and their development is of great importance to the capabilities of next-generation experiments. Understanding the roadblocks, both human and computational, and the challenges that remain in applying these techniques is critical to their proper and beneficial use in physics applications. This review presents the current status of machine learning applications for neutrino physics, focusing on the challenges and opportunities at the intersection of the two fields.


2020 ◽  
Vol 36 (4) ◽  
pp. 1769-1801 ◽  
Author(s):  
Yazhou Xie ◽  
Majid Ebad Sichani ◽  
Jamie E Padgett ◽  
Reginald DesRoches

Machine learning (ML) has evolved rapidly over recent years, with the promise of substantially altering and enhancing the role of data science in a variety of disciplines. Compared with traditional approaches, ML offers advantages in handling complex problems, providing computational efficiency, propagating and treating uncertainties, and facilitating decision making. The maturing of ML has also led to significant advances not only in mainstream artificial intelligence (AI) research but also in other science and engineering fields, such as materials science, bioengineering, construction management, and transportation engineering. This study conducts a comprehensive review of the progress and challenges of implementing ML in the earthquake engineering domain. A hierarchical attribute matrix is adopted to categorize the existing literature along four traits identified in the field: ML method, topic area, data resource, and scale of analysis. The state-of-the-art review indicates to what extent ML has been applied in four topic areas of earthquake engineering: seismic hazard analysis, system identification and damage detection, seismic fragility assessment, and structural control for earthquake mitigation. Moreover, research challenges and the associated future research needs are discussed, including embracing the next generation of data-sharing and sensor technologies, implementing more advanced ML techniques, and developing physics-guided ML models.
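One of the four topic areas named above, seismic fragility assessment, estimates the probability of exceeding a damage state given a ground-motion intensity measure. A common parametric form is the lognormal fragility curve; the sketch below is a generic illustration with made-up parameter values, not a model from any study in the review:

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability that a damage state is
    exceeded at intensity measure `im` (e.g., peak ground acceleration),
    with median capacity `theta` and lognormal dispersion `beta`."""
    z = math.log(im / theta) / beta
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median capacity the exceedance probability is exactly 50%
p = fragility(0.4, theta=0.4, beta=0.5)
```

ML-based approaches in this area typically replace or calibrate such parametric curves using simulated or observed damage data.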


Author(s):  
Myeong Sang Yu

The revolutionary development of artificial intelligence (AI), including machine learning and deep learning, has become one of the most important technologies in many industries and is also driving major changes in health care. The big data obtained from electronic medical records and digitized images has accelerated the application of AI technologies in medical fields. Machine learning techniques can handle the complexity of big data, which is difficult to address with traditional statistics. Recently, deep learning techniques, including the convolutional neural network, have come to be considered promising machine learning techniques for medical imaging applications. In the era of precision medicine, otolaryngologists need to understand the potential, pitfalls, and limitations of AI technology and seek opportunities to collaborate with data scientists. This article briefly introduces the basic concepts of machine learning and its techniques and reviews current work on machine learning applications in the fields of otolaryngology and rhinology.
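The core operation of the convolutional neural networks mentioned above is a sliding dot product between an image patch and a learned kernel. A pure-Python sketch of that single building block (real imaging pipelines use optimized frameworks; the edge-detection kernel below is illustrative):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    deep learning frameworks) of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the patch at (i, j)
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# A [-1, 1] kernel responds only at the step in intensity
response = conv2d([[0, 0, 1, 1]], [[-1, 1]])  # [[0, 1, 0]]
```

The response peaks exactly at the intensity step, which is how early CNN layers come to act as edge detectors; deeper layers stack many such learned kernels.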


Author(s):  
Ashenafi Zebene Woldaregay ◽  
Eirik Årsand ◽  
Taxiarchis Botsis ◽  
David Albers ◽  
Lena Mamykina ◽  
...  

BACKGROUND Diabetes mellitus is a chronic metabolic disorder that results in abnormal blood glucose (BG) regulation. The BG level is preferably maintained close to normal through self-management practices, which involve actively tracking BG levels and taking proper actions, including adjusting diet and insulin medication. BG anomalies can be defined as any undesirable reading caused by either a precisely known reason (normal-cause variation) or a reason unknown to the patient (special-cause variation). Recently, machine learning applications have been widely introduced in diabetes research in general and BG anomaly detection in particular. However, despite their expanding popularity, there is a lack of up-to-date reviews that synthesize the current trends in modeling options and strategies for BG anomaly classification and detection in people with diabetes. OBJECTIVE This review aimed to identify, assess, and analyze state-of-the-art machine learning strategies and their hybrid systems for BG anomaly classification and detection, including glycemic variability (GV), hyperglycemia, and hypoglycemia in type 1 diabetes, within the context of personalized decision support systems and BG alarm applications, which are important constituents of optimal diabetes self-management. METHODS A rigorous literature search was conducted between September 1 and October 1, 2017, and between October 15 and November 5, 2018, through various Web-based databases. Peer-reviewed journals and articles were considered. Information from the selected literature was extracted based on predefined categories, which were grounded in previous research and further elaborated through brainstorming. RESULTS The initial search retrieved 496 papers, which were vetted by title, abstract, and keywords. After a thorough assessment and screening, 47 articles remained and were critically analyzed.
Interrater agreement was measured using the Cohen kappa test, and disagreements were resolved through discussion. State-of-the-art classes of machine learning have been developed and tested for these tasks with promising performance, including artificial neural networks, support vector machines, decision trees, genetic algorithms, Gaussian process regression, Bayesian neural networks, deep belief networks, and others. CONCLUSIONS Despite the complexity of BG dynamics, there have been many attempts to capture hypoglycemia and hyperglycemia incidences and the extent of an individual's GV using different approaches. Recently, the advancement of diabetes technologies and the continuous accumulation of self-collected health data have paved the way for the popularity of machine learning in these tasks. According to the review, most of the identified studies used a theoretical threshold, which suffers from inter- and intrapatient variation. Therefore, future studies should account for differences among patients and track temporal changes in these thresholds. Moreover, studies should place more emphasis on the types of inputs used and their associated time lags. Generally, we foresee that these developments will encourage researchers to further develop and test these systems on a large scale.
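The conclusion above contrasts fixed "theoretical" thresholds with patient-specific ones. A minimal sketch of both ideas (the 70 and 180 mg/dL bounds are commonly cited clinical values; the percentile-based personalization is an illustrative assumption, not a method taken from the reviewed studies):

```python
def classify_bg(reading_mg_dl, hypo=70, hyper=180):
    """Label a blood-glucose reading using fixed (theoretical) thresholds,
    here the commonly cited 70 and 180 mg/dL bounds."""
    if reading_mg_dl < hypo:
        return "hypoglycemia"
    if reading_mg_dl > hyper:
        return "hyperglycemia"
    return "normal"

def personal_thresholds(history, low_pct=0.05, high_pct=0.95):
    """Patient-specific bounds from the patient's own reading history:
    nearest-rank 5th and 95th percentiles, so anomalies are judged
    relative to the individual rather than a population-wide threshold."""
    s = sorted(history)
    lo = s[round(low_pct * (len(s) - 1))]
    hi = s[round(high_pct * (len(s) - 1))]
    return lo, hi
```

Replacing the fixed bounds in `classify_bg` with the output of `personal_thresholds` is one simple way to address the inter- and intrapatient variation the review identifies.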


2020 ◽  
Author(s):  
Westerley Oliveira ◽  
Michael Canesche ◽  
Lucas Reis ◽  
José Nacif ◽  
Ricardo Ferreira

Machine and deep learning applications are currently at the center of attention in both industry and academia, making their acceleration a highly relevant research topic. Acceleration comes in different flavors, including parallelizing routines on a GPU, FPGA, or CGRA. In this work, we explore the placement and routing of dataflow graphs from machine learning applications onto three heterogeneous CGRA architectures. We compare our results with the homogeneous case and with one of the state-of-the-art tools for placement and routing (P&R). Our algorithm executed, on average, 52% faster than Versatile Place and Route (VPR) 8.1. Furthermore, a heterogeneous architecture reduces cost without losing performance in 76% of the cases.
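The P&R problem above can be illustrated with a toy greedy placer: visit dataflow nodes in topological order and put each on the free processing element (PE) that minimizes the Manhattan distance to its already-placed predecessors, a common proxy for routing cost. This is only a sketch of the problem being solved, not the authors' algorithm or VPR:

```python
def greedy_place(edges, nodes, rows, cols):
    """Place `nodes` (assumed topologically sorted) on a rows x cols PE grid."""
    preds = {n: [] for n in nodes}
    for u, v in edges:
        preds[v].append(u)
    free = [(r, c) for r in range(rows) for c in range(cols)]
    place = {}
    for n in nodes:
        # Routing-cost proxy: Manhattan distance to placed predecessors
        def cost(cell, n=n):
            return sum(abs(cell[0] - place[p][0]) + abs(cell[1] - place[p][1])
                       for p in preds[n])
        best = min(free, key=cost)
        free.remove(best)
        place[n] = best
    return place

def wirelength(edges, place):
    """Total Manhattan length of all placed edges."""
    return sum(abs(place[u][0] - place[v][0]) + abs(place[u][1] - place[v][1])
               for u, v in edges)

# Diamond-shaped dataflow graph a -> {b, c} -> d on a 2x2 grid
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
placement = greedy_place(edges, ["a", "b", "c", "d"], rows=2, cols=2)
```

On this 2x2 example every edge ends up one hop long, for a total wirelength of 4; heterogeneity would add per-PE capability constraints on which cells each node may occupy.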

