ChemOS: An Orchestration Software to Democratize Autonomous Discovery

Author(s):  
Loïc M. Roch ◽  
Florian Häse ◽  
Christoph Kreisbeck ◽  
Teresa Tamayo-Mendoza ◽  
Lars P. E. Yunker ◽  
...  

Autonomous or “self-driving” laboratories combine robotic platforms with artificial intelligence to increase the rate of scientific discovery, and they have the potential to transform traditional approaches to experimentation. Although autonomous laboratories have recently gained increased attention, the engineering effort demanded by their software often hinders their development: they require advanced, robust software packages to control, orchestrate, and synchronize automated instrumentation, manage databases, and interact with various artificial intelligence algorithms. To overcome this limitation, we introduce ChemOS, a portable, modular, and versatile software package that supplies the structured layers indispensable for operating autonomous laboratories. Additionally, it enables remote control of laboratories, provides access to distributed computing resources, and comprises state-of-the-art machine learning methods. We believe that ChemOS will reduce the time-to-deployment from automated to autonomous discovery and will provide the scientific community with an easy-to-use package that facilitates novel discovery at a faster pace.
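The closed loop that such orchestration software coordinates can be made concrete with a short sketch. The Python below is purely illustrative and not ChemOS code: the names `propose`, `run_experiment`, and `closed_loop`, the one-parameter search space, and the synthetic objective are hypothetical stand-ins for the AI, automation, and database layers.

```python
import random

def propose(history):
    # AI layer (hypothetical): perturb the best conditions seen so far
    if not history:
        return random.uniform(0.0, 1.0)
    best_x, _ = max(history, key=lambda h: h[1])
    return min(1.0, max(0.0, best_x + random.uniform(-0.1, 0.1)))

def run_experiment(x):
    # Automation layer stand-in: a synthetic objective peaking at x = 0.7
    return 1.0 - (x - 0.7) ** 2

def closed_loop(budget=20):
    history = []  # plays the role of the experiment database
    for _ in range(budget):
        x = propose(history)      # 1. AI proposes conditions
        y = run_experiment(x)     # 2. robot executes and measures
        history.append((x, y))    # 3. result is logged for the next round
    return max(history, key=lambda h: h[1])

best_x, best_y = closed_loop()
```

The value of an orchestration layer lies in making each of these three numbered steps a pluggable, synchronized component rather than a hard-coded function call.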


2021 ◽  
Author(s):  
Kai Guo ◽  
Zhenze Yang ◽  
Chi-Hua Yu ◽  
Markus J. Buehler

This review revisits the state of the art of research efforts on the design of mechanical materials using machine learning.


2018 ◽  
Vol 186 ◽  
pp. 09004
Author(s):  
André Schaaff ◽  
Marc Wenger

The work environment has evolved profoundly in recent decades with the generalisation of IT in terms of hardware, online resources, and software. Librarians have not escaped this movement, and their working environment is becoming essentially digital (databases, online publications, wikis, specialised software, etc.). With the big-data era, new tools implementing artificial intelligence, text mining, machine learning, etc. will become available. Most of these technologies already exist, but they will become widespread and strongly affect our ways of working. The development of business-oriented social networks will also have an increasing influence. In this context, it is worth reflecting on how the work environment of librarians will evolve. Maintaining interest in daily work is fundamental, and over-automation is not desirable: it is imperative to keep the human-driven factor. We survey state-of-the-art technologies that affect librarians' work and initiate a discussion about how to integrate them while preserving librarians' expertise.


Machines ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 21 ◽  
Author(s):  
Abe Zeid ◽  
Sarvesh Sundaram ◽  
Mohsen Moghaddam ◽  
Sagar Kamarthi ◽  
Tucker Marion

Recent advances in manufacturing technology, such as cyber-physical systems, the industrial Internet, AI (Artificial Intelligence), and machine learning, have driven the evolution of manufacturing architectures into integrated networks of automation devices, services, and enterprises. One of the resulting challenges of this evolution is the increased need for interoperability at different levels of the manufacturing ecosystem. The scope ranges from shop-floor software, devices, and control systems to Internet-based cloud platforms providing various services on demand. Successful implementation of interoperability in smart manufacturing would thus result in effective communication and error-free data exchange between machines, sensors, actuators, users, systems, and platforms. A significant challenge to this is the variety of architectures and platforms used by machines and software packages. A better understanding of the subject can be achieved by studying industry-specific communication protocols and their respective logical semantics. A review of research conducted in this area is provided in this article to gain perspective on the various dimensions and types of interoperability. The article takes a multi-faceted approach to interoperability by reviewing key concepts and existing research efforts in the domain, as well as by discussing challenges and solutions.
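One concrete face of the interoperability problem is schema translation between a device's native messages and a platform's shared format. The sketch below is a hypothetical example: the field names (`dev`, `k`, `v`) and the canonical schema are invented for illustration and do not come from the article.

```python
import json

def to_canonical(native_payload):
    # Hypothetical adapter: one machine publishes terse JSON keys, while the
    # platform expects a shared canonical schema with typed values.
    msg = json.loads(native_payload)
    return {
        "machine_id": msg["dev"],
        "metric": msg["k"],
        "value": float(msg["v"]),
    }

canonical = to_canonical('{"dev": "press-1", "k": "temp", "v": "71.5"}')
```

In practice such adapters sit at protocol gateways; the hard part the article surveys is agreeing on the semantics of the canonical schema, not the mechanical translation.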


1996 ◽  
Vol 3 (29) ◽  
Author(s):  
Lars Arge

Ordered Binary-Decision Diagrams (OBDDs) are the state-of-the-art data structure for boolean function manipulation, and several software packages for OBDD manipulation exist. OBDDs have been successfully used to solve problems in, e.g., digital-systems design, verification and testing, mathematical logic, concurrent system design, and artificial intelligence. The OBDDs used in many of these applications quickly grow larger than the available main memory, and it becomes essential to consider the problem of minimizing the Input/Output (I/O) communication. In this paper we analyze why existing OBDD manipulation algorithms perform poorly in an I/O environment and develop new I/O-efficient algorithms.
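The core OBDD operations the paper builds on can be sketched compactly. The following toy in-memory Python implementation is not from the paper (which concerns I/O-efficient algorithms, not this formulation); it shows reduction via a unique table and conjunction via Shannon expansion.

```python
# Minimal illustrative OBDD: internal nodes are (var, low, high) tuples,
# leaves are the Python booleans. A unique table gives hash-consing, so
# structurally identical sub-diagrams share one object (the reduced form).
UNIQUE = {}

def mk(var, low, high):
    if low is high:                      # redundant test: drop the node
        return low
    key = (var, id(low), id(high))
    if key not in UNIQUE:
        UNIQUE[key] = (var, low, high)
    return UNIQUE[key]

def bdd_and(u, v):
    # Shannon expansion on the topmost (smallest-index) variable of u and v
    if u is False or v is False:
        return False
    if u is True:
        return v
    if v is True:
        return u
    top = min(u[0], v[0])
    u0, u1 = (u[1], u[2]) if u[0] == top else (u, u)
    v0, v1 = (v[1], v[2]) if v[0] == top else (v, v)
    return mk(top, bdd_and(u0, v0), bdd_and(u1, v1))

def evaluate(node, assignment):
    # Follow the low/high edge chosen by each variable's assigned value
    while not isinstance(node, bool):
        var, low, high = node
        node = high if assignment[var] else low
    return node

x1, x2 = mk(1, False, True), mk(2, False, True)
f = bdd_and(x1, x2)                      # the function x1 AND x2
```

Real packages add a computed table memoizing `bdd_and`; it is precisely the random access patterns of these hash tables that behave poorly once the diagram exceeds main memory, which motivates the paper's I/O-efficient redesign.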


2020 ◽  
Vol 34 (4) ◽  
pp. 571-584
Author(s):  
Rajarshi Biswas ◽  
Michael Barz ◽  
Daniel Sonntag

Abstract Image captioning is a challenging multimodal task. Significant improvements have been achieved with deep learning, yet captions generated by humans are still considered better, which makes it an interesting application for interactive machine learning and explainable artificial intelligence methods. In this work, we aim at improving the performance and explainability of the state-of-the-art method Show, Attend and Tell by augmenting its attention mechanism with additional bottom-up features. We compute visual attention on the joint embedding space formed by the union of high-level features and the low-level features obtained from the object-specific salient regions of the input image. We embed the content of bounding boxes from a pre-trained Mask R-CNN model. This delivers state-of-the-art performance while providing explanatory features. Further, we discuss how interactive model improvement can be realized through re-ranking caption candidates using beam search decoders and explanatory features. We show that interactive re-ranking of beam search candidates has the potential to outperform the state of the art in image captioning.
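The re-ranking idea can be sketched independently of any captioning model: each beam candidate keeps its decoder log-probability, and an explanatory grounding score reorders the list. The weighting scheme and the scores below are invented for illustration and are not the authors' formulation.

```python
def rerank(candidates, grounding_scores, alpha=0.5):
    # Blend decoder log-probability with an explanatory grounding score;
    # alpha and the source of the scores are illustrative assumptions.
    scored = [
        (alpha * logp + (1 - alpha) * score, caption)
        for (caption, logp), score in zip(candidates, grounding_scores)
    ]
    return [caption for _, caption in sorted(scored, reverse=True)]

beam = [("a dog on the grass", -1.0), ("a cat on the grass", -1.2)]
grounding = [0.2, 0.9]   # e.g., agreement with detected object regions
best_first = rerank(beam, grounding)
```

Here the decoder slightly prefers the first caption, but the grounding evidence overturns that ordering, which is exactly the interactive-correction behavior the abstract describes.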


2019 ◽  
Vol 11 (7) ◽  
pp. 2963-2986 ◽  
Author(s):  
Nikos Dipsis ◽  
Kostas Stathis

Abstract The numerous applications of the internet of things (IoT) and sensor networks, combined with the specialized devices used in each, have led to a proliferation of domain-specific middleware, which in turn creates interoperability issues between the corresponding architectures and the technologies used. But what if we wanted to apply a machine learning algorithm to an IoT application so that it adapts intelligently to changes in the environment, or to enable a software agent to enrich with artificial intelligence (AI) a smart home consisting of multiple and possibly incompatible technologies? In this work we answer these questions by studying a framework that explores how to simplify the incorporation of AI capabilities into existing sensor-actuator networks or IoT infrastructures, making the services offered in such settings smarter. Towards this goal we present eVATAR+, a middleware that implements the interactions within the context of such integrations systematically and transparently from the developers' perspective. It also provides a simple, easy-to-use interface for developers. eVATAR+ uses Java server technologies enhanced by mediator functionality to provide interoperability, maintainability, and heterogeneity support. We exemplify eVATAR+ with a concrete case study and evaluate the relative merits of our approach by comparing our work with the current state of the art.
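The mediator functionality described above can be illustrated with a minimal sketch. The class and device names below are hypothetical and are not the eVATAR+ API; the point is only the pattern: an agent talks to one mediator interface, and per-device adapters hide each native technology.

```python
class Mediator:
    """Routes generic commands from an AI agent to per-device adapters."""
    def __init__(self):
        self.adapters = {}

    def register(self, name, adapter):
        self.adapters[name] = adapter

    def send(self, name, command):
        return self.adapters[name].handle(command)

class LegacyThermostat:
    """Adapter wrapping a device with its own (incompatible) native API."""
    def handle(self, command):
        if command == "status":
            return {"temp_c": 21.0}   # a real adapter would call the native API
        raise ValueError(f"unsupported command: {command}")

home = Mediator()
home.register("thermostat", LegacyThermostat())
reading = home.send("thermostat", "status")
```

Because the agent only ever sees the mediator's `send` interface, swapping an incompatible device means writing one new adapter, not changing the agent: this is the heterogeneity support the abstract refers to.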


2020 ◽  
Author(s):  
Andrew Kamal

With the emergence of regressional mathematics and algebraic topology come advancements in the fields of artificial intelligence and machine learning. When looking into problems such as nuclear fusion and entropy, such advancements can be utilized to analyze unsolved abnormalities in fusion-related research. Proof theory is utilized throughout this paper. For the logical mathematical proofs: n represents an unknown number, e represents a point of entropy, m represents a maximum point, and f represents fusion. This paper analyzes nuclear fusion and its unsolved problems as hardness problems and attempts to formulate computational proofs relating to entropy, fusion maxima, heat transfer, and entropy-transfer mechanisms. The paper is centered not only on logical proofs but also on computational mechanisms such as distributed computing and its potential role in analyzing computational hardness in relation to fusion problems. We summarize a proposal for experimentation utilizing further logical proof formalities and the decentralized-internet SDK for a computational pipeline to solve fusion-related hardness problems.


Author(s):  
A. B.M. Shawkat Ali

Machine learning methodology, which has its origins in artificial intelligence, has from the beginning spread rapidly through different research communities with successful outcomes. This chapter aims to introduce system analysts and designers to a comparatively new statistical supervised machine learning algorithm called the support vector machine (SVM). We explain two useful areas of SVM, namely classification and regression, with basic mathematical formulations and simple demonstrations to make SVM easy to understand. Prospects and challenges of future research in this emerging area are also described. Future research on SVM will provide users with improved, higher-quality access. Developing an automated SVM system with state-of-the-art technologies is therefore of paramount importance, and hence this chapter links an important step in the system analysis and design perspective to this evolving research arena.
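For readers who want the mechanics rather than the mathematics, a linear SVM classifier can be demonstrated in a few lines by minimizing the regularized hinge loss with subgradient descent. The toy data, learning rate, and regularization constant below are arbitrary illustrative choices, not taken from the chapter.

```python
def train_linear_svm(data, epochs=200, lam=0.01, lr=0.1):
    # Subgradient descent on the regularized hinge loss (2-D, illustrative)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) < 1:   # inside the margin
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:                                      # only regularize
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, x):
    # Sign of the decision function w·x + b
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

data = [((2.0, 2.0), 1), ((2.5, 1.5), 1),
        ((-2.0, -1.0), -1), ((-1.5, -2.5), -1)]
w, b = train_linear_svm(data)
```

Kernelized SVMs and SVM regression, as covered in the chapter, replace the inner product above with a kernel function and the hinge loss with an epsilon-insensitive loss, respectively.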


Author(s):  
Md Nazmus Saadat ◽  
Muhammad Shuaib

The aim of this chapter is to introduce newcomers to deep learning: its platforms, algorithms, applications, and open-source datasets. The chapter gives a broad overview of the term deep learning in the context of machine learning and artificial intelligence (AI). The introduction briefly surveys the research achievements of deep learning; it is followed by a brief history of the field, from Alan Turing (1951) to 2020, and by commonly used deep learning terminology. The main focus is on the most recent applications, the most commonly used algorithms, modern platforms, and relevant open-source databases or datasets available online. While discussing the most recent applications and platforms of deep learning, their future scope is also considered, and future research directions are outlined for both. Natural language processing and autonomous vehicles are considered the state-of-the-art applications, and both still need a good portion of further research. Readers ranging from undergraduate and postgraduate students to data scientists and researchers will benefit from this chapter.

