Mathematics for Future Computing and Communications

2021 ◽  

For 80 years, mathematics has driven fundamental innovation in computing and communications. This timely book provides a panorama of recent ideas in mathematics and how they will drive continued innovation in computing, communications and AI in the coming years. It offers a unique insight into how newly developed techniques can supply theoretical foundations for technological progress, just as mathematics was used in earlier times by Turing, von Neumann, Shannon and others. Edited by leading researchers in the field, chapters cover the application of new mathematics in computer architecture, software verification, quantum computing, compressed sensing, networking, Bayesian inference, machine learning, reinforcement learning and many other areas.

1991 ◽  
Vol 15 (2) ◽  
pp. 123-138
Author(s):  
Joachim Biskup ◽  
Bernhard Convent

In this paper the relationship between dependency theory and first-order logic is explored in order to show how relational chase procedures (i.e., algorithms to decide inference problems for dependencies) can be interpreted as clever implementations of well-known refutation procedures of first-order logic based on resolution and paramodulation. On the one hand, this alternative interpretation provides deeper insight into the theoretical foundations of chase procedures; on the other hand, it makes an already well-established theory, with a wealth of known results and techniques, available for further investigation of the inference problem for dependencies. Our presentation is a detailed and careful elaboration of an idea originally outlined by Grant and Jacobs which, until now, appears to have been disregarded by the database community, although it deserves more attention.
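
To make the chase concrete, the following is a minimal sketch (in Python, not the paper's formalism) of the classical two-row tableau chase for functional-dependency implication. The step that equates symbols when a dependency fires is the one that corresponds to paramodulation, i.e., replacing equals by equals; all names and data structures here are illustrative.

```python
# A minimal sketch of the classical chase test for functional-dependency
# implication: does sigma entail X -> Y? We build the standard two-row
# tableau and equate symbols whenever a dependency in sigma fires.

def chase_implies(sigma, X, Y, attrs):
    """sigma: list of (lhs, rhs) FDs; X, Y: attribute sets; attrs: all attributes."""
    # Two-row tableau: rows agree exactly on X; all other symbols are distinct.
    row1 = {A: ('a', A) for A in attrs}
    row2 = {A: ('a', A) if A in X else ('b', A) for A in attrs}
    rows = [row1, row2]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in sigma:
            for r in rows:
                for s in rows:
                    if all(r[A] == s[A] for A in lhs):
                        for B in rhs:
                            if r[B] != s[B]:
                                # Equate symbols everywhere: the
                                # paramodulation-like "replace equals
                                # by equals" step.
                                old, new = s[B], r[B]
                                for t in rows:
                                    for A in attrs:
                                        if t[A] == old:
                                            t[A] = new
                                changed = True
    # sigma implies X -> Y iff the two rows now agree on Y.
    return all(row1[B] == row2[B] for B in Y)

# Example: {A -> B, B -> C} implies A -> C (transitivity).
sigma = [({'A'}, {'B'}), ({'B'}, {'C'})]
print(chase_implies(sigma, {'A'}, {'C'}, {'A', 'B', 'C'}))  # True
```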


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 930
Author(s):  
Fahimeh Hadavimoghaddam ◽  
Mehdi Ostadhassan ◽  
Ehsan Heidaryan ◽  
Mohammad Ali Sadri ◽  
Inna Chapanova ◽  
...  

Dead oil viscosity is a critical parameter in numerous reservoir engineering problems and one of the most unreliable properties to predict with classical black oil correlations. Determining dead oil viscosity experimentally is expensive and time-consuming, so an accurate and fast prediction model is needed. This paper implements six machine learning models to predict dead oil viscosity: random forest (RF), LightGBM, XGBoost, multilayer perceptron (MLP) neural network, stochastic real-valued (SRV) and SuperLearner. More than 2000 pressure–volume–temperature (PVT) data points were used to develop and test these models, covering a wide range of viscosities from light and intermediate to heavy oils. In this study, we give insight into the performance of the different functional forms that have been used in the literature to formulate dead oil viscosity. The results show that the functional form f(γAPI, T) performs best, and additional correlating parameters may be unnecessary. Furthermore, SuperLearner outperformed the other machine learning (ML) algorithms as well as common correlations, based on the metric analysis. The SuperLearner model can potentially replace empirical models for viscosity prediction over a wide range of viscosities (any oil type). Ultimately, the proposed model captures the true physical trend of dead oil viscosity with variations in oil API gravity, temperature and shear rate.
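
As an illustration of the f(γAPI, T) functional form discussed above, here is a minimal sketch, not the authors' code: a random forest trained on API gravity and temperature alone. The data-generating trend below is entirely synthetic; the study itself used more than 2000 experimental PVT records.

```python
# Sketch: predict log dead-oil viscosity from (API gravity, temperature) only.
# The synthetic trend below is hypothetical and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
api = rng.uniform(10, 50, n)      # API gravity (heavy to light oils)
temp = rng.uniform(40, 150, n)    # temperature, deg C
# Hypothetical trend: viscosity falls as API gravity and temperature rise.
log_mu = 8.0 - 0.12 * api - 0.02 * temp + rng.normal(0, 0.2, n)

X = np.column_stack([api, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_mu, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
print("MAE (log-viscosity):", mean_absolute_error(y_te, model.predict(X_te)))
```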


2021 ◽  
Vol 379 (4) ◽  
Author(s):  
Pavlo O. Dral ◽  
Fuchun Ge ◽  
Bao-Xin Xue ◽  
Yi-Fan Hou ◽  
Max Pinheiro ◽  
...  

Atomistic machine learning (AML) simulations are used in chemistry at an ever-increasing pace. A large number of AML models have been developed, but their implementations are scattered among different packages, each with its own conventions for input and output. Here we give an overview of our MLatom 2 software package, which provides an integrative platform for a wide variety of AML simulations by implementing models from scratch and by interfacing with existing software for a range of state-of-the-art models. These include kernel-method-based model types such as KREG (native implementation), sGDML, and GAP-SOAP, as well as neural-network-based model types such as ANI, DeepPot-SE, and PhysNet. The theoretical foundations behind these methods are also reviewed. The modular structure of MLatom allows easy extension to further AML model types. MLatom 2 also offers many other capabilities useful for AML simulations, such as support for custom descriptors, farthest-point and structure-based sampling, hyperparameter optimization, model evaluation, and automatic learning-curve generation. It can also be used for multi-step tasks such as Δ-learning, self-correction approaches, and absorption spectrum simulation within the machine-learning nuclear-ensemble approach. Several of these MLatom 2 capabilities are showcased in application examples.
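
For readers unfamiliar with the kernel methods listed above, the following sketch illustrates the kernel-ridge-regression family to which KREG-style models belong: a Gaussian kernel over an inverse-internuclear-distance descriptor. It deliberately does not use MLatom's actual API, and the toy geometries and "energies" are fabricated for illustration only.

```python
# Sketch of Gaussian-kernel ridge regression on an inverse-distance
# descriptor, the general recipe behind KREG-style potentials.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def descriptor(coords):
    """Vector of inverse pairwise distances (a simple RE-like descriptor)."""
    n = len(coords)
    return np.array([1.0 / np.linalg.norm(coords[i] - coords[j])
                     for i in range(n) for j in range(i + 1, n)])

rng = np.random.default_rng(1)
# Toy dataset: 200 random 3-atom geometries with a fabricated energy surface.
geoms = rng.uniform(0.8, 2.0, size=(200, 3, 3))
X = np.array([descriptor(g) for g in geoms])
y = X.sum(axis=1) + 0.01 * rng.normal(size=len(X))  # hypothetical "energies"

model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.5)
model.fit(X[:150], y[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```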


The Analyst ◽  
2021 ◽  
Author(s):  
Barnaby Ellis ◽  
Conor A Whitley ◽  
Safaa Al Jedani ◽  
Caroline Smith ◽  
Philip Gunning ◽  
...  

A novel machine learning algorithm is shown to accurately discriminate between oral squamous cell carcinoma (OSCC) nodal metastases and surrounding lymphoid tissue on the basis of a single metric, the...


2018 ◽  
Vol 8 (4) ◽  
pp. 34 ◽  
Author(s):  
Vishal Saxena ◽  
Xinyu Wu ◽  
Ira Srivastava ◽  
Kehan Zhu

The ongoing revolution in deep learning is redefining the nature of computing, driven by the growing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still predominates because of the flexibility of software implementations and the maturity of algorithms. However, it is increasingly desirable for cognitive computing to occur at the edge, i.e., on energy-constrained hand-held devices, which is energy-prohibitive with digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise but offer a neurosynaptic density too low for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with complementary metal oxide semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid neuromorphic system-on-a-chip (NeuSoC) architectures promise machine learning capability at a chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstrations of such architectures have been limited because the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realizing large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with 'brain-like' energy efficiency.
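
As a toy illustration of the spike-based learning the article points to, here is a minimal sketch, with entirely hypothetical parameters, of a leaky integrate-and-fire neuron whose synaptic weights follow a simplified STDP-like rule and are clipped to a bounded conductance range, mimicking one non-ideality of real memristive devices.

```python
# Sketch: LIF neuron with "memristive" synapses. Weights are clipped to
# [Gmin, Gmax] to mimic the bounded conductance of real devices. All
# constants are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_in, T = 64, 500
w = rng.uniform(0.2, 0.8, n_in)        # synaptic conductances (normalized)
v, v_th, tau, lr = 0.0, 1.0, 20.0, 0.01
pre_trace = np.zeros(n_in)             # recent presynaptic activity (STDP trace)

for t in range(T):
    spikes_in = rng.random(n_in) < 0.05           # Poisson-like input spikes
    pre_trace = pre_trace * np.exp(-1 / tau) + spikes_in
    v = v * np.exp(-1 / tau) + w @ spikes_in      # leaky integration
    if v >= v_th:                                 # postsynaptic spike
        v = 0.0
        # Simplified STDP: potentiate recently active inputs, depress the rest.
        w += lr * (pre_trace - 0.1)
        # Device constraint: conductance bounded between Gmin and Gmax.
        w = np.clip(w, 0.05, 1.0)

print("mean weight after learning:", w.mean())
```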


AI Magazine ◽  
2012 ◽  
Vol 33 (2) ◽  
pp. 55 ◽  
Author(s):  
Nisarg Vyas ◽  
Jonathan Farringdon ◽  
David Andre ◽  
John Ivo Stivoric

In this article we provide insight into the BodyMedia FIT armband system, a wearable multi-sensor technology that continuously monitors physiological events related to energy expenditure for weight management using machine learning and data modeling methods. Since becoming commercially available in 2001, more than half a million users have used the system to track their physiological parameters and to achieve their individual health goals, including weight loss. We describe several challenges that arise in applying machine learning techniques to the health care domain and present the various solutions employed in the armband system. We demonstrate how machine learning and multi-sensor data fusion techniques are critical to the system's success.
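
A minimal sketch of the multi-sensor fusion idea, with synthetic data rather than anything resembling BodyMedia's proprietary models: features from several sensor channels are concatenated and passed to a single regressor that estimates energy expenditure.

```python
# Sketch: fuse accelerometer, heat-flux, skin-temperature and GSR features
# into one regressor for energy expenditure. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
accel = rng.gamma(2.0, 1.0, n)       # accelerometer activity counts
heat_flux = rng.normal(50, 10, n)    # W/m^2, heat dissipated from skin
skin_temp = rng.normal(33, 1, n)     # deg C
gsr = rng.gamma(2.0, 0.5, n)         # galvanic skin response
# Hypothetical ground truth combining channels, plus noise.
kcal_per_min = (1.2 + 0.8 * accel + 0.02 * heat_flux + 0.3 * gsr
                + rng.normal(0, 0.2, n))

X = np.column_stack([accel, heat_flux, skin_temp, gsr])
model = GradientBoostingRegressor(random_state=0)
model.fit(X[:800], kcal_per_min[:800])
pred = model.predict(X[800:])
print("MAE (kcal/min):", np.abs(pred - kcal_per_min[800:]).mean())
```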


2020 ◽  
Vol 12 (11) ◽  
pp. 4753
Author(s):  
Viju Raghupathi ◽  
Jie Ren ◽  
Wullianallur Raghupathi

Corporations have embraced the idea of corporate environmental, social, and governance (ESG) performance under the general framework of sustainability. Studies have measured and analyzed the impact of internal sustainability efforts on the performance of individual companies, policies, and projects. This exploratory study attempts to extract useful insight from shareholder sustainability resolutions using machine learning-based text analytics. Prior research has studied corporate sustainability disclosures from public reports; by studying shareholder resolutions, we gain insight into shareholders' perspectives and objectives. The primary source for this study is the Ceres sustainability shareholder resolution database, with 1737 records spanning 2009–2019. The study uses a combination of text analytic approaches (e.g., word clouds, co-occurrence, row similarities, clustering, and classification) to extract insights. These are novel methods for transforming textual data into useful knowledge about corporate sustainability endeavors. This study demonstrates that stakeholders, such as shareholders, can influence corporate sustainability via resolutions. The incorporation of text analytic techniques offers insight to researchers who study large collections of unstructured text, improving the understanding of shareholder resolutions and reaching a wider audience.
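
To illustrate the text analytic pipeline, here is a minimal sketch of a TF-IDF plus clustering step of the kind the study describes. The four toy "resolutions" are invented; the study itself used 1737 records from the Ceres database.

```python
# Sketch: vectorize resolution texts with TF-IDF, then cluster them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

resolutions = [
    "Report on greenhouse gas emission reduction targets",
    "Adopt goals to reduce greenhouse gas emissions",
    "Disclose board diversity and governance policies",
    "Publish a report on executive diversity and inclusion",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(resolutions)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for text, label in zip(resolutions, km.labels_):
    print(label, "-", text)   # climate vs. governance/diversity themes
```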

