Super Computing
Recently Published Documents

Total documents: 37 (five years: 13)
H-index: 3 (five years: 1)

J · 2021 · Vol 4 (3) · pp. 452–476
Author(s): Tyler Lance Jaynes

What separates the unique nature of human consciousness from that of an entity that can perceive the world only through strict logic-based structures? Rather than assume that logic-only existence is somehow infeasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), creations that run solely upon logic to process data, even with self-learning architectures, should not face the opposition they currently do to gaining certain legal duties and protections, insofar as they are sophisticated enough to display consciousness akin to that of humans. Should our species enable AIS to gain a digital body to inhabit (if we have not already done so), it is more pressing than ever that solid arguments be made as to how humanity can accept AIS as being cognizant to the same degree that we ourselves claim to be. By accepting the notion that AIS can and will be able to fool our senses into believing their claim to possessing a will or ego, we may yet have a chance to address them as equals before some unforgivable travesty occurs betwixt ourselves and these super-computing beings.


2021 · Vol 5 (Supplement_1) · pp. A236–A237
Author(s): Robert S. Fredericks

Abstract: The necessity of developing models that effectively organize data for the purpose of translating basic science to clinical care is increasingly recognized. Reliance upon digital computational methods restricts the value of natural experience reportable by patients, which is often dismissed as subjective. In the course of modeling phosphate metabolism in the context of clinical practice, it has become evident that the use of categories based on normality as the definition of health is inconsistent with the experience of patients. Given the opportunity, patients can provide detailed observations on their experience of heat as the principal component of metabolism. It seems logical that heat should also be the foundational principal component of models developed for the translation of data to clinical care. This strategy has been applied to modeling the role of ACE2 in the expression of variable phenotypes of COVID-19. Attempts to apply massive data and super-computing to the modeling of COVID-19 supported the assumption that ACE2 is a critical component causing disease, but attributed the effect not to an influence on heat but to bradykinin, which has long been proposed as the explanation for the chronic cough associated with ACE inhibition. Our modeling posits instead that the ACE system engages aldosterone, with subsequent influence on heat and acid/base balance, as the mediator of variance in the expression of individual phenotypes. This clarification has been useful for addressing complexity in the presentation of metabolic disorders including thyroid disease, diabetes, bone health, sleep disorders, vascular disease and chronic fatigue syndrome. It appears that the risk of developing ARDS shares a predisposition with chronic kidney disease mediated by excessive FGF23 effects, while asymptomatic spreaders are more Klotho-dependent. The vitamin D system is also complex and is involved in the modulation of heat and phosphate. These and other components can be extended to an understanding of bone and of the hematopoietic marrow niche governing immune responses, including a role for ACE2 in modulating microbiome influences. It is concluded that SARS-CoV-2 has helped to clarify the complexity of biology and has exposed the limitations of modeling strategies that do not include case-based practice, which can be described as "model-dependent realism" [1], as a means to discover the principal components of nature. Such models are the valued product of the research mandated by the Declaration of Helsinki when outcomes do not meet expectations, and they can facilitate the organization of all data in appropriate translation to clinical care.

[1] Hawking S., Mlodinow L., The Grand Design, pp. 39–59, Bantam Books, New York, 2010.


2021 · Vol 4 (1)
Author(s): Mitsumasa Nakajima, Kenji Tanaka, Toshikazu Hashimoto

Abstract: Photonic neuromorphic computing is of particular interest due to its significant potential for ultrahigh computing speed and energy efficiency. The advantage of photonic computing hardware lies in its ultrawide bandwidth and inherent parallelism. Here, we demonstrate a scalable on-chip photonic implementation of a simplified recurrent neural network, called a reservoir computer, using an integrated coherent linear photonic processor. In contrast to previous approaches, both the input and recurrent weights are encoded in the spatiotemporal domain by photonic linear processing, which enables scalable and ultrafast computing beyond the input electrical bandwidth. As the device can process multiple wavelength inputs over the telecom C-band simultaneously, we can use ultrawide optical bandwidth (~5 terahertz) as a computational resource. Experiments on standard benchmarks showed good performance for chaotic time-series forecasting and image classification. The device is estimated to perform 21.12 tera multiply–accumulate operations per second (MAC·s⁻¹) for each wavelength and can reach peta-scale computation speed on a single photonic chip by using wavelength-division multiplexing. Our results challenge conventional Turing–von Neumann machines and confirm the great potential of photonic neuromorphic processing towards peta-scale neuromorphic super-computing on a photonic chip.
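The abstract gives no code, but the reservoir-computer idea it builds on is easy to illustrate in software: a fixed random recurrent network maps an input stream into a high-dimensional state, and only a linear readout is trained. The NumPy sketch below is a minimal electronic echo-state network under assumed toy parameters (reservoir size, spectral radius, a synthetic signal standing in for a chaotic series); it illustrates the concept only, not the photonic hardware or the authors' implementation.

```python
import numpy as np

# Minimal echo-state network (reservoir computer) sketch.
# Input and recurrent weights are fixed at random; only the
# linear readout is trained. All sizes/constants are assumptions.
rng = np.random.default_rng(0)
n_in, n_res = 1, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # fixed input weights
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead forecasting of a quasi-periodic signal
# (a stand-in for the chaotic time series used in the paper).
t = np.linspace(0, 60, 3000)
u = np.sin(t) * np.sin(0.51 * t)
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:]                   # targets: next value of the series

# Train only the readout, with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train NMSE:", np.mean((pred - y) ** 2) / np.var(y))
```

On the throughput claim, the arithmetic is consistent: at an assumed ~50 usable wavelength channels across the ~5 THz C-band, 21.12 tera-MAC·s⁻¹ per wavelength gives roughly 1.06 × 10¹⁵ MAC·s⁻¹ on one chip, i.e. peta-scale.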


2020
Author(s): Dongyu Xue, Han Zhang, Dongling Xiao, Yukang Gong, Guohui Chuai, ...

Abstract: In silico modelling and analysis of small molecules substantially accelerates drug development, and representing and understanding molecules is the fundamental step for the various in silico molecular analysis tasks, which have traditionally been investigated individually and separately. In this study, we present X-MOL, which applies large-scale pre-training on 1.1 billion molecules for molecular understanding and representation, followed by carefully designed fine-tuning to accommodate diverse downstream molecular analysis tasks, including molecular property prediction, chemical reaction analysis, drug–drug interaction prediction, de novo generation of molecules and molecule optimization. X-MOL was shown to achieve state-of-the-art results on all of these tasks with good model interpretability. Collectively, by taking advantage of very large-scale pre-training data and super-computing power, our study demonstrates the utility of the idea that "mass makes miracles" in molecular representation learning and downstream in silico molecular analysis, indicating the great potential of using large-scale unlabelled data with carefully designed pre-training and fine-tuning strategies to unify existing molecular analysis tasks and substantially enhance the performance of each task.
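The abstract describes the generic pattern of pre-training on unlabelled molecules and then fine-tuning the same encoder on a labelled downstream task, but does not give X-MOL's architecture or code. The PyTorch sketch below illustrates that pattern only; the character-level SMILES vocabulary, model sizes, three-molecule corpus and fake property labels are all illustrative assumptions, not X-MOL's actual design.

```python
import torch
import torch.nn as nn

CHARS = sorted(set("CNOSPF()=#12cn[]+-"))   # toy SMILES alphabet (assumed)
PAD, MASK = len(CHARS), len(CHARS) + 1
VOCAB_SIZE = len(CHARS) + 2

def encode(smiles, max_len=32):
    ids = [CHARS.index(ch) for ch in smiles][:max_len]
    return ids + [PAD] * (max_len - len(ids))

class MoleculeEncoder(nn.Module):
    """Tiny transformer encoder over SMILES tokens with two heads."""
    def __init__(self, d=64, heads=4, layers=2):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, d)
        layer = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, layers)
        self.mlm_head = nn.Linear(d, VOCAB_SIZE)  # pre-training head
        self.prop_head = nn.Linear(d, 1)          # fine-tuning head

    def forward(self, ids):
        return self.enc(self.emb(ids))

model = MoleculeEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: masked-token pre-training on unlabelled SMILES
# (three molecules stand in for the paper's 1.1 billion).
smiles_corpus = ["CCO", "c1ccccc1", "CC(=O)O"]
ids = torch.tensor([encode(s) for s in smiles_corpus])
mask = (torch.rand(ids.shape) < 0.15) & (ids != PAD)
mask[0, 0] = True                     # ensure at least one masked token
inputs = ids.masked_fill(mask, MASK)
logits = model.mlm_head(model(inputs))
loss = nn.functional.cross_entropy(logits[mask], ids[mask])
loss.backward(); opt.step(); opt.zero_grad()

# Stage 2: fine-tune the same encoder for property prediction
# (fake labels; mean-pooled token states feed a regression head).
labels = torch.tensor([[0.2], [1.3], [0.7]])
pooled = model(ids).mean(dim=1)
pred = model.prop_head(pooled)
loss = nn.functional.mse_loss(pred, labels)
loss.backward(); opt.step()
```

The design point the abstract emphasizes is that both stages share one encoder: the representation learned from unlabelled data in stage 1 is what the task-specific heads adapt in stage 2.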

