Physical Intelligence and Thermodynamic Computing

Author(s):  
Robert L. Fry

This paper proposes that intelligent processes can be completely explained by thermodynamic principles. They can equally be described by information-theoretic principles that, from the standpoint of the required optimizations, are functionally equivalent. The underlying theory arises from two axioms regarding distinguishability and causality. Their consequence is a theory of computation that applies to the only two kinds of physical processes possible: those that reconstruct the past and those that control the future. Dissipative physical processes fall into the first class, whereas intelligent ones comprise the second. The first kind of process is exothermic, whereas the second is endothermic. Similarly, the first dumps entropy and energy to its environment, whereas the second reduces entropy while requiring energy to operate. It is shown that high intelligence efficiency and high energy efficiency are synonymous. The theory suggests the usefulness of developing a new computing paradigm called Thermodynamic Computing to engineer intelligent processes. The described engineering formalism for the design of thermodynamic computers is a hybrid of information theory and thermodynamics. Elements of the formalism are introduced through the reverse-engineering of a cortical neuron, which provides perhaps the simplest and most insightful example of a thermodynamic computer and can be seen as a basic building block for constructing more intelligent thermodynamic circuits.
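
To make the abstract's claim that entropy reduction requires energy concrete, here is a minimal, hedged Python sketch of the Landauer bound, the textbook lower limit on the energy an isothermal process must pay per bit of entropy it removes. This is an illustrative calculation only, not part of the paper's formalism, and the function name is mine.

```python
import math

# Illustrative only: the Landauer bound k_B * T * ln 2 is the minimum energy an
# isothermal process must dissipate per bit of entropy it removes, which makes the
# abstract's "entropy reduction requires energy" statement concrete. Not part of the
# paper's formalism; the function name is mine.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost_joules(bits_erased, temperature_k=300.0):
    """Minimum energy (J) needed to remove `bits_erased` bits of entropy at temperature T."""
    return bits_erased * K_B * temperature_k * math.log(2)

if __name__ == "__main__":
    print(f"{landauer_cost_joules(1):.3e} J per bit at 300 K")  # about 2.87e-21 J
```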

2021, Vol 21 (15&16), pp. 1296-1306
Author(s):  
Seyed Mousavi

Our computers today, from sophisticated servers to small smartphones, operate on the same computing model, which requires running a sequence of discrete instructions specified as an algorithm. This sequential computing paradigm has not yet led to a fast algorithm for an NP-complete problem despite numerous attempts over the past half-century. Even after the introduction of quantum mechanics to the world of computing, we have continued to follow a similar sequential paradigm, which has not yet yielded such an algorithm either. Here a completely different model of computing is proposed, replacing the sequential paradigm of algorithms with the inherent parallelism of physical processes. Using the proposed model, instead of writing algorithms to solve NP-complete problems, we construct physical systems whose equilibrium states correspond to the desired solutions and let them evolve to search for those solutions. The main requirements of the model are identified, and quantum circuits are proposed for its potential implementation.
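
As a hedged illustration of the idea that an equilibrium state can encode a solution, here is a small classical toy in Python: a MAX-CUT instance written as an Ising energy function and relaxed by simulated annealing. This is my own analogy for exposition; the abstract proposes quantum circuits, not this classical procedure, and the graph and parameters are arbitrary.

```python
import math
import random

# Toy classical analogue only: MAX-CUT on a tiny hypothetical graph, written as an
# Ising energy E(s) = sum over edges (i, j) of s_i * s_j. Low-energy (equilibrium-like)
# spin configurations correspond to large cuts, illustrating "the equilibrium state
# encodes the solution". The paper itself proposes quantum circuits, not this procedure.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N = 4

def energy(spins):
    return sum(spins[i] * spins[j] for i, j in EDGES)

def anneal(steps=5000, t_start=2.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in range(N)]
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling schedule
        i = random.randrange(N)
        # Energy change if spin i were flipped: only edges touching i contribute.
        local = sum(spins[b] if a == i else spins[a] for a, b in EDGES if i in (a, b))
        delta = -2 * spins[i] * local
        if delta <= 0 or random.random() < math.exp(-delta / t):
            spins[i] = -spins[i]  # Metropolis acceptance
    return spins

if __name__ == "__main__":
    spins = anneal()
    cut = sum(1 for i, j in EDGES if spins[i] != spins[j])
    print(f"spins={spins}  energy={energy(spins)}  cut edges={cut}/{len(EDGES)}")
```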


Author(s):  
Zening Lin, Tao Jiang, Jianzhong Shang

In the past few decades, robotics research has witnessed increasing interest in miniaturized, intelligent, and integrated robots. An essential component of a robot is the actuator, which largely determines its performance. Although traditional rigid drives such as motors and gas engines are prevalent in most macroscale circumstances, shrinking these drives to the millimeter scale or below sharply increases manufacturing difficulty and markedly degrades performance. Biohybrid robots driven by living cells are a potential solution to these drawbacks: they benefit from the intrinsic microscale self-assembly and high energy efficiency of living tissues and, among other unprecedented properties, also feature flexibility, self-repair, and even multiple degrees of freedom. This paper systematically reviews the development of biohybrid robots. First, the development of biological flexible drivers is introduced, emphasizing their advantages over traditional drivers. Second, up-to-date work on biohybrid robots is reviewed in detail from three aspects: biological driving sources, actuator materials, and structures with associated control methodologies. Finally, the potential future applications and major challenges of biohybrid robots are explored.


2012, Vol 84 (3), pp. 411-423
Author(s):  
Pietro Tundo

Since the Industrial Revolution, chlorine has featured as an iconic molecule in process chemistry even though its production by electrolysis of sodium chloride is very energy-intensive. Owing to its high energy and reactivity, chlorine allows chlorinated derivatives such as AlCl3, SnCl4, TiCl4, SiCl4, ZnCl2, PCl3, PCl5, POCl3, and COCl2 to be manufactured very easily; these in turn are pillar intermediates in the production of numerous everyday goods. This chloride chemistry is widely used because the energy is transferred to these intermediates, making further syntheses easy. Environmental and health constraints (toxicity and eco-toxicity, ozone layer depletion) and the growing need for energy (energy efficiency, climate change) force us to take advantage of available knowledge to develop new chemical strategies. Substituting chlorine in end products and in compounds where “chlorine is used in the making” means avoiding electrolysis as the primary energy source; this makes chemistry “without chlorine” considerably more difficult and explains why it has not found favor in the past. The rationale behind this Special Topic issue is to present useful and industrially relevant examples of alternatives to chlorine in synthesis, so as to facilitate the development of industrially relevant and implementable breakthrough technologies.


2017, Vol 598, pp. A39
Author(s):  
H. Abdalla, A. Abramowski, F. Aharonian, F. Ait Benkhali, ...

Studying the temporal variability of BL Lac objects at the highest energies provides unique insights into the extreme physical processes occurring in relativistic jets and in the vicinity of supermassive black holes. To this end, the long-term variability of the BL Lac object PKS 2155−304 is analyzed in the high-energy (HE, 100 MeV < E < 300 GeV) and very-high-energy (VHE, E > 200 GeV) γ-ray domain. Over the course of ~9 yr of H.E.S.S. observations, the VHE light curve in the quiescent state is consistent with log-normal behavior. The VHE variability in this state is well described by flicker noise (power-spectral-density index β_VHE = 1.10 (+0.10/−0.13)) on timescales larger than one day. An analysis of ~5.5 yr of HE Fermi-LAT data gives consistent results (β_HE = 1.20 (+0.21/−0.23), on timescales larger than 10 days) compatible with the VHE findings. The HE and VHE power spectral densities show scale invariance across the probed time ranges. A direct linear correlation between the VHE and HE fluxes could neither be excluded nor firmly established. These long-term variability properties are discussed and compared to the red-noise behavior (β ~ 2) seen on shorter timescales during VHE flaring states. The difference in power-spectral noise behavior at VHE energies between quiescent and flaring states provides evidence that these states are influenced by different physical processes, while the compatibility of the HE and VHE long-term results is suggestive of a common physical link, as might be introduced by an underlying jet-disk connection.
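
For readers wanting a feel for what a power-spectral-density index β measures, below is a hedged Python sketch that fits a power law, PSD(f) ∝ f^(−β), to the periodogram of an evenly sampled light curve. It is an illustration only: the published H.E.S.S. and Fermi-LAT analysis handles uneven sampling, observational gaps, and measurement noise, which this toy ignores, and the function names are mine.

```python
import numpy as np

# Hedged illustration: estimate the power-spectral-density index beta, where
# PSD(f) ~ f^(-beta), from an evenly sampled light curve. beta ~ 1 is flicker noise,
# beta ~ 2 is red noise. The published analysis handles uneven sampling, gaps and
# measurement noise, none of which this toy does.
def psd_index(flux, dt=1.0):
    flux = np.asarray(flux, dtype=float)
    flux = flux - flux.mean()                       # remove the mean (zero-frequency term)
    power = np.abs(np.fft.rfft(flux)) ** 2          # periodogram
    freqs = np.fft.rfftfreq(flux.size, d=dt)
    mask = freqs > 0
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
    return -slope                                   # PSD ~ f^(-beta) => beta = -slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    red_noise = np.cumsum(rng.normal(size=4096))    # random walk: expect beta close to 2
    print(f"estimated beta ~ {psd_index(red_noise):.2f}")
```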


2022, pp. 1-27
Author(s):  
Clifford Bohm, Douglas Kirkpatrick, Arend Hintze

Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work has identified an information-theoretic tool, referred to as R, which allows us to quantify and identify mental representations in artificial cognitive systems. The use of such measures has allowed us to make previously black-box systems more transparent. Here we extend R not only to identify where complex computational systems store memory about their environment but also to differentiate between different time points in the past. We show how this extended measure can identify the location of memory related to past experiences in neural networks optimized by deep learning as well as by a genetic algorithm.
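
As a rough, hedged illustration of the kind of information-theoretic quantity involved, the Python sketch below estimates the mutual information between a hidden unit's states and an environmental feature from paired samples. It is deliberately simplified: the R measure defined in the prior work additionally conditions on the agent's sensor states, and the helper names here are mine.

```python
import math
import random
from collections import Counter

# Simplified proxy only: a plain mutual-information estimate between a hidden unit's
# states and an environmental feature. The R measure in the paper (and the prior work
# it cites) additionally conditions on the agent's sensor states; this sketch just
# shows the flavor of "how many bits does this node carry about the environment?".
def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y])) for (x, y), c in pxy.items())

if __name__ == "__main__":
    # A hidden unit copying an environment bit with 10% noise carries ~0.53 bits about it.
    env = [random.randint(0, 1) for _ in range(100000)]
    hidden = [e if random.random() > 0.1 else 1 - e for e in env]
    print(f"I(hidden; env) ~ {mutual_information(hidden, env):.2f} bits")
```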


2013, Vol 06 (01), pp. 1330001
Author(s):  
Jing Xu, Dae Hoe Lee, Ying Shirley Meng

Significant progress has been achieved in research on sodium intercalation compounds as positive electrode materials for Na-ion batteries. This paper presents an overview of the past decade's breakthroughs in developing high-energy and high-power cathode materials. Two major classes, layered oxides and polyanion compounds, are covered. Their electrochemical performance and the related crystal structure, solid-state physics, and chemistry are summarized and compared.


2021, pp. SP523-2021-76
Author(s):  
Robert W. Dalrymple

This study reviews the morphology, hydrodynamics and sedimentology of 33 modern straits, including examples from diverse tectonic and climatic settings. Strait morphology ranges from short, simple straits to long, tortuous passages many hundreds of kilometers long; depths range from 10 m to >1 km. The morphological building block of strait sedimentation is a constriction flanked by open basins; a single strait can contain one or several of these. Currents accelerate through the constrictions and decelerate in the basins, leading to a spatial alternation of high- and low-energy conditions. Currents in a strait can be classified as either ‘persistent’ (oceanic currents or density-driven circulation) or ‘intermittent’ (tidally or meteorologically generated currents). Constrictions tend to be bedload partings, with the development of transport paths that diverge outward. Deposition occurs where the flow decelerates, generating paired subaqueous ‘constriction-related deltas’ that can be of unequal size. Cross-bedding predominates in high-energy settings; muddy sediment waves and contourite drifts are present in some straits. In shallow straits that were exposed during the sea-level lowstand, strait deposits typically occur near or at the maximum flooding surface and can overlie estuarine and fluvial deposits. The most energetic deposits need not occur at the time of maximum inundation. Supplementary material at https://doi.org/10.6084/m9.figshare.c.5746061


1994, Vol 37 (2)
Author(s):  
I. Stanislawska

The paper presents two contrasting approaches to single-station prediction and forecasting. The two methods rest on different assumptions about physical processes in the ionosphere and require different sets of input data. Various heliogeophysical data, mainly past foF2 values, were analyzed to obtain foF2 for the requested period ahead. In the first method, the autocovariance prediction method, the foF2 time series from a single station is used for a daily forecast at that point. The second method may be used to obtain foF2 not only at a particular ionospheric station but also at any point within the considered area.
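
To give a flavor of single-station forecasting from a station's own history, here is a hedged Python sketch of a one-step-ahead autoregressive predictor fitted by least squares. It is my illustration of the general approach, not the paper's autocovariance algorithm, and the synthetic series merely stands in for real daily foF2 measurements.

```python
import numpy as np

# Hedged sketch: a one-step-ahead autoregressive predictor built only from a single
# station's own past foF2 values. This illustrates the single-station, past-data-only
# flavor of the approach; it is not the paper's autocovariance algorithm, and the
# synthetic series below merely stands in for real daily foF2 measurements (MHz).
def fit_ar(series, order=3):
    """Least-squares fit of an AR(order) model; returns [intercept, a_1, ..., a_order]."""
    x = np.asarray(series, dtype=float)
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    X = np.column_stack([np.ones(len(rows)), rows])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    order = len(coeffs) - 1
    recent = np.asarray(series[-order:], dtype=float)  # most recent `order` values
    return coeffs[0] + float(np.dot(coeffs[1:], recent))

if __name__ == "__main__":
    t = np.arange(200)
    rng = np.random.default_rng(1)
    fof2 = 7.0 + 1.5 * np.sin(2 * np.pi * t / 27) + rng.normal(0.0, 0.2, t.size)
    coeffs = fit_ar(fof2, order=3)
    print(f"one-day-ahead foF2 forecast: {predict_next(fof2, coeffs):.2f} MHz")
```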


2020, Vol 9 (5)
Author(s):  
Anjishnu Bose, Parthiv Haldar, Aninda Sinha, Pritish Sinha, Shaswat Tiwari

We consider entanglement measures in 2-to-2 scattering in quantum field theories, focusing on relative entropy, which distinguishes two different density matrices. Relative entropy is investigated in several cases, including ϕ⁴ theory, chiral perturbation theory (χPT) describing pion scattering, and dilaton scattering in type II superstring theory. We derive a high-energy bound on the relative entropy using known bounds on the elastic differential cross-sections in massive QFTs. In χPT, the relative entropy close to threshold has simple expressions in terms of ratios of scattering lengths. In certain cases, definite sign properties are found for the relative entropy, over and above its usual positivity. We then turn to the recent numerical investigations of the S-matrix bootstrap in the context of pion scattering. By imposing these sign constraints and the ρ resonance, we find restrictions on the allowed S-matrices. By performing hypothesis testing using relative entropy, we isolate two sets of S-matrices living on the boundary which give scattering lengths comparable to experiments, but one of which is far from the 1-loop χPT Adler zeros. We perform a preliminary analysis to constrain the allowed space further, using ideas involving positivity inside the extended Mandelstam region and other quantum-information-theoretic measures based on entanglement in isospin.
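
For concreteness, the quantity referred to above is the quantum relative entropy S(ρ‖σ) = Tr[ρ(log ρ − log σ)]. The hedged Python sketch below evaluates it numerically for two toy single-qubit density matrices; it illustrates the definition only, not the reduced density matrices or QFT computations of the paper, and the helper names are mine.

```python
import numpy as np
from scipy.linalg import logm

# Illustration of the definition only: quantum relative entropy
# S(rho || sigma) = Tr[rho (log rho - log sigma)], evaluated for two toy single-qubit
# density matrices rather than the reduced density matrices of 2-to-2 scattering
# studied in the paper. Function names are mine.
def relative_entropy(rho, sigma):
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def diagonal_qubit(p):
    """Density matrix diag(p, 1 - p) for 0 < p < 1."""
    return np.diag([p, 1.0 - p]).astype(complex)

if __name__ == "__main__":
    rho = diagonal_qubit(0.9)    # hypothetical "true" state
    sigma = diagonal_qubit(0.5)  # maximally mixed reference
    print(f"S(rho||sigma) = {relative_entropy(rho, sigma):.3f} nats")  # about 0.368 nats
```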


2012, Vol 1 (2), pp. 11-16
Author(s):  
Amina Asif Siddiqui

The age-old belief that an individual with a hearing loss is incapable of acquiring verbal communication skills was readily accepted in the past, which led to the inadvertent but unfortunate coining of phrases such as “deaf and dumb” or “deaf and mute,” and to the development of non-verbal or manual communication methodologies such as sign language. This in turn caused the segregation and isolation of otherwise physically and intellectually competent individuals from mainstream society, unjustifiably denying them opportunities for education and vocation. Studies have shown that, in the absence of any organic or inorganic complication, a child with a hearing loss can not only score a high intelligence quotient but also acquire more than one language fluently. Early intervention with appropriate amplification of residual hearing is underscored as the fundamental prerequisite for children with bilateral congenital profound sensorineural hearing loss, enabling the subsequent acquisition of good listening and normal speech-language skills and plausible bilingualism, which further equips them with scholastic achievements comparable to those of their hearing peers. The past half-century has witnessed enormous technological advances in amplification devices manufactured for children with hearing loss, complemented by steady success in fostering their inclusive education. This paper highlights the urgent need in Pakistan to address this issue, as well as the importance of early detection, diagnosis, and (re)habilitation, along with parent training initiated within the first year of life. An otherwise disabling condition may be overcome completely if neonatal screening, which is both inexpensive and easy to perform, is made mandatory at all hospitals and maternity homes, as practiced in the developed world. This would ease the challenges faced by the families of children with hearing loss and enable the professionals working with them to alleviate their communicative, social, educational and vocational difficulties, ensuring that they become contributing members of our verbal society.

