An Introduction to Cellular Computing

Author(s):  
Martyn Amos ◽  
Gerald Owenson

The abstract operation of complex natural processes is often expressed in terms of networks of computational components such as Boolean logic gates or artificial neurons. The interaction of biological molecules and the flow of information controlling the development and behavior of organisms is particularly amenable to this approach, and these models are well established in the biological community. However, only relatively recently have papers appeared proposing the use of such systems to perform useful, human-defined tasks. Rather than merely using the network analogy as a convenient technique for clarifying our understanding of complex systems, it is now possible to harness the power of such systems for the purposes of computation.

The purpose of this volume is to discuss such work. In this introductory chapter we place this work in historical context and provide an introduction to some of the underlying molecular biology. We then introduce recent developments in the field of cellular computing.

Despite the relatively recent emergence of molecular computing as a distinct research area, the link between biology and computer science is not a new one. Of course, for years biologists have used computers to store and analyze experimental data. Indeed, it is widely accepted that the huge advances of the Human Genome Project (as well as other genome projects) were only made possible by the powerful computational tools available to them. Bioinformatics has emerged as the science of the 21st century, requiring the contributions of truly interdisciplinary scientists who are equally at home at the lab bench or writing software at the computer. However, the seeds of the relationship between biology and computer science were sown long ago, when the latter discipline did not even exist.
When, in the 17th century, the French mathematician and philosopher René Descartes declared to Queen Christina of Sweden that animals could be considered a class of machines, she challenged him to demonstrate how a clock could reproduce. Three centuries later, with the publication of The General and Logical Theory of Automata [19], John von Neumann showed how a machine could indeed construct a copy of itself.
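The network analogy the abstract opens with can be made concrete with a small sketch. The following is purely illustrative (not taken from the chapter): a toy gene-regulatory interaction modelled as a Boolean network, in which each "gene" is either on (True) or off (False) and its next state is a logic function of its regulators. The three-gene wiring shown is an assumption chosen for clarity.

```python
def step(state):
    """One synchronous update of a hypothetical three-gene network.

    Assumed wiring: gene C is expressed only when activator A is on
    AND repressor B is off (an AND-NOT gate); A and B are held fixed.
    """
    a, b = state["A"], state["B"]
    return {"A": a, "B": b, "C": a and not b}

state = {"A": True, "B": False, "C": False}
state = step(state)
print(state["C"])  # True: the AND-NOT gate switches C on
```

The same update rule, iterated over many interconnected genes, is exactly the kind of logic-gate network the biological-modelling literature employs, and which cellular computing proposes to program deliberately.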

Inventions ◽  
2018 ◽  
Vol 3 (4) ◽  
pp. 72 ◽  
Author(s):  
Iris Kico ◽  
Nikos Grammalidis ◽  
Yiannis Christidis ◽  
Fotis Liarokapis

According to UNESCO, cultural heritage does not only include monuments and collections of objects, but also contains traditions or living expressions inherited from our ancestors and passed to our descendants. Folk dances represent part of cultural heritage and their preservation for the next generations appears of major importance. Digitization and visualization of folk dances form an increasingly active research area in computer science. In parallel to the rapidly advancing technologies, new ways for learning folk dances are explored, making it possible to digitize and visualize assorted folk dances for learning purposes using different equipment. Along with challenges and limitations, solutions that can assist the learning process and provide the user with meaningful feedback are proposed. In this paper, an overview of the techniques used for the recording of dance moves is presented. The different ways of visualizing and giving feedback to the user are reviewed, as well as methods of performance evaluation. This paper reviews advances in digitization and visualization of folk dances from 2000 to 2018.


Author(s):  
Stephen Downes

This article discusses the topic of learning objects in three parts. First, it identifies a need for learning objects and describes their essential components based on this need. Second, drawing on concepts from recent developments in computer science, it describes learning objects from a theoretical perspective. Finally, it describes learning objects in practice, first as they are created or generated by content authors, and second, as they are displayed or used by students and other client groups.


2021 ◽  
Vol 15 ◽  
Author(s):  
Fabian Kiepe ◽  
Nils Kraus ◽  
Guido Hesselmann

Self-generated auditory input is perceived less loudly than the same sounds generated externally. The existence of this phenomenon, called Sensory Attenuation (SA), has been studied for decades and is often explained by motor-based forward models. Recent developments in the research of SA, however, challenge these models. We review the current state of knowledge regarding theoretical implications about the significance of Sensory Attenuation and its role in human behavior and functioning. Focusing on behavioral and electrophysiological results in the auditory domain, we provide an overview of the characteristics and limitations of existing SA paradigms and highlight the problem of isolating SA from other predictive mechanisms. Finally, we explore different hypotheses attempting to explain heterogeneous empirical findings, and the impact of the Predictive Coding Framework in this research area.
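The motor-based forward-model account mentioned above can be sketched in a few lines. This is our own toy illustration, not a model from the reviewed literature: an efference copy of the motor command feeds a predictor, and the prediction is subtracted from the actual sensory input. The linear predictor and all numeric values are assumptions.

```python
def forward_model(motor_command):
    # Toy predictor (an assumption): a stronger motor command
    # predicts a proportionally louder sensory consequence.
    return 1.0 * motor_command

def perceived_loudness(actual, motor_command, gain=0.8):
    """Attenuate the percept by a gain-weighted prediction."""
    predicted = forward_model(motor_command)
    return actual - gain * predicted

# Same physical sound, self-generated vs. externally generated:
self_generated = perceived_loudness(actual=10.0, motor_command=10.0)
external = perceived_loudness(actual=10.0, motor_command=0.0)
print(self_generated < external)  # True: the self-generated tone is attenuated
```

The recent findings the review discusses challenge precisely this picture, e.g. by asking whether the attenuation is really driven by the motor command or by more general predictive mechanisms.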


Author(s):  
Ioan DZITAC

Membrane Computing is a branch of Computer Science initiated by Gheorghe Păun in 1998, in a technical report of the Turku Centre for Computer Science, published as a journal paper ("Computing with Membranes", Journal of Computer and System Sciences) in 2000. Membrane systems, as Gheorghe Păun called the models he introduced, are known nowadays as "P Systems" (the letter P coming from the initial of the surname of the field's "father"). This note is an overview of the ISI WoS impact of Gheorghe Păun's works, focused on the field of Membrane Computing and P Systems, on the occasion of his 65th birthday.
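For readers unfamiliar with the field, the flavour of a P system can be conveyed with a minimal sketch. This is our own simplified illustration, not Păun's full formalism: a single membrane holds a multiset of objects, and an evolution rule such as a → bc is applied in a maximally parallel way, consuming every available 'a' in one step.

```python
from collections import Counter

def apply_rule(multiset, lhs, rhs):
    """Apply rule lhs -> rhs to every copy of lhs (maximal parallelism)."""
    n = multiset[lhs]
    if n:
        del multiset[lhs]          # all copies of lhs are consumed...
        for obj in rhs:
            multiset[obj] += n     # ...and each yields the right-hand objects
    return multiset

ms = Counter("aab")            # membrane contents: {a: 2, b: 1}
apply_rule(ms, "a", "bc")      # a -> bc, fired on both copies of a
print(sorted(ms.elements()))   # ['b', 'b', 'b', 'c', 'c']
```

Real P systems add nested membranes, communication rules between regions, and membrane dissolution, which together give the model its computational power.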


2021 ◽  
pp. 135406612110536
Author(s):  
Jonathan White

The making of modern authority centred on efforts to formalise and de-personalise power, and transnational orders such as the European Union have often been viewed as an extension of that project. As this article argues, recent developments tell a different story. More than a decade of crisis politics has seen institutions subordinated to and reshaped by individuals and the networks they form. Locating these tendencies in a wider historical context, the article argues that greater attention to informality in transnational governance needs to be paired with greater recognition of the normative questions it raises. Just as a separation between rulers and the offices of rule was central to the making of modern legal and political structures, the weakening of that separation creates legitimacy problems for contemporary authorities both national and supranational. Rather than being acclaimed as flexible problem-solving, the step back from institutions should be viewed as a challenge to accountable rule.


Author(s):  
K. S. Prasath

Abstract: Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image. Image processing is nowadays among the most rapidly growing technologies, and it also forms a core research area within the engineering and computer science disciplines. Image detection on the road is primarily carried out with the help of a camera connected to a Raspberry Pi 3 Model B+ and simulation software. The device is built in such a way that it can identify potholes in the respective roads so that they can be rectified as soon as possible. The data signals shared by the device are converted to text signals from which the road condition can be assessed. The devices are fixed at the top of lampposts located at the corners of the road, from where each device monitors the road over a 120-degree field of view on a weekly basis. Keywords: Image processing, Image detection on road, Raspberry pi 3, 120 degree
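The abstract does not specify the detection algorithm, so the following is only one plausible step in such a pipeline, written as a self-contained sketch: flagging unusually dark pixels in a grayscale road image as pothole candidates. The threshold rule and the toy 4x4 "image" are assumptions for illustration.

```python
def pothole_candidates(gray, threshold=60):
    """Return (row, col) positions of pixels darker than the threshold.

    `gray` is a 2D list of grayscale intensities (0 = black, 255 = white);
    dark clusters on an otherwise bright road surface are candidate potholes.
    """
    return [(r, c)
            for r, row in enumerate(gray)
            for c, value in enumerate(row)
            if value < threshold]

road = [
    [200, 198, 201, 199],
    [197,  40,  35, 200],   # dark patch: candidate pothole region
    [199,  38, 202, 201],
    [200, 199, 198, 200],
]
print(pothole_candidates(road))  # [(1, 1), (1, 2), (2, 1)]
```

A deployed system would of course work on real camera frames and add noise filtering and shadow rejection; the sketch only shows the thresholding idea.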


Author(s):  
Sergey V. Dorozhkin

There has been much recent activity in the research area of nanoparticles and nanocrystalline materials, in many fields of science and technology. This is due to their outstanding and unique physical, mechanical, chemical and biological characteristics. Recent developments in biomineralization have demonstrated that nano-sized particles play an important role in the formation of the hard tissues of animals. It is well established that the basic inorganic building blocks of bones and teeth of mammals are nano-sized and nanocrystalline calcium orthophosphates (in the form of apatites) of a biological origin. In mammals, tens to hundreds of nanocrystals of biological apatite are found to combine into self-assembled structures under the control of bio-organic matrixes. It was also confirmed experimentally that the structure of both dental enamel and bones could be mimicked by an oriented aggregation of nano-sized calcium orthophosphates, determined by the biomolecules. The application and prospective use of nano-sized and nanocrystalline calcium orthophosphates for clinical repair of damaged bones and teeth are also known. For example, a greater viability and a better proliferation of various cells were detected on smaller crystals of calcium orthophosphates. Furthermore, studies revealed that the differentiation of various cells was promoted by nano-sized calcium orthophosphates. Thus, the nano-sized and nanocrystalline forms of calcium orthophosphates have the potential to revolutionize the field of hard tissue engineering, in areas ranging from bone repair and augmentation to controlled drug delivery devices. This paper reviews the current state of knowledge and recent developments of various nano-sized and nanocrystalline calcium orthophosphates, covering topics from the synthesis and characterization to biomedical and clinical applications. This review also provides possible directions of future research and development.


Author(s):  
Douglas Griffith ◽  
Frank L. Greitzer

The purpose of this article is to re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the 4 decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science. Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This article provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.


2009 ◽  
pp. 2843-2864 ◽  
Author(s):  
Kostas Kolomvatsos ◽  
Stathes Hadjiefthymiades

The field of Multi-agent systems (MAS) has been an active research area for many years owing to the importance of agents across many disciplines of computer science. MAS are open and dynamic systems in which a number of autonomous software components, called agents, communicate and cooperate in order to achieve their goals. In such systems, trust plays an important role. There must be a way for an agent to make sure that it can trust another entity, which is a potential partner. Without trust, agents cannot cooperate effectively, and without cooperation they cannot fulfill their goals. Trust is often based on reputation, which serves as an indication that an entity may be trusted. This important research area is investigated in this book chapter. We discuss the main issues concerning reputation and trust in MAS, present research efforts, and give formalizations useful for understanding the two concepts.
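One widely used formalization of reputation in the MAS literature (the beta reputation model; shown here as our own illustration, not necessarily among the formalizations the chapter presents) estimates an agent's trustworthiness from its history of p positive and n negative interactions.

```python
def beta_reputation(positive, negative):
    """Expected trustworthiness under a Beta(positive+1, negative+1) prior.

    With no history the score is a neutral 0.5; it moves toward 1.0 as
    positive interactions accumulate, and toward 0.0 as negative ones do.
    """
    return (positive + 1) / (positive + negative + 2)

print(beta_reputation(0, 0))  # 0.5  -- unknown agent, neutral prior
print(beta_reputation(8, 2))  # 0.75 -- mostly good past interactions
```

An agent can then cooperate only with partners whose score exceeds some threshold, which is one simple way reputation operationalizes trust.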

