Importance of Nanosensors: Feynman's Vision and the Birth of Nanotechnology

MRS Bulletin ◽  
2007 ◽  
Vol 32 (9) ◽  
pp. 718-725 ◽  
Author(s):  
Jozef T. Devreese

In his visionary 1959 lecture at Caltech, Richard P. Feynman foresaw the potential of manipulating matter at the atomic scale. In this article, adapted from Integrated Nanosensors, MRS Symposium Proceedings Volume 952E, edited by I.K. Schuller, Y. Bruynseraede, L.M. Lechuga, and E. Johnson (2007), Jozef T. Devreese (University of Antwerp) discusses implementations of Feynman's vision in the field of nanosensors and prospects for their further development and application. Nanoparticles are unique tools as sensors. Particles with sizes at the nanoscale exhibit physical properties that do not exist in bulk materials, and these properties can operate well inside living cells. Nanosensors possess unique physical characteristics: their sensitivity can be orders of magnitude better than that of conventional devices, and they offer performance advantages such as fast response and portability. State-of-the-art nanosensors are based on various advanced materials (quantum dots, nanoshells, nanopores, carbon nanotubes, etc.). Nanosensors furthermore allow for building an entirely new class of integrated devices that provide the elemental base for “intelligent sensors” capable of data processing, storage, and analysis. These advances can open unprecedented perspectives for the application of nanosensors in various fields, for example, as molecular-level diagnostic and treatment instruments in medicine and as networks of nanorobots for real-time monitoring of the physiological parameters of a human body.

Author(s):  
Liang-Chien Liu ◽  
Ping-Han Yang ◽  
Shih-Chi Liao ◽  
Bing-Peng Li ◽  
Fu-Cheng Wang ◽  
...  

This article presents the development of a visual-servo filming robot for dolly- and truck-style camera movements in filming applications. The robot combines a fast-response slider as the upper stage, mounted on a slow-response tracked robot body as the lower stage, to improve target-tracking performance. A new switching controller was developed that controls the two stages' motions by balancing and adjusting the weights of the vision error and the slider's non-centering error of the upper stage, achieving better tracking performance than the traditional master–slave control strategy. Simulations were carried out to evaluate the tracking performance of the model, focusing in particular on how the dual stage improves the overall response. A similar evaluation was then performed experimentally. Both sets of results confirm that the fast-response characteristics of the upper stage can compensate for the slow dynamics of the lower stage, the tracked robot, which is inevitably heavy due to its construction.
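As a rough illustration of the kind of weighted dual-stage control law the abstract describes, the following Python sketch balances a vision error against the slider's non-centering error; the gains, weights, and signal names are illustrative assumptions, not the authors' controller.

# Illustrative dual-stage tracking controller: a fast slider corrects the
# residual vision error while the slow tracked base recentres the slider.
# Gains and weights are made-up placeholders, not the paper's values.

def dual_stage_command(vision_error, slider_offset, w_vision=0.7, w_center=0.3,
                       k_slider=2.0, k_base=0.5):
    """Return (slider_velocity, base_velocity) for one control step."""
    # Upper stage: react quickly to the camera's tracking error, while being
    # pulled back toward its centre position.
    slider_velocity = k_slider * (w_vision * vision_error - w_center * slider_offset)
    # Lower stage: move the heavy base so the slider can drift back to centre.
    base_velocity = k_base * slider_offset
    return slider_velocity, base_velocity

# Example step: target is 0.15 m off-centre in the image, slider sits 0.05 m
# away from its own centre position.
print(dual_stage_command(0.15, 0.05))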


OR Spectrum ◽  
2021 ◽  
Author(s):  
Adejuyigbe O. Fajemisin ◽  
Laura Climent ◽  
Steven D. Prestwich

This paper presents a new class of multiple-follower bilevel problems and a heuristic approach to solving them. In this new class of problems, the followers may be nonlinear, do not share constraints or variables, and are at most weakly constrained. This allows the leader variables to be partitioned among the followers. We show that current approaches for solving multiple-follower problems are unsuitable for our new class of problems and instead we propose a novel analytics-based heuristic decomposition approach. This approach uses Monte Carlo simulation and k-medoids clustering to reduce the bilevel problem to a single level, which can then be solved using integer programming techniques. The examples presented show that our approach produces better solutions and scales up better than the other approaches in the literature. Furthermore, for large problems, we combine our approach with the use of self-organising maps in place of k-medoids clustering, which significantly reduces the clustering times. Finally, we apply our approach to a real-life cutting stock problem. Here a forest harvesting problem is reformulated as a multiple-follower bilevel problem and solved using our approach.
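A minimal Python sketch of the sampling-and-clustering step described above, under a toy follower model: Monte Carlo samples of follower responses are grouped with a simple k-medoids routine, and the medoids act as representative reactions in a single-level surrogate. All names and data here are hypothetical, not the paper's formulation.

import numpy as np

rng = np.random.default_rng(0)

def follower_response(leader_x):
    # Toy nonlinear follower: best response perturbed by random data.
    return np.sin(leader_x) + rng.normal(scale=0.1, size=leader_x.shape)

def k_medoids(points, k, iters=20):
    # Plain alternating k-medoids on 1-D points; a stand-in for a real PAM solver.
    medoids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - medoids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members) == 0:
                continue
            # New medoid minimises total distance to its cluster members.
            costs = np.abs(members[:, None] - members[None]).sum(axis=(1, 2))
            medoids[j] = members[costs.argmin()]
    return medoids

# Monte Carlo sampling of leader decisions and follower reactions.
leader_samples = rng.uniform(0, np.pi, size=(500, 1))
responses = follower_response(leader_samples)
representatives = k_medoids(responses, k=5)
print(representatives.ravel())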


2020 ◽  
pp. 1-16
Author(s):  
Meriem Khelifa ◽  
Dalila Boughaci ◽  
Esma Aïmeur

The Traveling Tournament Problem (TTP) is concerned with finding a double round-robin tournament schedule that minimizes the total distance traveled by the teams. It has attracted significant interest recently, since a favorable TTP schedule can result in considerable savings for the league. This paper proposes an original evolutionary algorithm for the TTP. We first propose a quick and effective constructive algorithm to build a Double Round Robin Tournament (DRRT) schedule with low travel cost. We then describe an enhanced genetic algorithm with a new crossover operator to improve the travel cost of the generated schedules. A new heuristic for efficiently ordering the scheduled rounds is also proposed, which leads to a significant improvement in the quality of the schedules. The overall method is evaluated on publicly available standard benchmarks and compared with other techniques for the TTP and the Unconstrained Traveling Tournament Problem (UTTP). The computational experiments show that the proposed approach builds solutions comparable to other state-of-the-art approaches, or better than the current best solutions on the UTTP. Furthermore, our method provides new valuable solutions to some unsolved UTTP instances and outperforms prior methods on all US National League (NL) instances.
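To make the objective concrete, here is a hedged Python sketch of the travel-cost function such an evolutionary algorithm would minimize; the schedule encoding (rounds of (home, away) index pairs) and the toy distance matrix are assumptions for illustration, not the benchmark instances used in the paper.

import numpy as np

def total_travel(schedule, dist):
    """Total distance travelled by all teams over a schedule of rounds."""
    n = dist.shape[0]
    location = list(range(n))          # every team starts at its home city
    total = 0.0
    for rnd in schedule:
        for home, away in rnd:
            # Home team returns home if needed; away team travels to the host.
            total += dist[location[home], home] + dist[location[away], home]
            location[home], location[away] = home, home
    # Return trips home after the final round.
    total += sum(dist[location[t], t] for t in range(n))
    return total

# Toy example: 4 teams, symmetric random distances, a hand-made 2-round slice.
rng = np.random.default_rng(1)
d = rng.integers(10, 100, size=(4, 4)).astype(float)
d = (d + d.T) / 2.0
np.fill_diagonal(d, 0.0)
rounds = [[(0, 1), (2, 3)], [(3, 0), (1, 2)]]
print(total_travel(rounds, d))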


AI ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 261-273
Author(s):  
Mario Manzo ◽  
Simone Pellino

COVID-19 has been a great challenge for humanity since the year 2020. The whole world has made a huge effort to find an effective vaccine in order to save those not yet infected. The alternative solution is early diagnosis, carried out through real-time polymerase chain reaction (RT-PCR) tests or thoracic computed tomography (CT) scan images. Deep learning algorithms, specifically convolutional neural networks, represent a methodology for image analysis: they optimize the classification design task, which is essential for an automatic approach with different types of images, including medical ones. In this paper, we adopt pretrained deep convolutional neural network architectures in order to diagnose COVID-19 disease from CT images. Our idea is inspired by what the whole of humanity is achieving, as a set of multiple contributions is better than any single one in the fight against the pandemic. First, we adapt, and subsequently retrain for our task, some neural architectures that have been adopted in other application domains. Second, we combine the knowledge extracted from images by these neural architectures in an ensemble classification context. The experimental phase is performed on a CT image dataset, and the results obtained show the effectiveness of the proposed approach with respect to state-of-the-art competitors.
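The ensemble step lends itself to a simple soft-voting sketch in Python: each fine-tuned network emits class probabilities for a CT slice, and the ensemble fuses them before taking the argmax. The weights and probability values below are hypothetical, and the actual combination rule used by the authors may differ.

import numpy as np

def ensemble_predict(prob_per_model, weights=None):
    """Fuse per-model class probabilities with a (weighted) soft vote."""
    probs = np.asarray(prob_per_model, dtype=float)     # shape: (models, classes)
    w = np.ones(len(probs)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = (w[:, None] * probs).sum(axis=0)            # weighted average of softmax outputs
    return fused.argmax(), fused

# Three hypothetical backbones scoring one image as [P(non-COVID), P(COVID)].
outputs = [[0.35, 0.65], [0.55, 0.45], [0.20, 0.80]]
label, fused = ensemble_predict(outputs, weights=[1.0, 0.8, 1.2])
print(label, fused)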


2021 ◽  
Author(s):  
Danila Piatov ◽  
Sven Helmer ◽  
Anton Dignös ◽  
Fabio Persia

We develop a family of efficient plane-sweeping interval join algorithms for evaluating a wide range of interval predicates such as Allen’s relationships and parameterized relationships. Our technique is based on a framework, components of which can be flexibly combined in different manners to support the required interval relation. In temporal databases, our algorithms can exploit a well-known and flexible access method, the Timeline Index, thus expanding the set of operations it supports even further. Additionally, employing a compact data structure, the gapless hash map, we utilize the CPU cache efficiently. In an experimental evaluation, we show that our approach is several times faster and scales better than state-of-the-art techniques, while being much better suited for real-time event processing.
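For intuition, a bare-bones plane-sweep overlap join in Python is sketched below; it sweeps both inputs in start order and keeps an active set per relation. The tuple layout and half-open intervals are assumptions, and the sketch omits the Timeline Index, the gapless hash map, and the generalization to other Allen relationships.

def overlap_join(r, s):
    """Return pairs (a, b), a from r and b from s, whose [start, end) ranges overlap."""
    r = sorted(r)
    s = sorted(s)
    active_r, active_s, out = [], [], []
    i = j = 0
    while i < len(r) or j < len(s):
        # Advance the relation whose next interval starts first.
        if j >= len(s) or (i < len(r) and r[i][0] <= s[j][0]):
            cur = r[i]; i += 1
            active_s[:] = [b for b in active_s if b[1] > cur[0]]   # drop expired intervals
            out.extend((cur, b) for b in active_s)                 # cur overlaps all survivors
            active_r.append(cur)
        else:
            cur = s[j]; j += 1
            active_r[:] = [a for a in active_r if a[1] > cur[0]]
            out.extend((a, cur) for a in active_r)
            active_s.append(cur)
    return out

r = [(1, 5, "r1"), (4, 9, "r2")]
s = [(2, 3, "s1"), (6, 10, "s2")]
print(overlap_join(r, s))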


Author(s):  
Sebastian Hoppe Nesgaard Jensen ◽  
Mads Emil Brix Doest ◽  
Henrik Aanæs ◽  
Alessio Del Bue

Non-rigid structure from motion (NRSfM) is a long-standing and central problem in computer vision, and its solution is necessary for obtaining 3D information from multiple images when the scene is dynamic. A main issue for the further development of this important computer vision topic is the lack of high-quality data sets. We address this issue by presenting a data set created for this purpose, which is made publicly available and is considerably larger than the previous state of the art. To validate the applicability of this data set, and to provide an investigation into the state of the art of NRSfM, including potential directions forward, we present a benchmark and a scrupulous evaluation using this data set. The benchmark evaluates 18 different methods with available code that reasonably span the state of the art in sparse NRSfM. This new public data set and evaluation protocol will provide benchmark tools for further development in this challenging field.


2020 ◽  
Vol 34 (07) ◽  
pp. 10607-10614 ◽  
Author(s):  
Xianhang Cheng ◽  
Zhenzhong Chen

Learning to synthesize non-existing frames from the original consecutive video frames is a challenging task. Recent kernel-based interpolation methods predict pixels with a single convolution process to replace the dependency on optical flow. However, when the scene motion is larger than the pre-defined kernel size, these methods yield poor results even though they take thousands of neighboring pixels into account. To solve this problem, in this paper we propose using deformable separable convolution (DSepConv) to adaptively estimate kernels, offsets, and masks, allowing the network to obtain information from far fewer but more relevant pixels. In addition, we show that kernel-based methods and conventional flow-based methods are specific instances of the proposed DSepConv. Experimental results demonstrate that our method significantly outperforms other kernel-based interpolation methods and performs on par with or even better than the state-of-the-art algorithms, both qualitatively and quantitatively.
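A toy, per-pixel Python sketch of the deformable separable convolution idea follows: the output pixel is a mask-weighted sum of input samples taken at kernel positions displaced by learned offsets, with the 2D kernel factored into 1D vertical and horizontal kernels. Nearest-neighbor sampling and random stand-in parameters replace the network's bilinear sampling and learned predictions, so this is only a structural illustration.

import numpy as np

def dsepconv_pixel(frame, y, x, kv, kh, offsets, mask):
    """Synthesize one output pixel from deformed, masked, separable kernel samples."""
    k = len(kv)
    half = k // 2
    out = 0.0
    for i in range(k):
        for j in range(k):
            dy, dx = offsets[i, j]                        # learned displacement (stand-in)
            sy = int(np.clip(round(y + i - half + dy), 0, frame.shape[0] - 1))
            sx = int(np.clip(round(x + j - half + dx), 0, frame.shape[1] - 1))
            out += kv[i] * kh[j] * mask[i, j] * frame[sy, sx]
    return out

rng = np.random.default_rng(0)
frame = rng.random((32, 32))
k = 5
kv = np.full(k, 1 / k)                                    # vertical 1-D kernel
kh = np.full(k, 1 / k)                                    # horizontal 1-D kernel
offsets = rng.normal(scale=1.5, size=(k, k, 2))           # stand-in for predicted offsets
mask = rng.random((k, k))                                 # stand-in for predicted mask
print(dsepconv_pixel(frame, 16, 16, kv, kh, offsets, mask))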


Soil Research ◽  
2015 ◽  
Vol 53 (8) ◽  
pp. 907 ◽  
Author(s):  
David Clifford ◽  
Yi Guo

Given the wide variety of ways one can measure and record soil properties, it is not uncommon to have multiple overlapping predictive maps for a particular soil property. One is then faced with the challenge of choosing the best prediction at a particular point, either by selecting one of the maps or by combining them in some optimal manner. This question was recently examined in detail by Malone et al. (2014), who compared four different methods for combining a digital soil mapping product with a disaggregation product based on legacy data. These authors also examined how to compute confidence intervals for the resulting map based on the confidence intervals associated with the original input products. In this paper, we propose a new method for combining models, called adaptive gating, which is inspired by the use of gating functions in mixtures of experts, a machine learning approach to forming hierarchical classifiers. We compare it with two standard approaches: inverse-variance weights and a regression-based approach. One of the benefits of adaptive gating is that it allows the weights to vary with covariate information or across geographic space; as such, it explicitly takes full advantage of the spatial nature of the maps we are trying to blend. We also suggest a conservative method for combining confidence intervals. We show that the root-mean-squared error of predictions from the adaptive gating approach is similar to that of the other standard approaches under cross-validation. Under independent validation, however, the adaptive gating approach works better than the alternatives, and as such it warrants further study in other areas of application as well as further development to reduce its computational complexity.
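The two blending schemes compared above can be sketched in a few lines of Python: fixed inverse-variance weights versus a gate whose weight varies with a covariate (here a toy logistic gate on elevation). The gate's functional form and all numbers are illustrative assumptions, not the fitted model from the paper.

import numpy as np

def inverse_variance_blend(pred_a, pred_b, var_a, var_b):
    """Classic fixed weighting: more precise map gets more weight everywhere."""
    w = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w * pred_a + (1 - w) * pred_b

def gated_blend(pred_a, pred_b, covariate, alpha=-2.0, beta=0.01):
    """Adaptive-gating-style blend: the weight on map A depends on a covariate."""
    w = 1 / (1 + np.exp(-(alpha + beta * covariate)))     # toy logistic gate
    return w * pred_a + (1 - w) * pred_b

pred_a, pred_b = 6.1, 5.4            # e.g. two pH predictions at one location
print(inverse_variance_blend(pred_a, pred_b, var_a=0.20, var_b=0.35))
print(gated_blend(pred_a, pred_b, covariate=350.0))       # 350 m elevation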


1992 ◽  
Vol 57 (2) ◽  
pp. 415-424 ◽  
Author(s):  
Upendra K. Shukla ◽  
Raieshwar Singh ◽  
J. M. Khanna ◽  
Anil K. Saxena ◽  
Hemant K. Singh ◽  
...  

Antiparasitic and antidepressant activities exhibited by tetramisole (I) and its enantiomers prompted the study of its structural analogs trans-2-[N-(2-hydroxy-1,2,3,4-tetrahydronaphthalene/indane-1-yl)]iminothiazolidine (VIII/IX) and 2,3,4a,5,6,10b-hexahydronaphtho[1',2':4,5]-imidazo[2,1-b]thiazole (XII), 2,3,4a,5-tetrahydro-9bH-indeno[1',2':4,5]imidazo[2,1-b]thiazole (XIII), and 2,3,4a,5-tetrahydro-9bH-indeno[1',2':4,5]imidazo[2,1-b]thiazole (XVI), and a homolog 3,4,6,7-tetrahydro-7-phenyl-2H-imidazo[2,1-b]-1,3-thiazine (XX). While none of these compounds showed any noteworthy antiparasitic activity, the trans-2-[N-(2-hydroxy-1,2,3,4-tetrahydronaphthalene-1-yl)]iminothiazolidine (VIII) has shown marked antidepressant activity, better than imipramine in the tests used, and provides a new structural lead for antidepressants.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ying Li ◽  
Hang Sun ◽  
Shiyao Feng ◽  
Qi Zhang ◽  
Siyu Han ◽  
...  

Background: Long noncoding RNAs (lncRNAs) play important roles in multiple biological processes. Identifying lncRNA–protein interactions (LPIs) is key to understanding lncRNA functions. Although some computational methods for LPIs have been developed, the LPI prediction problem remains challenging. How to integrate multimodal features from more perspectives and how to build deep learning architectures with better recognition performance have been ongoing foci of LPI research. Results: We present Capsule-LPI, a novel multichannel capsule network framework that integrates multimodal features for LPI prediction. Capsule-LPI integrates four groups of multimodal features: sequence features, motif information, physicochemical properties, and secondary structure features. It is composed of four feature-learning subnetworks and one capsule subnetwork. Through comprehensive experimental comparisons and evaluations, we demonstrate that both the multimodal features and the architecture of the multichannel capsule network significantly improve the performance of LPI prediction. The experimental results show that Capsule-LPI performs better than existing state-of-the-art tools: its precision is 87.3%, a 1.7% improvement, and its F-value is 92.2%, a 1.4% improvement. Conclusions: This study provides a novel and feasible LPI prediction tool based on the integration of multimodal features and a capsule network. A web server (http://csbg-jlu.site/lpc/predict) has been developed for the convenience of users.
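As a loose structural sketch (not the authors' network), the multichannel idea can be expressed in PyTorch as one small subnetwork per feature modality whose encodings are concatenated before a final classifier; a plain linear head stands in for the capsule subnetwork, and all layer sizes and dimensions are made up for illustration.

import torch
import torch.nn as nn

class MultiChannelLPI(nn.Module):
    """Toy multichannel model: one subnetwork per feature modality, fused at the end."""
    def __init__(self, dims=(64, 32, 16, 24), hidden=32):
        super().__init__()
        # One feature-learning subnetwork per modality (sequence, motif,
        # physicochemical, secondary structure) with hypothetical input sizes.
        self.channels = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims
        )
        # A linear head replaces the capsule subnetwork in this sketch.
        self.head = nn.Linear(hidden * len(dims), 2)

    def forward(self, features):
        # `features` is a list of per-modality tensors, one per channel.
        encoded = [net(x) for net, x in zip(self.channels, features)]
        return self.head(torch.cat(encoded, dim=1))

model = MultiChannelLPI()
batch = [torch.randn(4, d) for d in (64, 32, 16, 24)]
print(model(batch).shape)        # torch.Size([4, 2]) -> interaction / no interaction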

