Control of DC Motors to Guide Unmanned Underwater Vehicles

2021 ◽  
Vol 11 (5) ◽  
pp. 2144
Author(s):  
Timothy Sands

Many research manuscripts propose new methodologies, while others compare several state-of-the-art methods to ascertain the best method for a given application. This manuscript does both by introducing deterministic artificial intelligence (D.A.I.) to control direct current motors used by unmanned underwater vehicles (amongst other applications), and directly comparing the performance of three state-of-the-art nonlinear adaptive control techniques. D.A.I. involves the assertion of self-awareness statements and uses optimal (in a 2-norm sense) learning to compensate for the deleterious effects of error sources. This research reveals that deterministic artificial intelligence yields 4.8% lower mean and 211% lower standard deviation of tracking errors as compared to the best modeling method investigated (indirect self-tuner without process zero cancellation and minimum phase plant). The improved performance cannot be attributed to superior estimation. Coefficient estimation was merely on par with the best alternative methods; some coefficients were estimated more accurately, others less. Instead, the superior performance seems to be attributable to the modeling method. One noteworthy feature is that D.A.I. very closely followed a challenging square wave without overshoot—successfully settling at each switch of the square wave—while all of the other state-of-the-art methods were unable to do so.
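The "optimal (in a 2-norm sense) learning" described above is, at its core, least-squares parameter estimation. A minimal sketch follows, assuming a hypothetical linear-in-parameters motor model y = Φθ; the regressor structure, coefficient values, and data below are illustrative stand-ins, not the paper's:

```python
import numpy as np

# Hypothetical DC-motor regression: measurements y = Phi @ theta + noise,
# where theta holds unknown motor coefficients (e.g., inertia, damping).
# The 2-norm-optimal estimate is the ordinary least-squares solution.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, 0.5])                    # assumed true coefficients
Phi = rng.normal(size=(100, 2))                      # stand-in regressor matrix
y = Phi @ theta_true + 0.01 * rng.normal(size=100)   # noisy measurements

# Solve min ||Phi @ theta - y||_2 for the coefficient estimate.
theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With low measurement noise, `theta_hat` recovers the assumed coefficients closely, which is the sense in which the learning step compensates for error sources.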

2020 ◽  
Vol 8 (8) ◽  
pp. 578
Author(s):  
Timothy Sands

The major premise of deterministic artificial intelligence (D.A.I.) is to assert deterministic self-awareness statements, based either on the physics of the underlying problem or on system identification, to establish governing differential equations. The key distinction between D.A.I. and the ubiquitous stochastic methods for artificial intelligence is the adoption of first principles wherever available. One benefit of applying D.A.I. principles over ubiquitous methods is the ease of the approach once the re-parameterization is derived, as done here. While the method is deterministic, researchers need only understand linear regression to understand the optimality of both self-awareness and learning. The approach necessitates full (autonomous) expression of a desired trajectory. Inspired by the exponential solution of ordinary differential equations and Euler's expression of exponential solutions in terms of sinusoidal functions, desired trajectories are formulated using such functions. Deterministic self-awareness statements, using the autonomous expression of desired trajectories with buoyancy control neglected, are asserted to control underwater vehicles in ideal cases only; application to real-world deleterious effects is reserved for future study owing to the length of this manuscript. In totality, the proposed methodology automates control and learning, requiring only very simple user inputs, namely the desired initial and final states and the desired initial and final times, while tuning is eliminated completely.
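The autonomous trajectory expression can be sketched as follows. The half-cosine blend below is an assumed functional form (the abstract specifies only that trajectories are built from sinusoidal functions), using exactly the four user inputs named: desired initial and final states and times:

```python
import math

def desired_trajectory(x0, xf, t0, tf, t):
    """Sinusoidal rest-to-rest trajectory between (t0, x0) and (tf, xf).

    Assumed form (one common sinusoidal blend): position follows a
    half-cosine, so velocity is exactly zero at both endpoints --
    a fully autonomous expression of the desired motion from only
    the four user inputs, with no tuning parameters.
    """
    s = (t - t0) / (tf - t0)          # normalized time
    s = min(max(s, 0.0), 1.0)         # hold endpoints outside [t0, tf]
    pos = x0 + (xf - x0) * (1.0 - math.cos(math.pi * s)) / 2.0
    vel = (xf - x0) * (math.pi / (tf - t0)) * math.sin(math.pi * s) / 2.0
    return pos, vel

# Endpoints are met exactly, with zero velocity at each end.
p0, v0 = desired_trajectory(0.0, 1.0, 0.0, 10.0, 0.0)
pf, vf = desired_trajectory(0.0, 1.0, 0.0, 10.0, 10.0)
```

Because the blend starts and ends at rest, it is a natural candidate for the square-wave-like setpoint switches discussed in the companion work.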


Author(s):  
C. A. Danbaki ◽  
N. C. Onyemachi ◽  
D. S. M. Gado ◽  
G. S. Mohammed ◽  
D. Agbenu ◽  
...  

This study is a survey of state-of-the-art artificial intelligence and image processing methods for precision agriculture, covering crop management, pest and disease management, soil and irrigation management, and livestock farming, along with the challenges these methods present. Precision agriculture (PA) is the application of current technologies to conventional farming methods. These methods have proved to be highly efficient, sustainable, and profitable to the farmer, thereby boosting the economy. The application of precision agriculture is expected to increase productivity, which ultimately yields profit for the farmer, greater sustainability for society, and an improved economy.


Author(s):  
Lu Cheng ◽  
Ahmadreza Mosallanezhad ◽  
Paras Sheth ◽  
Huan Liu

There have been increasing concerns about Artificial Intelligence (AI) due to its unfathomable potential power. To make AI address ethical challenges and avoid undesirable outcomes, researchers have proposed developing socially responsible AI (SRAI). One approach to SRAI is causal learning (CL). We survey state-of-the-art CL methods for SRAI. We begin by examining seven CL tools that can enhance the social responsibility of AI, then review how existing works have used these tools to tackle issues in developing SRAI, such as fairness. The goal of this survey is to bring to the forefront the potential and promise of CL for SRAI.


2020 ◽  
Author(s):  
Tahir Mahmood ◽  
Muhammad Owais ◽  
Kyoung Jun Noh ◽  
Hyo Sik Yoon ◽  
Adnan Haider ◽  
...  

BACKGROUND Accurate nuclei segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for the determination of cell phenotype, nuclear morphometrics, cell classification, and the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intra-class variations, and diverse cell morphologies. Consequently, the manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast, robust, and require less human effort, can be used. Recently, several AI-based nuclei segmentation techniques have been proposed. They have shown a significant performance improvement on this task, but there is room for further improvement. Thus, we propose an AI-based nuclei segmentation technique that adopts a new nuclei segmentation network empowered by residual skip connections. OBJECTIVE The aim of this study was to develop an AI-based nuclei segmentation method for histopathology images of multiple organs. METHODS Our proposed residual-skip-connections-based nuclei segmentation network (R-NSN) comprises two main stages, stain normalization and nuclei segmentation, as shown in Figure 2. In the first stage, a histopathology image is stain-normalized to balance color and intensity variation. It is then used as the input to the R-NSN in the second stage, which outputs a segmented image. RESULTS Experiments were performed on two publicly available datasets: 1) The Cancer Genome Atlas (TCGA) and 2) Triple-Negative Breast Cancer (TNBC).
The results show that our proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, a Dice coefficient of 0.8084, and an F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, a Dice coefficient of 0.8441, a precision of 0.8352, a recall of 0.8306, and an F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods. CONCLUSIONS The proposed R-NSN retains crucial features by using residual connectivity from the encoder to the decoder, and it uses only a few layers, which reduces the computational cost of the model. The selection of a good stain normalization technique, the effective use of residual connections to avoid information loss, and the use of only a few layers to reduce computational cost yielded outstanding results. Thus, our nuclei segmentation method is robust and superior to the state-of-the-art methods. We expect that this study will contribute to the development of computational pathology software for research and clinical use and enhance the impact of computational pathology.
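The stain-normalization stage can be illustrated with a minimal Reinhard-style sketch (per-channel mean/std matching). The paper's actual normalization technique is not specified in this abstract, and the target statistics below are hypothetical:

```python
import numpy as np

def stain_normalize(src, target_mean, target_std):
    """Match each color channel's mean/std to target statistics.

    A simple Reinhard-style normalization; real stain-normalization
    pipelines typically work in the LAB color space, omitted here
    for brevity.
    """
    src = src.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        ch = src[..., c]
        std = ch.std()
        z = (ch - ch.mean()) / (std if std > 0 else 1.0)   # zero mean, unit std
        out[..., c] = z * target_std[c] + target_mean[c]   # impose target stats
    return np.clip(out, 0, 255).astype(np.uint8)

# Stand-in slide tile and hypothetical target statistics.
rng = np.random.default_rng(1)
img = rng.integers(60, 200, size=(64, 64, 3)).astype(np.uint8)
norm = stain_normalize(img, target_mean=(180, 120, 160), target_std=(20, 15, 18))
```

After normalization, every tile shares the same per-channel statistics, which is the property the segmentation network in stage 2 relies on.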


2021 ◽  
Author(s):  
TH Nguyen-Vo ◽  
QH Nguyen ◽  
TTT Do ◽  
TN Nguyen ◽  
S Rahardja ◽  
...  

© 2019 Nguyen-Vo et al. Background: Pseudouridine modification is the most common of the various kinds of RNA modification occurring in both prokaryotes and eukaryotes. This biochemical event has been proven to occur in multiple types of RNA, including rRNA, mRNA, tRNA, and nuclear/nucleolar RNA. Hence, gaining a holistic understanding of pseudouridine modification can contribute to drug discovery and gene therapies. Although some laboratory techniques achieve moderately good outcomes in pseudouridine identification, they are costly and require skilled expertise. We propose iPseU-NCP, an efficient computational framework that predicts pseudouridine sites using the Random Forest (RF) algorithm combined with nucleotide chemical properties (NCP) generated from RNA sequences. The benchmark dataset collected from Chen et al. (2016) was used to develop iPseU-NCP and to compare its performance fairly with other methods. Results: Under the same experimental settings, compared with three state-of-the-art methods (iPseU-CNN, PseUI, and iRNA-PseU), the Matthews correlation coefficient (MCC) of our model increased by about 20.0%, 55.0%, and 109.0% on the H. sapiens (H_200) dataset and by about 6.5%, 35.0%, and 150.0% on the S. cerevisiae (S_200) dataset, respectively. This significant improvement in MCC is important because it reflects the stability and performance of our model. On those two independent test datasets, our model also achieved higher accuracy, with success rates boosted by 7.0%, 13.0%, and 20.0%, and by 2.0%, 9.5%, and 25.0%, compared to iPseU-CNN, PseUI, and iRNA-PseU, respectively. For the majority of other evaluation metrics, iPseU-NCP also demonstrated superior performance. Conclusions: iPseU-NCP, combining RF with NCP-encoded features, outperformed existing state-of-the-art methods in identifying pseudouridine sites. This also gives an optimistic outlook for addressing biological issues related to human diseases.


1998 ◽  
Vol 51 (1) ◽  
pp. 79-105 ◽  
Author(s):  
Paul J. Craven ◽  
Robert Sutton ◽  
Roland S. Burns

In recent years, both the offshore industry and the navies of the world have become increasingly interested in the potential operational usage of unmanned underwater vehicles. This paper provides a comprehensive review of a number of modern control approaches and artificial intelligence techniques which have been applied to the autopilot design problem for such craft.


2019 ◽  
Vol 1 (3) ◽  
pp. 289-308 ◽  
Author(s):  
Lingbing Guo ◽  
Qingheng Zhang ◽  
Wei Hu ◽  
Zequn Sun ◽  
Yuzhong Qu

Knowledge graph (KG) completion aims at filling in the missing facts in a KG, where a fact is typically represented as a triple of the form (head, relation, tail). Traditional KG completion methods require two-thirds of a triple to be provided (e.g., head and relation) in order to predict the remaining element. In this paper, we propose a new method that extends multi-layer recurrent neural networks (RNNs) to model triples in a KG as sequences. It obtains state-of-the-art performance on the common entity prediction task, i.e., given the head (or tail) and the relation, predicting the tail (or head), on two benchmark data sets. Furthermore, the deep sequential character of our method enables it to predict the relation given the head (or tail) only, and even to predict whole triples. Our experiments on these two new KG completion tasks demonstrate that our method achieves superior performance compared with several alternative methods.
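The sequence framing can be sketched as follows; the single-layer tanh RNN cell, toy vocabulary sizes, and random weights below are illustrative stand-ins for the paper's trained multi-layer architecture. A (head, relation) pair is fed as an embedded token sequence, and the final hidden state scores every entity as a candidate tail:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, d = 5, 3, 8                 # toy vocabulary sizes, embedding dim

E_ent = rng.normal(0, 0.1, (n_ent, d))    # entity embeddings
E_rel = rng.normal(0, 0.1, (n_rel, d))    # relation embeddings
Wx = rng.normal(0, 0.1, (d, d))           # input-to-hidden weights
Wh = rng.normal(0, 0.1, (d, d))           # hidden-to-hidden weights
Wo = rng.normal(0, 0.1, (d, n_ent))       # hidden-to-entity output weights

def predict_tail(head, rel):
    """Run the token sequence (head, relation) through an RNN;
    softmax over all entities gives candidate-tail probabilities."""
    h = np.zeros(d)
    for x in (E_ent[head], E_rel[rel]):
        h = np.tanh(x @ Wx + h @ Wh)      # one recurrent step per token
    logits = h @ Wo
    p = np.exp(logits - logits.max())     # numerically stable softmax
    return p / p.sum()

probs = predict_tail(head=2, rel=1)
```

The same machinery extends naturally to shorter prefixes (head only) or longer outputs (whole triples), which is the sequential flexibility the abstract highlights.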


BMC Genomics ◽  
2019 ◽  
Vol 20 (S10) ◽  
Author(s):  
Thanh-Hoang Nguyen-Vo ◽  
Quang H. Nguyen ◽  
Trang T.T. Do ◽  
Thien-Ngan Nguyen ◽  
Susanto Rahardja ◽  
...  

Abstract Background Pseudouridine modification is the most common of the various kinds of RNA modification occurring in both prokaryotes and eukaryotes. This biochemical event has been proven to occur in multiple types of RNA, including rRNA, mRNA, tRNA, and nuclear/nucleolar RNA. Hence, gaining a holistic understanding of pseudouridine modification can contribute to drug discovery and gene therapies. Although some laboratory techniques achieve moderately good outcomes in pseudouridine identification, they are costly and require skilled expertise. We propose iPseU-NCP – an efficient computational framework that predicts pseudouridine sites using the Random Forest (RF) algorithm combined with nucleotide chemical properties (NCP) generated from RNA sequences. The benchmark dataset collected from Chen et al. (2016) was used to develop iPseU-NCP and to compare its performance fairly with other methods. Results Under the same experimental settings, compared with three state-of-the-art methods (iPseU-CNN, PseUI, and iRNA-PseU), the Matthews correlation coefficient (MCC) of our model increased by about 20.0%, 55.0%, and 109.0% on the H. sapiens (H_200) dataset and by about 6.5%, 35.0%, and 150.0% on the S. cerevisiae (S_200) dataset, respectively. This significant improvement in MCC is important because it reflects the stability and performance of our model. On those two independent test datasets, our model also achieved higher accuracy, with success rates boosted by 7.0%, 13.0%, and 20.0%, and by 2.0%, 9.5%, and 25.0%, compared to iPseU-CNN, PseUI, and iRNA-PseU, respectively. For the majority of other evaluation metrics, iPseU-NCP also demonstrated superior performance. Conclusions iPseU-NCP, combining RF with NCP-encoded features, outperformed existing state-of-the-art methods in identifying pseudouridine sites. This also gives an optimistic outlook for addressing biological issues related to human diseases.
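The NCP feature step can be sketched as follows. The three-bit code per nucleotide (ring structure, functional group, hydrogen bond) is the encoding commonly used in the iRNA literature; its exact property ordering here is an assumption:

```python
import numpy as np

# NCP code per nucleotide: (ring structure, functional group, hydrogen bond).
# Purines (A, G) have two rings; A and C carry an amino group; A-U pairs form
# weak hydrogen bonds. The property ordering is an assumption.
NCP = {"A": (1, 1, 1), "C": (0, 1, 0), "G": (1, 0, 0), "U": (0, 0, 1)}

def encode_ncp(seq):
    """Flatten an RNA sequence into a 3-bit-per-nucleotide feature vector,
    suitable as input to a classifier such as a Random Forest."""
    return np.array([bit for nt in seq for bit in NCP[nt]], dtype=np.float64)

features = encode_ncp("AUGC")   # 4 nucleotides -> 12 features
```

In a pipeline like the one described above, these fixed-length vectors (one per candidate site window) would then be fed to a Random Forest classifier, e.g. scikit-learn's `RandomForestClassifier`.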


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6203
Author(s):  
Simon Watson ◽  
Daniel A. Duecker ◽  
Keir Groves

The inspection of aquatic environments is a challenging activity, which is made more difficult if the environment is complex or confined, such as those found in nuclear storage facilities and accident sites, marinas and boatyards, liquid storage tanks, or flooded tunnels and sewers. Human inspections of these environments are often dangerous or infeasible, so remote inspection using unmanned underwater vehicles (UUVs) is used. Due to access restrictions and environmental limitations, such as low illumination levels, turbidity, and a lack of salient features, traditional localisation systems developed for use in large bodies of water cannot be used. This means that UUV capabilities are severely restricted to manually controlled, low-quality visual inspections, generating non-geospatially located data. Localisation of UUVs in these environments would enable autonomous behaviour and the development of accurate maps. This article presents a review of the state of the art in localisation technologies for these environments and identifies areas of future research to overcome the challenges posed.

