Pipeline Implementation of Polyphase PSO for Adaptive Beamforming Algorithm

2017 ◽  
Vol 2017 ◽  
pp. 1-12
Author(s):  
Shaobing Huang ◽  
Li Yu ◽  
Fangjian Han ◽  
Yiwen Luo

Adaptive beamforming is a powerful technique for anti-interference, but searching for and tracking the optimal solution is a great challenge. In this paper, a parallel Particle Swarm Optimization (PSO) algorithm is proposed to track the optimal solution of an adaptive beamformer, owing to its strong global search capability. Because the algorithm's search is naturally parallel, a novel Field Programmable Gate Array (FPGA) pipeline architecture based on a polyphase filter bank structure is designed. To perform computations with large dynamic range and high precision, the implementation uses efficient user-defined floating-point arithmetic. In addition, a polyphase architecture is proposed to achieve a fully pipelined implementation; for PSO with a large population, this architecture significantly saves hardware resources while achieving high performance. Finally, simulation results are presented from co-simulation with ModelSim and SIMULINK.
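As an illustration of the optimization loop described in this abstract, the sketch below shows how PSO might search for complex beamformer weights that suppress an interferer while keeping a distortionless response toward the look direction. The array geometry, swarm parameters, and penalized fitness function are assumptions for illustration only; they do not reproduce the paper's polyphase FPGA pipeline or its floating-point format.

```python
import numpy as np

# Minimal PSO sketch for adaptive beamforming weights (illustrative only).
rng = np.random.default_rng(0)

N = 8                        # number of array elements (assumed)
theta0, theta_i = 0.0, 0.4   # look and interference directions in radians (assumed)

def steering(theta):
    # Uniform linear array with half-wavelength spacing
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

a0, ai = steering(theta0), steering(theta_i)
# Simulated snapshots: strong interferer plus weak noise
snap = (10 * ai[None, :] * rng.standard_normal((200, 1))
        + 0.1 * (rng.standard_normal((200, N)) + 1j * rng.standard_normal((200, N))))

def fitness(x):
    w = x[:N] + 1j * x[N:]                    # decode particle into complex weights
    power = np.mean(np.abs(snap @ w.conj()) ** 2)   # output interference + noise power
    distortion = np.abs(w.conj() @ a0 - 1.0)  # deviation from distortionless response
    return power + 100.0 * distortion         # penalized objective (lower is better)

P, iters = 30, 200
x = rng.uniform(-1, 1, (P, 2 * N))
v = np.zeros_like(x)
pbest, pval = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pval.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((P, 1)), rng.random((P, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    val = np.array([fitness(p) for p in x])
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmin()]

print("best fitness:", pval.min())
```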

Author(s):  
Sergey Pisetskiy ◽  
Mehrdad Kermani

This paper presents an improved design, complete analysis, and prototype development of high torque-to-mass ratio Magneto-Rheological (MR) clutches. The proposed MR clutches are intended as the main actuation mechanism of a robotic manipulator with five degrees of freedom. Multiple steps to increase the torque-to-mass ratio of the clutch are evaluated and implemented in one design. First, we focus on the Hall sensor configuration. Our proposed MR clutches feature embedded Hall sensors for indirect torque measurement, and a new arrangement of the sensors with no effect on the magnetic reluctance of the clutch is presented. Second, we improve the magnetization of the MR clutch, utilizing a new hybrid design that combines an electromagnetic coil and a permanent magnet for an improved torque-to-mass ratio. Third, a reduction of the gap size in the hybrid MR clutch is introduced, and the effect of this reduction on the maximum torque and the dynamic range of the MR clutch is investigated. Finally, the design of a pair of MR clutches with a shared magnetic core for antagonistic actuation of the robot joint is presented and experimentally validated. The details of each approach are discussed, and the results of finite element analysis are used to highlight the required engineering steps and to demonstrate the improvements achieved. Using the proposed design, several prototypes of the MR clutch with torque capacities ranging from 15 to 200 N·m are developed, assembled, and tested. The experimental results demonstrate the performance of the proposed design and validate the accuracy of the analysis used for the development.
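For readers unfamiliar with how disc-type MR clutch torque scales with geometry and field, the following is a minimal sketch of the standard Bingham-plasticity torque estimate often used in such design studies. The yield stress, radii, fluid gap, and disc count are placeholders, not the parameters of the prototypes described above.

```python
import numpy as np

# Bingham-model torque estimate for a disc-type MR clutch (sketch only;
# fluid properties and geometry are illustrative placeholders).
def mr_clutch_torque(tau_y, r_o, r_i, mu, omega, gap, n_gaps=1):
    """tau_y: field-dependent yield stress [Pa], r_o/r_i: outer/inner disc radii [m],
    mu: plastic viscosity [Pa*s], omega: slip speed [rad/s], gap: fluid gap [m]."""
    t_yield = (2.0 * np.pi * tau_y / 3.0) * (r_o**3 - r_i**3)        # field-controlled term
    t_visc = (np.pi * mu * omega / (2.0 * gap)) * (r_o**4 - r_i**4)  # viscous drag term
    return n_gaps * (t_yield + t_visc)

# Only the viscous term depends directly on the gap in this model; the main
# benefit of a smaller gap in practice is lower magnetic reluctance, which
# raises the achievable tau_y for the same coil current.
for gap in (1.0e-3, 0.5e-3):
    T = mr_clutch_torque(tau_y=45e3, r_o=0.06, r_i=0.03, mu=0.3,
                         omega=10.0, gap=gap, n_gaps=8)
    print(f"gap = {gap * 1e3:.1f} mm -> T ≈ {T:.1f} N·m")
```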


1995 ◽  
Vol 117 (1) ◽  
pp. 155-157 ◽  
Author(s):  
F. C. Anderson ◽  
J. M. Ziegler ◽  
M. G. Pandy ◽  
R. T. Whalen

We have examined the feasibility of using massively parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single-support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14-degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel reduce this time to practical levels: the optimal solution for gait can be found with about 77 hours of CPU time on the Cray and about 88 hours on the Intel. Although the overall speeds of the two machines were similar, the unique capabilities of each are better suited to different portions of the computational algorithm. The Intel was best suited to computing the derivatives of the performance criterion and the constraints, whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
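A minimal sketch of the kind of workload attributed here to the MIMD machine: forward-difference derivatives of a performance criterion with respect to many control parameters, with each perturbed evaluation dispatched to a separate worker. The objective function below is a trivial stand-in; the 14-degree-of-freedom gait simulation is not reproduced.

```python
import numpy as np
from multiprocessing import Pool

# Stand-in performance criterion; in the study this would involve integrating
# the musculoskeletal model forward in time for a trial set of controls.
def performance(controls):
    return float(np.sum(np.sin(controls) ** 2))

def _perturbed(args):
    controls, i, h = args
    x = controls.copy()
    x[i] += h
    return performance(x)

def parallel_gradient(controls, h=1e-6, workers=4):
    """Forward-difference gradient; one perturbed evaluation per parallel task."""
    base = performance(controls)
    tasks = [(controls, i, h) for i in range(controls.size)]
    with Pool(workers) as pool:
        perturbed = pool.map(_perturbed, tasks)
    return (np.array(perturbed) - base) / h

if __name__ == "__main__":
    u = np.linspace(0.0, 1.0, 46)   # one value per musculotendinous actuator (assumed shape)
    print(parallel_gradient(u)[:5])
```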


2011 ◽  
Vol 383-390 ◽  
pp. 471-475
Author(s):  
Yong Bin Hong ◽  
Cheng Fa Xu ◽  
Mei Guo Gao ◽  
Li Zhi Zhao

A radar signal processing system characterized by high instantaneous dynamic range and low system latency is designed on a purpose-built signal processing platform. Loss of instantaneous dynamic range is a critical problem when digital signal processing is performed on fixed-point FPGAs. In this paper, the problem is resolved by growing the word length along the data path in accordance with the signal-to-noise ratio (SNR) gain of each algorithm. The distinctive software structure, featuring parallel pipelined processing and a “data flow drive” scheme, reduces the system latency to one coherent processing interval (CPI), which significantly improves the maximum tracking angular velocity of the monopulse tracking radar. In addition, several important electronic counter-countermeasures (ECCM) are incorporated into the signal processing system.
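To make the word-length argument concrete, here is a hedged bookkeeping sketch for fixed-point bit growth in a coherent processing chain: a coherent stage over N samples can grow the full-scale amplitude by up to N (so roughly log2(N) extra integer bits avoid overflow) and contributes about 10·log10(N) dB of SNR gain, at roughly 6 dB per bit worth preserving. The stage list is illustrative, not the authors' actual pipeline.

```python
import math

# Word-length budgeting sketch for a fixed-point radar chain (illustrative stages).
def extra_bits_for_stage(coherent_len):
    """Bit growth for a coherent sum/correlation over `coherent_len` samples."""
    overflow_bits = math.ceil(math.log2(coherent_len))     # full-scale growth up to N
    snr_gain_db = 10.0 * math.log10(coherent_len)          # coherent SNR gain ~ 10*log10(N)
    precision_bits = math.ceil(snr_gain_db / 6.02)         # ~6.02 dB of SNR per bit retained
    return overflow_bits, snr_gain_db, precision_bits

adc_bits = 12
stages = {"pulse compression (1024-point)": 1024,
          "coherent integration (64 pulses)": 64}

width = adc_bits
for name, n in stages.items():
    grow, gain_db, keep = extra_bits_for_stage(n)
    width += grow
    print(f"{name}: +{grow} bits against overflow, {gain_db:.1f} dB SNR gain "
          f"(~{keep} bits worth keeping) -> running word length {width} bits")
```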


Robotica ◽  
1996 ◽  
Vol 14 (3) ◽  
pp. 321-327 ◽  
Author(s):  
R.E. Ellis ◽  
O.M. Ismaeil ◽  
M.G. Lipsett

SUMMARY
A haptic interface is a computer-controlled mechanism designed to detect motion of a human operator without impeding that motion, and to feed back forces from a teleoperated robot or virtual environment. Design of such a device is not trivial, because of the many conflicting constraints the designer must face. As part of our research into haptics, we have developed a prototype planar mechanism. It has low apparent mass and damping, high structural stiffness, high force bandwidth, high force dynamic range, and an absence of mechanical singularities within its workspace. We present an analysis of the human-operator and mechanical constraints that apply to any such device, and propose methods for the evaluation of haptic interfaces. Our evaluation criteria are derived from the original task analysis, and are a first step towards a replicable methodology for comparing the performance of different devices.


2021 ◽  
Vol 59 (1) ◽  
pp. 227-232
Author(s):  
Anping Xu ◽  
Weidong Chen ◽  
Weijie Xie ◽  
Yajun Wang ◽  
Ling Ji

Objectives: Hemoglobin (Hb) variants are among the most common monogenic inherited disorders. We aimed to explore the prevalence and the hematological and molecular characteristics of Hb variants in southern China.
Methods: We collected blood samples from all patients with suspected variants found during HbA1c measurement via a cation-exchange high-performance liquid chromatography system (Bio-Rad Variant II Turbo 2.0) or a capillary electrophoresis method (Sebia Capillarys). Hematological analysis, Sanger sequencing, and gap-PCR were performed for these samples.
Results: Among the 311,024 patients tested, we found 1,074 Hb variant carriers, including 823 identified using Capillarys and 251 using Variant II Turbo 2.0, for a total carrier rate of 0.35%. We discovered 117 types of Hb variants (52 HBB, 47 HBA, and 18 HBD mutations), including 18 new mutations. The most common variant was Hb E, followed by Hb New York, Hb J-Bangkok, Hb Q-Thailand, Hb G-Coushatta, Hb G-Honolulu, Hb G-Taipei, and Hb Broomhill. Most heterozygotes for an Hb variant exhibited normal hematological parameters, whereas most compound heterozygotes for an Hb variant and thalassemia showed varying degrees of microcytic hypochromic anemia.
Conclusions: The prevalence of hemoglobin variants remains high, and these variants exhibit genetic diversity and widespread distribution in the population of southern China.
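A quick arithmetic check of the reported carrier rate, using only the counts given in the abstract:

```python
carriers = 823 + 251      # identified by Sebia Capillarys and by Bio-Rad Variant II Turbo 2.0
tested = 311_024
print(f"{carriers} / {tested} = {carriers / tested:.2%}")   # ≈ 0.35%, matching the reported rate
```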


2020 ◽  
Vol 10 (1) ◽  
pp. 56-64 ◽  
Author(s):  
Neeti Kashyap ◽  
A. Charan Kumari ◽  
Rita Chhikara

Web service composition is a valuable way to structure innovative applications for Internet-based business solutions, since existing services can be reused by other applications via the web. Because many services offer similar functionality, a suitable Service Composition (SC) must be selected: for each task in an SC there is a set of candidate services, from which one is picked according to certain criteria, and Quality of Service (QoS) is one such criterion. One of the most important capabilities offered by services in Internet of Things (IoT) based systems is dynamic composability. In this paper, two metaheuristic algorithms, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are applied to QoS-based service composition problems. QoS has become a critical issue in the management of web services because of the immense number of services that provide similar functionality with differing characteristics. QoS in service composition comprises several non-functional factors, such as service cost, execution time, availability, throughput, and reliability. Choosing an appropriate SC for IoT-based applications that optimizes the QoS parameters while satisfying the user’s requirements is the problem addressed in this paper. In simulation, the PSO algorithm is used to solve the SC problem in IoT and is then assessed and contrasted with GA. Experimental results demonstrate that GA improves the efficiency of solutions to the SC problem in IoT, helps identify the optimal solution, and shows preferable outcomes over PSO.
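The sketch below illustrates, under assumed QoS values and weights, the combinatorial problem that both GA and PSO optimize in such formulations: one candidate service is chosen per task, additive attributes (cost, execution time) are summed, multiplicative attributes (availability, reliability) are multiplied, and a weighted objective is minimized. A tiny GA stands in for either metaheuristic; it is not the paper's specific encoding or parameter set.

```python
import random

# QoS-aware service composition as a combinatorial search (illustrative values).
random.seed(1)
N_TASKS, N_CANDIDATES = 5, 4
candidates = [[{"cost": random.uniform(1, 10), "time": random.uniform(0.1, 2.0),
                "avail": random.uniform(0.9, 0.999), "rel": random.uniform(0.9, 0.999)}
               for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]

def fitness(plan):
    """Lower is better: cost and time add up; availability and reliability multiply."""
    svcs = [candidates[t][c] for t, c in enumerate(plan)]
    cost = sum(s["cost"] for s in svcs)
    time = sum(s["time"] for s in svcs)
    avail = rel = 1.0
    for s in svcs:
        avail *= s["avail"]
        rel *= s["rel"]
    return 0.3 * cost + 0.3 * time - 0.2 * avail - 0.2 * rel

# Tiny GA: truncation selection, one-point crossover, random-reset mutation.
pop = [[random.randrange(N_CANDIDATES) for _ in range(N_TASKS)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_TASKS)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:
            child[random.randrange(N_TASKS)] = random.randrange(N_CANDIDATES)
        children.append(child)
    pop = parents + children

pop.sort(key=fitness)
print("best plan:", pop[0], "fitness:", round(fitness(pop[0]), 3))
```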


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3370 ◽  
Author(s):  
Saghi Forouhi ◽  
Rasoul Dehghani ◽  
Ebrahim Ghafar-Zadeh

This paper proposes a novel charge-based Complementary Metal Oxide Semiconductor (CMOS) capacitive sensor for life science applications. Charge-based capacitance measurement (CBCM) has attracted significant attention from researchers for the design and implementation of high-precision CMOS capacitive biosensors. A conventional core-CBCM capacitive sensor consists of a capacitance-to-voltage converter (CVC) followed by a voltage-to-digital converter. In spite of their high accuracy and low complexity, the limited input dynamic range (IDR) of core-CBCM capacitive sensors restricts their usefulness for most biological applications, including cellular monitoring. In this paper, after a brief review of core-CBCM capacitive sensors, we address this challenge by proposing a new current-mode core-CBCM design that combines CBCM and current-controlled oscillator (CCO) structures to improve the IDR of the capacitive readout circuit. Using a 0.18 μm CMOS process, we present and discuss Cadence simulation results that demonstrate the high performance of the proposed circuitry. Based on these results, the proposed circuit offers an IDR ranging from 873 aF to 70 fF with a resolution of about 10 aF. A CMOS capacitive sensor with such a wide IDR can be employed for monitoring cellular and molecular activities in biological research and for clinical purposes.
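For context on the CBCM principle behind this design, the underlying relation is standard: a capacitor alternately charged to Vdd and discharged at frequency f_clk draws an average current I_avg = C·Vdd·f_clk, and in a current-mode readout that current can steer a current-controlled oscillator so that capacitance maps to an output frequency. The supply, clock, and CCO gain values in the sketch below are assumptions, not the reported 0.18 μm implementation.

```python
# Core-CBCM relation sketch: I_avg = C * Vdd * f_clk for a capacitor switched
# between Vdd and ground at f_clk; a CCO then converts current to frequency.
# All numeric values are illustrative placeholders.

VDD = 1.8        # supply voltage [V] (assumed)
F_CLK = 1e6      # CBCM switching frequency [Hz] (assumed)
K_CCO = 5e12     # assumed CCO gain [Hz per A]

def capacitance_from_current(i_avg):
    return i_avg / (VDD * F_CLK)

def cco_frequency(c_sense):
    i_avg = c_sense * VDD * F_CLK
    return K_CCO * i_avg

for c in (873e-18, 10e-15, 70e-15):   # spanning the reported IDR
    print(f"C = {c * 1e15:7.3f} fF -> I_avg = {c * VDD * F_CLK * 1e12:10.1f} pA, "
          f"f_CCO ≈ {cco_frequency(c):10.1f} Hz")
```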


2004 ◽  
Vol 12 (02) ◽  
pp. 149-174 ◽  
Author(s):  
KILSEOK CHO ◽  
ALAN D. GEORGE ◽  
RAJ SUBRAMANIYAN ◽  
KEONWOOK KIM

Matched-field processing (MFP) localizes sources more accurately than plane-wave beamforming by employing full-wave acoustic propagation models for the cluttered ocean environment. The minimum variance distortionless response MFP (MVDR–MFP) algorithm incorporates the MVDR technique into the MFP algorithm to enhance beamforming performance. Such an adaptive MFP algorithm involves intensive computational and memory requirements due to its complex acoustic model and environmental adaptation. The real-time implementation of adaptive MFP algorithms for large surveillance areas presents a serious computational challenge where high-performance embedded computing and parallel processing may be required to meet real-time constraints. In this paper, three parallel algorithms based on domain decomposition techniques are presented for the MVDR–MFP algorithm on distributed array systems. The parallel performance factors in terms of execution times, communication times, parallel efficiencies, and memory capacities are examined on three potential distributed systems including two types of digital signal processor arrays and a cluster of personal computers. The performance results demonstrate that these parallel algorithms provide a feasible solution for real-time, scalable, and cost-effective adaptive beamforming on embedded, distributed array systems.
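As a point of reference for the computation being parallelized, the MVDR spatial spectrum evaluated at each candidate source location is P = 1 / (d^H R^{-1} d), where d is the replica vector produced by the propagation model and R is the sample covariance matrix. The sketch below uses a plane-wave replica as a stand-in for the full-wave acoustic model and a simple angle grid in place of the MFP location grid; a scan of this kind over the environment grid is the sort of workload that domain decomposition would split across array nodes.

```python
import numpy as np

# MVDR response sketch (plane-wave replicas stand in for the full-wave
# acoustic propagation model used in matched-field processing).
rng = np.random.default_rng(0)
N, M = 16, 500                                   # sensors, snapshots (assumed)

def replica(theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

# Simulated data: one source at 0.3 rad plus complex white noise.
d_true = replica(0.3)
x = (np.sqrt(10) * d_true[:, None] * rng.standard_normal(M)
     + (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2))
R = x @ x.conj().T / M + 1e-3 * np.eye(N)        # sample covariance with diagonal loading

def mvdr_power(theta):
    d = replica(theta)
    Rinv_d = np.linalg.solve(R, d)
    return 1.0 / np.real(d.conj() @ Rinv_d)      # MVDR output power for this candidate

grid = np.linspace(-np.pi / 2, np.pi / 2, 181)   # candidate grid (angles here; locations in MFP)
powers = np.array([mvdr_power(t) for t in grid])
print("estimated direction [rad]:", grid[powers.argmax()])
```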

