A Parallel Architecture for the Partitioning Around Medoids (PAM) Algorithm for Scalable Multi-Core Processor Implementation with Applications in Healthcare

Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4129 ◽  
Author(s):  
Hassan Mushtaq ◽  
Sajid Gul Khawaja ◽  
Muhammad Usman Akram ◽  
Amanullah Yasin ◽  
Muhammad Muzammal ◽  
...  

Clustering is the most common method for organizing unlabeled data into its natural groups (called clusters) based on similarity among data objects. The Partitioning Around Medoids (PAM) algorithm belongs to the partitioning-based clustering methods widely used for object categorization, image analysis, bioinformatics, and data compression, but due to its high time complexity, the PAM algorithm cannot be used with large datasets or in any embedded or real-time application. In this work, we propose a simple and scalable parallel architecture for the PAM algorithm to reduce its running time. This architecture can easily be implemented either on a multi-core processor system to deal with big data or on a reconfigurable hardware platform, such as FPGAs and MPSoCs, which makes it suitable for real-time clustering applications. Our proposed model partitions data equally among multiple processing cores. Each core executes the same sequence of tasks simultaneously on its respective data subset and shares intermediate results with the other cores to produce the final clustering. Experiments show that the computation time of the PAM algorithm drops rapidly as the number of cores working in parallel increases. It is also observed that the speedup graph of our proposed model becomes more linear as the number of data points increases and as the clusters become more uniform. The results also demonstrate that the proposed architecture produces the same results as the original PAM algorithm, but with reduced computational complexity.
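
To illustrate the data-partitioning scheme described above, here is a minimal Python sketch of the core idea: the dataset is split equally across worker processes, each worker computes the assignment cost of its own subset against the current medoids, and the partial costs are reduced into one global value. The function names and the use of `multiprocessing` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the parallel-PAM partitioning idea: equal data chunks,
# identical work per core, partial results reduced into a global cost.
from multiprocessing import Pool

import numpy as np


def chunk_cost(args):
    """Cost of one data chunk: distance of each point to its nearest medoid."""
    chunk, medoids = args
    # Pairwise Euclidean distances, shape (n_chunk, n_medoids)
    dists = np.linalg.norm(chunk[:, None, :] - medoids[None, :, :], axis=2)
    return dists.min(axis=1).sum()


def parallel_cost(data, medoids, n_cores=4):
    """Total PAM assignment cost, computed over equal data partitions."""
    chunks = np.array_split(data, n_cores)
    with Pool(n_cores) as pool:
        partial = pool.map(chunk_cost, [(c, medoids) for c in chunks])
    return sum(partial)  # intermediate results shared and reduced


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 2))
    medoids = data[:3]  # three candidate medoids
    print(parallel_cost(data, medoids))
```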

Author(s):  
Lakshmi Praneetha

Nowadays, data streams are enormous and fast-changing. The use of data streams ranges from basic scientific applications to vital business and financial ones. Useful information is abstracted from the stream and represented in the form of micro-clusters in the online phase. In the offline phase, micro-clusters are merged to form macro-clusters. The DBSTREAM technique captures the density between micro-clusters by means of a shared density graph in the online phase. The density data in this graph is then used in re-clustering to improve the formation of clusters, but DBSTREAM takes more time in handling corrupted data points. In this paper, an early pruning algorithm is applied before pre-processing of the information, and a Bloom filter is used for recognizing corrupted data. Our experiments on real-time datasets show that this approach improves the efficiency of macro-clusters by 90% and increases the generation of micro-clusters within a short time.
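
As a rough illustration of the corrupted-data screening step, the following is a minimal Bloom filter sketch; the hash scheme, sizes, and record format are illustrative assumptions rather than details from the paper.

```python
# Minimal Bloom filter sketch for flagging known-corrupted records before
# the online micro-clustering phase. Sizes and hashing are illustrative.
import hashlib


class BloomFilter:
    def __init__(self, size=1 << 16, n_hashes=4):
        self.size = size
        self.n_hashes = n_hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False positives are possible; false negatives are not.
        return all(self.bits[pos] for pos in self._positions(item))


corrupted = BloomFilter()
corrupted.add("sensor-17:NaN")                    # register a bad record pattern
print(corrupted.might_contain("sensor-17:NaN"))   # True
print(corrupted.might_contain("sensor-03:42.1"))  # almost certainly False
```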


2020 ◽  
Vol 15 (2) ◽  
pp. 144-196 ◽  
Author(s):  
Mohammad R. Khosravi ◽  
Sadegh Samadi ◽  
Reza Mohseni

Background: Real-time video coding is a very interesting area of research with extensive applications in remote sensing and medical imaging, and many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area focus on second-step (additional) compression of videos coded by existing standards such as MPEG-4 Part 14. Materials and Methods: In this article, an evaluation of some techniques with different complexity orders for the video compression problem is performed. All compared techniques are based on interpolation algorithms in the spatial domain. In detail, the acquired data comes from four interpolators of differing computational complexity: the fixed weights quartered interpolation (FWQI) technique, and the Nearest Neighbor (NN), Bi-Linear (BL) and Cubic Convolution (CC) interpolators. They are used for the compression of some HD color videos in real-time applications, real frames of video synthetic aperture radar (video SAR or ViSAR), and a high-resolution medical sample. Results: Comparative results are described for three different metrics, including two reference-based Quality Assessment (QA) measures and an edge preservation factor, to give a general perception of the various dimensions of the problem. Conclusion: Comparisons show that there is a clear trade-off among video codecs in terms of similarity to a reference, preservation of high-frequency edge information, and computational complexity.
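
As a toy illustration of the decimate-then-interpolate idea underlying all four compared techniques, the sketch below stores a frame at quarter resolution and reconstructs it with nearest-neighbor interpolation; the bilinear and cubic variants follow the same pattern with different reconstruction kernels. This is purely illustrative, not the paper's codec.

```python
# Toy second-step compression: keep a quarter of the pixels, then
# reconstruct the full frame with a cheap spatial interpolator.
import numpy as np


def decimate(frame, factor=2):
    """Keep every `factor`-th pixel (the 'compressed' representation)."""
    return frame[::factor, ::factor]


def nearest_neighbour_upsample(small, factor=2):
    """Reconstruct the full frame by repeating each stored pixel."""
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)


frame = np.arange(64, dtype=float).reshape(8, 8)
small = decimate(frame)                    # 4x fewer samples to store/transmit
recon = nearest_neighbour_upsample(small)  # O(1) per pixel, real-time friendly
print(np.abs(frame - recon).mean())        # mean reconstruction error
```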


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1633 ◽  
Author(s):  
Beom-Su Kim ◽  
Sangdae Kim ◽  
Kyong Hoon Kim ◽  
Tae-Eung Sung ◽  
Babar Shah ◽  
...  

Many applications can obtain enriched information by employing a wireless multimedia sensor network (WMSN) in industrial environments, which consists of nodes capable of processing multimedia data. However, as many aspects of WMSNs still need to be refined, this remains a promising research area. An efficient application needs the ability to capture and store the latest information about an object or event, which requires real-time multimedia data to be delivered to the sink in a timely manner. Motivated by this goal, we developed a new adaptive QoS routing protocol based on the (m,k)-firm model. The proposed model processes captured information by employing a multimedia stream in the (m,k)-firm format. In addition, the model includes a new adaptive real-time protocol and traffic-handling scheme that transmit event information by selecting the next hop according to the flow status as well as the requirements of the (m,k)-firm model. Unlike previous approaches, the two-level adjustment of the routing protocol and traffic management increases the number of packets delivered successfully within the deadline, while the path setup scheme along the previous route reduces packet loss until a new path is established. Our simulation results demonstrate that the proposed schemes improve the stream dynamic success ratio and network lifetime compared to previous work by meeting the requirements of the (m,k)-firm model regardless of the amount of traffic.
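
For concreteness, the (m,k)-firm constraint at the heart of the protocol says that at least m out of any k consecutive packets of a stream must meet their deadlines. A minimal sketch of checking that condition (the names are illustrative):

```python
# Check the (m,k)-firm condition over a stream's deadline history:
# every window of k consecutive packets must contain at least m successes.
from collections import deque


def mk_firm_ok(deadline_met_history, m, k):
    """True if every window of k consecutive packets has >= m deadline hits."""
    window = deque(maxlen=k)
    for met in deadline_met_history:
        window.append(met)
        if len(window) == k and sum(window) < m:
            return False  # dynamic failure: fewer than m of the last k met
    return True


# 3 of any 5 consecutive packets must arrive in time, i.e. (m, k) = (3, 5)
print(mk_firm_ok([1, 1, 0, 1, 0, 1, 1], m=3, k=5))  # True
print(mk_firm_ok([1, 0, 0, 1, 0, 0, 1], m=3, k=5))  # False
```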


2014 ◽  
Vol 521 ◽  
pp. 252-255
Author(s):  
Jian Yuan Xu ◽  
Jia Jue Li ◽  
Jie Jun Zhang ◽  
Yu Zhu

Peaking imbalance caused by intermittent generation is a major concern for grid operators, and a control model for resolving it is greatly needed. In this paper, we propose a reserve classification control model that combines a constant reserve control model with a real-time reserve control model to guide the peaking balance of a grid with intermittent generation. The proposed model couples the time-period constant reserve control model with the real-time reserve control model and uses the peaking margin as an intermediate variable. The model solutions, which are the capacities of the classified reserves, are thereby obtained, and grid operators use these solutions to achieve peaking balance control. The proposed model was examined on a real grid operation case, and the results demonstrate its validity.
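
As a loose illustration only, the sketch below shows how a peaking margin could act as the intermediate variable linking a constant reserve to a real-time reserve. All names and formulas here are assumptions for illustration; the paper's actual equations are not reproduced in the abstract.

```python
# Illustrative reserve-classification sketch: a fixed per-period constant
# reserve plus a real-time reserve driven by the current peaking margin.
def peaking_margin(load_forecast, dispatchable_capacity, intermittent_output):
    """Headroom between available generation and the forecast load."""
    return dispatchable_capacity + intermittent_output - load_forecast


def reserve_capacity(load_forecast, dispatchable_capacity, intermittent_output,
                     constant_reserve):
    margin = peaking_margin(load_forecast, dispatchable_capacity,
                            intermittent_output)
    # Real-time reserve covers whatever shortfall the constant reserve leaves.
    realtime_reserve = max(0.0, constant_reserve - margin)
    return constant_reserve, realtime_reserve


print(reserve_capacity(load_forecast=950.0, dispatchable_capacity=900.0,
                       intermittent_output=80.0, constant_reserve=50.0))
```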


2014 ◽  
Vol 610 ◽  
pp. 339-344
Author(s):  
Qiang Guo ◽  
Yun Fei An

A UCA-Root-MUSIC algorithm for direction-of-arrival (DOA) estimation, based on UCA-RB-MUSIC [1], is proposed in this paper. The method utilizes not only a unitary transformation matrix different from that of UCA-RB-MUSIC but also the multi-stage Wiener filter (MSWF) to estimate the signal subspace and the number of sources, so the new method has lower computational complexity and is better suited to real-time implementation. Computer simulation results demonstrate the improvement achieved by the proposed method.
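
As background for the subspace idea this variant builds on, here is a minimal spectral MUSIC sketch for a uniform linear array; the paper uses a uniform circular array and root finding instead of a spectral search, and the geometry, sizes, and noise level below are illustrative assumptions.

```python
# Spectral MUSIC on a toy uniform linear array: the noise subspace of the
# sample covariance is orthogonal to the steering vectors of true arrivals.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snapshots, n_sources = 8, 200, 2
true_angles = np.deg2rad([-20.0, 35.0])
d = 0.5  # element spacing in wavelengths


def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_angles])
S = rng.normal(size=(n_sources, n_snapshots)) + 1j * rng.normal(size=(n_sources, n_snapshots))
noise = 0.1 * (rng.normal(size=(n_sensors, n_snapshots)) + 1j * rng.normal(size=(n_sensors, n_snapshots)))
X = A @ S + noise

R = X @ X.conj().T / n_snapshots          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
En = eigvecs[:, : n_sensors - n_sources]  # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])
# Crude peak picking: local maxima, then keep the two largest.
peak_idx = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
top2 = peak_idx[np.argsort(spec[peak_idx])[-2:]]
print(np.sort(np.rad2deg(grid[top2])))    # close to [-20, 35]
```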


2021 ◽  
Author(s):  
Ahmed Al-Sabaa ◽  
Hany Gamal ◽  
Salaheldin Elkatatny

Abstract The formation porosity of drilled rock is an important parameter that determines the formation storage capacity. The common industrial technique for rock porosity acquisition is the downhole logging tool. Usually, logging while drilling or wireline porosity logging provides a complete porosity log for the section of interest; however, the operational constraints of the logging tool might preclude the logging job, in addition to its cost. The objective of this study is to provide an intelligent model to predict porosity from drilling parameters. An artificial neural network (ANN), a tool of artificial intelligence (AI), was employed in this study to build the porosity prediction model based on drilling parameters such as the weight on bit (WOB), drill string rotating speed (RS), drilling torque (T), stand-pipe pressure (SPP), and mud pumping rate (Q). The novel contribution of this study is a rock porosity model for complex lithology formations that uses drilling parameters in real time. The model was built using 2,700 data points from well (A) with a 74:26 training-to-testing ratio. Several sensitivity analyses were performed to optimize the ANN model. The model was validated using an unseen dataset (1,000 data points) from well (B), which is located in the same field and drilled across the same complex lithology. The results showed high model performance in the training, testing, and validation processes. The overall accuracy of the model was determined in terms of the correlation coefficient (R) and average absolute percentage error (AAPE): R was higher than 0.91 and AAPE was less than 6.1% for both model building and validation. Predicting rock porosity while drilling in real time will save logging costs and, in addition, provide a guide for the formation storage capacity and interpretation analysis.
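
As a hedged sketch of this kind of model, the following trains a small feed-forward network mapping the five drilling parameters (WOB, RS, T, SPP, Q) to porosity. The synthetic data, network size, and scikit-learn choice are illustrative assumptions; the paper's actual architecture and field data are not given in the abstract.

```python
# Toy ANN porosity model: five drilling parameters in, porosity out.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 2_700  # same order as the paper's training set from well (A)
# Columns: WOB, RS, T, SPP, Q — ranges are made up for illustration.
X = rng.uniform([5, 50, 2, 500, 100], [35, 180, 15, 3500, 900], size=(n, 5))
# Toy target: porosity as an arbitrary nonlinear blend of the inputs.
y = 0.12 + 0.05 * np.tanh((X[:, 0] - 20) / 10) + 0.02 * np.sin(X[:, 1] / 30)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2_000,
                     random_state=0).fit(scaler.transform(X), y)
print(model.score(scaler.transform(X), y))  # R^2 on the training data
```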


2021 ◽  
Author(s):  
Chris Onof ◽  
Yuting Chen ◽  
Li-Pen Wang ◽  
Amy Jones ◽  
Susana Ochoa Rodriguez

In this work a two-stage (rainfall nowcasting + flood prediction) analogue model for real-time urban flood forecasting is presented. The proposed approach accounts for the complexities of urban rainfall nowcasting while avoiding the expensive computational requirements of real-time urban flood forecasting.

The model has two consecutive stages:

(1) Rainfall nowcasting: 0-6 h lead-time ensemble rainfall nowcasting is achieved by means of an analogue method, based on the assumption that similar climate conditions will define similar patterns of temporal evolution of the rainfall. The framework uses the NORA analogue-based forecasting tool (Panziera et al., 2011), consisting of two layers. In the first layer, the 120 historical atmospheric (forcing) conditions most similar to the current atmospheric conditions are extracted, with the historical database consisting of ERA5 reanalysis data from the ECMWF and the current conditions derived from the US Global Forecast System (GFS). In the second layer, the twelve historical radar images most similar to the current one are extracted from amongst the historical radar images linked to the aforementioned 120 forcing analogues. Lastly, for each of the twelve analogues, the rainfall fields (at a resolution of 1 km/5 min) observed after the present time are taken as one ensemble member. Note that principal component analysis (PCA) and uncorrelated multilinear PCA methods were tested for image feature extraction prior to applying the nearest-neighbour technique for analogue selection (a sketch of this step follows below).

(2) Flood prediction: flood extent is predicted using the high-resolution rainfall forecast from Stage 1, along with a database of pre-run flood maps at 1x1 km² resolution from 157 catalogued historical flood events. A deterministic flood prediction is obtained by using the averaged response from the twelve flood maps associated with the twelve ensemble rainfall nowcasts, where for each gridded area the median value is adopted (assuming flood maps are equiprobabilistic). A probabilistic flood prediction is obtained by generating a quantile-based flood map. Note that the flood maps were generated through rolling-ball-based mapping of the flood volumes predicted at each node of the InfoWorks ICM sewer model of the pilot area.

The Minworth catchment in the UK (~400 km²) was used to demonstrate the proposed model. Cross-assessment was undertaken for each of the 157 flooding events by leaving one event out of training in each iteration and using it for evaluation. With a focus on the spatial replication of flood/non-flood patterns, the predicted flood maps were converted to binary (flood/non-flood) maps. Quantitative assessment was undertaken by means of a contingency table. An average accuracy rate (i.e. proportion of correct predictions out of all test events) of 71.4% was achieved, with individual accuracy rates ranging from 57.1% to 78.6%. Further testing is needed to confirm initial findings, and flood mapping refinement will be pursued.

The proposed model is fast, easy and relatively inexpensive to operate, making it suitable for direct use by local authorities who often lack the expertise and/or capabilities for flood modelling and forecasting.

References: Panziera et al. 2011. NORA: Nowcasting of Orographic Rainfall by means of Analogues. Quarterly Journal of the Royal Meteorological Society, 137, 2106-2123.
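
The sketch referenced in Stage 1 above: radar images are reduced with PCA and the nearest historical analogues to the current image are retrieved with a k-nearest-neighbour search. The 120 candidate images and the choice of twelve analogues follow the abstract; the data shapes and library choices are illustrative assumptions.

```python
# Analogue selection sketch: PCA feature extraction over historical radar
# images, then k-nearest-neighbour retrieval of the twelve closest analogues.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
history = rng.random((120, 64 * 64))  # 120 forcing-analogue images, flattened
current = rng.random((1, 64 * 64))    # current radar image

pca = PCA(n_components=20).fit(history)           # image feature extraction
feats = pca.transform(history)

nn = NearestNeighbors(n_neighbors=12).fit(feats)  # 12 analogues -> 12 members
_, idx = nn.kneighbors(pca.transform(current))
print(idx[0])  # indices of the 12 most similar historical radar images
```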


2018 ◽  
Vol 15 (5) ◽  
pp. 593-625 ◽  
Author(s):  
Chi-Hé Elder ◽  
Michael Haugh

Abstract Dominant accounts of “speaker meaning” in post-Gricean contextualist pragmatics tend to focus on single utterances, making the theoretical assumption that the object of pragmatic analysis is restricted to cases where speakers and hearers agree on utterance meanings, leaving instances of misunderstandings out of their scope. However, we know that divergences in understandings between interlocutors do often arise, and that when they do, speakers can engage in a local process of meaning negotiation. In this paper, we take insights from interactional pragmatics to offer an empirically informed view on speaker meaning that incorporates both speakers’ and hearers’ perspectives, alongside a formalization of how to model speaker meanings in such a way that we can account for both understandings – the canonical cases – and misunderstandings, but critically, also the process of interactionally negotiating meanings between interlocutors. We highlight that utterance-level theories of meaning provide only a partial representation of speaker meaning as it is understood in interaction, and show that inferences about a given utterance at any given time are formally connected to prior and future inferences of participants. Our proposed model thus provides a more fine-grained account of how speakers converge on speaker meanings in real time, showing how such meanings are often subject to a joint endeavor of complex inferential work.

