A Fast Parameter Identification Framework for Personalized Pharmacokinetics

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Chenxi Yang ◽  
Negar Tavassolian ◽  
Wassim M. Haddad ◽  
James M. Bailey ◽  
Behnood Gholami

Abstract This paper introduces a novel framework for fast parameter identification of personalized pharmacokinetic problems. Given one sample observation of a new subject, the framework predicts the parameters of the subject based on prior knowledge from a pharmacokinetic database. The feasibility of this framework was demonstrated by developing a new algorithm based on the Cluster Newton method, namely the constrained Cluster Newton method, where the initial points of the parameters are constrained by the database. The algorithm was tested with the compartmental model of propofol on a database of 59 subjects. The average overall absolute percentage error based on the constrained Cluster Newton method is 12.10% with the threshold approach, and 13.42% with the nearest-neighbor approach. The average computation time of one estimation is 13.10 seconds. Using parallel computing, the average computation time is reduced to 1.54 seconds, achieved with 12 parallel workers. The results suggest that the proposed framework can effectively improve the prediction accuracy of the pharmacokinetic parameters with limited observations in comparison to conventional methods. Computation cost analyses indicate that the proposed framework can take advantage of parallel computing and provide solutions within practical response times, leading to fast and accurate parameter identification of pharmacokinetic problems.
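A minimal sketch of a constrained Cluster-Newton-style iteration may help illustrate the idea: a cluster of candidate parameter points is drawn inside a box derived from the database, an affine approximation of the forward model is fitted over the cluster, and every point is moved toward the single observation. The toy one-compartment model, the bounds, and all numbers below are illustrative assumptions, not the propofol model or data used in the paper.

```python
import numpy as np

def one_compartment_model(theta, t=np.array([5.0])):
    """Toy PK model: concentration after an IV bolus, theta = (V, k)."""
    dose = 100.0
    V, k = theta
    return dose / V * np.exp(-k * t)

def constrained_cluster_newton(f, y_obs, lower, upper, n_points=50, n_iter=20, rng=None):
    rng = np.random.default_rng(rng)
    dim = len(lower)
    # Initial cluster constrained to the box derived from the PK database.
    X = rng.uniform(lower, upper, size=(n_points, dim))
    for _ in range(n_iter):
        Y = np.array([f(x) for x in X])            # forward model at each point
        # Fit an affine map y ~ A x + b over the cluster (least squares).
        A1 = np.hstack([X, np.ones((n_points, 1))])
        coef, *_ = np.linalg.lstsq(A1, Y, rcond=None)
        A = coef[:-1].T
        # Move every point toward the observation using the linearized model.
        dX, *_ = np.linalg.lstsq(A, (y_obs - Y).T, rcond=None)
        X = np.clip(X + dX.T, lower, upper)        # keep the cluster in-bounds
    return X

# Usage: estimate (V, k) from a single concentration sample at t = 5 min.
y_obs = one_compartment_model(np.array([10.0, 0.1]))
cluster = constrained_cluster_newton(one_compartment_model, y_obs,
                                     lower=np.array([5.0, 0.01]),
                                     upper=np.array([20.0, 0.5]))
print(cluster.mean(axis=0))
```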

2014 ◽  
Vol 10 (1) ◽  
pp. 133
Author(s):  
Rajif Agung Yunmar ◽  
Agus Harjoko

Abstract A great nation is one that values its history and origins. The reliefs of Borobudur temple depict many stories, including the history and origins of this nation: the life of the royal court, the life of ordinary people, and the customs of the time when the temple was built. This study develops Android mobile software for identifying relief images of Borobudur Temple, so that it can help visitors interpret the stories and information contained in them. Speeded-Up Robust Features (SURF) is used for feature extraction, and a hierarchical k-means tree nearest-neighbor search is used for identification. Identification was tested under various conditions of the input image, namely viewing angle, distance, rotation, light intensity, and image completeness, to assess their effect on the recognition results. The proposed identification method achieves a recognition rate of 93.30% with an average computation time of 59.55 seconds.
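For illustration, below is a minimal sketch of the SURF extraction and hierarchical k-means (FLANN) nearest-neighbor matching step, assuming an OpenCV build with the non-free xfeatures2d module; the image file names and parameter values are placeholders, not assets or settings from this study.

```python
import cv2

# SURF feature extraction (requires opencv-contrib built with non-free modules).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

query = cv2.imread("relief_query.jpg", cv2.IMREAD_GRAYSCALE)
ref = cv2.imread("relief_reference.jpg", cv2.IMREAD_GRAYSCALE)

kp_q, des_q = surf.detectAndCompute(query, None)
kp_r, des_r = surf.detectAndCompute(ref, None)

# FLANN algorithm=2 (FLANN_INDEX_KMEANS) builds a hierarchical k-means tree.
index_params = dict(algorithm=2, branching=32, iterations=11, centers_init=0)
search_params = dict(checks=64)
matcher = cv2.FlannBasedMatcher(index_params, search_params)

# Lowe-style ratio test on the two nearest neighbors of each query descriptor.
matches = matcher.knnMatch(des_q, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches")
```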


Author(s):  
Ervina Varijki ◽  
Bambang Krismono Triwijoyo

One type of cancer that can be identified using MRI technology is breast cancer. Breast cancer is still a leading cause of death worldwide; therefore, early detection of this disease is needed. In identifying breast cancer, a doctor or radiologist analyzes magnetic resonance images stored in the Digital Imaging and Communications in Medicine (DICOM) format. Considerable skill and experience are required for an appropriate and accurate diagnosis, so it is useful to create a digital image processing application that uses object segmentation and edge detection to assist the physician or radiologist in identifying breast cancer. The MRI image segmentation for breast cancer identification proceeds in stages: the image is first converted to grayscale, then thresholded to a binary image, and finally processed with edge detection using the Roberts operator. For the 20 input images tested, the method produces images in which the boundary line of each region or object is visible and no edges are cut off, with an average computation time of less than one minute.
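As a hedged illustration of the described pipeline (grayscale conversion, thresholding to a binary image, Roberts edge detection), the sketch below uses pydicom and scikit-image; Otsu's method stands in for the unspecified threshold, and the DICOM path is a placeholder, not data from the study.

```python
import pydicom
from skimage import filters

# Read one MRI slice from a DICOM file (placeholder path).
ds = pydicom.dcmread("breast_mri_slice.dcm")
img = ds.pixel_array.astype(float)
img = (img - img.min()) / (img.max() - img.min())   # normalize to [0, 1] grayscale

# Global threshold to a binary mask (Otsu stands in for the paper's threshold).
mask = img > filters.threshold_otsu(img)

# Roberts cross operator on the binary mask traces the region boundaries.
edges = filters.roberts(mask.astype(float))
boundary = edges > 0
print(f"{boundary.sum()} boundary pixels")
```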


Signals ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 336-352
Author(s):  
Frank Zalkow ◽  
Julian Brandner ◽  
Meinard Müller

Flexible retrieval systems are required for conveniently browsing through large music collections. In a particular content-based music retrieval scenario, the user provides a query audio snippet, and the retrieval system returns music recordings from the collection that are similar to the query. In this scenario, a fast response from the system is essential for a positive user experience. For realizing low response times, one requires index structures that facilitate efficient search operations. One such index structure is the K-d tree, which has already been used in music retrieval systems. As an alternative, we propose to use a modern graph-based index, denoted as Hierarchical Navigable Small World (HNSW) graph. As our main contribution, we explore its potential in the context of a cross-version music retrieval application. In particular, we report on systematic experiments comparing graph- and tree-based index structures in terms of the retrieval quality, disk space requirements, and runtimes. Despite the fact that the HNSW index provides only an approximate solution to the nearest neighbor search problem, we demonstrate that it has almost no negative impact on the retrieval quality in our application. As our main result, we show that the HNSW-based retrieval is several orders of magnitude faster. Furthermore, the graph structure also works well with high-dimensional index items, unlike the tree-based structure. Given these merits, we highlight the practical relevance of the HNSW graph for music information retrieval (MIR) applications.
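For illustration, here is a minimal sketch of building and querying an HNSW index with the hnswlib package; the dimensionality and data are synthetic placeholders, not the cross-version audio features used in the experiments.

```python
import numpy as np
import hnswlib

# Synthetic stand-in for a collection of feature vectors (e.g. audio shingles).
dim, n_items = 240, 100_000
items = np.random.rand(n_items, dim).astype(np.float32)

# Build the HNSW graph index.
index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=n_items, ef_construction=200, M=16)
index.add_items(items, np.arange(n_items))

# Query: approximate 10-nearest-neighbor search; ef trades accuracy for speed.
index.set_ef(50)
query = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(query, k=10)
print(labels[0], distances[0])
```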


Author(s):  
FATHALLAH NOUBOUD ◽  
RÉJEAN PLAMONDON

This paper presents a real-time constraint-free handprinted character recognition system based on a structural approach. After the preprocessing operation, a chain code is extracted to represent the character. The classification is based on the use of a processor dedicated to string comparison. The average computation time to recognize a character is about 0.07 seconds. During the learning step, the user can define any set of characters or symbols to be recognized by the system. Thus there are no constraints on the handprinting. The experimental tests show a high degree of accuracy (96%) for writer-dependent applications. Comparisons with other systems and methods are discussed. We also present a comparison between the processor used in this system and the Wagner and Fischer algorithm. Finally, we describe some applications of the system.
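Since the system is compared against the Wagner and Fischer algorithm, a minimal sketch of that dynamic-programming edit distance applied to chain-code strings may be helpful; the chain codes and template set below are illustrative assumptions, not the system's learned symbols.

```python
# Wagner-Fischer edit distance between two chain-code strings.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Classify an unknown chain code against user-defined templates (learning step).
templates = {"L": "22222000", "T": "00022222", "V": "11133311"}
unknown = "2222000"
label = min(templates, key=lambda k: edit_distance(unknown, templates[k]))
print(label)
```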


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Xing Hu ◽  
Shiqiang Hu ◽  
Xiaoyu Zhang ◽  
Huanlong Zhang ◽  
Lingkun Luo

We propose a novel local nearest neighbor distance (LNND) descriptor for anomaly detection in crowded scenes. Compared with the low-level feature descriptors commonly used in previous works, the LNND descriptor has two major advantages. First, the LNND descriptor efficiently incorporates spatial and temporal contextual information around the video event, which is important for detecting anomalous interactions among multiple events, while most existing feature descriptors only contain information about a single event. Second, the LNND descriptor is a compact representation, and its dimensionality is typically much lower than that of low-level feature descriptors. Therefore, using the LNND descriptor in an anomaly detection method with offline training not only saves computation time and storage, but also avoids the negative effects of high-dimensional feature descriptors. We validate the effectiveness of the LNND descriptor by conducting extensive experiments on different benchmark datasets. Experimental results show the promising performance of the LNND-based method against state-of-the-art methods. It is worth noting that the LNND-based approach requires fewer intermediate processing steps, and no subsequent processing such as smoothing, yet achieves comparable or even better performance.
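A minimal sketch of the general idea behind such a descriptor follows: an event is represented by its distances to the k nearest neighbors among surrounding context events. The feature vectors, neighborhood size, and k below are synthetic assumptions, not the exact construction from the paper.

```python
import numpy as np

def lnnd_descriptor(event_feat, context_feats, k=5):
    """Distances from one event's feature to its k nearest context events."""
    dists = np.linalg.norm(context_feats - event_feat, axis=1)
    return np.sort(dists)[:k]          # compact k-dimensional descriptor

event = np.random.rand(64)             # e.g. a flow/appearance feature of one event
context = np.random.rand(30, 64)       # events in the spatio-temporal neighborhood
print(lnnd_descriptor(event, context))
```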


2007 ◽  
Vol 46 (03) ◽  
pp. 324-331 ◽  
Author(s):  
P. Jäger ◽  
S. Vogel ◽  
A. Knepper ◽  
T. Kraus ◽  
T. Aach ◽  
...  

Summary Objectives: Pleural thickenings, as a biomarker of exposure to asbestos, may evolve into malignant pleural mesothelioma. In its early stage, pleurectomy with perioperative treatment can reduce morbidity and mortality. The diagnosis is based on a visual investigation of CT images, which is a time-consuming and subjective procedure. Our aim is to develop an automatic image processing approach to detect and quantitatively assess pleural thickenings. Methods: We first segment the lung areas and identify the pleural contours. A convexity model is then used together with a Hounsfield unit threshold to detect pleural thickenings. The assessment of the detected pleural thickenings is based on a spline-based model of the healthy pleura. Results: Tests were carried out on 14 data sets from three patients. In all cases, pleural contours were reliably identified and pleural thickenings detected. PC-based computation times were 85 min for a data set of 716 slices, 35 min for 401 slices, and 4 min for 75 slices, resulting in an average computation time of about 5.2 s per slice. Visualizations of the pleurae and detected thickenings were provided. Conclusion: Results obtained so far indicate that our approach is able to assist physicians in the tedious task of finding and quantifying pleural thickenings in CT data. In the next step, our system will undergo an evaluation in a clinical test setting using routine CT data to quantify its performance.
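As a rough, hedged illustration of combining a convexity idea with a Hounsfield unit threshold, the sketch below flags soft-tissue regions that lie inside the convex hull of a lung mask but outside the lung itself; the threshold value and the synthetic data are assumptions, not the paper's model or patient data.

```python
import numpy as np
from skimage import measure, morphology

def detect_thickening_candidates(ct_slice_hu, lung_mask, hu_threshold=-200):
    # Candidate voxels: inside the lung's convex hull, outside the segmented
    # lung, and denser than the HU threshold (i.e. soft tissue on the pleura).
    hull = morphology.convex_hull_image(lung_mask)
    candidates = hull & ~lung_mask & (ct_slice_hu > hu_threshold)
    return measure.label(candidates)

# Synthetic slice: a rectangular "lung" with a soft-tissue notch on its border.
ct = np.full((256, 256), -1000.0)                  # air background
lung = np.zeros((256, 256), dtype=bool)
lung[60:200, 60:200] = True
lung[120:140, 190:200] = False                     # concavity mimicking a thickening
ct[lung] = -850.0
ct[120:140, 190:200] = 40.0                        # soft-tissue density in the notch
labels = detect_thickening_candidates(ct, lung)
print(labels.max(), "candidate region(s)")
```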


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs to have good global performance, while a short computation time facilitates the use of the algorithm in near-real-time applications. To test the global performance of the algorithm, we look at the convergence behaviour as a diagnostic tool for the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, on the retrieval performance is also investigated. By using a priori ozone profiles selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. By applying these algorithm adaptations, the convergence statistics improve considerably, not only increasing the number of successful retrievals, but also reducing the average computation time, owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computation time) dropped 26%, from 5.11 to 3.79.
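As a small illustrative sketch of one of the adaptations (picking the a priori profile from a climatology classified by total ozone column), the following uses synthetic placeholder numbers, not the climatology, layering, or selection rule used by OPERA.

```python
import numpy as np

# Climatology profiles binned by total ozone column (placeholder values).
column_bins = np.array([225.0, 275.0, 325.0, 375.0, 425.0])   # Dobson units
climatology = np.random.rand(5, 19)                            # 5 profiles x 19 layers

def a_priori_profile(expected_column_du):
    """Pick the climatological profile whose column class is closest."""
    idx = np.argmin(np.abs(column_bins - expected_column_du))
    return climatology[idx]

print(a_priori_profile(310.0).shape)
```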


Author(s):  
T Kavitha ◽  
K. Jayasankar

Compression techniques are adopted to solve various big data problems such as storage and transmission. The growth of the cloud computing and smartphone industries has led to the generation of huge volumes of digital data. Digital data can take various forms, such as audio, video, images, and documents. These digital data are generally compressed and stored in cloud storage environments. Efficient storage and retrieval of digital data using a good compression technique will reduce cost. Compression techniques comprise lossy and lossless methods. Here we consider lossless image compression, where minimizing the number of bits used for encoding improves the coding efficiency and yields high compression. Fixed-length coding cannot guarantee a minimal bit length; to minimize the number of bits, variable-length codes of a prefix-free nature are preferred. However, existing compression models induce high computing overhead. To address this issue, this work presents an ideal and efficient modified Huffman technique that improves the compression factor by up to 33.44% for bi-level images and 32.578% for half-tone images. The average computation time for encoding and decoding shows an improvement of 20.73% for bi-level images and 28.71% for half-tone images. The proposed work achieves an overall 2% increase in coding efficiency and reduces memory usage by 0.435% for bi-level images and 0.19% for half-tone images. The overall results show that the proposed model can be adopted to support ubiquitous access to digital data.
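For context, here is a minimal sketch of standard Huffman construction of variable-length, prefix-free codes; it illustrates the general technique only, not the proposed modification for bi-level and half-tone images, and the run-length data are placeholders.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix-free variable-length code from symbol frequencies."""
    freq = Counter(symbols)
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]      # prepend bit for the lighter subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]      # prepend bit for the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0], i] + lo[2:] + hi[2:])
        i += 1
    return {s: code for s, code in heap[0][2:]}

# Usage: encode run lengths of a bi-level scan line.
runs = [3, 1, 3, 7, 3, 1, 1, 3]
codes = huffman_codes(runs)
bitstream = "".join(codes[r] for r in runs)
print(codes, bitstream)
```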


Author(s):  
Ning Yang ◽  
Shiaaulir Wang ◽  
Paul Schonfeld

A Parallel Genetic Algorithm (PGA) is used for a simulation-based optimization of waterway project schedules. This PGA is designed to distribute a Genetic Algorithm application over multiple processors in order to speed up the solution search procedure for a very large combinatorial problem. The proposed PGA is based on a global parallel model, which is also called a master-slave model. A Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented, whose results show how the adaptation of a simulation-based optimization algorithm to parallel computing can greatly reduce computation time. Additional techniques which are found to further improve the PGA performance include: (1) choosing an appropriate task distribution method, (2) distributing simulation replications instead of different solutions, (3) avoiding the simulation of duplicate solutions, (4) avoiding running multiple simulations simultaneously in shared-memory processors, and (5) avoiding using multiple processors which belong to different clusters (physical sub-networks).
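A minimal sketch of the master-slave (global parallel) pattern with mpi4py is shown below: the master holds the GA population, chromosomes are scattered to all processes for fitness evaluation, and the results are gathered back. The toy fitness function stands in for one simulation replication and is an assumption, not the waterway simulation used in the study.

```python
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def simulate(chromosome):
    # Placeholder for one simulation replication of a project schedule.
    return sum(chromosome)

if rank == 0:
    # Master: create the population and split it into one chunk per process.
    population = [[random.randint(0, 9) for _ in range(8)] for _ in range(32)]
    chunks = [population[i::size] for i in range(size)]
else:
    chunks = None

# Scatter chromosomes, evaluate fitness locally, gather results at the master.
my_chunk = comm.scatter(chunks, root=0)
my_fitness = [simulate(c) for c in my_chunk]
all_fitness = comm.gather(my_fitness, root=0)

if rank == 0:
    print("evaluated", sum(len(f) for f in all_fitness), "chromosomes")
```

Run with, for example, mpiexec -n 4 python pga_sketch.py; distributing whole simulation replications rather than individual solutions, as the abstract suggests, follows the same scatter/gather pattern.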

