Hybrid intelligent telemedical monitoring and predictive systems

Author(s):  
Uduak Umoh ◽  
Imo Eyoh ◽  
Vadivel S. Murugesan ◽  
Abdultaofeek Abayomi ◽  
Samuel Udoh

Healthcare systems need to overcome the high mortality rate associated with cardiovascular disease and improve patients' health by using decision support models that handle both quantitative and qualitative variables. However, existing models emphasize mathematical procedures, which are suitable only for analyzing quantitative decision variables, and fail to consider several relevant qualitative decision variables that cannot simply be quantified. To address this problem, models such as interval type-2 fuzzy logic (IT2FL) and the flower pollination algorithm (FPA) have been used in isolation. IT2FL is a simplified version of type-2 fuzzy logic (T2FL), with reduced computational complexity and additional design degrees of freedom, but it cannot by itself derive the rules it uses to make decisions. FPA is a bio-inspired method modeled on the pollination process of flowering plants, able to learn, generalize, and process large amounts of measurable data, but it cannot describe how it reaches its decisions. A hybrid intelligent IT2FL-FPA system can overcome the limitations of the individual approaches and strengthen their robustness in coping with healthcare data. This work describes a hybrid intelligent telemedical monitoring and predictive system using IT2FL and FPA. The main objective of this paper is to find the membership function (MF) parameters of the IT2FL system that yield an optimal solution; the FPA technique is employed to search for these optimal MF parameters. The authors tested two data sets for the monitoring and prediction problems: clinical and real-time datasets of cardiovascular disease patients, used for shock-level monitoring and prediction.
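As a rough illustration of how the FPA can tune MF parameters, the sketch below optimizes a parameter vector (standing in for, e.g., the means and spreads of Gaussian IT2FL membership functions) with Lévy-flight global pollination and local pollination. The fitness function, bounds, and constants are placeholders, not the authors' actual configuration.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm (drives global pollination)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def fitness(params):
    """Placeholder objective: e.g., prediction error of an IT2FL system whose
    MF means/spreads are encoded in `params` (assumption, not the paper's fitness)."""
    return float(np.sum((params - 0.5) ** 2))

def fpa(dim=6, n_flowers=20, p_switch=0.8, iters=200):
    pop = rng.uniform(0.0, 1.0, (n_flowers, dim))   # candidate MF parameter sets
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        best = pop[fit.argmin()].copy()
        for i in range(n_flowers):
            if rng.random() < p_switch:             # global pollination: Levy flight toward best
                cand = pop[i] + levy_step(dim) * (best - pop[i])
            else:                                   # local pollination: mix two random flowers
                j, k = rng.choice(n_flowers, size=2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, 0.0, 1.0)          # keep parameters in bounds
            f = fitness(cand)
            if f < fit[i]:                          # greedy replacement
                pop[i], fit[i] = cand, f
    return pop[fit.argmin()], fit.min()

best_params, best_fit = fpa()
print("optimal MF parameters:", best_params, "fitness:", best_fit)
```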

2021 ◽  
Vol 9 (2) ◽  
pp. 152
Author(s):  
Edwar Lujan ◽  
Edmundo Vergara ◽  
Jose Rodriguez-Melquiades ◽  
Miguel Jiménez-Carrión ◽  
Carlos Sabino-Escobar ◽  
...  

This work introduces a fuzzy optimization model that solves, in an integrated way, the berth allocation problem (BAP) and the quay crane allocation problem (QCAP). The problem is solved for multiple quays, considering vessels' imprecise arrival times, and the model optimizes the use of the quays. The combined BAP + QCAP is an NP-hard combinatorial optimization problem, and the decision of which available quay to assign to each vessel adds further complexity. The imprecise vessel arrival times and the decision variables (berthing and departure times) are represented by triangular fuzzy numbers. The model obtains a robust berthing plan that tolerates early and late arrivals and also assigns cranes to each berthed vessel. The model was implemented in the CPLEX solver (IBM ILOG CPLEX Optimization Studio), which quickly finds an optimal solution for very small instances. For medium instances, behavior was inconsistent: a solution (optimal or not) may or may not be found. For large instances, no solutions were found within the assigned processing time (60 min). Although the model was applied to n = 2 quays, it can be adapted to any number of quays. For medium and large instances, the model must be solved with metaheuristics.
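For concreteness, the sketch below shows triangular fuzzy numbers of the kind used to represent imprecise arrival times, with fuzzy addition propagating a service time into a fuzzy departure time. The graded-mean defuzzification is one common ranking rule; the paper's exact operators may differ.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (a, b, c) = (earliest, most plausible, latest)."""
    a: float
    b: float
    c: float

    def __add__(self, other):
        # Fuzzy addition, e.g. arrival time + handling time.
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def defuzz(self):
        # Graded mean integration representation (one common ranking rule).
        return (self.a + 4 * self.b + self.c) / 6

arrival = TFN(8.0, 9.0, 11.0)        # vessel may arrive between hours 8 and 11
handling = TFN(5.0, 6.0, 7.0)        # service time, depends on cranes assigned
departure = arrival + handling
print(departure)                     # TFN(a=13.0, b=15.0, c=18.0)
print(round(departure.defuzz(), 2))  # 15.17 -> crisp value for ranking berth plans
```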


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Geraldine Cáceres Sepúlveda ◽  
Silvia Ochoa ◽  
Jules Thibault

Abstract: Due to the highly competitive market and increasingly stringent environmental regulations, it is paramount to operate chemical processes at their optimal point. In a typical process, there are usually many process variables (decision variables) that must be selected to achieve a set of objectives for which the process will be considered to operate optimally. Because some of the objectives are often contradictory, multi-objective optimization (MOO) can be used to find a suitable trade-off among all objectives that satisfies the decision maker. The first step is to circumscribe a well-defined Pareto domain, the portion of the solution domain comprising a large number of non-dominated solutions. The second step is to rank all Pareto-optimal solutions based on the preferences of an expert on the process, using visualization tools and/or a ranking algorithm. The last step is to implement the best solution to operate the process optimally. In this paper, after reviewing the main methods for solving MOO problems and selecting the best Pareto-optimal solution, four simple MOO problems are solved to demonstrate clearly the wealth of information about a given process that can be obtained from MOO rather than from a single aggregate objective. The four optimization case studies are the design of a PI controller, an SO2-to-SO3 reactor, a distillation column, and an acrolein reactor. The results of these case studies show the benefit of generating and using the Pareto domain to gain a deeper understanding of the underlying relationships between the various process variables and performance objectives.
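The first step described above, circumscribing the Pareto domain, amounts to filtering a large set of sampled solutions down to the non-dominated ones. The sketch below does this for two toy objectives; the objective functions and sampling are illustrative only.

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of non-dominated rows; every objective is minimized."""
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some row is <= in all objectives and < in one.
        dominated = (np.all(objectives <= objectives[i], axis=1) &
                     np.any(objectives < objectives[i], axis=1)).any()
        mask[i] = not dominated
    return mask

rng = np.random.default_rng(1)
decisions = rng.uniform(0.0, 1.0, (500, 2))            # sampled decision variables
f1 = decisions[:, 0]                                   # e.g., operating cost
f2 = (1.0 - decisions[:, 0]) ** 2 + decisions[:, 1]    # e.g., off-spec product
mask = pareto_front(np.column_stack([f1, f2]))
print(f"{mask.sum()} non-dominated solutions out of {len(mask)}")
```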


2016 ◽  
Vol 8 (6) ◽  
Author(s):  
Joshua T. Bryson ◽  
Xin Jin ◽  
Sunil K. Agrawal

Designing an effective cable architecture for a cable-driven robot becomes challenging as the number of cables and degrees of freedom of the robot increases. A methodology has previously been developed to identify the optimal design of a cable-driven robot for a given task using stochastic optimization. This approach is effective in providing an optimal solution for robots with high-dimensional design spaces, but it does not provide insight into the robustness of the optimal solution to the errors in the configuration parameters that arise when a design is implemented. In this work, a methodology is developed to analyze the robustness of the performance of an optimal design to changes in the configuration parameters. This robustness analysis can be used to inform the implementation of the optimal design in a robot while taking into account the precision and tolerances of the implementation. An optimized cable-driven robot leg is used as a motivating example to illustrate the application of the configuration robustness analysis. Following the methodology, the effect of design variations on robot performance is analyzed, and a modified design is developed that minimizes the potential performance degradation due to implementation errors in the design parameters. A robot leg is constructed and used to validate the robustness analysis by demonstrating the predicted effects of variations in the design parameters on the performance of the robot.
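A minimal sketch of this kind of robustness analysis: perturb the nominal optimal design parameters within the expected implementation tolerance and observe the spread of a performance metric. The performance function, parameters, and tolerance below are stand-ins, not the authors' cable-robot model.

```python
import numpy as np

rng = np.random.default_rng(2)

def performance(design):
    """Stand-in for a cable-robot performance metric (e.g., a workspace or
    cable-tension feasibility score) evaluated at attachment points `design`."""
    return -float(np.sum((design - np.array([0.3, 0.7, 1.1])) ** 2))

nominal = np.array([0.3, 0.7, 1.1])    # nominal optimal configuration parameters
tolerance = 0.01                       # assumed +/- 1 cm implementation tolerance

# Sample designs "as built": the nominal design plus tolerance-bounded errors.
samples = nominal + rng.uniform(-tolerance, tolerance, size=(10_000, 3))
scores = np.array([performance(d) for d in samples])
print(f"nominal: {performance(nominal):.5f}  "
      f"mean: {scores.mean():.5f}  worst case: {scores.min():.5f}")
```

A design whose worst-case score under tolerance-bounded perturbation stays close to the nominal score is the robust choice, even if its nominal performance is slightly below the unconstrained optimum.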


Author(s):  
Mounir Hammouche ◽  
Philippe Lutz ◽  
Micky Rakotondrabe

The problem of robust and optimal output feedback design for interval state-space systems is addressed in this paper. An algorithm based on set inversion via interval analysis (SIVIA), combined with interval eigenvalue computation and eigenvalue clustering techniques, is proposed to seek a set of robust gains. This recursive SIVIA-based algorithm approximates, by subpaving, the solution set [K] that satisfies the inclusion of the eigenvalues of the closed-loop system in a desired region of the complex plane. Moreover, an LQ tracker design is employed to find, from the solution set [K], the optimal gain that minimizes the input/output energy and ensures the best behavior of the closed-loop system. Finally, the effectiveness of the algorithm is illustrated by real experiments on a piezoelectric tube actuator.
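The sketch below illustrates the SIVIA part of the approach on a deliberately simple example: paving a box of feedback gains for a double integrator so that the closed-loop poles lie left of a chosen abscissa. The plant, pole region, and Routh-Hurwitz inclusion test are assumptions for illustration; the paper's interval eigenvalue computation is not reproduced.

```python
# Closed loop: double integrator with u = -[k1 k2] x has the characteristic
# polynomial s^2 + k2*s + k1. Shifting s -> z - SIGMA, all poles satisfy
# Re(s) < -SIGMA iff k2 > 2*SIGMA and k1 > SIGMA*k2 - SIGMA**2 (Routh-Hurwitz
# on the shifted polynomial). Both tests are affine in (k1, k2), so exact
# box-wise bounds come from the corners.

SIGMA = 1.0

def classify(box):
    (k1lo, k1hi), (k2lo, k2hi) = box
    if k2lo > 2 * SIGMA and k1lo > SIGMA * k2hi - SIGMA ** 2:
        return "in"      # every gain in the box meets the pole-region spec
    if k2hi <= 2 * SIGMA or k1hi <= SIGMA * k2lo - SIGMA ** 2:
        return "out"     # no gain in the box can meet it
    return "maybe"       # undecided: bisect further

def sivia(box, eps=0.25):
    (k1lo, k1hi), (k2lo, k2hi) = box
    tag = classify(box)
    if tag != "maybe" or max(k1hi - k1lo, k2hi - k2lo) < eps:
        return [(box, tag)]
    if k1hi - k1lo >= k2hi - k2lo:     # bisect the widest edge
        m = (k1lo + k1hi) / 2
        halves = [((k1lo, m), (k2lo, k2hi)), ((m, k1hi), (k2lo, k2hi))]
    else:
        m = (k2lo + k2hi) / 2
        halves = [((k1lo, k1hi), (k2lo, m)), ((k1lo, k1hi), (m, k2hi))]
    return [res for h in halves for res in sivia(h, eps)]

paving = sivia(((0.0, 10.0), (0.0, 10.0)))
print(sum(1 for _, tag in paving if tag == "in"), "boxes proved feasible,",
      sum(1 for _, tag in paving if tag == "maybe"), "boxes undecided")
```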


2016 ◽  
Vol 4 (1) ◽  
pp. 54-66
Author(s):  
Nguyen Khanh ◽  
Jimin Lee ◽  
Susan Reiser ◽  
Donna Parsons ◽  
Sara Russell ◽  
...  

A Methodology for Appropriate Testing When Data is Heterogeneous was originally published and copyrighted in the mid-1990s, written in Turbo Pascal for a 16-bit operating system. While working on an ergonomics dissertation (Yearout, 1987), the author determined that the perceptual lighting preference data was heterogeneous and not normal. Drs. Milliken and Johnson, the authors of Analysis of Messy Data Volume I: Designed Experiments (1989), advised that Satterthwaite's Approximation with Bonferroni's Adjustment, which corrects for pairwise error, be used to analyze the heterogeneous data. This technique of applying linear combinations with adjusted degrees of freedom allowed t-table criteria to be used for group comparisons without resorting to standard nonparametric techniques. Data with unequal variances and unequal sample sizes could thus be analyzed without losing valuable information. Variances raised to the 4th power were so large that they could not be re-entered into basic calculators; the solution was to develop an original software package, written in Turbo Pascal on a 7 ¼ inch disk for a 16-bit operating system. Current 32- and 64-bit operating systems and more efficient programming languages have made that software obsolete and unusable: using the old system could result in many returned values being incorrect or in the system terminating. The purpose of this research was to develop a spreadsheet algorithm with multiple interactive EXCEL worksheets that efficiently applies Satterthwaite's Approximation with Bonferroni's Adjustment to solve the messy data problem. To ensure that the pedagogy is accurate, the resulting package was successfully tested in the classroom with academically diverse students. A comparison between this technique and EXCEL's Add-Ins Analysis ToolPak t-test (Two-Sample Assuming Unequal Variances) was conducted using several different data sets; the EXCEL Add-Ins returned incorrect significant differences. Engineers, ergonomists, psychologists, and social scientists will find the developed program very useful. A major benefit is that spreadsheets will remain current regardless of the status of evolving operating systems.
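The statistical core of the spreadsheet, Welch's t statistic with Satterthwaite's approximate degrees of freedom and a Bonferroni correction across the planned pairwise tests, can be sketched as follows. The sample data and number of comparisons are made up.

```python
import math
from scipy import stats

def satterthwaite_t(x, y):
    """Welch's t statistic and Satterthwaite's approximate degrees of freedom."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((xi - m1) ** 2 for xi in x) / (n1 - 1)   # sample variances
    v2 = sum((yi - m2) ** 2 for yi in y) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Adjusted df; the squared-variance terms here are the large powers that
    # overwhelmed the basic calculators mentioned above.
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

group_a = [12.1, 14.3, 9.8, 11.0, 13.5]                 # unequal n and variances
group_b = [18.2, 16.9, 21.4, 17.7, 19.0, 20.3, 18.8]
m = 3                                                   # planned pairwise comparisons

t, df = satterthwaite_t(group_a, group_b)
p = 2 * stats.t.sf(abs(t), df)                          # two-sided p via t criteria
print(f"t = {t:.3f}, df = {df:.2f}, Bonferroni-adjusted p = {min(1.0, m * p):.4g}")
```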


Author(s):  
David G. Alciatore

Abstract: This paper presents the development and simulation results of a Heuristic Application-Specific Path Planner (HASPP) that can be used to automatically plan trajectories for a manipulator operating around obstacles. Since any implementation of HASPP is inherently application-specific due to its dependence on heuristics, the application of HASPP to an eight-degree-of-freedom Pipe Manipulator is presented as an illustrative example. The development and simulation were implemented on a Silicon Graphics Personal IRIS with the aid of WALKTHRU, a 3-D simulation and animation tool, and software developed in C. HASPP uses extensive knowledge of the manipulator's workspace and makes certain assumptions about the environment in finding trajectories. The algorithm also exploits the manipulator's redundant degrees of freedom to avoid obstacles and joint limits along the trajectory while obtaining a heuristic near-optimal solution. The algorithm is rule-based, governed by heuristics and well-defined geometric tests, and provides extremely fast results: it finds "good" trajectories that are optimal within the defined heuristics. When a trajectory is not feasible for the given geometry, the algorithm offers a diagnosis of the limiting constraints. The Pipe Manipulator implementation of HASPP has been tested thoroughly with the computer graphics model and has demonstrated the ability to reliably determine near-optimal, collision-free erection trajectories completely automatically. No other planning technique available in the literature has demonstrated the ability to solve problems as complex as the example presented here. The use of HASPP with simulation offers many application opportunities, including plant design constructability studies, assembly and maintenance planning, pre-planning and pre-programming of equipment tasks, and equipment operator assistance. This work was the result of construction automation research sponsored by the National Science Foundation.
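HASPP itself is rule-based and application-specific, but one ingredient it names, using redundant degrees of freedom to avoid joint limits, is commonly realized by gradient projection into the Jacobian null space. The sketch below shows that generic technique on a planar 3R arm; it is an illustration of the idea, not HASPP's actual rule set.

```python
import numpy as np

def jacobian(q, lengths=(1.0, 1.0, 1.0)):
    """2x3 position Jacobian of a planar 3R arm (illustrative kinematics)."""
    J = np.zeros((2, 3))
    cum = np.cumsum(q)                              # cumulative joint angles
    for i in range(3):
        J[0, i] = -sum(lengths[j] * np.sin(cum[j]) for j in range(i, 3))
        J[1, i] = sum(lengths[j] * np.cos(cum[j]) for j in range(i, 3))
    return J

def gradient_projection(q, xdot_task, q_mid, k=0.5):
    """Track the end-effector velocity xdot_task while pushing joints toward
    mid-range q_mid in the Jacobian null space (joint-limit avoidance)."""
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(3) - J_pinv @ J              # motions invisible to the task
    qdot_secondary = k * (q_mid - q)                # gradient of a limit-avoidance cost
    return J_pinv @ xdot_task + null_proj @ qdot_secondary

q = np.array([0.2, 0.8, -0.4])
qdot = gradient_projection(q, xdot_task=np.array([0.1, 0.0]), q_mid=np.zeros(3))
print(qdot)   # task-tracking motion plus null-space drift away from joint limits
```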


2021 ◽  
Author(s):  
Martha Frysztacki ◽  
Jonas Hörsch ◽  
Veit Hagenmeyer ◽  
Tom Brown

Energy systems are typically modeled with a low spatial resolution based on administrative boundaries such as countries, which eases data collection and reduces computation times. However, a low spatial resolution can lead to sub-optimal investment decisions for renewable generation, transmission expansion, or both. Ignoring power grid bottlenecks within regions tends to underestimate system costs, while combining locations with different renewable capacity factors tends to overestimate costs. We investigate these two competing effects in a capacity expansion model for Europe's future power system that reduces carbon emissions by 95% compared to 1990 levels, taking advantage of newly available high-resolution data sets and computational advances. We vary the model resolution by changing the number of substations, interpolating between a 37-node model, where every country and synchronous zone is modeled with one node, and a 512-node model based on the locations of electricity substations. If we focus on the effect of renewable resource resolution and ignore network restrictions, we find that a higher resolution allows the optimal solution to concentrate wind and solar capacity at sites with higher capacity factors, reducing system costs by up to 10.5% compared to a low-resolution model; this results in a large swing from offshore to onshore wind investment. However, if we introduce grid bottlenecks by raising the network resolution, costs increase by up to 19%, because generation has to be sourced more locally where demand is high, typically at sites with worse capacity factors. These effects are most pronounced in scenarios where transmission expansion is limited, for example by low social acceptance.
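The core operation behind varying the model resolution is clustering many substations into a chosen number of model nodes. A minimal sketch, with random coordinates standing in for real substation locations and plain k-means standing in for the study's clustering (which additionally respects country and synchronous-zone boundaries):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
substations = rng.uniform([0.0, 40.0], [30.0, 60.0], size=(512, 2))  # lon/lat stand-ins

for n_nodes in (37, 128, 512):
    # Group substations into n_nodes model regions by proximity.
    centroids, label = kmeans2(substations, n_nodes, minit="++", seed=3)
    largest = np.bincount(label).max()
    print(f"{n_nodes:3d}-node model: largest node aggregates {largest} substations")
```

Averaging renewable capacity factors over a large cluster is exactly what blurs good and bad wind or solar sites together, which is the resolution effect the study quantifies.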


Author(s):  
Hossein Arsham ◽  
M. Bardossy ◽  
D. K. Sharma

This chapter provides a critical overview of Linear Programming (LP) from a manager's perspective. The main objective is to provide managers with the essentials of LP, along with cautionary notes and defenses against common modeling issues and software limitations. The authors illustrate the findings by solving a simple LP directly in the space of the original decision variables and constraints, without adding new variables or translating the model to fit a specific solution algorithm. The aims are to unify a diverse set of topics in their natural state, in a manner that is easy to understand, and to provide useful information to managers. Advances in computing software have brought LP tools to the desktop for a variety of applications to support managerial decision-making. However, it is already recognized that current LP tools, in many circumstances, do not answer managerial questions satisfactorily. For instance, there is a costly difference between the mathematical and managerial interpretations of sensitivity analysis: LP software packages provide one-change-at-a-time sensitivity results, whereas the authors develop the largest sensitivity region, which allows for simultaneous dependent and/or independent changes, based on the optimal solution. The procedures are illustrated by numerical examples, including LP in standard form and LP in non-standard form.
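As a concrete reference point for the sensitivity discussion, the sketch below solves a small product-mix LP directly on its original variables and reads the one-change-at-a-time shadow prices from the solver's duals, i.e., exactly the limited output that the chapter's simultaneous sensitivity region is meant to generalize. The numbers are illustrative.

```python
from scipy.optimize import linprog

# maximize 3*x1 + 5*x2
# s.t. x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1, x2 >= 0
res = linprog(c=[-3, -5],                        # linprog minimizes, so negate profit
              A_ub=[[1, 0], [0, 2], [3, 2]],
              b_ub=[4, 12, 18],
              method="highs")
print("x* =", res.x, " max profit =", -res.fun)  # x* = [2. 6.], profit 36
# Shadow prices (duals): marginal profit per extra unit of each resource,
# valid only one change at a time while the optimal basis holds.
print("shadow prices =", -res.ineqlin.marginals)
```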


Author(s):  
Tapan Kumar Singh ◽  
Kedar Nath Das

Most problems that arise in real-life situations are complex in nature. The level of complexity increases with the presence of highly non-linear constraints and a growing number of decision variables. Finding the global solution to such complex problems is a great challenge for researchers. Fortunately, bio-inspired techniques usually provide at least a near-optimal solution where traditional methods become completely handicapped. In this chapter, the behavior of the fruit fly Drosophila is studied. It is worth noting that Drosophila exhibits optimized behavior, particularly when searching for food in nature. This behavior is modeled as an optimization algorithm and implemented in software called Drosophila Food Search Optimization (DFO). DFO has been used to solve a wide range of unconstrained and constrained benchmark functions, along with some real-life problems. The numerical results and analysis show that DFO outperforms state-of-the-art evolutionary techniques, with a faster convergence rate.
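The chapter defines DFO's actual update equations; as a generic illustration of the food-search metaphor such algorithms share, the sketch below moves a swarm toward the current best ("the scent") with occasional random excursions and greedy replacement.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    """Benchmark objective to minimize (stand-in for the chapter's test suite)."""
    return float(np.sum(x ** 2))

def food_search(f, dim=10, swarm=30, iters=300, step=0.5, p_explore=0.1):
    pop = rng.uniform(-5.0, 5.0, (swarm, dim))      # flies scattered over the domain
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        best = pop[fit.argmin()].copy()             # strongest scent found so far
        for i in range(swarm):
            if rng.random() < p_explore:            # occasional random excursion
                cand = rng.uniform(-5.0, 5.0, dim)
            else:                                   # drift toward the scent
                cand = pop[i] + step * rng.standard_normal(dim) * (best - pop[i])
            fc = f(cand)
            if fc < fit[i]:                         # keep only improvements
                pop[i], fit[i] = cand, fc
    return pop[fit.argmin()], fit.min()

x_best, f_best = food_search(sphere)
print(f"best objective found: {f_best:.3e}")
```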


Author(s):  
Huug van den Dool

How many degrees of freedom are evident in a physical process represented by f(s, t)? Questions about "degrees of freedom" (dof) are common, in some form, in mathematics, physics, statistics, and geophysics. The term would mean, for instance, the number of independent directions in which a weight suspended from the ceiling could move. Dofs are important for three reasons that will become apparent in the remaining chapters. First, dofs are critically important in understanding why natural analogues can (or cannot) be applied as a forecast method in a particular problem (Chapter 7). Secondly, understanding dofs leads to ideas about truncating data sets efficiently, which is very important for just about any empirical prediction method (Chapters 7 and 8). Lastly, the number of dofs retained has a bearing on how nonlinear prediction methods can be (Chapter 10).

In view of Chapter 5, one might think that the dof is the total number of orthogonal directions required to reproduce a data set. However, this is impractical, as the dimension would increase (to infinity) with ever denser and slightly imperfect observations. Rather, we need a measure that takes into account the amount of variance represented by each orthogonal direction, because some directions are more important than others. This allows truncation in EOF space without lowering the "effective" dof very much.

We here think schematically of the total atmospheric or oceanic variance about the mean state as being made up of N equal additive variance processes. N can be thought of as the dimension of a phase space in which the atmospheric state at one moment in time is a point. This point moves around over time in the N-dimensional phase space, with the climatology as the origin. The trajectory of a sequence of atmospheric states is thus a complicated Lissajous figure in N dimensions where, importantly, the range of the excursions in each of the N dimensions is the same in the long run. The phase space is a hypersphere with an equal probability radius in all N directions.
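One common way to make the "effective" dof concrete is a moment-matching estimate from the EOF eigenvalue spectrum, N_eff = (Σ_k λ_k)² / Σ_k λ_k², which equals N for N equal-variance processes and shrinks as the spectrum steepens. The sketch below, with an illustrative red-noise-like spectrum (not taken from the book), also shows that truncating to the leading EOFs barely lowers N_eff.

```python
import numpy as np

eigenvalues = np.array([0.9 ** k for k in range(50)])   # variance of each EOF mode

# Effective dof: N_eff = (sum of eigenvalues)^2 / sum of squared eigenvalues.
n_eff = eigenvalues.sum() ** 2 / (eigenvalues ** 2).sum()
print(f"nominal modes: {eigenvalues.size}, effective dof: {n_eff:.1f}")

# EOF truncation: keep the leading modes explaining 95% of the variance and
# note that the effective dof barely drops.
frac = np.cumsum(eigenvalues) / eigenvalues.sum()
k95 = int(np.searchsorted(frac, 0.95)) + 1
lead = eigenvalues[:k95]
print(f"modes kept for 95% variance: {k95}, "
      f"effective dof after truncation: {lead.sum() ** 2 / (lead ** 2).sum():.1f}")
```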

