baseline algorithm
Recently Published Documents

Total documents: 61 (five years: 24)
H-index: 9 (five years: 1)

2021, pp. 193229682110600
Author(s): Ryan Armiger, Monika Reddy, Nick S. Oliver, Pantelis Georgiou, Pau Herrero

Background: User-developed automated insulin delivery systems, also referred to as do-it-yourself artificial pancreas systems (DIY APS), are in use by people living with type 1 diabetes. In this work, we evaluate, in silico, the DIY APS Loop control algorithm and compare it head-to-head with the bio-inspired artificial pancreas (BiAP) controller, for which clinical data are available. Methods: The Python version of the Loop control algorithm, called PyLoopKit, was employed for evaluation purposes. A Python-MATLAB interface was created to integrate PyLoopKit with the UVa-Padova simulator. Two configurations of BiAP (non-adaptive and adaptive) were evaluated. In addition, the Tandem Basal-IQ predictive low-glucose suspend was used as a baseline algorithm. Two scenarios with different levels of variability were used to challenge the algorithms on the adult (n = 10) and adolescent (n = 10) virtual cohorts of the simulator. Results: Both BiAP and Loop improve, or maintain, glycemic control when compared with Basal-IQ. Under the scenario with lower variability, BiAP and Loop perform similarly. However, BiAP, and in particular its adaptive configuration, outperformed Loop in the scenario with higher variability by increasing the percentage time in the glucose target range 70-180 mg/dL (BiAP-Adaptive vs Loop vs Basal-IQ; adults: 89.9% ± 3.2%* vs 79.5% ± 5.3%* vs 67.9% ± 8.3%; adolescents: 74.6% ± 9.5%* vs 53.0% ± 7.7% vs 55.4% ± 12.0%, where * indicates P < .05, calculated in sequential order) while maintaining the percentage time below range (adults: 0.89% ± 0.37% vs 1.72% ± 1.26% vs 3.41% ± 1.92%; adolescents: 2.87% ± 2.77% vs 4.90% ± 1.92% vs 4.17% ± 2.74%). Conclusions: Both the Loop and BiAP algorithms are safe and improve glycemic control when compared, in silico, with Basal-IQ. However, BiAP appears significantly more robust to real-world challenges, outperforming Loop and Basal-IQ in the more challenging scenario.
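The comparison rests on standard consensus glycemic metrics: percentage time in the 70-180 mg/dL target range and percentage time below 70 mg/dL. As a minimal illustration (the helper name and the uniformly sampled trace are assumptions, not part of PyLoopKit or the UVa-Padova simulator), these metrics can be computed from a CGM trace as follows:

```python
import numpy as np

def glycemic_metrics(glucose_mgdl):
    """Percentage time in range (70-180 mg/dL) and below range (<70 mg/dL)
    for a uniformly sampled CGM trace. Hypothetical helper for illustration."""
    g = np.asarray(glucose_mgdl, dtype=float)
    tir = np.mean((g >= 70) & (g <= 180)) * 100.0  # % time in target range
    tbr = np.mean(g < 70) * 100.0                  # % time below range
    return tir, tbr

# Example: one simulated day at 5-minute sampling (288 points).
trace = np.random.default_rng(0).normal(150, 40, 288)
tir, tbr = glycemic_metrics(trace)
print(f"TIR {tir:.1f}%  TBR {tbr:.1f}%")
```

Running each controller on the same virtual cohort and scenario, then comparing these per-subject percentages, is the structure of the head-to-head evaluation reported above.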


2021, Vol 8 (1)
Author(s): Jonathan Shapey, Aaron Kujawa, Reuben Dorent, Guotai Wang, Alexis Dimitriadis, ...

Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network, achieving excellent results equivalent to those achieved by an independent human annotator. Here, we provide the first publicly available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected from 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. The data include all segmentations and contours used in treatment planning, together with details of the administered dose. The implementation of our automated segmentation algorithm uses MONAI, a freely available, open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
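Since the released implementation builds on MONAI, a minimal MONAI training sketch is shown below. It is not the authors' 2.5D network: the file names are placeholders, and a plain 3D U-Net with arbitrary hyperparameters stands in for the published architecture.

```python
import torch
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd
from monai.data import Dataset, DataLoader
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Hypothetical file list; the released dataset's actual layout may differ.
files = [{"image": "vs_001_T1.nii.gz", "label": "vs_001_seg.nii.gz"}]

transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityd(keys=["image"]),
])

loader = DataLoader(Dataset(data=files, transform=transforms), batch_size=1)

# A plain 3D U-Net stand-in for the paper's 2.5D architecture.
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2))
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

for batch in loader:
    opt.zero_grad()
    loss = loss_fn(net(batch["image"]), batch["label"])
    loss.backward()
    opt.step()
```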


2021, Vol 5 (OOPSLA), pp. 1-29
Author(s): Sándor Bartha, James Cheney, Vaishak Belle

Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, developing well-founded analysis tools for these systems requires reverse-engineering a formal semantics as a first step. This can take months or years of effort. Can we (at least partially) automate this process? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. In this paper, we highlight that scaling such methods with the size of the language is very difficult due to state-space explosion, so we propose to learn semantics incrementally. We give a formalisation of Krishnamurthi et al.'s desugaring learning framework in order to clarify the assumptions necessary for an incremental learning algorithm to be feasible. We show that this reformulation allows us to extend the search space and express rules that Krishnamurthi et al. described as challenging, while still retaining feasibility. We evaluate enumerative synthesis as a baseline algorithm and demonstrate that, with our reformulation of the problem, it is possible to learn correct desugaring rules for the example source and core languages proposed by Krishnamurthi et al., in most cases identical to the intended rules. In addition, with user guidance, our system was able to synthesize rules for desugaring list comprehensions and try/catch/finally constructs.
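As a toy illustration of the enumerative-synthesis baseline, the sketch below enumerates candidate right-hand sides for a single desugaring rule and keeps the first one consistent with observed input/output term pairs. The mini-language and rule format are invented for this example and are far simpler than the paper's setting:

```python
from itertools import product

# Toy terms: nested tuples, with "X" and "Y" acting as rule metavariables.
def apply_rule(rhs, binding):
    if isinstance(rhs, str):
        return binding.get(rhs, rhs)
    return tuple(apply_rule(t, binding) for t in rhs)

# Observed desugarings of the sugar ("or", a, b).
examples = [
    (("or", "p", "q"), ("if", "p", "p", "q")),
    (("or", "r", "s"), ("if", "r", "r", "s")),
]

# Enumerate right-hand sides built from an "if" head and metavariables,
# smallest search space first, and keep the first consistent rule.
candidates = (("if",) + body for body in product(["X", "Y"], repeat=3))

for rhs in candidates:
    if all(apply_rule(rhs, {"X": a, "Y": b}) == out
           for ((_, a, b), out) in examples):
        print("learned rule: (or X Y) ->", rhs)  # ('if', 'X', 'X', 'Y')
        break
```

The real problem adds recursive rules, many constructs, and interaction between rules, which is exactly where incremental learning and search-space restrictions become essential.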


Computers, 2021, Vol 10 (10), pp. 123
Author(s): Triyanna Widiyaningtyas, Indriana Hidayah, Teguh Bharata Adji

One of the well-known recommendation systems is memory-based collaborative filtering, which relies on similarity metrics. Recently, similarity metrics have taken into account not only the user rating but also the user behavior score, which indicates the user's preference for each product type (genre). Adding the user behavior score to the similarity metric, however, makes the computation more complex. To reduce this complexity, we combined a clustering method with user behavior score-based similarity. The clustering method applies k-means clustering, with the number of clusters determined using the Silhouette Coefficient, while the user behavior score-based similarity uses User Profile Correlation-based Similarity (UPCSim). Experimental results with the MovieLens 100k dataset showed a faster computation time of 4.16 s, and the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) decreased by 1.88% and 1.46%, respectively, compared to the baseline algorithm.
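Choosing the number of clusters with the Silhouette Coefficient follows a standard recipe: fit k-means for a range of k and keep the k with the highest mean silhouette score. A minimal scikit-learn sketch on synthetic stand-in features (the paper clusters users by rating and behavior scores):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
# Stand-in user feature matrix (e.g., per-genre behavior scores).
X = np.vstack([rng.normal(loc, 0.5, size=(50, 4)) for loc in (0.0, 3.0, 6.0)])

best_k, best_score = None, -1.0
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)  # mean silhouette over all samples
    if score > best_score:
        best_k, best_score = k, score

print(f"chosen k = {best_k} (silhouette = {best_score:.3f})")
```

UPCSim similarity is then computed only among users in the same cluster, which is where the reported reduction in computation time comes from.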


Author(s): Chunlong Fan, Zhimin Zhang, Jianzhong Qiao

Adversarial attacks on neural networks have become an important problem restricting their security applications. Among attacks oriented towards a sample set, designing a universal perturbation that causes most samples to be misclassified is a critical problem. This paper takes neural networks for image classification as the research object. It summarizes existing universal perturbation generation algorithms and proposes a new algorithm that combines batch stochastic gradient ascent with spherical projection search: the perturbation is trained iteratively by stochastic gradient ascent on batches of samples, and the search is restricted to a high-dimensional sphere of fixed radius to reduce the search space. Regularization techniques are also introduced to improve the quality of the generated perturbations. Experimental results show that, compared with the baseline algorithm, the attack success rate increases by more than 10%, the universal perturbation is computed an order of magnitude more efficiently, and its quality is more controllable.
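The core idea, sketched below in PyTorch under simplifying assumptions, is gradient ascent on the classification loss over mini-batches, with the shared perturbation projected back onto the sphere of the chosen radius after every update. This illustrates the batch-gradient-plus-spherical-projection scheme, not the authors' exact algorithm:

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, radius, steps=10, lr=0.05, device="cpu"):
    """Sketch: one shared perturbation v, updated by gradient ascent on the
    batch loss and projected onto the sphere ||v||_2 = radius."""
    model.eval()
    x0, _ = next(iter(loader))
    v = torch.zeros_like(x0[0], device=device)  # one perturbation for all samples
    for _ in range(steps):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            v.requires_grad_(True)
            loss = F.cross_entropy(model(x + v), y)
            grad, = torch.autograd.grad(loss, v)
            with torch.no_grad():
                v = v + lr * grad.sign()  # ascend the loss over the batch
                v = v * (radius / v.norm().clamp_min(1e-12))  # spherical projection
    return v.detach()
```

Projecting onto the sphere (rather than the ball) keeps the perturbation at full budget throughout the search, which is the restriction used to shrink the search space.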


Author(s): Qihao Shan, Sanaz Mostaghim

In this paper, we seek to achieve task allocation in swarm intelligence using an embodied evolutionary framework, which aims to generate divergent and specialized behaviors among a swarm of agents in an online and self-organized manner. In our scenario, specialization is encouraged through a bi-objective composite fitness function for the genomes, the weighted sum of a local and a global fitness function. The former depends only on the behavior of the agent itself, while the latter depends on the effectiveness of cooperation among all nearby agents. We tested two existing variants of embodied evolution on this scenario and compared their performance against that of an individual random-walk baseline algorithm. We found that the two embodied evolutionary algorithms perform well at the extreme weight configurations but are not adequate when the two objective functions interact. We therefore propose a novel bi-objective embodied evolutionary algorithm that handles this scenario by controlling the proportion of specialized behaviors via a dynamic reproductive isolation mechanism. Its performance is compared against that of the other considered algorithms, as well as against the theoretical Pareto frontier produced by NSGA-II.
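The composite fitness is a weighted sum of the two objectives; assuming normalized weights (an assumption here, the abstract only states a weighted sum), a short sketch makes the weight's role explicit:

```python
def composite_fitness(local_fitness, global_fitness, w):
    """Weighted-sum fitness (hypothetical names): w = 1 rewards purely
    individual performance, w = 0 rewards only cooperative performance."""
    assert 0.0 <= w <= 1.0
    return w * local_fitness + (1.0 - w) * global_fitness

print(composite_fitness(0.8, 0.3, w=0.5))  # -> 0.55
```

The reported failure mode of the existing variants sits in the mid-range of the weight, where neither objective dominates and the two terms interact.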


2021
Author(s): Jose M. Pavia, Rafael Romero

The estimation of R×C ecological inference contingency tables from aggregate data is one of the most salient and challenging problems in quantitative social science. Working within the mathematical programming framework, this paper suggests a new direction for tackling the problem. For the first time in the literature, a procedure based on linear programming is proposed to obtain estimates of local contingency tables. Building on this and on the homogeneity hypothesis, we suggest two new ecological inference algorithms. These algorithms represent an important step forward in the ecological inference mathematical programming literature. In addition to generating estimates for local contingency tables and correcting the tendency to produce extreme transfer probability estimates previously observed in other mathematical programming procedures, they prove to be quite competitive and more accurate than the current linear programming baseline algorithm, placing the linear programming approach once again in a prominent position in the ecological inference toolkit. We assess their accuracy using a unique dataset of almost 500 elections in which the real transfer matrices are known. Interested readers can easily use the new algorithms with the aid of the R package lphom.
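The linear programming idea can be stated compactly: choose a row-stochastic R×C transfer matrix P minimizing the total absolute deviation between predicted and observed column counts across polling units. The scipy sketch below implements that generic baseline formulation; it is an illustration, not the lphom implementation:

```python
import numpy as np
from scipy.optimize import linprog

def lp_transfer_matrix(X, Y):
    """X: (K, R) origin counts per unit; Y: (K, C) destination counts per unit.
    Returns the R x C row-stochastic P minimizing sum |X @ P - Y| via an LP."""
    K, R = X.shape
    C = Y.shape[1]
    n_p, n_e = R * C, K * C  # P entries, plus absolute-error slack variables
    cost = np.concatenate([np.zeros(n_p), np.ones(n_e)])  # minimize total error

    A_ub = np.zeros((2 * n_e, n_p + n_e))
    b_ub = np.zeros(2 * n_e)
    for k in range(K):
        for c in range(C):
            row = k * C + c
            for r in range(R):
                A_ub[2 * row, r * C + c] = X[k, r]       #  (XP)_kc - e_kc <= Y_kc
                A_ub[2 * row + 1, r * C + c] = -X[k, r]  # -(XP)_kc - e_kc <= -Y_kc
            A_ub[2 * row, n_p + row] = -1.0
            A_ub[2 * row + 1, n_p + row] = -1.0
            b_ub[2 * row] = Y[k, c]
            b_ub[2 * row + 1] = -Y[k, c]

    A_eq = np.zeros((R, n_p + n_e))  # each row of P sums to 1
    for r in range(R):
        A_eq[r, r * C:(r + 1) * C] = 1.0

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                  b_eq=np.ones(R), bounds=(0, None), method="highs")
    return res.x[:n_p].reshape(R, C)
```

With K units this LP has R·C + K·C variables, tiny by LP standards for typical electoral data; the algorithms in the paper and in lphom differ in how the local tables and homogeneity assumption enter the formulation.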


2021, Vol 14 (6), pp. 984-996
Author(s): Yixing Yang, Yixiang Fang, Maria E. Orlowska, Wenjie Zhang, Xuemin Lin

A bipartite network is a network with two disjoint vertex sets whose edges exist only between vertices from different sets. It has received much interest since it can model the relationship between two different sets of objects in many applications (e.g., between users and items in e-commerce). In this paper, we study the problem of efficient bi-triangle counting for a large bipartite network, where a bi-triangle is a cycle with three vertices from one vertex set and three vertices from the other. Counting bi-triangles has found many real applications, such as computing the transitivity coefficient and clustering coefficient of bipartite networks. To enable efficient bi-triangle counting, we first develop a baseline algorithm relying on the observation that each bi-triangle can be considered as the join of three wedges. Then, we propose a more sophisticated algorithm which regards a bi-triangle as the join of two super-wedges, where a wedge is a path with two edges and a super-wedge is a path with three edges. We further optimize the algorithm by ranking vertices according to their degrees. We have performed extensive experiments on both real and synthetic bipartite networks, the largest containing more than one billion edges, and the results show that the proposed solutions are up to five orders of magnitude faster than the baseline method.
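The wedge-join observation translates directly into a deliberately naive reference counter: for every triple of vertices on one side, multiply the wedge counts between each pair and correct, by inclusion-exclusion, for center vertices that would coincide. A Python sketch suitable only for toy graphs (it is cubic in one vertex set, unlike the paper's algorithms):

```python
from itertools import combinations

def count_bitriangles(adj, right):
    """adj: dict mapping each right-side vertex to its set of left neighbours.
    Counts 6-cycles with three vertices on each side (bi-triangles)."""
    total = 0
    for v1, v2, v3 in combinations(right, 3):
        a = len(adj[v1] & adj[v2])  # wedges between v1 and v2
        b = len(adj[v2] & adj[v3])  # wedges between v2 and v3
        c = len(adj[v3] & adj[v1])  # wedges between v3 and v1
        t = len(adj[v1] & adj[v2] & adj[v3])
        # Inclusion-exclusion: the three wedge centers must be distinct.
        total += a * b * c - t * (a + b + c) + 2 * t
    return total

# Sanity check: the complete bipartite graph K_{3,3} has exactly 6 bi-triangles.
adj = {v: {"u1", "u2", "u3"} for v in ("v1", "v2", "v3")}
print(count_bitriangles(adj, ["v1", "v2", "v3"]))  # -> 6
```

The paper's faster algorithms avoid enumerating triples altogether by joining two super-wedges (3-edge paths) and by ordering vertices by degree.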

