unrealistic assumption
Recently Published Documents

TOTAL DOCUMENTS: 48 (FIVE YEARS: 21)
H-INDEX: 8 (FIVE YEARS: 3)

2021 ◽ Author(s): Rafaat Hussein

The understanding of the engineering performance of green laminated composites is necessary for the design of load-bearing components in building and infrastructure construction and in packaging applications. These components are made of thin outer laminae, called skins or faces, and a thick inner layer called the core. The use of bonding is unavoidable in the assembly of these composite products. Like all materials, bonding materials have finite mechanical properties, e.g. stiffness, yet in the literature they are assumed to be perfectly rigid. That is an unrealistic assumption. Our analytical solutions replace this assumption with the real properties of the bonding. In general, the analytical formulations are based on the equilibrium equations of forces, the compatibility of interlaminar stresses and deformations, and the geometrical conditions of the panels. Once solutions are obtained, the next step is to evaluate them. The numerical evaluations show that assuming perfectly rigid bonding in laminated composites greatly underestimates the true performance. At low values of adhesive stiffness, the serviceability is multiple orders of magnitude greater than that at high values. The logical question is thus: what constitutes perfect bonding? The answer lies in the core-to-adhesive stiffness ratio: the lower the ratio, the higher the error incurred by the rigid-bond theories. It is worth noting that green composites in this chapter refer to components made of traditional materials such as wood, in addition to newly developed bio-based and biodegradable composites made of renewable resources. In addition, the terms bonding and adhesive are used interchangeably.


Symmetry ◽ 2021 ◽ Vol 13 (12) ◽ pp. 2443 ◽ Author(s): Ashraf Ahmad, Yousef AbuHour, Firas Alghanim

A Distributed Denial of Service (DDoS) attack is a type of cybercrime that renders a target service unavailable by overwhelming it with traffic from several sources (attack nodes). In this paper, we focus on DDoS attacks mounted by spreading bots throughout a computer network. A mathematical differential-equation model is proposed to represent the dynamics of nodes in the different compartments of the model. The model considers two levels of security, under the assumption that recovered nodes do not return to the same security level. In previous models, recovered nodes return to the susceptible class at the same security level, which is an unrealistic assumption. Moreover, it is assumed that the attacker can use the infected target nodes to attack again. With such epidemic-like assumptions of infection, different cases are presented and discussed, the stability of the model is analyzed, and the reversal of the symmetry transformation of the attacking-node population is proven. The proposed model has many parameters in order to precisely describe the infection movement and propagation. Numerical simulation methods in MATLAB are used to solve the developed system of equations, with the aim of finding the best countermeasure to control DDoS spread throughout a network.
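The paper's exact system of equations is not reproduced in the abstract. Purely as a sketch of the epidemic-style compartmental approach it describes, assuming a minimal susceptible-infected-recovered structure in which cleaned nodes stay at a hardened security level instead of rejoining the original susceptible pool (all compartments and rate constants below are hypothetical), such a model could be integrated in Python rather than MATLAB as follows:

    # Toy compartmental model of bot propagation, illustrative only.
    from scipy.integrate import solve_ivp

    def botnet_ode(t, y, beta, gamma):
        s, i, r = y                        # susceptible, infected (bots), recovered/hardened
        n = s + i + r
        ds = -beta * s * i / n             # new infections through contact with bots
        di = beta * s * i / n - gamma * i  # bots are cleaned at rate gamma
        dr = gamma * i                     # cleaned nodes move to the higher security level
        return [ds, di, dr]

    sol = solve_ivp(botnet_ode, (0, 60), [9900, 100, 0], args=(0.35, 0.1))
    print("remaining bots after 60 time units:", sol.y[1, -1])

The actual model adds a second security level, re-attack by infected targets, and further parameters, so this sketch only conveys the general mechanics.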


Author(s): Torben Martinussen

This article surveys results concerning the interpretation of the Cox hazard ratio in connection with causality in a randomized study with a time-to-event response. The Cox model is assumed to be correctly specified, and we investigate whether the typical end product of such an analysis, the estimated hazard ratio, has a causal interpretation as a hazard ratio. It has been pointed out that this is not possible due to selection. We provide more insight into the interpretation of hazard ratios and hazard differences, investigating what can be learned about a treatment effect from the hazard ratio approaching unity after a certain period of time. The conclusion is that the Cox hazard ratio is not causally interpretable as a hazard ratio unless there is no treatment effect or an untestable and unrealistic assumption holds. We give a hazard ratio that has a causal interpretation and study its relationship to the Cox hazard ratio. Expected final online publication date for the Annual Review of Statistics, Volume 9 is March 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
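For reference, the proportional hazards model under discussion specifies the hazard at time t for a subject with randomized treatment indicator Z as

    \lambda(t \mid Z) = \lambda_0(t) \exp(\beta Z),

so the Cox hazard ratio between the treated (Z = 1) and control (Z = 0) arms is \exp(\beta), a single number assumed constant over time. The selection issue raised in the article is that, even under randomization, the subjects still at risk at a later time t are no longer comparable between arms once treatment has an effect, which is what blocks a causal reading of this ratio.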


2021 ◽ Vol 2021 ◽ pp. 1-13 ◽ Author(s): Mantun Chen, Yongjun Wang, Zhiquan Qin, Xiatian Zhu

This work introduces a novel data augmentation method for few-shot website fingerprinting (WF) attacks, where only a handful of training samples per website are available for deep learning model optimization. Moving beyond earlier WF methods that rely on manually engineered feature representations, more advanced deep learning alternatives demonstrate that learning feature representations automatically from training data is superior. Nonetheless, this advantage rests on the unrealistic assumption that many training samples per website exist; without them, the advantage disappears. To address this, we introduce a model-agnostic, efficient, and harmonious data augmentation (HDA) method that can improve deep WF attacking methods significantly. HDA involves both intra-sample and inter-sample data transformations that can be used in a harmonious manner to expand a tiny training dataset to an arbitrarily large collection, therefore effectively and explicitly addressing the intrinsic data scarcity problem. We conducted extensive experiments to validate our HDA for boosting state-of-the-art deep learning WF attack models in both closed-world and open-world attacking scenarios, in the absence and presence of a strong defense. For instance, in the more challenging and realistic evaluation scenario with a WTF-PAD-based defense, our HDA method surpasses the previous state-of-the-art results by nearly 3% in classification accuracy in the 20-shot learning case. An earlier version of this work, Chen et al. (2021), was presented as a preprint on arXiv (https://arxiv.org/abs/2101.10063).
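HDA's concrete transformations are defined in the paper, not in this abstract. As a rough, hypothetical illustration of the idea of pairing an intra-sample transformation (perturbing a single traffic trace) with an inter-sample one (mixing two traces of the same website), assuming traces are fixed-length arrays of signed packet sizes and using made-up function names and parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    def intra_sample(trace, drop_prob=0.05):
        # Perturb one trace: randomly zero out a small fraction of packets.
        mask = rng.random(trace.shape) >= drop_prob
        return trace * mask

    def inter_sample(trace_a, trace_b, alpha=0.2):
        # Mix two traces of the same website (mixup-style convex combination).
        lam = rng.beta(alpha, alpha)
        return lam * trace_a + (1 - lam) * trace_b

    def augment(traces, n_new):
        # Expand a tiny per-website training set into n_new synthetic traces.
        idx = rng.integers(len(traces), size=(n_new, 2))
        return np.stack([intra_sample(inter_sample(traces[i], traces[j]))
                         for i, j in idx])

Applied per website, such an expansion lets a deep WF model be trained as if many samples were available, which is the data scarcity problem HDA targets.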


Author(s): Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, ...

In this work, we propose a domain generalization (DG) approach to learn from several labeled source domains and transfer knowledge to a target domain that is inaccessible during training. Considering the inherent conditional and label shifts, we would expect the alignment of p(x|y) and p(y). However, the widely used domain-invariant feature learning (IFL) methods rely on aligning the marginal concept shift w.r.t. p(x), which rests on the unrealistic assumption that p(y) is invariant across domains. We therefore propose a novel variational Bayesian inference framework to enforce the conditional distribution alignment w.r.t. p(x|y) via prior distribution matching in a latent space, which also takes the marginal label shift w.r.t. p(y) into consideration with posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to label shift and that the cross-domain accuracy is significantly improved, thereby achieving superior performance over the conventional IFL counterparts.
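As a toy numerical illustration of that assumption (unrelated to the paper's actual variational framework; all distributions and numbers below are invented): if two domains share identical class-conditionals p(x|y) but differ in p(y), their feature marginals p(x) already disagree, so forcing marginal alignment must distort the class-conditionals under label shift.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_domain(p_pos, n=50_000):
        # Same class-conditionals in every domain:
        # class 0 ~ N(0, 1), class 1 ~ N(2, 1); only p(y) changes.
        y = rng.random(n) < p_pos
        return np.where(y, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

    x_src = sample_domain(p_pos=0.2)   # source domain: 20% positive labels
    x_tgt = sample_domain(p_pos=0.8)   # target domain: 80% positive labels

    # p(x|y) matches by construction, yet the feature marginals differ:
    print("source feature mean:", x_src.mean())   # about 0.4
    print("target feature mean:", x_tgt.mean())   # about 1.6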


2021 ◽ Author(s): Kristin Jankowsky, Ulrich Schroeders

Attrition in longitudinal studies is a major threat to the representativeness of the data and the generalizability of the findings. Typical approaches to address systematic nonresponse are either expensive and unsatisfactory (e.g., oversampling) or rely on the unrealistic assumption of data missing at random (e.g., multiple imputation). Thus, models that effectively predict who is most likely to drop out at subsequent occasions might offer the opportunity to take countermeasures (e.g., incentives). With the current study, we introduce a longitudinal model validation approach and examine whether attrition in two nationally representative longitudinal panel studies can be predicted accurately. We compare the performance of a basic logistic regression model with that of a more flexible, data-driven machine learning algorithm, gradient boosting machines. Our results show almost no difference in accuracy between the two modeling approaches, which contradicts claims of similar studies on survey attrition. Prediction models could not be generalized across surveys and were less accurate when tested at a later survey wave. We discuss the implications of these findings for survey retention and the use of complex machine learning algorithms, and we give some recommendations for dealing with study attrition.
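The panel data and predictors used in the study are not shown here; as a generic sketch of the comparison it describes, using scikit-learn and synthetic stand-in data (all variable names and figures below are hypothetical), the two model classes can be benchmarked on a held-out set as follows:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # X: respondent features at wave t; y: 1 if the respondent drops out at wave t+1.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.2).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
        proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        print(type(model).__name__, "AUC:", round(roc_auc_score(y_te, proba), 3))

The study's longitudinal validation additionally trains on one wave or survey and tests on a later wave or a different survey, which is where the reported drops in accuracy appear.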


2021 ◽ Vol 12 (1) ◽ Author(s): Isaac Klickstein, Francesco Sorrentino

The field of optimal control typically requires the assumption of perfect knowledge of the system one desires to control, which is an unrealistic assumption for biological systems, or networks, that are typically affected by high levels of uncertainty. Here, we investigate the minimum energy control of network ensembles, which may take one of a number of possible realizations. We ensure that the derived controller can perform the desired control with a tunable amount of accuracy, and we study how the control energy and the overall control cost scale with the number of possible realizations. Our focus is on characterizing the solution of the optimal control problem in the limit in which the systems are drawn from a continuous distribution, and in particular on how to properly pose the weighting terms in the objective function. We verify the theory in three examples of interest: a unidirectional chain network with uncertain edge weights and self-loop weights, a network where each edge weight is drawn from a given distribution, and the Jacobian of the dynamics corresponding to the cell-signaling network of autophagy in the presence of uncertain parameters.
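For context, the classical single-realization problem that the ensemble formulation generalizes has a closed-form solution: the minimum-energy input driving \dot{x} = A x + B u from x_0 to x_f in time t_f is

    u^*(t) = B^\top e^{A^\top (t_f - t)} \, W(t_f)^{-1} \left( x_f - e^{A t_f} x_0 \right),
    \qquad
    W(t_f) = \int_0^{t_f} e^{A s} B B^\top e^{A^\top s} \, ds,

with control energy (x_f - e^{A t_f} x_0)^\top W(t_f)^{-1} (x_f - e^{A t_f} x_0). When A and B are only known up to an ensemble of possible realizations, as in this paper, the objective must trade the terminal errors of the individual realizations against the shared control energy, which is where the weighting terms studied by the authors enter.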


Author(s): Yosi Ben-Asher, Esti Stein, Vladislav Tartakovsky

Pass transistor logic (PTL) is a circuit design technique wherein transistors are used as switches. The reconfigurable mesh (RM) is a model that exploits the power of PTL signal switching by enabling flexible bus connections in a grid of processing elements containing switches. RM algorithms have theoretical results proving that [Formula: see text] can speed up computations significantly. However, the RM assumes that the latency of broadcasting a signal through [Formula: see text] switches (bus length) is 1. This is an unrealistic assumption that prevents physical realizations of the RM. We propose the restricted-RM (RRM), wherein the bus lengths are restricted to [Formula: see text], [Formula: see text]. We show that counting the number of 1-bits in an input of [Formula: see text] bits can be done in [Formula: see text] steps for [Formula: see text] by an [Formula: see text] RRM. An almost matching lower bound is presented, using a technique that adds to the few existing lower-bound techniques in this area. Finally, the algorithm was coded directly on an FPGA, outperforming an optimal tree of adders. This work presents an alternative way of counting, which is fundamental for summing, beating regular Boolean circuits for large numbers, where summing a vast amount of numbers is the basis of any accelerator in embedded systems such as neural nets and streaming.


2021 ◽ Vol 11 (1) ◽ Author(s): Weilong Wang, Kiyoshi Tamaki, Marcos Curty

Measurement-device-independent quantum key distribution (MDI-QKD) can remove all detection side-channels from quantum communication systems. The security proofs require, however, that certain assumptions on the sources are satisfied. This includes, for instance, the requirement that there is no information leakage from the transmitters of the senders, which unfortunately is very difficult to guarantee in practice. In this paper we relax this unrealistic assumption by presenting a general formalism to prove the security of MDI-QKD with leaky sources. With this formalism, we analyze the finite-key security of two prominent MDI-QKD schemes, a symmetric three-intensity decoy-state MDI-QKD protocol and a four-intensity decoy-state MDI-QKD protocol, and determine their robustness against information leakage from both the intensity modulator and the phase modulator of the transmitters. Our work shows that MDI-QKD is feasible within a reasonable time frame of signal transmission given that the sources are sufficiently isolated. Thus, it provides an essential reference for experimentalists to ensure the security of implementations of MDI-QKD in the presence of information leakage.


Entropy ◽ 2020 ◽ Vol 22 (8) ◽ pp. 875 ◽ Author(s): Staša Milojević

We propose a new citation model that builds on existing models which explicitly or implicitly include "direct" and "indirect" (learning about a cited paper's existence from references in another paper) citation mechanisms. Our model departs from the usual, unrealistic assumption of a uniform probability of direct citation, under which initial differences in citation arise purely at random. Instead, we demonstrate that a two-mechanism model in which the probability of direct citation is proportional to the number of authors on a paper (team size) is able to reproduce the empirical citation distributions of articles published in the field of astronomy remarkably well, and at different points in time. The interpretation of our model is that the intrinsic citation capacity, and hence the initial visibility of a paper, is enhanced when more people are intimately familiar with a work, favoring papers from larger teams. While the intrinsic citation capacity cannot depend only on team size, our model demonstrates that it must be to some degree correlated with it, and distributed in a similar way, i.e., with a power-law tail. Consequently, our team-size model qualitatively explains the existence of a correlation between the number of citations and the number of authors on a paper.
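The model's full specification is given in the paper; purely as an illustrative toy simulation of the two mechanisms (direct citations drawn with probability proportional to team size, indirect citations approximated here by attachment to already-cited papers), with every parameter below made up:

    import numpy as np

    rng = np.random.default_rng(0)

    n_papers, n_citations = 2000, 20000
    team_size = rng.pareto(2.0, n_papers) + 1     # heavy-tailed team sizes
    p_direct = team_size / team_size.sum()        # direct citation proportional to team size
    citations = np.zeros(n_papers, dtype=int)

    for _ in range(n_citations):
        if citations.sum() == 0 or rng.random() < 0.5:
            # direct mechanism: cite in proportion to team size
            cited = rng.choice(n_papers, p=p_direct)
        else:
            # indirect mechanism: a paper is found via papers that already cite it
            cited = rng.choice(n_papers, p=citations / citations.sum())
        citations[cited] += 1

    print("most cited paper:", citations.max(), "mean citations:", citations.mean())

Such a toy run yields a skewed citation distribution shaped by the team-size distribution, in line with the qualitative behavior described above.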

