On Some Properties of the Mechanical Topology That Affect Parallel Solvers

Author(s):  
Alessandro Tasora ◽  
Dan Negrut

The efficiency of parallel solvers for large multibody systems is affected by the topology of the network of constraints. In the most general setting, that is, problems involving contacts between large numbers of parts, the mechanical topology cannot be predicted a priori and also changes during the simulation. Depending on the strategy for splitting the computational workload across the processing units, different types of worst-case scenario can arise. In this paper we discuss several approaches to the parallelization of multibody solvers, ranging from fine-grained parallelism on GPUs to coarse-grained parallelism on clusters, and we show how their bottlenecks are directly related to graph properties of the mechanical topology. Drawing on the topological analysis of the constraint network and its splitting, lower bounds on the computational complexity of the solver methods are presented, and guidelines for limiting the worst-case scenarios in parallel algorithms are put forward.
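The link between constraint topology and parallel cost can be made concrete with a toy calculation: when bodies are split across processing units, every constraint whose two bodies land on different units forces inter-unit communication. The sketch below (illustrative only, not the authors' algorithm; the contact network and partition are invented) counts such cut constraints.

```python
def cut_constraints(constraints, partition):
    """Count constraints whose two bodies are assigned to different
    processing units -- a simple proxy for communication overhead."""
    return sum(1 for a, b in constraints if partition[a] != partition[b])

# Hypothetical contact network: 6 bodies joined in a ring of constraints.
constraints = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
# Bodies 0-2 on unit 0, bodies 3-5 on unit 1.
partition = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_constraints(constraints, partition))  # edges (2,3) and (0,5) cross -> 2
```

A splitting strategy that minimizes this cut limits the worst-case communication the abstract alludes to; for contact problems the cut can change every time step, since the graph itself evolves.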

Symmetry ◽  
2019 ◽  
Vol 11 (12) ◽  
pp. 1440 ◽  
Author(s):  
Erhu Zhang ◽  
Bo Li ◽  
Peilin Li ◽  
Yajun Chen

Deep learning has been successfully applied to classification tasks in many fields due to its strong performance in learning discriminative features. However, the application of deep learning to printing defect classification is rare, and there is almost no research on classification methods for printing defects with imbalanced samples. In this paper, we present a deep convolutional neural network model that extracts deep features directly from printed image defects. Furthermore, considering the asymmetry in the number of defect samples of different types, i.e., the class distribution is unbalanced, seven types of over-sampling methods were investigated to determine the best one. To verify the practical applicability of the proposed deep model and the effectiveness of the extracted features, a large dataset of printing defect samples was built; all samples were collected from actual printing products in a factory. The dataset includes a coarse-grained dataset with four types of printing samples and a fine-grained dataset with eleven types of printing samples. The experimental results show that the proposed deep model achieves a 96.86% classification accuracy on the coarse-grained dataset without over-sampling, the highest accuracy among well-known deep models based on transfer learning. Moreover, combining the proposed deep model with the SVM-SMOTE over-sampling method improves the accuracy on the fine-grained dataset by more than 20% compared to the method without over-sampling.
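As background on the over-sampling family the abstract refers to, the minimal numpy sketch below shows the core interpolation step behind SMOTE-style methods: a synthetic minority sample is placed on the segment between a real sample and one of its nearest neighbours. (SVM-SMOTE additionally concentrates on samples near the SVM decision boundary.) The data and parameters here are invented for illustration, not taken from the paper.

```python
import numpy as np

def smote_like(minority, n_new, k=2, rng=None):
    """Generate synthetic minority-class samples by interpolating
    between each sample and one of its k nearest neighbours --
    the core idea behind SMOTE-style over-sampling."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)
        nn = np.argsort(d)[1:k + 1]        # k nearest neighbours, excluding self
        j = rng.choice(nn)
        gap = rng.random()                 # random point on the segment x -> neighbour
        out.append(x + gap * (minority[j] - x))
    return np.array(out)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like(minority, n_new=4, rng=0)
print(synthetic.shape)  # (4, 2)
```

Because every synthetic point lies between two real minority samples, the method enlarges the minority class without simply duplicating examples.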


Author(s):  
Wang Zheng-fang ◽  
Z.F. Wang

The main purpose of this study is to evaluate the chloride SCC resistance of a duplex stainless steel, 00Cr18Ni5Mo3Si2 (18-5Mo), and of its welded coarse-grained zone (CGZ). 18-5Mo is a dual-phase (A+F) stainless steel with a yield strength of 512 N/mm². The secondary phase (A phase) accounts for 30–35% of the total, with fine-grained and homogeneously distributed A and F phases (Fig. 1). After the material is subjected to a specific welding thermal cycle, i.e. Tmax = 1350 °C and t8/5 = 20 s, the microstructure changes from a fine-grained to a coarse-grained morphology, and from a homogeneous distribution of the A phase to a concentration of the A phase (Fig. 2). Meanwhile, the proportion of A phase is reduced from 35% to 5–10%. For this reason the region is known as the welded coarse-grained zone (CGZ). Because the microstructure of the base metal differs from that of the welded CGZ, their chloride SCC resistance also differs. Test procedure: constant-load tensile tests (CLTT) were performed to record the Esce–t curve, which describes corrosion crack growth; tf, the time to fracture, was also recorded as a combined electrochemical and mechanical measure for SCC resistance evaluation. Test environment: a boiling 42% MgCl2 solution at 143 °C was used. In addition, microanalyses were conducted with light microscopy (LM), SEM, TEM, and Auger electron spectroscopy (AES) to reveal the correlation between the CLTT results and the microstructural observations.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wei-Tang Chang ◽  
Stephanie K. Langella ◽  
Yichuan Tang ◽  
Sahar Ahmad ◽  
Han Zhang ◽  
...  

The hippocampus is critical for learning and memory and may be separated into anatomically-defined hippocampal subfields (aHPSFs). Hippocampal functional networks, particularly during resting state, are generally analyzed using aHPSFs as seed regions, with the underlying assumption that function is homogeneous within a subfield yet heterogeneous between subfields. However, several prior studies have observed similar resting-state functional connectivity (FC) profiles between aHPSFs. Alternatively, data-driven approaches investigate hippocampal functional organization without a priori assumptions, but insufficient spatial resolution may cast doubt on the reliability of the results. Hence, we developed a functional magnetic resonance imaging (fMRI) sequence on a 7 T MR scanner achieving 0.94 mm isotropic resolution with a TR of 2 s and brain-wide coverage to (1) investigate the functional organization within the hippocampus at rest, and (2) compare the brain-wide FC associated with fine-grained aHPSFs and functionally-defined hippocampal subfields (fHPSFs). This study showed that the fHPSFs were arranged along the longitudinal axis and were not comparable to the lamellar structures of the aHPSFs. For brain-wide FC, the fHPSFs, rather than the aHPSFs, revealed that a number of subfields connected specifically with certain functional networks, and different functional networks showed preferential connections with different portions of the hippocampus.
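The "data-driven" alternative the abstract mentions can be illustrated with a toy parcellation: cluster voxel time series by a correlation-style similarity so that voxels with similar fluctuations group together, without anatomical priors. This is a simplified stand-in for such pipelines, not the authors' method; the signals below are synthetic.

```python
import numpy as np

def parcellate(timeseries, n_parcels, n_iter=20):
    """Toy data-driven parcellation: k-means with a correlation-style
    similarity, so voxels with similar fluctuations are grouped together
    (a stand-in for functionally-defined subfields)."""
    X = timeseries - timeseries.mean(axis=1, keepdims=True)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)      # unit-norm rows
    # Greedy farthest-point initialization keeps the sketch deterministic.
    centers = [X[0]]
    for _ in range(n_parcels - 1):
        sims = np.max(np.stack([X @ c for c in centers]), axis=0)
        centers.append(X[np.argmin(sims)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)         # most-correlated centre
        for k in range(n_parcels):
            if np.any(labels == k):
                c = X[labels == k].mean(axis=0)
                centers[k] = c / np.linalg.norm(c)
    return labels

# Two synthetic "regions" with distinct temporal signatures plus noise.
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(0)
region_a = np.sin(t) + 0.1 * rng.normal(size=(5, 200))
region_b = np.cos(3 * t) + 0.1 * rng.normal(size=(5, 200))
labels = parcellate(np.vstack([region_a, region_b]), n_parcels=2)
print(labels)
```

The clustering recovers the two regions purely from the signals, mirroring how functionally-defined subfields need not align with anatomical boundaries.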


Author(s):  
Zhuliang Yao ◽  
Shijie Cao ◽  
Wencong Xiao ◽  
Chen Zhang ◽  
Lanshun Nie

In trained deep neural networks, unstructured pruning can remove redundant weights to lower storage cost. However, it requires customized hardware to speed up practical inference. Another trend accelerates sparse model inference on general-purpose hardware by adopting coarse-grained sparsity, pruning or regularizing consecutive weights for efficient computation, but this often sacrifices model accuracy. In this paper, we propose a novel fine-grained sparsity approach, Balanced Sparsity, to achieve high model accuracy efficiently on commercial hardware. Our approach adapts to the high-parallelism properties of GPUs, showing great potential for sparsity in widely deployed deep learning services. Experimental results show that Balanced Sparsity achieves up to 3.1x practical speedup for model inference on GPUs, while retaining the same high model accuracy as fine-grained sparsity.
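The balancing idea can be sketched in a few lines: split each weight row into equal-sized blocks and keep the same number of largest-magnitude weights in every block, so GPU threads assigned to different blocks do equal work. The numpy sketch below illustrates the pruning pattern only (the block sizes and matrix are invented; this is not the paper's implementation).

```python
import numpy as np

def balanced_prune(W, blocks_per_row, keep_per_block):
    """Prune each row block-wise, keeping the largest-magnitude weights
    in every block so each block holds the same number of nonzeros --
    a load-balanced sparsity pattern friendly to GPU parallelism."""
    M = np.zeros_like(W)
    for r, row in enumerate(W):
        for b, blk in enumerate(np.split(row, blocks_per_row)):
            keep = np.argsort(np.abs(blk))[-keep_per_block:]
            start = b * len(blk)
            M[r, start + keep] = blk[keep]
    return M

W = np.arange(1.0, 9.0).reshape(1, 8)  # [[1 2 3 4 5 6 7 8]]
P = balanced_prune(W, blocks_per_row=2, keep_per_block=1)
print(P)  # keeps 4 (largest in block 1) and 8 (largest in block 2)
```

Unlike coarse-grained schemes that drop whole groups of consecutive weights, this keeps selection fine-grained within each block, which is why accuracy can match unstructured pruning while the per-block nonzero count stays uniform.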


Synthese ◽  
2021 ◽  
Author(s):  
Matti Sarkia

This paper analyzes three contrasting strategies for modeling intentional agency in contemporary analytic philosophy of mind and action, and draws parallels between them and similar strategies of scientific model-construction. Gricean modeling involves identifying primitive building blocks of intentional agency, and building up from such building blocks to prototypically agential behaviors. Analogical modeling is based on picking out an exemplary type of intentional agency, which is used as a model for other agential types. Theoretical modeling involves reasoning about intentional agency in terms of some domain-general framework of lawlike regularities, which involves no detailed reference to particular building blocks or exemplars of intentional agency (although it may involve coarse-grained or heuristic reference to some of them). Given the contrasting procedural approaches that they employ and the different types of knowledge that they embody, the three strategies are argued to provide mutually complementary perspectives on intentional agency.


2021 ◽  
Vol 83 (4) ◽  
Author(s):  
S. Adam Soule ◽  
Michael Zoeller ◽  
Carolyn Parcheta

Hawaiian and other ocean island lava flows that reach the coastline can deposit significant volumes of lava in submarine deltas. The catastrophic collapse of these deltas represents one of the most significant, but least predictable, volcanic hazards at ocean islands. The volume of lava deposited below sea level in delta-forming eruptions and the mechanisms of delta construction and destruction are rarely documented. Here, we report on bathymetric surveys and ROV observations following the Kīlauea 2018 eruption that, along with a comparison to the deltas formed at Pu‘u ‘Ō‘ō over the past decade, provide new insight into delta formation. Bathymetric differencing reveals that the 2018 deltas contain more than half of the total volume of lava erupted. In addition, we find that the 2018 deltas are composed largely of coarse-grained volcanic breccias and intact lava flows, in contrast with those at Pu‘u ‘Ō‘ō, which contain a large fraction of fine-grained hyaloclastite. We attribute this difference to less efficient fragmentation of the 2018 ‘a‘ā flows, which led to fragmentation by collapse rather than by hydrovolcanic explosion. We suggest a mechanistic model in which the characteristic grain size influences the form and stability of the delta: fine-grained deltas (Pu‘u ‘Ō‘ō) experience larger landslides with greater run-out, supported by increased pore pressure, while coarse-grained deltas (Kīlauea 2018) experience smaller landslides that quickly stop as the pore pressure rapidly dissipates. This difference, if validated for other lava deltas, would provide a means to assess potential delta stability in future eruptions.


Author(s):  
Shanshan Yu ◽  
Jicheng Zhang ◽  
Ju Liu ◽  
Xiaoqing Zhang ◽  
Yafeng Li ◽  
...  

In order to solve the problem of distributed denial of service (DDoS) attack detection in software-defined networks, we propose a cooperative DDoS attack detection scheme based on entropy and ensemble learning. This method sets up a coarse-grained preliminary detection module based on entropy in the edge switch to monitor the network status in real time and report to the controller if any abnormality is found. Simultaneously, a fine-grained precise attack detection module is designed in the controller, and an ensemble learning-based algorithm is utilized to further identify abnormal traffic accurately. In this framework, the idle computing capability of edge switches is fully utilized, following the design philosophy of edge computing, to offload part of the detection task from the control plane to the data plane. Simulation results for two common DDoS attack methods, ICMP and SYN flooding, show that the system can effectively detect DDoS attacks and greatly reduce the southbound communication overhead, the burden on the controller, and the detection delay.
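The entropy-based coarse detection stage can be illustrated with a short sketch: compute the Shannon entropy of a packet-header field (here, hypothetical destination IPs) over a traffic window; a sharp drop suggests traffic converging on a single victim. The addresses and threshold intuition below are invented for illustration, not taken from the paper.

```python
import math
from collections import Counter

def header_entropy(values):
    """Shannon entropy (bits) of a header-field distribution.
    A sudden drop hints at a DDoS: traffic converging on one target."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Normal window: traffic spread evenly across 8 destination hosts.
normal = ["10.0.0.%d" % (i % 8) for i in range(64)]
# Attack window: traffic focused on one victim.
attack = ["10.0.0.1"] * 60 + ["10.0.0.2"] * 4
print(round(header_entropy(normal), 2))  # 3.0  (= log2 of 8 equally likely hosts)
print(round(header_entropy(attack), 2))  # much lower
```

An edge switch can evaluate this cheaply per window and alert the controller only when entropy falls below a threshold, which is what lets the scheme offload coarse detection to the data plane.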


2017 ◽  
Vol 21 (4) ◽  
pp. 308-320 ◽  
Author(s):  
Mark Rubin

Hypothesizing after the results are known, or HARKing, occurs when researchers check their research results and then add or remove hypotheses on the basis of those results without acknowledging this process in their research report (Kerr, 1998). In the present article, I discuss 3 forms of HARKing: (a) using current results to construct post hoc hypotheses that are then reported as if they were a priori hypotheses; (b) retrieving hypotheses from a post hoc literature search and reporting them as a priori hypotheses; and (c) failing to report a priori hypotheses that are unsupported by the current results. These 3 types of HARKing are often characterized as being bad for science and a potential cause of the current replication crisis. In the present article, I use insights from the philosophy of science to present a more nuanced view. Specifically, I identify the conditions under which each of these 3 types of HARKing is most and least likely to be bad for science. I conclude with a brief discussion about the ethics of each type of HARKing.


2018 ◽  
Vol 30 (2) ◽  
pp. 438-459
Author(s):  
Matti J. Haverila ◽  
Kai Christian Haverila

Purpose: Customer-centric measures such as customer satisfaction and repurchase intent are important indicators of performance. The purpose of this paper is to examine the strength and significance of the path coefficients in a customer satisfaction model consisting of various customer-centric measures for different types of ski resort customer (i.e. day, weekend and ski holiday visitors as well as season pass holders) at a ski resort in Canada. Design/methodology/approach: The results were analyzed using the partial least squares structural equation modeling approach for the four different types of ski resort visitors. Findings: There appeared to be differences in the strength and significance of the customer satisfaction model relationships across the four types of ski resort visitors, indicating that the a priori managerial classification of the ski resort visitors is warranted. Originality/value: The research pinpoints differences in the strength and significance of the relationships between customer-centric measures for the four different types of ski resort visitors, i.e. day, weekend and ski holiday visitors as well as season pass holders, which has significant managerial implications for the marketing practice of the ski resort.


1991 ◽  
Vol 106 (3) ◽  
pp. 459-465 ◽  
Author(s):  
F. E. Donald ◽  
R. C. B. Slack ◽  
G. Colman

SUMMARY: Isolates of Streptococcus pyogenes from vaginal swabs of children with vulvovaginitis received at Nottingham Public Health Laboratory during 1986–9 were studied. A total of 159 isolates was made during the 4 years, increasing from 17 in 1986 to 64 in 1989 and accounting for 11% of all vaginal swabs received from children. The number of throat swabs yielding S. pyogenes also increased, from 974 in 1986 to 1519 in 1989. A winter peak of isolates was noted for both vaginal and throat swabs. A total of 98 strains from vaginal swabs were serotyped: 22 different types were identified, 61% of which were the common types M4, M6, R28 and M12. Erythromycin sensitivity testing was done on 89 strains; 84% were highly sensitive (MIC < 0·03 mg/l). There are no other reports of such large numbers in the literature; the reason for this increase in Nottingham is unclear.

