Why does the spread of COVID-19 vary greatly in different countries? Revealing the efficacy of face masks in epidemic prevention

2021
Vol 149
Author(s):
Jincheng Wei
Shurui Guo
Enshen Long
Li Zhang
Bizhen Shu
...  

Abstract The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is highly contagious, and the coronavirus disease 2019 (COVID-19) pandemic caused by it has forced many countries to adopt 'lockdown' measures to prevent the spread of the epidemic through the social isolation of citizens. Some countries proposed universal mask wearing as a public health protection measure to strengthen national prevention efforts and to limit the wider spread of the epidemic. To reveal the epidemic prevention efficacy of masks, this paper systematically evaluates experimental studies of various masks and filter materials, summarises how the filtration efficiency of isolation masks varies with particle size, and reveals the actual efficacy of masks by combining the volume distribution of human exhaled droplets across particle sizes with the SARS-CoV-2 viral load of nasopharyngeal and throat swabs from patients. The existing measured data show that the filtration efficiency of all kinds of masks for large particles and extra-large droplets is close to 100%. From the perspective of filtering the total number of pathogens discharged into the environment and protecting vulnerable individuals from breathing in live viruses, masks have a strong protective effect. When filtration efficiency is weighted by the volume distribution across particle sizes, the filtration efficiencies of the N95 mask and the ordinary mask are 99.4% and 98.5%, respectively. Masks can prevent the source of infection from releasing active viruses into the environment, thus maximising the protection of vulnerable individuals by reducing the probability of inhaling the virus. Therefore, if the whole society strictly implements a policy of publicly wearing masks, the risk of large-scale spread of the epidemic can be greatly reduced. Compared with the overall cost of social isolation, limited personal freedoms and the forced suspension of economic activities, the inconvenience to citizens caused by wearing masks is perfectly acceptable.
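
As a rough illustration of the weighted-average calculation described above, the sketch below computes a size-weighted filtration efficiency; the size bins, volume fractions, and per-bin efficiencies are hypothetical placeholders, not the paper's measured values.

```python
import numpy as np

# Hypothetical particle-size bins: volume fraction of exhaled droplets per bin
# and the mask's filtration efficiency in that bin (placeholder values).
volume_fraction = np.array([0.05, 0.15, 0.30, 0.50])   # shares sum to 1
bin_efficiency  = np.array([0.85, 0.95, 0.99, 1.00])

weighted_efficiency = float(np.sum(volume_fraction * bin_efficiency))
print(f"Size-weighted filtration efficiency: {weighted_efficiency:.1%}")
```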

Author(s):  
Michael Mutz
Anne K. Reimers
Yolanda Demetriou

Abstract Observational and experimental studies show that leisure time sporting activity (LTSA) is associated with higher well-being. However, scholars often seem to assume that 1) LTSA fosters "general" life satisfaction, thereby ignoring effects on domain satisfaction; 2) the effect of LTSA on well-being is linear and independent of a person's general activity level; 3) the amount of LTSA is more important than the repertoire of LTSA, i.e. the number of different activities; 4) all kinds of LTSA are equal in their effects, irrespective of spatial and organisational context conditions. Using data from the German SALLSA-Study ("Sport, Active Lifestyle and Life Satisfaction"), a large-scale CAWI survey (N = 1008) representative of the population aged 14 and older, this paper takes a closer look at these assumptions. Findings demonstrate that LTSA is associated with general life satisfaction and domain-specific satisfaction (concerning relationships, appearance, leisure, work and health), but that the relationship is most pronounced for leisure satisfaction. Associations of sport with life satisfaction, leisure satisfaction and subjective health are non-linear, approaching an inflection point beyond which additional LTSA is no longer beneficial. Moreover, findings lend support to the notion that diversity in LTSA matters, as individuals with greater variation in sports activities are more satisfied. Finally, results regarding spatial and organisational context suggest that outdoor sports and club-organised sports have additional benefits.
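
The kind of non-linear association described above can be illustrated with a simple quadratic fit. The sketch below uses synthetic data; the coefficients and the derived turning point are illustrative, not the study's estimates.

```python
import numpy as np

# Synthetic example: satisfaction rises with weekly sport hours up to a
# turning point and flattens beyond it, as a quadratic fit can capture.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 14, 500)                         # weekly LTSA hours (synthetic)
satisfaction = 5 + 0.6 * hours - 0.04 * hours**2 + rng.normal(0, 0.5, 500)

b2, b1, b0 = np.polyfit(hours, satisfaction, 2)         # satisfaction ~ b0 + b1*h + b2*h^2
turning_point = -b1 / (2 * b2)                          # hours beyond which extra LTSA adds nothing
print(f"Estimated turning point: {turning_point:.1f} h/week")
```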


Water
2021
Vol 13 (2)
pp. 141
Author(s):
Firoza Akhter
Maurizio Mazzoleni
Luigia Brandimarte

In this study, we explore the long-term trends of floodplain population dynamics at different spatial scales in the contiguous United States (U.S.). We exploit several types of datasets spanning 1790–2010: the decadal spatial distribution of population density in the U.S., a global floodplains dataset, large-scale data on flood occurrence and damage, and structural and nonstructural flood protection measures for the U.S. At the national level, we found that the population initially settled within floodplains and then spread across the territory over time. At the state level, we observed that flood damages and national protection measures may have contributed to a learning effect, which in turn shaped floodplain population dynamics over time. Finally, at the county level, other socio-economic factors such as local flood insurance, economic activities, and the socio-political context may predominantly influence the dynamics. Our study shows that different factors influence floodplain population dynamics at different spatial scales. These findings are crucial for the reliable development and implementation of flood risk management planning.


Author(s):  
Anne Spinewine
Perrine Evrard
Carmel Hughes

Abstract Purpose Polypharmacy, medication errors and adverse drug events are frequent among nursing home residents. Errors can occur at any step of the medication use process. We aimed to review interventions for optimizing any step of medication use in nursing homes. Methods We narratively reviewed quantitative and qualitative, observational and experimental studies that described interventions, their effects, and barriers and enablers to implementation. We prioritized recent studies with findings relevant to the European setting. Results Many interventions led to improvements in medication use. However, because of outcome heterogeneity, comparison between interventions was difficult. Prescribing was the most studied aspect of medication use. At the micro level, medication review, multidisciplinary work and, more recently, patient-centered care components dominated. At the macro level, guidelines and legislation, mainly for specific medication classes (e.g., antipsychotics), were employed. Utilization of technology also helped improve medication administration. Several barriers and enablers were reported at the individual, organizational, and system levels. Conclusion Overall, existing interventions are effective in optimizing medication use. However, there is a need for further well-designed, large-scale European evaluations of under-researched intervention components (e.g., health information technology, patient-centered approaches), specific medication classes (e.g., antithrombotic agents), and interventions targeting aspects of medication use other than prescribing (e.g., monitoring). Further development and uptake of core outcome sets is required. Finally, qualitative studies on barriers and enablers to intervention implementation would enable theory-driven intervention design.


Sensors
2021
Vol 21 (12)
pp. 4206
Author(s):
Farhan Nawaz
Hemant Kumar
Syed Ali Hassan
Haejoon Jung

Enabled by fifth-generation (5G) and beyond-5G communications, large-scale deployments of Internet-of-Things (IoT) networks are expected in various application fields to handle massive machine-type communication (mMTC) services. Device-to-device (D2D) communications can be an effective solution in massive IoT networks to overcome the inherent hardware limitations of small devices. In such D2D scenarios, given that a receiver can benefit from a signal-to-noise ratio (SNR) advantage through diversity and array gains, cooperative transmission (CT) can be employed so that multiple IoT nodes create a virtual antenna array. In particular, the Opportunistic Large Array (OLA), one type of CT technique, is known to provide fast, energy-efficient, and reliable broadcasting and unicasting without prior coordination, which can be exploited in future mMTC applications. However, OLA-based protocol design and operation depend on network models that characterize the propagation behavior and allow the performance to be evaluated. Further, experimental studies have shown that the model most widely used in prior work on OLA is not accurate for networks with low node density. Therefore, stochastic models based on quasi-stationary Markov chains have been introduced, which are more complex but more accurate in estimating the key performance metrics of OLA transmissions in practice. Considering that such propagation models should be selected carefully depending on system parameters such as network topology and channel environment, we provide a comprehensive survey of the analytical models and frameworks for OLA propagation in the literature, which is not available in existing survey papers on OLA protocols. In addition, we introduce energy-efficient OLA techniques, which are of paramount importance in energy-limited IoT networks. Furthermore, we discuss future research directions for combining OLA with emerging technologies.
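
To make the role of a propagation model concrete, here is a minimal Monte Carlo sketch of OLA broadcasting, in which the nodes decoded at one level relay together and a node decodes when the aggregate received power from that level exceeds a threshold. The deployment, path-loss exponent, and threshold values are illustrative assumptions, not a model from the surveyed literature.

```python
import numpy as np

# Illustrative OLA broadcast: each hop's decoded set acts as a virtual array.
rng = np.random.default_rng(1)
n_nodes, area = 400, 100.0                       # nodes in a 100 m x 100 m region
pos = rng.uniform(0, area, (n_nodes, 2))
tx_power, path_loss_exp, tau = 1.0, 3.0, 1e-4    # per-node power, exponent, decode threshold

decoded = np.zeros(n_nodes, dtype=bool)
decoded[0] = True                                # source node starts the broadcast
frontier = decoded.copy()
levels = 0
while frontier.any():
    relay_pos = pos[frontier]                    # current OLA relay level
    d = np.linalg.norm(pos[:, None, :] - relay_pos[None, :, :], axis=2)
    rx_power = (tx_power * np.maximum(d, 1.0) ** -path_loss_exp).sum(axis=1)
    new = (~decoded) & (rx_power >= tau)         # decode on aggregate power
    decoded |= new
    frontier = new                               # newly decoded nodes relay next
    levels += 1
print(f"Broadcast reached {decoded.mean():.0%} of nodes in {levels} levels")
```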


2021
Vol 15 (3)
pp. 1-31
Author(s):
Haida Zhang
Zengfeng Huang
Xuemin Lin
Zhe Lin
Wenjie Zhang
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2 and a small set S of pre-matched node pairs (u, v), where u is in G1 and v is in G2, the problem is to identify a matching between G1 and G2 growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighboring information to improve matching accuracy and efficiency. As a result, we propose a new framework of seeded graph matching that employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores; we show that this postponing strategy significantly improves matching accuracy. To improve the scalability of matching on large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases both precision and recall by a significant margin but also achieves speed-ups of more than one order of magnitude.
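
A minimal sketch of the PPR-scoring idea follows. It omits the postponing strategy and the heavy-hitter approximations and matches nodes by a single scalar PPR value, so it is far cruder than the actual framework; the greedy matcher and the toy example are purely illustrative.

```python
import networkx as nx

def ppr_scores(G, seeds, alpha=0.85):
    # Personalized PageRank with the restart distribution on the seed set.
    seeds = set(seeds)
    pers = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in G}
    return nx.pagerank(G, alpha=alpha, personalization=pers)

def match_greedy(G1, G2, seed_pairs):
    s1 = ppr_scores(G1, [u for u, _ in seed_pairs])
    s2 = ppr_scores(G2, [v for _, v in seed_pairs])
    left = set(G1) - {u for u, _ in seed_pairs}
    right = set(G2) - {v for _, v in seed_pairs}
    matched = []
    # Greedily pair each node with the closest PPR score in the other graph.
    for u in sorted(left, key=s1.get, reverse=True):
        if not right:
            break
        v = min(right, key=lambda w: abs(s1[u] - s2[w]))
        matched.append((u, v))
        right.remove(v)
    return matched

# Toy usage: two copies of the same graph with one seed pair.
G = nx.karate_club_graph()
print(match_greedy(G, G.copy(), [(0, 0)])[:5])
```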


2021
Author(s):
Federica Paglialunga
François Passelègue
Fabian Barras
Mathias Lebihain
Nicolas Brantut
...  

Potential energy stored around faults during the inter-seismic period by tectonic loading can be released through earthquakes as radiated energy, heat and rupture energy. The latter is of primary importance, since it controls both the nucleation and the propagation of the seismic rupture. On the one hand, the rupture energy estimated for natural earthquakes (also called breakdown work) ranges between 1 J/m² and tens of MJ/m² for the largest events, and shows a clear slip dependence. On the other hand, recent experimental studies have highlighted that, at the laboratory scale, rupture energy is a material property (the energy required to break the fault interface), bounded above by the rupture energy of the intact material (1 to 10 kJ/m²), independently of the size of the event, i.e. of the seismic slip.

To reconcile these contradictory observations, we performed stick-slip experiments, as an analog for earthquakes, in a bi-axial shear configuration. We analyzed fault weakening during frictional rupture by accessing the on-fault (1 mm away) stress-slip curve through a strain-gauge array. We first estimated rupture energy by comparing the experimental strain with theoretical predictions from both Linear Elastic Fracture Mechanics (LEFM) and the Cohesive Zone Model (CZM). Second, we compared these values to the breakdown work obtained by integrating the stress-slip curve. Our results showed that, at the scale of our experiments, fault weakening divides into two stages: a first stage, corresponding to an energy of a few J/m², consistent with the rupture energy estimated by LEFM and CZM, and a long-tailed weakening corresponding to a larger energy that is not observable at the rupture tip.

Using theoretical analysis and numerical simulations, we demonstrated that only the first weakening stage controls the nucleation and the dynamics of the rupture tip. The breakdown work induced by the long-tailed weakening can enhance slip during rupture propagation and can allow the rupture to overcome stress heterogeneity along the fault. Additionally, we showed that at a large scale of observation the dynamics of the rupture tip can become controlled by the breakdown work induced by the long-tailed weakening, leading to a larger stress singularity at the rupture tip, which becomes less sensitive to stress perturbations. We suggest that while the onset of frictional motion is related to fracture, the propagation of natural earthquakes is driven by frictional weakening with increasing slip, explaining the large breakdown work values estimated for natural earthquakes, as well as the scale dependence of rupture dynamics.
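
As a concrete illustration of how breakdown work is obtained from a stress-slip curve, the sketch below integrates the stress in excess of its residual value over slip, W_b = ∫ (τ(δ) − τ_res) dδ, on a synthetic exponential-weakening curve; the stress levels and weakening distance are hypothetical, not the experimental data.

```python
import numpy as np

# Synthetic slip-weakening curve: tau(δ) decays from a peak to a residual
# stress over a characteristic slip distance d_c (values hypothetical).
slip = np.linspace(0.0, 1e-3, 1000)                 # slip δ [m]
tau_peak, tau_res, d_c = 10e6, 6e6, 1e-4            # stresses [Pa], weakening distance [m]
tau = tau_res + (tau_peak - tau_res) * np.exp(-slip / d_c)

# Breakdown work W_b = ∫ (τ(δ) − τ_res) dδ, by the trapezoidal rule.
excess = tau - tau_res
W_b = np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(slip))
print(f"Breakdown work: {W_b:.0f} J/m^2")           # ≈ 400 J/m^2 for these values
```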


Author(s):  
Volodymyr Bondarenko
Oleksandr Filonenko
Mykhailo Petlovanyi
Vladyslav Ruskykh
...  

Purpose. Experimental study of the interaction of blast-furnace and steel-making slags with open-pit waters during direct contact, and assessment of the volume of filling of man-made cavities formed during the mining of mineral deposits. Methods. Based on the analysis, the currently low level of metallurgical slag utilization and the lack of realistic and effective directions for its large-scale utilization were determined. Laboratory studies of the interaction of metallurgical slags with open-pit water over set interaction times, generally accepted methods for studying the chemical composition and concentration of substances in water, and computer-aided design software packages and drawings for determining the volume of the open-pit mined-out area were used. Results. The dynamics of the interaction products of steel-smelting slags with open-pit waters were investigated for given ratios and periods of interaction. It was found that the concentration of pollutants upon contact of water with steel-making slag changes according to polynomial dependences on the time of their interaction, decreasing by the 30th day, which eliminates the danger to the aquifer. The safest type of metallurgical slag was recommended for forming the bottom layer of the backfill massif. The volume of the open pit's mined-out area was determined in detail to assess the capacity for placing backfill material based on metallurgical slags. Scientific novelty. The safety of the contact of backfill materials based on steel-making slags with open-pit water was scientifically proven, as confirmed by the established polynomial patterns of changes in pollutant concentrations as functions of the ratio and time of interaction. Practical significance. Forming the backfill massif from blast-furnace dump and steel-smelting slags will provide an environmental effect: safe slag disposal, reclamation of lands technologically disturbed by mining, restoration of the economic value of the land plot, and prevention of the formation of new dumps.
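
To illustrate the kind of polynomial dependence referred to above, the sketch below fits a quadratic to pollutant concentration as a function of interaction time; the sample values are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical concentration measurements over the slag-water interaction time.
days = np.array([1, 5, 10, 15, 20, 25, 30])
concentration = np.array([8.2, 6.9, 5.1, 3.8, 2.9, 2.3, 2.1])  # mg/L, hypothetical

coeffs = np.polyfit(days, concentration, 2)    # quadratic (polynomial) fit
fit = np.poly1d(coeffs)
print(f"Predicted concentration at day 30: {fit(30):.1f} mg/L")
```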


2021
Author(s):
Kazuki Murata
Shinji Sassa
Tomohiro Takagawa
Toshikazu Ebisuzaki
Shigenori Maruyama

Abstract We first propose and examine a method for digitizing analog records of submarine topography, focusing on the seafloor survey records available in the literature, to facilitate detailed analysis of submarine landslides and landslide-induced tsunamis. Second, we apply this digitization method to the seafloor topographic changes recorded before and after the 1923 Great Kanto earthquake tsunami and evaluate its effectiveness. Third, we discuss the coseismic large-scale seafloor deformation in Sagami Bay and at the mouth of Tokyo Bay, Japan. The results confirmed that the latitude/longitude and water depth values recorded by lead-line sounding can be approximately extracted from the triangulation-surveyed depth coordinates by overlaying the currently available GIS map data, without geometric correction such as an affine transformation. Furthermore, the proposed method allows us to obtain mesh data of depth changes in the sea area by applying inverse distance weighted (IDW) interpolation, as demonstrated for the case of the 1923 Great Kanto earthquake. Finally, we analyzed and compared the submarine topography before and after the 1923 tsunami event with the current seabed topography. We found that the large-scale depth changes correspond to the valley lines running down the submarine topography of Sagami Bay and the Tokyo Bay mouth.
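
A minimal sketch of IDW interpolation, of the kind used above for gridding scattered depth soundings, is given below; the sounding coordinates, depths, and power parameter p = 2 are illustrative choices.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    # Inverse distance weighting: closer soundings get larger weights.
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)
    return (w @ z_known) / w.sum(axis=1)

# Scattered soundings (x, y) with depths [m]; values are hypothetical.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depths = np.array([-120.0, -150.0, -110.0, -160.0])
grid = np.array([[0.5, 0.5], [0.25, 0.75]])
print(idw(pts, depths, grid))   # interpolated depths at the mesh points
```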


Author(s):  
Ziyi Ma
Joseph Y. J. Chow

We propose a bilevel transit network frequency setting problem in which the upper level consists of analytical route cost functions and the lower level is an activity-based market equilibrium derived using MATSim-NYC. The use of MATSim in the lower-level problem makes the design process sensitive to competition from other modes, including ride-hail, and supports large-scale optimization. The proposed method is applied to the existing Brooklyn bus network, which includes 78 bus routes, 650,000 passengers per day, 550 route-km, and 4,696 bus stops. MATSim-NYC modeling of the existing bus network has a ridership-weighted average error per route of 21%. The proposed algorithm is applied to a benchmark network and confirms the 20% ridership growth predicted for that benchmark design. Applying our algorithm to this network, with 78 routes and 24 periods, yields a problem with 3,744 decision variables. The algorithm converged within 10 iterations to a delta of 0.064%. Compared with the existing scenario, we increased ridership by 20% and reduced operating cost by 25%, improving the farebox recovery ratio from the existing 0.22 to 0.35, which is 0.06 higher than the benchmark design. Analysis of mode substitution effects suggests that 2.5% of trips would be drawn from ride-hail while 74% would come from driving.
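
As a back-of-envelope consistency check of the reported farebox recovery figures (farebox recovery = fare revenue / operating cost), assuming fare revenue scales proportionally with ridership:

```python
# Hypothetical check: +20% ridership (hence fare revenue) and -25% operating
# cost applied to the existing farebox recovery ratio.
existing_ratio = 0.22
ridership_factor = 1.20      # +20% ridership
cost_factor = 0.75           # -25% operating cost
new_ratio = existing_ratio * ridership_factor / cost_factor
print(f"Implied new farebox recovery: {new_ratio:.2f}")   # ≈ 0.35, as reported
```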

