initial block
Recently Published Documents

TOTAL DOCUMENTS: 35 (FIVE YEARS: 8)
H-INDEX: 7 (FIVE YEARS: 1)

2021 ◽  
Vol 71 (11) ◽  
pp. 2695-2695
Author(s):  
Syeda Sidra Fatima ◽  
Samar Faheem

Madam, the number of new Covid-19 cases in India peaked at 362,902 on the 27th of April, 2021,1 the highest single-day total in the world. India sold twice as much oxygen in 2020-21 as in the previous year, yet it now faces a shortage of medical oxygen as it struggles with rising cases.2 It also struggles with its vaccine drive. In an article, Kamala Thiagarajan states that the initial block was mistrust of local vaccines, even among frontline healthcare workers. Other conspiracies followed, including fear of price hikes and reports of adverse effects, as approval for India's own vaccines was rushed without proper evaluation to ensure safety.3 When adverse effects were observed in the AstraZeneca and Johnson & Johnson trials, those trials were paused for a safety review. However, no such thing occurred during the Covaxin trial.4 Continuous...


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2781
Author(s):  
Grzegorz Dziwoki ◽  
Marcin Kucharczyk

Channel estimation schemes for OFDM-modulated transmissions usually combine an initial block-pilot-assisted stage with a tracking stage based on comb or scattered pilots distributed among user data in the signal frame. The channel reconstruction accuracy of the former stage has a significant impact on the tracking efficiency for channel variations and on the overall transmission quality. The paper presents a new block-pilot-assisted channel reconstruction procedure based on the DFT-based approach and Least Squares impulse response estimation. The proposed method takes into account the compressibility of the channel impulse response and restores its coefficients in groups of automatically controlled size. The proposal is analytically explained and tested in an OFDM simulation environment. Popular DFT-based methods, including a compressed-sensing-oriented one, were used as references for comparison. The obtained results show a quality improvement in terms of Bit Error Rate and Mean Square Error in the low and mid ranges of signal-to-noise ratio, without significant growth in computational complexity compared to the classical DFT-based solutions. Moreover, additional multiplication operations can be eliminated compared to the competitive (in terms of estimation quality) compressed sensing reconstruction method based on a greedy approach.
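As a rough illustration of the classical DFT-based approach this abstract builds on (not the authors' group-wise restoration method), the sketch below forms a least-squares block-pilot estimate and then denoises it by truncating the impulse response to the cyclic-prefix length; the parameters (64 subcarriers, an 8-tap channel, 10 dB SNR) are assumed for the example only:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64            # subcarriers in the block-pilot OFDM symbol
L = 8             # true channel length in taps (assumed)
snr_db = 10

# Random multipath channel and its frequency response
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)

# Block-pilot symbol: every subcarrier carries a known QPSK pilot
pilots = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, N) + 1))
noise_var = 10 ** (-snr_db / 10)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
received = H * pilots + noise

# Least-squares estimate per subcarrier
H_ls = received / pilots

# Classical DFT-based refinement: transform to the time domain, keep only
# the first L_cp taps (the cyclic prefix bounds the impulse response length),
# zero the rest, transform back
L_cp = 16
h_ls = np.fft.ifft(H_ls)
h_ls[L_cp:] = 0
H_dft = np.fft.fft(h_ls)

mse_ls = np.mean(np.abs(H - H_ls) ** 2)
mse_dft = np.mean(np.abs(H - H_dft) ** 2)
print(mse_ls, mse_dft)
```

Because the channel energy is confined to the first few taps, zeroing the remainder discards most of the estimation noise, which is why DFT-based refinements outperform plain LS at low and mid SNR.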


2021 ◽  
Author(s):  
Aydin Amireghbali ◽  
Demirkan Coker

Abstract The Maxwell-slip model consists of independent mass-spring units that are slipped by a driver over a rigid, flat, fixed substrate. In the present study, the model is interpreted as a multi-asperity model and is used to study both the friction force and the mechanisms involved in the sliding of a rough elastic surface. The Coulomb friction law is assumed at the level of the single mass-spring unit. A beta probability distribution function is used to generate the initial block positions randomly, and the standard deviation of the initial lateral positions of the blocks is interpreted as the surface roughness. The results show that when the surface is rough enough, the sequential slip of the blocks induces a steady friction force, whereas when the surface is smooth enough, the collective slip of the blocks induces stick-slip. The border between the two sliding regimes is sharply delineated by a specific roughness value. A tribological implication is that a sufficiently rough surface may bring about steady sliding. A geophysical implication is that a geological fault segment that undergoes aseismic creep may have a rougher surface than its locked counterpart.
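The Maxwell-slip mechanics described above can be sketched in a few lines; the stiffness, slip threshold, step size, and roughness values below are arbitrary choices for illustration, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

n_blocks = 100
k = 1.0            # spring stiffness per block (assumed)
f_c = 1.0          # Coulomb slip threshold per block (assumed uniform)
roughness = 0.5    # scale of the initial-position spread

# Beta-distributed initial block positions; their spread plays the
# role of surface roughness
x = roughness * (rng.beta(2, 2, n_blocks) - 0.5)

driver = 0.0
dt = 0.01
friction_force = []
for _ in range(2000):
    driver += dt
    stretch = driver - x                 # spring elongation of each block
    force = k * stretch
    slipping = np.abs(force) > f_c
    # a slipping block jumps to the position where its spring force equals f_c
    x[slipping] = driver - np.sign(force[slipping]) * f_c / k
    friction_force.append(k * (driver - x).sum())

friction_force = np.asarray(friction_force)
print(friction_force[-1])
```

Once every block has slipped at least once, the total friction force plateaus near n_blocks * f_c; with a narrow initial spread the blocks slip collectively instead, producing the sawtooth force history characteristic of stick-slip.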


2020 ◽  
Author(s):  
Jason P. Morgan ◽  
Albert de Montserrat Navarro ◽  
Paola Vannucchi ◽  
Alexander Peter Clarke ◽  
Audrey Ougier-Simonin ◽  
...  

<p>Non-volcanic tremor remains a poorly understood form of seismic activity. In its most common subduction zone setting, tremor typically occurs within the plate interface at or near the shallow and deep edges of the interseismically locked zone. Detailed seismic observations have shown that tremor is composed of repeating small low-frequency earthquakes (LFEs), often accompanied by very-low-frequency earthquakes (VLFs), all involving shear failure and slip. However, LFEs and VLFs within each cluster show nearly constant source durations for all observed magnitudes. This implies asperities of near-constant size, with recent seismic observations hinting that the failure size is of order ~200 m.</p><p>We propose that geological observations and geomechanical lab measurements on heterogeneous rock assemblages representative of the shallow tremor region are most consistent with LFEs and VLFs involving the seismic failure of relatively weaker blocks within a stronger matrix. Furthermore, in the shallow subducting rocks within a subduction shear channel, hydrothermal fluids and diagenesis have led to a strength inversion from the initial weak matrix with relatively stronger blocks to a stronger matrix with embedded relatively weaker blocks. In this case, tremor will naturally occur as the now-weaker blocks fail seismically while their more competent surrounding matrix has not yet reached a state of general seismic failure, and instead fails only at local stress concentrations around the tremorgenic blocks.</p><p>Here we use the recently developed code LaCoDe (de Montserrat et al., 2019) to create and explore a wide range of numerical experiments. These experiments are designed to characterize the likely stress and strain accumulations that can develop in a heterogeneous subduction shear channel, and their implications for the genesis of tremor and its spatially associated seismic events.
In our previous modeling efforts we did not strongly vary either the block volume-fraction or the initial block and matrix geometry. Here we do both, and also explore a range of rock compressibilities, from seismically inferred values to nearly incompressible behavior. We also explore models with irregular 'quasi-geological' initial block/matrix geometries. Drucker-Prager plasticity is used to characterize a fault-like mode of shear failure. This suite of experiments demonstrates that, for a wide range of block and matrix conditions, the proposed strength-inversion mechanism can generate a mode of shallow tectonic tremor that clusters in spatially discontinuous swarms along the plate interface. At the deeper edge of the interseismically locked zone, channelised dehydration associated with subduction along a plate interface could induce a similar relative strength inversion, and thereby generate deep-seated tremor.</p>
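For readers unfamiliar with the Drucker-Prager criterion the abstract invokes, here is a minimal yield check contrasting a weak block with a stronger matrix under the same stress state; the cohesion and stress values are invented for illustration, and one common Mohr-Coulomb matching convention is assumed:

```python
import numpy as np

def drucker_prager_yield(sigma, friction_angle_deg, cohesion):
    """Drucker-Prager yield function f = sqrt(J2) + alpha*I1 - k.
    sigma: 3x3 Cauchy stress tensor, tension-positive convention.
    f >= 0 means the stress state is at or beyond yield.
    alpha, k follow one common matching of the cone to Mohr-Coulomb.
    """
    phi = np.radians(friction_angle_deg)
    alpha = 2 * np.sin(phi) / (np.sqrt(3) * (3 - np.sin(phi)))
    k = 6 * cohesion * np.cos(phi) / (np.sqrt(3) * (3 - np.sin(phi)))
    I1 = np.trace(sigma)
    s = sigma - I1 / 3 * np.eye(3)        # deviatoric stress
    J2 = 0.5 * np.sum(s * s)              # second deviatoric invariant
    return np.sqrt(J2) + alpha * I1 - k

# Same compressive stress state; only cohesion differs between block and matrix
stress = np.diag([-200e6, -50e6, -50e6])  # Pa (hypothetical)
weak = drucker_prager_yield(stress, friction_angle_deg=30, cohesion=1e6)
strong = drucker_prager_yield(stress, friction_angle_deg=30, cohesion=20e6)
print(weak > 0, strong < 0)
```

The weak block is at yield while the stronger matrix is not, which is the essence of the strength-inversion mechanism: tremorgenic failure confined to the weaker inclusions.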


Author(s):  
E. Yu. Efremov

Underground mining faces a serious threat of groundwater inrush from overlying sedimentary layers. When ore is extracted using the block caving method, the overburden collapse zone above the ore body disrupts the natural alternation of high- and low-hydraulic-conductivity layers. This process creates conditions for the accumulation and transfer of groundwater into mine workings, leading to accidents, some of disastrous proportions. The research aim is to determine the spatio-temporal distribution of mud inrushes and to identify the groundwater sources supplying them, in order to reduce the geotechnical risks of underground mining at Sokolovskaya mine. The research methods include localization, classification, and analysis of monitoring data, and comparison of the distribution of mud inrushes with geostatistical parameters of the main aquifers. The majority of large-scale accidents caused by mud inrushes are confined to the central and northern areas of the caved rock zone. The riskiest stage of ore body extraction is the initial block at the lower extraction level. The water sources supplying most of the mud inrushes are the high-water-level areas of the Cretaceous aquifer to the north and west of the mine. Rational targeted drainage aimed at draining the identified areas of the aquifer is the best way to reduce the risk of accidents.


2019 ◽  
Vol 73 (3) ◽  
pp. 375-383
Author(s):  
Manuel Perea ◽  
Ana Marcet ◽  
Marta Vergara-Martínez ◽  
Pablo Gomez

Recent modelling accounts of the lexical decision task have suggested that the reading system performs evidence accumulation to carry out some of its functions. Evidence accumulation models have been very successful in accounting for effects in the lexical decision task, including the dissociation of repetition effects for words and nonwords (facilitative for words but inhibitory for nonwords): the familiarity of a repeated item triggers its recognition, which facilitates 'word' responses but hampers nonword rejection. However, reports of facilitative repetition effects for nonwords repeated several times in short blocks challenge this hypothesis and favour models based on episodic retrieval. To shed light on the nature of repetition effects for nonwords in lexical decision, we conducted four experiments examining the impact of an extra-lexical source of information: we induced the use of episodic retrieval traces via instructions and list composition. When the initial block was long, the repetition effect for nonwords was inhibitory, regardless of the instructions and list composition. However, the inhibitory effect was dramatically reduced when the initial block included two presentations of the stimuli, and it was even facilitative when the initial block was short. This composite pattern suggests that evidence accumulation models of lexical decision should take into account all sources of evidence, including episodic retrieval, during the process of lexical decision.
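The evidence-accumulation account summarized above can be illustrated with a toy random-walk model; the drift, threshold, and "familiarity" increment below are invented parameters for illustration, not values fitted by the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def decision_time(drift, threshold=10.0, noise=1.0, dt=1.0, max_steps=10_000):
    """Random-walk evidence accumulation toward +threshold ('word')
    or -threshold ('nonword'). Returns (response, number of steps)."""
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold and steps < max_steps:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        steps += 1
    return ('word' if evidence > 0 else 'nonword'), steps

def mean_rt(drift, n=500):
    return np.mean([decision_time(drift)[1] for _ in range(n)])

# Repetition adds familiarity, shifting drift toward the 'word' boundary.
# For words (positive drift) this speeds responses; for nonwords
# (negative drift) it works against rejection and slows them.
familiarity = 0.1
rt_word_new, rt_word_rep = mean_rt(0.5), mean_rt(0.5 + familiarity)
rt_nonword_new, rt_nonword_rep = mean_rt(-0.5), mean_rt(-0.5 + familiarity)
print(rt_word_rep < rt_word_new, rt_nonword_rep > rt_nonword_new)
```

In this sketch a single familiarity signal yields the classic dissociation (faster repeated words, slower repeated nonwords); capturing the facilitative nonword effects reported above would require an extra evidence source such as episodic retrieval.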


This paper is part of a larger study on the effects of blended learning. It describes classroom experiences from the perspectives of teacher and learner, and details action research on 250 third-year engineering students at HITS Chennai, India, who were administered a course on Communication Skills & Personality Development. The regular parameters of conducting a course, such as (a) attendance, (b) performance-based results, (c) on-time submission of tasks, and (d) engagement in class activities, became more challenging for the researcher because of the attitude the students presented towards language learning. This ranged from non-acceptance of any new format of teaching and unwillingness to put in extra effort, to inhibition stemming from the perception of their peers. To that end, a teaching method and assessment were developed that included constructivism, social constructivism, and problem-based learning as pedagogy. The programme also integrated flipped learning, peer and self-reviews, and consistent marking structures based on Vygotsky's activity theory. The results confirm initial and strong resistance towards the methodology, a lack of physical, emotional, and mental input, and initial fear and shyness about the perception of others. Past the initial block, what followed was an environment of support and continuous motivation that stemmed from the students' own activity and their creation of learning resources.


Water ◽  
2019 ◽  
Vol 11 (8) ◽  
pp. 1604 ◽  
Author(s):  
Bartolomé Deyà-Tortella ◽  
Celso Garcia ◽  
William Nilsson ◽  
Dolores Tirado

Water is a key resource for any tourist destination. The pressure of tourism on water resources, and specifically of the hotel sector on islands and coastal areas, threatens the sustainability of the resource and, ultimately, of the destination. Several international organizations propose price policy as an instrument to promote efficiency and penalize excessive water consumption. This study analyzes the short-term effectiveness of a water tariff reform, implemented by the regional government of the Balearic Islands in 2013, on hotel water consumption. The change consists in moving from a linear to an increasing block rate system. The study applies quantile regression with within-artificial-blocks transformation to panel data for the period 2011–2015. The results show that the reform was not effective as a means of reducing water consumption. The disproportionate fixed component of the water tariff and the oversized initial block of the sanitation fee can explain the ineffectiveness of the reform.
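The two tariff structures contrasted above are easy to make concrete; the block limits, prices, and fixed charge below are hypothetical, not the actual Balearic tariff:

```python
def linear_bill(volume_m3, rate=1.0, fixed=50.0):
    """Flat per-m3 tariff (hypothetical rates, for illustration only)."""
    return fixed + rate * volume_m3

def increasing_block_bill(volume_m3, fixed=50.0,
                          blocks=((100, 0.5), (200, 1.5), (float('inf'), 3.0))):
    """Increasing block rate: each successive block of consumption is
    charged at a higher unit price. Limits and prices are illustrative."""
    bill, lower = fixed, 0.0
    for upper, price in blocks:
        if volume_m3 > lower:
            bill += (min(volume_m3, upper) - lower) * price
        lower = upper
    return bill

# A large fixed charge plus an oversized first (cheap) block blunts the
# marginal price signal: a consumer moving from 80 to 100 m3 still pays
# only 0.5/m3 at the margin despite the 'increasing' structure.
print(increasing_block_bill(80), increasing_block_bill(100),
      increasing_block_bill(250))  # 90.0 100.0 400.0
```

This is the mechanism the abstract points to: when the fixed component dominates the bill and the initial block is oversized, most hotels never face the higher marginal prices, so the reform exerts little pressure on consumption.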


2018 ◽  
Vol 72 (6) ◽  
pp. 1379-1386
Author(s):  
Arnaud Destrebecqz ◽  
Michaël Vande Velde ◽  
Estibaliz San Anton ◽  
Axel Cleeremans ◽  
Julie Bertels

In a partial reinforcement schedule in which a cue repeatedly predicts the occurrence of a target over consecutive trials, reaction times to the target tend to decrease in a monotonic fashion, while participants' expectancies for the target decrease at the same time. This dissociation between reaction times and expectancies (the so-called Perruchet effect) challenges the propositional view of learning, which posits that human conditioned responses result from conscious inferences about the relationships between events. However, whether the reaction time pattern reflects the strength of a putative cue-target link, or only non-associative processes such as motor priming, remains unclear. To address this issue, we implemented the Perruchet procedure in a two-choice reaction time task and compared reaction time patterns in an Experimental condition, in which a tone systematically preceded a visual target, and a Control condition, in which the onsets of the two stimuli were uncoupled. Participants' expectancies regarding the target were recorded separately in an initial block. Reaction times decreased with the succession of identical trials in both conditions, reflecting the impact of motor priming. Importantly, reaction time slopes were steeper in the Experimental than in the Control condition, indicating an additional influence of the associative strength between the two stimuli. Interestingly, slopes were less steep for participants who showed the gambler's fallacy in the initial block. In sum, our results suggest mutual influences of motor priming, associative strength, and expectancies on performance. They are in line with a dual-process model of learning involving both a propositional reasoning process and an automatic link-formation mechanism.


2018 ◽  
Author(s):  
Robert D McIntosh ◽  
Elizabeth Fowler ◽  
Tianjiao Lyu ◽  
Sergio Della Sala

The Dunning-Kruger effect (DKE) is the finding that, across a wide range of tasks, poor performers greatly overestimate their ability, while top performers make more accurate self-assessments. The original account of the DKE involves the idea that metacognitive insight requires the same skills as task performance, so that unskilled people perform poorly and lack insight. However, typical global measures of self-assessment are prone to statistical and other biases that could explain the same pattern. We used psychophysical methods to examine metacognitive insight in simple movement and spatial memory tasks: pointing at a dot, or recalling its position after a short delay. We measured task skill in an initial block, and self-assessment in a second block, in which participants judged after every trial whether they had hit the target or not. Metacognitive calibration and sensitivity were indeed related to task skill, and partially mediated the DKE. In a second study, we again measured task skill in an initial block, but titrated task difficulty in the second block so that all participants performed the task with equivalent levels of success. Metacognitive measures were again related to skill, but the DKE pattern itself was eliminated. In a third study, we used statistical modelling to illuminate these findings, showing that differences in metacognitive calibration and sensitivity can contribute to the DKE, but are neither necessary nor sufficient for it. This general analysis explains and quantifies how metacognitive insight and other factors interact to determine this famous effect.
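A minimal simulation in the spirit of the statistical-bias critique mentioned above (noise levels and sample size are arbitrary; this is not the authors' model): when measured performance and self-assessment are two noisy readings of the same underlying skill, quartile binning alone produces the classic DKE pattern.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# No metacognitive deficit is built in: performance and self-estimate
# are equally noisy functions of the same true skill.
skill = rng.standard_normal(n)
performance = skill + 0.8 * rng.standard_normal(n)
self_estimate = skill + 0.8 * rng.standard_normal(n)

def to_percentile(x):
    """Rank scores and rescale to 0-100, as in the classic DKE plots."""
    return 100.0 * np.argsort(np.argsort(x)) / (len(x) - 1)

perf_pct = to_percentile(performance)
self_pct = to_percentile(self_estimate)

# Bin by performance quartile; compare mean self-estimate with mean performance
quartile = np.digitize(perf_pct, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(q, round(perf_pct[m].mean(), 1), round(self_pct[m].mean(), 1))
# Bottom quartile 'overestimates' and top quartile 'underestimates'
# purely through regression to the mean.
```

This is the artifact the third study quantifies: the pattern appears without any skill-dependent metacognitive deficit, which is why calibration and sensitivity differences are neither necessary nor sufficient for the effect.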

