Concrete Composite Properties with Modified Sodium Bentonite in Material Application Engineering

2012 ◽  
Vol 583 ◽  
pp. 154-157 ◽  
Author(s):  
Jarosław Rajczyk ◽  
Bogdan Langier

Modern concrete technology requires a wide range of additives that modify the properties of concrete. Complicated technical conditions often dictate the use of concretes with specific properties. One such example is a mixture transported by gravity along chutes. In this situation, the concrete mixture must have a high degree of fluidity and must not undergo segregation, sedimentation or other loss of uniformity while in transit. When the additional requirement of high water resistance is imposed, bentonites may be particularly useful. This paper presents the results of mechanical and physical tests on concrete with different concentrations of ground sodium bentonite. The bentonite, with a particle size of 30 to 60 nm, was added as a dry ingredient together with the aggregate at 1, 2, 3 and 4% by weight of the cement. The ground sodium bentonite produced significant changes in the characteristics of the fresh and hardened concrete, such as consistency, strength and capillary pressure.
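For orientation, the dosages by weight of cement translate into absolute masses as follows; a minimal sketch assuming a hypothetical cement content of 350 kg/m³ (the paper does not state its mix design):

```python
# Hypothetical cement content per cubic metre of concrete (not from the paper).
CEMENT_KG_PER_M3 = 350.0

def bentonite_dose(cement_kg, percent_by_cement_weight):
    """Mass of dry sodium bentonite to add, in kg."""
    return cement_kg * percent_by_cement_weight / 100.0

# Dosages studied in the paper: 1, 2, 3 and 4% by weight of the cement.
doses = {p: bentonite_dose(CEMENT_KG_PER_M3, p) for p in (1, 2, 3, 4)}
print(doses)  # {1: 3.5, 2: 7.0, 3: 10.5, 4: 14.0}
```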

Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses this method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to employ multiple compression methods simultaneously on different data regions. Our approach adaptively compresses data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency on a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how the in-transit co-processing of simulations can be accelerated. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
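To illustrate the general idea (not the authors' implementation), here is a minimal sketch of importance-driven region compression, with standard deviation as a stand-in importance metric and zlib/float16 quantization as stand-in compression methods:

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)
# Synthetic simulation field split into 4 regions: three smooth (all-zero)
# blocks and one "turbulent" block of random values.
field = np.zeros((4, 64, 64), dtype=np.float32)
field[2] = rng.normal(size=(64, 64)).astype(np.float32)

def importance(block):
    # Stand-in importance metric: standard deviation of the region's values.
    return float(block.std())

def compress_region(block, important):
    if important:
        # Important region: lossless, full float32 precision.
        return zlib.compress(block.tobytes(), level=9)
    # Unimportant region: quantize to float16 first (lossy), then compress.
    return zlib.compress(block.astype(np.float16).tobytes(), level=1)

compressed = [compress_region(b, importance(b) > 0.1) for b in field]
sizes = [len(c) for c in compressed]
print(sizes)  # the turbulent block stays large; smooth blocks shrink sharply
```

In an actual in-transit pipeline, the per-region choice would additionally feed the load balancer, so that ranks holding many incompressible regions hand work to underloaded ranks.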


2017 ◽  
Vol 16 (5) ◽  
pp. 626-644 ◽  
Author(s):  
Elizaveta Sivak ◽  
Maria Yudkevich

This paper studies the dynamics of key characteristics of the academic profession in Russia, based on an analysis of university faculty in the country’s two largest cities, Moscow and St Petersburg. We use data on Russian university faculty from two large-scale comparative studies of the academic profession (‘The Carnegie Study’, carried out in 1992 in 14 countries including Russia, and ‘The Changing Academic Profession Study’, 2007–2012, with 19 participating countries, which Russia joined in 2012) to examine how faculty characteristics and attitudes toward different aspects of academic life changed over 20 years (1992–2011): faculty’s views on reasons to leave or to stay at a university, on university management, and on the role of faculty in decision making. Using the example of universities in the two largest Russian cities, we demonstrate that the high degree of overall centralization of governance in Russian universities barely changed in 20 years. Our paper compares teaching/research preferences and views on statements concerning personal strain associated with work, academic career perspectives, etc., not only within Russian universities between 1992 and 2012, but also between Russia and the other ‘Changing Academic Profession’ countries.


2010 ◽  
Vol 20-23 ◽  
pp. 700-705
Author(s):  
Tian Yuan ◽  
Shang Guan Wei ◽  
Zhi Zhong Lu

Multi-channel virtual reality simulation is a simulation technology that supports grand scenes and a high degree of immersion, offering strong visualization. In this paper, a multi-channel collaborative simulation technology for moving-target monitoring is studied. First, the mathematical modeling foundations of multi-channel technology are studied systematically. Based on a spatial model of the moving target and co-simulation technology, appropriate applications of multi-channel technology are selected, a laboratory simulation platform is built, and a space-based six-degree-of-freedom multi-channel moving-target monitoring simulation is achieved. Experiments show that the multi-channel target-monitoring co-simulation technology used in this paper is highly practical: combined with the moving-target spatial model and co-simulation technology, it exploits the advantages of objective observation to meet requirements such as large scale, realism and immersion.


2021 ◽  
Author(s):  
Juan Antonio Campos ◽  
Jaime Villena ◽  
Marta M. Moreno ◽  
Jesús D. Peco ◽  
Mónica Sánchez-Ormeño ◽  
...  

<p>Understanding the dynamics of plant populations and their relationship with the characteristics of the terrain (slope, texture, etc.) and with particular phenomena (erosion, pollution, environmental constraints, etc.) that could affect them is crucial in order to manage regeneration and rehabilitation projects in degraded lands. In recent years, the emphasis has been placed on the observation and assessment of microtopographic drivers, as they lead to large-scale phenomena. All the ecological variables that affect a given area are interconnected, and success in unraveling the ecological patterns of operation relies on a good characterization of all the parameters involved.</p><p>It is especially interesting to study the natural colonization processes that take place in Mediterranean areas with a high degree of seasonality, whose climatic restrictions may be compounded by the presence of pollutants and various anthropic actions. Over these degraded areas, we propose a new tool, which we have come to call "<strong>pictorial transects</strong>": one-dimensional artificial transects built from small-scale photographs (2 m<sup>2</sup>) taken along a line of work (transect), in which one can see the points where ecological resources are generated, stored and lost, and their fluctuation over time. A derivative of these would be the "<strong>green transects</strong>", in which the green color has been discriminated using the open software ImageJ. It is an inexpensive, fast and straightforward pictorial method that can be used to research and monitor the spatial and temporal fluctuation of the potential input of resources (organic matter, water, fine particles, etc.) to the ecosystem.</p><p>The information obtained from pictorial transects not only refers to the measurement of the photosynthetic potential per unit area or the location of critical points (generation, storage or loss of resources) but also makes it possible to monitor the specific composition of the plant cover. For an appropriate use of this methodology, the criteria determining the direction and length of the different transects must be carefully established beforehand, according to the objectives of the study. For example, a radial transect in a salty pond will give us information on changes in the plant cover as we move away from the center and the salinity decreases. In the same pond, a transect parallel to the shore will give us information on those changes in the vegetation that do not depend on the degree of salinity. There are cases in which this method could be very useful, such as the natural colonization of a degraded mine site, or assessing the progression of the area affected by allochthonous species or weeds in extensive crops.</p>
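A minimal sketch of the green-discrimination step behind the "green transects", assuming a simple per-pixel rule (the margin threshold and the synthetic image are invented for illustration; the authors perform this step in image software):

```python
import numpy as np

def green_fraction(rgb, margin=20):
    """Fraction of pixels classified as vegetation: the green channel
    exceeds both red and blue by at least `margin` (hypothetical rule)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g - r >= margin) & (g - b >= margin)
    return float(mask.mean())

# Synthetic quadrat photo: left half bare soil, right half plant cover.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :50] = (120, 100, 80)   # brownish soil
img[:, 50:] = (60, 160, 60)    # green vegetation
print(green_fraction(img))  # 0.5
```

Applied to each photograph along a transect, the resulting fractions give the fluctuation of green cover along the line of work.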


2021 ◽  
Vol 70 (2) ◽  
pp. 14-22
Author(s):  
Sh. Zh. Kolumbayeva

Globalization, informatization and digitalization have led to large-scale changes that have problematized the modern process of upbringing. The modern practice of upbringing in Kazakhstan is aimed at solving the problem of forming an intellectual nation, and the key figure in the upbringing process is the teacher. The modernization of public consciousness taking place in Kazakhstan and the renewal of both the content of education and the system of upbringing require an understanding not only of the content but also of the methodology of the professional training of teachers for upbringing work and for organizing upbringing systems in educational organizations. We believe that analyzing the traditional, and clarifying the modern, methodological foundations of the professional training of future teachers in Kazakhstan for upbringing work will make it possible to develop a strategy for training future teachers in the conditions of the spiritual renewal of Kazakhstan's society. The article draws on the experience of Abai KazNPU. As a result of the research, we conclude that the process of training teachers in Kazakhstan, a country with a high degree of ethnic, cultural and religious diversity, requires strengthening the upbringing and socializing components of the university's educational process. The strategy for the professional training of a modern teacher should be a polyparadigmatic concept led by the ideas of the personality-oriented and competence-based paradigms.


1995 ◽  
Vol 18 (3) ◽  
pp. 179-202
Author(s):  
Umesh Kumar

In the last decade, an important shift has taken place in hardware design with the advent of smaller and denser integrated circuit packages. Analysis techniques are required to ensure the proper electrical functioning of this hardware. An efficient method is presented to model the parasitic capacitance of VLSI (very large scale integration) interconnections. It is valid for conductors in a stratified medium, which is considered a good approximation for the Si–SiO2 system of which present-day ICs are made. The model approximates the charge density on the conductors as a continuous function on a web of edges; each basis function in the approximation has the form of a “spider” of edges. The method used here [1] has very low complexity compared to other models used previously [2], and achieves a high degree of precision within the range of validity of the stratified medium.
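To illustrate the general charge-density approach behind such extractors (not the paper's "spider" basis functions or stratified-medium Green's function), here is a method-of-moments sketch for the simplest case: the free-space capacitance of a single square plate, with uniform square panels as basis functions:

```python
import numpy as np

EPS0 = 8.854e-12           # permittivity of free space (F/m)
N = 20                     # panels per side
L = 1.0                    # plate side length (m)
h = L / N                  # panel size
centers = [((i + 0.5) * h, (j + 0.5) * h) for i in range(N) for j in range(N)]
M = len(centers)

# Potential-coefficient matrix: P[a, b] = potential at panel a's centre per
# unit charge on panel b (point charge for a != b, exact self term for a == b).
P = np.empty((M, M))
for a, (xa, ya) in enumerate(centers):
    for b, (xb, yb) in enumerate(centers):
        if a == b:
            # Potential at the centre of a uniformly charged square panel.
            P[a, b] = np.log(1 + np.sqrt(2)) / (np.pi * EPS0 * h)
        else:
            r = np.hypot(xa - xb, ya - yb)
            P[a, b] = 1.0 / (4 * np.pi * EPS0 * r)

# Hold the plate at 1 volt and solve P q = V; the total charge equals C.
q = np.linalg.solve(P, np.ones(M))
C = q.sum()
print(C)  # approaches the known ~40 pF for a 1 m square plate
```

The paper's contribution lies in replacing these dense free-space interactions with low-complexity "spider" edge functions and a stratified-medium kernel; the solve-for-charge-at-fixed-potential structure is the same.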


1986 ◽  
Vol 59 (2) ◽  
pp. 683-693 ◽  
Author(s):  
Samuel E. Krug ◽  
Edgar F. Johns

The second-order factor structure of the 16 Personality Factor Questionnaire (16PF) was cross-validated on a large sample ( N = 17,381) of normal males and females. Subjects were sampled across a broad range of ages, socioeconomic levels, education, geographic locations, and ethnicities. The purposes of this investigation were (1) to provide a precise definition of the 16PF second-order factor structure, (2) to shed additional light on the nature of two second-order factors that had previously been identified but described as “unstable” and “poorly reproduced,” and (3) to determine the extent to which common factor estimation formulas for men and women would prove satisfactory for applied work. The resulting solutions were congruent with previous studies and showed a high degree of simple structure. Support was provided for one, but not both, of the two additional second-order factors. Results also supported the use of simplified estimation formulas in applied settings.
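As a hedged sketch of what second-order factor extraction looks like in principle (simulated data and a simple eigen-decomposition of the primaries' correlation matrix; not the authors' estimation formulas or sample):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate 16 primary-factor scores driven by two latent second-order
# factors (a deliberately clean toy structure, unlike real 16PF data).
n = 2000
g = rng.normal(size=(n, 2))                 # two second-order factors
loadings = np.zeros((16, 2))
loadings[:8, 0] = 0.7                       # primaries 1-8 load on factor 1
loadings[8:, 1] = 0.7                       # primaries 9-16 load on factor 2
primaries = g @ loadings.T + 0.5 * rng.normal(size=(n, 16))

# Second-order structure lives in the correlations among the 16 primaries.
R = np.corrcoef(primaries, rowvar=False)    # 16 x 16 correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]       # eigenvalues, descending
print(eigvals[:3])  # two large eigenvalues, then a sharp drop
```

In this toy case the scree clearly supports two second-order factors; the paper's point is precisely that with real data some second-order factors replicate cleanly while others do not.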


Author(s):  
K. Davydova ◽  
G. Kuschk ◽  
L. Hoegner ◽  
P. Reinartz ◽  
U. Stilla

Texture mapping techniques achieve a high degree of realism for computer-generated large-scale and detailed 3D surface models by extracting texture information from photographic images and applying it to the object surfaces. Because a single image cannot capture all parts of a scene, a number of images must be taken; however, texturing the object surfaces from several images can lead to lighting variations between neighboring texture fragments. In this paper we describe the creation of a textured 3D scene from overlapping aerial images using a Markov Random Field energy minimization framework. We aim to maximize the quality of the generated texture mosaic, preserving the resolution of the original images, while minimizing the visibility of seams between adjacent fragments. As input data we use a triangulated mesh of the city center of Munich and multiple camera views of the scene from different directions.
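A toy sketch of the underlying labeling problem (faces, views and costs are invented; real systems minimise the MRF energy with graph cuts or similar rather than exhaustive search): each mesh face picks one camera view, trading off per-face view quality against seam penalties between neighbours.

```python
import itertools

faces = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]   # adjacency between mesh faces
views = [0, 1, 2]                  # candidate camera views
data_cost = {                      # lower = better view of that face
    0: [0.1, 0.9, 0.9],
    1: [0.2, 0.3, 0.9],
    2: [0.9, 0.2, 0.3],
    3: [0.9, 0.9, 0.1],
}
SEAM = 0.5                         # penalty for a seam between differing views

def energy(labels):
    # MRF energy = per-face data term + pairwise seam (smoothness) term.
    e = sum(data_cost[f][labels[f]] for f in faces)
    e += sum(SEAM for (a, b) in edges if labels[a] != labels[b])
    return e

# The graph is tiny, so brute force stands in for energy minimisation.
best = min(itertools.product(views, repeat=len(faces)), key=energy)
print(best, energy(best))
```

Note how the seam term pulls face 2 away from its individually cheapest view so that it shares a view with face 3, removing one visible seam.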


2016 ◽  
Author(s):  
George Dimitriadis ◽  
Joana Neto ◽  
Adam R. Kampff

Electrophysiology is entering the era of ‘Big Data’. Multiple probes, each with hundreds to thousands of individual electrodes, are now capable of simultaneously recording from many brain regions. The major challenge confronting these new technologies is transforming the raw data into physiologically meaningful signals, i.e. single-unit spikes. Sorting the spike events of individual neurons from a spatiotemporally dense sampling of the extracellular electric field is a problem that has attracted much attention [22, 23], but is still far from solved. Current methods still rely on human input and thus become unfeasible as the size of the data sets grows exponentially. Here we introduce the t-distributed stochastic neighbor embedding (t-sne) dimensionality reduction method [27] as a visualization tool in the spike sorting process. T-sne embeds the n-dimensional extracellular spikes (n = number of features by which each spike is decomposed) into a low (usually two) dimensional space. We show that such embeddings, even starting from different feature spaces, form obvious clusters of spikes that can be easily visualized and manually delineated with a high degree of precision. We propose that these clusters represent single units and test this assertion by applying our algorithm to labeled data sets from both hybrid [23] and paired juxtacellular/extracellular recordings [15]. We have released a graphical user interface (GUI) written in Python as a tool for the manual clustering of the t-sne embedded spikes and for an informed overview and fast manual curation of results from other clustering algorithms. Furthermore, the generated visualizations offer evidence in favor of probes with higher density and smaller electrodes. They also graphically demonstrate the diverse nature of the sorting problem when spikes are recorded with different methods and arise from regions with different background spiking statistics.
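A minimal sketch of the embedding step using scikit-learn's TSNE on synthetic spike features (not the authors' pipeline or GUI; the units, feature dimensions and cluster layout are invented for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Synthetic "spike features": three units, 40 spikes each, described by
# 10-dimensional feature vectors (e.g. PCA components of the waveforms).
units = [rng.normal(loc=c, scale=0.3, size=(40, 10))
         for c in (0.0, 3.0, -3.0)]
features = np.vstack(units)   # 120 spikes x 10 features

# Embed into 2-D; perplexity must stay below the number of samples.
emb = TSNE(n_components=2, perplexity=20, random_state=0,
           init="pca").fit_transform(features)
print(emb.shape)  # (120, 2)
```

With well-separated units, the 2-D embedding shows three distinct clouds that a user could delineate manually, which is the workflow the released GUI supports.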


10.29007/pc58 ◽  
2018 ◽  
Author(s):  
Julia Lavid ◽  
Marta Carretero ◽  
Juan Rafael Zamorano

In this paper we set forth an annotation model for dynamic modality in English and Spanish, given its relevance not only for contrastive linguistic purposes but also for practical annotation tasks in the Natural Language Processing (NLP) community. An annotation scheme is proposed which captures both the functional-semantic meanings and the language-specific realisations of dynamic meanings in both languages. The scheme is validated through a reliability study performed on a randomly selected set of one hundred and twenty sentences from the MULTINOT corpus, resulting in a high degree of inter-annotator agreement. We discuss our main findings and pay particular attention to the difficult cases, which are currently being used to develop detailed guidelines for the large-scale annotation of dynamic modality in English and Spanish.
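For illustration, one standard inter-annotator agreement statistic (Cohen's kappa) computed on hypothetical binary labels; the paper's actual agreement measure, label set and 120-sentence sample are not reproduced here:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)     # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical labels for 10 sentences: 1 = dynamic modality, 0 = other.
ann1 = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
ann2 = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
print(cohens_kappa(ann1, ann2))  # ~0.6
```

Kappa corrects raw agreement for what two annotators would agree on by chance, which is why it is preferred over simple percent agreement in reliability studies like the one described above.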

