Identifying Distinctive Features of Productive and Socially Efficient Schools

2019
Vol 7 (4)
pp. 15-23
Author(s):  
Marina Matyushkina
Konstantin Belousov

The article presents the results of a series of empirical studies analyzing the relationship between school performance (measured by Unified State Examination results), the school's social efficiency (measured by how often students turn to private tutors), and various social and pedagogical characteristics of the school. A correlation analysis was carried out on data gathered over five years of regular comprehensive surveys in schools of St. Petersburg. The sets of features most characteristic of high-performing schools and of socially efficient schools are identified and described. Distinctive features of successful schools include heavy use of tutoring services by students, good material and technical conditions, and teachers' competence in design and research methods. In socially effective schools, students' academic results rest on the school's own strengths: teachers' potential and innovative technologies with extensive use of the Internet and electronic resources. The study was carried out with the financial support of the Russian Foundation for Basic Research within the scientific project "Signs of an effective school in conditions of the mass distribution of tutoring practices," No. 19-013-00455.
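As a rough illustration of the correlation analysis described above, the sketch below computes pairwise correlations between school-level characteristics and exam performance with pandas. All column names and values are hypothetical placeholders, not the study's actual survey variables.

```python
import pandas as pd

# Hypothetical school-level survey data; columns and numbers are illustrative only.
schools = pd.DataFrame({
    "mean_use_score": [72.1, 65.4, 80.3, 58.9, 69.7],         # exam (USE) performance
    "tutoring_rate": [0.45, 0.30, 0.50, 0.20, 0.40],          # share of students using tutors
    "teacher_research_competence": [3.8, 3.1, 4.2, 2.9, 3.5], # survey scale 1-5
    "material_conditions": [4.0, 3.2, 4.5, 2.8, 3.6],         # survey scale 1-5
})

# Pairwise Pearson correlations between school characteristics and exam performance.
corr_matrix = schools.corr(method="pearson")
print(corr_matrix["mean_use_score"].sort_values(ascending=False))
```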

2019
Vol 7 (1)
pp. 55-70
Author(s):  
Moh. Zikky
M. Jainal Arifin
Kholid Fathoni
Agus Zainal Arifin

A High-Performance Computing (HPC) system is a computer system built to handle heavy computational loads. HPC provides high-performance technology and shortens computing times. It is often used in large-scale industry and in activities that require high-level computation, such as rendering virtual reality. In this research, we present a virtual reality simulation of Tawaf with 1,000 pilgrims and realistic surroundings of Masjidil-Haram, an interactive and immersive simulation built by imitating them with 3D models. The main purpose of this study is to measure and understand the processing time of this virtual reality implementation of the Tawaf activity on various platforms, such as a desktop computer and an Android smartphone. The results show that agents on the outer rotation around the Kaabah mostly take the least time even though they travel a longer distance than agents closer to the Kaabah, because the agents in the area closest to the Kaabah face the densest crowds. In this case, obstacles have more impact than distance.
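The distance-versus-crowding trade-off can be sketched with a toy model in which an agent's speed drops with local crowd density. All radii, speeds, and densities below are made-up illustrative numbers, not measurements from the simulation.

```python
import math

def lap_time(radius_m, base_speed_mps, crowd_density):
    """Estimate one circuit time around the Kaabah for an agent at a given radius.

    Toy model: speed drops with local crowd density, so a shorter inner path can
    still take longer than a less crowded outer path.
    """
    path_length = 2 * math.pi * radius_m
    effective_speed = base_speed_mps * (1.0 - crowd_density)  # crowding slows the agent
    return path_length / effective_speed

inner = lap_time(radius_m=15, base_speed_mps=1.2, crowd_density=0.75)  # dense crowd near the Kaabah
outer = lap_time(radius_m=45, base_speed_mps=1.2, crowd_density=0.10)  # sparse crowd on the outer ring
print(f"inner lap: {inner:.0f} s, outer lap: {outer:.0f} s")
```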


Author(s):  
Yassine Sabri
Aouad Siham

Multi-area and multi-faceted remote sensing (RS) datasets are widely used due to the increasing demand for accurate and up-to-date information on resources and the environment for regional and global monitoring. In general, processing RS data involves a complex multi-step sequence that includes several independent processing steps depending on the type of RS application. Processing RS data for regional disaster and environmental monitoring is recognized as computationally and data intensive. By combining cloud computing and HPC technology, we propose a method to solve these problems efficiently with a large-scale RS data processing system suitable for various applications and for real-time, on-demand service. The ubiquity, elasticity, and high-level transparency of the cloud computing model make it possible to run massive RS data management and processing tasks for monitoring dynamic environments on any cloud via a web interface. Hilbert-based data indexing methods are used to optimally query and access RS images, RS data products, and intermediate data. The core of the cloud service provides a parallel file system for large RS data and an interface for accessing RS data that improves data locality and optimizes I/O performance. Our experimental analysis demonstrates the effectiveness of the platform.
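A minimal sketch of the Hilbert-curve indexing idea mentioned above: mapping 2D tile coordinates to positions along a space-filling curve so that spatially close tiles receive nearby index values. This is a generic textbook routine, not the system's actual indexing code.

```python
def hilbert_index(n, x, y):
    """Map grid cell (x, y) to its distance along a Hilbert curve covering an
    n-by-n grid (n must be a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the curve remains continuous.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting tile coordinates by Hilbert index keeps spatially close tiles close on disk.
tiles = [(0, 0), (3, 1), (1, 2), (2, 3)]
print(sorted(tiles, key=lambda t: hilbert_index(4, t[0], t[1])))
```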


2021
Vol 15
Author(s):  
Giordana Florimbi
Emanuele Torti
Stefano Masoli
Egidio D'Angelo
Francesco Leporati

In modern computational modeling, neuroscientists need to reproduce long-lasting activity of large-scale networks in which neurons are described by highly complex mathematical models. These aspects strongly increase the computational load of the simulations, which can be performed efficiently by exploiting parallel systems to reduce processing times. Graphics Processing Unit (GPU) devices meet this need by providing High Performance Computing on the desktop. In this work, the authors describe a novel Granular layEr Simulator implemented on a multi-GPU system, capable of reconstructing the cerebellar granular layer in 3D space and reproducing its neuronal activity. The reconstruction is characterized by a high level of novelty and realism, considering axonal/dendritic field geometries oriented in 3D space and following convergence/divergence rates reported in the literature. Neurons are modeled using Hodgkin-Huxley representations. The network is validated by reproducing typical behaviors that are well documented in the literature, such as the center-surround organization. The reconstruction of a network whose volume is 600 × 150 × 1,200 μm³, with 432,000 granules, 972 Golgi cells, 32,399 glomeruli, and 4,051 mossy fibers, takes 235 s on an Intel i9 processor. Reproducing 10 s of activity takes only 4.34 and 3.37 h on a single- and multi-GPU desktop system (with one or two NVIDIA RTX 2080 GPUs, respectively). Moreover, the code takes only 3.52 and 2.44 h if run on one or two NVIDIA V100 GPUs, respectively. The relevant speedups reached (up to ~38× in the single-GPU version and ~55× in the multi-GPU version) clearly demonstrate that GPU technology is highly suitable for realistic large-network simulations.
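To make the per-neuron workload concrete, here is a minimal single-compartment Hodgkin-Huxley update using forward Euler and the classic squid-axon parameters. It illustrates the kind of state update that a simulator evaluates for every cell at every time step; it is not the granular-layer model itself, and all values are the textbook defaults.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid axon), forward Euler integration.
Cm, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # µF/cm², mS/cm²
ENa, EK, EL = 50.0, -77.0, -54.387               # mV
dt, steps, I_ext = 0.01, 50000, 10.0             # ms, number of steps, µA/cm²

V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes = 0
for _ in range(steps):
    # Voltage-dependent rate constants for the gating variables.
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_ext - I_ion) / Cm
    spikes += (V < 0.0) and (V_new >= 0.0)       # count upward zero crossings
    V = V_new

print(f"spikes in {steps * dt:.0f} ms: {spikes}")
```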


2013
Vol 21 (1-2)
pp. 1-16
Author(s):  
Marek Blazewicz
Ian Hinder
David M. Koppelman
Steven R. Brandt
Milosz Ciznicki
...  

Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
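As a toy illustration of the higher-order finite differencing that such frameworks generate automatically, the following sketch advances the 1D wave equation with a fourth-order-accurate spatial stencil and a leapfrog time step. Chemora itself targets multi-block 3D domains and systems as large as the Einstein equations; the grid size, pulse, and step count below are arbitrary.

```python
import numpy as np

nx, dx, c = 400, 0.01, 1.0
dt = 0.4 * dx / c                                   # CFL-limited time step
x = np.arange(nx) * dx
u_prev = np.exp(-((x - 2.0) ** 2) / 0.01)           # Gaussian initial pulse
u = u_prev.copy()

def d2x_fourth_order(f, dx):
    """Fourth-order central stencil: (-f[i-2] + 16 f[i-1] - 30 f[i] + 16 f[i+1] - f[i+2]) / (12 dx^2)."""
    d2 = np.zeros_like(f)
    d2[2:-2] = (-f[:-4] + 16 * f[1:-3] - 30 * f[2:-2] + 16 * f[3:-1] - f[4:]) / (12 * dx**2)
    return d2

# Leapfrog update for u_tt = c^2 u_xx.
for _ in range(500):
    u_next = 2 * u - u_prev + (c * dt) ** 2 * d2x_fourth_order(u, dx)
    u_prev, u = u, u_next

print("max |u| after 500 steps:", float(np.abs(u).max()))
```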


2021
Vol 5 (ICFP)
pp. 1-32
Author(s):  
Farzin Houshmand
Mohsen Lesani
Keval Vora

Graph analytics elicits insights from large graphs to inform critical decisions for business, safety and security. Several large-scale graph processing frameworks feature efficient runtime systems; however, they often provide programming models that are low-level and subtly different from each other. Therefore, end users can find implementing and, especially, optimizing graph analytics error-prone and time-consuming. This paper regards the abstract interface of the graph processing frameworks as the instruction set for graph analytics, and presents Grafs, a high-level declarative specification language for graph analytics and a synthesizer that automatically generates efficient code for five high-performance graph processing frameworks. It features novel semantics-preserving fusion transformations that optimize the specifications and reduce them to three primitives: reduction over paths, mapping over vertices and reduction over vertices. Reductions over paths are commonly calculated based on push or pull models that iteratively apply kernel functions at the vertices. This paper presents conditions, parametric in terms of the kernel functions, for the correctness and termination of the iterative models, and uses these conditions as specifications to automatically synthesize the kernel functions. Experimental results show that the generated code matches or outperforms handwritten code, and that fusion accelerates execution.
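A minimal pull-model sketch of a "reduction over paths" in the sense described above: each vertex repeatedly applies a kernel that reduces over its in-neighbors (here with min and +, i.e., single-source shortest paths) until a fixed point is reached. This plain-Python toy only illustrates the iterative model; Grafs synthesizes such kernels for high-performance frameworks rather than executing them this way.

```python
import math

def pull_sssp(num_vertices, edges, source):
    """edges: list of (u, v, w) treated as directed u -> v with weight w."""
    incoming = {v: [] for v in range(num_vertices)}
    for u, v, w in edges:
        incoming[v].append((u, w))

    dist = [math.inf] * num_vertices
    dist[source] = 0.0
    changed = True
    while changed:                      # iterate the vertex kernel to a fixed point
        changed = False
        for v in range(num_vertices):
            # Kernel: reduce over in-neighbors with (min, +); the source keeps 0.
            best = min((dist[u] + w for u, w in incoming[v]), default=math.inf)
            best = min(best, 0.0 if v == source else math.inf)
            if best < dist[v]:
                dist[v], changed = best, True
    return dist

edges = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (2, 3, 2.0)]
print(pull_sssp(4, edges, source=0))    # [0.0, 2.0, 3.0, 5.0]
```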


2020
Vol 34 (07)
pp. 12645-12652
Author(s):  
Yifan Yang
Guorong Li
Yuankai Qi
Qingming Huang

Convolutional neural networks (CNNs) have been widely adopted in the visual tracking community, significantly improving the state of the art. However, most of them ignore the important cues in the distribution of training data and in high-level features that are tightly coupled with the target/background classification. In this paper, we propose to improve tracking accuracy via online training. On the one hand, we squeeze redundant training data by analyzing the dataset distribution in the low-level feature space. On the other hand, we design statistic-based losses that increase the inter-class distance while decreasing the intra-class variance of high-level semantic features. We demonstrate the effectiveness on top of two high-performance tracking methods: MDNet and DAT. Experimental results on the challenging large-scale OTB2015 and UAVDT benchmarks demonstrate the outstanding performance of our tracking method.
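One way to read the "statistic-based losses" is as a Fisher-style objective that penalizes within-class variance and rewards between-class distance of the high-level features. The sketch below is our hedged interpretation in plain NumPy, not the paper's exact loss; the function name, margin, and data are illustrative.

```python
import numpy as np

def fisher_style_loss(features, labels, margin=1.0):
    """features: (N, D) high-level features; labels: (N,) with 1 = target, 0 = background."""
    pos, neg = features[labels == 1], features[labels == 0]
    mu_pos, mu_neg = pos.mean(axis=0), neg.mean(axis=0)
    intra = pos.var(axis=0).sum() + neg.var(axis=0).sum()   # within-class scatter (keep small)
    inter = np.sum((mu_pos - mu_neg) ** 2)                  # between-class distance (keep large)
    return intra + max(0.0, margin - inter)

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (32, 8)), rng.normal(1.0, 0.1, (32, 8))])
labels = np.array([1] * 32 + [0] * 32)
print(f"loss: {fisher_style_loss(feats, labels):.3f}")
```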


2021
Vol 11 (22)
pp. 10803
Author(s):  
Jiagang Song
Yunwu Lin
Jiayu Song
Weiren Yu
Leyuan Zhang

Mass multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet due to the wide application of location-based services (LBS). How to find the high-level semantic relationship between geo-multimedia data and construct an efficient index is crucial for large-scale geo-multimedia retrieval. To address this challenge, the paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utilizes deep neural networks and an enhanced triplet constraint to capture high-level semantics. Besides, a novel hybrid index, called TH-Quadtree, is developed by combining cross-modal binary hash codes and a quadtree to support high-performance search. Extensive experiments are conducted on three commonly used benchmarks, and the results show the superior performance of the proposed method.
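A minimal sketch of a triplet constraint on cross-modal hash codes: an image anchor should be closer (in Hamming distance) to a semantically matching text code than to a non-matching one, by at least a margin. The codes and margin below are illustrative; TDCMR learns the codes with deep networks rather than using fixed binary vectors.

```python
import numpy as np

def hamming(a, b):
    return int(np.sum(a != b))

def triplet_hash_loss(anchor_img, pos_text, neg_text, margin=2):
    """Hinge loss on Hamming distances between binary hash codes."""
    return max(0, hamming(anchor_img, pos_text) - hamming(anchor_img, neg_text) + margin)

anchor = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # image hash code
pos    = np.array([1, 0, 1, 0, 0, 0, 1, 0])   # matching text code (distance 1)
neg    = np.array([0, 1, 0, 1, 1, 0, 1, 1])   # non-matching text code (distance 5)
print(triplet_hash_loss(anchor, pos, neg))     # 0: the triplet constraint is satisfied
```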


1993
Vol 50 (11)
pp. 2513-2527
Author(s):  
David F. Millie
Hans W. Paerl
James P. Hurley

Past and current efforts at identifying microalgal phylogenetic groups rely largely on microscopic evaluation, which requires a high level of taxonomic skill, may take considerable time, can be variable among personnel, and does not allow characterization of the physiological status of the taxa. High-performance liquid chromatography (HPLC) has proven effective in rapidly separating and distinguishing chlorophylls, chlorophyll-degradation products, and carotenoids within monotypic and mixed algal samples. When coupled with absorbance and/or fluorescence spectroscopy, HPLC can accurately characterize phylogenetic groups and changes in community composition and yield information concerning microalgal physiological status, production, trophic interaction, and paleolimnology/paleoceanography. The recent widespread occurrence of toxic and noxious phytoplankton blooms has necessitated the use of remote imagery of pigment and reflectance "signatures" for monitoring and predicting bloom distribution. Because HPLC allows the processing of large numbers of samples from numerous locations relatively quickly, it is ideally suited for large-scale "ground truthing" of remotely sensed imagery. Coupled with rapidly evolving computer-based remote sensing technologies, HPLC-based pigment analyses may provide accurate assessments of aquatic biogeochemical flux, primary production, trophic state, water quality, and changes therein on local, regional, and global scales.


Geophysics
2021
pp. 1-74
Author(s):  
Matteo Ravasi
Ivan Vasconcelos

Numerical integral operators of convolution type form the basis of most wave-equation-based methods for processing and imaging of seismic data. As several of these methods require the solution of an inverse problem, multiple forward and adjoint passes of the modeling operator are generally required to converge to a satisfactory solution. This work highlights the memory requirements and computational challenges that arise when implementing such operators on 3D seismic datasets and using them to solve large systems of integral equations. A Python framework is presented that leverages libraries for distributed storage and computing and provides a high-level symbolic representation of linear operators. A driving goal of our work is not only to offer a widely deployable, ready-to-use high-performance computing (HPC) framework, but to demonstrate that it enables addressing research questions that are otherwise difficult to tackle. To this end, the first example of 3D full-wavefield target-oriented imaging, which comprises two subsequent steps of seismic redatuming, is presented. The redatumed fields are estimated by means of gradient-based inversion using the full dataset as well as spatially decimated versions of the dataset as a way to investigate the robustness of both inverse problems to spatial aliasing in the input dataset. Our numerical example shows that when one spatial direction is finely sampled, satisfactory redatuming and imaging can be accomplished even when the sampling in the other direction is coarser than a quarter of the dominant wavelength. While aliasing introduces noise into the redatumed fields, they are less sensitive to well-known spurious artefacts compared with cheaper, adjoint-based redatuming techniques. These observations are shown to hold for a relatively simple geologic structure; while further testing is needed for more complex scenarios, we expect them to be generally valid, possibly breaking down for extreme cases.
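The forward/adjoint pattern described above can be sketched in a few lines: wrap a convolution-type modeling operator and its adjoint as a SciPy LinearOperator and invert it iteratively with LSQR. This 1D toy only illustrates the pattern; the actual framework operates on distributed 3D seismic operators, and the wavelet and model below are arbitrary.

```python
import numpy as np
from scipy.signal import convolve
from scipy.sparse.linalg import LinearOperator, lsqr

wavelet = np.array([0.2, 0.6, 1.0, 0.6, 0.2])
n = 200

def forward(x):
    return convolve(x, wavelet, mode="same")          # modeling: blur by the wavelet

def adjoint(y):
    return convolve(y, wavelet[::-1], mode="same")    # adjoint: correlate with the wavelet

Op = LinearOperator((n, n), matvec=forward, rmatvec=adjoint)

x_true = np.zeros(n)
x_true[[50, 120, 160]] = [1.0, -0.7, 0.4]             # sparse reflectivity-like model
data = forward(x_true)

x_est = lsqr(Op, data, iter_lim=100)[0]               # gradient-based (LSQR) inversion
print("relative model error:", float(np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)))
```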


Nanomaterials
2019
Vol 9 (9)
pp. 1184
Author(s):  
Francisco J. Romero
Almudena Rivadeneyra
Inmaculada Ortiz-Gomez
Alfonso Salinas
Andrés Godoy
...  

In this paper, we present a simple and inexpensive method for the fabrication of high-performance graphene-based heaters on different large-scale substrates through laser photothermal reduction of graphene oxide (laser-reduced graphene oxide, LrGO). This method allows an efficient and localized high level of reduction and therefore good electrical conductivity of the treated films. The performance of the heaters is studied in terms of steady-state temperature, power consumption, and time response for different substrates and sizes. The results show that the LrGO heaters can achieve stable steady-state temperatures higher than 200 °C when a voltage of 15 V is applied, featuring a time constant of around 4 s and a heat transfer coefficient of ~200 °C·cm²/W. These characteristics are compared with other technologies in this field, demonstrating that the fabrication approach described in this work is competitive and promising for producing large-scale flexible heaters with a very fast response and high steady-state temperatures in a cost-effective way. This technology can be easily combined with other fabrication methods, such as screen printing or spray deposition, for the manufacturing of complete sensing systems where temperature control is required to adjust functionality or to tune sensitivity or selectivity.
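The reported time response is consistent with a simple first-order (lumped) thermal model, sketched below using the rounded figures quoted above. The ambient temperature and the lumped exponential model are our own simplifying assumptions, not the authors' fitting procedure.

```python
import numpy as np

T_amb, T_ss, tau = 25.0, 200.0, 4.0       # °C, °C, s (rounded figures from the abstract)

def heater_temperature(t):
    """First-order approach to steady state: T(t) = T_amb + (T_ss - T_amb) * (1 - exp(-t/tau))."""
    return T_amb + (T_ss - T_amb) * (1.0 - np.exp(-t / tau))

for t in (4.0, 8.0, 12.0, 20.0):
    print(f"t = {t:4.1f} s -> T ≈ {heater_temperature(t):6.1f} °C")
```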

