Urban Morphometrics and the Intangible Uniqueness of Tangible Heritage. An Evidence-Based Generative Design Experiment in Historical Kochi (IN)

Heritage ◽  
2021 ◽  
Vol 4 (4) ◽  
pp. 4399-4420
Author(s):  
Alessandro Venerandi ◽  
Ombretta Romice ◽  
Olga Chepelianskaia ◽  
Kavya Kalyan ◽  
Nitin Bhardwaj ◽  
...  

Asia is urbanising rapidly. Current urbanisation practices often compromise sustainability, prosperity, and local quality of life, while context-sensitive alternatives show very limited impact. A third way is necessary to integrate mass-production, heritage, and human values. As part of UNICITI’s initiative, A Third Way of Building Asian Cities, we propose a scalable and replicable methodology which captures unique morphological traits of urban types (i.e., areas with homogeneous urban form) to inform innovative large-scale and context-sensitive practices. We extract urban types from a large set of quantitative descriptors and provide a systematic way to generate figure-grounds aligned with such urban types. The application of the proposed methodology to Kochi (IN) reveals 24 distinct urban types with unique morphological features. Profiles, containing design-relevant values of morphometrics, are then produced for a selection of urban types located in the historical district of Fort Kochi/Mattancherry. Based on these, figure-ground design demonstrations are carried out in three sample sites. Outcomes seem aligned with the urban character of their respective types, while allowing distinct design expressions, suggesting that the proposed approach has the potential to inform design in historical/heritage areas and, more broadly, the search for a Third Way of Building Asian Cities.
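The core step of extracting urban types from quantitative descriptors amounts to clustering areas by morphometric similarity. The abstract does not specify the algorithm or the descriptors, so the sketch below is purely illustrative: hypothetical descriptor values, and a minimal k-means with deterministic farthest-point initialization standing in for whatever clustering the authors used.

```python
import numpy as np

# Hypothetical morphometric descriptors per urban block:
# [plot coverage, mean building height (m), street width (m)]
descriptors = np.array([
    [0.80, 6.0, 4.0], [0.78, 5.5, 4.5],     # dense low-rise fabric
    [0.30, 30.0, 20.0], [0.28, 28.0, 22.0], # towers in open space
])

def kmeans(X, k=2, iters=20):
    # Farthest-point initialization keeps the sketch reproducible.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.square(X - c).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(descriptors)
# Morphologically similar blocks land in the same "urban type"
assert labels[0] == labels[1] and labels[2] == labels[3]
assert labels[0] != labels[2]
```

In practice one would standardize the descriptors (they have very different scales) and choose the number of types with a model-selection criterion; the paper reports 24 types for Kochi.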

2019 ◽  
Author(s):  
Ryther Anderson ◽  
Achay Biong ◽  
Diego Gómez-Gualdrón

Tailoring the structure and chemistry of metal-organic frameworks (MOFs) enables the manipulation of their adsorption properties to suit specific energy and environmental applications. As there are millions of possible MOFs (with tens of thousands already synthesized), molecular simulation, such as grand canonical Monte Carlo (GCMC), has frequently been used to rapidly evaluate the adsorption performance of a large set of MOFs. This allows subsequent experiments to focus only on a small subset of the most promising MOFs. In many instances, however, even molecular simulation becomes prohibitively time consuming, underscoring the need for alternative screening methods, such as machine learning, to precede molecular simulation efforts. In this study, as a proof of concept, we trained a neural network as the first example of a machine learning model capable of predicting full adsorption isotherms of different molecules not included in the training of the model. To achieve this, we trained our neural network only on alchemical species, represented by their geometry and force field parameters, and used this neural network to predict the loadings of real adsorbates. We focused on predicting room temperature adsorption of small (one- and two-atom) molecules relevant to chemical separations, namely argon, krypton, xenon, methane, ethane, and nitrogen. However, we also observed surprisingly promising predictions for more complex molecules, whose properties are outside the range spanned by the alchemical adsorbates. Prediction accuracies suitable for large-scale screening were achieved using simple MOF descriptors (e.g., geometric properties and chemical moieties) and adsorbate descriptors (e.g., force-field parameters and geometry). Our results illustrate a new philosophy of training that opens the path towards development of machine learning models that can predict the adsorption loading of any new adsorbate at any new operating conditions in any new MOF.
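The train-on-alchemical, predict-on-real idea can be sketched in a few lines. The paper trains a neural network on GCMC-generated loadings; here, to keep the example short and self-contained, a linear least-squares fit stands in for the network, the loading function is a synthetic placeholder, and all descriptor values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# "Alchemical" adsorbates: hypothetical force-field descriptors
# [epsilon/k_B (K), sigma (Angstrom)] sampled over a broad range.
X_train = rng.uniform([50.0, 2.5], [300.0, 4.5], size=(200, 2))

def loading(x):
    # Synthetic stand-in for a GCMC loading at fixed T and P (mol/kg).
    eps, sigma = x[..., 0], x[..., 1]
    return 0.01 * eps - 0.5 * sigma + 4.0

y_train = loading(X_train)

# Fit on alchemical species only (closed-form least squares here,
# a neural network in the actual study).
A = np.c_[X_train, np.ones(len(X_train))]
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Predict for a "real" adsorbate never seen in training
# (methane-like Lennard-Jones parameters, used illustratively).
x_real = np.array([148.0, 3.73])
pred = w[:2] @ x_real + w[2]
assert abs(pred - loading(x_real)) < 1e-6
```

The point of the philosophy is that the model never sees the real adsorbates during training; it interpolates in descriptor space spanned by the alchemical species.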


2021 ◽  
Author(s):  
Béla Kovács ◽  
Márton Pál ◽  
Fanni Vörös

The use of aerial photography in topography began in the first decades of the 20th century. Remotely sensed data have become indispensable for cartographers and GIS staff doing large-scale mapping: especially topographic, orienteering and thematic maps. The use of UAVs (unmanned aerial vehicles) for this purpose has also become widespread in recent years. Various drones and sensors (RGB, multispectral and hyperspectral) with many specifications are used to capture and process the physical properties of an examined area. In parallel with the development of the hardware, new software solutions are emerging to visualize and analyse photogrammetric material: a large set of algorithms with different approaches is available for image processing.

Our study focuses on the large-scale topographic mapping of vegetation and land cover. Most traditional analogue and digital maps use these layers either for background or for highlighted thematic purposes. We propose to use the theory of OBIA (Object-Based Image Analysis) to differentiate cover types. This method groups pixels into larger polygon units based on either spectral or other variables (e.g. elevation, aspect, curvature in the case of DEMs). The neighbours of initial seed points are examined to determine whether they should be added to the region according to the similarity of their attributes. Using OBIA, different land cover types (trees, grass, soils, bare rock surfaces) can be distinguished with either supervised or unsupervised classification, depending on the purposes of the analyst. Our base data were high-resolution RGB and multispectral images (with 5 bands).

Following this methodology, not only elevation data (e.g. shaded relief or vector contour lines) but also vector land cover data can be derived from UAV imagery for cartographers and GIS analysts. As the number of distinct land cover groups is free to choose, even quite complex thematic layers can be produced. These layers can serve as subjects of further analyses or for cartographic visualization.

BK is supported by the "Application Domain Specific Highly Reliable IT Solutions" project, implemented with support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme.

MP and FV are supported by EFOP-3.6.3-VEKOP-16-2017-00001: Talent Management in Autonomous Vehicle Control Technologies. The project is financed by the Hungarian Government and co-financed by the European Social Fund.
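The seed-based grouping described above is classic region growing. The sketch below implements it on a toy single-band raster with hypothetical reflectance values; production OBIA tools use multi-band similarity and more elaborate merging criteria, but the neighbour test against the running region statistics is the same idea.

```python
import numpy as np
from collections import deque

# Toy single-band "image": two land-cover patches
# (grass-like ~0.2 on the left, tree-like ~0.8 on the right).
img = np.array([
    [0.20, 0.21, 0.80, 0.82],
    [0.19, 0.22, 0.79, 0.81],
    [0.21, 0.20, 0.78, 0.80],
])

def region_grow(img, seed, tol=0.1):
    """4-connected region growing: a neighbour joins the region if its
    value is within `tol` of the region's running mean."""
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[seed] = True
    total, n = img[seed], 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < H and 0 <= cc < W and not mask[rr, cc]
                    and abs(img[rr, cc] - total / n) <= tol):
                mask[rr, cc] = True
                total += img[rr, cc]
                n += 1
                queue.append((rr, cc))
    return mask

grass = region_grow(img, (0, 0))
assert grass.sum() == 6          # the six left-hand pixels only
assert not grass[:, 2:].any()    # tree pixels stay outside the region
```

Each grown region then becomes a polygon unit that can be classified (supervised or unsupervised) into a land cover type.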


Author(s):  
Martin Schreiber ◽  
Pedro S Peixoto ◽  
Terry Haut ◽  
Beth Wingate

This paper presents, discusses and analyses a massively parallel-in-time solver for linear oscillatory partial differential equations, which is a key numerical component for evolving weather, ocean, climate and seismic models. The time parallelization in this solver allows us to significantly exceed the computing resources used by parallelization-in-space methods and results in a correspondingly significantly reduced wall-clock time. One of the major difficulties of achieving Exascale performance for weather prediction is that the strong scaling limit, the parallel performance for a fixed problem size with an increasing number of processors, saturates. A main avenue to circumvent this problem is to introduce new numerical techniques that take advantage of time parallelism. In this paper, we use a time-parallel approximation that retains the frequency information of oscillatory problems. This approximation is based on (a) reformulating the original problem into a large set of independent terms and (b) solving each of these terms independently of each other, which can now be accomplished on a large number of high-performance computing resources. Our experiments were conducted on up to 3586 cores, for problem sizes whose parallelization-in-space scalability is already saturated on a single node. With the parallelization-in-time approach, we gain significant reductions in the time-to-solution of 118.3× for spectral methods and 1503.0× for finite-difference methods. A developed and calibrated performance model gives the scalability limitations a priori for this new approach and allows us to extrapolate the performance of the method towards large-scale systems. This work has the potential to contribute as a basic building block of parallelization-in-time approaches, with possible major implications in applied areas modelling oscillation-dominated problems.
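The reformulation in step (a) can be illustrated on a tiny linear oscillatory system. The paper's actual scheme approximates the matrix exponential as a sum of independent solves; in the toy sketch below an eigen-decomposition plays that role, since each eigenmode's term can be evaluated independently (and hence, in principle, on a separate compute node).

```python
import numpy as np

# Linear oscillatory system du/dt = L u with skew-symmetric L
# (pure oscillation, as in linearized wave equations).
L = np.array([[0.0, -2.0],
              [2.0,  0.0]])
u0 = np.array([1.0, 0.0])
t = 0.5

# (a) Reformulate: diagonalize L so each mode evolves independently.
lam, V = np.linalg.eig(L)
c = np.linalg.solve(V, u0.astype(complex))

# (b) Each term exp(lam_k * t) * c_k * v_k is independent of the
# others; this loop stands in for distributing them across ranks.
terms = [np.exp(lk * t) * ck * V[:, k]
         for k, (lk, ck) in enumerate(zip(lam, c))]
u = np.real(sum(terms))

# Reference: the exact solution is a rotation by angle 2t.
exact = np.array([np.cos(2 * t), np.sin(2 * t)])
assert np.allclose(u, exact)
```

No time stepping is needed: the solution at any time t is one sum over independent terms, which is what makes the wall-clock reductions possible once the terms are spread over many cores.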


1997 ◽  
Vol 50 (3) ◽  
pp. 528-559 ◽  
Author(s):  
Catriona M. Morrison ◽  
Tameron D. Chappell ◽  
Andrew W. Ellis

Studies of lexical processing have relied heavily on adult ratings of word learning age or age of acquisition, which have been shown to be strongly predictive of processing speed. This study reports a set of objective norms derived in a large-scale study of British children's naming of 297 pictured objects (including 232 from the Snodgrass & Vanderwart, 1980, set). In addition, data were obtained on measures of rated age of acquisition, rated frequency, imageability, object familiarity, picture-name agreement, and name agreement. We discuss the relationship between the objective measure and adult ratings of word learning age. Objective measures should be used when available, but where not, our data suggest that adult ratings provide a reliable and valid measure of real word learning age.


2016 ◽  
Author(s):  
Timothy N. Rubin ◽  
Oluwasanmi Koyejo ◽  
Krzysztof J. Gorgolewski ◽  
Michael N. Jones ◽  
Russell A. Poldrack ◽  
...  

Abstract: A central goal of cognitive neuroscience is to decode human brain activity, i.e., to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive, i.e., capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a Bayesian decoding framework based on a novel topic model, Generalized Correspondence Latent Dirichlet Allocation, that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text, enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity.
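The "seeding" of decoder priors is just Bayes' rule with a non-uniform prior. The numbers and the two-topic setup below are hypothetical and vastly simplified relative to the full topic model, but they show how context shifts the decoded interpretation of the same activation pattern.

```python
import numpy as np

# Two hypothetical topics; likelihood of one observed activation
# pattern under each topic (placeholder values).
likelihood = np.array([0.30, 0.10])   # P(pattern | topic)

def decode(prior):
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Uninformative prior vs. a prior "seeded" by task context
# (e.g. accompanying text or images favouring topic 1).
flat = decode(np.array([0.5, 0.5]))
seeded = decode(np.array([0.2, 0.8]))

assert flat[0] > flat[1]       # the pattern alone favours topic 0
assert seeded[1] > flat[1]     # seeding shifts mass toward topic 1
```

The same mechanism extends to many topics and to priors derived from arbitrary seed images or text, which is what makes the framework context-sensitive.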


2020 ◽  
Vol 13 (3) ◽  
pp. 232-248
Author(s):  
E. N. Fursova

The article is devoted to the study of the linguistic tradition of the Berbers, the indigenous people of North Africa. The Berbers have maintained a rich tradition of spoken language. At the turn of the 20th-21st centuries, against the backdrop of the intensification of the movement for self-determination and for their cultural and linguistic rights, the Berbers launched a large-scale effort aimed at restoring the national written language. The author suggests that the need to develop standardized writing was partly due to the desire of the Berbers to consolidate the official status of their language in the Constitution. The author notes that the aggravation of the so-called “Berber question” at the end of the 20th century spurred the interest of scientists and researchers in the Berber written heritage. Most of the surviving handwritten documents are Berber texts (mostly religious) recorded using the Arabic alphabet between the 15th and early 20th centuries. The study of the conditions of their creation and the fields of their application shows that these texts played a significant role in the dissemination of religious and scientific knowledge among the Berbers. It is concluded that, despite the predominantly oral form of the language, the Berbers managed to create a unique written tradition. The article discusses in detail the main problems in the study of Berber manuscripts, among them: the serious prior knowledge in various fields required of the researcher; the limited accessibility of texts stored in private collections; and the need to develop unified approaches to the description of Berber manuscripts, their digitization, and other important arrangements to ensure the availability of documents for the research community. Particular attention is paid to the history of the creation of the first collections of Berber manuscripts and their cataloging. The author also highlights the work of scholars who made a substantial contribution to the study of the Berber manuscripts, many of which have not yet been discovered and which hold significant potential for preserving and enhancing the Berber cultural and historical heritage.


2019 ◽  
Author(s):  
K. Vyse ◽  
L. Faivre ◽  
M. Romich ◽  
M. Pagter ◽  
D. Schubert ◽  
...  

Abstract: Chromatin regulation ensures stable repression of stress-inducible genes under non-stress conditions, as well as transcriptional activation, and memory of that activation, of those genes when plants are exposed to stress. However, there is only limited knowledge on how chromatin genes are regulated at the transcriptional and post-transcriptional level upon stress exposure and relief from stress. We have therefore set up an RT-qPCR-based platform for high-throughput transcriptional profiling of a large set of chromatin genes. We find that the expression of a large fraction of these genes is regulated by cold. In addition, we reveal an induction of several DNA and histone demethylase genes and certain histone variants after plants have been shifted back to ambient temperature (deacclimation), suggesting a role in the memory of cold acclimation. We also re-analyse large-scale transcriptomic datasets for transcriptional regulation and alternative splicing (AS) of chromatin genes, uncovering an unexpected level of regulation of these genes, particularly at the splicing level. This includes several vernalization-regulating genes whose AS results in cold-regulated protein diversity. Overall, we provide a profiling platform for the analysis of chromatin regulatory genes and integrative analyses of their regulation, suggesting a dynamic regulation of key chromatin genes in response to low temperature stress.
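RT-qPCR expression profiling of the kind described above is conventionally quantified with the ΔΔCt method, where expression is normalized to a reference gene and to a control condition. The Ct values below are hypothetical; the formula itself is the standard one.

```python
def fold_change(ct_gene_treat, ct_ref_treat, ct_gene_ctrl, ct_ref_ctrl):
    """ΔΔCt fold change: normalize the gene of interest to a reference
    gene in both conditions, then compare conditions as 2^(-ΔΔCt)."""
    ddct = (ct_gene_treat - ct_ref_treat) - (ct_gene_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for a chromatin gene induced by cold,
# against a stably expressed reference gene (Ct 18 in both conditions).
fc = fold_change(22.0, 18.0, 25.0, 18.0)
assert fc == 8.0   # crossing threshold 3 cycles earlier => 2^3-fold
```

Running such calculations across a large gene panel and several time points (cold, deacclimation) yields the kind of expression matrix the platform is built to produce.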


2018 ◽  
Author(s):  
Yang Xu ◽  
Barbara Claire Malt ◽  
Mahesh Srinivasan

One way that languages are able to communicate a potentially infinite set of ideas through a finite lexicon is by compressing emerging meanings into words, such that over time, individual words come to express multiple, related senses of meaning. We propose that overarching communicative and cognitive pressures have created systematic directionality in how new metaphorical senses have developed from existing word senses over the history of English. Given a large set of pairs of semantic domains, we used computational models to test which domains have been more commonly the starting points (source domains) and which the ending points (target domains) of metaphorical mappings over the past millennium. We found that a compact set of variables, including externality, embodiment, and valence, explain directionality in the majority of about 5000 metaphorical mappings recorded over the past 1100 years. These results provide the first large-scale historical evidence that metaphorical mapping is systematic, and driven by measurable communicative and cognitive principles.
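The directionality claim can be caricatured in a few lines: given per-domain scores on the explanatory variables, the more embodied/external domain is predicted to be the metaphor's source. The domain ratings below are invented for illustration, and a single variable stands in for the paper's full set (externality, embodiment, valence).

```python
# Hypothetical embodiment ratings per semantic domain (0-1 scale);
# a single variable standing in for the paper's full predictor set.
embodiment = {
    "grasp": 0.90, "understand": 0.20,
    "warm": 0.85, "affectionate": 0.15,
}

def predicted_source(domain_a, domain_b):
    """Minimal sketch of the directionality principle: the more
    embodied domain is predicted to be the metaphor source."""
    return max((domain_a, domain_b), key=embodiment.get)

# Historically, physical "grasp" extended to mental "understand",
# and physical "warm" to interpersonal "affectionate".
assert predicted_source("grasp", "understand") == "grasp"
assert predicted_source("affectionate", "warm") == "warm"
```

The study's contribution is showing that a compact set of such variables accounts for the observed direction in most of the ~5000 historical mappings, rather than direction being idiosyncratic.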


2020 ◽  
Author(s):  
Olessia Jouravlev ◽  
Alexander J.E. Kell ◽  
Zachary Mineroff ◽  
A.J. Haskins ◽  
Dima Ayyash ◽  
...  

Abstract: One of the few replicated functional brain differences between individuals with autism spectrum disorders (ASD) and neurotypical (NT) controls is reduced language lateralization. However, most prior reports relied on comparisons of group-level activation maps or functional markers that had not been validated at the individual-subject level, and/or used tasks that do not isolate language processing from other cognitive processes, complicating interpretation. Furthermore, few prior studies have examined functional responses in other functional networks, as needed to determine the selectivity of the effect. Using fMRI, we compared language lateralization between 28 ASD participants and carefully pairwise-matched controls, with language regions defined individually using a well-validated language localizer. ASD participants showed less lateralized responses due to stronger right-hemisphere activations. Further, this effect did not stem from a ubiquitous reduction in lateralization across the brain: ASD participants did not differ from controls in the lateralization of two other large-scale networks, the Theory of Mind network and the Multiple Demand network. Finally, in an exploratory study, we tested whether reduced language lateralization may also be present in NT individuals with high autistic trait load. Indeed, autistic trait load in a large set of NT participants (n=189) was associated with less lateralized language activations. These results suggest that reduced language lateralization is a robust and spatially selective neural marker of autism, present in individuals with ASD, but also in NT individuals with higher genetic liability for ASD, in line with a continuum model of underlying genetic risk.
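Lateralization of this kind is conventionally summarized with a (L − R) / (L + R) index; the abstract does not state the exact metric used, and the activation values below are hypothetical, but the sketch shows how "stronger right-hemisphere activations" translate into a reduced index.

```python
def lateralization_index(left, right):
    """Conventional laterality index: +1 is fully left-lateralized,
    -1 fully right-lateralized, 0 bilateral."""
    return (left - right) / (left + right)

# Hypothetical summed activations in left/right language regions:
nt = lateralization_index(80.0, 20.0)    # typical control
asd = lateralization_index(60.0, 40.0)   # stronger RH response

assert abs(nt - 0.6) < 1e-12 and abs(asd - 0.2) < 1e-12
assert asd < nt   # reduced lateralization, driven by the RH increase
```

Note that the total activation (100 in both hypothetical cases) is unchanged; only its hemispheric balance differs, matching the finding that the effect comes from stronger right-hemisphere responses rather than weaker left-hemisphere ones.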


1994 ◽  
Vol 7 (3) ◽  
pp. 539-561 ◽  
Author(s):  
John Law ◽  
Madeleine Akrich

The Argument: In this paper we explore some of the ways in which a state scientific laboratory (Daresbury SERC) reacted to the rhetoric and forces of the marketplace in the 1980s. We describe the laboratory's attempts to create what we call “good customers” while converting itself into a “good seller” by developing a particular set of costing practices that were closely related to the implementation of a management accounting system. Finally, we consider how Daresbury's response to “market forces” influenced scientific and organizational practice, and argue that the social technologies of governmentality performed by accountancy, but also by scientific and bureaucratic practice, are complex, discursively heterogeneous, and used in context-sensitive ways. This means, or so we suggest, that it is difficult to mount general arguments about “science” and “the market,” and that the use of such large-scale institutions impedes analysis.

