ON THE PERFORMANCE AND TECHNOLOGICAL IMPACT OF ADDING MEMORY CONTROLLERS IN MULTI-CORE PROCESSORS

2010 ◽  
Vol 20 (04) ◽  
pp. 341-357 ◽  
Author(s):  
JOSE CARLOS SANCHO ◽  
DARREN J. KERBYSON ◽  
MICHAEL LANG

The increasing core count of current and future processors poses critical challenges to the memory subsystem, which must efficiently handle concurrent memory requests. The current trend is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the advantages and disadvantages of this approach from both a technological and an application-performance viewpoint. In particular, we explore the trade-off between employing multiple memory channels per memory controller and using multiple memory controllers with fewer channels each. Experiments were conducted on two current state-of-the-art multi-core processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, using the STREAM benchmark and a wide range of production applications. An analytical model of STREAM performance illustrates the diminishing returns obtained when increasing the number of memory channels per memory controller, an effect that is also seen in application performance. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements can be achieved in this scheme: up to 28% when using two memory controllers with one channel each compared with one controller with two memory channels.
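For readers unfamiliar with the benchmark referenced above, the following is a minimal, illustrative Python/NumPy sketch of a STREAM-style triad bandwidth measurement. It is not the STREAM benchmark itself nor the authors' analytical model; the array size, repeat count, and byte accounting are assumptions chosen only to make the idea concrete.

import time
import numpy as np

def triad_bandwidth(n=20_000_000, scalar=3.0, repeats=5):
    """Estimate sustainable memory bandwidth with a STREAM-like triad: a = b + scalar * c."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)   # a = scalar * c
        np.add(a, b, out=a)             # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * n * 8             # read b and c, write a (8-byte doubles)
    return bytes_moved / best / 1e9     # GB/s

if __name__ == "__main__":
    print(f"Triad bandwidth: {triad_bandwidth():.1f} GB/s")

Running this with increasing numbers of concurrent processes (one per core) is the usual way to expose the saturation behaviour that the paper attributes to a limited number of channels per memory controller.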

Author(s):  
Paul S. Addison

Redundancy: it is a word heavy with connotations of lacking usefulness. I often hear that the rationale for not using the continuous wavelet transform (CWT)—even when it appears most appropriate for the problem at hand—is that it is ‘redundant’. Sometimes the conversation ends there, as if this were self-explanatory. However, in the context of the CWT, ‘redundant’ is not a pejorative term; it simply refers to a less compact form used to represent the information within the signal. The benefit of this new form—the CWT—is that it allows intricate structural characteristics of the signal to be made manifest within the transform space, where they are more amenable to study: resolution over redundancy. Once the signal information is in CWT form, a range of powerful analysis methods can be employed for its extraction, interpretation and/or manipulation. This theme issue is intended to provide the reader with an overview of the current state of the art of CWT analysis methods from across a wide range of numerate disciplines, including fluid dynamics, structural mechanics, geophysics, medicine, astronomy and finance. This article is part of the theme issue ‘Redundancy rules: the continuous wavelet transform comes of age’.
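To make the redundancy point concrete, the sketch below computes a CWT of a one-dimensional signal by direct convolution with scaled Morlet wavelets, using only NumPy. The wavelet definition, scale grid, and normalization are illustrative assumptions and are not taken from the theme issue; the point is simply that an N-sample signal becomes an (N x number-of-scales) time-scale plane.

import numpy as np

def morlet(t, scale, omega0=6.0):
    """Complex Morlet wavelet at a given scale (simplified; admissibility correction omitted)."""
    x = t / scale
    return np.exp(1j * omega0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt(signal, scales, dt=1.0):
    """Continuous wavelet transform by direct convolution: one row of coefficients per scale."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        coeffs[i] = np.convolve(signal, np.conj(morlet(t, s))[::-1], mode="same") * dt
    return coeffs

# A chirp analysed over 64 scales: the 1024-sample signal becomes a 64 x 1024
# time-scale plane, the 'redundant' but structurally revealing representation.
t = np.linspace(0, 10, 1024)
signal = np.sin(2 * np.pi * (1 + 0.5 * t) * t)
W = cwt(signal, scales=np.geomspace(0.08, 1.0, 64), dt=t[1] - t[0])
print(W.shape)  # (64, 1024)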


2019 ◽  
Author(s):  
Mehrdad Shoeiby ◽  
Mohammad Ali Armin ◽  
Sadegh Aliakbarian ◽  
Saeed Anwar ◽  
Lars Petersson

Advances in the design of multi-spectral cameras have led to great interest in a wide range of applications, from astronomy to autonomous driving. However, such cameras inherently suffer from a trade-off between spatial and spectral resolution. In this paper, we propose to address this limitation by introducing a novel method to carry out super-resolution on raw mosaic images, multi-spectral or RGB Bayer, captured by modern real-time single-shot mosaic sensors. To this end, we design a deep super-resolution architecture that benefits from a sequential feature pyramid along the depth of the network. This is achieved by utilizing a convolutional LSTM (ConvLSTM) to learn the inter-dependencies between features at different receptive fields. Additionally, by investigating the effect of different attention mechanisms in our framework, we show that a ConvLSTM-inspired module is able to provide superior attention in our context. Our extensive experiments and analyses show that our approach yields significant super-resolution quality, outperforming current state-of-the-art mosaic super-resolution methods on both Bayer and multi-spectral images. Additionally, to the best of our knowledge, our method is the first specialized method to super-resolve mosaic images, whether multi-spectral or Bayer.
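The ConvLSTM module at the heart of the proposed feature pyramid is not specified in the abstract; the following is a generic, minimal ConvLSTM cell in PyTorch for readers unfamiliar with the construct. It is an illustrative sketch under assumed layer sizes, not the authors' architecture.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all gates come from one convolution over [input, hidden]."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state=None):
        if state is None:
            b, _, h, w = x.shape
            zeros = x.new_zeros(b, self.hidden_channels, h, w)
            state = (zeros, zeros)
        h_prev, c_prev = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Example: propagate feature maps from successive network depths through the cell,
# letting it accumulate inter-dependencies across receptive fields.
cell = ConvLSTMCell(in_channels=64, hidden_channels=64)
state = None
for fmap in [torch.randn(1, 64, 32, 32) for _ in range(3)]:
    out, state = cell(fmap, state)
print(out.shape)  # torch.Size([1, 64, 32, 32])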


2020 ◽  
Author(s):  
Ali Fallah ◽  
Sungmin O ◽  
Rene Orth

Abstract. Precipitation is a crucial variable for hydro-meteorological applications. Unfortunately, rain gauge measurements are sparse and unevenly distributed, which substantially hampers the use of in-situ precipitation data in many regions of the world. The increasing availability of high-resolution gridded precipitation products presents a valuable alternative, especially over gauge-sparse regions. Nevertheless, uncertainties and corresponding differences across products can limit the applicability of these data. This study examines the usefulness of current state-of-the-art precipitation datasets in hydrological modelling. For this purpose, we force a conceptual hydrological model with multiple precipitation datasets in > 200 European catchments. We consider a wide range of precipitation products, which are generated via (1) interpolation of gauge measurements (E-OBS and GPCC V.2018), (2) combination of multiple sources (MSWEP V2) and (3) data assimilation into reanalysis models (ERA-Interim, ERA5, and CFSR). For each catchment, runoff and evapotranspiration simulations are obtained by forcing the model with the various precipitation products. Evaluation is done at the monthly time scale during the period of 1984–2007. We find that simulated runoff values are highly dependent on the accuracy of precipitation inputs, and thus show significant differences between the simulations. By contrast, simulated evapotranspiration is generally much less influenced. The results are further analysed with respect to different hydro-climatic regimes. We find that the impact of precipitation uncertainty on simulated runoff increases towards wetter regions, while the opposite is observed in the case of evapotranspiration. Finally, we perform an indirect performance evaluation of the precipitation datasets by comparing the runoff simulations with streamflow observations. Here, E-OBS yields the best agreement, while ERA5, GPCC V.2018 and MSWEP V2 also show good performance. In summary, our findings highlight a climate-dependent propagation of precipitation uncertainty through the water cycle; while runoff is strongly impacted in comparatively wet regions such as Central Europe, the implications for evapotranspiration increase towards drier regions.
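For readers unfamiliar with conceptual hydrological models, the sketch below shows a single-bucket water balance driven by two diverging precipitation series. It is purely illustrative and is not the model used in the study; the storage capacity, runoff coefficient, and evapotranspiration formulation are all assumptions.

import numpy as np

def bucket_model(precip, pet, s_max=150.0, k=0.05, s0=50.0):
    """Single-storage water balance: ET scales with storage, runoff is linear in storage."""
    storage = s0
    runoff, et = [], []
    for p, e in zip(precip, pet):
        et_t = e * storage / s_max          # actual ET limited by available water
        q_t = k * storage                   # linear-reservoir runoff
        storage = min(max(storage + p - et_t - q_t, 0.0), s_max)  # clamp the toy storage
        runoff.append(q_t)
        et.append(et_t)
    return np.array(runoff), np.array(et)

# Forcing the same model with two disagreeing precipitation products shows how
# input uncertainty propagates into simulated runoff.
rng = np.random.default_rng(0)
p1 = rng.gamma(2.0, 2.0, 365)               # daily precipitation, product A (mm)
p2 = p1 * rng.uniform(0.8, 1.2, 365)        # product B: +/-20% disagreement
pet = np.full(365, 2.5)                     # constant potential ET (mm/day)
q1, _ = bucket_model(p1, pet)
q2, _ = bucket_model(p2, pet)
print(f"Mean runoff difference: {abs(q1.mean() - q2.mean()):.2f} mm/day")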


2021 ◽  
Vol 7 ◽  
pp. e495
Author(s):  
Saleh Albahli ◽  
Hafiz Tayyab Rauf ◽  
Abdulelah Algosaibi ◽  
Valentina Emilia Balas

Artificial intelligence (AI) has played a significant role in image analysis and feature extraction, applied to detect and diagnose a wide range of chest-related diseases. Although several researchers have used current state-of-the-art approaches and have produced impressive chest-related clinical outcomes, a technique may offer limited benefit if it detects only one type of disease while leaving the others unidentified. Attempts to identify multiple chest-related diseases have been hampered by insufficient and imbalanced data. This research provides a significant contribution to the healthcare industry and the research community by proposing synthetic data augmentation for three deep Convolutional Neural Network (CNN) architectures for the detection of 14 chest-related diseases. The employed models are DenseNet121, InceptionResNetV2, and ResNet152V2; after training and validation, an average ROC-AUC score of 0.80 was obtained, which is competitive with previous models trained for multi-class classification to detect anomalies in X-ray images. This research illustrates how the proposed model applies state-of-the-art deep neural networks to classify 14 chest-related diseases with better accuracy.
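As a rough illustration of the general setup, the sketch below adapts one of the employed backbones (DenseNet121) to 14-label chest X-ray classification using torchvision. It is a hypothetical configuration, not the authors' training code, augmentation pipeline, or hyperparameters.

import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASES = 14

# Pretrained DenseNet121 with its classifier replaced by a 14-way multi-label head.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_DISEASES)

criterion = nn.BCEWithLogitsLoss()          # one independent logit per disease label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of chest X-rays.
images = torch.randn(8, 3, 224, 224)        # batch of 8 RGB-replicated X-ray crops
labels = torch.randint(0, 2, (8, NUM_DISEASES)).float()

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.4f}")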


Author(s):  
Pushpak Bhattacharyya ◽  
Mitesh Khapra

This chapter discusses the basic concepts of Word Sense Disambiguation (WSD) and the approaches to solving this problem. Both general-purpose WSD and domain-specific WSD are presented. The first part of the discussion focuses on existing approaches for WSD, including knowledge-based, supervised, semi-supervised, unsupervised, hybrid, and bilingual approaches. The accuracy of general-purpose WSD currently seems to be pegged at around 65%. This has motivated investigations into domain-specific WSD, which is the current trend in the field. In the latter part of the chapter, we present a greedy, neural-network-inspired algorithm for domain-specific WSD and compare its performance with other state-of-the-art algorithms for WSD. Our experiments suggest that for domain-specific WSD, simply selecting the most frequent sense of a word does as well as any state-of-the-art algorithm.
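The most-frequent-sense baseline mentioned in the closing sentence is simple to reproduce; a minimal sketch using NLTK's WordNet interface is shown below. WordNet lists a word's synsets roughly in order of corpus frequency, so the first synset serves as the baseline prediction. This illustrates only the baseline, not the chapter's greedy, neural-network-inspired algorithm.

from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def most_frequent_sense(word, pos=None):
    """Most-frequent-sense baseline: return the first (most common) WordNet synset."""
    synsets = wn.synsets(word, pos=pos)
    return synsets[0] if synsets else None

sense = most_frequent_sense("bank", pos=wn.NOUN)
if sense is not None:
    print(sense.name())        # e.g. 'bank.n.01'
    print(sense.definition())  # gloss of the predicted sense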


2020 ◽  
Vol 39 (2) ◽  
pp. 2249-2261
Author(s):  
Antonio Hernández-Illera ◽  
Miguel A. Martínez-Prieto ◽  
Javier D. Fernández ◽  
Antonio Fariña

RDF self-indexes compress an RDF collection and provide efficient access to the data without prior decompression (via the so-called SPARQL triple patterns). HDT is one of the reference solutions in this scenario, with several applications that lower the barrier to both publication and consumption of Big Semantic Data. However, the simple design of HDT takes a compromise position between compression effectiveness and retrieval speed. In particular, it supports scan and subject-based queries, but it requires additional indexes to resolve predicate- and object-based SPARQL triple patterns. A recent variant, HDT++, improves HDT compression ratios, but it does not retain the original HDT retrieval capabilities. In this article, we extend HDT++ with additional indexes to support full SPARQL triple pattern resolution with a lower memory footprint than the original indexed HDT (called HDT-FoQ). Our evaluation shows that the resulting structure, iHDT++, requires 70-85% of the original HDT-FoQ space (and only 48-72% for an HDT Community variant). In addition, iHDT++ shows significant performance improvements (up to one order of magnitude) for most triple pattern queries, being competitive with state-of-the-art RDF self-indexes.
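To make the notion of SPARQL triple pattern resolution concrete, the sketch below resolves patterns against a toy in-memory triple store with per-component lookups. It is only a conceptual illustration of the access paths being discussed, not of HDT, HDT++, or iHDT++ themselves; the class and data are invented for the example.

from collections import defaultdict

class ToyTripleStore:
    """Naive triple store with subject/predicate/object lookups for the eight triple patterns."""
    def __init__(self, triples):
        self.triples = list(triples)
        self.by_s, self.by_p, self.by_o = defaultdict(list), defaultdict(list), defaultdict(list)
        for t in self.triples:
            s, p, o = t
            self.by_s[s].append(t)
            self.by_p[p].append(t)
            self.by_o[o].append(t)

    def match(self, s=None, p=None, o=None):
        """None acts as a variable; a bound term narrows the candidates before filtering."""
        if s is not None:
            candidates = self.by_s[s]
        elif o is not None:
            candidates = self.by_o[o]
        elif p is not None:
            candidates = self.by_p[p]
        else:
            candidates = self.triples       # pattern (?s ?p ?o): full scan
        return [t for t in candidates
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = ToyTripleStore([
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:name", '"Bob"'),
])
print(store.match(p="foaf:name"))           # predicate-based pattern (?s foaf:name ?o)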


2017 ◽  
Vol 243 (3) ◽  
pp. 291-299 ◽  
Author(s):  
Daniel F Carr ◽  
Munir Pirmohamed

Adverse drug reactions can be caused by a wide range of therapeutics. Adverse drug reactions affect many bodily organ systems and vary widely in severity. Milder adverse drug reactions often resolve quickly following withdrawal of the causal drug or sometimes after dose reduction. Some adverse drug reactions are severe and lead to significant organ/tissue injury, which can be fatal. Adverse drug reactions also represent a financial burden to both healthcare providers and the pharmaceutical industry. Thus, a number of stakeholders would benefit from the development of new, robust biomarkers for the prediction, diagnosis, and prognostication of adverse drug reactions. There has been significant recent progress in identifying predictive genomic biomarkers with the potential to be used in clinical settings to reduce the burden of adverse drug reactions. These have included biomarkers that can be used to alter drug dose (for example, thiopurine methyltransferase (TPMT) and azathioprine dose) and drug choice. The latter have in particular included human leukocyte antigen (HLA) biomarkers, which identify susceptibility to immune-mediated injuries to major organs such as skin, liver, and bone marrow from a variety of drugs. This review covers the current state of the art with regard to genomic adverse drug reaction biomarkers. We also review circulating biomarkers that have the potential to be used for both diagnosis and prognosis, and that have the added advantage of providing mechanistic information. In the future, we will not be relying on single biomarkers (genomic/non-genomic), but on multiple biomarker panels, integrated through the application of different omics technologies, which will provide information on predisposition, early diagnosis, prognosis, and mechanisms. Impact statement
• Genetic and circulating biomarkers present significant opportunities to personalize patient therapy to minimize the risk of adverse drug reactions. ADRs are a significant health issue and represent a significant burden to patients, healthcare providers, and the pharmaceutical industry.
• This review details the current state of the art in biomarkers of ADRs (both genetic and circulating). There is still significant variability in patient response which cannot be explained by current knowledge of genetic risk factors for ADRs; however, we discuss how specific advances in genomics have the potential to yield better and more predictive models.
• Many currently clinically utilized circulating biomarkers of tissue injury are valid biomarkers for a number of ADRs. However, they often give little insight into the specific cell or tissue subtype which may be affected. Emerging circulating biomarkers with the potential to provide greater information on the etiology/pathophysiology of ADRs are described.


2020 ◽  
Vol 6 (3) ◽  
pp. 591-601
Author(s):  
Ausamah Al Houri ◽  
Ahed Habib ◽  
Ahmed Elzokra ◽  
Maan Habib

The tensile strength of soil is an important parameter in many civil engineering applications. It is related to a wide range of cracking problems, especially in slopes, embankment dams, retaining walls, and landfills. Although tensile strength is usually presumed to be zero or negligible, its effect on erosion and crack development in soil is significant. Thus, several techniques and devices have been introduced to study the tensile strength and behavior of soil. These testing methods are classified into direct and indirect approaches depending on the loading conditions. The direct techniques, including the c-shaped mold and 8-shaped mold, are in general complicated tests that require high accuracy, as they are based on applying a uniaxial tension load directly to the specimen. On the other hand, the indirect tensile tests, such as the Brazilian, flexure beam, double punch and hollow cylinder tests, provide easier ways to assess the tensile strength of soil under controlled conditions. Although there are many studies on this topic, the current state of the art lacks a detailed article that reviews these methodologies. Therefore, this paper is intended to summarize and compare the available tests for investigating the tensile behavior of soils.
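As one concrete example of the indirect methods listed above, the splitting (Brazilian) test estimates the tensile strength from the peak diametral load applied to a cylindrical specimen. The commonly used elastic solution is shown below; the notation here (P is the failure load, D the specimen diameter, t its thickness) is assumed for the illustration rather than quoted from the paper:

\sigma_t = \frac{2P}{\pi D t}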


Author(s):  
Vladimir Frolov ◽  
Alexey Gennadievich Voloboy ◽  
Sergey Valentinovich Ershov ◽  
Vladimir Alexandrovich Galaktionov

Modern realistic computer graphics are based on light transport simulation. Here, one of the principal and most computationally demanding tasks is calculating global illumination, i.e. the distribution of light in a virtual scene, taking into account multiple reflections and scattering of light and all kinds of its interaction with the objects in the scene. Hundreds of publications, describing dozens of methods, are devoted to this problem. In this state-of-the-art review, we would like not only to list and briefly describe these methods, but also to give a “map” of existing works that allows the reader to navigate them, understand their advantages and disadvantages, and thereby choose the right method for themselves. Particular attention is paid to such characteristics of the methods as robustness and universality with respect to the underlying mathematical models, the transparency of method verification, the possibility of efficient implementation on the GPU, and the restrictions imposed on the scene or on the illumination phenomena. In contrast to existing survey papers, we analyze not only the efficiency of the methods but also their limitations and the complexity of their software implementation. In addition, we provide the results of our own numerical experiments with various methods, which serve as illustrations for the conclusions.
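The global illumination problem that the surveyed methods all approximate is commonly stated as the rendering equation, reproduced below in standard notation for reference (not quoted from the article itself): the outgoing radiance at a surface point equals the emitted radiance plus the incident radiance reflected by the BRDF, integrated over the hemisphere of incoming directions.

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i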


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5165
Author(s):  
Chen Dong ◽  
Yi Xu ◽  
Ximeng Liu ◽  
Fan Zhang ◽  
Guorong He ◽  
...  

With the diverse and wide-ranging applications of integrated circuits (ICs) and the development of Cyber-Physical Systems (CPS), more and more third-party manufacturers are involved in the manufacturing of ICs. Unfortunately, like software, hardware can also be subjected to malicious attacks. Untrusted outsourced manufacturing tools and intellectual property (IP) cores may introduce enormous risks into highly integrated designs. Owing to this manufacturing model, malicious circuits (known as Hardware Trojans, HTs) can be implanted during most design and manufacturing stages of an IC, causing changes of functionality, leakage of information, or even denial of service (DoS), and so on. In this paper, a survey of HTs is presented, which describes the threats to chips as well as the state-of-the-art prevention and detection techniques. Starting from the introduction of HT structures, recent research on HTs in the academic community is compiled and a comprehensive classification of HTs is proposed. The state-of-the-art HT protection techniques, with their advantages and disadvantages, are further analyzed. Finally, the development trends in hardware security are highlighted.

