Review of Optimization Dynamically Applied in the Construction and the Application Potential of ICT

2021
Vol 13 (10)
pp. 5478
Author(s):
Boda Liu
Bin Yang
Jianzhuang Xiao
Dayu Zhu
Binghan Zhang
...

Currently, construction projects are becoming more complex and apply more information and communication technologies (ICT), yet few studies use real-time data to dynamically optimize construction. The purpose of this article is to study the current development status of optimization methods applied dynamically in the construction phase and their potential for using real data collected by ICT. This article reviews 72 relevant optimization methods and identifies ICT studies that can provide them with dynamic data. The dynamic triggering mode of each study is first analyzed; then its dynamic way, dynamic data, data resource, optimization object, and method are identified and formulated. The results reveal the great value of dynamic optimization in dealing with the complicated and uncertain contextual conditions in construction. Different dynamic triggering modes have different affinities with real data. The analysis of the ICT articles then shows the great potential of these dynamic optimization methods for applying real data. This paper points out the most practical dynamic mode for engineers or managers to continuously apply optimization methods to solve dynamic problems in construction, and puts forward a scientific question for related researchers: how can ICT be combined with event dynamics or uncertain parameters? Based on this, the research gap in this area is identified and a conceptual solution is proposed.
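
The "dynamic triggering mode" idea can be pictured as a loop that consumes ICT data and re-runs an optimizer when conditions warrant. The Python sketch below is purely illustrative: the sensor feed, the deviation threshold, and the `reoptimize_schedule` stub are assumptions for exposition, not methods taken from the reviewed studies.

```python
# Minimal sketch of an event-triggered dynamic re-optimization loop.
# All names and thresholds here are hypothetical.
import random
import time

def read_progress_from_ict():
    """Stand-in for real-time progress data collected by ICT on site (e.g. RFID, vision)."""
    return {"task_A": random.uniform(0.0, 1.0), "task_B": random.uniform(0.0, 1.0)}

def reoptimize_schedule(observed, planned, tolerance=0.1):
    """Placeholder 'optimizer': report which tasks have fallen behind plan."""
    return [t for t, p in observed.items() if planned[t] - p > tolerance]

planned = {"task_A": 0.5, "task_B": 0.7}
for cycle in range(3):                      # periodic triggering: poll at fixed intervals
    observed = read_progress_from_ict()     # dynamic data from the site
    delayed = reoptimize_schedule(observed, planned)
    if delayed:                             # event triggering: a deviation was detected
        print(f"cycle {cycle}: re-optimizing schedule for {delayed}")
    time.sleep(0.1)
```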

2018
Vol 8 (11)
pp. 2216
Author(s):
Jiahui Jin
Qi An
Wei Zhou
Jiakai Tang
Runqun Xiong

Network bandwidth is a scarce resource in big data environments, so data locality is a fundamental problem for data-parallel frameworks such as Hadoop and Spark. This problem is exacerbated in multicore server-based clusters, where multiple tasks running on the same server compete for the server’s network bandwidth. Existing approaches solve this problem by scheduling computational tasks near the input data and considering the server’s free time, data placements, and data transfer costs. However, such approaches usually set identical values for data transfer costs, even though a multicore server’s data transfer cost increases with the number of data-remote tasks; as a result, they minimize data-processing time ineffectively. As a solution, we propose DynDL (Dynamic Data Locality), a novel data-locality-aware task-scheduling model that handles dynamic data transfer costs for multicore servers. DynDL offers greater flexibility than existing approaches by using a set of non-decreasing functions to evaluate dynamic data transfer costs. We also propose online and offline algorithms (based on DynDL) that minimize data-processing time and adaptively adjust data locality. Although the scheduling problem in DynDL is NP-complete (nondeterministic polynomial-complete), we prove that the offline algorithm runs in quadratic time and generates optimal results for DynDL’s specific uses. Using a series of simulations and real-world executions, we show that our algorithms reduce data-processing time by 30% compared with algorithms that do not consider dynamic data transfer costs. Moreover, they can adaptively adjust data localities based on the server’s free time, data placement, and network bandwidth, and schedule tens of thousands of tasks within sub-second or second latencies.
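
To make the non-decreasing transfer cost idea concrete, here is a toy Python sketch of a greedy placement rule in which a server's remote-transfer cost grows with the number of data-remote tasks it already hosts. The cost function and the placement heuristic are illustrative assumptions, not DynDL's actual online or offline algorithms.

```python
# Toy illustration: dynamic (non-decreasing) data transfer costs per server.
def transfer_cost(num_remote_tasks):
    """Non-decreasing cost: each additional data-remote task congests the link further."""
    return 1.0 + 0.5 * num_remote_tasks

def place_task(servers, input_server_id):
    """Greedily pick the server minimizing queue delay plus (0 if local, else the dynamic cost)."""
    def score(s):
        remote = 0.0 if s["id"] == input_server_id else transfer_cost(s["remote_tasks"])
        return s["queue_delay"] + remote
    best = min(servers, key=score)
    if best["id"] != input_server_id:
        best["remote_tasks"] += 1      # the next remote task on this server will cost more
    best["queue_delay"] += 1.0         # crude model of the server getting busier
    return best["id"]

servers = [{"id": i, "remote_tasks": 0, "queue_delay": 0.0} for i in range(3)]
for data_home in [1, 1, 1, 2]:         # four tasks whose input blocks live on servers 1, 1, 1, 2
    print("task placed on server", place_task(servers, data_home))
```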


2021
Vol 17 (3)
pp. e1008256
Author(s):
Shuonan Chen
Jackson Loper
Xiaoyin Chen
Alex Vaughan
Anthony M. Zador
...  

Modern spatial transcriptomics methods can target thousands of different types of RNA transcripts in a single slice of tissue. Many biological applications demand a high spatial density of transcripts relative to the imaging resolution, leading to partial mixing of transcript rolonies in many voxels; unfortunately, current analysis methods do not perform robustly in this highly mixed setting. Here we develop a new analysis approach, BARcode DEmixing through Non-negative Spatial Regression (BarDensr): we start with a generative model of the physical process that leads to the observed image data and then apply sparse convex optimization methods to estimate the underlying (demixed) rolony densities. We apply BarDensr to simulated and real data and find that it achieves state-of-the-art signal recovery, particularly in densely labeled regions or data with low spatial resolution. Finally, BarDensr is fast and parallelizable. We provide open-source code as well as an implementation for the ‘NeuroCAAS’ cloud platform.
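
The demixing step can be illustrated with a toy non-negative sparse regression: a voxel's observed signal is modeled as a non-negative combination of known barcode signatures, and a sparsity-penalized non-negative least-squares fit recovers the per-barcode densities. The projected-gradient solver below is a minimal sketch under those assumptions, not the BarDensr implementation (which is available as open-source code).

```python
# Toy non-negative sparse regression for demixing one voxel's signal.
import numpy as np

def nonneg_sparse_demix(B, y, lam=0.1, lr=1e-2, iters=2000):
    """Minimize ||B w - y||^2 + lam * sum(w) subject to w >= 0 (L1 = sum for non-negative w)."""
    w = np.zeros(B.shape[1])
    for _ in range(iters):
        grad = 2 * B.T @ (B @ w - y) + lam
        w = np.maximum(w - lr * grad, 0.0)   # gradient step, then projection onto w >= 0
    return w

rng = np.random.default_rng(0)
B = rng.random((20, 5))                          # 5 barcode signatures observed in 20 channels/frames
w_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])     # only two barcodes present in this voxel
y = B @ w_true + 0.01 * rng.standard_normal(20)  # noisy mixed observation
print(np.round(nonneg_sparse_demix(B, y), 2))    # recovers an approximately sparse, non-negative w
```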


Author(s):  
Rizwan Patan ◽  
Rajasekhara Babu M ◽  
Suresh Kallam

A Big Data Stream Computing (BDSC) platform handles real-time data from applications such as risk management, marketing management, and business intelligence. Nowadays, Internet of Things (IoT) deployments are increasing massively in all areas, and these IoT devices generate real-time data for analysis. Existing BDSC platforms are inefficient at handling real-time data streams from IoT devices because such streams are unstructured and have inconstant velocity, which makes them challenging to process. This work proposes a framework that handles real-time data streams through device control techniques to improve performance. The framework includes three layers. The first layer deals with Big Data platforms that handle real data streams based on the area of importance. The second layer is the performance layer, which addresses performance issues such as low response time and energy efficiency. The third layer applies the developed method to an existing BDSC platform. The experimental results show a performance improvement of 20%-30% for real-time data streams from IoT applications.
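
As a purely illustrative sketch of handling an unstructured stream with inconstant velocity, the Python snippet below batches incoming IoT records adaptively, by size or by elapsed time, before handing them to a stream-computing platform. The generator and the `adaptive_batch` helper are hypothetical and are not part of the proposed framework.

```python
# Illustrative adaptive ingestion of an inconstant-velocity IoT stream.
import random
import time

def iot_stream():
    """Simulate an unstructured stream whose arrival rate varies over time."""
    while True:
        time.sleep(random.uniform(0.0, 0.02))
        yield {"sensor": random.choice(["temp", "gps"]), "value": random.random()}

def adaptive_batch(stream, max_batch=50, max_wait=0.1):
    """Emit a batch when it is full OR when max_wait seconds elapse, whichever comes first."""
    batch, start = [], time.monotonic()
    for record in stream:
        batch.append(record)
        if len(batch) >= max_batch or time.monotonic() - start >= max_wait:
            yield batch
            batch, start = [], time.monotonic()

for i, b in enumerate(adaptive_batch(iot_stream())):
    print(f"batch {i}: {len(b)} records")   # batch sizes track the current stream velocity
    if i == 2:
        break
```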


Big Data
2016
pp. 2165-2198
Author(s):  
José Carlos Cavalcanti

Analytics (the discovery and communication of significant patterns in data) of Big Data (basically characterized by large structured and unstructured data volumes, from a variety of sources, at high velocity, i.e., real-time data capture, storage, and analysis), through the use of Cloud Computing (a model of network computing), is becoming the new “ABC” of information and communication technologies (ICTs), with important effects on the generation of new firms and the restructuring of those already established. However, as this chapter argues, successful application of these new ABC technologies and tools depends on two interrelated policy aspects: 1) the use of a proper model that can help one approach the structure and dynamics of the firm, and 2) how the complex trade-off between information technology (IT) and communication technology (CT) costs is handled within, between, and beyond firms, organizations, and institutions.




2016
Vol 28 (4)
pp. 686-715
Author(s):
Kishan Wimalawarne
Ryota Tomioka
Masashi Sugiyama

We theoretically and experimentally investigate tensor-based regression and classification. Our focus is regularization with various tensor norms, including the overlapped trace norm, the latent trace norm, and the scaled latent trace norm. We first give dual optimization methods using the alternating direction method of multipliers, which is computationally efficient when the number of training samples is moderate. We then theoretically derive an excess risk bound for each tensor norm and clarify their behavior. Finally, we perform extensive experiments using simulated and real data and demonstrate the superiority of tensor-based learning methods over vector- and matrix-based learning methods.
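
For reference, the overlapped trace norm used as a regularizer here is the sum, over modes, of the nuclear norms of the tensor's mode-k unfoldings (the latent and scaled latent trace norms instead optimize over additive decompositions). The NumPy sketch below computes that quantity; it is a small illustration of the norm, not the dual ADMM solvers studied in the paper.

```python
# Overlapped trace norm of a tensor: sum of nuclear norms of its mode unfoldings.
import numpy as np

def unfold(tensor, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def overlapped_trace_norm(tensor):
    """Sum over modes of the nuclear norm (sum of singular values) of each unfolding."""
    return sum(
        np.linalg.norm(unfold(tensor, k), ord="nuc") for k in range(tensor.ndim)
    )

W = np.random.default_rng(0).standard_normal((4, 5, 6))
print(round(overlapped_trace_norm(W), 3))
```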


2020
Author(s):
Nikolay Miloshev
Petya Trifonova
Ivan Georgiev
Tania Marinova
Nikolay Dobrev
...  

The National Geo-Information Center (NGIC) is a distributed research infrastructure funded by the National Road Map for Scientific Infrastructure (2017-2023) of Bulgaria. It operates in a variety of disciplines, such as geophysics, geology, seismology, geodesy, oceanology, climatology, and soil science, providing data products and services. NGIC was created as a partnership between four institutes working in the field of Earth observation, namely the National Institute of Geophysics, Geodesy and Geography (NIGGG), the National Institute of Meteorology and Hydrology (NIMH), the Institute of Oceanology (IO), and the Geological Institute (GI), and two institutes competent in ICT, the Institute of Mathematics and Informatics (IMI) and the Institute of Information and Communication Technologies (IICT); the consortium serves as the primary community of data collectors for national geoscience research. Beyond science, NGIC aims to support decision makers in the prevention and protection of the population from natural and anthropogenic risks and disasters.

Individual NGIC partners originated independently and differ from one another in management and disciplinary scope. The conceptual model of the NGIC system architecture is therefore based on a federated structure in which the partners retain their independence and contribute to the development of the common infrastructure through their data and research. The basic conceptual architecture uses both service and microservice concepts and may be altered according to the specifics of the organizational environment and the development goals of the NGIC information system. It consists of three layers: a “Sources” layer containing the providers of Data, Data products, Services, and Software (DDSS); an “Interoperability” layer regulating access, automating the discovery and selection of DDSS, and collecting data from the sources; and an “Integration” layer that produces integrated data products.

The diversity of NGIC’s data, data products, and services is a major strength and of high value to its users, including governmental institutions and agencies, research organizations and universities, private-sector enterprises, the media, and the public. NGIC will pursue collaboration with initiatives, projects, and research infrastructures for Earth observation to enhance access to an integrated global data resource.
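
As a loose illustration of the federated three-layer idea (Sources providing DDSS, an Interoperability layer for discovery and collection, and an Integration layer producing combined products), the sketch below uses hypothetical class and field names that are not part of the NGIC system itself.

```python
# Purely illustrative sketch of a federated Sources -> Interoperability -> Integration flow.
from dataclasses import dataclass

@dataclass
class DDSSRecord:                # "Sources" layer: Data, Data products, Services, Software
    provider: str                # e.g. NIGGG, NIMH, IO, GI
    kind: str                    # "data", "data product", "service", or "software"
    topic: str

class Interoperability:          # regulates access, discovery, and collection from the sources
    def __init__(self, sources):
        self.sources = sources
    def discover(self, topic):
        return [r for source in self.sources for r in source if r.topic == topic]

def integrate(records):          # "Integration" layer: combine DDSS into one data product
    return {"topic": records[0].topic, "providers": sorted({r.provider for r in records})}

niggg = [DDSSRecord("NIGGG", "data", "seismology")]
nimh = [DDSSRecord("NIMH", "data product", "seismology")]
print(integrate(Interoperability([niggg, nimh]).discover("seismology")))
```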


2017
Author(s):
Vincent Moens
Alexandre Zenon

The Drift Diffusion Model (DDM) is a popular model of behaviour that accounts for patterns of accuracy and reaction-time data. In the Full DDM implementation, parameters are allowed to vary from trial to trial, making the model more powerful but also more challenging to fit to behavioural data. Current approaches typically yield poor fitting quality, are computationally expensive, and usually require assuming a constant threshold parameter across trials. Moreover, in most versions of the DDM, the sequence of participants’ choices is considered independent and identically distributed (i.i.d.), a condition often violated in real data. Our contribution to the field is threefold: first, we introduce Variational Bayes as a method to fit the Full DDM. Second, we relax the i.i.d. assumption and propose a data-driven algorithm based on a Recurrent Auto-Encoder (RAE-DDM) that estimates the local posterior probability of the DDM parameters at each trial based on the sequence of parameters and data preceding the current data point. Finally, we extend this algorithm to illustrate that the RAE-DDM provides an accurate modelling framework for regression analysis. An important result of the approach we propose is that inference at the trial level can be achieved efficiently for each and every parameter of the DDM, threshold included. This data-driven approach is highly generic and self-contained, in the sense that no external input (e.g. regressors or physiological measures) is necessary to fit the data. Using simulations, we show that this method outperforms i.i.d.-based approaches (either Markov Chain Monte Carlo or i.i.d.-VB) without making any assumption about the nature of the between-trial correlation of the parameters.
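
For readers unfamiliar with the underlying generative model, the sketch below simulates the basic two-boundary DDM (drift rate, boundary separation, relative starting point, non-decision time). It covers only the basic model, not the Full DDM's trial-to-trial variability or the RAE-DDM inference scheme, and the parameter values are arbitrary.

```python
# Minimal simulation of the two-boundary Drift Diffusion Model.
import numpy as np

def simulate_ddm_trial(v=0.3, a=1.0, z=0.5, t0=0.3, dt=1e-3, sigma=1.0, rng=None):
    """Return (choice, reaction_time): evidence drifts with rate v until it hits 0 or a."""
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0                        # start at relative position z between the boundaries
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t0 + t      # 1 = upper boundary, 0 = lower boundary

rng = np.random.default_rng(1)
trials = [simulate_ddm_trial(rng=rng) for _ in range(200)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy={accuracy:.2f}, mean RT={mean_rt:.2f}s")
```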

