Using a distributed deep learning algorithm for analyzing big data in smart cities

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mohammed Anouar Naoui ◽  
Brahim Lejdel ◽  
Mouloud Ayad ◽  
Abdelfattah Amamra ◽  
Okba Kazar

Purpose The purpose of this paper is to propose a distributed deep learning architecture for smart cities in big data systems. Design/methodology/approach We propose a multilayer architecture to describe distributed deep learning for smart cities in big data systems. The components of our system are the Smart city layer, the big data layer, and the deep learning layer. The Smart city layer is responsible for the Smart city components, their Internet of things, sensors and effectors, and their integration in the system; the big data layer concerns data characteristics and their distribution over the system. The deep learning layer is the model of our system and is responsible for data analysis. Findings We apply our proposed architecture to a Smart environment and to Smart energy. For the Smart environment, we study toluene forecasting in the Madrid Smart city. For Smart energy, we study wind energy forecasting in Australia. Our proposed architecture can reduce execution time and improve the deep learning model, such as Long Short-Term Memory (LSTM). Research limitations/implications This research needs the application of other deep learning models, such as the convolutional neural network and the autoencoder. Practical implications The findings of the research will be helpful in Smart city architecture. They can provide a clear view of a Smart city, its data storage, and its data analysis. Toluene forecasting in a Smart environment can help the decision-maker ensure environmental safety. The Smart energy part of our proposed model can give a clear prediction of power generation. Originality/value The findings of this study are expected to contribute valuable information to decision-makers for a better understanding of the keys to Smart city architecture and its relation to data storage, processing, and data analysis.
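To make the deep learning layer concrete, here is a minimal sketch of a univariate LSTM forecaster of the kind the abstract mentions; it is an illustration under assumptions (the Keras API, a synthetic stand-in series, 24-step windows), not the authors' implementation.

# Minimal univariate LSTM forecaster, assuming hourly readings in a
# 1-D numpy array `series`; hyperparameters are illustrative only.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, window=24):
    # Slide a fixed-length window over the series: X holds the past
    # `window` readings, y the reading to predict next.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y  # LSTM expects (samples, steps, features)

series = np.sin(np.linspace(0, 50, 2000))  # stand-in for sensor data
X, y = make_windows(series)

model = Sequential([
    LSTM(32, input_shape=(X.shape[1], 1)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

next_value = model.predict(X[-1:])  # one-step-ahead forecast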

Big data is one of the most influential technologies of the modern era. However, in order to support the maturity of big data systems, the development and sustenance of heterogeneous environments is required. This, in turn, requires the integration of technologies as well as concepts. Computing and storage are the two core components of any big data system. With that said, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the facet of big data file formats into the picture. This paper classifies available big data file formats into five categories, namely text-based, row-based, column-based, in-memory and data storage services. It also compares the advantages, shortcomings and possible use cases of available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Lastly, it provides a discussion of the tradeoffs that must be considered when choosing a file format for a big data system, providing a framework for the creation of file format selection criteria.
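To make the text-based versus column-based tradeoff tangible, a small sketch using pandas with pyarrow is given below; the example table, file names, and choice of libraries are assumptions, not something taken from the paper.

# Write the same table as a text-based (CSV) and a column-based
# (Parquet) file; Parquet stores a schema and compresses by column,
# which typically favours analytical scans over row-wise reads.
import pandas as pd

df = pd.DataFrame({
    "sensor_id": ["s1", "s2", "s3"] * 1000,
    "reading": [21.5, 19.8, 22.1] * 1000,
})

df.to_csv("readings.csv", index=False)  # text-based: human-readable, no schema
df.to_parquet("readings.parquet")       # column-based: typed, compressed (needs pyarrow)

# Column pruning: a Parquet reader can load a single column without
# scanning the whole file, unlike CSV.
readings = pd.read_parquet("readings.parquet", columns=["reading"])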


2018 ◽  
Vol 210 ◽  
pp. 04042
Author(s):  
Ammar Alhaj Ali ◽  
Pavel Varacha ◽  
Said Krayem ◽  
Roman Jasek ◽  
Petr Zacek ◽  
...  

Nowadays, a wide set of systems and applications, especially in high-performance computing, depends on distributed environments to process and analyze huge amounts of data. As we know, the amount of data increases enormously, and the goal of providing and developing efficient, scalable and reliable storage solutions has become one of the major issues for scientific computing. The storage solution used by big data systems is the Distributed File System (DFS), where a DFS is used to build a hierarchical and unified view of multiple file servers and shares on the network. In this paper we present the Hadoop Distributed File System (HDFS) as the DFS in big data systems, and we present Event-B as a formal method that can be used in modeling. Event-B is a mature formal method which has been widely used in a number of industry projects in a number of domains, such as automotive, transportation, space, business information, medical devices and so on. We also propose using Rodin as the modeling tool for Event-B, which integrates modeling and proving; in addition, the Rodin platform is open source, so it supports a large number of plug-in tools.
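As a concrete illustration of the unified HDFS namespace described above, the sketch below writes and reads a file through the WebHDFS interface using the third-party Python hdfs package; the namenode address, user name and paths are assumptions.

# Minimal HDFS round trip over WebHDFS, assuming a namenode at
# namenode:9870 and the `hdfs` Python package (pip install hdfs).
from hdfs import InsecureClient

client = InsecureClient("http://namenode:9870", user="hadoop")

# HDFS presents a single hierarchical namespace even though file
# blocks are spread over many datanodes.
client.makedirs("/data/events")
client.write("/data/events/log.txt", data=b"sensor-42,21.5\n", overwrite=True)

with client.read("/data/events/log.txt") as reader:
    print(reader.read())

print(client.list("/data/events"))  # unified directory view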


2017 ◽  
Vol 37 (1) ◽  
pp. 75-104 ◽  
Author(s):  
Rashid Mehmood ◽  
Royston Meriton ◽  
Gary Graham ◽  
Patrick Hennelly ◽  
Mukesh Kumar

Purpose The purpose of this paper is to advance knowledge of the transformative potential of big data on city-based transport models. The central question guiding this paper is: how could big data transform smart city transport operations? In answering this question the authors present initial results from a Markov study. However, the authors also suggest caution about the transformation potential of big data and highlight the risks of city and organizational adoption. A theoretical framework is presented together with an associated scenario which guides the development of a Markov model. Design/methodology/approach A model with several scenarios is developed to explore a theoretical framework focussed on matching the transport demands (of people and freight mobility) with city transport service provision using big data. This model was designed to illustrate how sharing transport load (and capacity) in a smart city can improve efficiencies in meeting demand for city services. Findings This modelling study is an initial preliminary stage of the investigation into how big data could be used to redefine and enable new operational models. The study provides new understanding about load sharing and optimization in a smart city context. Basically, the authors demonstrate how big data could be used to improve transport efficiency and lower externalities in a smart city, and further how improvement could take place through a car-free city environment, autonomous vehicles and shared resource capacity among providers. Research limitations/implications The research relied on a Markov model and the numerical solution of its steady-state probability vector to illustrate the transformation of transport operations management (OM) in the future city context. More in-depth analysis and more discrete modelling are clearly needed to assist in the implementation of big data initiatives and facilitate new innovations in OM. The work complements and extends that of Setia and Patel (2013), who theoretically link information system design to operational absorptive capacity capabilities. Practical implications The study implies that transport operations would actually need to be re-organized so as to lower the CO2 footprint. The logistic aspects could be seen as a move from individual firms optimizing their own transportation supply to a shared collaborative load and resource system. Such ideas are radical changes driven by, or leading to, more decentralized rather than centralized transport solutions (Caplice, 2013). Social implications The growth of cities and urban areas in the twenty-first century has put more pressure on resources and the conditions of urban life. This paper is an initial first step in building theory, knowledge and critical understanding of the social implications posed by the growth of cities and the role that big data and smart cities could play in developing a resilient and sustainable city transport system. Originality/value Despite the importance of OM to big data implementation, for both practitioners and researchers, we have yet to see a systematic analysis of its implementation and its absorptive capacity contribution to building capabilities, at either the city system or organizational level. As such, the Markov model makes a preliminary contribution to the literature integrating big data capabilities with OM capabilities and the resulting improvements in system absorptive capacity.
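The steady-state probability vector the abstract refers to is the left eigenvector of the chain's transition matrix for eigenvalue 1. The sketch below computes it with numpy for an invented three-state transition matrix, purely as a generic illustration of the numerical solution, not the paper's actual model.

# Steady-state distribution pi of a Markov chain: solve pi P = pi
# with sum(pi) = 1. The 3x3 transition matrix here is illustrative
# only (e.g. three transport load states: low, medium, high).
import numpy as np

P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# pi is the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()  # normalize to a probability vector

print(pi)            # long-run share of time in each load state
print(pi @ P - pi)   # ~0: confirms pi P = pi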


2020 ◽  
Vol 8 ◽  
pp. 65-70
Author(s):  
Oleksii Duda ◽  
Liliana Dzhydzhora ◽  
Oleksandr Matsiuk ◽  
Andrii Stanko ◽  
...  

The concept of a multi-level mobile personalized system for fighting viral diseases, in particular Covid-19, was developed. Integrating Internet of Things, Cloud Computing and Big Data technologies, the system combines two architectures: client-server and publish-subscribe. The advantage of the system is permanent help with viral diseases, namely at the communication, information, and medical stages. The smart city concept in the context of viral disease control focuses on the application of Big Data analysis methods and the improvement of forecasting procedures and emergency treatment protocols. Using different technologies, the cloud server stores the positioning data obtained from different devices, and the application accesses an API to display and analyze the positioning data in real time. Due to the combination of technologies, internal and external positioning can be used with a certain degree of accuracy, which is useful for various medical and emergency situations, as well as for analysis and subsequent processing by other smart city information systems. The result of this investigation is the development of a conceptual model of a multi-level mobile personalized health status monitoring system used for intelligent data analysis, prediction, treatment and prevention of viral diseases such as Covid-19 in the modern “smart city”.
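A minimal sketch of the publish-subscribe half of such a system, using the paho-mqtt client (version 2.x) to push device positions to a broker; the broker address, topic name and payload format are assumptions, and a cloud-side subscriber to the same topic would store the readings for analysis.

# Publish a device's position to an MQTT broker.
# Assumes a broker at broker.example.org:1883 (pip install paho-mqtt).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.example.org", 1883)
client.loop_start()  # background network loop so QoS 1 can complete

position = {"device_id": "phone-17", "lat": 49.5535, "lon": 25.5948}
info = client.publish("city/positions", json.dumps(position), qos=1)
info.wait_for_publish()  # block until the broker acknowledges

client.loop_stop()
client.disconnect()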


2020 ◽  
Vol 12 (11) ◽  
pp. 190
Author(s):  
Elarbi Badidi ◽  
Zineb Mahrez ◽  
Essaid Sabir

Demographic growth in urban areas means that modern cities face challenges in ensuring a steady supply of water and electricity, smart transport, livable space, better health services, and citizens’ safety. Advances in sensing, communication, and digital technologies promise to mitigate these challenges. Hence, many smart cities have taken a new step in moving away from internal information technology (IT) infrastructure to utility-supplied IT delivered over the Internet. The benefit of this move is to manage the vast amounts of data generated by the various city systems, including water and electricity systems, the waste management system, transportation system, public space management systems, health and education systems, and many more. Furthermore, many smart city applications are time-sensitive and need to quickly analyze data to react promptly to the various events occurring in a city. The new and emerging paradigms of edge and fog computing promise to address big data storage and analysis in the field of smart cities. Here, we review existing service delivery models in smart cities and present our perspective on adopting these two emerging paradigms. We specifically describe the design of a fog-based data pipeline to address the issues of latency and network bandwidth required by time-sensitive smart city applications.
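The idea of a fog-based data pipeline, filtering and aggregating at the network edge so that only summaries cross the wide-area link, can be sketched as follows; the window size, alert threshold and forwarding callbacks are assumptions, not the design from the paper.

# Fog-node sketch: aggregate raw sensor readings locally and forward
# only per-window summaries (plus urgent alerts) to the cloud, keeping
# latency-sensitive decisions off the WAN round trip.
from statistics import mean

WINDOW = 10           # readings per summary (assumed)
ALERT_THRESHOLD = 80  # forward immediately above this value (assumed)

def fog_pipeline(readings, send_to_cloud, raise_alert):
    buffer = []
    for value in readings:
        if value > ALERT_THRESHOLD:
            raise_alert(value)  # time-sensitive: handled at the edge
        buffer.append(value)
        if len(buffer) == WINDOW:
            send_to_cloud({"mean": mean(buffer), "max": max(buffer)})
            buffer.clear()      # only the summary crosses the WAN

# Example wiring with print stand-ins for the network calls:
fog_pipeline(
    [42, 55, 81, 40] * 5,
    send_to_cloud=lambda s: print("cloud <-", s),
    raise_alert=lambda v: print("ALERT:", v),
)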


Author(s):  
Sylva Girtelschmid ◽  
Matthias Steinbauer ◽  
Vikash Kumar ◽  
Anna Fensel ◽  
Gabriele Kotsis

Purpose – The purpose of this article is to propose and evaluate a novel system architecture for Smart City applications which uses ontology reasoning and a distributed stream processing framework on the cloud. In the domain of Smart City, methodologies of semantic modeling and automated inference are often applied. However, semantic models often face performance problems when applied at large scale. Design/methodology/approach – The problem domain is addressed by using methods from Big Data processing in combination with semantic models. The architecture is designed in such a way that traditional semantic models and rule engines can still be used for the Smart City model. However, sensor data arising in such Smart Cities are pre-processed by a Big Data streaming platform to lower the workload to be processed by the rule engine. Findings – By creating a real-world implementation of the proposed architecture and running simulations of Smart Cities of different sizes on top of this implementation, the authors found that the combination of Big Data streaming platforms with semantic reasoning is a valid approach to the problem. Research limitations/implications – In this article, real-world sensor data from only two buildings were extrapolated for the simulations. Obviously, real-world scenarios will have a more complex set of sensor input values, which needs to be addressed in future work. Originality/value – The simulations show that merely using a streaming platform as a buffer for sensor input values already increases the sensor data throughput and that by applying intelligent filtering in the streaming platform, the actual number of rule executions can be limited to a minimum.
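The "intelligent filtering" step, dropping readings that would not change the rule engine's conclusions, can be illustrated with a simple change-detection filter; the dead-band value and the per-sensor last-seen state are assumptions about what such a filter might look like, not the authors' implementation.

# Streaming pre-filter sketch: forward a sensor reading to the rule
# engine only if it differs from that sensor's last forwarded value
# by more than a dead-band, collapsing steady streams to a trickle.
DEAD_BAND = 0.5  # minimum change worth re-evaluating rules for (assumed)

last_seen = {}

def should_forward(sensor_id, value):
    prev = last_seen.get(sensor_id)
    if prev is not None and abs(value - prev) <= DEAD_BAND:
        return False  # no meaningful change: skip rule execution
    last_seen[sensor_id] = value
    return True

stream = [("t1", 21.0), ("t1", 21.2), ("t1", 23.0), ("t2", 19.0), ("t2", 19.1)]
forwarded = [(s, v) for s, v in stream if should_forward(s, v)]
print(forwarded)  # [('t1', 21.0), ('t1', 23.0), ('t2', 19.0)]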


2017 ◽  
Vol 21 (1) ◽  
pp. 92-112 ◽  
Author(s):  
Helen N. Rothberg ◽  
G. Scott Erickson

Purpose This paper aims to bring together existing theory from knowledge management (KM), competitive intelligence (CI) and big data analytics to develop a more comprehensive view of the full range of intangible assets (data, information, knowledge and intelligence). By doing so, the interactions of the intangibles are better understood and recommendations can be made for the appropriate structure of big data systems in different circumstances. Metrics are also applied to illustrate how one can identify and understand what these different circumstances might look like. Design/methodology/approach The approach is chiefly conceptual, combining theory from multiple disciplines enhanced with practical applications. Illustrative data drawn from other empirical work are used to ground some of the concepts. Findings Theory suggests that KM theory is particularly useful in guiding big data system installations that focus primarily on the transfer of data/information. For big data systems focused on analytical insights, CI theory might be a better match, as the system structures are actually quite similar. Practical implications Though the guidelines are general, practitioners should be able to evaluate their own situations and perhaps make better decisions about the direction of their big data systems. One can make the case that all the disciplines have something to add to improving how intangibles are deployed and applied, and that improving coordination between KM and analytics/intelligence functions will help all intangibles systems work more effectively. Originality/value To the authors’ knowledge, very few scholars work in this area, at the intersection of multiple types of intangible assets. The metrics are unique, especially in their scale and attachment to theory, allowing insights that provide more clarity to scholars and practical direction to industry.


2021 ◽  
Vol 1 (2) ◽  
pp. 91-99
Author(s):  
Zainab Salih Ageed ◽  
Subhi R. M. Zeebaree ◽  
Mohammed Mohammed Sadeeq ◽  
Shakir Fattah Kak ◽  
Zryan Najat Rashid ◽  
...  

Many policymakers envisage using a community model and Big Data technology to achieve the sustainability demanded by intelligent city components and to raise living standards. Smart cities use different technologies to make their residents more successful in their health, housing, electricity, learning, and water supplies. This involves reducing costs and resource utilization as well as communicating more effectively and creatively with employees. Big data analysis is a comparatively modern technology that is capable of expanding intelligent urban facilities. Since digitalization is an essential part of daily life, large volumes of data are now produced and processed that can be used in several valuable areas. In many business and utility domains, including the intelligent urban domain, the successful exploitation and multiple uses of data are critical. This paper examines how big data can be used for more innovative societies. It explores the possibilities, challenges, and benefits of applying big data systems in intelligent cities and compares and contrasts different intelligent city and big data ideas. It also seeks to define criteria for the creation of big data applications for innovative city services.

