ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detection tool or what Internet of Things Search Engines know about you

Author(s):  
Artjoms Daskevics ◽  
Anastasija Nikiforova


Epidemiologia ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 315-324
Author(s):  
Juan M. Banda ◽  
Ramya Tekumalla ◽  
Guanyu Wang ◽  
Jingyuan Yu ◽  
Tuo Liu ◽  
...  

As the COVID-19 pandemic continues to spread worldwide, an unprecedented amount of open data is being generated for medical, genetics, and epidemiological research. The unparalleled rate at which many research groups around the world are releasing data and publications on the ongoing pandemic is allowing other scientists to learn from local experiences and data generated on the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the role of social dynamics of such a unique worldwide event in biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 1.12 billion COVID-19-related tweets, growing daily, covering the period from 1 January 2020 to 27 June 2021 at the time of writing. This resource provides a freely available additional data source for researchers worldwide to conduct a wide and diverse range of research projects, such as epidemiological analyses, emotional and mental responses to social distancing measures, the identification of sources of misinformation, and stratified measurement of sentiment towards the pandemic in near real time, among many others.
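A typical first step with such a tweet corpus is filtering by date range and keyword before any downstream analysis. The sketch below illustrates the idea on a tiny in-memory sample; the record layout `(tweet_id, date, text)` is an illustrative assumption, since the actual release distributes tweet IDs that must be rehydrated via the Twitter API.

```python
from datetime import date

# Hypothetical records mimicking a (tweet_id, date, text) layout; the real
# dataset ships tweet IDs only, which must be rehydrated before use.
tweets = [
    ("1001", date(2020, 3, 15), "staying home, social distancing is hard"),
    ("1002", date(2020, 3, 16), "new vaccine research looks promising"),
    ("1003", date(2021, 6, 20), "masks still required on transit"),
]

def filter_tweets(records, start, end, keyword):
    """Keep tweets inside [start, end] whose text mentions the keyword."""
    return [
        (tid, d, text)
        for tid, d, text in records
        if start <= d <= end and keyword in text.lower()
    ]

march_2020 = filter_tweets(tweets, date(2020, 3, 1), date(2020, 3, 31), "social")
print([tid for tid, _, _ in march_2020])  # IDs of matching tweets
```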


2020 ◽  
Vol 19 (10) ◽  
pp. 1602-1618 ◽  
Author(s):  
Thibault Robin ◽  
Julien Mariethoz ◽  
Frédérique Lisacek

A key point in achieving accurate intact glycopeptide identification is the definition of the glycan composition file that is used to match experimental with theoretical masses by a glycoproteomics search engine. At present, these files are mainly built from searching the literature and/or querying data sources focused on posttranslational modifications. Most glycoproteomics search engines include a default composition file that is readily used when processing MS data. We introduce here a glycan composition visualizing and comparative tool associated with the GlyConnect database and called GlyConnect Compozitor. It offers a web interface through which the database can be queried to bring out contextual information relative to a set of glycan compositions. The tool takes advantage of compositions being related to one another through shared monosaccharide counts and outputs interactive graphs summarizing information searched in the database. These results provide a guide for selecting or deselecting compositions in a file in order to reflect the context of a study as closely as possible. They also confirm the consistency of a set of compositions based on the content of the GlyConnect database. As part of the tool collection of the Glycomics@ExPASy initiative, Compozitor is hosted at https://glyconnect.expasy.org/compozitor/ where it can be run as a web application. It is also directly accessible from the GlyConnect database.
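The core idea behind Compozitor's graphs, as described above, is that compositions are related through shared monosaccharide counts. The sketch below links compositions that differ by a single monosaccharide; the composition names and count dictionaries are illustrative assumptions, not GlyConnect's actual notation or API.

```python
from itertools import combinations

# Illustrative glycan compositions as monosaccharide count dictionaries.
compositions = {
    "Hex5HexNAc2": {"Hex": 5, "HexNAc": 2},
    "Hex6HexNAc2": {"Hex": 6, "HexNAc": 2},
    "Hex5HexNAc3": {"Hex": 5, "HexNAc": 3},
}

def distance(a, b):
    """Total absolute difference in monosaccharide counts."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

# Edges connect compositions that differ by exactly one monosaccharide,
# mirroring the "shared counts" relation used to build interactive graphs.
edges = [
    (x, y)
    for x, y in combinations(compositions, 2)
    if distance(compositions[x], compositions[y]) == 1
]
print(edges)
```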


资源科学 (Resources Science) ◽  
2020 ◽  
Vol 42 (10) ◽  
pp. 1965-1974
Author(s):  
Yi SUN ◽  
Mengyang FANG ◽  
Jianning HE ◽  
Jiufen LIU ◽  
Siyuan ZHANG ◽  
...  

Repositor ◽  
2020 ◽  
Vol 2 (4) ◽  
pp. 463
Author(s):  
Rendiyono Wahyu Saputro ◽  
Aminuddin Aminuddin ◽  
Yuda Munarko

Abstract: Technological developments have resulted in ever faster and larger data growth. This is driven by the large number of data sources, such as search engines, RFID, digital transaction records, video and photo archives, user-generated content, the internet of things, and scientific research in fields such as genomics, meteorology, astronomy, and physics. Moreover, these data have unique characteristics that prevent them from being processed by conventional database technology. Distributed computing frameworks such as Apache Hadoop and Apache Spark have therefore been developed to process data in a distributed fashion using a computer cluster. Given this variety of distributed computing frameworks, testing is needed to compare their computational performance. Testing was carried out by processing datasets of various sizes on clusters with different numbers of nodes. Across all test results, Apache Hadoop required less time than Apache Spark, because Apache Hadoop's throughput and throughput per node were higher than Apache Spark's.
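The benchmark metrics named above, throughput and throughput per node, can be sketched directly. The numbers below are illustrative placeholders, not the paper's measurements: they only show how higher throughput corresponds to lower wall-clock time for the same dataset and cluster size.

```python
def metrics(dataset_bytes, seconds, nodes):
    """Return (throughput, throughput per node) for one benchmark run."""
    throughput = dataset_bytes / seconds
    return throughput, throughput / nodes

# Illustrative runs: a 1 GB dataset on a 4-node cluster.
hadoop = metrics(1_000_000_000, 80.0, 4)   # finished in 80 s (hypothetical)
spark = metrics(1_000_000_000, 95.0, 4)    # finished in 95 s (hypothetical)
print(hadoop[0] > spark[0])  # higher throughput => less wall-clock time
```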


Author(s):  
Wassila Guebli ◽  
Abdelkader Belkhir

The emergence of the internet of things in smart homes has given rise to many services to meet users' expectations. It is possible to control the temperature, the brightness, the sound system, and even the security of the house via a smartphone, on demand or on a schedule. This growing number of "things" must cope with material constraints such as the home network infrastructure, but also with application-level constraints due to the number of proposed services. The heterogeneity of users' preferences often creates conflicts between them, such as turning a light on and off, or using a heater and an air conditioner at the same time. To manage these conflicts, the authors propose a solution based on linked open data (LOD). The LOD allows defining the relations between the different services and things in the house and better exploiting the attributes of the inhabitant's profile and of the services. It works by finding inconsistency relations between pieces of equipment using an antonym thesaurus.
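The antonym-thesaurus idea above can be sketched minimally: two requested actions conflict when their effects are antonyms and they target the same location. The thesaurus entries, service model, and request format below are illustrative assumptions, not the authors' actual vocabulary or LOD representation.

```python
# Hypothetical antonym thesaurus for smart-home service effects.
ANTONYMS = {("turn_on", "turn_off"), ("heat", "cool")}

def conflicting(action_a, action_b):
    """Two actions conflict if their effects are antonyms of each other."""
    return (action_a, action_b) in ANTONYMS or (action_b, action_a) in ANTONYMS

# Two inhabitants target the living-room climate at the same time.
requests = [("alice", "living_room", "heat"), ("bob", "living_room", "cool")]
conflicts = [
    (a, b)
    for i, a in enumerate(requests)
    for b in requests[i + 1:]
    if a[1] == b[1] and conflicting(a[2], b[2])  # same room, antonym effects
]
print(len(conflicts))  # 1
```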


Author(s):  
Kåre Synnes ◽  
Matthias Kranz ◽  
Juwel Rana ◽  
Olov Schelén

Pervasive computing was envisioned by pioneers like Mark Weiser but has yet to become an everyday technology in our society. The recent advances regarding the Internet of Things, social computing, and mobile access technologies converge to make pervasive computing truly ubiquitous. The key challenge is to make simple and robust solutions for ordinary users, which shifts the focus from complex platforms involving machine learning and artificial intelligence to more hands-on construction of services that are tailored or personalized for individual users. This chapter discusses the Internet of Things together with Social Computing as a basis for components that users in a "digital city" could utilize to make their daily life better, safer, etc. A novel environment for user-created services, such as social apps, is presented as a possible solution for this. The vision is that anyone could make a simple service based on Internet-enabled devices (Internet of Things) and encapsulated digital resources such as Open Data, which can also have social aspects embedded. This chapter also aims to identify trends, challenges, and recommendations with regard to Social Interaction for Digital Cities. This work will help expose future themes with high innovation and business potential over a timeframe roughly 15 years ahead. The purpose is to create a common outlook on the future of Information and Communication Technologies (ICT) based on the extrapolation of current trends and ongoing research efforts.


Author(s):  
Francesco Corcoglioniti ◽  
Marco Rospocher ◽  
Roldano Cattoni ◽  
Bernardo Magnini ◽  
Luciano Serafini

This chapter describes the KnowledgeStore, a scalable, fault-tolerant, and Semantic Web grounded open-source storage system to jointly store, manage, retrieve, and query interlinked structured and unstructured data, especially designed to manage all the data involved in Knowledge Extraction applications. The chapter presents the concept, design, function and implementation of the KnowledgeStore, and reports on its concrete usage in four application scenarios within the NewsReader EU project, where it has been successfully used to store and support the querying of millions of news articles interlinked with billions of RDF triples, both extracted from text and imported from Linked Open Data sources.
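The joint storage of structured and unstructured data described above can be illustrated with a toy in-memory model: news articles (unstructured text) interlinked with RDF-style triples (structured) so that one query crosses both layers. The identifiers, predicate names, and triples below are illustrative assumptions, not the KnowledgeStore's actual schema or API.

```python
# Unstructured layer: article text keyed by a hypothetical news URI.
articles = {
    "news:1": "Acme Corp announced record profits on Monday.",
    "news:2": "The city council approved the new transit plan.",
}

# Structured layer: RDF-style (subject, predicate, object) triples linking
# entities to the articles that mention them.
triples = [
    ("dbpedia:Acme_Corp", "mentionedIn", "news:1"),
    ("dbpedia:Acme_Corp", "type", "Organisation"),
    ("dbpedia:City_Council", "mentionedIn", "news:2"),
]

def articles_mentioning(entity):
    """Join the structured layer (triples) with the unstructured layer (text)."""
    return [
        articles[o]
        for s, p, o in triples
        if s == entity and p == "mentionedIn" and o in articles
    ]

print(articles_mentioning("dbpedia:Acme_Corp"))
```

In the real system this join would be expressed against millions of articles and billions of triples via SPARQL; the sketch only shows why keeping the two layers interlinked makes such cross-layer retrieval a single query.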


2022 ◽  
pp. 1-26
Author(s):  
Hengshuo Liang ◽  
Lauren Burgess ◽  
Weixian Liao ◽  
Chao Lu ◽  
Wei Yu

The advance of internet of things (IoT) techniques enables a variety of smart-world systems in energy, transportation, home, and city infrastructure, among others. To provide cost-effective data-oriented services, internet of things search engines (IoTSE) have received growing attention as a platform to support efficient data analytics. There are a number of challenges in designing efficient and intelligent IoTSE. In this chapter, the authors focus on the efficiency issue of IoTSE and design a named data networking (NDN)-based approach for IoTSE. To be specific, they first design a simple simulation environment to compare the performance of an IP-based network against NDN. They then create four scenarios tailored to study the approach's resilience to network issues and its scalability with a growing number of queries in IoTSE. They implement the four scenarios using ns-3 and carry out extensive performance evaluation to determine the efficacy of the approach concerning network resilience and scalability. They also discuss some remaining issues that need further research.
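A central reason NDN helps an IoTSE workload is in-network caching: data packets are named, so repeated queries for the same name can be answered from a content store instead of the original producer. The sketch below models only that effect with a single cache and an illustrative query trace; it is a toy model, not the chapter's ns-3 setup.

```python
def serve(queries):
    """Count producer fetches when a content store caches named data."""
    cache, producer_fetches = set(), 0
    for name in queries:
        if name not in cache:      # cache miss: fetch from producer, then cache
            producer_fetches += 1
            cache.add(name)
        # cache hit: served in-network, no producer traffic
    return producer_fetches

# Hypothetical IoTSE query trace over hierarchical NDN-style names.
trace = ["/iot/temp/room1", "/iot/temp/room1", "/iot/hum/room2", "/iot/temp/room1"]
print(serve(trace))  # only 2 producer fetches for 4 queries
```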


2019 ◽  
Vol 32 (15) ◽  
pp. e4024 ◽  
Author(s):  
Mohammad Wazid ◽  
Poonam Reshma Dsouza ◽  
Ashok Kumar Das ◽  
Vivekananda Bhat K ◽  
Neeraj Kumar ◽  
...  
